Not too long ago it was often said that computer vision could not compete with the visual abilities of a one-year-old. That is no longer true: computers can now recognize objects in images about as well as most adults can, and there are computerized cars on the road that drive themselves more safely than an average sixteen-year-old could. And rather than being told how to see or drive, computers have learned from experience, following a path that nature took millions of years ago. What is fueling these advances is gushers of data. Data are the new oil. Learning algorithms are refineries that extract information from raw data; information can be used to create knowledge; knowledge leads to understanding; and understanding leads to wisdom. Welcome to the brave new world of deep learning.
Deep learning is a branch of machine learning that has its roots in mathematics, computer science, and neuroscience. Deep networks learn from data the way that babies learn from the world around them, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. The origin of deep learning goes back to the birth of artificial intelligence in the 1950s, when there were two competing visions for how to create an AI: one vision was based on logic and computer programs, which dominated AI for decades; the other was based on learning directly from data, which took much longer to mature.
In the twentieth century, when computers were puny and data storage was expensive by today’s standards, logic was an efficient way to solve problems. Skilled programmers wrote a different program for each problem, and the bigger the problem, the bigger the program. Today, computer power and big data are abundant, and solving problems with learning algorithms is faster, more accurate, and more efficient. The same learning algorithm can be used to solve many difficult problems, and its solutions are far less labor intensive than writing a different program for every problem.
Learning How to Drive
The $2 million cash prize for the Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2005 was won by Stanley, a self-driving car instrumented by Sebastian Thrun’s group at Stanford, who taught it how to navigate across the desert in California using machine learning. The 132-mile course had narrow tunnels and sharp turns, including Beer Bottle Pass, a winding mountain road with a sheer drop-off on one side and a rock face on the other. Rather than follow the traditional AI approach by writing a computer program to anticipate every contingency, Thrun drove Stanley around the desert, and it learned for itself to predict how to steer based on sensory inputs from its vision and distance sensors.
Thrun later founded Google X, a skunk works for high-tech projects, where the technology for self-driving cars was developed further. Google’s self-driving cars have since logged 3.5 million miles driving around the San Francisco Bay Area. Uber has deployed a fleet of self-driving cars in Pittsburgh. Apple is moving into self-driving cars to extend the range of products that its operating systems control, hoping to repeat its successful foray into the cell phone market. Seeing a business that had not changed for 100 years being transformed before their eyes, automobile manufacturers are following suit. General Motors paid $1 billion for Cruise Automation, a Silicon Valley start-up that is developing driverless technology, and invested an additional $600 million in 2017 in research and development. In 2017, Intel purchased Mobileye, a company that specializes in sensors and computer vision for self-driving cars, for $15.3 billion. The stakes are high in the multitrillion-dollar transportation sector of the economy.
Self-driving cars will soon disrupt the livelihoods of millions of truck and taxi drivers. Eventually, there will be no need to own a car in a city when a self-driving car can show up in a minute and take you safely to your destination, without your having to park it. The average car today is used only 4 percent of the time, which means it has to be parked somewhere the other 96 percent. But because self-driving cars can be serviced and parked outside cities, vast stretches of city land now covered with parking lots can be repurposed for more productive uses. Urban planners are already thinking ahead to the day when parking lots become parkland. Parking lanes along streets can become real bike lanes. Many other car-related businesses will be affected, including auto insurance agencies and body shops. No more speeding or parking tickets. There will be fewer deaths from drunk drivers and from drivers falling asleep at the wheel. Time wasted commuting to work will be freed for other purposes. According to the U.S. Census Bureau, in 2014, 139 million Americans spent an average of 52 minutes commuting to and from work each workday. That amounts to 29.6 billion hours per year, or an astounding 3.4 million years of human lives that could have been put to better use. Highway capacity will be increased by a factor of four through caravanning. And, once developed and widely used, self-driving cars that can drive themselves home without a steering wheel will put an end to grand theft auto. Although there are many regulatory and legal obstacles in the way, when self-driving cars finally become ubiquitous, we will indeed be living in a brave new world. Trucks will probably be the first to become autonomous, within 10 years; taxis within 15 years; and passenger cars within 15 to 25 years from start to finish.
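The commuting figures above can be checked with a quick back-of-the-envelope calculation; the sketch below does so in Python, where the number of workdays per year (about 245) is my assumption rather than a figure from the source.

```python
# Back-of-the-envelope check of the commuting figures quoted above.
# The number of workdays per year (~245) is an assumption, not from the source.
commuters = 139_000_000      # Americans commuting (U.S. Census Bureau, 2014)
minutes_per_day = 52         # average round-trip commute per workday
workdays_per_year = 245      # assumed

hours_per_year = commuters * minutes_per_day * workdays_per_year / 60
years_of_life = hours_per_year / (24 * 365)

print(f"{hours_per_year / 1e9:.1f} billion hours per year")      # ~29.5 billion
print(f"{years_of_life / 1e6:.1f} million years of human life")  # ~3.4 million
```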
The iconic position that cars hold in our society will change in ways that we cannot imagine, and a new car ecology will emerge. Just as the introduction of the automobile more than 100 years ago created many new industries and jobs, a fast-growing ecosystem is already being created around self-driving cars. Waymo, the self-driving spin-off from Google, has invested $1 billion over 8 years and has constructed a secretive testing facility in California’s Central Valley with a 91-acre fake town, complete with fake bicycle riders and fake auto breakdowns. The goal is to broaden the training data to include special and unusual circumstances, called edge cases. Rare driving events that occur on highways often lead to accidents. The difference with self-driving cars is that when one car experiences a rare event, the learning experience propagates to all other self-driving cars, a form of collective intelligence. Many similar test facilities are being constructed by other self-driving car companies. These facilities create new jobs that did not exist before, along with new supply chains for the sensors and lasers needed to guide the cars.
Self-driving cars are just the most visible manifestation of a major shift in an economy being driven by information technology (IT). Information flows through the Internet like water through city pipes. Information accumulates in massive data centers run by Google, Amazon, Microsoft, and other IT companies that require so much electrical power that they need to be located near hydroelectric plants, and streaming information generates so much heat that rivers are needed to supply the coolant. In 2013, data centers in the United States consumed an estimated 91 billion kilowatt-hours of electricity, equivalent to the annual output of thirty-four large power plants. But what is now making an even bigger impact on the economy is how this information is used. Extracted from raw data, the information is being turned into knowledge about people and things: what we do, what we want, and who we are. And, more and more, computer-driven devices are using this knowledge to communicate with us through the spoken word. Unlike the passive knowledge in books, which is externalized outside brains, knowledge in the cloud is an external intelligence that is becoming an active part of everyone’s lives.
Learning How to Play Go
In March 2016, Lee Sedol, the eighteen-time world Go champion from South Korea, played and lost a five-game match against DeepMind’s AlphaGo, a Go-playing program that used deep learning networks to evaluate board positions and possible moves. Go is to chess in difficulty as chess is to checkers. If chess is a battle, Go is a war. A 19×19 Go board is much larger than an 8×8 chessboard, which makes it possible to have several battles raging in different parts of the board. There are long-range interactions between battles that are difficult to judge, even by experts. The total number of legal board positions in Go is about 10^170, far more than the number of atoms in the observable universe.
In addition to several deep learning networks to evaluate the board and choose the best move, AlphaGo had a completely different learning system, one used to solve the temporal credit assignment problem: which of the many moves were responsible for a win, and which were responsible for a loss? The basal ganglia of the brain, which receive projections from the entire cerebral cortex and project back to it, solve this problem with a temporal difference algorithm and reinforcement learning. AlphaGo used the same learning algorithm that the basal ganglia evolved to evaluate sequences of actions to maximize future rewards. AlphaGo learned by playing itself—many, many times.
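To make the idea concrete, here is a minimal sketch of temporal-difference learning on a toy random walk, in which a reward arrives only at the very end yet earlier states still receive credit. It is illustrative only and is nothing like AlphaGo’s actual implementation, which combines such value estimates with deep networks and large-scale self-play.

```python
import random

# States 0 and 6 are terminal; the walk starts in the middle, at state 3.
# A reward of 1 is given only for reaching the right end (state 6).
ALPHA, GAMMA = 0.1, 1.0
V = [0.0] * 7  # value estimates for states 0..6

for episode in range(5000):
    s = 3
    while s not in (0, 6):
        s_next = s + random.choice((-1, 1))      # random step left or right
        reward = 1.0 if s_next == 6 else 0.0
        # TD(0) update: nudge V(s) toward reward + discounted value of the
        # next state, so credit for the final reward flows back to earlier moves.
        V[s] += ALPHA * (reward + GAMMA * V[s_next] - V[s])
        s = s_next

# The inner states converge toward 1/6, 2/6, ..., 5/6: earlier states get
# partial credit for a reward that is delivered only at the end.
print([round(v, 2) for v in V[1:6]])
```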
The Go match that pitted AlphaGo against Lee Sedol had a large following in Asia, where Go champions are national figures and treated like rock stars. AlphaGo had earlier defeated a European Go champion, but that level of play was considerably below the highest levels in Asia, and Lee Sedol was not expecting a strong match. Even DeepMind, the company that had developed AlphaGo, did not know how strong its deep learning program was. Since its last match, AlphaGo had played millions of games against several versions of itself, and there was no way to benchmark how good it had become.
It came as a shock to many when AlphaGo won the first three of the five games, exhibiting an unexpectedly high level of play. This was riveting viewing in South Korea, where all the major television stations had a running commentary on the games. Some of the moves made by AlphaGo were revolutionary. On the thirty-seventh move of the match’s second game, AlphaGo made a brilliantly creative play that surprised Lee Sedol, who took nearly ten minutes to respond. AlphaGo lost the fourth game, a face-saving win for humans, and ended the match by winning four games to one. I stayed up into the wee hours of those March nights in San Diego, mesmerized by the games. They reminded me of the time I sat glued to the TV in Cleveland on June 2, 1966, at 1:00 a.m., as the Surveyor robotic spacecraft landed on the moon and beamed back the first photos of a moonscape. I witnessed these historic moments in real time. AlphaGo far exceeded what I and many others thought was possible.
On January 4, 2017, a Go player on an Internet Go server called “Master” was unmasked as AlphaGo 2.0 after winning sixty out of sixty games against some of the world’s best players, including the world’s reigning Go champion, the nineteen-year-old prodigy Ke Jie of China. It revealed a new style of play that went against the strategic wisdom of the ages. On May 27, 2017, Ke Jie lost three games to AlphaGo at the Future of Go Summit in Wuzhen, China. These were some of the best Go games ever played, and hundreds of millions of Chinese followed the match. “Last year, I think the way AlphaGo played was pretty close to human beings, but today I think he plays like the God of Go,” Ke Jie concluded.
After the first game, which he lost by a razor-thin margin of one-half point, Ke Jie said that he “was very close to winning the match in the middle of the game” and that he was so excited “I could feel my heart thumping! Maybe because I was too excited I made some stupid moves. Maybe that’s the weakest part of human beings.” What Ke Jie experienced was emotional overload; a more moderate level of arousal is needed to reach peak performance. Indeed, stage actors know that if they don’t have butterflies in their stomachs before their performances, they won’t be in good form. Performance follows an inverted U-shaped curve, with the best performances coming at an optimal point between low and high levels of arousal. Athletes call this being “in the zone.”
AlphaGo also defeated a team of five top players on May 26, 2017. These players have analyzed the moves made by AlphaGo and are already changing their strategies. In a new version of “ping-pong diplomacy,” the match was hosted by the Chinese government. China is making a large investment in machine learning, and a major goal of its brain initiative is to mine the brain for new algorithms.
The next chapter in this Go saga is even more remarkable, if that is possible. AlphaGo was jump-started by supervised learning from 160,000 human Go games before playing itself. Some thought this was cheating: an autonomous AI program should be able to learn how to play Go without human knowledge. In October 2017, a new version, called AlphaGo Zero, was revealed that had learned to play Go starting with only the rules of the game, and it trounced AlphaGo Master, the version that beat Ke Jie, winning 100 games to none. Moreover, AlphaGo Zero learned 100 times faster and with 10 times less computing power than AlphaGo Master. By completely ignoring human knowledge, AlphaGo Zero became super-superhuman. There is no known limit to how much better AlphaGo might become as machine learning algorithms continue to improve.
AlphaGo Zero had dispensed with human play, but there was still a lot of Go knowledge handcrafted into the features that the program used to represent the board. Maybe AlphaGo Zero could improve still further without any Go knowledge. Just as Coca-Cola Zero stripped all the calories from Coca-Cola, all domain knowledge of Go was stripped from AlphaZero. As a result, AlphaZero was able to learn even faster and decisively beat AlphaGo Zero. To make the point that less is more even more dramatically, AlphaZero, without changing a single learning parameter, learned how to play chess at superhuman levels, making alien moves that no human had ever made before. AlphaZero did not lose a game to Stockfish, the top chess program already playing at superhuman levels. In one game, AlphaZero made a bold bishop sacrifice, sometimes used to gain positional advantage, followed by a queen sacrifice, which seemed like a colossal blunder until it led to a checkmate many moves later that neither Stockfish nor humans saw coming. The aliens have landed and the Earth will never be the same again.
AlphaGo’s developer, DeepMind, was cofounded in 2010 by neuroscientist Demis Hassabis, who had been a postdoctoral fellow at University College London’s Gatsby Computational Neuroscience Unit (directed by Peter Dayan, a former postdoctoral fellow in my lab and winner of the prestigious Brain Prize in 2017 along with Raymond Dolan and Wolfram Schultz for their research on reward learning). DeepMind was acquired by Google for $600 million in 2014. The company employs more than 400 engineers and neuroscientists in a culture that is a blend between academia and start-ups. The synergies between neuroscience and AI run deep and are quickening.
Learning How to Become More Intelligent
Is AlphaGo intelligent? More has been written about intelligence than about any other topic in psychology except consciousness, and both are difficult to define. Since the 1930s, psychologists have distinguished between fluid intelligence, which uses reasoning and pattern recognition to solve new problems in new situations without depending on previous knowledge, and crystallized intelligence, which depends on previous knowledge and is what standard IQ tests measure. Fluid intelligence follows a developmental trajectory, reaching a peak in early adulthood and decreasing with age, whereas crystallized intelligence increases slowly and asymptotically until fairly late in life. AlphaGo displays both crystallized and fluid intelligence in a rather narrow domain, but within this domain it has demonstrated surprising creativity. Professional expertise is also based on learning in narrow domains. We are all professionals in the domain of language, and we practice it every day.
The reinforcement learning algorithm used by AlphaGo can be applied to many problems. This form of learning depends only on the reward given to the winner at the end of a sequence of moves, which, paradoxically, can improve decisions made much earlier. When coupled with many powerful deep learning networks, it yields many domain-dependent forms of intelligence. And, indeed, cases have been made for different domain-dependent kinds of intelligence: social, emotional, mechanical, and constructive, for example. The “g factor” that intelligence tests claim to measure is correlated with these different kinds. There are reasons to be cautious about interpreting IQ tests. The average IQ has been going up all over the world by three points per decade since it was first measured in the 1930s, a trend called the “Flynn effect.” There are many possible explanations for the Flynn effect, such as better nutrition, better health care, and other environmental factors. This is quite plausible because the environment affects gene regulation, which in turn affects brain connectivity, leading to changes in behavior. As humans increasingly live in artificially created environments, brains are being molded in ways that nature never intended. Could it be that humans have been getting smarter over a much longer period of time? For how long will the increase in IQ continue? The number of people playing computers in chess, backgammon, and now Go has been steadily increasing since the advent of programs that play at championship levels, and so has the machine-augmented intelligence of the human players. Deep learning will boost the intelligence not just of scientific investigators but of workers in all professions.
Scientific instruments are generating data at a prodigious rate. Elementary particle collisions at the Large Hadron Collider (LHC) in Geneva generate 25 petabytes of data each year. The Large Synoptic Survey Telescope (LSST) will generate 6 petabytes of data each year. Machine learning is being used to analyze huge physics and astronomy datasets that are too big for humans to search by traditional methods. For example, DeepLensing is a neural network that recognizes images of distant galaxies whose light has been distorted by a “gravitational lens,” the bending of light around another galaxy along the line of sight. This allows many new distant galaxies to be discovered automatically. There are many other “needle-in-a-haystack” problems in physics and astronomy for which deep learning vastly amplifies traditional approaches to data analysis.
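To give a sense of what such a needle-in-a-haystack search involves, the sketch below shows a minimal binary image classifier of the general kind, written with the Keras API. The image size, layer sizes, and random stand-in data are all placeholder assumptions; this is not the actual DeepLensing architecture.

```python
# A minimal sketch of a convolutional "lensed vs. not lensed" image classifier
# built with the Keras API. Image size, layer sizes, and the random stand-in
# data are placeholder assumptions; this is not the actual DeepLensing model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),             # small grayscale sky cutouts (assumed size)
    layers.Conv2D(32, 3, activation="relu"),    # learn local image features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability the cutout shows a lensed galaxy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data, just to show the training call; real work would use
# labeled survey images.
x = np.random.rand(128, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```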
Is Artificial Intelligence an Existential Threat?
When AlphaGo convincingly beat Lee Sedol at Go in 2016, it fueled a reaction that had been building over the previous several years concerning the dangers that artificial intelligence might present to humans. Computer scientists signed pledges not to use AI for military purposes. Stephen Hawking and Bill Gates made public statements warning of the existential threat posed by AI. Elon Musk and other Silicon Valley entrepreneurs set up a new company, OpenAI, with a one-billion-dollar nest egg and hired Ilya Sutskever, one of Geoffrey Hinton’s former students, to be its first director. Although OpenAI’s stated goal was to ensure that future AI discoveries would be publicly available for all to use, it had another, implicit and more important goal: to prevent private companies from doing evil. For, with AlphaGo’s victory over world Go champion Lee Sedol, a tipping point had been reached. Almost overnight, artificial intelligence had gone from being judged a failure to being perceived as an existential threat.
This is not the first time an emergent technology has seemed to pose an existential threat. The invention, development, and stockpiling of nuclear weapons threatened to blow up the world, but somehow we have managed to keep that from happening, at least until now. When recombinant DNA technology first appeared, there was fear that deadly engineered organisms would be set loose to cause untold suffering and death across the globe. Genetic engineering is now a mature technology, and so far we have managed to survive its creations. The recent advances in machine learning pose a relatively modest threat compared to nuclear weapons and killer organisms. We will also adapt to artificial intelligence, and, indeed, this is already happening.
One implication of the success of DeepStack, a deep learning program that beat professional poker players at heads-up no-limit Texas hold’em, is that a deep learning network can learn how to become a world-class liar. What deep networks can be trained to do is limited only by the trainer’s imagination and data. If a network can be trained to drive a car safely, it can also be trained to race Formula 1 cars, and someone is probably willing to pay for that. Today it still requires skilled and highly trained practitioners to build products and services using deep learning, but as the cost of computing power continues to plummet and software becomes automated, it will soon be possible for high school students to build AI applications. Otto, the highest-earning online e-commerce company in Germany for clothing, furnishings, and sporting goods, is using deep learning to predict what its customers are likely to order, based on their past ordering history, and then to preorder it for them. With 90 percent accuracy, customers receive merchandise almost before they order it. Done automatically without human intervention, preordering not only saves the company millions of euros a year in reduced surplus stock and product returns but also results in greater customer satisfaction and retention. Rather than displacing Otto’s workers, deep learning has boosted their productivity. AI can make you more productive at your job.
Although the major high-tech companies have pioneered deep learning applications, machine learning tools are already widely available, and many other companies are beginning to benefit. Alexa, the wildly popular digital assistant that operates in tandem with the Amazon Echo smart speaker, responds to natural language requests using deep learning. Amazon Web Services (AWS) has introduced toolboxes called “Lex,” “Polly,” and “Comprehend” that make it easy to develop similar natural language interfaces, providing conversational speech recognition and language understanding, text-to-speech, and natural language processing, respectively. Applications with conversational interactions are now within the reach of smaller businesses that can’t afford to hire machine learning experts. AI can enhance customer satisfaction.
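As a rough illustration of how accessible these toolboxes are, the sketch below calls two of them through the boto3 Python SDK. It assumes AWS credentials are already configured, and the sample text and file name are made up for illustration.

```python
# A minimal sketch of calling AWS "Polly" (text-to-speech) and "Comprehend"
# (natural language processing) via the boto3 SDK. Assumes AWS credentials
# are configured; the sample text is made up.
import boto3

polly = boto3.client("polly")
speech = polly.synthesize_speech(
    Text="Your order has shipped and will arrive tomorrow.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("reply.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())   # save the synthesized audio

comprehend = boto3.client("comprehend")
sentiment = comprehend.detect_sentiment(
    Text="The delivery was fast and the support team was helpful.",
    LanguageCode="en",
)
print(sentiment["Sentiment"])   # e.g., POSITIVE
```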
When chess-playing computer programs eclipsed the best human chess players, did that stop people from playing chess? On the contrary, it raised their level of play. It also democratized chess. The best chess players once came from big cities like Moscow and New York that had a concentration of grandmasters who could teach younger players and raise their level of play. Chess-playing computer programs made it possible for Magnus Carlsen, who grew up in a small town in Norway, to become a chess grandmaster at thirteen, and today he is the world chess champion. The benefits of artificial intelligence will extend not just to the playing of games, however, but to every aspect of human endeavor, from art to science. AI can make you smarter.
# # #
Adapted from "The Deep Learning Revolution" by Terrence J. Sejnowski, The MIT Press, 2018.