When smartphones are smarter than us: deep learning and neuroscience

Will you let a computer drive your car?

Are self-driving, or autonomous, cars safe enough to be on the road? Setting aside the question of whether cars can deal with ethical problems, driving itself is a highly complex task that requires constant sampling and processing of information from the environment. To teach machines to drive, many companies and research institutes have combined technologies such as radar and GPS. These cars usually require hard-coded programs and information templates to tell them what to do. Google's self-driving cars, for instance, work in concert with inch-precision maps of the area and pre-programmed traffic light information [1]. However, this might not be the most efficient approach. When humans drive, we don't need an inch-precise map of everything or a table of when traffic lights go on and off. Instead, we primarily rely on the information we get from our two eyes, and that is enough most of the time. What if machines could "think", or process information the way humans do, so that we could just lean back and snack during the ride? NVIDIA, an American graphics card company, has been teaching cars to think and drive like humans. Just as human drivers need practice to become good at driving, these cars practiced by analyzing video recordings from a front-facing camera. Although the engineers never explicitly taught the cars to detect road outlines, their performance improved over time, as if they were learning to do so [2].


NVIDIA Drive PX 2 is an open A.I. car computing platform that can understand in real time what's happening around the vehicle. [image source: flickr, CC BY-NC-ND 2.0]

 

How computers learned to drive

The technology used in NVIDIA's autonomous cars is called "deep learning". It is a branch of machine learning, the field concerned with enabling programs to learn. In conventional machine learning, a lot of human expertise is required to construct algorithms that transform raw data into suitable representations. In face recognition, for instance, features such as pixel values, edges and colors might all be relevant to the task, and humans need to identify those features manually and build algorithms that extract them from images [3]. Using deep learning, however, you feed a machine natural data in its raw form, in this case pictures, and the computer automatically discovers the representations needed [3]. The process requires a large data set, part of which is labelled by humans in the initial phase. Beginning with this tutorial, the stage of "supervised learning", the computer can then train itself to classify new images. It is called "deep" learning because there are multiple levels of representation built on top of the raw data. The higher the layer, the more abstract and task-relevant the representation becomes. The crucial difference between deep learning and other machine-learning techniques is that in deep learning these layers are not designed by an engineer but learned by the machine [3]. As a general-purpose mechanism, it is being used for many complex tasks that were too challenging for conventional algorithms, from face recognition to language processing.
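To make "supervised learning" and "learned representations" concrete, here is a toy sketch in Python: a tiny two-layer network trained on labelled examples of the XOR function, a classic task that a single layer cannot solve but a "deeper" network can. This is purely pedagogical, not NVIDIA's or anyone else's actual system; the network size, learning rate and task are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labelled training data: inputs X and human-provided labels y (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of representation, both initialised randomly: nothing
# about the task is hand-designed by the programmer.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

losses = []
for _ in range(10000):
    # Forward pass: raw data -> hidden representation -> prediction.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: nudge both layers to reduce the prediction error.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ grad_p
    W1 -= 1.0 * X.T @ grad_h

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

After training, the hidden layer has discovered its own intermediate features of the inputs, which is the essence of what [3] describes as representations learned by the machine rather than designed by an engineer.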

 

Can artificial intelligence (A.I.) beat humans with deep learning?

With all this progress, computers are now challenging realms believed to belong exclusively to humans. One thing long considered impossible for computers was playing "Go", an ancient Chinese board game of abstract strategy. It was already sensational when IBM's chess computer Deep Blue defeated the human champion in 1997. But in Go, for nearly 20 years after that, no program matched the performance of human professionals, until the development of Google DeepMind's AlphaGo. While Deep Blue relied on "brute force" search, in which possible continuations are simulated exhaustively [4], this strategy is unsuitable for Go, simply because the 19×19 board allows far too many possible positions. To catch up with human performance, computers had to begin thinking more like humans, as AlphaGo does. Designed as a deep learner, AlphaGo won all but one game in a five-game match against 18-time world champion Lee Sedol in March 2016 [5]. Interestingly, behind this triumph of A.I. was neuroscience. Before the development of AlphaGo, Demis Hassabis, the co-founder of Google DeepMind, had interrupted his career and returned to academia to study neuroscience. His goal was to find inspiration in the human brain for new artificial intelligence algorithms [6].
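A back-of-envelope calculation shows why brute force fails in Go. Using commonly cited rough figures (about 35 legal moves per chess position over a game of roughly 80 moves, versus about 250 legal moves per Go position over roughly 150 moves; these are approximations, not exact counts):

```python
# Rough game-tree sizes: branching_factor ** game_length.
chess_positions = 35 ** 80
go_positions = 250 ** 150

# Express each as a power of ten via its number of digits.
print(f"chess tree: ~10^{len(str(chess_positions)) - 1}")  # ~10^123
print(f"go tree:    ~10^{len(str(go_positions)) - 1}")     # ~10^359
```

The Go tree is not just bigger but bigger by hundreds of orders of magnitude, which is why AlphaGo had to evaluate positions with learned intuition instead of exhaustive simulation.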

 

Neuroscience of computer brains

So how is the brain relevant to artificial intelligence? Just as man-made structures can draw inspiration from natural ones, artificial intelligence can benefit from our knowledge of natural intelligence. According to Demis Hassabis, neuroscience makes two key contributions to progress in artificial intelligence [6]. First, structures in the brain may inspire new algorithms and architectures. Second, neuroscience findings may validate the plausibility of existing algorithms as integral parts of a general A.I. system. A good example of a biology-inspired architecture is the multilayered networks used in artificial vision [7]. In the primate brain, different visual areas are hierarchically interconnected. Neurons in earlier areas process simple features of the image (orientations, spatial frequencies and colors), while neurons in later areas respond to more complex and abstract features (motion direction or geometric shapes). Implementing this type of hierarchy in computer vision made systems more efficient, in that they needed less training. Beyond the architecture of the visual system, human visual capabilities such as gist extraction and locating objects by landmark points can also be applied in robotic systems [8]. Deep learning itself is, of course, another example of computers working like human brains, and one big advantage of an artificial brain using deep learning is that it can pick up common concepts just by watching lots of videos, the way Google's artificial brain taught itself what cats look like [9].
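The "simple features in early areas" idea can be sketched in a few lines of code: a single oriented-edge detector slid across a toy image, the kind of computation attributed both to early visual cortex and to the first layers of convolutional networks. The Sobel-style filter and the toy image below are illustrative assumptions, not biological measurements.

```python
import numpy as np

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A Sobel-style filter tuned to vertical edges.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])

def cross_correlate(img, filt):
    """Slide the filter over the image (no flipping, 'valid' mode)."""
    fh, fw = filt.shape
    out = np.zeros((img.shape[0] - fh + 1, img.shape[1] - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + fh, j:j + fw] * filt)
    return out

response = cross_correlate(image, vertical_edge)

# The detector fires only where its window straddles the edge.
print(np.max(response))        # 4.0
print(np.argmax(response[0]))  # 2: the window first catches the edge here
```

Stacking many such detectors, and feeding their outputs into further layers that combine them into shapes and objects, is exactly the hierarchy that [7] borrows from the primate visual system.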

 


Computers seem to enjoy watching cat videos as much as humans do. One of the neurons in Google's artificial neural network learned to detect cats [9]!

[image source: pixabay, CC0 Public domain]

 

How far can deep learning bring us?

So far, the future of A.I. appears bright. In spite of the ongoing controversy surrounding self-driving cars, a few decades from now we might wonder how we ever relied on human drivers. Machines, after all, will not drink and drive, talk on the phone or play games behind the wheel. In addition, computer brains can potentially store more information than human memory can hold and keep it intact for a long time, whereas human memory is susceptible to change. And when it comes to simple calculations, computers outpaced humans long ago. It might sound as if all these advantages, combined with the human-like adaptability of deep learning, could add up to a real superhuman intelligence.

 


Remember the controversy over the safety of Pokémon Go when the game was first released? In the end, Niantic had to warn users not to play Pokémon Go while driving. That is something our intelligent computers would never do.

[image source: Pixabay, CC0 Public domain]

 

However, some aspects of our natural intelligence are still impossible for computers to copy. One crucial difference between brains and computers is the material. Brains are made of cells that generate chemical changes as well as electrical signals. This enables neurons to operate in a cyclical manner, receiving and sending signals back and forth. Computers, on the other hand, are built with silicon chips that operate in a linear way [6]. The material does make a difference: the exchange of information takes place much faster in silicon chips than between neurons, but neurons are built to converge information from many processes taking place simultaneously. This might be one reason why computers still fail to reproduce the "soft" side of the human mind, such as creativity or the sense of self. Furthermore, from a broader viewpoint, modelling computers on our models of the brain might trap us in a loop, because modern brain models are themselves based on computers [6]. For instance, the view that the brain has different areas for different functions is analogous to the fact that computers operate with different modules for different tasks.

While deep learning has expanded the horizon of what artificial intelligence can do, brains and computers still remain different. Whether artificial intelligence will come to overlap more with our natural intelligence or diverge further from it is an interesting question for neuro- and computer scientists. An ideal scenario would be for the two fields to keep interacting closely, so that discoveries in each can inspire the other, a synergistic effect like tandem partners each teaching the other their own language.
