
What Caused the AI Renaissance

Much Ado about Deep Learning Backpropagation


Artificial intelligence (AI) is not a new concept; its origins date back to the 1950s. Yet only recently, after decades of relative dormancy, has AI moved to the forefront of investment interest from industry, government, and venture capital. What thawed AI's winter and fueled the current boom?

AI is booming largely due to advances in pattern recognition made possible by deep learning, a subset of machine learning in which an artificial neural network has more than two layers of processing. Machine learning is itself a subset of AI in which algorithms learn from data rather than following explicitly coded instructions. This learning can be done through supervised or unsupervised training. In supervised learning, the training data are labeled; in unsupervised learning, there are no labels, and the algorithm must find structure in the data on its own.
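For readers who want to see the distinction concretely, here is a minimal sketch in Python using the scikit-learn library. The toy data and model choices are illustrative assumptions, not from any study discussed here.

```python
# Toy contrast between supervised and unsupervised learning.
# Illustrative assumptions: synthetic 2-D data, off-the-shelf models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two clouds of 2-D points, one centered at (0, 0) and one at (3, 3).
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)  # the "answer key" for each point

# Supervised: the model learns from labeled examples.
classifier = LogisticRegression().fit(points, labels)
print(classifier.predict([[3.0, 2.5]]))  # likely [1]: near the second cloud

# Unsupervised: the model must discover the two groups without labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print(clusters[:5])  # cluster IDs inferred from the data alone
```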

Research on AI neural networks largely stalled during the 1970s and 1980s, following the 1969 publication of Perceptrons: An Introduction to Computational Geometry by MIT's Marvin Minsky and Seymour Papert. In the book, the duo pointed out the "severe limitations" of perceptrons, the neural networks developed in the 1950s by American psychologist Frank Rosenblatt for AI pattern recognition.

Minsky and Papert questioned whether perceptrons could be trained, or learn, in neural networks with more than two layers of neurons, that is, with layers beyond the input and output layers. They based their conclusions on mathematical proofs, yet left the door open, writing that perhaps "some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting 'learning theorem' for the multilayer machine will be found."

A year later, in 1970, Finnish mathematician Seppo Linnainmaa wrote his master's thesis on the estimation of rounding errors, introducing what is now called the reverse mode of automatic differentiation (AD). Unbeknownst to him, this idea, which came to him on a sunny afternoon in a Copenhagen park, would provide the seed from which deep learning germinated and blossomed into an AI renaissance decades later. Linnainmaa went on to earn the first doctorate in computer science from the University of Helsinki in 1974.
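To give a flavor of the idea, here is a toy reverse-mode differentiation sketch in Python. It is a modern illustrative reconstruction, not Linnainmaa's 1970 formulation: each operation records its local derivatives on the way forward, and a backward sweep then applies the chain rule from the output back to the inputs.

```python
# Toy reverse-mode automatic differentiation: record each operation's
# local derivatives going forward, then sweep backward with the chain
# rule. (Illustrative reconstruction, not the original formulation.)
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (input Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the derivative flowing into this node, then pass
        # it upstream, scaled by each local derivative (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = xy + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```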

Also in 1974, scientist Paul J. Werbos published his Harvard University Ph.D. dissertation on training artificial neural networks through the backpropagation of errors. Werbos conceived of novel intelligent control designs with parallels to the human brain, and in 1995 he received an IEEE Neural Networks Council Pioneer Award for the discovery of backpropagation and other contributions to AI neural networks.

In 1986, David E. Rumelhart, Geoffrey Hinton, and Ronald J. Williams popularized the use of backpropagation in networks of neuron-like units with their paper published in Nature, "Learning representations by back-propagating errors." The procedure repeatedly adjusts the weights of the connections between the network's units (nodes, or neurons) so as to minimize the difference between the network's actual output vector and the desired output vector. As the weights adjust, internal "hidden" units, which are part of neither the input nor the output, come to represent important features of the task. In essence, the team demonstrated that deep neural networks with more than two layers could be trained via backpropagation. Here was the powerful learning technique for multilayer networks that Minsky and Papert had speculated might someday be found. Yet this alone was not enough to resurrect AI.
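The following Python sketch shows the procedure in miniature: a small network with one hidden layer learns the XOR function by backpropagating its output error. The architecture, learning rate, and number of steps are illustrative choices, not values from the Nature paper.

```python
# Training a network with a hidden layer by backpropagating errors,
# in miniature. Sizes, seed, and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's actual output vector.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the output error toward the input.
    dY = (Y - T) * Y * (1 - Y)        # error signal at the output
    dH = (dY @ W2.T) * H * (1 - H)    # error signal at the hidden layer
    # Adjust weights to shrink the actual-versus-desired difference.
    W2 -= 0.5 * H.T @ dY
    b2 -= 0.5 * dY.sum(axis=0)
    W1 -= 0.5 * X.T @ dH
    b1 -= 0.5 * dH.sum(axis=0)

print(Y.round(2).ravel())  # approaches [0, 1, 1, 0]
```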

Another major contributor to the AI boom is the rise of video gaming. In the 1970s, arcade video games used specialized graphics chips to keep costs down. From the 1980s through the early 2000s, the graphics processing unit (GPU) gradually evolved from a mainly gaming device into a general-purpose computing engine. GPUs can process large amounts of data in parallel, a distinct advantage over the standard central processing unit (CPU), and that parallel processing power is well suited to crunching the massive datasets used in machine learning.
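The sketch below hints at why. The core arithmetic of neural networks is matrix multiplication, in which every output element can be computed independently; comparing a serial Python loop to a single vectorized call is a rough stand-in, under loose assumptions, for the serial-versus-parallel gap a GPU exploits in hardware.

```python
# The heavy arithmetic in neural networks is matrix multiplication,
# where every output element is independent of the others. A serial
# Python loop versus one vectorized call stands in, loosely, for the
# serial-versus-parallel gap that GPU hardware exploits at scale.
import time
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 300))
B = rng.normal(size=(300, 300))

start = time.perf_counter()            # serial: one element at a time
C_loop = np.empty((300, 300))
for i in range(300):
    for j in range(300):
        C_loop[i, j] = A[i, :] @ B[:, j]
loop_time = time.perf_counter() - start

start = time.perf_counter()            # data-parallel: whole array at once
C_fast = A @ B
fast_time = time.perf_counter() - start

assert np.allclose(C_loop, C_fast)
print(f"serial loop: {loop_time:.3f}s  vectorized: {fast_time:.5f}s")
```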

In 2012, Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever announced their success in training a deep convolutional neural network, with 60 million parameters, 650,000 neurons, and five convolutional layers, to classify 1.2 million high-resolution images into 1,000 different classes. The team used a GPU implementation to accelerate training. They made history by demonstrating that a large, deep convolutional neural network could achieve "record-breaking results on a highly challenging dataset using purely supervised learning" with backpropagation.
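A rough sketch of that network's shape, written in Python with the PyTorch library, appears below. The layer sizes follow the published AlexNet design, which indeed totals roughly 60 million parameters, but this is an approximation for illustration, not the team's original GPU code.

```python
# An AlexNet-shaped network sketched with PyTorch: five convolutional
# layers, then fully connected layers mapping to 1,000 classes. Layer
# sizes follow the published design; this is an illustration, not the
# team's original two-GPU CUDA implementation.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),  # one score per ImageNet class
)

# One 224x224 color image in, 1,000 class scores out.
scores = alexnet_like(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 1000])
print(sum(p.numel() for p in alexnet_like.parameters()))  # ~61 million
```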

Artificial intelligence has been resurrected from dormancy by deep learning backpropagation and GPU technology. Deep learning is still in the early stages of commercialization. In the coming decade, AI will continue to gain momentum as it crosses the technology chasm toward mass global adoption. AI is trending across health care, transportation, drug discovery, biotech, genomics, consumer electronics, enterprise software, precision medicine, esports, autonomous vehicles, social media, manufacturing, scientific research, entertainment, geopolitics, and many more areas. In the not-so-distant future, artificial intelligence will be as ubiquitous as the internet.

Copyright © 2019 Cami Rosso All rights reserved.

References

Griewank, Andreas. “Who Invented the Reverse Mode of Differentiation?” Documenta Mathematica. Extra Volume ISMP 389-400. 2012.

IEEE. “Guest Editorial Neural Networks Council Awards.” IEEE Transactions on Neural Networks. Vol 7, No 1. January 1996.

Rumelhart, David E., Hinton, Geoffrey E., Williams, Ronald J. “Learning representations by back-propagating errors.” Nature. Vol. 323. 9 October 1986.

Krizhevsky, Alex, Sutskever, Ilya, Hinton, Geoffrey E. “ImageNet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems 25. 2012.
