
New AI Paradigm May Reduce a Heavy Carbon Footprint

Researchers create an energy-efficient neural network without losing accuracy.

Source: Degon/Pixabay

Artificial intelligence (AI) machine learning can have a considerable carbon footprint. Deep learning is inherently costly, requiring massive computational and energy resources. Now researchers in the U.K. have worked out how to create an energy-efficient artificial neural network without sacrificing accuracy, publishing their findings in Nature Communications on August 26, 2020.

The biological brain is the inspiration for neuromorphic computing—an interdisciplinary approach that draws upon neuroscience, physics, artificial intelligence, computer science, and electrical engineering to create artificial neural systems that mimic biological functions and systems. The human brain is a complex system of roughly 86 billion neurons and hundreds of trillions of synapses. Capable of performing on the order of a thousand trillion operations per second, the brain is remarkably energy efficient, with a power consumption of only 10 to 23 watts.

Machine learning, on the other hand, is computationally costly and demands a great deal of energy. For example, training a large AI model with neural architecture search can emit 284 metric tons of carbon dioxide equivalent, roughly the lifetime emissions of five average American cars, according to a University of Massachusetts Amherst study published last year.

Why is AI machine learning so costly? The answer lies largely in the computing hardware architecture. AI usually runs on general-purpose computers built around what is called von Neumann architecture, in which the memory and the arithmetic logic units are separate. This separation requires data to be shuttled back and forth between the memory and the computing units during processing. For large artificial neural networks (ANNs) with over a hundred million parameters, processing on computers with von Neumann architecture requires significant time and energy during both the training and inference phases.
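As a rough, back-of-envelope illustration (not a figure from the study), the Python sketch below shows why data movement adds up: on a von Neumann machine the weights of even a single fully connected layer must be fetched from memory on every forward pass. The layer sizes and byte counts are illustrative assumptions.

```python
# Back-of-envelope estimate (illustrative, not from the study): on a von Neumann
# machine, the weights of a fully connected layer are read from memory and moved
# to the compute unit for every forward pass.

def weight_traffic_bytes(n_inputs: int, n_outputs: int, bytes_per_weight: int = 4) -> int:
    """Bytes of weight data read from memory for one forward pass of one layer."""
    return n_inputs * n_outputs * bytes_per_weight

# Hypothetical MNIST-sized layer: 784 pixel inputs -> 100 hidden units, 32-bit weights.
print(weight_traffic_bytes(784, 100) / 1e6, "MB per forward pass")  # ~0.31 MB
```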

At the heart of neuromorphic computing is the memristor (memory resistor). Memristive devices offer an alternative to von Neumann architecture: they minimize data transfers by computing directly in memory. Examples of memristive devices include resistive random-access memories and phase-change memories.
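The standard way a memristor crossbar computes in memory is via Ohm's and Kirchhoff's laws: each device's conductance encodes a weight, input voltages are applied to the rows, and the summed column currents are the dot products. A minimal NumPy sketch of that idea, with made-up conductance and voltage values (not taken from the paper):

```python
import numpy as np

# Idealized memristor crossbar: conductances G (siemens) encode the weights,
# row voltages V (volts) encode the input, and each column current is a dot
# product I = G^T @ V computed directly in the memory array.
G = np.array([[1.0e-6, 2.0e-6],      # illustrative conductance values
              [3.0e-6, 0.5e-6],
              [2.5e-6, 1.5e-6]])     # 3 rows (inputs) x 2 columns (outputs)
V = np.array([0.2, 0.1, 0.3])        # input voltages applied to the rows

I = G.T @ V                          # column currents, in amperes
print(I)                             # the matrix-vector product, no data shuttling
```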

However, there are drawbacks to memristive implementations of artificial neural networks. There is usually a trade-off between energy efficiency and accuracy. Faulty devices, random telegraph noise, device-to-device variability, and line resistance can all degrade the accuracy of an artificial neural network. Hence, although neuromorphic computing may be more eco-friendly, the results may be less precise.
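Non-idealities like these are often modeled as perturbations of the programmed weights, for example devices stuck at zero plus a spread around each target value. A hedged sketch of such a toy model follows; the fault rate and variability below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_weights(w, fault_rate=0.05, variability=0.1):
    """Toy model of memristive non-idealities:
    - each surviving device varies around its target value (Gaussian multiplicative spread),
    - a fraction of devices are 'faulty' and stuck at zero."""
    w = w * rng.normal(1.0, variability, size=w.shape)   # device-to-device variability
    stuck = rng.random(w.shape) < fault_rate             # randomly faulty devices
    w[stuck] = 0.0
    return w

w_ideal = rng.normal(0.0, 0.1, size=(784, 100))          # hypothetical layer weights
w_device = perturb_weights(w_ideal.copy())               # what the hardware actually stores
```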

To address the accuracy problem in neuromorphic computing, the team of researchers led by Dr. Adnan Mehonic applied the concept of a committee machine to a memristor-based neural network. In this study, the team used multiple neural networks (a committee) to increase inference accuracy. The researchers simulated the conditions that could impact accuracy and then formed committees of different artificial neural networks. The outputs of the individual networks in a committee were averaged to create a single output vector. The feed-forward ANN was trained to recognize handwritten digits using 60,000 images from the Modified National Institute of Standards and Technology (MNIST) database, and was optimized by minimizing the cross-entropy error function with stochastic gradient descent.
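Conceptually, the committee is ensemble averaging at inference time: several independently imperfect networks each produce an output vector, the vectors are averaged, and the averaged vector is classified. Because the device errors of the members are largely uncorrelated, they tend to cancel in the average. A minimal sketch of that step, assuming each member is a callable returning class scores (the names here are illustrative, not the authors' code):

```python
import numpy as np

def committee_predict(members, x):
    """Average the output vectors of all committee members, then classify.
    `members` is a list of callables mapping an input image to a class-score vector."""
    outputs = np.stack([member(x) for member in members])  # (n_members, n_classes)
    averaged = outputs.mean(axis=0)                        # single combined output vector
    return int(np.argmax(averaged))                        # predicted digit, 0-9
```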

“Using simulations and experimental data from three different types of memristive devices, we show that committee machines employing ensemble averaging can successfully increase inference accuracy in physically implemented neural networks that suffer from faulty devices, device-to-device variability, random telegraph noise and line resistance,” the researchers reported. “Importantly, we demonstrate that the accuracy can be improved even without increasing the total number of memristors.”

Copyright © 2020 Cami Rosso All rights reserved.
