Artificial neural networks are brain-inspired computing systems that can be trained to perform complex tasks, in some cases better than humans.
These networks are widely used in social media, streaming, online gaming, and other areas where users are targeted with posts, films, games, or other content matched to their individual preferences. Elsewhere, neural networks are used in healthcare, for example to recognize tumours on CT scans.
While the technology is remarkably effective, a Danish researcher behind a new study cautions against overusing it. The study's authors demonstrate that all the energy in the world could be spent training a single neural network without it ever reaching perfection.
“The problem is that an infinite amount of energy can be used to, for example, train these neural networks just to target us with ads. The network would never stop training and improving. That’s like a black hole that swallows up all the energy you send to it, which is by no means sustainable,” says Mikkel Abrahamsen, assistant professor in the computer science department at the University of Copenhagen.
The technology should therefore be deployed wisely, and each use carefully considered beforehand, as simpler and more energy-efficient solutions may suffice. Abrahamsen clarifies:
“It is important for us to determine where to use neural networks, in order to provide the greatest value for us humans. Some will see neural networks as better suited to scanning medical images for tumours than to targeting us with advertising and products on our social media and streaming platforms. In some cases, one might make do with less resource-intensive techniques, such as regression models or random decision forests.”
A theoretical study:
- In collaboration with German and Dutch researchers, Mikkel Abrahamsen provides a theoretical explanation for why neural networks, used by social media platforms among others, consume huge amounts of energy: they can never be trained to perfection.
- The researchers have proven that neural network training belongs to a harder complexity class than previously thought: it should be placed in the class known as the existential theory of the reals (∃ℝ), rather than in the lighter class NP.
- The class ∃ℝ contains problems comparable to solving systems of many quadratic equations in several unknowns at once, which is infeasible in practice.
The study was published at the NeurIPS Conference last December.
Neural networks are trained by feeding them data. These could be digitized images of tumours, through which a neural network learns to spot cancer in a patient.
In principle, such training can continue indefinitely. In their new study, the researchers demonstrate that this is a bottomless pit, as the process resembles solving very advanced equations with many unknowns.
“The best algorithms available today can only handle up to eight unknowns, whereas neural networks can be configured with billions of parameters. Therefore, an optimal solution might never be found when training a network, even if the entire global energy supply were used,” explains Mikkel Abrahamsen.
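To see why the number of unknowns matters so much, consider a deliberately naive way of attacking such a system: brute-force search. The following sketch is purely illustrative (the equations, grid, and tolerance are invented here and have nothing to do with the algorithms the study refers to), but it shows how the search space multiplies with every additional unknown:

```python
# Toy illustration of why solving polynomial systems scales badly:
# brute-force grid search over candidate solutions. Each extra unknown
# multiplies the number of grid points by another factor of `steps`.
import itertools

def grid_search(equations, n_unknowns, steps=21, lo=-1.0, hi=1.0, tol=1e-9):
    """Scan a regular grid; return (solution_or_None, points_examined)."""
    axis = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    examined = 0
    for point in itertools.product(axis, repeat=n_unknowns):
        examined += 1
        if all(abs(eq(*point)) <= tol for eq in equations):
            return point, examined
    return None, examined

# A small quadratic system in two unknowns: x^2 = 0.25 and x*y = -0.25.
system2 = [lambda x, y: x * x - 0.25,
           lambda x, y: x * y + 0.25]
sol, n = grid_search(system2, 2)
print(sol, n)

# 2 unknowns -> at most 21**2 = 441 grid points; 8 unknowns -> 21**8,
# roughly 3.8e10 points. A network with billions of parameters is
# hopelessly out of reach for this kind of exhaustive search.
```

Real solvers are far cleverer than a grid scan, but the exponential growth in the number of unknowns is the essential obstacle either way.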
As training progresses, neural networks make less and less efficient use of the energy supplied to them.
“Things get slower and slower as we train neural networks. For example, a network may reach 80% accuracy after one day of training but need a whole month more to reach 85%. Thus, we benefit less and less from the energy used in training, without ever reaching perfection,” he says.
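This slowdown can be reproduced in miniature. The sketch below (the model, data, and learning rate are all invented for illustration) trains a single sigmoid neuron by gradient descent and records the loss after four equally long training phases; each phase buys a smaller improvement than the one before:

```python
# Diminishing returns in training, illustrated with one sigmoid neuron.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny training set for a neuron with weight w and bias b.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

def loss(w, b):
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

def step(w, b, lr=0.5):
    # Analytic gradient of the mean squared error above.
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        d = 2 * (p - y) * p * (1 - p) / len(data)
        gw += d * x
        gb += d
    return w - lr * gw, b - lr * gb

w, b = 0.0, 0.0
checkpoints = []
for phase in range(4):            # four equal training phases
    for _ in range(250):
        w, b = step(w, b)
    checkpoints.append(loss(w, b))

gains = [checkpoints[i] - checkpoints[i + 1] for i in range(3)]
print(checkpoints)   # the loss keeps falling...
print(gains)         # ...but each phase improves less than the last
```

The loss never reaches zero here either: the sigmoid only approaches its targets asymptotically, so every extra unit of compute buys a smaller improvement.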
Many people don’t realize that networks can be trained indefinitely, which is why Abrahamsen thinks we need to focus on their appetite for power.
“We don’t appreciate our contribution to this enormous energy consumption when we log on to Facebook or Twitter, compared, for example, to our awareness of the impacts of intercontinental flights or clothing purchases. We must therefore open our eyes to the extent to which this technology pollutes and affects our climate,” concludes Abrahamsen.
What is a neural network?
- A neural network is a machine learning model, inspired by the activity of neurons in the human brain, that can be trained to perform complex tasks, in some cases at superhuman levels.
- Neural networks have many parameters that need to be tuned for them to provide meaningful output – a process called training.
- Neural networks are usually trained using an algorithm known as backpropagation, which gradually adjusts the parameters in the direction that reduces the network’s error.
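As a rough sketch of the idea (a deliberately minimal network, not a production implementation), backpropagation amounts to applying the chain rule backwards through the layers. For a network with one hidden neuron and one output neuron, the hand-derived gradients can be checked against numerical finite differences:

```python
# Minimal backpropagation: chain rule through a 1-hidden-neuron network,
# verified against numerical finite differences.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(params, x):
    w1, b1, w2, b2 = params
    h = sigmoid(w1 * x + b1)      # hidden activation
    out = sigmoid(w2 * h + b2)    # network output
    return h, out

def loss(params, x, y):
    _, out = forward(params, x)
    return (out - y) ** 2

def backprop(params, x, y):
    """Gradient of the squared error w.r.t. each parameter."""
    w1, b1, w2, b2 = params
    h, out = forward(params, x)
    d_out = 2 * (out - y) * out * (1 - out)   # error signal at the output
    d_h = d_out * w2 * h * (1 - h)            # propagated back to the hidden neuron
    return [d_h * x, d_h, d_out * h, d_out]

params = [0.3, -0.1, 0.8, 0.2]
x, y = 1.5, 1.0
grad = backprop(params, x, y)

# Numerical check: perturb each parameter slightly and compare slopes.
eps = 1e-6
for i in range(4):
    bumped = list(params)
    bumped[i] += eps
    numeric = (loss(bumped, x, y) - loss(params, x, y)) / eps
    assert abs(numeric - grad[i]) < 1e-5
print(grad)
```

A training step then nudges each parameter a small amount against its gradient; repeating this is exactly the process that, as the study argues, can continue indefinitely without reaching a perfect solution.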