Artificial neural networks learn like humans in their sleep
Researchers at TU Ilmenau have made groundbreaking progress in the field of machine learning. In an extensive study, the scientists investigated how the mechanisms of action that enable humans to deepen learned knowledge while asleep can be transferred to artificial neural networks. The research results will help to drastically increase the performance of self-learning machines.
Just a few years ago, sleep researchers demonstrated experimentally that humans learn during deep sleep. The scientists built on an observation made during waking hours: the connections between neurons, called synapses, do not merely pass chemical or electrical signals from one nerve cell to the next; they also amplify or attenuate the intensity of those signals. In this way, synapses enable the nervous system to absorb and adapt to changing influences from the environment. During sleep, this state of excitation returns to normal: the nervous system consolidates in memory the new information absorbed during the active waking phase, discards random or unimportant information, and thereby becomes more receptive to new information.
Professor Patrick Mäder, head of the Software Engineering for Safety-Critical Systems Group at TU Ilmenau, built on this process, which experts refer to as synaptic plasticity: "Synaptic plasticity is responsible for the function and performance of our brain and is thus the basis of learning. If the synapses were to always remain in an activated state, this would ultimately make learning more difficult, as we know from animal experiments. It's the recovery phase during sleep that makes it possible for us to retain what we've learned in memory."
The researchers at TU Ilmenau mimic in artificial neural networks the ability of the synaptic system to react dynamically to a wide variety of stimuli while keeping the nervous system stable and in balance. Using so-called synaptic scaling, they transfer the mechanisms that regulate this dynamic brain system to machine learning methods, with the result that the artificial models behave as effectively as their natural counterparts.
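The principle behind synaptic scaling can be sketched roughly as follows: after learning steps, each neuron's total incoming synaptic strength is nudged back toward a fixed target, so individual weights may grow or shrink during learning while the overall level of excitation stays in balance. The function name, the use of the absolute-weight sum as "strength", and the update rate below are illustrative assumptions, not the method published by the Ilmenau group.

```python
import numpy as np

def synaptic_scaling(W, target=1.0, rate=0.1):
    """Nudge each neuron's total incoming synaptic strength toward a target.

    W holds one neuron's incoming weights per column. This is a minimal
    sketch of the general idea, not the published algorithm.
    """
    strength = np.abs(W).sum(axis=0)                 # per-neuron total strength
    scale = 1.0 + rate * (target / strength - 1.0)   # partial step toward target
    return W * scale                                 # multiplicative rescaling

# Weights drift during "waking" learning; repeated scaling phases pull each
# neuron's total strength back toward the target without erasing the relative
# pattern of its weights.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(4, 3))
for _ in range(50):
    W = synaptic_scaling(W, target=1.0, rate=0.2)
print(np.abs(W).sum(axis=0))  # each column's strength converges toward 1.0
```

Because the rescaling is multiplicative, the sign and relative magnitude of a neuron's weights (what it has learned) are preserved; only the overall gain is regulated, mirroring the homeostatic role the article ascribes to sleep.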
Applications for such highly efficient self-learning machines can be found in medicine, for example, where EEG or tomography examinations are used to detect diseases on the basis of biological data. Other areas of application include the smart grid control of electrical networks or automated laser manufacturing.
The methods developed in the TU Ilmenau study for transferring the brain's self-regulating mechanisms to artificial neural networks were published in the high-ranking journal "IEEE Transactions on Neural Networks and Learning Systems" and have attracted international attention in the scientific community. Martin Hofmann, PhD student of Prof. Mäder and co-author of the publication, points to a major problem that such nature-inspired methods target in artificial intelligence applications: overfitting. "We speak of overfitting when a model has memorized certain patterns in the training data but is not flexible enough to make correct predictions on unknown test data. We are therefore looking for ways to counteract overfitting and instead get closer to the brain's mechanisms of self-regulation."
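The overfitting Hofmann describes can be made concrete with a toy example: a model with too much freedom reproduces its noisy training points almost perfectly yet generalizes poorly to unseen test points. The data, the polynomial degree, and the error metric here are purely illustrative and unrelated to the published study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten noisy training samples of a sine wave, and clean test points elsewhere.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)

# A degree-9 polynomial has enough parameters to thread through all ten
# training points, noise included -- the hallmark of overfitting.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(train_err, test_err)  # training error near zero, test error much larger
```

The model has "memorized" the training noise, so its tiny training error says nothing about its much larger error on the test points, which is exactly the gap regularization techniques such as synaptic scaling aim to close.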
Numerous mechanisms of action from biology have found their way into the development of learning machines, starting with the earliest replicas of neural networks. The new findings from TU Ilmenau open up additional fascinating possibilities for deep learning, a highly efficient form of machine learning.
IEEE publication: https://ieeexplore.ieee.org/document/9337198