Research

Energy-efficient and data-saving: research team develops audio system for fault detection in industry

How can bio-inspired microsensor technology be used to reliably and resource-efficiently detect acoustic anomalies in industrial environments? An interdisciplinary research team from the Ilmenau School of Green Electronics (ISGE) at TU Ilmenau is looking into this question.

A scientist at the audio sensor. TU Ilmenau/Annika Mehlis
As one of twelve doctoral students at the Ilmenau School of Green Electronics, Christian Kehling is working with colleagues to develop an artificial auditory system

Whether on the factory floor, in traffic or in hospitals: wherever people or objects move, noises are generated, be it machine noise, the humming of motors or the sound of surgical robots. If these noises change, this can tell us something about the quality of products or industrial processes: "If something suddenly rattles or grinds, for example, this can be a sign that a machine is broken or overloaded and needs to be serviced, switched off or repaired," says Dr. Stephan Werner, Head of the Electronic Media Technology Group, describing the motivation for the research project:

We want to detect and localize such status changes and events with the auditory system we are developing - and do so in the most energy-efficient way possible.

Ideally, the researchers want their system to detect the smallest deviations long before a person would notice them, and thus prevent machine malfunctions or safety risks.

 

A bio-inspired artificial auditory system

At the heart of the system is a bio-inspired microelectromechanical system (MEMS) sensor developed by scientists at TU Ilmenau in collaboration with the Biomedical Sensors and Microsystems Research Group at Ulm University.

It has properties similar to those of the human ear: it reacts particularly sensitively to selected frequencies and can be used individually or in combination with other sensors

explains Christian Kehling, a doctoral student at the Ilmenau School of Green Electronics (ISGE) at the Center for Micro and Nanotechnologies (ZMN).

The sensor was co-developed by Dr. Tzvetan Ivanov, a research assistant in the Micro- and Nanoelectronic Systems Group and the third member of the ISGE research team:

Unlike other sensors, ours processes incoming audio signals and stimuli non-linearly. For volume, for example, this means that the signal is not transformed passively and uniformly across all frequencies, as with a conventional microphone, but actively: loud components are compressed by an integrated circuit (an FPGA board), while quiet tones are emphasized.
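The level-dependent behavior Ivanov describes resembles classical companding. As a rough illustration only (the team's actual FPGA implementation is not described in the article), a μ-law-style curve compresses loud samples while giving quiet samples a relatively higher gain:

```python
import math

def compress_level(x, mu=255.0):
    """mu-law style companding: loud values are compressed,
    quiet values are boosted relative to a linear response.
    x is a sample in [-1, 1]."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * math.log1p(mu * abs(x)) / math.log1p(mu)

# A quiet sample receives a much larger relative gain than a loud one.
quiet_gain = compress_level(0.01) / 0.01
loud_gain = compress_level(0.9) / 0.9
```

Here `mu` controls how strongly the dynamic range is flattened; the sensor achieves a comparable effect through its physical design rather than a software curve.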

Compared to conventional systems, which have to process signals in separate modules, the bio-inspired system carries out these steps automatically, by virtue of the sensor's design alone. In addition, the sensor is very narrow-band: it responds only to certain frequencies, much like the inner hair cells in our ears. Stephan Werner explains:

This means that the sensor itself already pre-processes the signals in a similar way to our ear. We therefore assume that this frequency-selective pre-processing of the input signal means that smaller, more efficient neural networks are sufficient to acoustically classify the sensor signals and reliably recognize changing sounds.
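The sensor performs this frequency selection mechanically, but the idea can be mimicked in software as an analogy. The Goertzel algorithm, sketched below, measures the energy in a single narrow frequency band, much as one resonant sensor channel would (this is an illustration, not the team's processing chain):

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power at a single frequency bin (Goertzel algorithm) --
    a software analogue of one narrow-band sensor channel."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 1 kHz tone sampled at 16 kHz: the 1 kHz channel responds
# strongly, the 3 kHz channel barely at all.
sr = 16000
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(sr // 10)]
p_in_band = goertzel_power(tone, sr, 1000)
p_off_band = goertzel_power(tone, sr, 3000)
```

A bank of such narrow-band energies is a compact feature vector, which is why a small classifier can suffice downstream.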

 

A few small sensors for a large area

Instead of attaching many different sensors to a machine and networking them together, as is usual in conventional acoustic monitoring, the research team at the Ilmenau School of Green Electronics has chosen a different approach, says the scientist:

We want to develop an artificial auditory system that consists of as few sensors as possible. Instead of being placed on the machine, it is positioned in the room in order to cover as large an area of the room as possible auditorily.

This not only reduces the number of sensors and measuring systems required to detect and localize events. "The system also offers advantages in terms of data security," says Stephan Werner. Unlike a microphone, which picks up the sounds themselves, the system developed by the Ilmenau researchers merely deduces from the incoming signals where the noise sources are located, what kind of event it is and what, if anything, has changed:

The audio content itself and therefore, for example, conversations in the factory hall are not recorded and passed on.
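The article does not describe the team's localization method. A classic way to deduce where a sound comes from without storing the audio itself is time-difference-of-arrival (TDOA) estimation: only the relative delay between sensor channels is kept, not the content. A minimal sketch, assuming two hypothetical sensor channels:

```python
def estimate_delay(a, b, max_lag):
    """Estimate how many samples signal b lags behind signal a
    by maximizing the cross-correlation over candidate lags.
    Only this single lag value would need to be retained."""
    n = len(a)
    best_d, best_score = 0, float("-inf")
    for d in range(-max_lag, max_lag + 1):
        score = sum(a[i - d] * b[i]
                    for i in range(max(0, d), min(n, n + d)))
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Demo: b is a copy of a delayed by 5 samples.
a = [0.0] * 40
a[12], a[13] = 1.0, 0.5
b = [0.0] * 5 + a[:35]
lag = estimate_delay(a, b, max_lag=10)
# With real sensors, lag / sample_rate * 343 m/s gives the path-length
# difference that constrains the source position.
```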

The system that the research team in the VR Lab at the Ilmenau Interactive Immersive Technologies Center (I3TC) is currently working on is still very large. However, Christian Kehling does not want it to stay that way:

Our goal is to downsize the electronics so that the system itself is also 'green'.

Then, according to the scientist, it could also be used in wearables or hearing aids, for example.

Contact

Dipl.-Ing. Christian Kehling