Prof. Martin Ziegler
Micro- and Nanoelectronic Systems Group
Phone: +49 3677 69-3711
The main goal of this project is to build a smart sensor system for auditory signals that pre-processes information directly at the sensor level, adapts to the current hearing environment for optimal sensing, and can learn to cope with unknown situations. In this way frequency resolution, dynamic range, and in particular the signal-to-noise ratio are improved, and sensing can be tuned to specific signals selected by an attention focus. The developments will be inspired by the capabilities of the biological cochlea, where adaptation is driven by processes such as the active motility of the hair cells, the coupling between hair cells, fluid, and membrane environment, and diverse cascaded feedback signals from higher auditory processing stages.

Adapting the sensors themselves rather than the processing stage offers several advantages over conventional microphones with downstream signal processing units: it reduces the signal distortion that processing and filtering steps can introduce; it improves the detectability of low-volume sounds in noisy environments (as in the cocktail-party effect) through integrated selective amplification (higher signal-to-noise ratio); and it is more efficient in terms of energy consumption and computational effort.

The system consists of a mechanical neural network (MNN) based on MEMS cantilevers, which provides sensing information to a suitably designed controller. The desired nonlinear dynamics of the MNN, arising from internal coupling and nonlinear feedback mechanisms, will be tuned by the controller to achieve a dynamical manipulation comparable to the efferent feedback that adapts sensor properties in the cochlea.
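As an illustrative sketch only (not the project's actual model or code), the effect of controller-tuned nonlinear sensor dynamics can be mimicked with a chain of coupled Hopf oscillators, a standard toy model of active cochlear amplification. The feedback parameter mu below stands in for the controller's tuning knob; all parameter values are assumptions chosen for the demonstration:

```python
import numpy as np

def mean_response(mu, n=8, f0=1000.0, coupling=0.05, drive_amp=1e-3,
                  dt=1e-6, steps=40000):
    """Euler-integrate a ring of coupled Hopf oscillators driven at resonance.

    Each complex amplitude z_k follows
        dz_k/dt = (mu + i*omega) z_k - |z_k|^2 z_k
                  + coupling * (z_{k-1} + z_{k+1} - 2 z_k) + F_k(t),
    with a sinusoidal drive F applied to element 0 only.  Here mu plays the
    role of the (hypothetical) controller's feedback parameter: strongly
    negative mu means a passive, heavily damped sensor, while mu close to
    zero sits near the self-oscillation threshold, where the gain for weak
    signals is largest.  Returns the mean response amplitude of the driven
    element over the second half of the simulation.
    """
    omega = 2.0 * np.pi * f0            # identical resonators for simplicity
    z = np.zeros(n, dtype=complex)
    amps = []
    for step in range(steps):
        t = step * dt
        drive = np.zeros(n, dtype=complex)
        drive[0] = drive_amp * np.exp(1j * omega * t)   # resonant drive
        lap = np.roll(z, 1) + np.roll(z, -1) - 2.0 * z  # ring coupling (simplification)
        dz = (mu + 1j * omega) * z - np.abs(z) ** 2 * z + coupling * lap + drive
        z = z + dt * dz
        if step >= steps // 2:
            amps.append(abs(z[0]))
    return float(np.mean(amps))

passive = mean_response(mu=-2000.0)   # heavily damped ("feedback off")
active = mean_response(mu=-100.0)     # tuned near the oscillation threshold
print(f"passive gain: {passive:.2e}, near-critical gain: {active:.2e}")
```

Moving mu toward zero raises the weak-signal gain roughly in proportion to the reduction of the damping rate, which is the qualitative effect an efferent-like controller feedback could exploit; in the actual system the Hopf terms would be replaced by the measured dynamics of the coupled MEMS cantilevers.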