Underwater 3D scanning system for cultural heritage documentation. - In: Remote sensing, ISSN 2072-4292, Bd. 15 (2023), 7, 1864, S. 1-14
Three-dimensional capturing of underwater archeological sites or sunken shipwrecks can support important documentation purposes. In this study, a novel 3D scanning system based on structured illumination is introduced, which supports cultural heritage documentation and measurement tasks in underwater environments. The newly developed system consists of two monochrome measurement cameras, a projection unit that produces aperiodic sinusoidal fringe patterns, two flashlights, a color camera, an inertial measurement unit (IMU), and an electronic control box. The opportunities and limitations of the measurement principle of the 3D scanning system are discussed and compared to other 3D recording methods such as laser scanning, ultrasound, and photogrammetry in the context of underwater applications. Possible operational scenarios for cultural heritage documentation are introduced and discussed. A report on application activities in water basins and offshore environments, including measurement examples and accuracy results, is given. The study shows that the new 3D scanning system can be used both for the topographic documentation of underwater sites and for generating detailed true-scale 3D models, including texture and color information, of objects that must remain under water.
https://doi.org/10.3390/rs15071864
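As an illustration of the triangulation underlying such a structured-light stereo sensor, the following Python/NumPy sketch reconstructs one 3D point from a correspondence seen by two calibrated cameras; the projection matrices and image coordinates are invented placeholders, not values from the system described above.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : image coordinates of the same object point in each camera
             (here in normalized camera coordinates for simplicity).
    Returns the 3D point in the common world frame.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative call with made-up camera matrices and image coordinates:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m baseline
print(triangulate_point(P1, P2, (0.10, 0.05), (0.06, 0.05)))
```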
Interactive robot teaching based on finger trajectory using multimodal RGB-D-T-data. - In: Frontiers in robotics and AI, ISSN 2296-9144, Bd. 10 (2023), 1120357, S. 01-13
The concept of Industry 4.0 is changing industrial manufacturing patterns, which are becoming more efficient and more flexible. In response to this trend, efficient robot teaching approaches that avoid complex programming have become a popular research direction. We therefore propose an interactive, finger-touch-based robot teaching scheme using multimodal 3D image processing (color (RGB), thermal (T), and point cloud (3D)). The heat trace left by the finger touching the object surface is analyzed on the multimodal data in order to precisely identify the true hand/object contact points, which are then used to calculate the robot path directly. To optimize the identification of the contact points, we propose a calculation scheme using a number of anchor points that are first predicted by hand/object point cloud segmentation. Subsequently, a probability density function is defined to calculate the prior probability distribution of the true finger trace. The temperature in the neighborhood of each anchor point is then dynamically analyzed to calculate the likelihood. Experiments show that the trajectories estimated by our multimodal method have significantly better accuracy and smoothness than those obtained by analyzing only the point cloud and a static temperature distribution.
https://doi.org/10.3389/frobt.2023.1120357
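The fusion of a geometric prior (anchor points from hand/object segmentation) with a thermal likelihood can be sketched roughly as follows; the Gaussian forms and the parameter values below are illustrative assumptions, not the authors' exact model (Python/NumPy).

```python
import numpy as np

def score_anchor_points(anchors, hand_tip, temps, ambient_temp,
                        sigma_d=0.01, sigma_t=2.0):
    """Rank candidate contact points by prior x likelihood (both assumed Gaussian).

    anchors      : (N, 3) candidate contact points from hand/object segmentation [m].
    hand_tip     : (3,) estimated fingertip position [m] -> geometric prior.
    temps        : (N,) mean temperature around each anchor point [deg C].
    ambient_temp : surface temperature away from the heat trace [deg C].
    """
    d = np.linalg.norm(anchors - hand_tip, axis=1)
    prior = np.exp(-0.5 * (d / sigma_d) ** 2)          # closer to fingertip = more likely
    likelihood = 1.0 - np.exp(-0.5 * ((temps - ambient_temp) / sigma_t) ** 2)  # warmer = more likely
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Toy example with three candidate anchor points:
anchors = np.array([[0.10, 0.00, 0.02], [0.11, 0.01, 0.02], [0.20, 0.05, 0.02]])
weights = score_anchor_points(anchors, hand_tip=np.array([0.10, 0.00, 0.03]),
                              temps=np.array([29.5, 27.0, 24.1]), ambient_temp=24.0)
print(weights)
```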
Training program for the metric specification of imaging sensors. - In: Acta IMEKO, ISSN 2221-870X, Bd. 11 (2022), 4, S. 1-6
Measurement systems in industrial practice are becoming increasingly complex, and their level of system integration keeps rising. Nevertheless, their functionality can in principle always be traced back to proven basic functions and technologies, which must be understood before they can be developed further. For this reason, teaching these fundamentals is indispensable in engineering education. This paper presents a concept for a contemporary training program within practical engineering education at university level in the subject area of optical coordinate measuring technology. The students learn to approach the subject in a fundamentals-oriented way and to understand the system integration in detail, from the basic idea to the actual solution, as is common practice in an industrial environment. The training program is designed so that the basics must be worked out at the beginning, with gaps in knowledge closed through group work and the targeted intervention of a supervisor. After the technology has been fully developed theoretically, the system is put into operation and used for a characterizing measurement. The measurement data are then evaluated using standardized procedures. A special part of the training program, intended to promote the students' own creativity and understanding, is the evaluation of the modulation transfer function (MTF) of the system with a self-developed algorithmic program section in the script-oriented development environment MATLAB; the students can draw on predefined functions for the evaluation but still have to integrate them into a working implementation themselves.
https://doi.org/10.21014/actaimeko.v11i4.1361
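As a rough illustration of the MTF evaluation the students implement (the training program uses MATLAB; this sketch uses Python/NumPy instead), the modulation transfer function can be estimated from a measured edge profile via the line spread function; the synthetic edge below merely stands in for measured data.

```python
import numpy as np

def mtf_from_edge_profile(edge_profile, sample_spacing=1.0):
    """Estimate the MTF from a 1D edge spread function (ESF).

    edge_profile   : intensity samples across a dark-to-bright edge.
    sample_spacing : distance between samples (e.g. in pixels).
    Returns spatial frequencies and the normalized MTF.
    """
    lsf = np.gradient(edge_profile, sample_spacing)      # line spread function
    lsf = lsf * np.hanning(lsf.size)                      # window to suppress edge noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                         # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)   # cycles per pixel
    return freqs, mtf

# Synthetic blurred edge as a stand-in for a measured profile:
x = np.linspace(-10, 10, 201)
edge = 0.5 * (1 + np.tanh(x / 1.5))
freqs, mtf = mtf_from_edge_profile(edge, sample_spacing=x[1] - x[0])
print(freqs[:5], mtf[:5])
```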
Inline process monitoring of hairpin welding using optical and acoustic quality metrics. - In: IEEE Xplore digital library, ISSN 2473-2001, (2022), 8 pages in total
Due to the electrification of the drive train, hairpin technology for stators has established itself in the automotive industry. Hairpins are rectangular copper wires that are connected into full stator coils using laser beam welding. The welding results depend on the preceding processes and on the tolerances of the welding system. The quality of the connection is defined by its cross-sectional area, which is influenced by porosity; spatter is a further quality characteristic. The quantification of the cross-sectional area and porosity of the weld seams is usually carried out post-process using X-ray computed tomography (CT). This paper investigates spatter using high-speed camera-based image processing and airborne sound recorded with an optical microphone. By varying factors that influence the welding result (position deviations, maximum laser power, focal position shift, stripping quality), different quality levels of the welded joints are generated. The developed method for in-process evaluation of high-speed camera images enables spatter detection and quantification of spatter number and size, with an accuracy of 84%. In addition, the characteristics of the airborne sound recorded during the process allow conclusions to be drawn about the welding result: deviations in maximum laser power, height offset, focal position shift, and stripping quality can be detected. Both methods provide reliable inline monitoring of the welding process as well as a quantitative quality assessment.
https://doi.org/10.1109/EDPC56367.2022.10019745
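A strongly simplified version of the spatter detection and size quantification in a single high-speed frame could look as follows (Python with SciPy); the intensity threshold and the minimum blob size are placeholders, not the parameters used in the paper.

```python
import numpy as np
from scipy import ndimage

def detect_spatters(frame, threshold=200, min_area_px=3):
    """Detect bright spatter blobs in a high-speed camera frame.

    frame       : 2D grayscale image (0-255).
    threshold   : intensity above which a pixel is considered spatter glow.
    min_area_px : blobs smaller than this are treated as noise.
    Returns the number of spatters and their areas in pixels.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask)                       # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    areas = areas[areas >= min_area_px]
    return len(areas), areas

# Synthetic frame: one 3x3 spatter blob and one hot pixel (filtered out).
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10:13, 20:23] = 255
frame[40, 5] = 255
count, areas = detect_spatters(frame)
print(count, areas)
```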
Process control of laser ablated coated surface applying an adapted image processing system. - In: 12th CIRP Conference on Photonic Technologies, (2022), S. 584-587
The aim of this work was the implementation of a control system for laser ablation processes on optical coatings. A glass substrate with a metal-based coating was ablated using picosecond laser radiation, and the removal of the layer was investigated as a function of the laser fluence and the scanning parameters (pulse and line distance). With the help of an adapted image processing system, a global characterization of the transmissive properties of the surfaces from which the optical coating was ablated was performed. After the ablation process, the surfaces were scanned, and the areas whose transmissive properties are acceptable (OK) or not acceptable (not OK) were detected. By transforming the image data into x- and y-coordinates, a subsequent laser ablation step corrects the detected not-OK areas in a control loop. The developed approach can help to reuse expensive optics and masks in industrial and scientific applications.
https://doi.org/10.1016/j.procir.2022.08.155
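The step of turning detected not-OK image regions into machine coordinates for a corrective ablation pass can be illustrated as below (Python/NumPy); the scale, offset, and rotation values are invented for the example and stand in for whatever calibration the real system uses.

```python
import numpy as np

def pixels_to_machine_xy(pixel_points, scale_mm_per_px, origin_mm, rotation_deg=0.0):
    """Map (col, row) pixel coordinates of not-OK regions to machine x/y in mm.

    Assumes a rigid, calibrated mounting: scale + rotation + offset (no shear).
    """
    theta = np.deg2rad(rotation_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(pixel_points, dtype=float) * scale_mm_per_px
    return pts @ R.T + np.asarray(origin_mm)

# Centers of two detected not-OK regions (pixel coordinates):
not_ok_centers = [(120.5, 340.0), (800.0, 92.5)]
xy_mm = pixels_to_machine_xy(not_ok_centers, scale_mm_per_px=0.02,
                             origin_mm=(15.0, 7.5), rotation_deg=0.3)
print(xy_mm)   # target positions for the corrective ablation pass
```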
Efficient freeform-based pattern projection system for 3D measurements. - In: Optics express, ISSN 1094-4087, Bd. 30 (2022), 22, S. 39534-39543
For three-dimensional (3D) measurement of object surfaces and shapes with pattern projection systems, we used a hybrid projection system, i.e., a combination of a projection lens and a transmissive freeform, to generate an aperiodic sinusoidal fringe pattern. The freeform redistributes the light and thus leads to effective, low-loss pattern projection, as it increases the total transmitted intensity of the system and dissipates less power than classical projection systems. In this paper, we present the conception and realization of the measurement setup of a transmissive fringe projection system. We compare the characteristics of the generated intensity distribution with a classical system based on GOBO (GOes Before Optics) projection and show measurement results for different surface shapes recorded with the new system.
https://doi.org/10.1364/OE.470564
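For illustration only, an aperiodic sinusoidal fringe pattern of the kind such projectors generate can be synthesized numerically as a sinusoid whose local spatial frequency drifts across the field (Python/NumPy); the frequency range and image size are arbitrary choices, not the optical design values of the paper.

```python
import numpy as np

def aperiodic_fringe_pattern(width=1024, height=768, f_min=0.01, f_max=0.05, seed=0):
    """Generate one aperiodic sinusoidal fringe pattern (values in [0, 1]).

    The local spatial frequency (cycles per pixel along x) drifts smoothly and
    randomly between f_min and f_max, so the pattern has no global period.
    """
    rng = np.random.default_rng(seed)
    # Smoothly varying instantaneous frequency along the x axis:
    knots = rng.uniform(f_min, f_max, size=16)
    freq = np.interp(np.linspace(0, 15, width), np.arange(16), knots)
    phase = 2 * np.pi * np.cumsum(freq)          # integrate frequency -> phase
    row = 0.5 * (1 + np.sin(phase))
    return np.tile(row, (height, 1))             # vertical fringes, constant along y

pattern = aperiodic_fringe_pattern()
print(pattern.shape, pattern.min(), pattern.max())
```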
Methoden zur Reduktion der Messlatenz von GOBO-Projektor-basierten 3D-Sensoren [Methods for reducing the measurement latency of GOBO-projector-based 3D sensors]. - Ilmenau : Universitätsbibliothek, 2022. - 1 online resource (95 pages)
Technische Universität Ilmenau, Dissertation 2022
Accurate optical 3D measurement methods are widely used in industry, medicine, and science. Many of these applications require a fast reaction to changes in the measurement scene, for example when a machine has to be switched off or stopped for safety reasons, or when direct feedback is to be given to a human. If the 3D result is available within a sufficiently short time after the start of the measurement, the reaction can be triggered based on this result. The principle of the GOBO-projector-based active stereo sensor has established itself as an accurate optical 3D measurement method. In this method, a temporally varying, aperiodic fringe pattern is projected onto the measurement object while two calibrated cameras each synchronously record an image sequence. Within these image sequences, the images of object points visible in both cameras are assigned to each other, and for each such pair the 3D coordinate of the corresponding object point is triangulated. The method also enables 3D acquisition under special requirements that other accurate 3D sensor principles can hardly meet, such as measurement at special light wavelengths, e.g., in the near-infrared range, which allows glare-free measurement, or the capture of very fast processes such as airbag deployments. So far, however, it has not been possible to provide the 3D measurement results within such a short time (e.g., 100 ms), i.e., with such a short measurement latency, that an immediate reaction to a change in the measurement scene is possible. Reducing this measurement latency is the goal of this thesis. Methods are described and investigated with which the measurement latency of GOBO-projector-based active stereo sensors can be reduced to below 100 ms. The improvements focus on two areas: the fast reconstruction of the 3D model from the recorded image sequences, and the reduction of the acquisition time by shortening the image sequence length. The latter is enabled by an optimization of the pattern projection, which considerably reduces undesired measurement artifacts for short image sequence lengths. Finally, several applications that benefit from these improvements are presented.
https://doi.org/10.22032/dbt.53040
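The matching step described in the abstract, assigning image points of the two cameras to each other via their temporal gray-value sequences, can be sketched with normalized cross-correlation; this is a strongly simplified illustration (Python/NumPy) that searches along a single image row and ignores rectification and subpixel refinement.

```python
import numpy as np

def best_match_along_row(seq_left, row_right, min_corr=0.9):
    """Find the column in the right camera whose temporal sequence best matches seq_left.

    seq_left  : (T,) gray-value sequence of one pixel in the left camera.
    row_right : (T, W) sequences of all W pixels in the corresponding right image row.
    Returns (best column, correlation) or (None, correlation) if the match is too weak.
    """
    a = (seq_left - seq_left.mean()) / (seq_left.std() + 1e-12)
    b = (row_right - row_right.mean(axis=0)) / (row_right.std(axis=0) + 1e-12)
    corr = (a[:, None] * b).mean(axis=0)           # normalized cross-correlation per column
    col = int(np.argmax(corr))
    return (col, corr[col]) if corr[col] >= min_corr else (None, corr[col])

# Toy example: 8 pattern frames, 200 columns, the true correspondence is column 57.
rng = np.random.default_rng(1)
row_right = rng.random((8, 200))
seq_left = row_right[:, 57] + 0.01 * rng.standard_normal(8)
print(best_match_along_row(seq_left, row_right))
```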
The duality of ray-based and pinhole-camera modeling and 3D measurement improvements using the ray-based model. - In: Sensors, ISSN 1424-8220, Bd. 22 (2022), 19, 7540, S. 1-15
Geometrical camera modeling is the precondition for 3D-reconstruction tasks using photogrammetric sensor systems. The purpose of this study is to describe an approach for possible accuracy improvements by using the ray-based camera model. The relations between the common pinhole model and the generally valid ray-based camera model are shown. A new approach to the implementation and calibration of the ray-based camera model is introduced. Using a simple laboratory setup consisting of two cameras and a projector, experimental measurements were performed. The experiments and results showed the possibility of easily transforming the common pinhole model into a ray-based model and of performing calibration using the ray-based model. These initial results show the model's potential for considerable accuracy improvements, especially for sensor systems using wide-angle lenses or with deep 3D measurement volumes. This study presents several approaches for further improvements to and the practical usage of high-precision optical 3D measurements.
https://doi.org/10.3390/s22197540
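The transition from a pinhole model to a ray-based representation amounts to assigning every pixel a ray origin and direction; the minimal sketch below (Python/NumPy) derives these rays from an ideal, distortion-free pinhole camera and is a simplification of the calibration procedures discussed in the paper.

```python
import numpy as np

def pinhole_to_rays(K, R, t, width, height):
    """Convert a pinhole camera (K, R, t with x_cam = R X + t) to per-pixel rays.

    Returns, for every pixel, a ray origin (the projection center, identical for
    all pixels of an ideal pinhole) and a unit direction in world coordinates.
    In a general ray-based model, each pixel may instead store its own,
    individually calibrated origin and direction.
    """
    center = -R.T @ t                                     # projection center in world frame
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    dirs_cam = np.linalg.inv(K) @ pix                     # viewing directions in camera frame
    dirs_world = (R.T @ dirs_cam).T
    dirs_world /= np.linalg.norm(dirs_world, axis=1, keepdims=True)
    origins = np.tile(center, (u.size, 1))
    return origins, dirs_world

# Illustrative intrinsics and an identity pose:
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
origins, directions = pinhole_to_rays(K, np.eye(3), np.zeros(3), width=640, height=480)
print(origins.shape, directions.shape)
```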
Contactless heart rate measurement in newborn infants using a multimodal 3D camera system. - In: Frontiers in Pediatrics, ISSN 2296-2360, Bd. 10 (2022), 897961, S. 01-11
Correct name of the fifth author: Gunther Notni
Newborns and preterm infants require accurate and continuous monitoring of their vital parameters. Contact-based monitoring methods have several disadvantages; thus, contactless systems have increasingly attracted the neonatal community's attention. Camera-based photoplethysmography is an emerging method of contactless heart rate monitoring. We conducted a pilot study in 42 healthy newborn and near-term preterm infants to assess the feasibility and accuracy of a multimodal 3D camera system for measuring heart rate (HR) in beats per minute (bpm) compared to conventional pulse oximetry. At the same time, we compared the accuracy of 2D and 3D vision for HR measurement. The mean difference in HR between pulse oximetry and the 2D technique was +3.0 bpm (CI −3.7 to 9.7; p = 0.359; limits of agreement (LOA) ±36.6). In contrast, the 3D technique showed a mean difference in HR of +8.6 bpm (CI 2.0 to 14.9; p = 0.010; LOA ±44.7) compared to pulse oximetry. Both intra- and interindividual variance in patient characteristics could be excluded as a source of the results and of the measurement accuracy achieved. Additionally, we demonstrated the feasibility of this emerging method. Camera-based photoplethysmography seems to be a promising approach for HR measurement in newborns with adequate precision; however, further research is warranted.
https://doi.org/10.3389/fped.2022.897961
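The agreement statistics reported above (mean difference and limits of agreement between camera-based and pulse-oximetry heart rates) follow the usual Bland-Altman scheme, sketched here with invented example data (Python/NumPy), not the study data.

```python
import numpy as np

def bland_altman(reference, test):
    """Mean difference (bias) and 95% limits of agreement between two HR methods."""
    diff = np.asarray(test, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
    return bias, (bias - loa, bias + loa)

# Invented paired heart rates in bpm (pulse oximetry vs. camera-based estimate):
hr_spo2   = np.array([132, 128, 145, 120, 138, 150])
hr_camera = np.array([135, 126, 149, 124, 136, 155])
bias, (lower, upper) = bland_altman(hr_spo2, hr_camera)
print(f"bias = {bias:+.1f} bpm, LOA = [{lower:.1f}, {upper:.1f}] bpm")
```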
FPGA-based multi-view stereo system with flexible measurement setup. - In: Measurement: sensors, ISSN 2665-9174, Bd. 24 (2022), 100425, S. 1-9
In recent years, stereoscopic image processing algorithms have gained importance for a variety of applications. To capture larger measurement volumes, multiple stereo systems are combined into a multi-view stereo (MVS) system. To reduce the amount of data and the data rate, calculation steps close to the sensors are outsourced to Field Programmable Gate Arrays (FPGAs) as upstream computing units. These calculation steps include lens distortion correction, rectification, and stereo matching. In this paper, an FPGA-based MVS system with a flexible camera arrangement and partly overlapping fields of view is presented. The system consists of four FPGA-based passive stereoscopic systems (Xilinx Zynq-7000 7020 SoC, EV76C570 CMOS sensor) and a downstream processing unit (Zynq UltraScale+ ZU9EG SoC), which synchronizes the sensor-near processing modules and receives the disparity maps together with the corresponding left camera images via HDMI. The subsequent computing unit calculates a coherent 3D point cloud. Our FPGA-based 3D measurement system captures a large measurement volume at 24 fps by combining multiple views from eight cameras (using Semi-Global Matching for an image size of 640 px × 460 px, a disparity range of up to 256 px, and costs aggregated over 4 directions). The capabilities and limitations of the system are shown by an application example with an optically non-cooperative surface.
https://doi.org/10.1016/j.measen.2022.100425
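The final step of the pipeline above, converting a disparity map from Semi-Global Matching into a metric point cloud, can be written compactly for a rectified stereo pair (Python/NumPy); focal length, baseline, and principal point below are placeholder values rather than the parameters of the described system.

```python
import numpy as np

def disparity_to_points(disparity, f_px, baseline_m, cx, cy):
    """Reproject a rectified disparity map into 3D points (left camera frame).

    disparity : (H, W) disparities in pixels; values <= 0 are treated as invalid.
    f_px      : focal length in pixels; baseline_m : stereo baseline in meters.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    z = np.zeros_like(disparity, dtype=float)
    z[valid] = f_px * baseline_m / disparity[valid]       # depth from d = f*B/Z
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

disp = np.full((460, 640), 64.0)        # constant toy disparity map
points = disparity_to_points(disp, f_px=900.0, baseline_m=0.10, cx=320.0, cy=230.0)
print(points.shape, points[:1])
```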