Triangle-Mesh-Rasterization-Projection (TMRP): an algorithm to project a point cloud onto a consistent, dense and accurate 2D raster image. - In: Sensors, ISSN 1424-8220, Bd. 23 (2023), 16, 7030, S. 1-28
The projection of a point cloud onto a 2D camera image is relevant to various image analysis and enhancement tasks, e.g., (i) multimodal image processing for data fusion, (ii) robotic applications and scene analysis, and (iii) the generation of real datasets with ground truth for deep neural networks. The challenges of current single-shot projection methods, such as simple state-of-the-art projection, conventional, polygon-based, and deep-learning-based upsampling methods, or the closed-source SDK functions of low-cost depth cameras, have been identified. We developed a new way to project point clouds onto a dense, accurate 2D raster image, called Triangle-Mesh-Rasterization-Projection (TMRP). The only gaps that the 2D image still contains with our method are valid gaps that result from the physical limits of the capturing cameras. Dense accuracy is achieved by using the 2D neighborhood information (rx, ry) of the 3D coordinates in addition to the points P(X, Y, V). In this way, a fast triangulation-based interpolation can be performed, with interpolation weights determined from sub-triangles. Compared to single-shot methods, our algorithm solves the following challenges: (1) no false gaps or false neighborhoods are generated, (2) the density is XYZ-independent, and (3) ambiguities are eliminated. Our TMRP method is open source, freely available on GitHub, and can be applied to almost any sensor or modality. We demonstrate the usefulness of our method in four use cases, using the KITTI-2012 dataset and sensors of different modalities. Our goal is to improve recognition tasks and processing optimization in the perception of transparent objects for robotic manufacturing processes.
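The sub-triangle weighting described above corresponds to barycentric interpolation. A minimal sketch, not the published TMRP implementation (the function names and the purely 2D setting are illustrative assumptions), of interpolating a value at a pixel inside a triangle from the areas of the three sub-triangles:

```python
def tri_area(a, b, c):
    """Signed area of the 2D triangle (a, b, c) via the cross product."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (b[1] - a[1]) * (c[0] - a[0]))

def interpolate_in_triangle(p, v0, v1, v2, z0, z1, z2):
    """Interpolate the value at pixel p inside triangle (v0, v1, v2).
    Each weight is the area of the sub-triangle opposite a vertex,
    normalized by the full triangle area (barycentric coordinates)."""
    a_full = tri_area(v0, v1, v2)
    w0 = tri_area(p, v1, v2) / a_full  # sub-triangle opposite v0
    w1 = tri_area(v0, p, v2) / a_full  # sub-triangle opposite v1
    w2 = tri_area(v0, v1, p) / a_full  # sub-triangle opposite v2
    return w0 * z0 + w1 * z1 + w2 * z2
```

At a vertex, that vertex's weight is 1 and the others vanish, so the interpolation reproduces the input points exactly.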
https://doi.org/10.3390/s23167030
Productive teaming under uncertainty : when a human and a machine classify objects together. - In: 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), (2023), S. 9-14
Conference: IEEE International Conference on Advanced Robotics and Its Social Impacts, ARSO, Berlin, Germany, 05-07 June 2023
https://doi.org/10.1109/arso56563.2023.10187430
Komponenten und Methoden für die multimodale Gefahrenanalyse in öffentlichen Räumen. - In: 3D-NordOst 2022, (2023), S. 37-46
A review of different multispectral indices for monitoring plant health in mid-mountain sites. - In: Image Sensing Technologies: Materials, Devices, Systems, and Applications X, (2023), S. 125140H-1-125140H-19
The effects of climate change, such as drought and pest infestation, will pose new challenges for forest management in the coming years to ensure the preservation of biodiversity and vegetation balance. A combination of various sensor technologies enables early detection of changes and initiation of necessary mitigation steps. Here, hyperspectral cameras provide direct measurement of the health status of the plants themselves. The achievable spatial and spectral resolutions have been steadily increasing due to the use of drones instead of airplanes and satellites. Nevertheless, only canopy measurement is possible in this case. Measurement below the tree canopy can grant new insights and increase the resolution up to the level of the individual leaf. The aim of this work is to define the basic requirements for a spectral system suitable for this purpose. To this end, high-resolution spectral images of typical mid-mountain plants were acquired during desiccation. On the basis of these, various vegetation indices were calculated, and the influence of filter properties such as the half-width was simulated. During this investigation, a clear reaction to desiccation was observed in all samples after a brief period of time. Different vegetation indices show comparable behavior despite the use of different wavelengths.
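As an illustration of the index calculations mentioned above, the following sketch computes the widely used NDVI from red and near-infrared reflectance and simulates a filter of a given half-width (FWHM) by averaging hyperspectral bands. The function names and the simple box-filter model are assumptions, not the exact simulation used in the paper:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel;
    values near 1 indicate healthy vegetation, low values stress."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def band_with_fwhm(cube, wavelengths, center_nm, fwhm_nm):
    """Simulate a filter of a given full width at half maximum by
    averaging all hyperspectral bands inside the filter window.
    cube: (..., n_bands) array; wavelengths: band centers in nm."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    mask = np.abs(wavelengths - center_nm) <= fwhm_nm / 2
    return np.asarray(cube, dtype=float)[..., mask].mean(axis=-1)
```

Widening the FWHM mixes more neighboring bands into each simulated channel, which is exactly the filter-property effect whose influence on the indices was studied.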
https://doi.org/10.1117/12.2669172
Depth-of-field extension in structured-light 3D surface imaging by fast chromatic focus stacking. - In: Dimensional Optical Metrology and Inspection for Practical Applications XII, (2023), S. 1252406-1-1252406-7
The depth range that can be captured by structured-light 3D sensors is limited by the depth of field of the lenses which are used. Focus stacking is a common approach to extend the depth of field. However, focus variation drastically reduces the measurement speed of pattern projection-based sensors, hindering their use in high-speed applications such as in-line process control. Moreover, the lenses’ complexity is increased by electromechanical components, e.g., when applying electronically tunable lenses. In this contribution, we introduce chromatic focus stacking, an approach that allows for a very fast focus change by designing the axial chromatic aberration of an objective lens in a manner that the depth-of-field regions of selected wavelengths adjoin each other. In order to experimentally evaluate our concept, we determine the distance-dependent 3D modulation transfer function at a tilted edge and present the 3D measurement of a printed circuit board with comparatively high structures.
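The core design condition, that the depth-of-field regions of the selected wavelengths adjoin each other, can be sketched as a simple check. The function and the (near, far) interval representation are illustrative assumptions, not the lens-design procedure itself:

```python
def extended_depth_of_field(regions):
    """Given per-wavelength depth-of-field regions as (near, far)
    tuples in mm, verify that consecutive regions adjoin or overlap
    and return the combined limits of the stacked depth of field."""
    regions = sorted(regions)
    for (_, far0), (near1, _) in zip(regions, regions[1:]):
        if near1 > far0:  # gap between consecutive DOF regions
            raise ValueError(f"gap between {far0} and {near1} mm")
    return regions[0][0], regions[-1][1]
```

If the chromatic focal shift is tuned so that each wavelength's far limit meets the next wavelength's near limit, the stacked depth of field becomes one continuous range.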
https://doi.org/10.1117/12.2661183
Detecting residuals at plastic samples to optimize laser cutting processes. - In: Pattern Recognition and Tracking XXXIV, (2023), S. 125270G-1-125270G-11
When using a molding machine to produce plastic samples, unwanted residuals can occur. In this study, two image processing methods for the detection of residuals on plastic samples are evaluated. The aim of both methods is to reliably detect the position of the residuals on the plastic sample and to transform the image-based information into laser machine coordinates. Using the transferred coordinates, the laser machine can accurately remove the detected residuals by laser cutting without damaging the sample. The measurement setup is identical for both methods; the difference lies in the processing of the captured raw image. The first method compares the raw image with an image masking template to determine the residual. The second method processes the raw image directly by comparing the light intensity transmitted through the sample to distinguish the residual from the main sample. Once the residuals have been detected, binary shifting is performed to locate the cut lines for the residuals. The lines obtained from the image in pixel scale must then be accurately converted to millimeter scale so that the laser machine can use them. Comparing the two methods, the template-based method yields more accurate and detailed results, leaving no small residuals on the sample, whereas the transmitted-light intensity method left undetected residuals that did not produce the desired straight line. However, the template-matching method has some drawbacks, such as requiring each measurement to be taken in the same position; thus, a more detailed design process is needed to stabilize the measurement procedure. In this study, both the hardware and the software were designed, including a GUI that allows several important measurement parameters to be set. From the results of this study, we obtained a system that can cut the residuals off the sample without damaging it.
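A minimal sketch of two ingredients described above, template-based residual masking and the pixel-to-millimeter conversion of the detected cut lines. The function names, the gray-value threshold, and the pre-calibrated scale are assumptions, not the study's actual implementation:

```python
import numpy as np

def residual_mask(raw, template, thresh=30):
    """Flag residual pixels where the raw image deviates from the
    defect-free template by more than a gray-value threshold."""
    diff = np.abs(raw.astype(int) - template.astype(int))
    return diff > thresh

def pixels_to_mm(points_px, mm_per_px, origin_px=(0, 0)):
    """Convert pixel coordinates of detected cut lines into laser-machine
    millimeter coordinates (assumes a calibrated scale and origin)."""
    pts = np.asarray(points_px, dtype=float)
    return (pts - np.asarray(origin_px, dtype=float)) * mm_per_px
```

The template approach requires the raw image and the template to be aligned pixel for pixel, which is why the study stresses a stable, repeatable measurement position.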
https://doi.org/10.1117/12.2663457
High-speed 3D shape measurement of transparent objects by sequential thermal fringe projection and image acquisition in the long-wave infrared. - In: Thermosense: Thermal Infrared Applications XLV, (2023), S. 125360P-1-125360P-11
Recently, we have successfully tackled the challenge of measuring the 3D shape of uncooperative materials, i.e., materials with optical properties such as being glossy, transparent, absorbent, or translucent. By projecting sequential thermal fringes in the long-wave infrared (LWIR) combined with a stereo camera setup in the mid-wave infrared (MWIR), we were able to three-dimensionally record object shapes within one second. However, in many applications, e.g., for 100% quality assurance, even shorter measurement times are required. To achieve camera frame rates above 125 fps at room temperature, Planck's law of thermal radiation dictates a change of the camera spectral range from MWIR to LWIR. If irradiation and image acquisition are to run in parallel, the camera chips must therefore be protected against the radiation projected by the CO2 laser at a wavelength of 10.6 µm. Appropriate filters have become available only recently. In this contribution, we present our high-speed LWIR 3D sensor. The work includes a characterization of our setup regarding its measurement accuracy and speed. The results are compared to the performance of previous thermal 3D sensors. We show 3D measurement results of static objects as well as of a dynamic measurement of a transparent object. Furthermore, we demonstrate that our setup extends the measurability of material classes towards objects with high thermal conductivities.
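The spectral argument for moving from MWIR to LWIR can be illustrated with Planck's law: at room temperature (about 300 K), the black-body spectral radiance at 10 µm exceeds that at 4 µm by roughly an order of magnitude, so an LWIR camera collects far more signal per frame. A small sketch of the standard formula (textbook physics, not code from the paper):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Black-body spectral radiance B(lambda, T) in W / (sr m^3)."""
    x = H * C / (wavelength_m * K * temperature_k)
    return 2 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1)
```

Evaluating this at 300 K for 4 µm and 10 µm shows the LWIR advantage that makes frame rates above 125 fps feasible at room temperature.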
https://doi.org/10.1117/12.2663331
Underwater 3D scanning system for cultural heritage documentation. - In: Remote sensing, ISSN 2072-4292, Bd. 15 (2023), 7, 1864, S. 1-14
Three-dimensional capturing of underwater archeological sites or sunken shipwrecks can support important documentation purposes. In this study, a novel 3D scanning system based on structured illumination is introduced, which supports cultural heritage documentation and measurement tasks in underwater environments. The newly developed system consists of two monochrome measurement cameras, a projection unit that produces aperiodic sinusoidal fringe patterns, two flashlights, a color camera, an inertial measurement unit (IMU), and an electronic control box. The opportunities and limitations of the measurement principles of the 3D scanning system are discussed and compared to other 3D recording methods such as laser scanning, ultrasound, and photogrammetry, in the context of underwater applications. Some possible operational scenarios concerning cultural heritage documentation are introduced and discussed. A report on application activities in water basins and offshore environments including measurement examples and results of the accuracy measurements is given. The study shows that the new 3D scanning system can be used for both the topographic documentation of underwater sites and to generate detailed true-scale 3D models including the texture and color information of objects that must remain under water.
https://doi.org/10.3390/rs15071864
Interactive robot teaching based on finger trajectory using multimodal RGB-D-T-data. - In: Frontiers in robotics and AI, ISSN 2296-9144, Bd. 10 (2023), 1120357, S. 1-13
The concept of Industry 4.0 brings a change in industrial manufacturing patterns, which become more efficient and more flexible. In response to this tendency, efficient robot teaching approaches without complex programming have become a popular research direction. We therefore propose an interactive finger-touch-based robot teaching scheme using multimodal image processing on color (RGB), thermal (T), and 3D point cloud data. The heat trace left where the finger touches the object surface is analyzed in the multimodal data in order to precisely identify the true hand/object contact points, which are then used to calculate the robot path directly. To optimize the identification of the contact points, we propose a calculation scheme using a number of anchor points, which are first predicted by hand/object point cloud segmentation. Subsequently, a probability density function is defined to calculate the prior probability distribution of the true finger trace. The temperature in the neighborhood of each anchor point is then dynamically analyzed to calculate the likelihood. Experiments show that the trajectories estimated by our multimodal method have significantly better accuracy and smoothness than those obtained by analyzing only the point cloud and a static temperature distribution.
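The prior-times-likelihood fusion described above can be sketched as a per-anchor score. All parameter values (trace width, expected temperature rise) and function names here are illustrative assumptions, not the paper's calibrated model:

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian kernel."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def contact_score(dist_to_trace_mm, neighborhood_temps_c,
                  ambient_c=22.0, trace_sigma_mm=5.0,
                  temp_rise_c=4.0, temp_sigma_c=1.5):
    """Posterior-style score for an anchor point being a true finger
    contact: a spatial prior (distance to the predicted trace) times
    a temperature likelihood (local heating above ambient)."""
    prior = gaussian(dist_to_trace_mm, 0.0, trace_sigma_mm)
    mean_temp = sum(neighborhood_temps_c) / len(neighborhood_temps_c)
    likelihood = gaussian(mean_temp, ambient_c + temp_rise_c, temp_sigma_c)
    return prior * likelihood
```

An anchor close to the predicted trace with a warm thermal neighborhood scores high; a distant or cold anchor is suppressed, which is how false contact points are filtered out.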
https://doi.org/10.3389/frobt.2023.1120357
Miniaturisiertes, ortsaufgelöstes, multispektrales, echtzeitfähiges Bildverarbeitungssystem für industrielle und biomedizinische Anwendungen (MINIMIZE); Teilvorhaben: MINIMIZE-Processing : Schlussbericht zum Teilvorhaben MINIMIZE-Processing : Laufzeit des Vorhabens: 01.06.2018-31.12.2021. - Ilmenau : Technische Universität Ilmenau, Fachgebiet Qualitätssicherung und Industrielle Bildverarbeitung. - 1 Online-Ressource (20 Seiten, 6,49 MB). - Förderkennzeichen BMBF 13N14835
https://doi.org/10.2314/KXP:1852350644