Publication List - University Bibliography

Number of hits: 37
Created: Mon, 06 May 2024 23:18:36 +0200 in 0.1762 sec


Aganian, Dustin; Köhler, Mona; Baake, Sebastian; Eisenbach, Markus; Groß, Horst-Michael
How object information improves skeleton-based human action recognition in assembly tasks. - In: IJCNN 2023 conference proceedings, (2023), 9 pp. in total

As the use of collaborative robots (cobots) in industrial manufacturing continues to grow, human action recognition for effective human-robot collaboration becomes increasingly important. This ability is crucial for cobots to act autonomously and assist in assembly tasks. Skeleton-based approaches are often used, as they tend to generalize better to different people and environments. However, when processing skeletons alone, information about the objects a human interacts with is lost. Therefore, we present a novel approach of integrating object information into skeleton-based action recognition. We enhance two state-of-the-art methods by treating object centers as further skeleton joints. Our experiments on the assembly dataset IKEA ASM show that our approach substantially improves the performance of these state-of-the-art methods when combining skeleton joints with objects predicted by a state-of-the-art instance segmentation model. Our research sheds light on the benefits of combining skeleton joints with object information for human action recognition in assembly tasks. We analyze the effect of the object detector on the combination for action classification and discuss the important factors that must be taken into account.
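
A minimal sketch of the core idea described above (object centers appended to the skeleton as extra joints); the array shapes and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_objects_as_joints(skeletons: np.ndarray, objects: np.ndarray) -> np.ndarray:
    """skeletons: (T, J, C) joint coordinates per frame;
    objects: (T, K, C) centers of K detected objects per frame.
    Returns (T, J + K, C) so any skeleton-based model can consume it."""
    assert skeletons.shape[0] == objects.shape[0], "same number of frames"
    assert skeletons.shape[2] == objects.shape[2], "same coordinate dimension"
    return np.concatenate([skeletons, objects], axis=1)  # objects become joints

# Example: 17 body joints plus 3 object centers in 2D over 50 frames.
fused = fuse_objects_as_joints(np.zeros((50, 17, 2)), np.zeros((50, 3, 2)))
print(fused.shape)  # (50, 20, 2)
```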



https://doi.org/10.1109/IJCNN54540.2023.10191686
Aganian, Dustin; Köhler, Mona; Stephan, Benedict; Eisenbach, Markus; Groß, Horst-Michael
Fusing hand and body skeletons for human action recognition in assembly. - In: Artificial Neural Networks and Machine Learning - ICANN 2023, (2023), pp. 207-219

As collaborative robots (cobots) continue to gain popularity in industrial manufacturing, effective human-robot collaboration becomes crucial. Cobots should be able to recognize human actions to assist with assembly tasks and act autonomously. To achieve this, skeleton-based approaches are often used due to their ability to generalize across various people and environments. Although body skeleton approaches are widely used for action recognition, they may not be accurate enough for assembly actions where the worker’s fingers and hands play a significant role. To address this limitation, we propose a method in which less detailed body skeletons are combined with highly detailed hand skeletons. We investigate CNNs and transformers, the latter of which are particularly adept at extracting and combining important information from both skeleton types using attention. This paper demonstrates the effectiveness of our proposed approach in enhancing action recognition in assembly scenarios.
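
A minimal sketch of attention-based fusion of the two skeleton types, in the spirit of the transformer variant described above; module layout, dimensions, and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SkeletonFusionEncoder(nn.Module):
    """Embeds body and hand joints as tokens and lets self-attention combine them."""
    def __init__(self, coord_dim=3, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(coord_dim, d_model)  # per-joint token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, body, hands):
        # body: (B, J_body, 3), hands: (B, J_hand, 3) -> one joint token sequence
        tokens = self.embed(torch.cat([body, hands], dim=1))
        return self.encoder(tokens).mean(dim=1)  # pooled feature for classification

feat = SkeletonFusionEncoder()(torch.randn(2, 25, 3), torch.randn(2, 42, 3))
print(feat.shape)  # torch.Size([2, 64])
```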



https://doi.org/10.1007/978-3-031-44207-0_18
Aganian, Dustin; Stephan, Benedict; Eisenbach, Markus; Stretz, Corinna; Groß, Horst-Michael
ATTACH dataset: annotated two-handed assembly actions for human action understanding. - In: ICRA 2023, (2023), pp. 11367-11373

With the emergence of collaborative robots (cobots), human-robot collaboration in industrial manufacturing is coming into focus. For a cobot to act autonomously and as an assistant, it must understand human actions during assembly. To effectively train models for this task, a dataset containing suitable assembly actions in a realistic setting is crucial. For this purpose, we present the ATTACH dataset, which contains 51.6 hours of assembly with 95.2k annotated fine-grained actions monitored by three cameras, which represent potential viewpoints of a cobot. Since in an assembly context workers tend to perform different actions simultaneously with their two hands, we annotated the performed actions for each hand separately. Therefore, in the ATTACH dataset, more than 68% of annotations overlap with other annotations, which is many times more than in related datasets, which typically feature more simplistic assembly tasks. For better generalization with respect to the background of the working area, we not only recorded color and depth images but also used the Azure Kinect body tracking SDK for estimating 3D skeletons of the worker. To create a first baseline, we report the performance of state-of-the-art methods for action recognition as well as action detection on video and skeleton-sequence inputs. The dataset is available at https://www.tu-ilmenau.de/neurob/data-sets-code/attach-dataset.
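
A minimal sketch of how an overlap statistic like the 68% reported above could be computed from temporal annotations; the interval format is an assumption for illustration:

```python
def overlap_share(intervals):
    """intervals: list of (start, end) action annotations, both hands pooled.
    Returns the fraction of annotations that overlap at least one other."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]  # temporal intervals intersect
    n = len(intervals)
    hit = sum(any(overlaps(intervals[i], intervals[j]) for j in range(n) if j != i)
              for i in range(n))
    return hit / n if n else 0.0

print(overlap_share([(0, 5), (3, 8), (10, 12)]))  # 2 of 3 overlap -> 0.67
```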



https://doi.org/10.1109/ICRA48891.2023.10160633
Eisenbach, Markus; Lübberstedt, Jannik; Aganian, Dustin; Groß, Horst-Michael
A little bit attention is all you need for person re-identification. - In: ICRA 2023, (2023), pp. 7598-7605

Person re-identification plays a key role in applications where a mobile robot needs to track its users over a long period of time, even if they are partially unobserved for some time, in order to follow them or be available on demand. In this context, deep-learning-based real-time feature extraction on a mobile robot is often performed on special-purpose devices whose computational resources are shared for multiple tasks. Therefore, the inference speed has to be taken into account. In contrast, person re-identification is often improved by architectural changes that come at the cost of significantly slowing down inference. Attention blocks are one such example. We will show that some well-performing attention blocks used in the state of the art are subject to inference costs that are far too high to justify their use for mobile robotic applications. As a consequence, we propose an attention block that only slightly affects the inference speed while keeping up with much deeper networks or more complex attention blocks in terms of re-identification accuracy. We perform extensive neural architecture search to derive rules at which locations this attention block should be integrated into the architecture in order to achieve the best trade-off between speed and accuracy. Finally, we confirm that the best performing configuration on a re-identification benchmark also performs well on an indoor robotic dataset.
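
For illustration, a squeeze-and-excitation-style channel attention block, a generic example of a lightweight attention module of the kind discussed above; the paper's actual block differs, so this is only a sketch of the "cheap attention" idea:

```python
import torch
import torch.nn as nn

class CheapChannelAttention(nn.Module):
    """Reweights feature channels from global context at small extra cost."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # expand back
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x):            # x: (B, C, H, W)
        return x * self.gate(x)      # shape is preserved, channels reweighted

y = CheapChannelAttention(64)(torch.randn(1, 64, 32, 16))
print(y.shape)  # torch.Size([1, 64, 32, 16])
```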



https://doi.org/10.1109/ICRA48891.2023.10160304
Köhler, Mona; Eisenbach, Markus; Groß, Horst-Michael
Few-shot object detection: a comprehensive survey. - In: IEEE transactions on neural networks and learning systems, ISSN 2162-2388, Vol. 0 (2023), 0, pp. 1-21

Humans are able to learn to recognize new objects even from a few examples. In contrast, training deep-learning-based object detectors requires huge amounts of annotated data. To avoid the need to acquire and annotate these huge amounts of data, few-shot object detection (FSOD) aims to learn from a few object instances of new categories in the target domain. In this survey, we provide an overview of the state of the art in FSOD. We categorize approaches according to their training scheme and architectural layout. For each type of approach, we describe the general realization as well as concepts to improve the performance on novel categories. Whenever appropriate, we give short takeaways regarding these concepts in order to highlight the best ideas. Finally, we introduce commonly used datasets and their evaluation protocols and analyze the reported benchmark results. As a result, we emphasize common challenges in evaluation and identify the most promising current trends in this emerging field of FSOD.



https://doi.org/10.1109/TNNLS.2023.3265051
Stephan, Benedict; Aganian, Dustin; Hinneburg, Lars; Eisenbach, Markus; Müller, Steffen; Groß, Horst-Michael
On the importance of label encoding and uncertainty estimation for robotic grasp detection. - In: IROS 2022 Kyōto - IEEE/RSJ International Conference on Intelligent Robots and Systems, (2022), pp. 4781-4788

Automated grasping of arbitrary objects is an essential skill for many applications such as smart manufacturing and human-robot interaction. This makes grasp detection a vital skill for automated robotic systems. Recent work in model-free grasp detection uses point cloud data as input and typically outperforms earlier RGB(D)-based methods. We show that RGB(D)-based methods are being underestimated due to suboptimal label encodings used for training. Using the evaluation pipeline of the GraspNet-1Billion dataset, we investigate different encodings and propose a novel encoding that significantly improves grasp detection on depth images. Additionally, we show shortcomings of the 2D rectangle grasps supplied by the GraspNet-1Billion dataset and propose a filtering scheme by which the ground truth labels can be improved significantly. Furthermore, we apply established methods for uncertainty estimation on our trained models, since knowing when we can trust the model's decisions provides an advantage for real-world application. By doing so, we are the first to directly estimate uncertainties of detected grasps. We also investigate the applicability of the estimated aleatoric and epistemic uncertainties based on their theoretical properties. Additionally, we demonstrate the correlation between estimated uncertainties and grasp quality, thus improving the selection of high-quality grasp detections. With all these modifications, our approach using only depth images can compete with point-cloud-based approaches for grasp detection despite the lower degree of freedom for grasp poses in 2D image space.
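
A minimal sketch of Monte Carlo dropout, one established uncertainty-estimation technique of the kind the abstract refers to; the stand-in network and names are assumptions, not the authors' grasp detector:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Samples stochastic forward passes at test time; the mean serves as the
    prediction and the variance as a proxy for epistemic uncertainty."""
    model.eval()
    for m in model.modules():          # re-enable dropout layers only, so that
        if isinstance(m, nn.Dropout):  # e.g. batch-norm statistics stay frozen
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

# Toy usage with a stand-in network; a real grasp detector would go here.
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 5))
mean, var = mc_dropout_predict(net, torch.randn(4, 16))
```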



https://doi.org/10.1109/IROS47612.2022.9981866
Eisenbach, Markus; Aganian, Dustin; Köhler, Mona; Stephan, Benedict; Schröter, Christof; Groß, Horst-Michael
Visual scene understanding for enabling situation-aware cobots. - Ilmenau : Universitätsbibliothek. - 1 online resource (2 pages). Publication originated in the context of the event: IEEE International Conference on Automation Science and Engineering ; 17 (Lyon, France) : 2021.08.23-27, TuBT7 Special Session: Robotic Control and Robotization of Tasks within Industry 4.0

Although in the course of Industry 4.0, a high degree of automation is the objective, not every process can be fully automated - especially in versatile manufacturing. In these applications, collaborative robots (cobots) as helpers are a promising direction. We analyze the collaborative assembly scenario and conclude that visual scene understanding is a prerequisite to enable autonomous decisions by cobots. We identify the open challenges in these visual recognition tasks and propose promising new ideas on how to overcome them.



https://doi.org/10.22032/dbt.51471
Balada, Christoph; Eisenbach, Markus; Groß, Horst-Michael
Evaluation of transfer learning for visual road condition assessment. - In: Artificial neural networks and machine learning - ICANN 2021, (2021), pp. 540-551

Through deep learning, major advances have been made in the field of visual road condition assessment in recent years. However, many approaches train from scratch and avoid transfer learning, due to the different nature of road surface data and the ImageNet dataset, which is commonly used for pre-training neural networks for visual recognition. We show that, despite the huge differences in the data, transfer learning outperforms training from scratch in terms of generalization. In extensive experiments, we explore the underlying cause by examining various transfer learning effects. Our experiments incorporate seven well-known architectures, making this the first comprehensive study of transfer learning in the field of visual road condition assessment.
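
A minimal sketch of the two compared setups (ImageNet-pretrained initialization vs. training from scratch), assuming torchvision as tooling; num_classes is a placeholder for the road-condition label set:

```python
import torch.nn as nn
from torchvision import models

def build_model(pretrained: bool, num_classes: int) -> nn.Module:
    weights = models.ResNet50_Weights.IMAGENET1K_V2 if pretrained else None
    model = models.resnet50(weights=weights)  # transfer vs. random init
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    return model

scratch = build_model(pretrained=False, num_classes=6)   # baseline
transfer = build_model(pretrained=True, num_classes=6)   # ImageNet transfer
```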



Aganian, Dustin; Eisenbach, Markus; Wagner, Joachim; Seichter, Daniel; Groß, Horst-Michael
Revisiting loss functions for person re-identification. - In: Artificial neural networks and machine learning - ICANN 2021, (2021), pp. 30-42

Appearance-based person re-identification is very challenging, among other things due to changing illumination, image distortion, and differences in viewpoint. Therefore, it is crucial to learn an expressive feature embedding that compensates for changing environmental conditions. There are many loss functions available to achieve this goal. However, it is hard to judge which one is the best. In related work, experiments are performed on the same datasets, but the use of different setups and different training techniques compromises comparability. Therefore, we compare the most widely used and most promising loss functions under identical conditions on three different setups. We provide insights into why some of the loss functions work better than others and what additional benefits they provide. We further propose sequential training as an additional training trick that improves the performance of most loss functions. In our conclusion, we provide guidance for future usage and research regarding loss functions for appearance-based person re-identification. Source code is available (Source code: https://www.tu-ilmenau.de/neurob/data-sets-code/re-id-loss/).
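
As an illustration, a minimal sketch of the batch-hard triplet loss, one of the widely used re-ID loss functions such a comparison typically covers; the margin value and names are illustrative choices:

```python
import torch

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """features: (N, D) embeddings; labels: (N,) person identities."""
    dist = torch.cdist(features, features)             # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-identity mask
    hardest_pos = (dist * same.float()).max(dim=1).values         # farthest positive
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values  # closest negative
    return torch.relu(hardest_pos - hardest_neg + margin).mean()

loss = batch_hard_triplet_loss(torch.randn(8, 128),
                               torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))
```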



Eisenbach, Markus
Personenwiedererkennung mittels maschineller Lernverfahren. - In: Ausgezeichnete Informatikdissertationen, Vol. 2019 (2021), pp. 59-68

Eisenbach, Markus
Personenwiedererkennung mittels maschineller Lernverfahren für öffentliche Einsatzumgebungen. - Ilmenau : Universitätsverlag Ilmenau, 2020. - 1 online resource (xix, 523 pages)
Technische Universität Ilmenau, Dissertation 2019

Appearance-based person re-identification in public environments is one of the most difficult, still unsolved problems in computer vision. Many of its subproblems can only be solved by combining machine learning with image processing methods. In this thesis, machine learning methods are applied to improve every processing step of an appearance-based person re-identification system: Convolutional neural networks are used to learn appearance-based features that enable re-identification at a human performance level. For generating the template that describes the target person, machine learning is used to automatically select person-specific, discriminative features. A learned metric compensates for scenario-specific environmental influences when comparing feature vectors. Fusing complementary features at score level considerably increases the re-identification performance, which is achieved above all by a learned weighting of the features. The developed approach is evaluated by example of two application scenarios: video surveillance and robotics. In video surveillance, person re-identification enables cross-camera tracking, which helps human operators to determine the whereabouts of a person of interest in a short time. A mobile service robot can identify its current user by means of appearance-based re-identification, which helps the robot to fulfill tasks in which it has to guide or follow the user. In this thesis, the quality of appearance-based person re-identification is characterized by twelve criteria that enable a comparison with biometric methods. Through the use of machine learning, appearance-based person re-identification achieves a recognition performance in the considered unattended, public fields of application that can compete with biometric methods.



https://doi.org/10.22032/dbt.45621
Müller, Steffen; Wengefeld, Tim; Trinh, Thanh Quang; Aganian, Dustin; Eisenbach, Markus; Groß, Horst-Michael
A multi-modal person perception framework for socially interactive mobile service robots. - In: Sensors, ISSN 1424-8220, Vol. 20 (2020), 3, 722, 18 pp. in total

https://doi.org/10.3390/s20030722
Eisenbach, Markus; Stricker, Ronny; Sesselmann, Maximilian; Seichter, Daniel; Groß, Horst-Michael
Enhancing the quality of visual road condition assessment by deep learning. - In: XXVI World Road Congress - Abu Dhabi 2019, (2019), pp. 1-13

Sesselmann, Maximilian; Stricker, Ronny; Eisenbach, Markus
Deep learning for automatic detection and classification of road damage from mobile LiDAR data : Einsatz von Deep Learning zur automatischen Detektion und Klassifikation von Fahrbahnschäden aus mobilen LiDAR-Daten. - In: AGIT, ISSN 2509-713X, Vol. 5 (2019), pp. 100-114

https://doi.org/10.14627/537669009
Stricker, Ronny; Eisenbach, Markus; Sesselmann, Maximilian; Debes, Klaus; Groß, Horst-Michael
Improving visual road condition assessment by extensive experiments on the extended GAPs dataset. - In: 2019 International Joint Conference on Neural Networks (IJCNN), (2019), pp. 1-8

https://doi.org/10.1109/IJCNN.2019.8852257
Schnürer, Thomas; Fuchs, Stefan; Eisenbach, Markus; Groß, Horst-Michael
Real-time 3D pose estimation from single depth images. - In: VISIGRAPP 2019, (2019), pp. 716-724

Seichter, Daniel; Eisenbach, Markus; Stricker, Ronny; Groß, Horst-Michael
How to improve deep learning based pavement distress detection while minimizing human effort. - In: 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), (2018), pp. 63-70

https://doi.org/10.1109/COASE.2018.8560372
Eisenbach, Markus; Stricker, Ronny; Debes, Klaus; Groß, Horst-Michael
Crack detection with an interactive and adaptive video inspection system. - In: Arbeitsgruppentagung Infrastrukturmanagement, ISBN 978-3-86446-176-7, (2017), 10 pp. in total

Groß, Horst-Michael; Meyer, Sibylle; Scheidig, Andrea; Eisenbach, Markus; Müller, Steffen; Trinh, Thanh Quang; Wengefeld, Tim; Bley, Andreas; Martin, Christian; Fricke, Christa
Mobile robot companion for walking training of stroke patients in clinical post-stroke rehabilitation. - In: IEEE International Conference on Robotics and Automation (ICRA), ISBN 978-1-5090-4633-1, (2017), pp. 1028-1035

https://doi.org/10.1109/ICRA.2017.7989124
Eisenbach, Markus; Stricker, Ronny; Seichter, Daniel; Amende, Karl; Debes, Klaus; Sesselmann, Maximilian; Ebersbach, Dirk; Stöckert, Ulrike; Groß, Horst-Michael
How to get pavement distress detection ready for deep learning? : a systematic approach. - In: IJCNN 2017, ISBN 978-1-5090-6182-2, (2017), pp. 2039-2047

https://doi.org/10.1109/IJCNN.2017.7966101
Groß, Horst-Michael; Scheidig, Andrea; Debes, Klaus; Einhorn, Erik; Eisenbach, Markus; Müller, Steffen; Schmiedel, Thomas; Trinh, Thanh Quang; Weinrich, Christoph; Wengefeld, Tim; Bley, Andreas; Martin, Christian
ROREAS: robot coach for walking and orientation training in clinical post-stroke rehabilitation - prototype implementation and evaluation in field trials. - In: Autonomous robots, ISSN 1573-7527, Vol. 41 (2017), 3, pp. 679-698

http://dx.doi.org/10.1007/s10514-016-9552-6
Eisenbach, Markus; Seichter, Daniel; Groß, Horst-Michael
Are color features important for person detection? - insights into features learned by deep convolutional neural networks. - In: 22. Workshop Farbbildverarbeitung, (2016), pp. 169-182

Groß, Horst-Michael; Eisenbach, Markus; Scheidig, Andrea; Trinh, Thanh Quang; Wengefeld, Tim
Contribution towards evaluating the practicability of socially assistive robots - by example of a mobile walking coach robot. - In: Social Robotics, (2016), pp. 890-899

http://dx.doi.org/10.1007/978-3-319-47437-3_87
Eisenbach, Markus; Seichter, Daniel; Wengefeld, Tim; Groß, Horst-Michael
Cooperative multi-scale Convolutional Neural Networks for person detection. - In: 2016 International Joint Conference on Neural Networks (IJCNN), ISBN 978-1-5090-0620-5, (2016), pp. 267-276

http://dx.doi.org/10.1109/IJCNN.2016.7727208
Groß, Horst-Michael; Scheidig, Andrea; Eisenbach, Markus; Trinh, Thanh Quang; Wengefeld, Tim
Assistive robotics for health assistance - a contribution towards evaluating the practicability by example of a mobile rehab robot : Assistenzrobotik für die Gesundheitsassistenz - ein Beitrag zur Evaluierung der Praxistauglichkeit am Beispiel eines mobilen Reha-Roboters. - In: Zukunft Lebensräume, ISBN 978-3-8007-4212-7, (2016), pp. 58-67

Wengefeld, Tim; Eisenbach, Markus; Trinh, Thanh Q.; Groß, Horst-Michael
May I be your personal coach? : bringing together person tracking and visual re-identification on a mobile robot. - In: Robotics in the era of digitalisation, (2016), pp. 141-148

Eisenbach, Markus; Vorndran, Alexander; Sorge, Sven; Groß, Horst-Michael
User recognition for guiding and following people with a mobile robot in a clinical environment. - In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2015), pp. 3600-3607

Rehabilitative follow-up care is important for stroke patients to regain their motor and cognitive skills. We aim to develop a robotic rehabilitation assistant for walking exercises in late stages of rehabilitation. The robotic rehab assistant is to accompany inpatients during their self-training, practicing both mobility and spatial orientation skills. To keep contact with the patient, even after temporary full occlusions, robust user re-identification is essential. Therefore, we implemented a person re-identification module that continuously re-identifies the patient while using only a small amount of the robot's processing resources. It is robust to varying illumination and occlusions. State-of-the-art performance is confirmed on a standard benchmark dataset, as well as on a recorded scenario-specific dataset. Additionally, the benefit of using a visual re-identification component is verified by live tests with the robot in a stroke rehab clinic.



http://dx.doi.org/10.1109/IROS.2015.7353880
Eisenbach, Markus; Kolarow, Alexander; Vorndran, Alexander; Niebling, Julia; Groß, Horst-Michael
Evaluation of multi feature fusion at score-level for appearance-based person re-identification. - In: International Joint Conference on Neural Networks (IJCNN), 2015, ISBN 978-1-4799-1961-1, (2015), 8 pp. in total

Robust appearance-based person re-identification can only be achieved by combining multiple diverse features describing the subject. Since individual features perform differently, combining them is not trivial. Often this problem is bypassed by concatenating all feature vectors and learning a distance metric for the combined feature vector. However, to perform well, metric learning approaches need many training samples, which are not available in most real-world applications. In contrast, in our approach we perform score-level fusion to combine the matching scores of different features. To evaluate which score-level fusion techniques perform best for appearance-based person re-identification, we examine several score normalization and feature weighting approaches employing the widely used and very challenging VIPeR dataset. Experiments show that in fusing a large ensemble of features, the proposed score-level fusion approach outperforms linear metric learning approaches which fuse at feature level. Furthermore, a combination of linear metric learning and score-level fusion even outperforms the currently best non-linear kernel-based metric learning approaches, regarding both accuracy and computation time.
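
A minimal sketch of score-level fusion with z-score normalization and per-feature weighting, one of the scheme families evaluated above; the weights here are illustrative and would in practice be learned or tuned on validation data:

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """score_lists: list of (N,) match-score arrays, one per feature;
    weights: per-feature importance weights. Returns fused (N,) scores."""
    fused = np.zeros_like(score_lists[0], dtype=float)
    for scores, w in zip(score_lists, weights):
        z = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize scale
        fused += w * z
    return fused  # higher = better match

fused = fuse_scores([np.random.rand(10), np.random.rand(10)], [0.7, 0.3])
print(fused.argmax())  # index of the best-matching gallery sample
```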



http://dx.doi.org/10.1109/IJCNN.2015.7280360
Scheidig, Andrea; Einhorn, Erik; Weinrich, Christoph; Eisenbach, Markus; Müller, Steffen; Schmiedel, Thomas; Wengefeld, Tim; Trinh, Thanh; Groß, Horst-Michael; Bley, Andreas; Scheidig, Rüdiger; Pfeiffer, Gustav; Meyer, Sibylle; Oelkers, Silke
Robotischer Reha-Assistent zum Lauftraining von Patienten nach Schlaganfall: erste Ergebnisse zum Laufcoach. - In: 8. AAL-Kongress, (2015), pp. 436-445

This paper presents first results on the use of a robotic rehabilitation assistant as a walking coach in a clinic. First, the application scenario referred to as the walking coach is described, and the robot behaviors fundamental to realizing this scenario, as well as the employed real-world-capable perception, navigation, and interaction skills, are introduced together with the robot platform. The approach to an initial four-day functional test in the clinic, with 15,000 m of driven distance, is outlined, and first results on the robot behaviors of autonomous and polite approaching of targets, as well as guiding and following a person, are discussed.



Kolarow, Alexander; Schenk, Konrad; Eisenbach, Markus; Dose, Michael; Brauckmann, Michael; Debes, Klaus; Groß, Horst-Michael
APFel: the intelligent video analysis and surveillance system for assisting human operators. - In: 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), (2013), pp. 195-201

http://dx.doi.org/10.1109/AVSS.2013.6636639
Eisenbach, Markus; Scheiner, Petra; Kolarow, Alexander; Schenk, Konrad; Groß, Horst-Michael; Weinreich, Ilona
Learning illumination maps for color constancy in person reidentification. - In: 19. Workshop Farbbildverarbeitung, (2013), pp. 103-114

Kolarow, Alexander; Brauckmann, Michael; Eisenbach, Markus; Schenk, Konrad; Einhorn, Erik; Debes, Klaus; Groß, Horst-Michael
Vision-based hyper-real-time object tracker for robotic applications. - In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, (2012), pp. 2108-2115

http://dx.doi.org/10.1109/IROS.2012.6385843
Schenk, Konrad; Kolarow, Alexander; Eisenbach, Markus; Debes, Klaus; Groß, Horst-Michael
Automatic calibration of a stationary network of laser range finders by matching movement trajectories. - In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, (2012), pp. 431-437

http://dx.doi.org/10.1109/IROS.2012.6385620
Schenk, Konrad; Kolarow, Alexander; Eisenbach, Markus; Debes, Klaus; Groß, Horst-Michael
Automatic calibration of multiple stationary laser range finders using trajectories. - In: IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance (AVSS), 2012, (2012), pp. 306-312

http://dx.doi.org/10.1109/AVSS.2012.14
Eisenbach, Markus; Kolarow, Alexander; Schenk, Konrad; Debes, Klaus; Groß, Horst-Michael
View invariant appearance-based person reidentification using fast online feature selection and score level fusion. - In: IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance (AVSS), 2012, (2012), pp. 184-190

http://dx.doi.org/10.1109/AVSS.2012.81
Schenk, Konrad; Eisenbach, Markus; Kolarow, Alexander; Groß, Horst-Michael
Comparison of laser-based person tracking at feet and upper-body height. - In: KI 2011: advances in artificial intelligence, (2011), pp. 277-288

http://dx.doi.org/10.1007/978-3-642-24455-1_27