The NICR Depth-Multi-Task Dataset
The NICR Depth-Multi-Task Dataset consists of depth image patches of humans and obstacles. It was recorded to train person and body posture classifiers. In total, the dataset contains more than 235,000 samples of non-human objects and of 27 persons. Each sample is labeled either as a person with posture (standing, sitting, or squatting), as a person without posture (i.e., the posture could not be clearly assigned), or as negative (i.e., non-person).
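The three-way labeling above can be sketched as a small taxonomy; note that the class names and integer ids below are illustrative assumptions, not the dataset's actual encoding:

```python
# Hypothetical sketch of the label taxonomy described above; names and ids
# are illustrative only, not the dataset's actual encoding.
from enum import Enum

class SampleLabel(Enum):
    NEGATIVE = 0            # non-person (e.g., obstacle)
    PERSON_NO_POSTURE = 1   # person, posture could not be clearly assigned
    PERSON_STANDING = 2
    PERSON_SITTING = 3
    PERSON_SQUATTING = 4

def is_person(label: SampleLabel) -> bool:
    """A sample counts as a person regardless of whether a posture is assigned."""
    return label is not SampleLabel.NEGATIVE

def has_posture(label: SampleLabel) -> bool:
    """Posture classification applies only to clearly assigned postures."""
    return label in (SampleLabel.PERSON_STANDING,
                     SampleLabel.PERSON_SITTING,
                     SampleLabel.PERSON_SQUATTING)
```

This split mirrors the multi-task setting: person detection uses all samples, while posture classification uses only the samples with a clearly assigned posture.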
Details about the data, the recording setup, and the distribution of the data can be found at: GitHub
Request access to the NICR Depth-Multi-Task Dataset
Send the completed form by email to firstname.lastname@example.org to acquire a login. Please note that the dataset is available for academic use only (academic institutions and non-profit organizations), and the form must be signed by a permanent staff member of such an institution. You may therefore need to ask your supervisor to sign.
Code and Network Weights
The code of our Multi-Task approach, some sample images of the dataset, and the weights of the best-performing neural networks from our paper are available at: GitHub
A package for reading and using the NICR Depth-Multi-Task Dataset in Python is available at: GitHub
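The linked package defines the actual reader API. As a rough illustration only, decoding a single depth patch might look like the following, assuming (hypothetically) that a patch is stored as raw little-endian 16-bit depth values in millimeters with known width and height:

```python
# Hypothetical sketch only -- the real reader API is in the package linked
# above. Assumes raw little-endian 16-bit depth values (mm) of known size.
import struct

def decode_depth_patch(buf: bytes, width: int, height: int):
    """Decode a raw buffer into a row-major list of rows of depth values."""
    values = struct.unpack("<%dH" % (width * height), buf)
    return [list(values[row * width:(row + 1) * width])
            for row in range(height)]
```

For the actual sample layout, file format, and loading utilities, please refer to the documentation in the linked repository.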
Include Orientation Data into Multi-Task Training
For our Multi-Task approach, we also included upper-body orientation data in the training. For this, we used our NICR RGB-D Orientation Data Set, which can be found here: NICR RGB-D Orientation Data Set.