TICaM is a time-of-flight dataset of car in-cabin images that provides a basis for testing extensive car-cabin monitoring systems based on deep learning methods. We provide depth, RGB, and infrared images of the front car cabin, recorded with an Azure Kinect inside a driving simulator and covering various dynamic scenarios that typically occur while driving. Additionally, we provide a synthetic dataset of car cabin images similar to the real one, leveraging the ability of modern simulation software to generate abundant data with little effort. It can be used to test domain adaptation between synthetic and real data for select classes. For both datasets we provide ground truth annotations for 2D and 3D object detection as well as instance segmentation. For the real dataset, we also provide activity annotations. Detailed information on the data format of each of the ground truth annotations can be found here.
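As an illustration of working with 2D detection annotations, the sketch below parses a COCO-style JSON layout. This is only an assumption for the example; the file layout, field names, and sample values here are hypothetical, not the documented TICaM format.

```python
import json

# Hypothetical COCO-style annotation layout with made-up sample values;
# consult the dataset documentation for the actual TICaM format.
sample = {
    "images": [{"id": 1, "file_name": "frame_0001.png"}],
    "annotations": [
        # bbox is [x, y, width, height] in pixels (COCO convention)
        {"image_id": 1, "category_id": 2, "bbox": [120, 80, 60, 140]},
    ],
    "categories": [{"id": 2, "name": "person"}],
}

def boxes_by_image(coco):
    """Group 2D boxes by image id, resolving category ids to names."""
    names = {c["id"]: c["name"] for c in coco["categories"]}
    out = {}
    for ann in coco["annotations"]:
        out.setdefault(ann["image_id"], []).append(
            (names[ann["category_id"]], ann["bbox"])
        )
    return out

print(boxes_by_image(sample))
# {1: [('person', [120, 80, 60, 140])]}
```

The same grouping step applies to any per-image annotation list, whatever the concrete schema turns out to be.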

Sample images: real depth, RGB, and IR images; synthetic depth, RGB, and IR-imitation images.

Data Capturing Setup

The data capturing setup is based on a driving simulator developed at DFKI, consisting of a realistic in-cabin mock-up and a wide-angle projection system for a realistic driving experience. The test platform is equipped with a wide-angle Azure Kinect that monitors the entire interior of the vehicle mock-up, and with an optical ground-truth reference sensor system that tracks and records the occupants' body movements synchronously with the camera's 2D and 3D video streams. Moreover, the precise positioning of the front seats is controlled, varied, and registered via a CAN interface. More details on our data capturing setup can be found in this paper.
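Because the camera is a time-of-flight sensor, each depth frame can be back-projected into a 3D point cloud with the standard pinhole model. The sketch below assumes placeholder intrinsics (focal lengths and principal point); the real values come from the sensor calibration shipped with the camera, not from this document.

```python
import numpy as np

# Placeholder pinhole intrinsics (fx, fy, cx, cy) in pixels;
# replace with the depth camera's actual calibration values.
FX, FY, CX, CY = 504.0, 504.0, 320.0, 288.0

def depth_to_points(depth_m):
    """Back-project a depth image (in meters) to an (H*W, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX  # pixel column -> metric X
    y = (v - CY) * z / FY  # pixel row    -> metric Y
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Tiny synthetic depth image: every pixel 1 m from the camera.
pts = depth_to_points(np.ones((4, 4)))
print(pts.shape)  # (16, 3)
```

For real frames, depth images stored as 16-bit millimeter values would first be scaled to meters before back-projection.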

Data Acquisition

We have tried to cover as many real-life driving scenarios as possible. We therefore capture sequences that include a driver, a passenger, a child or infant in both forward-facing and rearward-facing child seats, and many everyday objects. We ask our participants to perform typical driver and passenger actions in different car-seat positions.

Rendering of Synthetic Data

Analogous to the recorded real data, we render synthetic car cabin images using the 3D computer graphics software Blender 2.81. We use methods and materials from SVIRO and vary the body poses of the human models to obtain realistic driving poses. The 3D models of the car (a Mercedes A-Class) come from Hum3D, the everyday objects were downloaded from Sketchfab, and the human models were generated with MakeHuman. In addition, High Dynamic Range Images (HDRIs) were used to obtain different environmental backgrounds and lighting, and textures from Textures.com were applied to each 3D object to define its reflection properties and colors.

Publications

If you use our TICaM dataset, please cite the following publication:

@inproceedings{katrolia2021ticam,
  author    = {Jigyasa Singh Katrolia and
               Ahmed El{-}Sherif and
               Hartmut Feld and
               Bruno Mirbach and
               Jason R. Rambach and
               Didier Stricker},
  title     = {TICaM: {A} Time-of-flight In-car Cabin Monitoring Dataset},
  booktitle = {32nd British Machine Vision Conference 2021, {BMVC} 2021, Online,
               November 22-25, 2021},
  pages     = {277},
  publisher = {{BMVA} Press},
  year      = {2021},
  url       = {https://www.bmvc2021-virtualconference.com/assets/papers/0701.pdf},
  timestamp = {Wed, 22 Jun 2022 16:52:45 +0200},
  biburl    = {https://dblp.org/rec/conf/bmvc/KatroliaEFMRS21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}