SVIRO was created to investigate and benchmark machine learning approaches for applications in the vehicle passenger compartment, with a focus on common challenges of realistic engineering settings. In particular, SVIRO can be used to evaluate the generalization and robustness of machine learning models when trained on a limited number of variations.
The sceneries in the different vehicle interiors were generated randomly. We partitioned the available human models, child seats and backgrounds such that one part is only used for the training images (for all the vehicles) and the other part is used for the test images. Consequently, the dataset has an intrinsic dominant background, object and texture bias: all of the images are taken in a few passenger compartments, but generalization to new, unseen, passenger compartments and child seats should be achieved.
The dataset consists of 10 different vehicle interiors and 25,000 sceneries in total.
A detailed description of the ground truth data is given on this page. For each scenery, we randomly selected what kind of object is placed at each seat position. We used the following different categories (images are examples for the different categories available):
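The random per-seat assignment described above can be sketched as follows. This is a minimal illustration, not the actual generation code; the category names and the number of rear seats are assumptions for the example.

```python
import random

# Hypothetical category pool; the names are illustrative and not the
# dataset's official identifiers.
CATEGORIES = ["empty", "infant_seat", "child_seat", "adult", "object"]

def sample_scenery(num_seats=3, rng=None):
    """Randomly assign one category to each seat position of a scenery."""
    rng = rng or random.Random()
    return [rng.choice(CATEGORIES) for _ in range(num_seats)]

# One random scenery for a vehicle with three rear seats.
scenery = sample_scenery(num_seats=3, rng=random.Random(0))
```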
The child and infant seats can either be empty, or occupied by a baby or child respectively.
The labeling of the objects for the different tasks varies slightly, because we wanted to treat the infant/child and the infant/child seat as two different instances for segmentation and object detection. In the table below, you find the ground truth labels associated with the different objects for the different tasks.
| | Classification | Segmentation / Object detection | Keypoints |
| --- | --- | --- | --- |
| Infant in infant seat | 1 | - | 1 |
| Child in child seat | 2 | - | 1 |
| Empty infant seat | 5 | 1 | 0 |
| Empty child seat | 6 | 2 | 0 |
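The task-dependent labeling from the table above can be encoded as a small lookup, which may be convenient when loading the dataset. Only the four rows shown in the table are included here; `None` stands for the "-" entries, and the key names are illustrative.

```python
# Ground-truth labels from the table, keyed by object and task.
# None marks classes that are not used for a given task ("-" above).
LABELS = {
    "infant_in_infant_seat": {"classification": 1, "segmentation": None, "keypoints": 1},
    "child_in_child_seat":   {"classification": 2, "segmentation": None, "keypoints": 1},
    "empty_infant_seat":     {"classification": 5, "segmentation": 1,    "keypoints": 0},
    "empty_child_seat":      {"classification": 6, "segmentation": 2,    "keypoints": 0},
}

def label_for(obj, task):
    """Return the ground-truth label of an object for a given task."""
    return LABELS[obj][task]
```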
At the moment, our dataset consists of ten different car models. The number of windows varies, which causes different lighting conditions, and some cars have only two rear seats instead of three. Further, the camera position and orientation vary, which results in different perspectives.
Hyundai – Tucson
BMW – X5
Renault – Zoe
Lexus – GS F
Toyota – Hilux
Tesla – Model 3
VW – Tiguan
BMW – i3
Mercedes – A Class
Ford – Escape
We used the same people and child seats for the training set of each vehicle and the remaining ones for the test sets. This results in two child seats and one infant seat per data split. We did the same for the background: five were selected for the training set and five different ones for the test set. For the everyday objects, we used two bags, a cardboard box and a cup for the training dataset and a different bag, a paper bag, pillows and a box of bottles for the test set. The number of people and the distribution of gender, age and ethnicity for the training and test set can be found in the following table:
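The disjoint train/test partitioning of assets described above can be sketched as follows. This is an illustration of the splitting principle only; the asset names, counts and seed are placeholders, not the actual assets used for SVIRO.

```python
import random

def partition_assets(assets, train_count, seed=0):
    """Split a pool of assets into disjoint train and test subsets,
    so that test assets are never seen during training."""
    pool = list(assets)
    random.Random(seed).shuffle(pool)
    return pool[:train_count], pool[train_count:]

# Example: five backgrounds for training, five different ones for testing.
backgrounds = [f"hdri_{i}" for i in range(10)]  # placeholder names
train_bg, test_bg = partition_assets(backgrounds, train_count=5)
```

Because the two returned lists never share an element, any generalization measured on the test split is to genuinely unseen backgrounds.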
The number of images generated for each vehicle for the training and test sets is identical. In total, this results in 20,000 training and 5,000 test sceneries. The number and composition of appearances vary between the different vehicles, because all the sceneries were generated randomly. The distribution of the different classes across the different vehicles and data splits is summarized in the following table. For each cell, the left number is for the training split and the right one for the test split. IS stands for infant seat and CS for child seat. We mark by (R) a randomized dataset (we randomly selected the environments and textures from a large pool of available assets and changed the colors randomly). Empty seats are dominant, which causes an imbalanced distribution across the different classes.
| | Empty | IS | CS | Adult | Object | Empty IS | Empty CS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A Class | 2134 / 614 | 457 / 126 | 611 / 121 | 884 / 191 | 755 / 179 | 486 / 124 | 673 / 145 |
| Escape | 2079 / 569 | 489 / 133 | 581 / 143 | 940 / 215 | 742 / 187 | 443 / 108 | 726 / 145 |
| GS F | 2127 / 565 | 465 / 121 | 579 / 140 | 907 / 219 | 791 / 195 | 468 / 113 | 663 / 147 |
| Hilux | 2218 / 553 | 457 / 116 | 560 / 130 | 847 / 232 | 769 / 194 | 510 / 125 | 639 / 150 |
| i3 | 884 / 180 | 372 / 117 | 496 / 98 | 919 / 223 | 442 / 129 | 363 / 113 | 524 / 140 |
| Model 3 | 2507 / 613 | 449 / 121 | 537 / 107 | 909 / 224 | 565 / 196 | 439 / 105 | 594 / 134 |
| Tiguan | 2196 / 592 | 458 / 112 | 645 / 128 | 944 / 227 | 650 / 180 | 461 / 112 | 646 / 149 |
| Tucson | 2202 / 565 | 458 / 103 | 608 / 139 | 900 / 231 | 658 / 204 | 481 / 119 | 693 / 139 |
| X5 | 2400 / 610 | 371 / 109 | 569 / 100 | 892 / 234 | 767 / 195 | 418 / 124 | 583 / 128 |
| X5 (R) | 2392 / - | 397 / - | 525 / - | 896 / - | 754 / - | 429 / - | 607 / - |
| Zoe | 909 / 195 | 380 / 125 | 518 / 115 | 816 / 189 | 438 / 131 | 392 / 119 | 547 / 126 |
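The class imbalance noted above can be compensated, for example, with inverse-frequency class weights. The sketch below uses the A Class training counts from the table; the key names and the normalization scheme are illustrative choices, not part of the dataset.

```python
# Training-split counts for the A Class row of the table above.
counts = {
    "empty": 2134, "infant_seat": 457, "child_seat": 611,
    "adult": 884, "object": 755, "empty_is": 486, "empty_cs": 673,
}

total = sum(counts.values())
# Inverse-frequency weights, normalized to sum to 1: rare classes
# (e.g. occupied infant seats) receive larger weights than "empty".
weights = {k: total / v for k, v in counts.items()}
norm = sum(weights.values())
weights = {k: w / norm for k, w in weights.items()}
```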
Many applications in the passenger compartment require an active infrared camera system to work in the dark. We decided to imitate such a system by means of a simple approach: we placed an active red lamp (R=100%, G=0%, B=0%) next to the camera inside the car, illuminating the rear seats and overlapping with the illumination from the HDR background image. We then took only the red channel from the resulting RGB image. We refer to these images as grayscale images. This is not a physically accurate simulation of a real active infrared camera system, but it makes the images less dependent on the environmental lighting and simplifies the tasks. See the figure below for a comparison between a standard RGB image and our grayscale image for a dark scenery, where a lot of information would otherwise be lost.
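The red-channel extraction step can be reproduced in a few lines. This is a minimal sketch on a synthetic array; in practice the RGB input would be loaded from one of the rendered images.

```python
import numpy as np

def red_channel_grayscale(rgb):
    """Keep only the red channel of an H x W x 3 image, imitating the
    active-infrared-like grayscale images described above."""
    return rgb[..., 0]

# Toy example: a dark scene lit mainly by the red lamp.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 200  # strong red illumination
rgb[..., 1] = 10   # little green from the environment
gray = red_channel_grayscale(rgb)
```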
Validation on real infrared images
We tested the transferability of a model trained on SVIRO to real infrared images for instance segmentation. We fine-tuned all layers of a pre-trained Mask R-CNN model with a ResNet-50 backbone. The synthetic images were blurred to be closer to real infrared images. We combined the training images of the i3, Tucson and Model 3 and compared results on synthetic and real images in the X5. Only bounding boxes and masks with a confidence of at least 0.5 are plotted. The model performs similarly across real (bottom row) and synthetic (top row) images and sometimes fails to detect objects. This is expected, as the model has only seen a limited amount of variation. However, a similar child seat is detected in the real images, but not in the synthetic ones. We believe that investigations on SVIRO are transferable to real applications, as the resulting model behaves similarly on real and synthetic images.
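The confidence filtering applied before plotting can be sketched as follows. The prediction dict mimics the output format of torchvision's Mask R-CNN (arrays of boxes, labels and scores); the concrete values here are made up for illustration.

```python
import numpy as np

def filter_predictions(pred, threshold=0.5):
    """Keep only detections whose score is at least the threshold,
    as done for the plotted boxes and masks."""
    keep = pred["scores"] >= threshold
    return {key: value[keep] for key, value in pred.items()}

# Illustrative prediction: one confident and one low-confidence detection.
pred = {
    "boxes": np.array([[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 20.0, 20.0]]),
    "labels": np.array([1, 2]),
    "scores": np.array([0.9, 0.3]),
}
kept = filter_predictions(pred)  # only the 0.9 detection survives
```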
During the data generation process we tried to simulate the conditions of a realistic application. We decided to partition the available human models, child seats and backgrounds such that one part is only used for the training images (for all the vehicles) and the other part is used for the test images. For each of the ten different vehicle passenger compartments and available child seats, we fixed the texture as if real images had been taken. Consequently, the machine learning models need to generalize to previously unknown variations of humans, child seats and environments. The facial expression for all human models is identical and neutral and the seat belts were not attached.
We can create images under defined conditions (e.g. the same scenery under different lighting conditions) so that additional investigations can be performed in future work. Since our goal was to provide a versatile dataset, it can also be used to test additional challenges. For example, one can train models only on infant seats with the handle down and test them on seats with the handle up.
We also generated a training dataset with randomly selected textures and backgrounds from a large pool of available images in order to test the influence of texture on the different tasks.
We used the free and open-source 3D computer graphics software Blender 2.79 and its Python API to construct and render the synthetic 3D sceneries. For our dataset, we selected a subset of child seats available on the market, from which we created 3D models for use in our simulation. The 3D models were generated using depth cameras (Kinect v1) and precise structured light scanners (Artec Eva). We used textures (albedo, normal and roughness maps) from Textures.com (with permission) for all the objects in the scene. The environmental background and lighting were created by means of High Dynamic Range Images (HDRI) from HDRI Haven. The human models (adults, children and babies) and their clothing (additional clothes were downloaded from the community assets) were randomly generated using the open-source 3D graphics software MakeHuman 1.2.0. The 3D models of the cars were purchased from Hum3D, and everyday objects (e.g. backpacks, boxes, pillows) were downloaded from Sketchfab.