Overview
CloningDCB is a dataset of synthetic driving sequences generated with the CARLA simulator. The driving tasks were performed by 40 individuals on both dynamic and static driving platforms. Driving is not random but follows orchestrated scenarios.
Each sequence includes RGB images accompanied by standard ground truth data (depth, optical flow, semantic and instance segmentation), ego-vehicle information, and, most notably, eye-tracking and EEG recordings.
Best guidance for training
The CloningDCB dataset helps reduce the number of driving hours a sensorimotor model needs to "learn" what to consider in a situation before making a decision.
Open for research and commercial purposes
CloningDCB may be used for both research and commercial purposes. It is released publicly under the Creative Commons Attribution-ShareAlike 4.0 license. For detailed information, please check our terms of use.
Ground-truth annotations
CloningDCB comes with photorealistic color images, per-pixel semantic segmentation, depth, instance and panoptic segmentation, optical flow and CAN bus information. To see some examples of per-pixel ground truth, please check our examples of annotations.
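Since the sequences are generated with CARLA, the depth ground truth can reasonably be assumed to use CARLA's standard encoding, which packs depth into the three color channels (R least significant) with a 1000 m far plane. The sketch below decodes such an image to metres; it assumes NumPy arrays in RGB channel order, which may differ from the dataset's actual file layout.

```python
import numpy as np

def decode_depth(rgb: np.ndarray) -> np.ndarray:
    """Convert a CARLA-style RGB-encoded depth image to metres.

    CARLA packs depth into the three colour channels, with R as the
    least significant byte; the far plane is 1000 m. This assumes the
    dataset keeps CARLA's native encoding (not confirmed here).
    """
    rgb = rgb.astype(np.float64)
    normalized = (
        rgb[..., 0] + rgb[..., 1] * 256.0 + rgb[..., 2] * 65536.0
    ) / (256.0 ** 3 - 1.0)
    return 1000.0 * normalized

# A synthetic 1x2 image: one black pixel (at the camera plane) and
# one fully saturated pixel (at the far plane).
img = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
depth_m = decode_depth(img)  # → [[0., 1000.]]
```

If the dataset ships depth as single-channel floating-point maps instead, this decoding step is unnecessary.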
Diversity of scenarios
CloningDCB features more than 40 hours of curated driving scenarios and free driving, performed by human drivers in the simulators of two of the most recognised research centres in Spain.
Five weather variations are also provided for every driving scenario.
Annotations (Ground-Truth)
CloningDCB brings per-pixel ground-truth semantic segmentation, scene depth, instance and panoptic segmentation, optical flow and real eye-tracking data. Check some of our examples:
