Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2017-11-21 13:08:46

Name (Institute + Shorttitle): Udacity Annotated Driving Datasets
Description (include details on usage, files and paper references): The Udacity Annotated Driving Datasets consist of two datasets:

Dataset 1
The dataset covers driving in Mountain View, California, and neighboring cities during daylight conditions. It contains over 65,000 labels across 9,423 frames collected from Point Grey research cameras running at a full resolution of 1920x1200 at 2 Hz. The dataset was annotated by CrowdAI using a combination of machine learning and human annotators.

Labels
Car
Truck
Pedestrian

CSV Format (a parsing sketch follows this dataset's listing)
xmin
ymin
xmax
ymax
frame
label

Size: 1.5 GB
Annotator: CrowdAI
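
The CrowdAI CSV can be loaded with a few lines of Python. This is a minimal sketch, not code shipped with the dataset: it assumes the file has a header row matching the column names listed above (xmin, ymin, xmax, ymax, frame, label) and that the annotation file is named labels.csv; adjust both to the actual download if they differ.

import csv

def load_crowdai_annotations(path="labels.csv"):  # filename is an assumption
    # One bounding box per row: pixel coordinates plus frame name and class label.
    boxes = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumes a header row with the names above
            boxes.append({
                "frame": row["frame"],
                "label": row["label"],
                "bbox": (int(row["xmin"]), int(row["ymin"]),
                         int(row["xmax"]), int(row["ymax"])),
            })
    return boxes

As a rough sanity check, len(load_crowdai_annotations()) should come out on the order of the 65,000 labels quoted above.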

Dataset 2
This dataset is similar to Dataset 1 but contains an additional field for occlusion and an additional label for traffic lights. It was annotated entirely by humans using Autti and is slightly larger, with 15,000 frames.

Labels
Car
Truck
Pedestrian
Traffic Lights

CSV Format (a parsing sketch follows this dataset's listing)
frame
xmin
ymin
xmax
ymax
occluded
label
attributes (Only appears on traffic lights)

Size: 3.3 GB
Annotator: Autti
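
The Autti CSV differs only in column order and in the extra occluded flag and attributes column. A minimal sketch along the same lines, again assuming a header row with the column names above and a hypothetical filename autti_labels.csv:

import csv
from collections import Counter

def load_autti_annotations(path="autti_labels.csv"):  # filename is an assumption
    # Rows carry: frame, xmin, ymin, xmax, ymax, occluded, label, attributes.
    boxes = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumes a header row; adjust if the file uses a different separator
            boxes.append({
                "frame": row["frame"],
                "label": row["label"],
                "occluded": row["occluded"] not in ("", "0", "false"),
                "attributes": row.get("attributes", ""),  # only present for traffic lights
                "bbox": (int(row["xmin"]), int(row["ymin"]),
                         int(row["xmax"]), int(row["ymax"])),
            })
    return boxes

if __name__ == "__main__":
    # Example: count boxes per label to sanity-check the download.
    print(Counter(b["label"] for b in load_autti_annotations()))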
 
URL Link: https://github.com/udacity/self-driving-car/tree/master/annotations
Files (#): 24423
References: (skipped)
Category: (skipped)
Tags (single words, spaced): classification segmentation urban street selfdriving autonomous udacity annotation california city daylight
Last Changed: 2017-11-21