The Caltech Pedestrian Dataset consists of approximately 10 hours of 640x480 30Hz video taken from a vehicle driving through regular traffic in an urban environment. About 250,000 frames (in 137 approximately minute-long segments) with a total of 350,000 bounding boxes and 2300 unique pedestrians were annotated. The annotations include temporal correspondence between bounding boxes and detailed occlusion labels. More information can be found in our PAMI 2011 and CVPR 2009 benchmarking papers.
- Caltech Pedestrian Testing Dataset: All results in our CVPR09 paper were reported on this data. We give two sets of results: on pedestrians 50 pixels or taller that are unoccluded or partially occluded (reasonable), and a more detailed breakdown of performance as in the paper (detailed).
- Caltech Pedestrian Training Dataset: Results on the training data. These results are provided so researchers can compare their method without submitting a classifier for full evaluation. Results: reasonable, detailed.
- Caltech Pedestrian Japan Dataset: Similar to the Caltech Pedestrian Dataset (both in magnitude and annotation), except the video was collected in Japan. We cannot release this data; however, we will benchmark results to give a secondary evaluation of various detectors. Results: reasonable, detailed.
- INRIA Pedestrian Test Dataset: Full image results on the INRIA Pedestrian dataset (evaluation details).
- ETH Pedestrian Dataset: Results on the ETH Pedestrian dataset (evaluation details).
- TUD-Brussels Pedestrian Dataset: Results on the TUD-Brussels Pedestrian dataset (evaluation details).
- Daimler Pedestrian Dataset: Results on the Daimler Pedestrian dataset (evaluation details).
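The "reasonable" subset used in the evaluations above can be sketched as a simple filter over annotations: keep pedestrians 50 pixels or taller that are at most partially occluded. A minimal sketch in Python follows; the occlusion label names and the annotation record format are illustrative assumptions, not the dataset's exact file format:

```python
def is_reasonable(bbox_height, occlusion):
    """Return True if an annotation falls in the 'reasonable' subset:
    50 pixels or taller, and unoccluded or only partially occluded.

    occlusion is assumed to be one of 'none', 'partial', 'heavy'
    (illustrative label names, not the dataset's exact encoding).
    """
    return bbox_height >= 50 and occlusion in ("none", "partial")

# Example: filter a list of (height, occlusion) annotation records.
annotations = [(62, "none"), (45, "none"), (80, "heavy"), (55, "partial")]
reasonable = [a for a in annotations if is_reasonable(*a)]
```

Here `reasonable` would keep only the first and last records: the 45-pixel pedestrian fails the height threshold and the heavily occluded one fails the occlusion condition.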
Updated sanitized annotations: