YACVID - Stanford Background Dataset - Details

Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2024-03-19 06:15:04 - Overview

Name (Institute + Shorttitle): Stanford Background Dataset
Description (include details on usage, files and paper references): The Stanford Background Dataset is a dataset introduced in Gould et al. (ICCV 2009) for evaluating methods for geometric and semantic scene understanding. The dataset contains 715 images chosen from existing public datasets: LabelMe, MSRC, PASCAL VOC and Geometric Context. The selection criteria were for the images to be of outdoor scenes, have approximately 320-by-240 pixels, contain at least one foreground object, and have the horizon position within the image (it need not be visible).

Semantic and geometric labels were obtained using Amazon Mechanical Turk (AMT). The labels are:

horizons.txt: image dimensions and location of the horizon
labels/*.regions.txt: integer matrix giving each pixel's semantic class (sky, tree, road, grass, water, building, mountain, or foreground object); a negative number indicates unknown
labels/*.surfaces.txt: integer matrix giving each pixel's geometric class (sky, horizontal, or vertical)
labels/*.layers.txt: integer matrix indicating distinct image regions
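The label files above can be sketched as simple whitespace-separated integer matrices. A minimal loading example, assuming that file layout (the demo uses a synthetic in-memory matrix rather than an actual dataset file, and the class-index mapping 0-7 follows the order listed above, which is an assumption):

```python
import numpy as np
from io import StringIO

def load_regions(path_or_file):
    # Assumption: *.regions.txt is a whitespace-separated integer
    # matrix, one text row per pixel row.
    labels = np.loadtxt(path_or_file, dtype=int)  # H x W matrix of class ids
    # Negative values mark unknown pixels.
    known = labels >= 0
    return labels, known

# Demo with a tiny synthetic 2x3 matrix instead of a real file:
demo = StringIO("0 1 -1\n2 2 7\n")
labels, known = load_regions(demo)
print(labels.shape)  # (2, 3)
print(int(known.sum()))  # 5 known pixels
```

The same loader works for the surfaces and layers files, since they share the integer-matrix format.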


If you use this dataset in your work, you should reference:
S. Gould, R. Fulton, D. Koller. Decomposing a Scene into Geometric and Semantically Consistent Regions. Proceedings of the International Conference on Computer Vision (ICCV), 2009.
URL Link: http://dags.stanford.edu/projects/scenedataset.html
Files (#): 715
Tags (single words, spaced): semantic segmentation urban classification nature geometry
Last Changed: 2024-03-19