The CMU Geometric Context dataset by Derek Hoiem, Alexei A. Efros, and Martial Hebert consists of 300 images used for training and testing the geometric context method.
We extend our framework from Automatic Photo Pop-up by subclassifying vertical regions into planar (facing left, center, or right) and non-planar (porous and solid). We also provide extensive quantitative evaluation and demonstrate the usefulness of the geometric labels as context for object detection.
Note that all images were obtained using Google image search, using
keywords such as "city", "outdoor", "field", and "road". The original
content providers maintain copyrights on these images.
Geometric Context from a Single Image
D. Hoiem, A.A. Efros, and M. Hebert, ICCV 2005.
*.jpg: 300 images used for training and testing
allimsegs2.mat: contains ground truth
imsegs: for each image, contains the superpixel index image (segimage) and
the ground-truth label for each superpixel (vert_labels, horz_labels)
cluster_images: indices for learning segmentation
cv_images: indices for cross-validation (in blocks of 50)
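The ground truth in allimsegs2.mat can be read outside MATLAB with SciPy. The sketch below is illustrative only: it builds a tiny stand-in .mat file with the field names described above (imsegs, segimage, vert_labels, horz_labels), since the exact struct layout of the real file may differ.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Build a tiny stand-in for allimsegs2.mat: one image whose 2x2
# superpixel map contains two superpixels (indices 1 and 2).
# These field names follow the README; the real file's layout may differ.
fake = {
    "imsegs": {
        "segimage": np.array([[1, 1], [2, 2]], dtype=np.uint16),
        "vert_labels": np.array([1, 2]),  # one label per superpixel
        "horz_labels": np.array([0, 3]),
    }
}

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "demo_imsegs.mat")
    savemat(path, fake)

    # struct_as_record=False exposes MATLAB structs as attribute access;
    # squeeze_me=True drops the singleton dimensions MATLAB adds.
    data = loadmat(path, squeeze_me=True, struct_as_record=False)
    imsegs = data["imsegs"]
    print(imsegs.segimage.shape)  # superpixel index image
    print(imsegs.vert_labels)     # ground-truth label per superpixel
```

For the real dataset, point loadmat at allimsegs2.mat and iterate over the imsegs array, one entry per image.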