This is the CAD 120 Affordance Segmentation Dataset, based on the Cornell Activity Dataset CAD 120 (see http://pr.cs.cornell.edu/humanactivities/data.php).
RGB frames selected from the Cornell Activity Dataset. To find the location of a frame
in the original videos, see video_info.txt.
Image crops taken from the selected frames and resized to 321*321 pixels. Each crop is a padded
bounding box of an object the human interacts with in the video. Due to the padding,
a crop may also contain background and other objects.
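The padded-crop step described above can be sketched as follows. This is a minimal sketch, not the dataset's actual preprocessing: the padding fraction (`pad=0.15`) and the nearest-neighbour resize are assumptions for illustration.

```python
import numpy as np

def padded_crop(frame, box, pad=0.15, out_size=321):
    """Crop a padded bounding box from a frame and resize it.

    `pad` is a hypothetical padding fraction; the dataset's actual
    padding is not specified here. `box` is (x1, y1, x2, y2).
    """
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = box
    px = int((x2 - x1) * pad)
    py = int((y2 - y1) * pad)
    # Expand the box and clip to the frame borders; this is why
    # background and other objects can end up inside a crop.
    x1, y1 = max(0, x1 - px), max(0, y1 - py)
    x2, y2 = min(w, x2 + px), min(h, y2 + py)
    crop = frame[y1:y2, x1:x2]
    # Nearest-neighbour resize to out_size x out_size via index maps.
    ys = np.arange(out_size) * crop.shape[0] // out_size
    xs = np.arange(out_size) * crop.shape[1] // out_size
    return crop[ys][:, xs]
```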
In each selected frame, every bounding box was processed; the bounding boxes are
provided by the Cornell Activity Dataset.
In each crop's file name, the 5-digit number gives the frame number and the second number gives
the bounding box number within the frame.
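The naming convention can be parsed as below. The exact pattern (underscore separator, `.png` extension) is an assumption; only the "5-digit frame number, then bounding box number" structure comes from the description above.

```python
import re

def parse_crop_name(name):
    """Parse a crop file name of the assumed form '<frame:5digits>_<box>.png'.

    The separator and extension are hypothetical; adapt the pattern to
    the actual file names in the dataset.
    """
    m = re.match(r"(\d{5})_(\d+)\.png$", name)
    if m is None:
        raise ValueError(f"unexpected crop name: {name}")
    return int(m.group(1)), int(m.group(2))  # (frame number, box number)
```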
321*321*6 segmentation masks for the image crops. Each channel corresponds to an
affordance (openable, cuttable, pourable, containable, supportable, holdable, in this order).
All pixels belonging to a particular affordance are labeled 1 in the respective channel,
all other pixels 0.
321*321 png images, each containing the binary mask for one of the affordances.
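A 6-channel mask volume can be assembled from the per-affordance binary masks like this; the channel order follows the list above, while the dictionary-based input is an assumption standing in for however the per-affordance png files are loaded.

```python
import numpy as np

# Channel order as specified by the dataset description.
AFFORDANCES = ["openable", "cuttable", "pourable",
               "containable", "supportable", "holdable"]

def stack_affordance_masks(masks_by_name):
    """Stack per-affordance 321x321 binary masks into a 321x321x6 volume.

    `masks_by_name` maps an affordance name to its binary mask array;
    missing affordances get an all-zero channel.
    """
    channels = []
    for name in AFFORDANCES:
        m = masks_by_name.get(name, np.zeros((321, 321), dtype=np.uint8))
        channels.append((m > 0).astype(np.uint8))  # binarize defensively
    return np.stack(channels, axis=-1)  # shape (321, 321, 6)
```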
Lists containing the train and test sets for two splits. The actor split ensures that
train and test images stem from different videos with different actors, while the object split ensures
that train and test data have no (central) object classes in common.
The train sets are additionally subdivided into three subsets A, B, and C. For the actor split,
the subsets stem from different videos. For the object split, each subset contains
every third crop of the train set.
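For the object split, the stride-3 subdivision described above can be reproduced as a simple partition; that the train list is an ordered sequence of crop names is an assumption.

```python
def subdivide_object_split(train_crops):
    """Partition the ordered train list into subsets A, B, C,
    each containing every third crop (object split)."""
    return {
        "A": train_crops[0::3],
        "B": train_crops[1::3],
        "C": train_crops[2::3],
    }
```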
Maps image crops to their coordinates in the frames.
Maps frames to 2D human pose coordinates, hand-annotated by us.
Maps image crops to the (central) object it contains.
Maps image crops to the affordances visible in the crop.
The crops contain the following object classes:
Affordances in our set:
Note that our object affordance labeling differs from the Cornell Activity Dataset:
for example, the cap of a pizza box is considered supportable.
Johann Sawatzky, Abhilash Srikantha, Juergen Gall.
Weakly Supervised Affordance Detection.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
H. S. Koppula and A. Saxena.
Physically Grounded Spatio-Temporal Object Affordances.
European Conference on Computer Vision (ECCV), 2014.