Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2020-06-02 22:15:52 - Overview

Name: KTH Action
Description: The video database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four different scenarios: outdoors (s1), outdoors with scale variation (s2), outdoors with different clothes (s3) and indoors (s4). The database currently contains 2391 sequences. All sequences were taken over homogeneous backgrounds with a static camera at a frame rate of 25 fps. The sequences were downsampled to a spatial resolution of 160x120 pixels and have an average length of four seconds.
In the experiments reported at ICPR 2004, all sequences were divided with respect to the subjects into a training set (8 persons), a validation set (8 persons) and a test set (9 persons). The classifiers were trained on the training set, the validation set was used to optimize the parameters of each method, and the recognition results were obtained on the test set.
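The subject-wise split described above (25 subjects partitioned into 8 training, 8 validation and 9 test persons) can be sketched as follows. The exact assignment of person IDs to splits here is an assumption for illustration, not the published assignment from the ICPR 2004 paper.

```python
# Sketch of a subject-wise split for the KTH Action dataset.
# NOTE: the person-to-split assignment below is illustrative only;
# consult the ICPR 2004 paper for the actual published split.

ACTIONS = ["walking", "jogging", "running",
           "boxing", "handwaving", "handclapping"]
SCENARIOS = ["s1", "s2", "s3", "s4"]  # outdoors, scale var., clothes, indoors

subjects = [f"person{i:02d}" for i in range(1, 26)]  # 25 subjects

train_subjects = set(subjects[:8])    # 8 persons for training
val_subjects = set(subjects[8:16])    # 8 persons for validation
test_subjects = set(subjects[16:])    # 9 persons for testing


def split_of(subject: str) -> str:
    """Return which split a subject's sequences belong to."""
    if subject in train_subjects:
        return "train"
    if subject in val_subjects:
        return "val"
    return "test"
```

Splitting by subject rather than by sequence ensures that no person appears in both training and test data, so the recognition results measure generalization to unseen actors.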
URL: Link
Files (#):
References: (SKIPPED)
Category: Action Classification, Segmentation
Tags: action, classification, video, segmentation
Last Changed: 2020-06-02