Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a handy spreadsheet that summarizes their key properties (number of action labels, average video length, number of cameras, etc.) in an easy-to-read format:
The list is incomplete (e.g., TRECVID has not been added yet), but you can help fix this by filling out the form listed on the above page (and then please send me an email telling me what you have done :).
Hopefully this list will reduce the tendency for researchers to create a new dataset every time they publish a paper on activity recognition, a practice that makes results incomparable across papers. Ideally, the community will converge on two or three standard benchmarks, analogous to what has happened in the static image classification / object detection community with ILSVRC and PASCAL VOC.