YACVID - UT Egocentric (UT Ego) Dataset - Details

Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2024-03-19 08:00:58 - Overview

Views: 1147
Name: UT Egocentric (UT Ego) Dataset
Description: The University of Texas at Austin Egocentric (UT Ego) Dataset contains 4 videos captured from head-mounted cameras. Each video is 3-5 hours long and was captured in a natural, uncontrolled setting.

We used the Looxcie wearable camera, which captures video at 15 fps and 320 x 480 resolution (see the loading sketch after the attribute listing below). Four subjects wore the camera for us: one undergraduate student, two graduate students, and one office worker. The videos capture a variety of activities such as eating, shopping, attending a lecture, driving, and cooking.

* For privacy reasons, we are able to share only 4 of the 10 videos originally captured (one from each subject). They correspond to the test videos evaluated in both the CVPR 2012 and CVPR 2013 papers.

References:

Y. J. Lee, J. Ghosh, and K. Grauman. Discovering Important People and Objects for Egocentric Video Summarization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

Z. Lu and K. Grauman. Story-Driven Summarization for Egocentric Video. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. 
URL Link: http://vision.cs.utexas.edu/projects/egocentric_data/UT_Egocentric_Dataset.html
Files (#):
Tags: First-person vision, egocentric
Last Changed: 2024-03-19
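
Loading sketch: once a video has been downloaded from the project page, the snippet below shows one way to verify the stated capture parameters (15 fps, 320 x 480) and to sample frames sparsely rather than decoding a full 3-5 hour recording. This is a minimal sketch assuming Python with OpenCV; the filename "P01.mp4" is a hypothetical placeholder, not the dataset's actual naming scheme.

    import cv2  # pip install opencv-python

    # Hypothetical filename; substitute the name of a downloaded UT Ego video.
    cap = cv2.VideoCapture("P01.mp4")
    if not cap.isOpened():
        raise IOError("Could not open video; check the file path.")

    fps = cap.get(cv2.CAP_PROP_FPS)                   # expected: ~15
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expected: 320
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected: 480
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    hours = frames / fps / 3600 if fps > 0 else float("nan")
    print(f"{width}x{height} @ {fps:.1f} fps, {frames} frames (~{hours:.1f} h)")

    # Sample roughly one frame per minute instead of decoding every frame
    # of a multi-hour video; seeking by frame index keeps this fast.
    step = max(int(fps * 60), 1)
    idx = 0
    ok, frame = cap.read()
    while ok:
        # ... process `frame` here, e.g. hand it to a summarization pipeline ...
        idx += step
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
    cap.release()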