Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2017-05-01 06:20:11 - Overview

Name (Institute + Shorttitle): ETH/Yahoo Video2GIF dataset
Description (include details on usage, files and paper references): The Video2GIF dataset contains over 100,000 pairs of GIFs and their source videos. The GIFs were collected from two popular GIF websites (makeagif.com, gifsoup.com) and the corresponding source videos were collected from YouTube in Summer 2015. We provide IDs and URLs of the GIFs and the videos, along with the temporal alignment of GIF segments to their source videos (a loading sketch follows this description). The dataset is intended for training models for GIF creation and video highlight detection.

In addition to the 100K GIF-video pairs, the dataset contains 357 pairs of GIFs and their source videos as the test set. The 357 videos come with a Creative Commons CC-BY license, which allows us to redistribute the material with appropriate credit. We provide this test set to make the results reproducible
even when some of the videos become unavailable.
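The dataset page does not spell out the file layout, so the following is only a minimal sketch of how the GIF-video alignment records might be read, assuming a tab-separated file with a YouTube video ID, a GIF URL, and the start/end times (in seconds) of the aligned segment per row. The file name metadata.txt, the column names, and the column order are assumptions; check the files in the GitHub repository below for the actual format.

```python
import csv
from collections import defaultdict, namedtuple

# Hypothetical record layout: one GIF-video pair per row with the video ID,
# the GIF URL, and the start/end (in seconds) of the GIF segment in the video.
# Verify column names and order against the files in the repository.
Pair = namedtuple("Pair", ["youtube_id", "gif_url", "start_sec", "end_sec"])

def load_pairs(path):
    """Read GIF-video alignment records from a tab-separated file."""
    pairs = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comment lines
            youtube_id, gif_url, start, end = row[:4]
            pairs.append(Pair(youtube_id, gif_url, float(start), float(end)))
    return pairs

if __name__ == "__main__":
    # Example use: group aligned segments by source video, e.g. to build
    # per-video highlight targets for training.
    segments_by_video = defaultdict(list)
    for p in load_pairs("metadata.txt"):  # hypothetical file name
        segments_by_video[p.youtube_id].append((p.start_sec, p.end_sec))
```

Grouping segments by their source video, as in the example, gives one set of highlighted intervals per YouTube video, which is the form typically needed for training highlight-detection models on this kind of data.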

If you end up using the dataset, we ask you to cite the following paper:

Michael Gygli, Yale Song, Liangliang Cao
Video2GIF: Automatic Generation of Animated GIFs from Video
IEEE CVPR 2016

If you have any questions regarding the dataset, please contact:

Michael Gygli, gygli@vision.ee.ethz.ch
URL Link: https://github.com/gyglim/video2gif_dataset
Files (#): 100000
References: (SKIPPED)
Category: video highlight detection
Tags (single words, spaced): video2gif highlight video summarization gif summary scene understanding
Last Changed: 2017-05-01