Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2018-12-12 17:10:38 - Overview

Name (Institute + Shorttitle): YouTube Co-localization Dataset (ECCV + IEEE Trans. CSVT papers) [GEU and NTU]
Description (include details on usage, files and paper references): The dataset consists of bounding box annotations for 15k frames of videos collected from the YouTube Objects Dataset (a brief evaluation sketch follows the citations below).

If you find this dataset useful, kindly cite the following papers:

[1] Koteswar Rao Jerripothula, Jianfei Cai, and Junsong Yuan, “Efficient Video Object Co-localization with Co-saliency Activated Tracklets” in IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), 2018.

[2] Koteswar Rao Jerripothula, Jianfei Cai, and Junsong Yuan, “CATS: Co-saliency Activated Tracklet Selection for Video Co-localization” in European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 2016.
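
The entry does not specify the annotation file format, so the following is a minimal Python sketch, not code from the dataset authors, of how video co-localization results are typically scored against bounding-box annotations like these using the CorLoc criterion (a frame counts as correctly localized when the IoU between the predicted and ground-truth boxes exceeds 0.5). The (x1, y1, x2, y2) box layout and the example boxes are assumptions for illustration only.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def corloc(predictions, ground_truth, threshold=0.5):
        """Fraction of frames whose predicted box overlaps the ground-truth
        box with IoU above the threshold (the usual CorLoc criterion)."""
        hits = sum(1 for p, g in zip(predictions, ground_truth)
                   if iou(p, g) > threshold)
        return hits / len(ground_truth)

    # Hypothetical predicted and ground-truth boxes for two frames:
    preds = [(10, 10, 100, 100), (20, 30, 80, 90)]
    gts   = [(12, 8, 95, 105), (200, 200, 250, 250)]
    print(corloc(preds, gts))  # 0.5: one of the two frames is correctly localized
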

URL Link: https://drive.google.com/file/d/1y4EravvIy-zQSk3EJhqphDiieOIW14Na/view
Files (#)
References (SKIPPED)
Category (SKIPPED): Co-localization
Tags (single words, spaced): Co-localization Co-segmentation Co-saliency Video CATS Tracklet Benchmark Binary Object Retrieval Segmentation Semantic Similarity Tracking Matching Localization
Last Changed: 2018-12-12