Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2019-05-22 22:40:28 - Overview

Name (Institute + Shorttitle): Objects365
Description (include details on usage, files and paper references): Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. MEGVII and the Beijing Academy of Artificial Intelligence (BAAI) therefore co-prepared two new benchmark datasets for the object detection task: Objects365 and CrowdHuman, both designed and collected in natural scenes. The Objects365 benchmark aims to address large-scale detection with 365 object categories. CrowdHuman, on the other hand, targets the problem of human detection in crowds. We hope these two datasets provide diverse and practical benchmarks to advance object detection research, and that the two competitions based on them, together with the workshop hosted at CVPR 2019, serve as a platform to push the upper bound of object detection research.



Task


Objects365 Full Track



The goal of the Full Track is to explore the upper-bound performance of object detection systems using 365 classes and 600K+ training images. 30K images are used for validation and 100K additional images for testing. Detection performance is evaluated with the COCO benchmark criteria (AP averaged over IoU thresholds from 0.5 to 0.95).
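The COCO-style criterion mentioned above averages precision over ten IoU thresholds (0.50, 0.55, ..., 0.95) rather than scoring at a single overlap level. A minimal sketch of the two ingredients, the IoU between a predicted and a ground-truth box and the threshold grid, might look like this; the box format `(x1, y1, x2, y2)` and the function name are illustrative assumptions, not part of the official evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    (an assumed, illustrative format).
    """
    # Coordinates of the overlapping rectangle (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The ten IoU thresholds the COCO metric averages over:
# a prediction counts as a true positive at threshold t
# only if its IoU with a matched ground-truth box is >= t.
IOU_THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]
```

In practice the full metric also involves per-category matching, score-ranked precision/recall curves, and area/size breakdowns; the official `pycocotools` evaluator handles those details.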



Objects365 Tiny Track



The Tiny Track aims to lower the entry threshold, accelerate algorithm iteration, and study the long-tail category detection problem. 65 categories are selected from the Objects365 dataset, and contestants can train their models on 10K training images.
URL Link: https://biendata.com/competition/objects365/
Files (#):
References: (SKIPPED)
Category: (SKIPPED)
Tags (single words, spaced): detection object category large-scale human benchmark
Last Changed: 2019-05-22