Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2019-05-24 23:04:14 - Overview

Attribute Current Content
Name (Institute + Shorttitle) CUHK DeepFashion2
Description (include details on usage, files and paper references) DeepFashion2 is a comprehensive fashion dataset. It contains 491K diverse images of 13 popular clothing categories from both commercial shopping stores and consumers. In total it has 801K clothing items, where each item in an image is labeled with scale, occlusion, zoom-in, viewpoint, category, style, bounding box, dense landmarks and per-pixel mask. There are also 873K Commercial-Consumer clothes pairs.
The dataset is split into a training set (391K images), a validation set (34K images), and a test set (67K images).

Clothes Detection
This task detects clothes in an image by predicting a bounding box and a category label for each detected clothing item. The evaluation metrics are the bounding-box average precision AP_{box}, AP_{box}^{IoU=0.50} and AP_{box}^{IoU=0.75}.
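The thresholded metrics above count a detection as correct only when its box overlaps a ground-truth box by at least the stated intersection-over-union (IoU). As a minimal sketch (the function name and box format are illustrative, not from the official evaluation code), IoU for two axis-aligned boxes can be computed like this:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Under AP_{box}^{IoU=0.50}, a prediction matching a ground-truth box
# with box_iou(pred, gt) >= 0.50 is a true positive.
```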

Landmark and Pose Estimation
This task aims to predict landmarks for each detected clothing item in each image. Similarly, we employ the evaluation metrics used by COCO for human pose estimation, calculating the average precision for keypoints AP_{pt}, AP_{pt}^{OKS=0.50} and AP_{pt}^{OKS=0.75}, where OKS indicates the object landmark similarity.
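COCO-style OKS scores each labeled keypoint by a Gaussian falloff of its distance to the ground truth, scaled by the object area and a per-keypoint constant, and averages over the labeled keypoints. A simplified sketch (argument names and the per-keypoint constants `k` are assumptions; the official evaluation code may differ in detail):

```python
import numpy as np

def oks(pred, gt, vis, area, k):
    """Object keypoint similarity, COCO-style.
    pred, gt: (N, 2) keypoint coordinates; vis: (N,) visibility flags
    (>0 means labeled); area: object area; k: (N,) falloff constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)   # squared distances
    e = d2 / (2 * area * k ** 2)            # normalized error
    labeled = vis > 0
    return float(np.mean(np.exp(-e[labeled])))
```

A perfect prediction yields OKS = 1.0, and AP_{pt}^{OKS=0.50} then thresholds this similarity at 0.50, analogous to IoU thresholds for boxes.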

Clothes Segmentation
This task assigns a category label (including a background label) to each pixel in an item. The evaluation metrics are the average precision computed over masks, including AP_{mask}, AP_{mask}^{IoU=0.50} and AP_{mask}^{IoU=0.75}.
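For masks, the IoU in these metrics is computed over pixels rather than box areas. A minimal sketch, assuming binary per-instance masks (the function name is illustrative):

```python
import numpy as np

def mask_iou(m1, m2):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(m1, m2).sum()   # pixels in both masks
    union = np.logical_or(m1, m2).sum()    # pixels in either mask
    return float(inter) / union if union else 0.0
```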

Consumer-to-Shop Clothes Retrieval
Given a detected item from a consumer-taken photo, this task aims to search the commercial images in the gallery for items corresponding to the detected item. Top-k retrieval accuracy is employed as the evaluation metric. We emphasize retrieval performance while still considering the influence of the detector: if a clothing item fails to be detected, the query item is counted as missed.
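Top-k retrieval accuracy, with missed detections counted as failures, can be sketched as follows (data layout and names are assumptions for illustration, not the official evaluation code):

```python
def top_k_accuracy(rankings, relevant, k, n_missed=0):
    """Fraction of queries whose top-k ranked gallery ids contain a
    correct match. rankings: one ranked list of gallery ids per detected
    query; relevant: the set of ground-truth gallery ids per query;
    n_missed: queries whose item was never detected (counted as misses)."""
    hits = sum(bool(set(r[:k]) & rel) for r, rel in zip(rankings, relevant))
    return hits / (len(rankings) + n_missed)
```

Note the denominator includes undetected queries, so a weak detector lowers the reported retrieval accuracy even if ranking is perfect.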

If you use the DeepFashion2 dataset in your work, please cite it as:
@article{DeepFashion2,
  author = {Yuying Ge and Ruimao Zhang and Xiaogang Wang and Xiaoou Tang and Ping Luo},
  title = {DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Verification of Clothing Images},
  journal = {CVPR},
  year = {2019}
}

URL Link 
Files (#) 800000
References (SKIPPED)
Category (SKIPPED) 
Tags (single words, spaced) fashion apparel attributes recognition localization human benchmark polygon annotation instance semantic segmentation
Last Changed 2019-05-24