Yet Another Computer Vision Index To Datasets (YACVID) - Details

As of: 2020-07-05 09:09:37 - Overview

Name (Institute + Shorttitle): MAE Dataset
Description (include details on usage, files and paper references): The Multimodal Attribute Extraction (MAE) dataset is the first benchmark dataset for the task of multimodal attribute extraction. It is composed of mixed-media data for 2.2 million product items. For each item there is a textual description, a set of product images, and an open-schema table of product attributes. For more information, read our paper.
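To make the item structure above concrete, here is a minimal sketch of one MAE-style record in Python. The field names (`description`, `images`, `attributes`) and the toy `lookup` helper are illustrative assumptions, not the dataset's actual schema or API.

```python
# Hypothetical sketch of a single MAE-style item record.
# Field names are illustrative assumptions, not the dataset's real schema.
item = {
    "description": "Stainless steel chef's knife with an 8-inch blade.",
    "images": ["img_001.jpg", "img_002.jpg"],  # set of product images
    "attributes": {                            # open-schema attribute table
        "blade length": "8 inches",
        "material": "stainless steel",
    },
}


def lookup(record, attribute):
    """Toy stand-in for an extraction model: return the gold value if present.

    The real task is to predict this value from the description and images.
    """
    return record["attributes"].get(attribute)


print(lookup(item, "material"))  # stainless steel
```

The "open-schema" aspect means the set of attribute keys is not fixed in advance, which is why a dictionary rather than a fixed-column table is a natural representation.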

URL:
Files (#): 2,000,000
References: (skipped)
Category: (skipped)
Tags (single words, spaced): multimedia multimodal images text attribute recognition pair product search asset retrieval
Last Changed: 2020-07-05