|Description (include details on usage, files and paper references)||We introduce a labeled dataset of categorized images for evaluating sketch-based image retrieval. Using Flickr, we downloaded about 3,000 images for each of five keywords: “butterfly”, “coffee mug”, “dog jump”, “giraffe”, and “plane”, together comprising about 15,000 images. For each image, if it contains an unambiguous object whose content matches the query keyword and most of the object is visible, we mark that object region. Salient regions are marked at the pixel level. We label salient object regions only for objects that are almost fully visible, since partially occluded objects are less useful for shape matching. The THUR15000 dataset does not contain a salient region label for every image, i.e., some images may have no labeled salient region. This dataset is used to evaluate shape-based image retrieval performance.
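Since not every image has a labeled salient region, users typically need to pair each image with its mask, if one exists. The sketch below shows one way to do this; the file-naming convention (masks sharing the image's base name with a `.png` extension) is an assumption for illustration, not the dataset's documented layout.

```python
import os


def pair_images_with_masks(image_names, mask_names):
    """Pair each image with its pixel-level saliency mask, if one exists.

    Assumes (hypothetically) that a mask shares its image's base name,
    e.g. 'butterfly_0001.jpg' -> 'butterfly_0001.png'. Images without a
    labeled salient region map to None.
    """
    mask_stems = {os.path.splitext(m)[0] for m in mask_names}
    pairs = {}
    for img in image_names:
        stem = os.path.splitext(img)[0]
        pairs[img] = stem + ".png" if stem in mask_stems else None
    return pairs


# Example: one labeled image, one unlabeled image (hypothetical names).
imgs = ["butterfly_0001.jpg", "plane_0002.jpg"]
masks = ["butterfly_0001.png"]
print(pair_images_with_masks(imgs, masks))
```

An evaluation script can then restrict retrieval benchmarks to the subset of images whose mask entry is not `None`.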