UBC3V is a synthetic dataset for training and evaluating single- and multiview depth-based pose estimation techniques. The data is similar in nature to that used in the well-known Kinect paper of Shotton et al., but with a few distinctions:
* The dataset distinguishes the back-front and left-right sides of the body.
* The camera location is relatively unconstrained.
* The dataset has three randomly located cameras for each pose, which makes it suitable for multiview pose estimation settings.
* It is freely available to the public.
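Because each pose is observed by three independently placed cameras, multiview methods typically back-project every depth map into a shared world frame before fusing views. Below is a minimal sketch of that step using a generic pinhole camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) and extrinsics (`R`, `t`) here are illustrative placeholders, not the dataset's actual calibration format.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def to_world(pts_cam, R, t):
    """Map camera-frame points to the world frame: X_w = R @ X_c + t."""
    return pts_cam @ R.T + t

# Toy example: a flat 2x2 depth map at 1 m, identity extrinsics.
depth = np.ones((2, 2))
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
world = to_world(pts, np.eye(3), np.zeros(3))
```

Running this per camera (with each camera's own calibration) yields three point clouds in a common frame, which is the usual starting point for multiview pose estimation.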
A. Shafaei and J. J. Little. Real-Time Human Motion Capture with Multiple Depth Cameras. In Proceedings of the 13th Conference on Computer and Robot Vision (CRV), Victoria, Canada, 2016.