
Overview

The hypothesis that image datasets gathered online "in the wild" can produce biased object recognizers, e.g. ones preferring professional photography or certain viewing angles, is studied. A new "in the lab" data collection infrastructure is proposed, consisting of a drone that captures images as it circles around objects. Its inexpensive and easily replicable nature may also lead to a scalable data collection effort by the vision community. The procedure's usefulness is demonstrated by creating a dataset of Objects Obtained With fLight (OOWL). Currently, OOWL contains 120,000 images of 500 objects, making it the largest "in the lab" image dataset available when both the number of classes and the number of objects per class are considered.

OOWL "In the Lab" [Online Preview]


OOWL "In the Wild"


Demo Video

Publications

CVPR 2019

PIEs: Pose Invariant Embeddings
Chih-Hui Ho, Pedro Morgado, Amir Persekian, Nuno Vasconcelos
Website Paper Supplementary Material BibTex Poster

Catastrophic Child’s Play: Easy to Perform, Hard to Defend Adversarial Attacks
Chih-Hui Ho*, Brandon Leung*, Erik Sandström, Yen Chang, Nuno Vasconcelos (*Indicates equal contribution)
Website Paper Supplementary Material BibTex Poster Turk Dataset

People

Current OOWLers

Brandon Leung, Pedro Morgado, Bo Liu, Chih-Hui Ho, Amir Persekian, Nuno Vasconcelos

OOWL Alumni

Yen Chang, Erik Sandström, David Orozco

Acknowledgements

This work was partially funded by NSF awards IIS-1546305 and IIS-1637941, a gift from Northrop Grumman, and NVIDIA GPU donations.