

The problem of adversarial CNN attacks is considered, with an emphasis on attacks that are trivial to perform but difficult to defend against. A framework for the study of such attacks is proposed, based on real-world object manipulations. Unlike most prior work, this framework supports the design of attacks with both small and large image perturbations, implemented by camera shake and pose variation. A setup is proposed for collecting such perturbations and determining their perceptibility. It is argued that perceptibility depends on context, and a distinction is made between imperceptible and semantically imperceptible perturbations. While the former survive image comparisons, the latter are perceptible but have no impact on human object recognition. A procedure based on Mechanical Turk experiments is proposed to determine the perceptibility of perturbations, and a dataset covering both perturbation classes, enabling replicable studies of object-manipulation attacks, is assembled. Experiments with defenses based on many datasets, CNN models, and algorithms from the literature elucidate the difficulty of defending against these attacks -- in fact, none of the existing defenses is found to be effective against them. Better results are achieved with real-world data augmentation, but even this is not foolproof. These results confirm the hypothesis that current CNNs are vulnerable to attacks implementable even by a child, and that such attacks may prove difficult to defend against.
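As a rough illustration of the evaluation implied above, an object-manipulation attack can be scored by how often a perturbed view (camera shake or pose change) flips a model's prediction away from the label assigned to the canonical view. This is a minimal sketch under that assumption; the function and example labels are hypothetical and are not taken from the paper's released code.

```python
# Hypothetical metric: fraction of perturbed views whose predicted class
# differs from the prediction on the canonical (unperturbed) view.
def attack_success_rate(canonical_pred, perturbed_preds):
    """Return the fraction of perturbed-view predictions that flip
    relative to the canonical-view prediction."""
    if not perturbed_preds:
        return 0.0
    flips = sum(1 for p in perturbed_preds if p != canonical_pred)
    return flips / len(perturbed_preds)

# Illustrative example: the model calls the canonical view "mug", but
# three of four pose-perturbed views are classified as something else.
rate = attack_success_rate("mug", ["mug", "bowl", "vase", "bowl"])
print(rate)  # 0.75
```

A defense (e.g., real-world data augmentation) would then be judged by how much it lowers this rate across objects and perturbation classes.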

OOWL "In the Lab" [Preview]

Demo Video

CVPR 2019

Catastrophic Child’s Play: Easy to Perform, Hard to Defend Adversarial Attacks

Chih-Hui Ho*, Brandon Leung*, Erik Sandström, Yen Chang, Nuno Vasconcelos
(*Indicates equal contribution)
Paper Supplementary Material Poster Turk Dataset
@InProceedings{Ho_2019_CVPR,
		author = {Ho, Chih-Hui and Leung, Brandon and Sandstrom, Erik and Chang, Yen and Vasconcelos, Nuno},
		title = {Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks},
		booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
		month = {June},
		year = {2019}
}


This work was partially funded by NSF awards IIS-1546305 and IIS-1637941, a gift from Northrop Grumman, and NVIDIA GPU donations.