Avoiding Detection with Adversarial T-shirts | by Param Raval | Aug, 2020

In [1], the authors achieve a benchmark deception success rate of 57% in real-world settings. This is not, however, the first attempt to deceive an object detector. In [2], the authors designed a method for their model to learn and generate patches that could deceive the detector. This patch, when held on a piece of cardboard (or any other flat surface), could evade a person detector, albeit with a success rate of only 18%.

“Confusing” or “fooling” a neural network like this is called a physical adversarial attack, or real-world adversarial attack. Such attacks, originally based on intricately altered pixel values, exploit what the network has learned from its training data to make it label the object as “unknown” or simply ignore it.
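To make the idea of "intricately altered pixel values" concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming a PyTorch image classifier `model` and a normalised image tensor. This is not the method used in [1] or [2]; it is only an illustration of how a small, gradient-guided pixel perturbation is computed.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft a small pixel-level perturbation that pushes the model's
    prediction away from the correct label (illustrative FGSM sketch)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel in the direction that increases the loss,
    # keeping the change small enough to be hard to notice.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```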

The authors of [2] transform the images in their training data, apply an initial patch, and feed the resulting images into the detector. The object loss obtained is then back-propagated to update the patch's pixel values, with the aim of minimising the objectness score.
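A rough sketch of that optimisation loop is below. It assumes a PyTorch `detector` that returns per-box objectness scores of shape (batch, boxes) and a `data_loader` yielding image batches; both the interface and the fixed centre placement of the patch are simplifying assumptions, not the actual pipeline from [2], which places and transforms the patch on each detected person.

```python
import torch

def train_patch(detector, data_loader, steps=500, lr=0.03,
                patch_size=(3, 100, 100)):
    """Optimise an adversarial patch so that the detector's objectness
    scores drop on patched images (illustrative sketch only)."""
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for step, images in enumerate(data_loader):
        if step >= steps:
            break
        # Paste the patch onto the centre of every image in the batch.
        patched = images.clone()
        _, _, h, w = patched.shape
        ph, pw = patch.shape[1:]
        top, left = (h - ph) // 2, (w - pw) // 2
        patched[:, :, top:top + ph, left:left + pw] = patch

        objectness = detector(patched)               # assumed: (batch, boxes) scores
        loss = objectness.max(dim=1).values.mean()   # suppress the strongest detection
        optimizer.zero_grad()
        loss.backward()                              # gradient reaches only the patch
        optimizer.step()
        patch.data.clamp_(0, 1)                      # keep pixel values printable
    return patch.detach()
```

The key design point is that only the patch pixels receive gradients; the detector's weights stay frozen, so the loop searches the image space rather than the model space.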

However, apart from the low success rate of 18%, this approach is limited to rigid carriers such as cardboard and performs poorly when the captured frame shows a distorted or skewed patch. It certainly does not work well when the patch is printed on a t-shirt.

“A person’s movement can result in significantly and constantly changing wrinkles (aka deformations) in their clothes” [1]. This makes the task of developing a generalised adversarial patch even more difficult.
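To get an intuition for why such deformations hurt a rigid patch, one can simulate a few of them with standard image warps. The snippet below uses off-the-shelf torchvision transforms purely for illustration; it is not the deformation modelling used in [1].

```python
import torch
import torchvision.transforms as T

# Simulate how cloth movement distorts a printed patch: every frame sees
# a differently warped version, so a patch optimised on flat, undistorted
# images tends to lose its adversarial effect. (Illustrative only.)
deform = T.Compose([
    T.RandomPerspective(distortion_scale=0.4, p=1.0),  # skew from viewing angle
    T.RandomAffine(degrees=15, scale=(0.8, 1.1)),      # rotation and scale changes
    T.ElasticTransform(alpha=60.0),                    # wrinkle-like warping
])

patch = torch.rand(3, 300, 300)                    # stand-in for a learned patch
warped_views = [deform(patch) for _ in range(8)]   # eight simulated frames
```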