Adversarial T-shirts to evade ML person detectors

Wearing this T-shirt will help you walk past surveillance systems without getting identified

Jorge de Guzman

Neural networks are exceptionally good at recognizing objects in an image, and in many cases they have shown superhuman levels of accuracy (e.g., traffic sign recognition).

But they are also known to have an interesting property: we can introduce small changes to the input photo and have the neural network wrongly classify it as something completely different. Such attacks are known as adversarial attacks on a neural network. One important variant is the Fast Gradient Sign Method (FGSM) by Ian Goodfellow et al., described in the paper Explaining and Harnessing Adversarial Examples [2]. Properly implemented, such methods add noise that is barely perceptible to the human eye but fools the neural network classifier. The classic example from that paper is a panda image to which imperceptible noise is added, after which the classifier confidently labels it a gibbon.
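As a rough illustration, here is a minimal FGSM sketch in PyTorch. The pretrained ResNet, random input tensor, and label below are placeholders for this write-up, not anything from the papers; the attack simply nudges each pixel by epsilon in the direction of the sign of the loss gradient.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Placeholder classifier and input; a real attack would use an actual photo.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)
true_label = torch.tensor([281])  # e.g. ImageNet class 281, "tabby cat"

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.03  # perturbation budget: small enough to be hard to see
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
```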

Real-time object detectors, the backbone of automated surveillance:

Object detection is a classic problem in computer vision: given an image, you have to localize each object present in the image and classify the category it belongs to. This is usually done by training a neural network on a dataset containing a sufficient number of images in each category of interest, with the output being the location of each object along with the probability of it belonging to each category. Some of the most popular object detector models are YOLO and Faster R-CNN. Detection runs in real time and can be applied to a video feed to detect objects frame by frame.
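To give a sense of what this looks like in code, here is a small sketch using torchvision's pretrained Faster R-CNN; the input frame is a placeholder tensor, and a real surveillance pipeline would read frames from a camera instead.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained detector (COCO classes); YOLO-style models follow the same idea:
# image in, bounding boxes with class labels and confidence scores out.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # stand-in for one video frame (C, H, W)

with torch.no_grad():
    detections = detector([frame])[0]  # dict of boxes, labels, scores

# Keep only confident detections; label 1 is "person" in torchvision's COCO mapping.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8 and label.item() == 1:
        print("person at", box.tolist(), "score", round(score.item(), 2))
```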

Fooling the object detectors:

Using the adversarial attacks discussed above, researchers have successfully created images or patches that can baffle an object detector. Such patches have been used to evade detection when attached to eyeglass frames, stop signs, and pieces of cardboard. But surveillance systems view a person from a wide variety of angles, and a worn shirt wrinkles and folds as the person moves, so a rigid patch is not sufficient for the problem at hand. To handle this, the authors printed a checkerboard pattern on a T-shirt, recorded video of a person wearing it, and measured how each square of the checkerboard deformed from frame to frame. An interpolation technique called Thin Plate Spline (TPS) was then used to map the adversarial image onto those measured deformations, effectively replacing the checkerboard with a patch that stays adversarial as the cloth moves. The resulting T-shirt, designed by Kaidi Xu and colleagues at Northeastern University, the MIT-IBM Watson AI Lab, and MIT, baffled a variety of object detection methods, including YOLOv2 and Faster R-CNN, which failed to detect people wearing it.
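The sketch below shows the general idea of a TPS warp, using SciPy's RBF interpolator with a thin-plate-spline kernel. The control points and the patch image are made-up stand-ins, not the authors' data or code; in the paper the control points come from tracking the printed checkerboard across video frames.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points on the flat (undeformed) patch, e.g. checkerboard corners.
flat_pts = np.array([[0, 0], [0, 99], [99, 0], [99, 99], [50, 50]], dtype=float)
# Where those same corners were tracked in one video frame (cloth has wrinkled).
frame_pts = np.array([[5, 3], [2, 104], [97, 6], [101, 98], [55, 47]], dtype=float)

# Fit a TPS mapping from frame coordinates back to flat-patch coordinates,
# so every frame pixel knows which patch pixel it should display.
tps = RBFInterpolator(frame_pts, flat_pts, kernel="thin_plate_spline")

adv_patch = np.random.rand(100, 100, 3)  # placeholder adversarial image

# Render the warped patch over a 110x110 region of the frame.
ys, xs = np.mgrid[0:110, 0:110]
frame_coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
src = tps(frame_coords)                           # corresponding flat-patch coords
src = np.clip(np.round(src).astype(int), 0, 99)   # nearest-neighbour lookup
warped = adv_patch[src[:, 0], src[:, 1]].reshape(110, 110, 3)
print(warped.shape)  # (110, 110, 3): the patch as it would appear on the cloth
```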

Results:

After printing these shirts, the researchers put them into action in front of surveillance cameras and recorded the results on video. The YOLOv2 model was fooled in 57 percent of frames in the physical world and 74 percent in the digital world, a substantial improvement over the previous state of the art's 18 percent and 24 percent.

Conclusion:

This paper provides useful insight into how adversarial perturbations and noise can be deployed in real-life scenarios. Images generated by this method can also be used to train classifiers, producing more robust models that may prevent object detectors from being tricked in the future.

[1] Kaidi Xu, Gaoyuan Zhang, Sijia Liu et al. Adversarial T-shirt! Evading Person Detectors in A Physical World. arXiv, 2019.

[2] Ian Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR, 2015.