
This Real-Life "Invisibility Cloak" Hides You From Person-Detecting Machine Learning Models

Designed to attack detectors rather than classifiers, these wearable "universal patches" delete you right out of a machine's vision.

Gareth Halfacree
2 years ago • Machine Learning & AI

A team of researchers at the University of Maryland, College Park, working with Facebook AI, has developed a real-life "invisibility cloak": a sweater that renders you a ghost to common person-detection machine learning models.

"This paper studies the art and science of creating adversarial attacks on object detectors," the team explains of its work. "Most work on real-world adversarial attacks has focused on classifiers, which assign a holistic label to an entire image, rather than detectors which localize objects within an image. Detectors work by considering thousands of 'priors' (potential bounding boxes) within the image with different locations, sizes, and aspect ratios. To fool an object detector, an adversarial example must fool every prior in the image, which is much more difficult than fooling the single output of a classifier."

If you've ever wanted to disappear, this real-life invisibility cloak can help, for computer vision at least. (📹: Wu et al)

More difficult, certainly, but, as the researchers have proven, not impossible: as part of a broader investigation into adversarial attacks on detectors, the team succeeded in creating a piece of clothing with the unusual effect of making its wearer invisible to a person-detection model.

"This stylish pullover is a great way to stay warm this winter," the team writes, "whether in the office or on-the-go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns the evade most common object detectors. In [our] demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective."

Initially, the team's work focused on simulated attacks: generating an "adversarial pattern" that could be applied to detected objects within a given image to prevent the model from recognizing them. The key was the creation of a "universal adversarial patch": a single pattern that could be applied over any object to hide it from the model. While it's easy to swap patterns out in simulation, it's harder in the real world, especially when you've printed the pattern onto a sweater.
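In training terms, "universal" means the same pattern is optimized across many images and many placements rather than tailored to one photo. Continuing the toy sketch above, again with an assumed stand-in detector and made-up sizes, that might look roughly like this:

```python
import torch

torch.manual_seed(0)

# Same toy stand-in detector as in the earlier sketch (hypothetical): a fixed
# random map from pixels to per-prior 'person' scores.
W = torch.randn(500, 3 * 64 * 64) * 0.01

def prior_scores(image):
    return torch.sigmoid(W @ image.flatten())

# A handful of different scenes standing in for a whole training set.
scenes = [torch.rand(3, 64, 64) for _ in range(8)]

# One patch shared across every scene and every placement -- that sharing is
# what makes it "universal" rather than tailored to a single image.
patch = torch.rand(3, 24, 24, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(300):
    loss = 0.0
    for scene in scenes:
        # Paste the same patch at a random position each time, so the pattern
        # can't rely on where (or in which image) it appears.
        y, x = torch.randint(0, 64 - 24, (2,)).tolist()
        attacked = scene.clone()
        attacked[:, y:y + 24, x:x + 24] = patch.clamp(0, 1)
        scores = prior_scores(attacked)
        loss = loss + scores.max() + 0.1 * scores.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Averaging the objective over random placements and scenes is what encourages a single pattern to transfer beyond any one location or image.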

While the team's sweater is perhaps the most impressive demonstration of the attacks, it's not the only one: 10 patches were printed onto posters and deployed at 15 locations, degrading the performance of detectors used on images where the posters were present. After testing the concept on "paper dolls," which could be dressed up in different patches, the team then produced the wearable "invisibility sweater" range of clothing, finding that the garments "significantly degrade the performance of detectors" compared with regular clothing.

For anyone hoping to become truly invisible, though, there's a catch: compared with attacks against simple classifiers, the detector attacks proved less reliable. In the wearable test, the YOLOv2-targeting adversarial sweatshirts hit around a 50 percent success rate.

More information on the project is available on the University of Maryland website, with a preprint of the paper available on Cornell's arXiv server under open-access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.