
Researchers foil people-detecting AI with an ‘adversarial’ T-shirt

Raymond Maxwell

That is to say, person-detecting AI models can be deceived by specially crafted patches attached to real-world targets.
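As a rough illustration of how a digital patch attack of this kind works, the sketch below overlays a trainable patch on an image and nudges the patch pixels to lower a detector's person confidence. This is a minimal, hypothetical PyTorch-style example, not the researchers' code; the `person_score` helper and the fixed patch placement are assumptions.

```python
import torch

def person_score(detector, images):
    # Hypothetical helper: returns the detector's maximum "person"
    # confidence for each image (details depend on the model's API).
    return detector(images)

def attack_step(detector, image, patch, optimizer, x=100, y=100):
    """One optimization step for a digital adversarial patch.

    image: (3, H, W) tensor in [0, 1], the scene containing a person
    patch: (3, h, w) tensor with requires_grad=True
    """
    patched = image.clone()
    h, w = patch.shape[1:]
    # Paste the patch onto the target region of the image.
    patched[:, y:y + h, x:x + w] = patch.clamp(0, 1)

    # Loss: the detector's confidence that a person is present.
    # Minimizing it pushes the detector toward missing the person.
    loss = person_score(detector, patched.unsqueeze(0)).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```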

Most research on adversarial attacks has involved rigid objects such as eyeglass frames, stop signs, or cardboard.

In a preprint paper, the researchers claim the adversarial T-shirt achieves success rates of up to 79% in the digital world and 63% in the physical world against the popular YOLOv2 model.
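A success rate of this kind is usually reported as the fraction of frames in which the detector no longer finds the wearer. The snippet below is a simplified sketch of that measurement; the frame list and the `detects_person` helper are assumptions, not part of the study.

```python
def attack_success_rate(frames, detects_person):
    """Fraction of frames in which the detector misses the person.

    frames: list of images of a person wearing the adversarial T-shirt
    detects_person: callable returning True if the detector (e.g. YOLOv2)
                    reports a 'person' box for the frame
    """
    misses = sum(1 for frame in frames if not detects_person(frame))
    return misses / len(frames)
```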

Notably, the university team speculates that its technique could be combined with a clothing simulation to design such a T-shirt.

The researchers behind the study note that a number of adversarial transformations are commonly used to fool classifiers, including scaling, translation, rotation, brightness adjustment, noise, and saturation adjustment.

But they say these are largely insufficient to model the deformation of cloth caused by a moving person's pose changes.
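For context, transformations like these are typically sampled at random and applied to the patch during optimization so the attack survives real-world viewing conditions. The sketch below is a hypothetical illustration of that standard pipeline, not the paper's method, and it deliberately covers only the transformations named above; it does not model cloth deformation, which is exactly the gap the researchers point out.

```python
import random
import torch
import torchvision.transforms.functional as TF

def random_transform(patch):
    """Apply a random geometric/photometric transformation to a patch tensor.

    Covers the transformations named in the article (scaling, translation,
    rotation, brightness, noise, saturation) but not cloth deformation.
    """
    # Geometric: random rotation, scaling, and translation.
    angle = random.uniform(-20, 20)
    scale = random.uniform(0.8, 1.2)
    translate = [random.randint(-10, 10), random.randint(-10, 10)]
    patch = TF.affine(patch, angle=angle, translate=translate,
                      scale=scale, shear=[0.0])

    # Photometric: brightness and saturation jitter plus additive noise.
    patch = TF.adjust_brightness(patch, random.uniform(0.7, 1.3))
    patch = TF.adjust_saturation(patch, random.uniform(0.7, 1.3))
    patch = (patch + 0.03 * torch.randn_like(patch)).clamp(0, 1)
    return patch
```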
