Pixellation popped: AI can ID you, even after PhotoShop phuzzing

Like humans, machines can ID obfuscated faces - only faster

Pixellating images turns out to be a dodgy way of obfuscating identities, say researchers from the University of Texas and Cornell Tech who reckon computers can be trained to identify the “protected” people.

There's an "if" here, namely that pixellation can be popped if an "attacker” has a set of clear shots to practice on. If they do, and the AI has access to to those shots, forget about facial blocking as an anonymity mechanism.

That mightn't sound like much – a human operator could achieve the same thing – but because it's computerised, it can be automated, and therefore done faster, at all hours of the day or night.

In a paper on arXiv, Richard McPherson and his collaborators (Reza Shokri and Vitaly Shmatikov of Cornell Tech) say images thought protected by pixellation (mosaicing), blurring (as used by YouTube), or even encryption of JPEG coefficients (a scheme called P3, “Privacy-Preserving Photo Sharing”) can still have their contents identified.
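
For the unfamiliar, mosaicing is simple enough to sketch in a few lines: the image is carved into a grid of boxes and each box is replaced with its average colour. The snippet below is a minimal illustration using Pillow and NumPy, not the researchers' code, and the box size is an arbitrary choice.

```python
import numpy as np
from PIL import Image

def pixellate(img: Image.Image, box: int = 8) -> Image.Image:
    """Mosaic an image by replacing each box*box block with its mean colour."""
    arr = np.asarray(img, dtype=np.float32)
    h, w = arr.shape[:2]
    for y in range(0, h, box):
        for x in range(0, w, box):
            block = arr[y:y + box, x:x + box]
            arr[y:y + box, x:x + box] = block.mean(axis=(0, 1))
    return Image.fromarray(arr.astype(np.uint8))

# Hypothetical usage: obscure a face photo the way many outlets do.
# pixellate(Image.open("face.jpg"), box=16).save("face_mosaic.jpg")
```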

They believe they can “train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques”. They claim accuracy of between 50 per cent and 95 per cent, depending on the dataset and the obfuscation technique.

Their AI doesn't need a human operator to point out important features: “we do not need to specify the relevant features in advance. We do not even need to understand what exactly is leaked by a partially encrypted or obfuscated image. Instead, neural networks automatically discover the relevant features and learn to exploit the correlations between hidden and visible information”, they write.
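
To give a flavour of what that looks like in practice, here is an illustrative sketch, not the authors' architecture: a garden-variety convolutional network trained directly on pixellated MNIST digits, left to work out for itself which residual features still identify the content. The pixellation routine and network shape here are our own assumptions for demonstration.

```python
import numpy as np
import tensorflow as tf

def pixellate_batch(imgs: np.ndarray, box: int = 4) -> np.ndarray:
    """Mosaic a batch of 28x28 grayscale images by block-averaging."""
    out = imgs.astype(np.float32).copy()
    for y in range(0, 28, box):
        for x in range(0, 28, box):
            block = out[:, y:y + box, x:x + box]
            out[:, y:y + box, x:x + box] = block.mean(axis=(1, 2), keepdims=True)
    return out

# MNIST digits stand in for the obfuscated content an attacker wants to identify.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = pixellate_batch(x_train / 255.0)[..., np.newaxis]
x_test = pixellate_batch(x_test / 255.0)[..., np.newaxis]

# An ordinary small CNN: it is never told which features survive pixellation.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```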

The only difficult part, in dealing with photos, is that the attacker would need to trawl social media for a “set of possible faces that may appear in a given photo”.

The attack can also work for recovering text or handwriting that's been obfuscated, they reckon, as long as the attacker has access to a training dataset – and such datasets exist online as benchmarks for image recognition models.

The paper includes the neural network architectures used to train the boffins' models, and they note these would be useful as benchmarks for developers working on better ways to protect privacy in images. ®
