Apple's NeuralHash CSAM Problems go beyond Privacy Violation

If you thought ransomware was bad, you ain't seen nothing yet

What’s Going On?

How Does Apple’s NeuralHash CSAM Work?

How big of a privacy concern is this?

In response to some of these concerns, Apple announced that it would only target images that are flagged in multiple countries. That’s probably not enough to assuage all concerns. Legislatures are perfectly capable of mandating changes to this system that Apple wouldn’t be allowed to disclose. Once this system is in place, what is stopping governments from forcing other sets of hashes into the matching database?

Of course, the specifics of Apple’s NeuralHash CSAM implementation make matters much worse than just risks of privacy violations.

Worse? How can it get worse than that?

The biggest problem with NeuralHash CSAM detection is that NeuralHash was never supposed to be a cryptographic hash function. It is a perceptual hash, so finding collisions was always going to be easy.
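As a concrete illustration of how cheap collisions are to manufacture, here is a minimal sketch of the standard technique: gradient-based perturbation of an image against a differentiable surrogate of the perceptual hash. The `perceptual_hash` function and `target_hash` tensor below are hypothetical placeholders, not Apple’s actual model; researchers demonstrated essentially this kind of attack against the reverse-engineered NeuralHash model shortly after it was extracted from iOS.

```python
# A minimal sketch of a perceptual-hash collision attack (PyTorch).
# `perceptual_hash` is a hypothetical differentiable surrogate of the target
# hash that returns pre-threshold logits; the sign of each logit is a hash bit.
import torch

def collide(source_img: torch.Tensor,      # image in [0, 1], shape (C, H, W)
            target_hash: torch.Tensor,     # target bits encoded as +1 / -1
            perceptual_hash,               # placeholder surrogate model
            steps: int = 1000,
            lr: float = 1e-2,
            eps: float = 0.05) -> torch.Tensor:
    """Nudge `source_img` until its hash bits match `target_hash`,
    while keeping the perturbation small enough to be visually negligible."""
    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (source_img + delta).clamp(0, 1)
        logits = perceptual_hash(adv)
        # Hinge loss: push every logit past a small margin on the target side.
        loss = torch.relu(0.1 - logits * target_hash).mean()
        if loss.item() == 0:               # all bits match with margin
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)       # keep the change imperceptible
    return (source_img + delta).detach().clamp(0, 1)
```

The point is that nothing in the design makes this optimization hard: it is the same machinery used for ordinary adversarial examples.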

It’s kind of hard to overstate the potential for abuse here.

The “send known CSAM” attack has existed for a while, but it has never made much sense for the attacker. This technology, however, enables a new class of attack: “send legal porn, collided to match CSAM perceptual hashes.”

With the previous status quo:

  1. The attacker faces charges for possessing and distributing child pornography.
  2. The victim may be investigated and charged with possession of child pornography only if law enforcement is somehow alerted (which requires work, and can be traced back to the attacker).

That is a poor risk/reward payoff for the attacker: the risk far outweighs the reward, so it doesn’t happen (often).

BUT, with the new status quo of lossy, on-device CSAM scanning and automated law-enforcement alerting:

  1. The attacker never sends CSAM, only material that collides with CSAM hashes. At worst, they are looking at charges under the CFAA, or for extortion and blackmail.
  2. The victim will be automatically reported to law enforcement via Apple’s “Safety Voucher” system and investigated for possessing child pornography, especially if the attacker collides legal pornography that can fool a reviewer inspecting the ‘visual derivative’.

This time the risk/reward payoff favors the attacker: the reward dramatically outweighs the risk, because you can get someone in trouble for CSAM without ever touching CSAM yourself.

If you think ransomware is bad, just imagine CSAM-collision ransomware. Your files are replaced with legal pornography designed specifically to collide with CSAM hashes and trigger automated alerting to law enforcement. Pay XX monero within the next 30 minutes, or, quite literally, you may go to jail and face child-pornography charges until you spend $XXX,XXX on lawyers and expert testimony to demonstrate your innocence.

Another delivery mechanism is simply sending collided photos over WhatsApp: WhatsApp allows up to 30 media images in a single message, and it has settings that automatically save incoming images to your iCloud photo library.

And if Apple and WhatsApp are aware of these flaws and actively try to screen for collided images? That just opens up yet another attack vector: hash-collision-based DDoS.

Under this system, you can effectively DDoS the organizations that verify whether flagged content is actually CSAM by flooding them with legitimate, low-res pornography whose hashes have been modified to collide. You can trigger real investigations by pushing such collided images onto victims via WhatsApp or social engineering, and you can burden Apple itself by sending obvious spam that still has to be run through this computationally expensive hash-checking pipeline.

Can this flaw be fixed?

Short Answer: Fixable by modifying the NeuralHash algorithm? No. Fixable by dropping the NeuralHash rollout plans? Yes.

Slightly-less-short Answer: A few groups have suggested layering several different perceptual hash systems, on the assumption that it’s difficult to find an image that collides in all of them. This is pretty suspect: there’s a reason we hold decades-long competitions to select secure cryptographic hash functions. A function can’t generally achieve cryptographic properties (like collision resistance or hardness of preimage computation) without being specifically designed for them. By its nature, any perceptual hash function is trivially not collision resistant, and any set of neural models is highly unlikely to be preimage resistant.
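To see why layering offers so little protection, note that an attacker who can run a gradient-based attack against one differentiable hash surrogate (as in the sketch earlier) can simply sum the per-hash losses and optimize against all of them at once. A minimal sketch, where `hash_fns` and `targets` are again hypothetical surrogate models and target bit-vectors, not any real deployed system:

```python
import torch

def collide_all(source_img: torch.Tensor,
                hash_fns: list,        # hypothetical differentiable surrogates
                targets: list,         # one +1/-1 target bit-vector per hash
                steps: int = 2000,
                lr: float = 1e-2,
                eps: float = 0.05) -> torch.Tensor:
    """Perturb `source_img` so every layered perceptual hash matches its target."""
    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (source_img + delta).clamp(0, 1)
        # Layered defenses just become one joint objective for the attacker.
        loss = sum(torch.relu(0.1 - fn(adv) * t).mean()
                   for fn, t in zip(hash_fns, targets))
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)   # keep the perturbation imperceptible
    return (source_img + delta).detach().clamp(0, 1)
```

Each extra hash only adds one more term to the loss; it does not change the fundamental fact that a perceptual hash is built to map similar-looking images to the same output, which is the opposite of what collision resistance requires.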

References

Cited as:

@article{mcateer2021ancpgbpv,
    title = "Apple's NeuralHash CSAM Problems go beyond Privacy Violation",
    author = "McAteer, Matthew",
    journal = "matthewmcateer.me",
    year = "2021",
    url = "https://matthewmcateer.me/blog/apple-neuralhash-csam-problems/"
}

If you notice mistakes or errors in this post, don’t hesitate to contact me at [contact at matthewmcateer dot me] and I will be very happy to correct them right away! Alternatively, you can follow me on Twitter and reach out to me there.

See you in the next post 😄
