
Photos in the database are classified by content. Only images classified as A1 (A: prepubescent minor, 1: sex act) are included in the hash set on iOS. So this doesn't even include A2 (2: lascivious exhibition), B1 or B2 (B: pubescent minor), let alone images which are in the database and aren't classified as any of A1, A2, B1 or B2.
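
As a minimal sketch of that filtering, assuming a hypothetical database of classified entries (the type and function names here are illustrative, not Apple's or NCMEC's actual API):

    import Foundation

    // Hypothetical labels mirroring the classification scheme described above
    // (A: prepubescent minor, B: pubescent minor; 1: sex act, 2: lascivious exhibition).
    enum Classification {
        case a1, a2, b1, b2, unclassified
    }

    // Hypothetical database record pairing a perceptual hash with its label.
    struct DatabaseEntry {
        let perceptualHash: Data
        let classification: Classification
    }

    // Build the on-device hash set from A1-classified entries only;
    // A2, B1, B2 and unclassified images are excluded.
    func buildOnDeviceHashSet(from database: [DatabaseEntry]) -> Set<Data> {
        Set(database.filter { $0.classification == .a1 }
                    .map { $0.perceptualHash })
    }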

While I've no doubt there are a lot of "before and after" images in the database (which are still technically CSAM even if they're not strictly child porn), and possibly many innocuous images, those would not have been flagged as "A1".

I'm sure there are probably still a few images flagged as A1 that shouldn't be in the database at all, but that number is going to be small. How many of these incorrectly flagged images are going to make their way into your photo library? One? Two?

You need 30 matches in order for your account to be flagged.
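
A rough sketch of that threshold rule, reusing the hypothetical hash set above (the value 30 is the stated threshold; the counting function itself is an assumption):

    // Hypothetical check: count how many of a library's photo hashes appear
    // in the on-device hash set, and flag only at or above the threshold.
    let matchThreshold = 30

    func exceedsThreshold(libraryHashes: [Data], hashSet: Set<Data>) -> Bool {
        let matches = libraryHashes.filter { hashSet.contains($0) }.count
        return matches >= matchThreshold
    }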



If someone is deliberately targeting you with them, 30 isn't very hard to reach.


I think it’s implausible that someone can become aware of 30 images which are miscategorised as A1 CSAM. How would this malicious entity discover them? What’s the likelihood that this random array of odd images could make it into a target’s photo library?

And what’s the likelihood that a human reviewer will see these 30 odd images and press the “yep it’s CSAM” button?

More likely, as soon as Apple’s human reviewers see these oddball images, they’re going to investigate, mark those hashes as invalid, and contact their upstream data supplier, who will fix the data, making those implausible images useless.



