
>It compares on-device image hashes with hashes of known CP images.

No, it compares against hashes supplied by a government-designated organization (NCMEC). Apple has no way to verify the hashes are in fact CP, since it only ever handles the hashes themselves, never the source images.



But once it reaches some undefined threshold, it's Apple who reviews the content, not the government. They don't have a way to verify the hashes, but they would have a way to verify the content which matches the hashes.

Presumably at this stage is where malicious hashes would be detected and removed from the database.


> it's Apple who reviews the content, not the government.

An unaccountable "Apple Employee" who is likely (in the US and other countries) to be a LEO themselves will see a "visual derivative" aka a 50x50px greyscale copy of your content.

There is no mechanism to prevent said "employee" from hitting report 100% of the time, and no recourse if they falsely accuse you. The system is RIPE for abuse.

>Presumably at this stage is where malicious hashes would be detected and removed from the database.

Collision attacks have already been demonstrated. I could produce a large number of false positives by modifying legal adult porn to collide with the NeuralHash values in the database. Anyone could spread these images on adult sites. Apple "employees" that "review" the "visual derivatives" will then, even when acting honestly, refer you for prosecution.
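(For anyone wondering how feasible that is: here's a toy sketch against a simple average hash, not Apple's actual NeuralHash; the published NeuralHash collisions used gradient-based perturbations so the changes stay invisible, but the idea of nudging an image until its hash equals an arbitrary target is the same. Everything below is made up for illustration.)

    # Toy sketch: force an image's *average hash* (not Apple's NeuralHash) to
    # equal an arbitrary 64-bit target by nudging block brightness. All names
    # and values here are invented for illustration.
    import numpy as np

    def ahash(img):
        # 8x8 block means, thresholded at the global mean -> 64 hash bits
        h, w = img.shape                              # assumes h, w divisible by 8
        small = img.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
        return (small >= small.mean()).astype(np.uint8).ravel()

    def collide(img, target, step=8, max_iter=200):
        # greedily brighten/darken the blocks whose hash bit is still wrong
        img = img.astype(float).copy()
        h, w = img.shape
        bh, bw = h // 8, w // 8
        for _ in range(max_iter):
            bits = ahash(img)
            if np.array_equal(bits, target):
                return img
            for i in np.flatnonzero(bits != target):
                r, c = divmod(int(i), 8)
                img[r*bh:(r+1)*bh, c*bw:(c+1)*bw] += step if target[i] else -step
            img = np.clip(img, 0, 255)
        return img

    rng = np.random.default_rng(0)
    photo = rng.integers(0, 256, size=(512, 512)).astype(float)   # a "legal" image
    target = rng.integers(0, 2, size=64)                          # hash of a DB entry
    forged = collide(photo, target)
    print(np.array_equal(ahash(forged), target))                  # expect True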


> and no recourse if they falsely accuse you

Of course there is. The judicial system.

(Although, to be clear. I don't live in America and I might be more worried about this if I did.)


Doesn't that put you in the position of suing Apple after you've:

1) Spent who knows how long in jail

2) Lost your job

3) Defaulted on your mortgage

4) Been divorced

5) Had your reputation ruined

Money can't fix everything, and trusting the courts to make you whole years after the fact is a foolish strategy.


Yes, but no one wants the finger pointed at them. Even if innocence is proven, someone will go through your files and you will have to deal with the law.

The recourse should be before this reaches the law.


> Presumably at this stage is where malicious hashes would be detected and removed from the database.

How, if 1) the original content is never provided to Apple, and 2) the offending content on consumer devices is never uploaded to Apple?


You're misunderstanding the proposed system. The entire system as described only runs on content that's in the process of being uploaded to Apple -- it's part of the iCloud Photos upload pipeline. Apple stores all this content encrypted on their servers, but it's not end-to-end encrypted, so they hold a key.

This entire system was a way for Apple to avoid decrypting your photos on their servers and scanning them there.

Hypothetically, if Apple implemented this system and switched to E2E for the photo storage, you'd be more private overall because Apple would be incapable of seeing anything about your photos until you tripped these hash matches, as opposed to the status quo where they can look at any of your photos whenever they feel like it. (And the hash matches only include a "visual derivative" which we assume means a low res thumbnail.) I say hypothetically because Apple never said this was their plan.
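To make that flow concrete, here's a very rough simulation of its shape. All names are invented and the real cryptography (NeuralHash, private set intersection, threshold secret sharing) is not reproduced; the point is just that matching happens only on photos queued for iCloud upload, the server learns nothing below the threshold, and reviewers only ever get low-res "visual derivatives".

    # Rough simulation of the described flow; names and helpers are invented.
    import hashlib

    THRESHOLD = 30                                   # Apple's stated match threshold

    known_hashes = {hashlib.sha256(b"known bad image").hexdigest()}  # NCMEC-supplied

    def visual_derivative(photo_bytes):
        return photo_bytes[:64]                      # stand-in for a low-res thumbnail

    def human_review(derivatives):
        print(len(derivatives), "visual derivatives unlocked for manual review")

    def upload_to_icloud(photos):
        vouchers = []
        for p in photos:
            h = hashlib.sha256(p).hexdigest()        # stand-in for a perceptual hash
            if h in known_hashes:                    # on-device check, only at upload
                vouchers.append(visual_derivative(p))
        # server side: vouchers stay opaque until the account crosses THRESHOLD
        if len(vouchers) >= THRESHOLD:
            human_review(vouchers)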

You can argue about whether or not Apple should be doing this. But it does seem to be fairly standard in the cloud file storage industry.


I never heard that Apple was decrypting the original content at all. That implies that there is a team at Apple looking at child pornography all day. Not sure how that's even legal. It was my understanding that CSAM systems were simply hash based and 'hits' were reported to authorities. Do you have a source for them decrypting information?


Here's a summary of how their proposed system was supposed to work: https://educatedguesswork.org/posts/apple-csam-intro/ (if you want to skim it, search for mentions of "manual review" and "visual derivative")

I suspect the key is that there'd be a team verifying that something is actually child pornography, because the system uses a perceptual hash rather than a strict byte-for-byte comparison, so before someone looks at it they can't be certain it's a true match.
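A quick illustration of why a perceptual hash forces that step (difference hash shown here, not NeuralHash, which is a learned model; the arrays are stand-ins for downscaled photos): a cryptographic hash changes completely on a one-pixel edit, while a perceptual hash barely moves, so it also fires on images that are merely similar.

    # One changed pixel flips a cryptographic hash completely, while a
    # perceptual hash barely moves -- so near-duplicates still match, and a
    # match doesn't mean the bytes are identical.
    import hashlib
    import numpy as np

    def dhash(img):
        # compare horizontally adjacent pixels of a (pretend) 8x9 downscale
        return (img[:, 1:] > img[:, :-1]).astype(np.uint8).ravel()   # 64 bits

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 9))          # stand-in for a downscaled photo
    tweaked = img.copy()
    tweaked[3, 4] += 1                               # a tiny, invisible edit

    print(hashlib.sha256(img.tobytes()).hexdigest()[:16])      # completely different
    print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])  # from this one
    print(int((dhash(img) != dhash(tweaked)).sum()), "of 64 dhash bits changed")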


No thanks, I'll give up my iphone before it gets to this stage. Not an experiment that I'm willing to partake in.


There was a manual review step if the hashes triggered.


Exactly. Apple doesn't want to be in the unenviable position of making that judgement (is it a hotdog or not).

As a taxpayer and customer, I concur. I'm glad someone is doing that job. But I don't want it to be a corporation.


A third party reviewer would confirm the images weren't just hash conflicts, then file a police report, according to the documentation.


>third party reviewer

A "third party" paid by apple who is totally-not-a-cop who sees a 50x50pz grayscale "image derivative" is in charge of hitting "is CP" or "Is not CP".

I don't understand how anyone can have faith in such a design.


That's incorrect. That's the Apple reviewer. After that, it goes to further review by NCMEC, where it's verified. NCMEC is the only one legally allowed to verify it fully, and they're the ones that file the police report.

So, to get flagged, you need many hash collisions destined for iCloud. Then, to get reported, some number must be false positives in Apple's review, and then some number must somehow slip past the full review by NCMEC.
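To put a rough number on "many": if each photo had an independent chance of a false match (the rate below is made up; the threshold of 30 is the figure Apple cited), the odds of an innocent account crossing the threshold are a binomial tail, which is effectively zero.

    # Back-of-envelope with assumed numbers: per-image false-match rate p,
    # n photos uploaded, report threshold t.
    # P(innocent account flagged) = P(X >= t) for X ~ Binomial(n, p),
    # computed in log space to avoid underflow.
    from math import exp, lgamma, log

    def log_pmf(n, k, p):
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    def p_flagged(n_photos, p_false_match, threshold, extra_terms=500):
        top = min(threshold + extra_terms, n_photos)
        return sum(exp(log_pmf(n_photos, k, p_false_match))
                   for k in range(threshold, top + 1))

    # 100,000 photos, 1-in-a-million per-image false-match rate, threshold 30:
    print(p_flagged(100_000, 1e-6, 30))   # vanishingly small, if matches are independent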


I envy your naïveté. My world would be so much less complex with your child-like acceptance.


You’re assuming quite a bit here. I never claimed to have faith in the system, and there could be problems with the review process, but it’s best to talk about how things are, from an informed perspective.


Also, can we create images that are not CSA but whose hashes match known CSA?

https://www.theverge.com/2017/4/12/15271874/ai-adversarial-i...

If we can, then hypothetically all it takes to get an important person arrested, jailed for years, and their life ruined by Apple is planting such non-CSA images on their iPhone.

Disclaimer: I buy Apple products.


I have another problem with this. In a lot of jurisdictions, virtual CSA images are legal, i.e. cartoons and images created entirely in CG.

These images can be 100% indistinguishable from the real thing. Without knowing the source of the images that they are putting in the database, how do they know the images are actually illegal?



