- Parth Maniar
A few days ago, Apple announced that it would begin checking content uploaded to iCloud Photos - the company's photo backup/synchronization service - against a database of hashes of known CSAM material. This is fine on its own; however, the system could easily be abused.
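The matching step can be sketched, very loosely, as a membership check against a hash database. To be clear, this is an illustrative simplification with made-up names: Apple's real system uses NeuralHash, a perceptual hash designed to survive resizing and recompression, combined with a private set intersection protocol - not a plain cryptographic hash lookup like the one below.

```python
import hashlib

# Hypothetical blocklist of known-bad image hashes. Apple's actual system
# uses NeuralHash (a perceptual hash); SHA-256 is used here only as a
# runnable stand-in for illustration.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-flagged-image-bytes").hexdigest(),
}

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"example-flagged-image-bytes"))  # True
print(is_flagged(b"holiday-photo-bytes"))          # False
```

The point of the sketch is that the system is only as trustworthy as whatever is in `KNOWN_BAD_HASHES` - the client has no way to verify what those hashes actually represent.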
So what happens when a government says “We want you to add these hashes of government-critical memes to your database”? Apple says it would “refuse such demands”; I doubt it could.
One of the first issues I noticed is with their marketing. Apple argues that, because the detection happens on your device rather than on their servers, it's more privacy-friendly. But Apple can already view uploaded data: iCloud does not use end-to-end encryption - in fact, Apple even dropped plans to add it after the FBI complained.
This is all further evidence that Apple only cares about privacy when it can market it. Explaining to the average consumer that you use “end-to-end encryption” doesn't increase sales; most people don't even know what that means. And if it doesn't increase sales, why bother in the first place?
The only way to guarantee that such a powerful tool isn't abused and doesn't fall into the wrong hands is to never create it.
- Apple, 2016