Apple's New Privacy Features 2021
New privacy announcements?
To what extent is AI about to become an all-powerful arbiter of what's right and wrong?
These are the thorny questions currently being debated as Apple announces the rollout of a controversial, some would say dystopian, upcoming iOS update.
Join us today as we peel back the veil and ask why Apple is secretly scanning your photos. Apple loves to paint itself as the one big tech brand that actually cares about your privacy.
Unlike, say, Google or Facebook, whose entire business models rely on siphoning your precious data to make a fast buck, Apple encourages its stans to feel smug that their data will never be used to fuel a never-ending cascade of spam.
This is why Apple’s latest announcement has caused several self-styled champions of liberty to speak out. So what’s the new announcement all about?
As part of an industry-wide effort to wipe out abusive and illegal practices online, Apple's forthcoming iOS update will allow the company to compare private images stored on iPhones, iPads, and Macs against a database of known Child Sexual Abuse Material, also known as CSAM.
The idea is to identify active child abusers, who’ve long exploited the internet’s potential for anonymity as a means of disseminating their horrific and abusive materials.
If a match between an iPhone photo and a known image of abuse is made, that user's account will be suspended and referred to law enforcement.
Most people probably have no issue identifying dangerous active child molesters and referring them to the proper authorities.
However, the real question is: by which mechanism are they identified and is that system itself ripe for abuse? Here’s Apple’s explanation, straight from the horse’s mouth.
‘Before an image is stored in iCloud Photos,’ reads the company’s statement, ‘an on-device matching process is performed for that image against the unreadable set of known CSAM hashes.
‘This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result.’
‘Private set intersection (PSI) allows Apple to learn if an image hash matches the known CSAM image hashes, without learning anything about image hashes that do not match.
‘PSI also prevents the user from learning whether there was a match.’ Hashing, in this case, refers to the cryptographic practice of converting an image into a unique string of numbers, a kind of digital fingerprint.
Much the same fingerprint-matching approach is used by YouTube to automatically spot music piracy, by the way, so the idea is nothing new. Apple calls its latest refinement of the technique ‘NeuralHash’.
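To make ‘hashing’ concrete, here's a minimal sketch using SHA-256, a standard cryptographic hash. This is purely an illustration, not anything Apple actually ships: change a single byte of the input and the fingerprint changes beyond recognition.

```python
# Minimal illustration of a cryptographic hash (SHA-256).
# A generic example for explanation only, not Apple's NeuralHash.
import hashlib

original = b"pretend these bytes are a photo"
tweaked = b"pretend these bytes are a photo!"  # one extra byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
# The two digests share essentially nothing, even though the inputs are almost
# identical. That brittleness is exactly why a plain cryptographic hash would
# be easy to dodge by lightly editing an image, and why Apple built a
# perceptual variant instead.
```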
NeuralHash, in theory at least, can not only match photos with an established database of CSAM images but also automatically spot any sneaky workarounds used by abusers to try and cheat the system.
For instance, bad actors might crop, rotate, or otherwise manipulate the offending images in order to escape detection; a perceptual hash like NeuralHash is designed to survive exactly those edits, as the sketch below illustrates.
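NeuralHash itself is proprietary, so as a stand-in here's a toy ‘average hash’, one of the simplest perceptual hashes around: it shrinks a picture to an 8x8 grid and records which cells are brighter than average, so a small crop or tweak flips only a handful of bits rather than scrambling the whole fingerprint. To be clear, this is an illustration of the general idea, not Apple's algorithm.

```python
# A toy perceptual hash ("average hash"), purely to illustrate the concept.
# Apple's NeuralHash is a proprietary, neural-network-based perceptual hash;
# this is emphatically not it.
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to a size x size grayscale grid, then set one bit for every
    cell brighter than the grid's mean. Small edits flip only a few bits."""
    small = img.convert("L").resize((size, size), Image.Resampling.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes: small means 'looks alike'."""
    return bin(a ^ b).count("1")

# A synthetic "photo" (a diagonal gradient) and a slightly cropped copy of it.
original = Image.new("L", (256, 256))
original.putdata([(x + y) // 2 for y in range(256) for x in range(256)])
cropped = original.crop((4, 4, 252, 252))  # mimics a sneaky four-pixel crop

changed = hamming_distance(average_hash(original), average_hash(cropped))
print(f"Bits changed by the crop: {changed} / 64")  # stays very small
```

Compare that with the SHA-256 example above, where the same four-pixel crop would produce a completely unrelated digest.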
If a match is made, another cryptographic principle called ‘threshold secret sharing’ comes into play. This means that if a certain, undisclosed, number of matches is flagged up by NeuralHash's automatic process, the images can then be decrypted and potentially escalated to a human reviewer.
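Apple hasn't published its construction in code, but threshold secret sharing is a well-established cryptographic idea, and a minimal Shamir-style sketch with made-up numbers shows the core property: a secret, think of it as the key protecting the flagged material, simply cannot be reconstructed until enough shares exist.

```python
# Minimal Shamir secret sharing over a prime field: a toy illustration of
# threshold secret sharing in general, not Apple's specific construction.
import random

PRIME = 2**127 - 1  # a Mersenne prime, plenty large for a toy secret

def make_shares(secret: int, threshold: int, total: int):
    """Split `secret` into `total` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, total + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 reconstructs the original secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789  # stand-in for the key that protects the flagged images
shares = make_shares(key, threshold=3, total=10)  # e.g. one share per match
print(recover(shares[:2]) == key)  # False: below the threshold, key stays hidden
print(recover(shares[:3]) == key)  # True: threshold reached, key recoverable
```

In Apple's scheme, each matching photo effectively contributes one share, so nothing can be decrypted or reviewed until an account crosses the match threshold.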
So, to recap: an official, state-sanctioned database of known abusive images is converted into a database of hashes. The photos on your iPhone are hashed in the same way, and their hashes are compared against that database in order to identify any matches.
This all happens automatically, by the way, with no human intervention, at this stage anyway. If a given number of those hashes match up, the offending photos are flagged as potentially abusive, and only then sent on for verification.
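Stripped of the cryptography, that recap boils down to something like the sketch below. Bear in mind this shows only the decision logic: the hash values and threshold are hypothetical placeholders, and a plain membership test like this would reveal every individual match, which is precisely what Apple's private set intersection and threshold secret sharing are there to prevent.

```python
# Heavily simplified sketch of the overall flow described above. Placeholder
# values throughout; in the real system, matches stay cryptographically hidden
# from both the device and Apple until the threshold is crossed.
import hashlib

KNOWN_HASHES = {"placeholder-hash-1", "placeholder-hash-2"}  # hypothetical database
MATCH_THRESHOLD = 25  # placeholder: Apple has not disclosed the real number

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash: here, just a SHA-256 digest of the raw bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def should_escalate(photos: list) -> bool:
    """Flag an account for human review only once enough photos match."""
    matches = sum(1 for photo in photos if fingerprint(photo) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```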
Hashing algorithms are by no means foolproof – the YouTube music piracy algorithm, for instance, regularly throws up false positives.
But it certainly isn't fair to say that Apple, or anybody else, is ‘spying’ unbidden on your private snaps.
According to Apple, the chance of a false positive emerging once all the proper steps have been followed is around one in a trillion; the system was designed in collaboration with eminent Stanford University cryptographer Dan Boneh.
Genuine matches, where there is a legitimate cause for concern, are escalated to the National Center for Missing & Exploited Children, an entity that acts as a central repository for CSAM investigations and works closely with law enforcement.
At first, the rollout will only take place in the US, at some point in the coming months. Apple is at pains to point out that the new process applies only to images stored in iCloud Photos, not to those stored locally on devices.
The thinking here is that while you might own the phone, you certainly don’t own the cloud, so different rules apply.
In addition to this measure, Apple is also introducing new safety features aimed at helping minors who use iMessage stay safe from online predators, by swiftly identifying potentially explicit images as they’re being exchanged, then flagging them up to parents.
‘Protecting children is an important responsibility,’ says the company in a statement. Certainly, these measures have been welcomed by in-the-know figures in the world of child safeguarding.
John Clark, President & CEO at the National Center for Missing & Exploited Children, called Apple’s new protections ‘a game changer’, adding ‘the reality is that privacy and child protection can co-exist.
‘We applaud Apple and look forward to working together to make this world a safer place for children.’ Bigwigs in the world of cryptography are similarly impressed.
David Forsyth, Chair in Computer Science at the University of Illinois at Urbana-Champaign College of Engineering, had this to say: ‘Apple’s approach preserves privacy better than any other I am aware of.
‘In my judgment, this system will likely significantly increase the likelihood that people who own or traffic in [CSAM] are found.
‘Harmless users should experience minimal to no loss of privacy because visual derivatives are revealed only if there are enough matches to CSAM pictures, and only for the images that match known CSAM pictures.’
However, several concerned citizens online have pointed out that this measure is a classic slippery slope. As it's ultimately an outside body working closely with government, and not Apple, that determines which images are legitimate and which aren't, the potential for malpractice is obvious.
We can all agree kids need protecting, but what if this infrastructure is used by bad actors in the future to identify, say, political dissent?
Or meetings with friends or family that the state – or any foreign state – deems to be a threat. Clearly, it’s the government that has the whip hand in these issues.
Last year, Apple proposed fully encrypting users' iCloud backups, but the FBI stepped in, claiming it would harm investigations into child abusers and other nefarious activities.