Apple is a corporation.

The first thing you should know about it is this: it is a business that exists to make money.

It isn’t your pal. It certainly isn’t a superhero. And it isn’t a belief system.

As a business, it offers you goods and services to purchase. If you don’t like what it has on offer, you are free to walk away.

And I believe that this misunderstanding is at the root of most of the criticism leveled at Apple over the new child safety measures it is adding.

It’s a complicated and emotional topic, and Apple’s messaging, as well as how the media has reported on it, has further added to the uncertainty.

Add in the fact that some individuals become enraged when Apple does something that contradicts their perceptions of the corporation, and you have a formula for disaster.

However, Apple recently released a document outlining how the system will work, the steps taken to keep false positives to a minimum, the mechanisms in place to prevent governments, law enforcement, and even malicious or coerced reviewers from abusing the system, and how Apple will protect end users’ privacy throughout.

“The system is designed so that a user does not need to trust Apple, any other single entity, or even a group of possibly-colluding entities from the same sovereign jurisdiction (that is, under the control of the same government) to be confident that the system is operating as advertised,” Apple says.

It’s a long document, but it’s well worth your time to read it.

However, these are merely words on a page.

It all comes down to one thing in the end.

Do you trust Apple?

Well, do you?

I consider this a complex subject, one that entails more than just looking for photographs of child abuse (something most people would agree is a good thing for Apple to be doing).

The question of trust here is more complicated.

First, Apple has built an on-device scanning technology that can recognize specific content with exceptional accuracy.

Currently, Apple is using this to detect CSAM (child sexual abuse material) and to flag sexually explicit images sent or received by children via iMessage, but there’s nothing stopping the same mechanism from being used to detect anything else: religious or political content, terrorist-related material, pro- or anti-vaccine leanings, cat photos, and so on.

That scanning mechanism is built right into the devices.
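To make that point concrete, here is a deliberately simplified sketch, in Swift, of what on-device hash matching looks like in general. It is not Apple’s implementation: the real system uses a perceptual NeuralHash, a blinded hash database, and private set intersection so the device never learns which of its own images matched. This toy version, with hypothetical names like `contentHash` and `shouldFlag` and an empty placeholder database, just hashes an image and checks it against a set of known hashes. What it illustrates is the article’s point: nothing in this shape of mechanism cares what the known-hash database actually contains.

```swift
import Foundation
import CryptoKit

/// Stand-in for a perceptual hash. SHA-256 is an exact hash and would not
/// tolerate resizing or re-encoding the way a perceptual hash is designed to;
/// it is used here only to keep the sketch self-contained.
func contentHash(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Hypothetical on-device check: does this image's hash appear in a
/// database of known hashes shipped to the device?
func shouldFlag(_ imageData: Data, against knownHashes: Set<String>) -> Bool {
    knownHashes.contains(contentHash(of: imageData))
}

// Usage: the database is just a set of hashes. Whether those hashes describe
// CSAM, political imagery, or cat photos is invisible to the matching code.
let knownHashes: Set<String> = []   // placeholder; supplied by the vendor in a real system
let photo = Data("example image bytes".utf8)
print(shouldFlag(photo, against: knownHashes))   // prints "false" with an empty database
```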

Today’s Apple may swear up and down that this system will only ever be used for good and will never be abused, but that is comforting only up to a point.

Consider COVID-19 anti-vaccine misinformation or climate-change denial, to take two simple but timely examples.

What if Apple decided that identifying this material and intervening to prevent its spread was in the interest of the greater good? Maybe that wouldn’t be a bad thing. Maybe enough people would support it.

This would be technically doable thanks to the CSAM process.

Would it be right? 

It may be argued that CSAM is unlawful, whereas anti-vax or climate-change falsehoods are not.

Okay, but laws differ from one country to the next. What if a country demanded that Apple detect and report content that is unlawful within its borders? Does it then become a game of cherry-picking which material to detect and which to ignore based on the public relations fallout?

What if Apple chose to check everything for unlawful content?

The mechanism for doing so is already in place.

Also, this isn’t just a question of geography; it’s also a question of time.

The people in charge of Apple today will not be in charge of Apple forever. Will their successors be as dedicated to protecting user privacy as today’s leadership claims to be? Could they be pressured by governments into abusing the system?

These are all slippery-slope arguments, but that doesn’t make them disappear, and vigilance is not a bad thing in and of itself.
