
How automatic encryption ensures safety by default

Apple chief executive Tim Cook reveals the iPhone 6 and Apple Watch in September. Apple's latest software includes automatic encryption. (Getty Images/AFP/Justin Sullivan)

The year is 1991, the month April. EMF is playing on the radio. The term "cyberspace" has existed for only half a decade. The world wide web won't exist for another four months. The software engineer Linus Torvalds has only just started work on the Linux operating system. The fastest computer you can own has a 50 MHz processor. Yes, *megahertz*, with an *M*.

On April 17, The New York Times reports on a proposed counterterrorism bill in the Senate. The bill, introduced by Senator Joseph Biden, contains a passage asking technology companies to ensure that any encryption tools they sell remain amenable to government interception. Phil Zimmermann, a technically proficient security enthusiast, reads the article and decides to act. Before the bill has a chance to pass, Zimmermann writes a piece of software called Pretty Good Privacy (PGP) and makes it available for download. Others copy it and drive from payphone to payphone, covertly uploading it to bulletin-board systems, the Nineties equivalent of online message boards such as 4chan and file-sharing sites.

PGP was among the first of its kind: encryption software available to anyone who wanted to use it. The software contained no back door for government access. The source code was made available so experts could analyze it and verify its trustworthiness. PGP was intended as a departure from the status quo, allowing anyone to communicate without risk of even the most powerful governments listening in. It was particularly valuable for journalists, who depend on candid conversations with sources who need to be assured of privacy before agreeing to talk.

Unfortunately, the technically complex nature of PGP made it accessible only to a small group of savvy computer users who could compile their own software and navigate an arcane interface of cryptographic protocols, which restricted its use for the press. Still, PGP prompted what is now called the "Crypto Wars" -- more than three years of national debate by policymakers, academics, civil society, and the private sector about the legitimacy of strong cryptography, and whether people have a right, under freedom of speech, to communicate securely and publish secure software. Eventually, the Crypto Wars were won by cypherpunks -- a political movement dedicated to the ideal that people should be free to communicate privately, with security guaranteed by unbreakable cryptography. The U.S. government publicly yielded the point; encryption software could be freely written, shared, and used.

Fast forward 23 years to the middle of 2014. U2's latest album is on everyone's phones, whether they want it or not. The term "cyberspace" is heard regularly in the halls of Washington, D.C. Most of us carry around in our pockets tiny computers whose power is measured in gigahertz, and, for journalists, the smartphone has become a portable office through which research and contact with sources are carried out. At the same time, the National Security Agency is widely understood to seek to intercept and store almost every electronic communication anywhere in the world. PGP and related tools remain the state of the art in email security. Unfortunately, they also remain the state of the art in difficulty of use. Every major operating system can encrypt a computer's or phone's hard drive, but the user has to turn the feature on. Enabling encryption usually takes a couple of minutes, so few bother. Why do so when there are news feeds to read and emails to send?

But a journalist's smartphone contains work in progress, a Rolodex of contacts, and all manner of in-progress conversations and messages. Purloining it would give unparalleled insight into a journalist's work, including the sort of details a reporter might risk jail time trying to protect.

That risk was reduced in September, when Apple announced the latest version of its flagship smartphone operating system (OS) software. It was full of enhancements and new features, including one that had never been tried on this scale before. Devices running the new OS are automatically encrypted as soon as a screen lock code is picked. No extra work needed. The encryption is sophisticated, built on state-of-the-art algorithms and design and, most importantly, there is no deliberate way around it. Apple says it has no special way to decrypt devices, and neither do police. The only way to unlock encrypted data is with the code.
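The core idea of tying encryption to the lock code can be sketched in a few lines. What follows is a minimal illustration of passcode-based key derivation, not Apple's actual design: real devices entangle the passcode with a hardware-bound key that never leaves the chip, whereas this sketch uses only a software salt, and the function names are illustrative.

```python
import hashlib
import os

def derive_key(passcode: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a short passcode into a 256-bit key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

salt = os.urandom(16)            # stored on the device; random but not secret
key = derive_key("1234", salt)   # this key would encrypt the device's data

# Only the same passcode (with the same salt) reproduces the key; without
# the code, an attacker is left brute-forcing the slow derivation itself.
assert derive_key("1234", salt) == key
assert derive_key("0000", salt) != key
```

The design point is that the key exists nowhere except as the output of this derivation, which is why neither the manufacturer nor police can produce it on demand.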

The feature has unsettled a lot of people. At least one prominent legal scholar called the change "dangerous." Commenters in a police forum called it immoral. Google quickly followed Apple's lead, promising automatic encryption in the next version of its mobile OS. Although the reactions have been strong, these new designs have almost no technical differences from previous versions. Automatic or not, any competently implemented encryption is all but impossible to decipher without knowledge of the secret key. This level of nigh-unbreakable encryption has been available to every computer user for decades. So why the sudden uproar?

If encryption is automatic, that means it will be ubiquitous. A busy journalist no longer has to decide whether they will need these tools in the future, or whether 10 minutes of technical work is more important than another paragraph written on deadline. The use of encryption no longer indicates that a user cares enough about security and privacy to turn the feature on. When security is the default, there's no additional suspicion associated with using secure tools. Everyone benefits from secure tools, but the most vulnerable users -- such as journalists -- benefit most when they don't need to go out of their way to stay safe.
