Among the 400 gigabytes of internal documents belonging to surveillance firm Hacking Team that were released online this week are details of the company’s dealings with some of the most oppressive governments in the world. The revelations, which have generated alarm among privacy, security, and human rights advocates, have also fueled debate around the esoteric but important topic of government controls on the export of powerful software that can secretly infiltrate and seize control of targeted computers.
According to the hacked files, Hacking Team’s clients include government agencies in Russia, Ethiopia, Azerbaijan, Bahrain, Kazakhstan, Vietnam, Saudi Arabia, Sudan, and other states known to spy on, jail, and murder journalists. They also include agencies in more open states, such as the U.S. Federal Bureau of Investigation, which has spent nearly $775,000 on Hacking Team tools since 2011, according to an analysis of the documents by Wired.
In addition to other capabilities, Hacking Team’s exploits can be used to intercept information before it is encrypted for transmission, capture passwords typed into a Web browser, and activate a target’s microphone and camera, according to a February 2014 report on the targeting of Ethiopian journalists by researchers at Citizen Lab, a project of the Munk School of Global Affairs at the University of Toronto. And in addition to delivering attacks through traditional computers, Hacking Team has also explored surveillance apps for mobile devices that can be deployed through the Google Play and Apple app stores, according to Forbes.
Given the power of the spy tools being developed by Hacking Team and rivals such as Gamma International — and in light of both companies’ lists of dubious clients — the Hacking Team disclosures have invigorated public discussion about government controls on exporting technologies designed to exploit and record all manner of private data.
While it might be tempting to wholeheartedly embrace such regulations, drafting them to be effective and sufficiently narrow is a problem that vexes many of the most sophisticated thinkers on these issues, in part because of the ever-changing nature of technology.
Poorly constructed rules can make it difficult for journalists to access communications mechanisms, antivirus software, and circumvention tools, with results as tragic as exposing Syrian activists to surveillance by the Assad regime. Although the Obama Administration has sought to liberalize the rules on exporting so-called “dual use” technologies, companies have historically been reluctant to risk violating U.S. law. Add a patchwork of country-specific rules, and the fact that other countries maintain export controls of their own, and the resulting legal landscape is anything but clear.
It can be especially difficult to draw up workable language to regulate technology at the leading edge of applied research. Some of that experimentation happens inside intelligence agencies; some at for-profit spyware companies such as Hacking Team. A great deal of security research also takes place at academic institutions, at antivirus and other security firms, and on the initiative of independent technologists. Often the initial research looks very similar across these different actors; it is what one does with a discovery that determines whether it becomes a sword or a shield.
In an age of proliferating technological threats, sound security research is vital to the free flow of news, to other forms of research, and even to the world economy. Regulation that fails to take this into account risks introducing and perpetuating vulnerabilities in the very systems on which many journalists stake their lives.
One regulatory mechanism that seeks to draw these lines with some care is the multilateral Wassenaar Arrangement, an export control agreement established in 1996 under which participating states voluntarily coordinate controls on conventional arms and dual-use technologies. Wassenaar was modified in December 2013 to add two classes of exports to its list of controlled items: intrusion software and Internet protocol (IP) network surveillance systems. As Stanford Center for Internet and Society Director Jennifer Granick and Stanford researcher Mailyn Fidler pointed out in a post for Just Security in January 2014, “Whether the recent Wassenaar amendments draw the line well remains up for debate … The newly introduced definitions of restricted software, particularly of intrusion software, could be interpreted to include an overly wide range of legitimately traded and used network security tools,” including auto-updaters and anti-virus software.
The risk of bad policy is not just theoretical. In May, the U.S. Department of Commerce published its own proposed implementation of the new Wassenaar rules, which Nate Cardozo and Eva Galperin of the Electronic Frontier Foundation have called an “overly broad set of controls” and a “disaster.” (Formal comments on the proposed implementation may be submitted through July 20.) And although the E.U.’s implementation of the new Wassenaar rules is less extreme than the one proposed by the U.S., its impact on the availability of life-saving tools for journalists, and on good security research, remains to be seen.
Restrictions on software exports also have direct implications for free speech. A pair of U.S. federal appellate court decisions around the turn of the millennium, for example, held that encryption source code is a form of speech protected by the First Amendment to the U.S. Constitution. Regulations on other forms of security research likely raise similar issues, and may run into similar problems.
There are alternatives to export-based regulations.
The first is to protect the breathing room security researchers need to uncover abuses and to improve the security of systems, by focusing regulation on transparency and on curbing abusive practices rather than on hard-to-define, ever-changing technologies. This is critical because, as security researcher Matthew Garrett told a group of privacy-minded journalists, attorneys, and technologists in San Francisco in February, “Today’s state actor [computer] attack is probably only two or three years away from being an organized crime attack … at most.” Countering such threats from non-state actors requires sound coding and design more than it does international agreements.
Second, despite the difficulties inherent in regulating spyware through export controls, law and policy do have roles to play. In 2014, after the Ethiopian government hacked an American citizen’s computer and spied on his communications, the man filed a civil lawsuit for money damages based on the government’s specific conduct. Similarly, criminal prosecutions are likely to be appropriate in many cases in which malware is misused, particularly by non-state actors but also by government officials who exceed their legal authority.
But what about individuals under computer-based attack by their own governments? There are no easy answers, and regulation may play a role. To the extent it does, any regulation should focus on conduct and complicity rather than on technologies, especially when those technologies are software-based.
Otherwise, like Hacking Team, we may find ourselves the uncomfortable recipients of a series of unintended consequences.