Friday 20 May 2011

How to Read Freedom House's Censorship Circumvention Report

Freedom House released Leaping Over the Firewall last month, a report covering two angles: details about Internet censorship in Azerbaijan, Burma, China, and Iran; and the use of circumvention software in those countries to bypass Internet censorship. As government censorship of the Internet spreads worldwide, research about the technology, norms and policies determining the flow of information is going to be increasingly vital.
Leaping Over the Firewall blends a non-technical survey method with lightweight lab testing of circumvention software. This approach is unusual, and it carries limitations that readers should keep in mind to avoid drawing mistaken conclusions from the results.

What are the report's goals?

The report is about circumvention tools—software used by Internet users to get around blocking and filtering technologies set up by governments. More specifically, the report is a vehicle for two very different sets of information:
  1. the results of non-probability sampled surveys about users of circumvention software, distributed to and collected from users in Azerbaijan, Burma, China, and Iran; and
  2. the results of lab-based testing of circumvention software.
What does Freedom House intend to achieve by releasing this report? In their own words:

Freedom House conducted this product review to help internet users in selected internet-restricted environments assess a range of circumvention tools and choose the tools that are best suited to their needs. [...] By providing this assessment, Freedom House seeks to make circumvention tools more accessible in countries where they are needed, thereby countering internet censorship. The evaluation is also useful for tools developers to learn how their tools are perceived by the users in these countries, and what enhancement would be beneficial.

What are the report's limitations?

The survey results are not representative. The survey was issued only to users in four countries, and those users were not randomly sampled. Understandably, safety and operational considerations limit researchers' ability to conduct a more robust survey, but that constraint also means the findings in this report should not be treated as generally applicable. Internet censorship takes place in many countries apart from Iran, China, Azerbaijan, and Burma, and China and Iran in particular are understood to run more sophisticated, aggressive Internet censorship operations than most. Readers must be careful to avoid over-generalizing the report's results to other countries that practice censorship of the Internet but differ in userbase, politics, technical sophistication, and more.
The report isn't about how to communicate securely and safely. Internet filtering and blocking is increasingly combined with Internet surveillance, in part because tools capable of surveilling Internet traffic help censors decide what and how to block and filter. A software tool can provide circumvention but still fall well short of providing any kind of meaningful security against a government. The report does surface the complexity of making security decisions around using the Internet, but it also misuses the term "security" in notable ways throughout.
Here is an example of "security" being used irresponsibly:

Tor is software that a user can run to give themselves a relatively strong guarantee of anonymity online.1 Tor's design allows it to work as circumvention software, but its value extends beyond getting access to blocked or filtered information—Tor's design is intended to give its users anonymity by taking measures to defeat network surveillance and traffic analysis.
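Tor's layered ("onion") design can be sketched with a toy example. The XOR "cipher," the relay names, and the shared keys below are illustrative placeholders and not Tor's actual cryptography or protocol; the point is only that each relay peels exactly one layer of encryption, so no single relay sees both who sent the traffic and where it is ultimately going.

```python
# Toy sketch of onion routing's layered encryption (NOT real crypto).
# Each relay removes only its own layer; the relay keys and XOR "cipher"
# here are illustrative placeholders, not Tor's actual design.

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys: list) -> bytes:
    """Client adds one layer per relay; the exit relay's layer goes on first."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def unwrap_hop(onion: bytes, key: bytes) -> bytes:
    """A single relay strips only the layer it holds the key for."""
    return xor_layer(onion, key)

keys = [b"entry", b"middle", b"exit"]   # one shared key per relay (toy)
onion = wrap(b"GET blocked-page", keys)

for key in keys:                        # traffic passes relay by relay
    onion = unwrap_hop(onion, key)

assert onion == b"GET blocked-page"     # only after the last hop is it readable
```

In the real system each layer uses strong encryption negotiated per circuit, which is what makes both circumvention and resistance to traffic analysis possible.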
A reader skimming the report might see that Tor received 2 stars in security and make the unfortunate judgment that Tor shouldn't be used on that basis. What that rating actually means is that Freedom House's survey respondents, on a purely anecdotal, non-randomly sampled basis, reported that Tor presented "operational problems" while having fewer technical support resources available.2
Everywhere else in the report, this category of poll question is called "security and support," but for some reason, in the box summary, it's inappropriately reduced to just "security." Looking at the questions, the category labeled "security" in the boxes actually represents survey respondents' views on usability and support—essentially, whether users had trouble using or understanding the software, and whether there were resources to help them figure out what was wrong. Usability and support are certainly important characteristics, but describing them as "security" is a gross misnomer.
The security ratings for the circumvention tools don't appear to heavily weight crucial elements of the design of the circumvention software system as a whole—in particular, whether the operators of the circumvention software have tracking or data collection capabilities over users of the software, and whether the source code of the tool has been made available for analysis.3 One baseline factor in evaluating the security of a piece of software is whether the widest possible community of knowledgeable technologists has had the opportunity to identify defects, from design and architecture down to the code itself. From a computer security perspective, most (if not all) software has exploitable flaws—taking advantage of those flaws to disrupt or control a piece of technology is more or less a matter of time and resources. And so there's a general understanding that any tool whose source code hasn't been made widely available doesn't have the benefit of broad research into the ways it could be exploited, making claims of security essentially impossible to validate.
The report features a decision-making flowchart, in which the resulting software recommendation depends on whether users are seeking to receive or upload information, and on whether they prioritize speed or security. But without sufficient context—details buried in the written descriptions of each tool—users are being encouraged to conduct a risk assessment without the broad range of knowledge that may be necessary to make a genuinely better-informed decision about which circumvention software to use, and how to use it.

What are the essential take-aways?

Does the report deliver on its stated goals? As far as helping users assess a range of circumvention tools, the report's writeups about circumvention software projects do include valuable contextual details—such as that a software project is not openly documented or described, or that the operator of the software is able to log what its users are accessing. But the reductive star ratings and the conflation of survey content with lab research content throughout cast doubt on how beneficial this report will be to end users who don't read the report in its entirety, with an eye for the few caveats that establish the report's limitations.
A second goal is to help "tools developers to learn how their tools are perceived by the users in these countries, and what enhancement would be beneficial." In this regard, the report could be beneficial if tools developers take the anecdotal survey findings and seek additional evidence to see if there are patterns that need addressing.
Ultimately, the Leaping Over the Firewall report seems to face a difficult internal contradiction: it approaches circumvention tools from a largely non-technical perspective, yet the blocking of Internet content by governments, and the circumvention of those blocks, is a deeply technical topic in which the adage that "code is law and architecture is policy" is powerfully validated.
However, there is value in attempting to identify and quantify what end users of circumvention software experience, and Freedom House's general finding that users will trade security for operational speed raises a number of vital questions about exactly why that choice is being made. Under what conditions does a user switch from a slow, highly secure channel to a faster, less secure channel? And for activists, when is an appropriate time to make that decision, and when should speed be sacrificed for security? The answer to these questions will help tool developers, activists, and users understand how to continue to have free expression on the Internet even and especially when faced with censorship.
  1. EFF sponsored Tor early in its development because of its explicit, sophisticated focus on Internet anonymity, which EFF considers to be central to free expression.
  2. The poll questions chosen by Freedom House leading to the "Security" star rating were:
    • Problems: How often have you encountered operational problems using the abovementioned tools?
    • Solutions: When you have encountered a problem, how easy was it for you to obtain help?
    • Support Validity: How frequently does the help you find come directly from the tool’s developers or the tool’s network?
  3. The technical testing methodology has a section for "logging practises," but it is not clear how this detail was represented in the relatively non-granular star rating.
Richard Esguerra, EFF
