

IV. Methodology for assessment

About our analytical approach

The methodology for our analysis and ratings of countries is based on a qualitative review of research data drawn mostly from the country reports of EPHR 2010. EPHR 2010 consisted of over 600 pages of reporting from experts across Europe. These legal, technological, and academic experts were asked to update the existing text from previous years' reports and to add information under a defined set of categories. These categories then informed the criteria for the ratings.

The research was complemented by Privacy International's own monitoring of privacy developments around the world. We also drew on a network of advisory board members and colleagues in countries across Europe, as well as relationships with a number of regulatory officials who could guide us when we encountered information-gathering challenges.

We also identified other research studies that have examined cross-country issues relating to privacy and human rights. Most notably, we used:

  • The Economist Intelligence Unit's 'Democracy Index 2010', published in December 2010, which was a strong gauge of the democratic accountability of each of the states included in our own study. We therefore used the Economist's study as the primary source for our category of 'Democratic safeguards'. Relying on it raised two challenges: (1) the Economist's study scored countries out of 10, whereas our own studies were previously based on scores out of 5; and (2) the Economist's study included 'civil liberties' among its own criteria, so there was some risk of 'double-counting' on this issue.
  • The Council for Responsible Genetics released a report in December 2010 summarising its initial research on police DNA databases around the world. We have a close working relationship with CRG and trust the integrity of its research process. We also cross-checked its findings against the EPHR country reports and, where our study's countries had data on DNA databases, found a strong level of agreement in the results. In one case our results were more up to date on recent developments, and in a couple of cases our country reports were less current than CRG's study.
  • We also relied upon a number of research studies commissioned by the European Commission on the divergence of privacy laws and on the work of regulators to raise awareness of privacy. These helped inform our thinking on the criteria.

Changes to our Approach

This is the third time we have conducted a cross-country comparison and analysis of privacy protections. Each time we develop more sophisticated methods for analysing and assessing privacy and surveillance, and we consult regularly with experts and advisors from around the world to refine our methodology.

The first shift in our approach, and perhaps the most significant, is that this study looks only at European countries. This was not intentional: the nature of the funding we received was such that we only had the resources to cover Europe. One side effect of this shift is that these countries all have very similar legal regimes, so we needed a more nuanced method of identifying the differences between them.

Another significant change in this study is that we have moved away from a 5-point scheme to a more granular one. Though it was helpful that this matched the Economist's own study, that was not the primary purpose of the shift. Rather, we had conducted a literature review of cross-country comparisons on human rights issues and identified a number of criticisms of such schemes.

In previous years the reasoning behind our assessments was somewhat opaque. This is a common criticism of such studies: it is hard to explain why a group of experts would assess one country as having 'x' type of protections while another country has 'y'. Many of the other studies in this space drew the same criticism.

As a result, rather than only relying upon multiple experts' assessments of the merits of a given country's system, as informed by the Country Reports from EPHR, we decided to be more granular in our assessments. Introducing granularity would also let us capture more nuance within a given category, and even cater for contradictions within policy domains.

For instance, under 'constitutional protections', a given country may have no explicit statement of privacy in its constitution, and previously we might have marked that country down. Yet even without the right terms in the constitution, cases may still reach the Constitutional Court and be decided on a right to privacy, with the Courts establishing a right to privacy within other basic rights. Conversely, even in countries with constitutional statements on privacy, the Courts may rule against privacy protections. We therefore needed a measurement system that could cater for these dynamics: a constitution that does not mention privacy may make it harder for the judiciary to speak of privacy, and yet the judiciary may speak favourably nonetheless, perhaps more so than in countries with explicit protections.

To cater for these dynamics, which are inherent in practically all the categories, we developed a series of questions applied to each category, together with a scoring system. We judged each country against the various subquestions in the category, and the country was marked:

  • '0' - no safeguards or protections
  • '1' - some safeguards
  • '2' - advanced protections

This system carries two risks of bias. First, it has a positive bias: a country is not 'punished' for a particularly poor practice, but simply receives no points for that subquestion, since we could not 'delete' points earned on the other questions we scored. We felt that a bias in favour of governments was worth accepting. In fact, we compounded this bias by allowing a special mark of '2*', worth 3 points, for countries with particularly strong protections in one subdomain; although such a protection did not necessarily affect the other questions within the category, we felt it should be noted nonetheless.

The second possible bias is that a country for which we had limited information might be judged more harshly than one for which we had more. To mitigate the risk of misjudging countries, we were willing to withhold judgement on a country within a category if too little information was available. We also averaged the answers within a category, taking account of the quality of the information.

Therefore all categories were measured out of 10 points. Again, multiple experts reviewed this process, and many of the decisions were justified in the 'analysis' section of this report.
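To make the aggregation concrete, the sketch below (in Python) illustrates one way such a category score could be computed. The report does not publish its exact formula, so the rescaling from the 0-2 subquestion marks onto the 10-point category scale, and the treatment of unanswerable questions, are our illustrative assumptions rather than the study's actual calculation.

    def score_category(subquestion_marks):
        """Aggregate subquestion marks into a category score out of 10.

        Marks: 0 (no safeguards), 1 (some safeguards), 2 (advanced
        protections), 3 (the special '2*' mark), or None where too
        little information was available to judge the country.
        """
        answered = [m for m in subquestion_marks if m is not None]
        if not answered:
            return None  # no judgement applied for this category
        # Average the answered subquestions, then rescale the ordinary
        # 0-2 range onto the 10-point category scale (our assumption),
        # capping at 10 so the bonus '2*' marks cannot overflow it.
        average = sum(answered) / len(answered)
        return round(min(average / 2 * 10, 10), 1)

    # Example: two 'some safeguards' marks, one 'advanced protections'
    # mark, and one subquestion left unjudged for lack of information.
    print(score_category([1, 1, 2, None]))  # -> 6.7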

Rankings v. Ratings

We have also abandoned the idea of 'rankings', where one country is awarded the 'worst' mark. We believe there is some merit to this practice, but we find the classifications more interesting than the raw figures. That is, if country A had an average of 4.2 and country B had an average of 4.5, we are unsure it would be fair to say that country A was the 'worst' on the list; similarly, we would be cautious about calling country B the 'best'. It is more valuable to see the gradations in each category, and the similarities and disparities between countries when they are categorised by both criteria and average results. As such, we felt that a 'ratings' scheme would be more appropriate.
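As a hedged illustration of the difference, the sketch below places category averages into descriptive bands rather than ordering countries. The band labels and cut-off values here are invented for illustration only; the report's own classifications may differ.

    def rating_band(average_out_of_10):
        # Hypothetical bands and thresholds, not the study's actual ones.
        if average_out_of_10 is None:
            return "insufficient information"
        if average_out_of_10 >= 7.5:
            return "strong protections"
        if average_out_of_10 >= 5.0:
            return "some safeguards"
        if average_out_of_10 >= 2.5:
            return "weak safeguards"
        return "no effective safeguards"

    # Countries A (4.2) and B (4.5) land in the same band, which is why
    # calling one 'worst' and the other 'best' would overstate precision.
    print(rating_band(4.2), "|", rating_band(4.5))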

Normative approach

The obvious challenge of devising questions and criteria for the measurements in each category is that it requires us to be explicit about what we consider 'good' answers. This meant deciding what counts as an acceptable set of practices for a democratic state. We are therefore obliged to make both subjective and normative judgements.

We make use of objective indicators to the largest extent possible (e.g. the existence of laws, the number of cases, and the powers and extent of data collection and access). Inevitably, we then have to apply a level of subjectivity based on our experts' and analysts' perceptions of whether a practice is good or merely acceptable (this distinction makes up the '2' within the scheme). This critique applies to any such study, however: even the chosen 'objective' indicators are in fact subjectively selected, chosen for the purpose of appearing objective when they may not always reflect the true state of affairs.

The great challenge for a privacy advocacy organisation in conducting this rating, and perhaps for any advocacy group doing likewise in any field, is that if the criteria are to be fair, and if there is a positive bias, then we have to be willing to be explicit about what we find 'acceptable', or perhaps even 'desirable'. It is easy to condemn a practice and grant a '0'; but how do we mark something within a surveillance scheme as a '1', a '2', or even a '2*'? We could never give a positive mark to any surveillance scheme if our objection were simply that, by nature, it conducts surveillance. Nor could we just diminish our expectations and celebrate whenever something within a surveillance scheme is less bad than in another country's scheme.

Rather, the solution lies in being honest about our goals: we do not aim to see a world in which surveillance is entirely absent. Instead, we would like to see a world where surveillance is minimised, conducted under law, only when necessary in a democratic society, and proportionate, with appropriate inbuilt safeguards and rights of recourse. So a country can have a communications data retention scheme and still get positive marks; just as many positive marks as a country without such a scheme; and possibly even more, since some countries may lack a law while the practice is widespread nonetheless.

Importantly, privacy advocates do not see the state of the laws as the ultimate goal; rather, we believe that the protection of privacy is strongest in countries where the debate about privacy is alive and well. That is, a country's framework is stronger where the protections in words and laws are strong, but also where the debate is strong. We can count the laws, but if people are unaware of their rights and organisations unaware of their duties, then nothing has been accomplished. The challenge then becomes one of measuring the policy discourse around privacy. We do so sometimes objectively, by looking at the numbers of complaints to regulators; and sometimes more subjectively, by looking at the types of media coverage in a country, the strength of its civil society, the willingness of a citizenry to question a practice both publicly and, when necessary, legally, and the extent to which a nation supports these types of action through the allocation of resources.

As advocates and academics, we believe that the key challenge to all methodologies is for the researcher to know what he or she is hoping to achieve.

Some References

  • Economist Intelligence Unit, 'Democracy Index 2010: Democracy in Retreat', December 2010.
  • UNDP and Global Integrity, 'A User's Guide to Measuring Corruption', 2008.
  • Centre for International Media Assistance of the National Endowment for Democracy and the Center for Global Communications Studies at the Annenberg School for Communication, 'Evaluating the Evaluators: Media Freedom Indexes and What They Measure', 2010.