Talking Wikipedia, Facebook, and privacy harm at Seattle TA3M

OK, first this, in case you don’t make it thru the blog: “The Boundaries of Privacy Harm,” by Ryan Calo. Wonky but well worth a read, especially if you’re interested in privacy and policymaking.

This week we went to Seattle TA3M, which we try to do every month, because they are close allies of ours and they put on some great talks.

This month was outstanding, with speakers from the Wikimedia Foundation and the University of Washington’s Tech Policy Lab.

Jonathan T. Morgan from the Wikimedia Foundation talked about open, collaborative groups, with a specific emphasis on Wikipedia: what works, what doesn’t, and how interested people can help. As most Seattle Privacy readers probably know, Wikipedia is a fabulous resource that is written primarily by people from a relatively narrow demographic group, and it could and should have a lot more input from a wider variety of people. Because we struggle to reach out effectively to people outside of our immediately familiar zone here in Seattle, it was useful to hear about some of the Wikimedia Foundation’s ways of measuring engagement and reaching out. (I immediately wanted to sign my 93-year-old father up for the “Seniors Write Wikipedia” effort, for example.)

We also talked about how some of those outreach efforts can introduce new problems, particularly in the realm of privacy. The Wikipedia Zero project, for example, allows people to edit Wikipedia by mobile phone, which is great in the sense that editing becomes available to people who don’t have access to a desktop, and not great in that it makes their interests and concerns immediately obvious to their cell providers (for example, Saudi Telecom Company) and anyone those providers choose to share the data with. (Editing the “Anarchist” entry, were we…?) Wikipedia Zero started off with a system based on the ubiquitous text message (SMS) but is now moving toward hypertext as web browsers become nearly universal on mobile devices. In some countries, zero-cost routing has been enabled for encrypted hypertext traffic to Wikipedia Zero, which protects the privacy of readers, but this process is not complete, and access by HTTPS should be the norm.

The second speaker, Ryan Calo, a professor at the UW Law School and Faculty Director of the Tech Policy Lab, talked about the Facebook “emotional manipulation” experiment (a popular name that, it turns out, wrongly conflates two pieces of the study’s title), why he considered it not a big deal in itself, and how it points to a VERY big deal: what Calo calls Digital Market Manipulation. Since the man has written a paper on the subject, I’m not going to try to recap the issue here, except to note that the gathering of big data by corporations and governments creates a very scary information asymmetry and more or less blows the concept of the “informed consumer” out of the water, with many implications for price manipulation and for introducing inefficiencies into economic transactions. And, well, go read the paper. In the group discussion, we talked at length about the meaning of the public response to Facebook’s experiment and how symbolic that reaction seems to be of our larger and growing discomfort and unease with knowing that people we don’t know anything about know everything about us.

An audience member noted that while right now we talk about corporate control of data primarily as an issue of economics, in fact it has huge political implications as well, particularly if the roles of certain gigantic corporations shift in relation to governing, which seems quite possible.

At Seattle Privacy, we’ve been working on connecting various groups and institutions with an interest in privacy, in hopes of putting together public informational events particularly directed at City of Seattle employees and elected officials. The Tech Policy Lab has definitely been on our list of important groups to connect with. Prof. Calo kindly agreed to come help us talk to the city about privacy and our Proposal for Seattle.

We’re especially excited about this because we still haven’t exactly refined an “elevator pitch” for the value of privacy. Calo, however, has given the issue a great deal of attention. He offers two categories of “privacy harms” — objective and subjective.

Calo describes subjective privacy harm as “the perception of unwanted observation, which results in unwelcome mental states—anxiety, embarrassment, fear—that stem from the belief that one is being watched or monitored”, whether by a landlord or an ex, or a massive government surveillance project.

He describes objective privacy harms as “the unanticipated or coerced use of information concerning a person against that person. These are negative, external actions justified by reference to personal information.” This could range from identity theft to redlining to having your blood samples used against you at a DUI stop.

The subjective and objective categories represent the anticipation and consequence of a loss of control over personal information. Here’s what makes this approach so valuable:

It uncouples privacy harm from privacy violations, demonstrating that no person need commit a privacy violation for privacy harm to occur (and vice versa). It creates a “limiting principle” capable of revealing when another value—autonomy or equality, for instance—is more directly at stake. It also creates a “rule of recognition” that permits the identification of a privacy harm when no other harm is apparent. Finally, this approach permits the measurement and redress of privacy harm in novel ways.

In other words, Calo is describing a methodology that makes the harm of privacy violations testable and rankable — an approach that courts and regulators can use to investigate privacy harms and determine their severity. It also takes into account the increasing automation of surveillance by treating the perception of privacy violation as a separate harm, eliminating the requirement that “human sensing” be involved for privacy to be harmed. (I don’t know privacy law well enough to elaborate on this point, and my apologies to those of you who do and are banging your foreheads on your keyboards right now. The point is, we look forward to getting Calo together with our city’s lawmakers and seeing where their conversations lead.)

1 Reply to “Talking Wikipedia, Facebook, and privacy harm at Seattle TA3M”

  1. I added a bit to the article about normalizing encrypted access to Wikipedia. I feel that this is particularly important in the context of mobile access, where users are often required to identify themselves, and even when they aren’t, it’s trivial to de-anonymize them. This is the same issue that Richard M. Stallman addresses in “The Right to Read.”
