Archive for January, 2011

In Response to @AdamThierer on “Digital Sensors, Darknets, Hyper-Transparency & the Future of Privacy”

Adam Thierer (@adamthierer), a senior research fellow at the Mercatus Center at George Mason University and a regular and thoughtful contributor to the conversation on privacy issues, wrote a piece titled “Digital Sensors, Darknets, Hyper-Transparency & the Future of Privacy” on The Technology Liberation Front blog.  After I wrote a fairly long comment that seemed to be accepted, DISQUS, the commenting system the blog uses, appears not to have actually recorded it.  At first I thought a site manager had deleted my comment, but after refreshing the page and checking my comment history in DISQUS, I saw that it wasn’t there.  With that said, I’m rewriting the response here.


Adam, I’m not sure why you’re so enamored with targeted advertising given how weak it continues to be and the fact that people don’t have an insatiable need to be advertised to, even if the products are supposedly “relevant” to them.  We don’t need to walk by a store at the mall and have it suggest what we might need.  Using behavioral advertising to justify having information about us used by any organization in any way it chooses is simply a red herring.  Let me also separately suggest that, just as those who advocate that privacy is dead do so from multiple perspectives, not all “privacy advocates” would subscribe to every position you assigned to them.  This topic, as you know better than most, is chock full’o nuances, some of which you reflect in your positions, and if we use Prof. Daniel Solove’s taxonomy of privacy as a framework to explore the different issues, the very topic of privacy means different things to different people.  Sorry, but I had to point this out since you seemed to use the term “privacy advocate” in a derogatory manner.

The fear raised by the USA Today article you point to and the “What They Know” series in the Wall St. Journal is that, in most aspects of society, we have norms and rules in place that dictate the boundaries of various activities.  For example, we have rules for driving, rules for what the police are allowed to pull us over for, rules for what companies can say about their products, and so on.  However, as you correctly point out, the fall of practical obscurity has upset many apple carts.  Actually, it’s more than that.  The fact that the costs of collection, storage, aggregation/combination, and dissemination of data have dropped significantly has disrupted our privacy expectations.  This same dynamic has turned the copyright business model on its head.  It’s forcing us to rethink the norms and the rules of the road, so to speak.  Unfortunately, the development of technologies to leverage information about us continues to move ahead faster than we can get our arms around the important issues raised by its use.

Look, as a privacy advocate, I also understand technology and the possibilities it offers, but I have also seen how its application in one area is deemed sufficiently successful to apply to other areas where the parameters are often very different, frequently with less than desirable outcomes.  For example, collaborative filtering technology used to make product recommendations on web sites was being applied in some form under the Total Information Awareness program to identify likely terrorists.  Of course, it’s one thing if a product is wrongly proposed to me; it’s an entirely different thing if I’m falsely accused of being a terrorist because I share characteristics with known terrorists (especially given how little we actually know, in advance or even after the fact, about such people).  Note that the technology is the same, but the parameters, the tolerance for error, and the necessary recourse and remedies are completely different.  Unfortunately, we don’t always see the people and institutions applying the technology sufficiently addressing the fidelity of their systems.  This is also where the greatest dangers lie.  The use of these technologies for far more life-impacting applications (e.g., health, home, or life insurance) has to be of concern to everyone.  Organizational transparency about how our information gets used has to be forthcoming, given that there already exists plenty of transparency on individuals’ data.  Online and offline data are quickly merging through so many sensors, but even today the use of our offline data is more intrusive than most people can imagine.  There need to be rules for what uses are acceptable and benefit society, and what uses are not.  Individuals should have some say in how information about them is used to their benefit or detriment.  What’s wrong with that?
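To make the point concrete, here is a minimal, purely illustrative sketch of user-based collaborative filtering (this is my own toy example, not code from TIA or any real recommender; all names and data are hypothetical).  The same similarity math that ranks products for a shopper would, fed different inputs, rank people by how closely they resemble some profile, and nothing in the algorithm itself knows or cares how costly a false match is:

```python
# Toy user-based collaborative filter. Hypothetical data throughout.
# The algorithm is domain-blind: swap "ratings" for behavioral profiles
# and the same math "recommends" people instead of products.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating dicts {item: rating}."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(target, ratings, top_n=1):
    """Score items the target hasn't rated, weighted by user similarity."""
    scores = {}
    for user, user_ratings in ratings.items():
        if user == target:
            continue
        sim = cosine_similarity(ratings[target], user_ratings)
        for item, rating in user_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical purchase history for three shoppers
ratings = {
    "alice": {"book": 5, "lamp": 3},
    "bob":   {"book": 4, "lamp": 3, "desk": 5},
    "carol": {"lamp": 1, "rug": 4},
}
print(recommend("alice", ratings))  # bob is most similar, so his "desk" wins
```

A wrong answer here costs Alice nothing.  Run the same ranking over travel or financial records to flag “people similar to known suspects” and the error tolerance, recourse, and remedies required are, as argued above, completely different.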

While I completely agree that there are many legitimate benefits to the use of various targeting technologies, the temptation that such large databases of aggregated information about us pose for legislators and law enforcement is frequently too great for them not to take advantage of.  We already see an awful lot of one-way secrecy, where these databases are accessed by law enforcement under orders of secrecy (e.g., an NSL request or a 2703(d) order), or by companies, without our knowledge, for predatory behavior.  So the challenge becomes: how do we balance progress and the benefits that technology can afford us with the need to better manage how, and to whom, information about us is distributed?  I wouldn’t frame it as how we keep information about ourselves “secret”, since we never really did that in the past and could never reasonably do so.  A secret tends to be information known to a small number of people (generally at least two).  One could keep a secret, but there was never a guarantee it would remain one.  The pace at which it could be shared, however, was slow.  Technology has changed that pace radically, which is what raises the urgency of deciding how we will better manage it.

On the point of “property-tizing” personal information, that has certainly been a position espoused by some in the U.S. privacy community, but in Europe and other parts of the world the focus has been that information about us and its use should be treated as a human rights issue, where it cannot simply be traded away the way property can.  While both frameworks aim to resolve similar issues, I believe the U.S. one is fraught with some of the paradoxes you refer to, which the international position does not have.  In the same way that copyright as a business model no longer seems effective (there are lots of better ways to monetize the value of content), I also don’t feel that information about us is definable in such a way that it can be “property-tized” 🙂  Because all of this information is so easily distributed and perfectly copied, property rights never made sense to me here.

Finally, on the point of transparency, there’s a difference between transparency and full, unobstructed access to absolutely any information or knowledge about a person.  You began to list exceptions like medical information, but I believe that in different contexts there’s much more information that people would prefer not to share or have brought to bear.  Hence, transparency needs to come with norms and rules of engagement.  Data or information without context can elicit reactions from people before they understand it, or can be used in ways that bear false witness to the facts.  There are many shortcomings to full, unfettered access to information about us that we need to address before such widespread uses start happening.  It’s not about putting the genie back in the bottle, but about setting up the right frameworks for acceptable uses of information about people (that’s us :).