Archive for September, 2010

The Elephant in the Room Relating to Privacy; We’ve Never Had Any

As always, I continue to explore the bounds of what we mean by privacy, what we expect from privacy and what the issues are surrounding it.  The more I have read about the subject from people far smarter than I am, seen the hype, and heard lay people's visceral reactions to the topic, the more I've been forced to think about what this "privacy thing" is that we all seem to hearken back to and long for.  What was in place before the digital world emerged with such forcefulness, that now in its seeming absence, makes us feel vulnerable to unseen powers and uncontrollable forces? 

I'm going to start with a simple example as this helps to contextualize the discussion in my head.  Let's say I share information about my salary with a good friend.  Specifically, I tell him how much I earn per month and what sorts of bonuses I receive at year-end based on my performance.  This friend does not work with me nor in my industry, and as far as I know, does not hang out with any of my business associates.  This may be information that I would not readily share with colleagues.  It's also information that I would consider private and hence not meant for public dissemination, but I *trust* my friend.  There's a good possibility that I would not feel the need to tell my friend not to share this information because implicitly I'd feel that he would understand not to discuss it with others.  More importantly, to the extent that I don't believe he knows other colleagues of mine, I'd see no real harm in sharing this information.  Now, jump forward a few months and my friend is at a party where he coincidentally meets one of my colleagues.  Over drinks, while they are getting acquainted, the conversation turns to how overpaid my colleague thinks some people at our company are.  My friend, on purpose or inadvertently (it really doesn't matter), shares the information about my compensation, perhaps even in defense of my being fairly compensated.  Has he violated my privacy?  Not really; I violated my own privacy by sharing that information with him in the first place.  Even if I had couched the conversation by asking him not to say anything, the fact that I chose to share this at all means that any privacy violation is on me.  

This is one example, but we can find millions of examples of how we have never really had as much privacy as we believed we did.  Whether it was buying whips and handcuffs at an S&M store, or picking up a prescription for a herpes drug at a pharmacy, or any number of embarrassing or socially awkward activities, the fact that we interact with another person or in a public place means that at least one other person (the sales clerk, or the pharmacist and doctor, in these examples) knows that information.  We *trust* that they will not widely publicize it, and for those who might want to embarrass us, there is a "practical obscurity" standing in the way of their finding it out.  It's difficult information to obtain because it would take constant surveillance and some investigative skill to uncover such activities.

If we think back to direct marketers in the 1970s and 1980s, they purchased lists that offered some basic segmentation and ways to filter for only those people whom they felt would make good prospects for their product offers.  Credit card companies have always been able to market to us based on our purchases.  But for marketers and anyone else who wanted access to this information, the high cost of obtaining it required a solid return-on-investment case before proceeding.  Note that violating the privacy of celebrities had value to the discloser because it made news, and news outlets were willing to pay for any information that could help them sell more ads or more newspapers.

Now with the cost of collecting, storing, aggregating, combining and disseminating information having dropped so precipitously, it has become much easier for more constituencies to obtain information about us at little to no cost.  In other words, the ease of obtaining information about anyone is turning all of us, begrudgingly, into celebrities.  There are more and more ways to monetize information about just about anyone.  The practical obscurity we once enjoyed has been significantly reduced and since it is not only what we share that is collected, but what others share, it's hard to see a way to legitimately believe that we will ever enjoy the practical obscurity of the pre-digital days.  Like it or not, it does look like we are being forced into becoming more and more transparent about our lives, which will also require that we begin being more accepting and tolerant of what it means to be human, faults, blemishes and all.

From a legal perspective, the existing regulations around what is considered private versus public are quite arbitrary given the advancements in technology.  For example, technology makes it possible to see what is happening behind walls through heat signatures, but this technology cannot be used on someone's home unless government agents have a warrant.  The technology used in airport screening devices has now also made its way into vans that can drive up next to cars and peer inside them.  These also require a warrant; should they be used without one, the information gathered by a government agent is not usable in court, but that doesn't change the fact that the agents were able to see whatever they wanted to.  An individual's phone logs, on the other hand, are not considered private by the courts because a third party (the phone company) can see them at any time, and so there's no expectation of privacy here.  Government agents (which includes the police) do not need a warrant to access this information.

At the end of the day, the real privacy issue is that the digitization of data about us has made it so cheap and easy to move, that the practical obscurity has been reduced to the point where it no longer offers any serious defenses against what can be known about us by any organization (or individual) that really wants to know.  The issue isn't that privacy no longer exists, it's that it only existed as an illusion and now nearly all pretense of that illusion has been removed.  I'm not trying to be fatalistic about all of this, nor conspiratorial, but I hope to write future posts about what it means to live in a world where practical obscurities are no longer there to keep our privacy illusions alive, and then begin to explore some likely paths of evolution.  We need to come to terms with where things are now, in order to begin forging a path for where we would like them to go.


Comment to Peter Fleischer’s blog post on “Exhibitionism, or Self-Expression?”

I’m using my blog to comment on Peter’s post as it appears that my response below was longer than what his blogging platform (Blogger) could accept.  Really, Google sets comment limits? 😉

First off, I highly recommend Peter’s blog if you are into privacy issues.  It’s nice to see that despite so much negative hype about Google’s general attitudes towards privacy, there are real people working there and they have concerns and think about these issues like everyone else.

Reading this post’s comments on sharing brought me back to a debate last evening with a dear friend about this very issue.  However, where he debated in effect a similar position to Peter’s (be comfortable about what you’re going to post), he was viewing it as an issue for his children (currently 9 and 13 years old).  He raised the typical and oft-repeated anecdotes about college admissions offices doing online searches on candidates and employers making hiring decisions the same way, and how negative information could damage his children’s future success.

My contention however is that the amount of data being put out there will soon become prohibitive for people to search against.  As Peter noted, first you have to determine if the data you are reviewing is about the candidate that you’re considering.  Then you have to determine the validity of the data and its source.

Collaborative filtering and network analysis tools and techniques are gaining ground in these areas, and are being applied to the selection process for various activities.  Of course, behavioral and interest-based ad targeting has been at the forefront of this (i.e. Amazon’s “people who bought this also bought that”).  However, as you might be aware, when governments try to segment terrorists from non-terrorists (i.e. the “No Fly List”), they tend to bring more and different data sets to bear.  For example, they might apply clustering around credit records, travel logs, transaction records, and other data sets in order to come up with something like, “people who bought this and flew to these places and have open balances on their loans are more likely to be terrorists”.
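To make the mechanics of that last example concrete, here is a minimal sketch of how trivially disparate data sets can be joined on a shared identifier and scored.  Everything here is invented for illustration: the people, the data, and the weights are all hypothetical, and no real screening model is anywhere near this simple.

```python
# Hypothetical sketch: three unrelated data sets -- purchases, travel
# logs, loan balances -- joined per person into one "risk" score.
# All names, numbers, and weights are invented for illustration.

purchases = {"alice": 2, "bob": 7, "carol": 1}        # flagged items bought
travel    = {"alice": 0, "bob": 5, "carol": 1}        # trips to watched regions
balances  = {"alice": 0.1, "bob": 0.9, "carol": 0.2}  # open-balance ratio

def risk_score(person):
    # Arbitrary weights -- the point is that the person being scored
    # never sees them, nor the fact that the data sets were combined.
    return (0.4 * purchases[person]
            + 0.4 * travel[person]
            + 0.2 * balances[person])

ranked = sorted(purchases, key=risk_score, reverse=True)
print(ranked[0])  # -> bob
```

The unsettling part isn't the arithmetic, which is trivial; it's that each data set was collected for a different purpose, and the subject has no way to contest a model they don't know exists.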

From the average citizen’s standpoint, however, we don’t actually know what data is being combined and correlated, nor what assumptions are being made to come up with this lens.  Now, if we go back to the university or employer examples, one can imagine a near future where employers (who are already making use of people’s credit records) start to combine available data sets to determine whether an employment prospect is suitable for their company.  That prospect won’t know the model against which they are being evaluated. 

Now, in a world where kids will make mistakes, and mistakes won’t disappear, one can argue that mistakes will come to be understood as part of what makes us human (call it being a “normal” person).  One could also argue that a college kid who screws up her credit while in school learns a valuable lesson which may make her more vigilant in the future about paying bills on time.  Employers may begin to understand this and derive similar conclusions which they will build into their models.

Hence, those kids who worried about not having anything negative about themselves appear anywhere might actually be the ones at a disadvantage.  But the problem is no one will really ever know.  Unless the models are made public (which is not likely unless the current laws change), the best we can do is live our lives in a way that is respectful to others and society in general, accept our mistakes, and keep living.

In a world where transparency rules, trying to be abnormally good makes one less human and hence less desirable.  It’s almost as though the lack of having made any mistakes brings suspicion on a person that they’re either gaming the system or likely to be more destructive in the future, though one can still imagine some less progressive companies deciding to eliminate candidates with any blemishes.  In other words, no different than the situation we have today.  This actually reminds me of a college friend who didn’t drink while we were in college; he got married right after graduation but within 5 years was divorced and an alcoholic.  He kept a natural impulse bottled up inside him and it cost him more dearly when he finally succumbed.

Sorry for the long response/comment, but I thought it worth sharing a position on how things might change/evolve in the future.

Expectation of Privacy

How long before the Fourth Amendment to the United States Constitution becomes just a collection of empty words?  Recently there have been a couple of court cases that have provided conflicting guidance on the legal test for the applicability of privacy protections.  It’s worth reading the Wikipedia entry for Expectation of Privacy to get a very basic understanding of the legal tenets behind this.  In both cases the issue stems from the FBI’s use of GPS devices on suspects’ cars without a warrant.  In the first case, United States v. Maynard, the U.S. Court of Appeals for the District of Columbia decided against “always-on” surveillance, holding that there had been a Fourth Amendment violation.  In the second case, United States v. Pineda-Moreno, the U.S. Court of Appeals for the Ninth Circuit decided that similar GPS tracking was not in violation of the Fourth Amendment.  Clearly, this looks like an issue that may need to reach the U.S. Supreme Court to be fully resolved.

However, what has gotten my attention in these cases is not so much the expectation of privacy issue per se, but that we may be losing this expectation by virtue of technologies that in some cases are being mounted on our property (i.e. GPS devices under cars), and the question of where that ends.  With the continuing growth of smartphones, most of which have some form of GPS embedded in them, does U.S. v. Pineda-Moreno imply that we are all slowly giving up our Fourth Amendment rights through the use of various technologies?

Recently, there has been a spate of stories about how RFID chips are being put to use for tracking various groups of people’s activities.  The first announcement was about how a school in California was going to provide jerseys to kids with RFID chips embedded in them to reduce the cost of tracking them under a program called Child Location, Observation and Utilization Data System (a.k.a. CLOUDS).  Note that the problem Contra Costa County is trying to solve seems valid and legitimate, but the unintended (or ignored) consequences could be significant (if my kid wears her jersey home and we go out for dinner, does that imply that I have given up my expectation of privacy?).

The next program is being deployed at Northern Arizona University, to track student attendance.  While I think the administrators here have forgotten that this is a university, where the incentive to learn and graduate should have nothing to do with attendance, at issue is the fact that students are being tracked.  While the use here is clearly more speculative, one could rationalize its value to the university in terms of better understanding resource deployment and utilization (i.e. using smaller rooms for classes most students tend not to attend).

The third program, which is deployed at a senior citizen caring facility in Milwaukee, WI, is meant to “allow designated officials caring for these senior citizens to know their whereabouts and activities”.  Can anyone argue with this use?  Probably not on the face of it.

All of these programs raise the concern of how the use of this information against the very citizens these applications are intended to serve will be treated by the courts in support (or not) of our protections under the Fourth Amendment.  While I’m sure there is some minimal set of security standards these systems are held to, clearly the applications they are being used for would not warrant the sort of security one might expect from an RFID deployment on passports.  But to the extent that information from these devices can be used against us, the cost and the need for paramount security rise accordingly.  As most security experts know, RFID chips can be tampered with.  If you don’t believe that, just ask the Germans, who recently had the technology they are proposing to use for their citizens’ ID cards compromised.

My point here is that we put a lot of trust into technologies for specific applications, but when the data that emanates from these systems is used for unintended purposes, the consequences can be severe.  Data from applications that make use of RFID chips suffers the same privacy issues raised by how our other on- and off-line information is collected, stored, combined and disseminated.  While there are loads of issues around those, when we begin to add the legal ramifications around use of this information, we really need to step back and more fully consider the consequences of these supposedly useful applications.

UPDATE: Looks like this topic is getting plenty of attention.  Just caught the following ABC News report titled “What Info Can Uncle Sam Dig Up About You?”

UPDATE: Here’s another great analysis titled “GPS Monitoring Revisited” which goes over the two cases listed above in great detail about the rulings themselves.

UPDATE: Excellent analysis by Prof. Susan Freiwald comparing and contrasting the conflicting GPS tracking rulings mentioned above.

More thoughts on Privacy

While it sometimes shocks me that people pay so little attention to the privacy issues that are quickly overtaking us, I also realize that it’s no one’s fault.  It can be hard to see the imminent harm if you’re not spending time reading and understanding what’s happening with information about you.  We all have busy lives and the issues here frequently require more than a five-minute primer to appreciate.  Who has the time?  More importantly, who cares if you’ve done nothing wrong or have nothing to be ashamed of?  Heck, if you are careful about what you say, the views and pictures you share, and appropriately set your various privacy settings online, what’s there to worry about?  There’s also, of course, the attitude of “so what if marketers know information about me to try to sell me stuff?”  Few among us can point to anything having gone wrong with our online identities, so what’s there really to worry about?

These questions reflect the refrains that are most frequently used around privacy issues, but are unfortunately narrow in their perspectives.

I suspect that most people lock the doors to their homes.  Not everyone does, but many people do.  Why is that?  Well, one might say it’s more of a security issue, protecting one’s home against burglars.  Another reason might be not wanting neighbors to barge in unannounced, making it a bit more of a privacy issue.  The reality, however, is that most people know very few people who have had their homes burglarized (whether they were home or not), and it’s infrequent for neighbors these days to stop by unannounced.  And yet, we still lock our doors.  Yes, privacy and security remain important and we want some control over them.  You never know when crime will strike at your home, so better safe than sorry, right?

When meeting someone for the first time, whether in a social or professional environment, if you were asked for your home address and social security number, it’s not likely that in most circumstances you would share this information.  Heck, sometimes we’re even nervous about giving our full names.  Why?  Well, for one thing, meeting someone for the first time doesn’t give you enough information to assess how they might use what you share.  It’s a trust thing.  It’s also a privacy and security thing.  I wouldn’t want this new acquaintance knowing where I live, nor would I see any reason why they should have my social security number other than for nefarious uses.  As for my name, there’s some practical obscurity in the fact that if I didn’t provide much else, they might still have a difficult time finding me (*if* I didn’t want to be found by that person).

OK, now let’s explore three different types of scenarios.  In the first one, let’s say you had a perfect credit record to date and applied for a home loan.  After a few days the bank came back and declined your loan, offering no real reason other than that you didn’t fit their risk profile.  You weren’t part of a minority group and you’ve never been arrested, so what else could possibly be at issue here?  Could it be that people who visit the same types of Web sites you do tend to default on their home loans more frequently?  

In the second scenario, imagine your health insurance plan raising your rates (oh yeah, that would be a surprise ;).  What if it turns out that the health insurance provider has been buying data about its customers’ transactions and correlating these with local loyalty program data for “marketing” purposes?  Further, what if as part of their analysis they also realized that based on the foods you buy (mmm, chocolate) you fall into a high-risk group for health issues in the next few years of your life?  This example could also work at a more aggregated level; in other words, people in your zip code tend to be less healthy than those in a neighboring zip code, but who knew?  More importantly, who would tell you, or how would you find out about these practices?

In the third scenario, you are interviewing for a job and share everything you think is relevant with the prospective employer.  But do you know if that’s all *they* find relevant?  Well, you can go here to see all of the services that offer credit reports and other forms of screening for prospective employees.  There are some efforts to curb the allowed uses of job seekers’ credit reports, but with the amount of new data becoming available about people, credit reports may not even be necessary for prospective employers to dig into your life as deeply as they determine to be relevant.

One more example of yet a different sort.  Imagine you have successfully kept yourself from joining any social networks despite the urging of many of your friends and a general social pressure to do so.  But you remain a social person with friends and neighbors and have been known to enjoy going to a BBQ or two.  On several occasions, as people are apt to do these days, pictures are taken and you appear in several.  They are no more than pictures of you at these BBQs.  However, your friends tag you even though you’re not on the social network, which means you never find out that your picture is up online.  Now, because you’ve been tagged in a field designated for people’s names, the service knows you’re a person and that you’re related in some way to the other people who appear in the pics.  Given how much the service knows about those other people, how hard do you think it would be to determine who you are by looking you up in one of several people search engines or using services like Rapleaf, Lexis-Nexis, Experian, et al.?  Because of the information that is available about the other people around you, several assumptions can begin to take place, perhaps as simply as looking for people with your name in the same city and state as the others.  In other words, privacy isn’t just an issue that affects those who are online; the ability to collect, store, combine and disseminate data so easily is proving a good way to capture everyone into these databases, whether they have chosen to participate in the open sharing of their information or not.
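The “same city and state as the others” step above is just a filter, and a toy version of it fits in a few lines.  This is a hypothetical sketch with invented names and records; real people-search services and re-identification techniques are far more sophisticated, but the underlying idea is this simple.

```python
# Hypothetical sketch of the re-identification step described above:
# given a name tagged in a photo and the known home cities of the other
# people in it, narrow a people-search index down to likely matches.
# All names and records here are invented.

people_index = [
    {"name": "Pat Smith", "city": "Austin"},
    {"name": "Pat Smith", "city": "Denver"},
    {"name": "Pat Smith", "city": "Boston"},
]

def likely_matches(tagged_name, friends_cities):
    # Assume the untagged person probably lives where their friends do.
    return [rec for rec in people_index
            if rec["name"] == tagged_name and rec["city"] in friends_cities]

# The other faces in the photo resolve to profiles in Austin, so three
# candidate "Pat Smith" records collapse to one:
print(likely_matches("Pat Smith", {"Austin"}))
```

Notice that the person being identified contributed nothing to this process; every input came from other people’s sharing.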

These examples point to the crux of some of our privacy problems: the high level of trust that many companies are putting in technologies like network analysis, collaborative filtering and behavioral analysis.  An interesting piece in The Economist talks about how some of these technologies are being put to use.  What’s challenging to deal with, however, is that statistics don’t account for the original purpose each data set was collected to serve, and where analyses strip away the original context of the data collected, they miss obvious mistakes.  In the context of ad targeting, it doesn’t really matter if the wrong soap was advertised to me, but in areas that are much more meaningful to our lives, things like home ownership, health concerns, safety and security, these technologies do not yet have the necessary throttles and easy means of user intervention that would let us keep their results in check.  Too much faith is being put into these technologies without consideration for the human impact.

Equally important, however, is the lack of transparency around all of these uses.  Today’s privacy regulations don’t enable individuals to understand the extent to which data about them is sent to other services, or how their existing services are using data from other sources in making decisions that will affect them.  Some of these effects will be financial; others will be in the sorts of opportunities made available to them.

My goal here is not to sound conspiratorial, as I don’t believe a conspiracy is going on.  What I believe is that lots of people are doing their jobs, but due to a lack of coordination and balance between the interests of users and the interests of organizations, we’re on a collision course that most people are not prepared for.  A rude awakening of sorts.  When I think of the person at the bank who decides to explore the correlations between Web viewing habits and loan-defaulting deadbeats, I believe that person is doing their job.  They are trying to reduce the risk for their employer.  They are thinking “out of the box” in terms of what other possible signals they could use to increase the likelihood that the bank gets back the money it loans out.  Same with the analyst at the insurance company who begins to dabble with transactional and loyalty program data.  In both cases, these folks are dealing with esoteric statistics and data clustering problems, but the impact this information has on us is significant, and we need to bring the same sort of transparency that is being demanded of our lives to the organizations and the processes they use to analyze information about us.

Clearly, all this gets even more unnerving when we hear of governments making use of similar commercial data sources and technologies to try to predict who among us is an “evil doer”.  OK, that’s enough for this episode.  I hope to share a little bit about how we got to this point and why things have changed so much.  As well, I see a common thread between privacy, security, intellectual property issues and organizational transparency, enabled by the fact that so much is turning into easily copyable and transportable bits of information, which I also want to explore further.