Google+: The Pretty Data Bow

With Google+ it appears that Google is pulling out all the stops and bringing its information might to the party, and in the process it may end up with an even better trove of user data than what Facebook has achieved to date.  What’s more, Google could have a much easier time building this up, and the quality of the data may actually get more and more accurate because of its numerous touch points with users.  Think about it for a moment: Google has a significant number of email users, many of whom use Gmail as their primary personal email address, not to mention lots of small and medium-sized companies using it as the primary business email for their employees.  Google Apps are being shared by more and more people with more and more people every day.  Google Checkout, while not a raging success and pale by comparison to PayPal, still has a significant user base and is still being worked on and evolved.  Google has also normalized YouTube’s user accounts into its profile management systems.  If we just consider these services, that’s a lot of user information, or at the very least a large number of user identifiers and names that Google already has for every one of us.

So I spent last night setting up Google+ and was struck by how many of my friends and acquaintances could easily be found.  However, it was different aspects of them that surfaced.  Because Google’s profiles are used by all of its products, depending on how someone used Google’s services their profile would be completed in a particular way.  In some cases, I’d find multiple profiles for a single user with different email addresses.  Not knowing how I would be interacting with that person, I tended to grab all of their visible profiles and associate them with one or more “Circles,” making sure only that these profiles didn’t belong to different people.

For my profile, I decided that, as on Facebook, I’d change my name to “P-Air”.  However, as I was executing this, a reminder explained that the change would take effect across my entire Google profile, which meant it would impact my other, perhaps more serious, uses of Google’s services.  While on Facebook I could ignore their request that I use my real name, on Google+, unless I do quite a bit more work creating a fake email address and a profile to go along with it, they have in effect forced me into using my real name.  This got me thinking about how all of their other services that appear as independently operating components (e.g., Google Checkout), whether I use them frequently or not, can now come to bear on this service.  As well, because I do like the “Circles” concept, which Facebook really dropped the ball on with their Groups, I’m now classifying my interactions into more meaningful buckets that Google can act upon.  Wow!  If we thought that Facebook was getting a free ride from the user-generated data it gathers and mines, Google has just gone to the next level.  What’s most diabolical about all of this is that Google+ feels very smooth.  Forgetting for a second that we are more the product in this new model than we have ever been, now we may be blinded by how nicely it all comes together.

Now the question is, can I trust Google more with my privacy, even with the settings they have introduced from the get-go, than I can Facebook?  This feels kind of like asking whether I find politicians of the Democratic or Republican persuasion more trustworthy 😉

Google Wallet not necessary for VRM realization @dsearls

Doc Searls, a thought leader for whom I have tremendous respect and who has most recently been the driving force behind the concept of VRM (Vendor Relationship Management) through the ProjectVRM initiative at the Berkman Center, authored a blog post today on Google’s new wallet initiative titled “Google Wallet and VRM”.  To say that he thinks this is important is an understatement when he makes the bold claim that, “I think it’s the most important thing Google has launched since the search engine.”  Coming from Doc, this is a significant statement.  However, when reading through the first four reasons he provides for this, I was left wanting, and felt like the technological fascination with what could be done here ignored some of the practical realities of our global village.  With each of the initial four reasons provided, I felt a visceral objection arise, and that’s what led me to write this blog post.  It’s also evident that perhaps our main difference of opinion on this matter is that I see the possibility of doing everything he suggests without the need for our phone to act as the main instrument used to effect a transaction.  To me, it’s the issue of taking our existing distributed card systems and turning them into a centralized point of failure.

The best way for me to go through this is to follow Doc’s reasons for why he believes in the importance of the Google Wallet initiative and provide my counter-arguments to each of them.  The discussion below will only make sense in the context of Doc’s post, so you may want to read that first or have it open in another window for reference.

“Reason #1: We’ve always needed an electronic wallet, especially one in our mobile phone.”: Sorry, but this is a techie’s wishlist item.  No, normal people have never “needed” an electronic wallet, much less one on their mobile phone.  Smartphones may be all the rage here, but in many of the most populous countries that has not been the case, and they seem to be doing fine without them.  In Africa, entrepreneurs have innovated around using airtime minutes as currency, which achieves a nice balance of anonymity and the ability to transact in a low-tech environment.

“Reason #2: We’ve needed one from somebody who doesn’t also have a hand in our pocket.”: Perhaps I’m misunderstanding this point, so I’m open to being corrected, but the line that didn’t sit well with me was, “List the things Google does but doesn’t make money with, and you’ll have a roster of businesses that needed commodification.”  Would Gmail or YouTube or Maps fall into this category?  Would their efforts with Google Books fall into this category?  What exactly are the businesses that Google does not make money with?  Last I checked, Google was in the “monetizing attention” business, and all of its efforts focus on collecting information about users in order to better monetize their attention.  Is this a 2nd or 3rd order effect?  Perhaps it was at first, but they have since perfected this monetization machine, so I’d argue that it’s very much a first-order effect.  In other words, Google also has a hand in our pocket, and the currency it’s extracting is information, the most valuable currency of them all.  It’s actually a more dangerous hand in the context of an e-wallet application, since Google gains tremendous advantage and control from learning about our transaction history: control over its users, and advantage over its competitors and every other company selling anything.

“Reason #3: This reduces friction in a huge way.”: This is the bill of goods online users of applications and services have been sold since the beginning of the commercial Internet (I’ve been along for the ride the whole way).  But what is not discussed are the tradeoffs users have made for all of this convenience and removal of friction.  The tradeoffs have come in no smaller ways than the forfeiting of Constitutional rights we were afforded before all of this.  Yikes!  Can’t believe I went there 😉  OK, I’ll discuss this point further at the end of this post, but for this section let me get back to the more specific points that are raised.  Wallets slow us down at checkout, whether it’s from pulling out cash or cards.  Loyalty cards are a cumbersome addition to the checkout process.  How about if we talk about the current state of the best smartphones (never mind the mobile phones used by the unwashed masses, for now)?  How many times have you tried to locate an app on your iOS, Android, Blackberry, Windows, etc. mobile device and either not found it quickly (it’s on the third swipe screen at the bottom right), or touched the screen but found it non-responsive even after multiple taps, or had to reboot your device, or seen a pop-up flash showing some sort of exception error, or run out of power because you forgot to plug it in before leaving the house, or forgotten it at home altogether, or watched someone struggle typing on the screen keyboard, miniature keypad or numeric keypad?  I dread the idea of being in line at a Starbucks and watching people struggle with their devices in any of these ways, and that would be a relatively simple transaction, never mind something more complex.  Already I see the lines at the self-serve checkouts in grocery stores getting longer than the ones with human cashiers.  Now, on the flip side, with a credit card you or the cashier can always type in the number if the reader doesn’t work (worst case scenario, the cashier can even call the card number in to the credit card processor), and no power to your card is required for this.  While the idea of being able to carry everything in your device sounds appealing, the practical issue is that we become incredibly dependent on a single point of failure.  When there are problems with the device (and I defy anyone to name a device that has never had a problem), it’s not just one payment instrument or loyalty card that’s affected, they all are, and all at once.  The tradeoff between a point solution for reducing the friction in our transactions, versus the greater systemic failure that could increase the friction in our lives (imagine losing the smartphone, or having it remotely hacked or other nefarious acts done to it because it will be worth a lot to the attacker), is not worth it.  At least not as discussed here.

“Reason #4: Now customers can truly relate with vendors.”: This one is a hot-button item for ProjectVRM and I like it, but not in the context of Google doing it.  This intermediary role needs to be handled by either a non-profit or a company that works on behalf of consumers because consumers pay for the service (note, I’m not specifying how they pay, but only where consumers are the primary customer of this service provider will they be able to trust that their needs and interests will be respected first and foremost).  Google’s efforts to know everything about me, and in turn be able to use this information in ways that as a user I never intended, do not sit well.  The temptation to use this information in ways that may not necessarily be in the best interest of consumers, but may be in the best interest of Google shareholders, is too great a power to bequeath to any company whose objectives are not primarily aligned with, or focused on, serving the individual.  Also, in their current state, privacy laws do not sufficiently protect users from how their information is used, nor provide them enough transparency and control to have confidence and trust in such services.  Having said this, today email and a hosted service where the info from all of one’s payment and loyalty instruments can be brought to bear (something more closely resembling what Mint has done) make sense, but I would keep all of these aggregation services independent of the device.  In other words, there should be no need for the Google Wallet to accomplish this.  When positioning this as a Mint-like service, we see that Reasons 5 through 11 all still apply, but we’ve separated ourselves from being at the mercy of our device for the actual transaction.  Today, when I withdraw money with my ATM card, instead of getting a receipt at the machine it emails me that info.  When I make a purchase at the Apple store, they email me the receipt.  We may need open standards around portability for making those emails machine readable and for them to be sent to our personal transaction aggregating services (e.g., Mint or other similar services), but we can accomplish this without the need for the Google Wallet per se.
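To make the emailed, machine-readable receipt idea a bit more concrete, here is a minimal sketch in Python.  The payload format, field names, and ingest helper are all my own illustrative assumptions, not an existing standard or any particular service’s API:

    import json
    from datetime import date

    # A hypothetical machine-readable receipt of the kind an open
    # portability standard might define. Every field name here is an
    # illustrative assumption, not an existing specification.
    receipt = {
        "schema": "example.org/receipt/v0",   # assumed schema identifier
        "merchant": "Example Coffee Shop",
        "date": date(2011, 6, 1).isoformat(),
        "currency": "USD",
        "total": 4.75,
        "items": [
            {"description": "Large coffee", "amount": 2.25},
            {"description": "Bagel", "amount": 2.50},
        ],
        "payment_instrument": "visa-1234",    # a reference, not the card number
    }

    def ingest(receipt_json, ledger):
        """What a user-controlled, Mint-like transaction aggregator might do
        with a receipt that arrives by email."""
        parsed = json.loads(receipt_json)
        ledger.append({
            "date": parsed["date"],
            "merchant": parsed["merchant"],
            "total": parsed["total"],
        })

    ledger = []
    ingest(json.dumps(receipt), ledger)
    print(ledger)

The point is that the aggregation happens in a service the user chooses, independent of whichever card, bank or phone executed the transaction.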

None of the other reasons really require the Google Wallet to be realizable, and I’d argue that it’s not by turning our phones into the means for making payments that we benefit, but by creating transaction aggregating services focused on respecting the user that we make the gains.  Where a true wallet application for payment would have been interesting is with the prospect of something like David Chaum’s DigiCash (let’s see who’s awake and remembers them from back in the day): the ability to perform truly anonymous, cash-like transactions.  That was something for which a device would be needed at the point of sale, though with today’s smartcard technology, I could see it being possible without the need for the phone as payment device.  These could still provide useful transaction data for the user, while the merchant would not necessarily have information on the transacting user other than knowing that the transaction was good and money was transferred into their bank account.

Perhaps now is a good time to elaborate on the more general concerns with hosted and aggregated transaction services.  Service providers that would be maintaining transaction data for us constitute a third party in a transaction (first party = me; second party = merchant; third parties = the credit card bank and any transaction aggregating services).  Given the state of the ECPA (Electronic Communications Privacy Act), the idea that government could gain access to this info in a nice neat package, at will (OK, maybe with a subpoena, but certainly no warrant requirement), is truly frightening.  While we can try to divorce the wish list of capabilities we would want from transaction aggregating services from the laws around us, the fact of the matter is that by using many of the SaaS services of today we are slowly diluting our Fourth Amendment rights.  It’s nice to push the envelope on ways of making life better for individuals, but in a setting where governments have been attacking what few privacy rights we have online, using industry to carry out their assaults (see Amazon shutting down Wikileaks’ servers, and Paypal, Visa & Mastercard shutting down their merchant account), and making more and more changes that advantage industry in its quest for unfettered access to people’s information, I’m concerned that by making it easier for people to manage their info we are also making it easier for governments to control and coerce their own citizens.  We need to get the privacy statutes in place *before* radically facilitating the aggregation of information on citizens, otherwise our best intentions risk putting us in prisons of our own design.  Sorry for the heavy ending here, but there doesn’t appear to be any indication that governments around the world are relenting in their desire to legislate every aspect of our lives, in and out of bedrooms, so we shouldn’t make it easier for them to monitor us.  The world Kafka imagined in The Trial is sadly coming to life, most recently exemplified by the DOJ’s refusal to make its interpretation of the recently renewed Patriot Act clear.  We have to be awake and more alert to this as we develop the next generation of online services.

Privacy, It’s All About Context

Recently, I engaged some friends in a discussion about privacy issues.  In some cases they didn't see the big deal until I began to articulate the importance of these issues with examples.  Going through this process and trying to figure out how to express the various tensions at play helped me gain some insight into why it's been challenging to subscribe to some of the notions about privacy that have been suggested by legislative and industry efforts.  Some of this may be basic to many readers, but for those trying to come to terms with privacy issues, I'm hoping the following post will be helpful.

Let me start by saying that privacy, in all its glory, is about one thing: CONTEXT.  Whether we are talking about data or information, it's important to understand that without context, none of it has any value, whether good or bad or anything in between.  Words only get judged through the meaning we ascribe to them.  If Content is King, then Context is the Emperor, for data is not information without context, and information has no value without context.  Think about that.  By the same token, the context you apply to any data makes all the difference; it is this point that frequently confuses people.

Let's start by talking about something as simple as one's name and address information.  This is information that most people believe to be quintessentially personally identifiable information, the very thing that no marketer should ever get unless the person gives them explicit permission to have it.  Some people would go so far as to say that they "own" that information about themselves.  However, it's clear that this can't be.  In fact, the post office, your neighbors, anyone who finds your lost luggage, anyone who receives a "snail-mail" letter from you and all who see it pass by them, anyone you show your driver's license to, the previous tenants or owners of your home, and a host of other people, have or have had access to this information about you and can do what they wish with it.  For those people who have their names (or last names) on their mailboxes, anyone walking by their home or apartment building lobby could also have access to and make use of this information.  So how could this information be owned by anyone?  Perhaps the better question is what ownership of information would even entitle, since clearly it can be in many people's possession for good reason.  I won't address these questions here, though I personally don't believe information can be owned as property.  Instead, I'm going to touch on more fundamental issues.

In all of the examples of people who have your information, we should note that there are good reasons to let some of those people know or have access to it even without your explicit permission.  Even Facebook's recent announcement that it would allow developers to have access to its users' and their friends' addresses and phone numbers was made because new, creative applications can provide users with better and more useful services if they have access to this information.  So there's nothing inherently wrong with anyone having this information; what becomes troubling is when it appears or is used in a context that we did not permit, anticipate or have control over.

There's a company called Jigsaw, which was recently acquired by Salesforce.com.  The way the service worked is that sales people would enter the business card information of their contacts into the system in order to gain credits, which they could then use to get contact information about prospects.  This was a service targeted at sales people.  The company took a lot of flak from the media and from non-users (people who were not the target customers), because they felt violated by a service that encouraged sales people to hand over their contact databases in exchange for contact information on people they did not know and might want to reach out to.  Imagine that you have a business meeting and all the participants exchange cards.  While you never really know what any of the people receiving your card are going to do with it, you assume that they will respect that you gave it to them with some sort of implied confidence.  The reality is that at least one (if not more than one) person at the meeting will eventually enter your information into their company's CRM (Customer Relationship Management) system.  This also means that everyone in their company will have access to your contact information.  In other words, the context of giving your card to someone at a business meeting was fine, but the context of it ending up on some random service's mailing list or contacts database makes this less palatable.

Now let's take a slightly more extreme example: imagine getting into a fight with a friend and sending them an email where you say something to the effect of "eat shit and die".  By happenstance, a few days later your friend dies of a stroke due to some rare food allergy, but because the death happened under mysterious circumstances, investigators check your friend's email account.  On its own, and in the context that I'm pissed at my friend over an argument we had, "eat shit and die" is an understandable response.  The same words, in the context that my friend ate something which killed them, take on a whole new meaning.  They certainly are cause for concern and place me as a suspect in my friend's death.  Note that in one circumstance (or context) the words were fine, albeit strong and even hurtful; under circumstances that I could never have foreseen, the words become cause enough for me to look like a potential murderer.

Consider the recent events surrounding Wikileaks and the discomfort that government officials are going through as a result of the release of the confidential cables.  The pattern of behavior suggests that in the context of diplomatic communications, none of what was being said was of any serious consequence among the communicators and their intended audience.  However, once the context changed to a critical public that has not had the benefit of understanding how diplomatic policy is conducted, the words now have to answer to some very different interpretations.  What they say about statistics, that "statistics can be made to prove anything, even the truth", can also be said about context.  What is worrisome to most is that we never know when someone will take known information about us and process it through a context that we are not aware of or in control of.

What these examples show is that information, in and of itself, is not the problem we are facing when we discuss privacy issues.  The problem is that there has been no way to know (or control) the context under which information about us will be viewed or used over time, and hence even the most seemingly innocent data or information about us in one context can become an indictment of our character, or worse, when viewed under another.  When we give information about ourselves to someone or to a company knowing what it will be used for, we control the context and we are comforted by that.  As soon as more is done with this information about us, we lose that control and we lose that comfort.  Services compiling and aggregating information about each of us have existed for a long time, but most people were unaware or only mildly aware of this.  Most certainly aren't aware of the extent to which it has been happening.  Recently, Rapleaf shared some information about Microsoft and Google employee food buying habits.  They did this by combining the user data Rapleaf has with user data kept by a loyalty card data aggregator.  We could probably venture to guess that neither the Microsoft nor the Google employees who made purchases using their loyalty cards expected that information about their purchases, as it relates to their employment with these companies, would be used in this way.  By extension, if an insurance company wanted to price its insurance to either of these companies based on this data, that would certainly make many people very uncomfortable.  Was the individual shopper's data private?  Not really; anyone at the supermarket could have seen what each one of these people purchased, unless they placed a blanket over their shopping cart and at the checkout stand.  By virtue of using a loyalty card, the supermarket certainly had a record of the transaction, which means that the shoppers willingly agreed to be tracked, likely in exchange for some food item discounts.  Combining this information with their email addresses in order to determine where these people worked is not part of the context these users agreed to.
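To see mechanically how that kind of combination works, here is a minimal sketch in Python.  The data, field names, and the idea of joining on an email address are all illustrative assumptions on my part, not a description of how Rapleaf or any loyalty card aggregator actually operates:

    # Toy illustration (invented data, assumed field names) of how two
    # separately collected data sets can be joined on a shared identifier,
    # such as an email address, to produce inferences that neither data
    # source was given permission to make on its own.
    profiles = [   # what a people-data broker might hold
        {"email": "alice@example.com", "employer": "BigCo"},
        {"email": "bob@example.com", "employer": "OtherCo"},
    ]
    loyalty = [    # what a loyalty-card aggregator might hold
        {"email": "alice@example.com", "item": "frozen pizza"},
        {"email": "alice@example.com", "item": "salad"},
        {"email": "bob@example.com", "item": "soda"},
    ]

    # Join on email, then roll purchases up to the employer level.
    employer_by_email = {p["email"]: p["employer"] for p in profiles}
    purchases_by_employer = {}
    for row in loyalty:
        employer = employer_by_email.get(row["email"])
        if employer:
            purchases_by_employer.setdefault(employer, []).append(row["item"])

    print(purchases_by_employer)
    # {'BigCo': ['frozen pizza', 'salad'], 'OtherCo': ['soda']}

Neither data set looks particularly sensitive on its own; it is the join that creates the new, unagreed-to context.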

A recent example of law enforcement databases combining their information with marketing databases has been unsettling, not least because, as we know, many of the marketing databases lack the integrity and accuracy we would expect of data that is to be co-mingled with law enforcement data about us.  Context matters, and our ability to control and maintain this information is important.  Note that there are likely many reasons we would all provide more information about ourselves, and more accurate information at that, given the right value proposition.  If we understood how and when it would be used, we would be willing participants.  In the example above about the eating habits of employees being surreptitiously obtained, there are great inaccuracies, and that's part of what would be upsetting if we ever found out an insurance company was using these methods to obtain information about our eating habits.  By contrast, if insurance companies suggested that a healthy lifestyle would enable us to reduce our monthly health insurance bill (think Allstate's good driving record discount), then perhaps we would not only be more likely to maintain the accuracy of this information, but also motivated to provide it willingly.  All this, so long as we control that this is the only context under which that information would be put to use.

So where does that leave us?  Frankly, I'm not sure.  It's very hard to legislate or regulate the concept of context.  Heck, as it is our laws have very little notion of context.  If the contexts for exceptions to a law are not all considered in advance and written into the law, then the law is followed literally.  This is clearly unfortunate, since new contexts emerge all the time and at a pace faster than laws can be changed to address them.  We see this all the time, generally in heart-wrenching scenarios where the letter of the law is followed rather than the spirit of the law.  With privacy issues and the upcoming privacy bill, it feels like our legislators continue to play the game of trying to foresee all of the possible problematic contexts rather than understanding the need to establish frameworks that work for current unknowns.  As part of this framework, it's critical that we put people in control of their information.  I believe the result of doing this is that we will see the emergence of a new class of service provider that will help people manage and interact with their information, and with those [companies or people] they deal with, in ways that make this process easier.  Doc Searls over at Harvard's Berkman Center For Internet & Society has been working on ProjectVRM, which addresses how people can more easily and effectively interact with the companies they do business with through the concept of Vendor Relationship Management.

There is a brave new world coming and we should not lose sight of what we mean and understand by privacy in order to move the ball forward in a positive direction for our social and commercial ecosystems.  This, with an understanding that commerce is only a part of that, not the center around which everything revolves.  Privacy is all about Context, remember that.  In a future post, I'll try to discuss Context Arbitrage, which is where I believe a lot of money is being made today and why we need to get a handle on it.

In Response to @AdamThierer on “Digital Sensors, Darknets, Hyper-Transparency & the Future of Privacy”

Adam Thierer (@adamthierer), a senior research fellow at the Mercatus Center at George Mason University, and a regular and thoughtful contributor to the conversation on privacy issues, wrote a piece titled “Digital Sensors, Darknets, Hyper-Transparency & the Future of Privacy” on The Technology Liberation Front blog.  I wrote a fairly long comment that seemed to get accepted, but the commenting system they use on the blog, DISQUS, appears not to have actually recorded or accepted it.  At first I thought a site manager had deleted my comment, but after refreshing the page and going into the DISQUS system to find my history of comments, I noticed that it wasn’t there.  With that said, I’m rewriting the response here.

<soapbox>

Adam, I’m not sure why you’re so enamored with targeted advertising given how weak it continues to be and the fact that people don’t have an insatiable need to be advertised to, even if the products are supposedly “relevant” to them.  We don’t need to walk by a store at the mall and have it suggest what we might need.  Using behavioral advertising to justify the benefits of having information about us used by any organization in any way it chooses is simply a red herring.  Let me also separately suggest that in the same way that those who advocate that privacy is dead do so from multiple perspectives, I find that not all “privacy advocates” would subscribe to every position you assigned to them.  This topic, as you know better than most, is chock full’o nuances, some of which you reflect in your positions, and if we are to use Prof. Daniel Solove’s taxonomy of privacy as a framework to explore the different issues, the very topic of privacy means different things to different people.  Sorry, but I had to point this out since you seemed to use the term “privacy advocate” in a derogatory manner.

The fear being raised by the USA Today article you point out and the “What They Know” series posted by the Wall St. Journal is that in most aspects of society we have norms and rules in place that dictate the boundaries of various activities.  For example, we have rules for driving, we have rules for what the police are allowed to pull us over for, we have rules for what companies can say about their products, and so on.  However, as you correctly point out, the fall of practical obscurity has upset many apple carts.  Actually, it’s more than that.  The fact that the costs of collection, storage, aggregation/combination, and dissemination of data have dropped significantly has disrupted our privacy expectations.  This same dynamic has turned the copyright business model on its head.  It’s forcing us to rethink the norms and the rules of the road, so to speak.  Unfortunately, the development of technologies to leverage information about us continues to move ahead faster than we can get our arms around the important issues raised by its use.

Look, as a privacy advocate, I also understand technology and the possibilities it offers, but I have also seen how its application in one area is deemed sufficiently successful to apply to other areas where the parameters are often very different, frequently with less than desirable outcomes.  For example, collaborative filtering technology used to make product recommendations to users on web sites was being applied in some form under the Total Information Awareness program to determine likely terrorists.  Of course, it’s one thing if a product is wrongly proposed to me; it’s an entirely different thing if I’m falsely accused of being a terrorist because I share characteristics with known terrorists (especially given how little we actually know in advance, or even after, about such folks).  Note, the technology application is the same, but the parameters and tolerance for error, and the necessary recourse and remedies, are completely different.  Unfortunately, we don’t always see the people and institutions applying the technology sufficiently addressing the fidelity of their systems to deal with these issues.  This is also where the greatest dangers lie.  The use of these technologies for much more life-impacting applications (e.g., insurance (health, home, life, etc.)) has to be of concern to everyone.  Organizational transparency about how our information gets used has to be forthcoming, given that there already exists plenty of transparency on individuals’ data.  Online and offline data are quickly merging with so many sensors around, but today the use of our offline data is already more intrusive than most people can imagine.  There need to be rules for what’s acceptable and benefits society, and what is not acceptable use.  Individuals should have some say in how information about them is used to their benefit or detriment.  What’s wrong with that?
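To put some rough numbers on why the tolerance for error matters so much, here is a back-of-the-envelope sketch in Python.  Every figure in it is an assumption chosen purely for illustration, not a claim about any real screening program:

    # Base-rate arithmetic with assumed, illustrative numbers: even a very
    # accurate classifier applied to an extremely rare condition produces
    # mostly false accusations.
    population = 300_000_000       # assumed number of people screened
    actual_targets = 3_000         # assumed number of true targets (very rare)
    detection_rate = 0.99          # assumed: flags 99% of true targets
    false_positive_rate = 0.01     # assumed: wrongly flags 1% of everyone else

    flagged_true = actual_targets * detection_rate
    flagged_innocent = (population - actual_targets) * false_positive_rate
    precision = flagged_true / (flagged_true + flagged_innocent)

    print(f"Innocent people flagged: {flagged_innocent:,.0f}")
    print(f"Chance a flagged person is a real target: {precision:.2%}")
    # Roughly 3 million innocent people get flagged, and only about 0.1% of
    # those flagged are real targets -- an error profile that is fine for
    # recommending the wrong book, and unacceptable for accusing someone of
    # terrorism.

A wrongly recommended product costs a click; a wrong flag in this setting costs someone their liberty, which is why the same math demands entirely different recourse and remedies.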

While I completely agree that there are many legitimate benefits to the use of various targeting technologies, the temptation that such large databases of aggregated information about us pose for legislators and law enforcement is frequently too great for them not to take advantage of.  We already see an awful lot of one-way secrecy, where these databases are accessed by law enforcement under orders of secrecy (e.g., an NSL request or a 2703(d) order), or by companies without our knowledge for predatory behavior.  So the challenge becomes: how do we balance progress and the benefits that technology can afford us with the need to better manage how, and to whom, information about us is distributed?  I wouldn’t look at it as how we keep information about ourselves “secret”, since we never really did that in the past and would never reasonably be able to do that.  A secret tends to be information known to a small number of people (generally at least 2).  One could keep a secret, but there was never a guarantee it would remain so.  But the pace at which it could be shared was slow.  Technology has enabled that pace to change radically, which is what raises the urgency of deciding how we will better manage it.

On the point of “property-tizing” personal information, that has certainly been a position espoused by some in the U.S. privacy community, but in Europe and other parts of the world the focus has been on treating information about us and its use as a human rights issue, where it cannot simply be traded away the way property can.  While both frameworks aim to resolve similar issues, I believe the U.S. one is fraught with some of the paradoxes you refer to, which the international position does not have.  In the same way that copyright as a business model no longer seems effective (there are lots of better ways to monetize the value of content), I also don’t feel that information about us is definable in such a way that it can be “property-tized” 🙂  Because all of this information is so easily distributed and perfectly copied, property rights never seemed to make sense to me here.

Finally, on the point of transparency, there’s a difference between transparency and full, unobstructed access to absolutely any information or knowledge about a person.  You began to list exceptions like medical information, but I believe that in different contexts there’s much more information that people would prefer not to share or have brought to bear.  Hence why transparency needs to come with norms and rules of engagement.  For example, when we place constraints on what sort of data employers may consider in their hiring decisions, that’s a reasonable contextual limitation.  Data or information without context can incite people to react before they understand it, or can be used in ways in which it bears false witness to the facts.  There are many shortcomings to full, unfettered access to information about us that we need to address before such widespread uses start happening.  It’s not about putting the genie back in the bottle, but about setting up the right frameworks for acceptable uses of information about people (that’s us :).

</soapbox>

PayPal Should Have Stuck With “the Government Made Me Do It”

PayPal’s General Counsel, John Muller, issued a clarification of PayPal’s position on restricting the Wikileaks account earlier in the week.  You can read it for yourself, but I was struck by his description of their Acceptable Use Policy and how he uses it to justify the action taken.

Specifically, Mr. Muller explains:

“PayPal’s Acceptable Use Policy states that we do not allow any organization to use our service if it encourages, promotes, facilitates or instructs others to engage in illegal activity.”

A few paragraphs later he explains why Wikileaks’ account was restricted:

“The account was again reviewed last week after the U.S. Department of State publicized a letter to WikiLeaks on November 27, stating that WikiLeaks may be in possession of documents that were provided in violation of U.S. law.  PayPal was not contacted by any government organization in the U.S. or abroad. We restricted the account based on our Acceptable Use Policy review.  Ultimately, our difficult decision was based on a belief that the WikiLeaks website was encouraging sources to release classified material, which is likely a violation of law by the source.”

What PayPal is saying is that a site that in effect provides a safe haven for people to reveal illegal activity is actually encouraging people to engage in illegal activity.  The fact that someone broke the law in providing the most recent batch of cables does not mean that Wikileaks is encouraging people to break the law.  Certainly, none of the people who are providing the information are getting paid for doing so.  They are not seeking fame either.  So in what way, exactly, is Wikileaks encouraging people to engage in illegal activity?  It’s like saying that a gun store is promoting illegal activity because it sells guns, which can be used for committing illegal acts.

Mr. Muller also does something strange in the last part of the sentence where he describes their decision.  Instead of simply saying that the released classified material “is a violation of law by the source”, he throws in the qualifier “likely”, as in the released classified material “is *likely* a violation of law by the source”.  In other words, PayPal is acting on a hunch since they clearly don’t know that it’s a violation of the law.  Saying it’s likely doesn’t make it so.

After reading this nonsense, I would suggest to PayPal that they leverage the opportunity provided to them by our ethically challenged legislators, and just go along with the excuse that they got a call from Senator Lieberman pressuring them like the other companies, and be done with it.  At least then we can just say that PayPal bowed to government pressure, rather than having to think that this company is represented by morons, which is the natural conclusion to reach from the excuse they’re trying to get us to believe.  Really PayPal, that’s the best you got?

Privacy is Dead, Let’s Call the Whole Fourth Amendment Thing Off…and While We’re at it, Lose the First One Too

As a young man, I recall sharing with friends that while I held few principles (moral or otherwise), the few I held were dear to me.  In other words, I didn’t bother encumbering myself with lots of principles that I could never live up to.  This didn’t mean acting immorally, it just meant not taking absolute positions over moral or social issues since there were lots of imaginable circumstances under which I might not live up to such high ideals.  Perhaps it was my disdain for hypocrisy or simply being lazy about having to remember so many principles, but whatever it was, it made me feel better to come to terms with my humanity as someone with failings who wasn’t going to espouse moral superiority, especially on issues that I could imagine not living up to.

Over the past year, it has felt like patriotic fervor and a heavy dose of superior morality have crept into the political and legislative discourse.  Heck, the other night I heard the head of the USDA on the Colbert Report say that childhood obesity is a national security issue.  He said this with all of the conviction that Janet Napolitano and John Pistole have shown when asserting that groping and aggressive pat-downs are necessary for our national security.  It’s now clear to me that if you want to make any argument in our country, the best way to do so is to explain that it is being done to “protect the children” or for “national security”.  By the way, if we were playing rock-paper-scissors and two of those were represented as “protect the children” and “national security”, “national security” trumps “protect the children”, as exemplified by the groping of minors by the TSA.  Clearly an example where national security is more important than protecting the children, but I digress.

For all of the time that so many people and organizations (e.g., the EFF (Electronic Frontier Foundation), the ACLU (American Civil Liberties Union), EPIC (Electronic Privacy Information Center), the CDT (Center for Democracy and Technology), the CCR (Center for Constitutional Rights), et al.) have spent fending off government and commercial interests from violating our privacy, it seems that many more people simply don’t care about theirs.  Never mind that we all have things that we would prefer to keep confidential or simply not share with the world; when confronted, many people are beginning to say that they have nothing to hide from anyone.  The number of people who have said that they don’t care about the TSA’s approach, or that they understand the need for it, is simply staggering to me.

More recently, we have been seeing interviews with folks at bus stations going through pat-downs similar to those happening at airports, and many respond in interviews that they’re OK with it because “I feel better knowing that I won’t get blown up”.  Combine this with the lack of sensitivity and understanding on these issues being displayed by the Fourth Estate.  While I can rationalize that the average person doesn’t have time to think through the issues, we also see CEOs of significant technology companies (with vested interests, but don’t let that get in the way) offer different but equally impassioned perspectives about the death of privacy.  From Bruce Schneier’s blog:


“In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy — especially when it comes to young people.” (http://www.schneier.com/blog/archives/2010/04/privacy_and_con.html)

These are all people who should know better, and perhaps they do.

Our government, through its law enforcement agencies, is also constantly fighting to remove our privacy in the name of national security.  Of note here are their efforts to require the least amount of oversight possible for requests for cell location data.  Recently, conflicting opinions were given by the D.C. Circuit Court and the Third District Court, which may land this issue in front of the Supreme Court for some meaningful resolution.  Commercial and educational institutions want to track everyone with RFID (Radio Frequency Identification).  If you combine this with the government tracking issues being debated above, then the intrusions on our privacy become self-evident, and the institutions complicit.

So the question that comes to my mind is why do we have the Fourth Amendment to the U.S. Constitution?  If industry and government find it inconvenient, and a significant number of people in the country don’t care about it, then why do we have it?  Why not just stop the hypocrisy and minimize the amount of stuff we claim is important to us and simply discard this, since clearly few find it important?  Let’s stop saying that we’re fighting wars and securing our nation to protect all of our Constitution, since that is clearly not true.  Maybe we care about some of the other amendments, but the Fourth ain’t one of them.  While we’re at it, let’s examine the controversies that suggest that we also don’t hold the First Amendment to the U.S. Constitution in high esteem.

Not too long ago, we saw much commotion around the Park51 project (aka Cordoba House and the "Ground Zero Mosque") and whether the City of New York should allow its construction, as a result of protests from conservative groups and plenty of liberals misdirecting their empathy.  This of course raised a First Amendment issue around the free exercise of religion.  It seemed like an awful lot of people were against this construction.

Most recently, law enforcement, specifically the Justice Department and Immigration and Customs Enforcement, decided not to wait for the passage of the Combating Online Infringement and Counterfeits Act (aka COICA) before pursuing the shutdown of 82 Web sites that they considered illegal.  This took place under a forfeiture law.  However, in a few cases the domains themselves were not infringing on any laws and were merely search engines that complied with the DMCA (Digital Millennium Copyright Act), though in some cases they ended up pointing to sites that might have been infringing on copyrights.  Where sites were merely pointing to infringing ones, this is a First Amendment (protected) free speech issue, yet no due process is being afforded to make these determinations.

As many have no doubt heard on the news or read online, Wikileaks released secret U.S. information relating to the U.S. role in both the Iraq and Afghanistan wars.  The latest release related to U.S. diplomatic cables.  Clearly, the release of all of this information has given us and the world a rare glimpse into how our government has misled us and at times even claimed information to be of national security importance to justify its secrecy, when in reality it was for political expediency.  Rather than deal with the issues raised by these documents, our legislators and current and former government officials are calling for the blood of the messengers (e.g., “Rep. Peter King: Prosecute Wikileaks, Julian Assange”, “Wikileaks must be stopped”, “U.S. Senators call for Wikileaks to face criminal charges”).  Of course, the irony here is that the first people to have actually published information contained in the Wikileaks release for the public to see were some mainstream newspapers, including the NY Times, Der Spiegel, El Pais, Le Monde, and The Guardian (UK), yet we have yet to hear of any of them being accused of any wrongdoing.  That, and the fact that Wikileaks was not the actual leaker of the releases it has made available.  This event clearly raises First Amendment issues around journalistic freedom and freedom of speech.  We did not see Woodward and Bernstein accused of any wrongdoing during the leaks around Watergate, and Daniel Ellsberg still roams free despite the Pentagon Papers release, but it seems that Wikileaks has drawn the ire of the U.S. government.  Heck, Sen. Lieberman is using this as an opportunity to grandstand on his cause célèbre and to disembody the First Amendment.

With all of this effort, and the attempts to route around and through the First Amendment, it’s clear that it too is not highly valued.  The voices in support of Wikileaks are not nearly as loud as those who see its actions as criminal, at least in the U.S.  So there, we can now count another amendment of the U.S. Constitution that we could do without.  I’m not suggesting that people don’t try to live by these ideals, but why codify something that so few care about or believe in, or that so many truly think is getting in the way of our…wait for it…national security?  I suppose that removing these amendments from our Constitution would go a long way toward reducing the hypocrisy behind the positions that our commercial and government organizations, and many of our fellow citizens, have taken.

The Elephant in the Room Relating to Privacy; We’ve Never Had Any

As always, I continue to explore the bounds of what we mean by privacy, what we expect of privacy and what the issues are surrounding privacy.  The more I have read about the subject from people far smarter on it than I, and the more I've seen the hype or heard lay people's visceral reactions to the topic, the more I've been forced to think about what this "privacy thing" we all seem to harken back to and long for actually was.  What was in place before the digital world emerged with such forcefulness, that now, in its seeming absence, makes us feel vulnerable to unseen powers and uncontrollable forces?

I'm going to start with a simple example, as this helps to contextualize the discussion in my head.  Let's say I share information about my salary with a good friend.  Specifically, I tell him how much I earn per month and what sorts of bonuses I receive at year-end based on my performance.  This friend does not work with me or in my industry, and as far as I know, does not hang out with any of my business associates.  This may be information that I would not readily share with colleagues.  It's also information that I would consider private and hence not meant for public dissemination, but I *trust* my friend.  There's a good chance I would not feel the need to tell my friend not to share this information, because implicitly I'd feel that he would understand not to discuss it with others.  More importantly, to the extent that I don't believe he knows other colleagues of mine, I'd see no real harm in sharing it.  Now, jump forward a few months and my friend is at a party where he coincidentally meets one of my colleagues.  Over drinks, and while establishing their friendship, the discussion turns to how overpaid my colleague thinks some people at our company are.  My friend, on purpose or inadvertently (it really doesn't matter), shares the information about my compensation, perhaps even in defense of me being fairly compensated.  Has he violated my privacy?  Not really; I violated my privacy by sharing that information with him in the first place.  Even if I had couched the conversation by asking him not to say anything, the fact that I chose to share this at all means that any privacy violation is on me.

This is one example, but we can find millions of examples of how we have never really had as much privacy as we believed we did.  Whether it was buying whips and handcuffs at an S&M store, or picking up a prescription for a herpes drug at a pharmacy, or any number of embarrassing or socially awkward activities, the fact that we interact with another person or in a public place means that at least one other person (the sales clerk, or the pharmacist and doctor, in these examples) knows that information.  We *trust* that they will not widely publicize it, and for those who might want to embarrass us, there is a "practical obscurity" standing in the way of their finding it out.  It is difficult information to dig up because it would take constant surveillance and some investigative skill to learn about such activities.

If we think back to direct marketers in the 1970s and 1980s, they purchased lists that offered some basic segmentation and ways to filter for only those people whom they felt would make good prospects for their product offers.  Credit card companies have always been able to market to us based on our purchases.  But for marketers and anyone else who wanted access to this information, the high costs associated with obtaining it required them to have a solid return-on-investment model before proceeding.  Note that violating the privacy of celebrities had value to the discloser because it made news, and news outlets were willing to pay for any information that could help them sell more ads or more newspapers.

Now with the cost of collecting, storing, aggregating, combining and disseminating information having dropped so precipitously, it has become much easier for more constituencies to obtain information about us at little to no cost.  In other words, the ease of obtaining information about anyone is turning all of us, begrudgingly, into celebrities.  There are more and more ways to monetize information about just about anyone.  The practical obscurity we once enjoyed has been significantly reduced and since it is not only what we share that is collected, but what others share, it's hard to see a way to legitimately believe that we will ever enjoy the practical obscurity of the pre-digital days.  Like it or not, it does look like we are being forced into becoming more and more transparent about our lives, which will also require that we begin being more accepting and tolerant of what it means to be human, faults, blemishes and all.

From a legal perspective, the existing regulations around what is considered private versus public are fairly arbitrary given the advancements in technology.  For example, technology makes it possible to see what is happening behind walls through heat signatures.  But this technology is not usable on someone's home unless government agents have a warrant for it.  The technology used in airport screening devices has now also made its way into vans that can drive up next to cars and peer inside of them.  These also require a warrant for their use; should they be used without one, the information gathered by a government agent is not usable in court, but that doesn't change the fact that the agents were able to know or see whatever they wanted to.  An individual's phone logs are not considered private by the courts because a third party (the phone company) can see them at any time, and so there's no expectation of privacy there.  Government agents (which includes the police) do not need a warrant to access this information.

At the end of the day, the real privacy issue is that the digitization of data about us has made it so cheap and easy to move that practical obscurity has been reduced to the point where it no longer offers any serious defense against what can be known about us by any organization (or individual) that really wants to know.  The issue isn't that privacy no longer exists; it's that it only ever existed as an illusion, and now nearly all pretense of that illusion has been removed.  I'm not trying to be fatalistic about all of this, nor conspiratorial, but I hope to write future posts about what it means to live in a world where practical obscurities are no longer there to keep our privacy illusions alive, and then begin to explore some likely paths of evolution.  We need to come to terms with where things are now in order to begin forging a path for where we would like them to go.

Comment to Peter Fleischer’s blog post on “Exhibitionism, or Self-Expression?”

I’m using my blog to comment on Peter’s post as it appears that my response below was longer than what his blogging platform (Blogger) could accept.  Really, Google sets comment limits? 😉

First off, I highly recommend Peter’s blog if you are into privacy issues.  It’s nice to see that despite so much negative hype about Google’s general attitudes toward privacy, there are real people working there who have concerns and think about these issues like everyone else.

In reading this post’s comments on sharing, it brought me back to a debate last evening with a dear friend about this very issue.  However, where he argued in effect a position similar to Peter’s (be comfortable with what you’re going to post), he was viewing it as an issue for his children (currently 9 and 13 years old).  He raised the typical and oft-repeated anecdotes about college admissions offices doing online searches on candidates and employers making hiring decisions, and how negative information could impact his children’s future success.

My contention, however, is that the amount of data being put out there will soon become prohibitive for people to search against.  As Peter noted, first you have to determine whether the data you are reviewing is about the candidate you’re considering.  Then you have to determine the validity of the data and its source.

Collaborative filtering and network analysis tools and techniques are gaining ground in these areas, and are being applied to the selection processes for various activities.  Of course, behavioral and interest-based ad targeting has been at the forefront of this (e.g., Amazon’s “people who bought this also bought that”).  However, as you might be aware, when governments try to segment terrorists from non-terrorists (e.g., the “No Fly List”), they tend to bring more and different data sets to bear.  For example, they might apply clustering across credit records, travel logs, transaction records, and other data sets in order to come up with something like, “people who bought this and flew to these places and have open balances on their loans are more likely to be terrorists”.
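To illustrate the mechanics in the simplest possible terms, here is a toy sketch in Python; all of the data, features, and thresholds are invented for illustration, and real systems are of course far more elaborate:

    # Toy sketch (all data, features, and thresholds invented): disparate
    # data sets -- purchases, travel, credit -- merged into per-person
    # signals and run through a crude scoring rule of the "bought X, flew
    # to Y, carries open balances" variety.
    purchases = {"carol": ["fertilizer"], "dan": ["groceries"]}
    travel = {"carol": ["country_a"], "dan": ["country_b"]}
    credit = {"carol": {"open_balance": True}, "dan": {"open_balance": False}}

    def risk_score(person):
        score = 0
        if "fertilizer" in purchases.get(person, []):
            score += 1   # assumed "suspicious" purchase
        if "country_a" in travel.get(person, []):
            score += 1   # assumed "suspicious" destination
        if credit.get(person, {}).get("open_balance"):
            score += 1
        return score

    for person in ("carol", "dan"):
        print(person, "flagged" if risk_score(person) >= 2 else "not flagged")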

From the average citizen’s standpoint, however, we don’t actually know what data is being combined and correlated, nor what assumptions are being made to come up with this lens.  Now, if we go back to the university or employer examples, one can imagine a near future where employers (who already make use of people’s credit records) have access to too much data and start to combine available data sets to determine whether an employment prospect is suitable for their company.  That prospect won’t know the model against which they are being evaluated.

Now, in a world where kids will make mistakes and mistakes won’t disappear, one can argue that mistakes will come to be understood as part of what makes us human (call it being a “normal” person).  One could also argue that a college kid who screws up her credit while in school learns a valuable lesson which may make her more vigilant in the future about paying bills on time.  Employers may begin to understand this and derive similar conclusions, which they will build into their models.

Hence, those kids who worried about never letting anything negative about themselves appear anywhere might actually be the ones at a disadvantage.  But the problem is that no one will really ever know.  Unless the models are made public (which is not likely unless current laws change), the best we can do is live our lives in a way that is respectful to others and society in general, accept our mistakes, and keep living our lives.

In a world where transparency rules, trying to be abnormally good makes one less human and hence less desirable.  It’s almost as though never having made any mistakes brings suspicion on a person that they’re either gaming the system or likely to be more destructive in the future, though one can still imagine some less progressive companies deciding to eliminate candidates with any blemishes.  In other words, no different than the situation we have today.  This actually reminds me of a college friend who didn’t drink while we were in school.  He got married right after graduation, but within 5 years he was divorced and an alcoholic.  He kept a natural impulse bottled up inside him and it cost him all the more dearly when he finally succumbed.

Sorry for the long response/comment, but I thought it worth sharing a position on how things might change/evolve in the future.

Expectation of Privacy

How long before the Fourth Amendment to the United States Constitution becomes just a collection of empty words?  Recently there have been a couple of court cases that have provided conflicting guidance on the legal test for the applicability of privacy protections.  It’s worth reading the Wikipedia entry for Expectation of Privacy to get a very basic understanding of the legal tenets behind this.  In both cases the issue stems from the FBI’s use of GPS devices on suspects’ cars without a warrant.  In the first case, United States v. Maynard, the U.S. Court of Appeals for the District of Columbia ruled against “always-on” surveillance and held that there had been a Fourth Amendment violation.  In the second case, United States v. Pineda-Moreno, the U.S. Court of Appeals for the Ninth Circuit decided that similar GPS tracking was not in violation of the Fourth Amendment.  Clearly, this looks like an issue that may need to reach the U.S. Supreme Court to be fully resolved.

However, what has gotten my attention in these cases is not so much the expectation of privacy issue per se, but that we may be losing this expectation by virtue of some of the technologies that in some cases are being mounted on our own property (ie. GPS devices under cars), and it is unclear where that erosion stops.  With the continuing growth of smart phones, most of which have some form of GPS embedded in them, does the U.S. v. Pineda-Moreno case imply that we are all slowly giving up our Fourth Amendment rights through the use of various technologies?

Recently, there has been a spate of stories about how RFID chips are being put to use to track various groups of people’s activities.  The first announcement was about how a school in California was going to provide jerseys to kids with RFID chips embedded in them to reduce the cost of tracking them, under a program called Child Location, Observation and Utilization Data System (aka. CLOUDS).  Note that the problem Contra Costa County is trying to solve seems valid and legitimate, but the unintended (or ignored) consequences could be significant (if my kid wears her jersey home and we go out for dinner, does that imply that I have given up my expectation of privacy?).

The next program, being deployed at Northern Arizona University, is meant to track student attendance.  While I think the administrators here have forgotten that this is a university, where the incentive to learn and graduate should have nothing to do with attendance, at issue is the fact that students are being tracked.  The use here is clearly more speculative, but one could rationalize its value to the university in terms of better understanding resource deployment and utilization (ie. use smaller rooms for classes that most students tend not to attend).

The third program, deployed at a senior citizen care facility in Milwaukee, WI, is meant to “allow designated officials caring for these senior citizens to know their whereabouts and activities”.  Can anyone argue with this use?  Probably not on the face of it.

All of these programs raise the concern of how the use of this information against the very citizens these applications are intended to serve will be treated by the courts, in support (or not) of our protections under the Fourth Amendment.  While I’m sure there is a certain minimal set of security standards that these systems are being held to, clearly the applications they are being used for would not warrant the sort of security one might expect from an RFID deployment on passports.  But to the extent that information from these devices can be used against us, the cost and the need for paramount security rise accordingly.  As most security experts know, RFID chips can be tampered with.  If you don’t believe that, just ask the Germans, who recently had the technology they are proposing to use for their citizens’ ID cards compromised.

My point here is that we put a lot of trust into technologies for specific applications, but when the data that emanates from these systems is used for unintended purposes, the consequences can be severe.  Data from applications that make use of RFID chips suffers from the same privacy issues that are raised by how our other online and offline information is collected, stored, combined and disseminated.  While there are loads of issues around those, when we begin to add on the legal ramifications around use of this information, then we really need to step back and more fully consider the consequences of these supposedly useful applications.

UPDATE: Looks like this topic is getting plenty of attention.  Just caught the following ABC News report titled “What Info Can Uncle Sam Dig Up About You?”

UPDATE: Here’s another great analysis titled “GPS Monitoring Revisited” which goes over the two cases listed above in great detail about the rulings themselves.

UPDATE: Excellent analysis by Prof. Susan Freiwald comparing and contrasting the conflicting GPS tracking rulings mentioned above.

More thoughts on Privacy

While it sometimes shocks me that people pay so little attention to the privacy issues that are quickly overtaking us, I also realize that it’s no one’s fault.  It can be hard to see the imminent harm if you’re not spending time reading and understanding what’s happening with information about you.  We all have busy lives and the issues here frequently require more than a five minute primer to appreciate.  Who has the time?  More importantly, who cares if you’ve done nothing wrong or have nothing to be ashamed of?  Heck, if you are careful about what you say, the views and pictures you share, and appropriately set your various privacy settings online, what’s there to worry about?  There’s also, of course, the refrain of “so what if marketers know information about me to try to sell me stuff?”  Few among us can point to anything having gone wrong with our online identities, so what’s there really to worry about?

These questions reflect the refrains that are most frequently used around privacy issues, but are unfortunately narrow in their perspectives.

I suspect that most people lock the doors to their homes.  Not everyone does, but many people do.  Why is that?  Well, one might say it’s more of a security issue, protecting one’s home against burglars.  Another reason might be not wanting neighbors to barge in unannounced, making it a bit more of a privacy issue.  The reality, however, is that most people know very few people who have ever had their home burglarized (whether they were home at the time or not), and it’s infrequent these days for neighbors to stop by unannounced.  And yet, we still lock our doors.  Yes, privacy and security remain important and we want some control over them.  You never know when crime will strike at your home, so better safe than sorry, right?

When meeting someone for the first time, whether in a social or professional environment, if you were asked for your home address and social security number, it’s not likely that in most circumstances you would share this information.  Heck, sometimes we’re even nervous about giving our full names.  Why?  Well, for one thing, meeting someone for the first time doesn’t give you enough information to assess how they might use that information.  It’s a trust thing.  It’s also a privacy and security thing.  I wouldn’t want this new acquaintance knowing where I live, nor would I see any reason why they should have my social security number other than for nefarious uses.  As for my name, there’s some practical obscurity around the fact that if I didn’t provide much else, they might still have a difficult time finding me (*if* I didn’t want to be found by that person).

OK, now let’s explore three different types of scenarios.  In the first one, let’s say you had a perfect credit record to date and applied for a home loan.  After a few days the bank came back and said that they declined your loan, but offered no real reason for doing so other than you didn’t fit their risk profile.  You weren’t part of a minority group and you’ve never been arrested, so what else could possibly be at issue here?  Could it be that people who visit the same types of Web sites you do tend to default on their home loans more frequently?
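
To show how mechanically such a conclusion could be reached, here is a toy Python sketch of the kind of correlation a bank *could* compute if it somehow obtained applicants’ browsing categories; every record, category and number in it is invented for illustration.

```python
# A toy sketch of browsing-based risk scoring; all of this data is invented.
from collections import defaultdict

# Historical borrowers: the site categories they frequent and whether they defaulted.
history = [
    ({"news", "gardening"}, False),
    ({"gambling", "coupons"}, True),
    ({"gambling", "news"}, True),
    ({"gardening", "cooking"}, False),
]

# Default rate per browsing category: category -> [defaults, total borrowers].
counts = defaultdict(lambda: [0, 0])
for categories, defaulted in history:
    for cat in categories:
        counts[cat][0] += int(defaulted)
        counts[cat][1] += 1

def risk_score(categories):
    """Average historical default rate across the applicant's browsing categories."""
    rates = [counts[c][0] / counts[c][1] for c in categories if c in counts]
    return sum(rates) / len(rates) if rates else 0.0

applicant = {"gambling", "gardening"}   # a new applicant with a spotless credit record
print(round(risk_score(applicant), 2))  # the applicant never learns this number exists
```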

In the second scenario, imagine your health insurance plan raising your rates (oh yeah, that would be a surprise ;).  What if it turns out that the health insurance provider has been buying data about their customers’ transactions and correlating these with local loyalty program data for “marketing” purposes?  Further, what if as part of their analysis they also realized that, based on the foods you buy (mmm, chocolate), you fall into a high-risk group for health issues in the next few years of your life?  This example could also work at a more aggregated level, in other words people in your zip code tend to be less healthy than those in a neighboring zip code, but who knew?  More importantly, who would tell you, or how would you find out about these practices?

In the third scenario, you are interviewing for a job and share everything you think is relevant with the prospective employer.  But do you know if that’s all *they* find relevant?  Well, you can go here to see all of the services that offer credit reports and other forms of screening for prospective employees.  There are some efforts to curb the allowed uses of job seekers’ credit reports, but with the amount of new data becoming available about people, credit reports may not even be necessary for prospective employers to dig into your life as deeply as they deem relevant.

One more example of yet a different sort.  Imagine you have been able to successfully keep yourself off of any social networks despite the urging of many of your friends and a general social pressure to join.  But you remain a social person with friends and neighbors and have been known to enjoy going to a BBQ or two.  On several occasions, as people are apt to do these days, pictures are taken and you appear in several.  Nothing more than pictures of you at these BBQs.  However, your friends tag you even though you’re not on the social network, which means you never find out that your picture is up online.  Now, because you’ve been tagged in a field designated for people’s names, the service knows you’re a person and that you’re related in some way to the other people who appear in the pics.  Given how much the service knows about those other people, how hard do you think it would be to determine who you are by looking you up in one of several people search engines or using services like Rapleaf, Lexis-Nexis, Experian, et al.?  Because of the information that is available about the other people around you, several inferences become possible, perhaps as simple as looking for people with your name in the same city and state as the others.  In other words, privacy isn’t just an issue that affects those who are online; the ability to collect, store, combine and disseminate data so easily is proving a good way to capture everyone into these databases, whether they have chosen to participate in the open sharing of their information or not.
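
Here is a minimal sketch of how that kind of re-identification might work, assuming a toy people-search directory and only the name and location gleaned from the tags; all of the names, fields and records below are made up.

```python
# A toy re-identification sketch: match an untagged person against a
# hypothetical people-search directory using only a name and the city/state
# inferred from the other people in the photos.
people_directory = [
    {"name": "Alex Rivera", "city": "Milwaukee", "state": "WI", "employer": "Acme Co."},
    {"name": "Alex Rivera", "city": "Austin", "state": "TX", "employer": "Globex"},
    {"name": "Jamie Chen", "city": "Milwaukee", "state": "WI", "employer": "Initech"},
]

tagged_name = "Alex Rivera"               # taken from the photo tag
neighbors_location = ("Milwaukee", "WI")  # inferred from the tagged friends' profiles

candidates = [p for p in people_directory
              if p["name"] == tagged_name
              and (p["city"], p["state"]) == neighbors_location]

for match in candidates:
    print(match)  # one plausible identity, built entirely from other people's data
```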

These examples point to the crux of some of our privacy problems: the high level of trust that many companies are putting in technologies like network analysis, collaborative filtering and behavioral analysis.  An interesting piece in The Economist talks about how some of these technologies are being put to use.  What’s challenging to deal with, however, is that the statistics don’t account for the original purpose each data set was collected to serve, and because they are stripped of the original context of the data, they miss obvious mistakes.  In the context of ad targeting, it doesn’t really matter if the wrong soap was advertised to me, but in contexts that are much more meaningful to our lives, things like home ownership, health concerns, safety and security, these technologies do not yet have the necessary throttles and easy means of user intervention to keep their results in check.  Too much faith is being put into the hands of these technologies without consideration for the human impact.
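
For a sense of how mechanical these systems are, here is a tiny sketch of the item-to-item co-occurrence counting behind the “people who bought this also bought that” style of collaborative filtering (the purchase baskets are invented); nothing in the computation knows, or cares, why the data was collected in the first place.

```python
# A tiny item-to-item co-occurrence sketch of "people who bought this also
# bought that" style collaborative filtering; the purchase history is invented.
from collections import Counter
from itertools import combinations

purchases = [
    {"soap", "shampoo", "razor"},
    {"soap", "shampoo"},
    {"shampoo", "conditioner"},
    {"soap", "razor"},
]

# Count how often each pair of items shows up in the same basket.
co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, top_n=2):
    """Items most often bought alongside `item`, ranked by raw co-occurrence."""
    pairs = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return sorted(pairs, key=lambda p: -p[1])[:top_n]

print(also_bought("soap"))  # the items most frequently found in the same baskets
```

Swap the soap for loan defaults or health risks and the arithmetic is exactly the same, which is precisely the problem.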

Equally important, however, is the lack of transparency around all of these uses.  Today’s privacy regulations don’t enable individuals to understand the extent to which data about them is sent to other services, or how their existing services are using data from other sources in making decisions that will affect them.  Some of these effects will be financial, others will be in the sorts of opportunities made available to them.

My goal here is not to sound conspiratorial, as I don’t believe a conspiracy is going on.  What I believe is that lots of people are doing their jobs, but due to a lack of coordination and balancing between the interests of users and the interests of organizations, we’re on a collision course that most people are not prepared for.  A rude awakening of sorts.  When I think of the person at the bank who decides to explore the correlations between Web viewing habits and loan-defaulting deadbeats, I believe that person is doing their job.  They are trying to reduce the risk for their employer.  They are thinking “out of the box” in terms of what other possible signals they could use to increase the likelihood that the bank gets back the money it loans out.  Same with the analyst at the insurance company who begins to dabble with transactional and loyalty program data.  In both cases, these folks are dealing with esoteric statistics and data clustering problems, but the impact this information has on us is significant, and we need to bring the same sort of transparency that is being demanded of our lives to the organizations and the processes they use to analyze information about us.

Clearly, all this gets even more unnerving when we hear of government making use of similar commercial data sources and technologies to try to predict who among us is an “evil doer”.  OK, that’s enough for this episode.  I hope to share a little bit about how we got to this point and why things have changed so much.  As well, I see a common thread among privacy, security, intellectual property issues and organizational transparency, all enabled by the fact that so much is turning into easily copiable and transportable bits of information, which I also want to explore further.