Comment to Peter Fleischer’s blog post on “Exhibitionism, or Self-Expression?”

I’m using my blog to comment on Peter’s post as it appears that my response below was longer than what his blogging platform (Blogger) could accept.  Really, Google sets comment limits? 😉

First off, I highly recommend Peter’s blog if you are into privacy issues.  It’s nice to see that despite so much negative hype about Google’s general attitudes towards privacy, there are real people working there who have concerns and think about these issues like everyone else.

Reading this post’s comments on sharing brought me back to a debate last evening with a dear friend about this very issue.  However, where he debated in effect a similar position to Peter’s (be comfortable about what you’re going to post), he was viewing it as an issue for his children (currently 9 and 13 yrs old).  He raised the typical and oft-repeated anecdotes about college admissions offices doing online searches on candidates and employers making hiring decisions, and how damaging negative information could be to his children’s future success.

My contention, however, is that the amount of data being put out there will soon become prohibitively large for people to search against effectively.  As Peter noted, first you have to determine if the data you are reviewing is about the candidate you’re considering.  Then you have to determine the validity of the data and its source.

Collaborative filtering and network analysis tools and techniques are gaining ground in these areas, and are being applied to the selection process for various activities.  Of course, behavioral and interest-based ad targeting has been at the forefront of this (i.e. Amazon’s “people who bought this also bought that”).  However, as you might be aware, when governments try to segment terrorists from non-terrorists (i.e. the “No Fly List”), they tend to bring more and different data sets to bear.  For example, they might apply clustering around credit records, travel logs, transaction records, and other data sets in order to come up with something like, “people who bought this and flew to these places and have open balances on their loans are more likely to be terrorists”.
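A toy sketch of this kind of cross-data-set scoring might look like the following; every name, feature, and threshold here is invented purely for illustration, and real systems would use far more elaborate clustering models:

```python
# Hypothetical sketch: features pulled from separate data sets
# (credit records, travel logs, transactions) get combined into one
# profile vector per person, and profiles close to a "risk" centroid
# are flagged. All names and numbers are invented.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Each person: (open_loan_balance, flagged_flights, odd_purchases),
# each normalized to 0..1.
profiles = {
    "alice": (0.1, 0.0, 0.2),
    "bob":   (0.9, 0.8, 0.7),
    "carol": (0.2, 0.1, 0.1),
}

# A centroid learned (hypothetically) from labeled historical data.
risk_centroid = (0.8, 0.9, 0.8)

# Anyone close enough to the centroid gets flagged.
flagged = [name for name, vec in profiles.items()
           if distance(vec, risk_centroid) < 0.5]
print(flagged)
```

The unsettling part is exactly what the post describes: “bob” is flagged purely by proximity in a feature space he never knew existed.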

From the average citizen’s standpoint however, we don’t actually know what data is being combined and correlated, nor what assumptions are being made to come up with this lens.  Now, if we go back to the university or employer examples, one can imagine a near future where employers (who are already making use of people’s credit records), with access to so much data, start to combine available data sets to determine whether an employment prospect is suitable for their company.  That prospect won’t know the model against which they are being evaluated.

Now in a world where kids will make mistakes, and mistakes won’t disappear, one can argue that mistakes will be understood as part of what makes us human (call it being a “normal” person).  One could also argue that a college kid who screws up her credit while in college learns a valuable lesson which may make her more vigilant in the future about paying bills on time.  Employers may begin to understand this and derive similar conclusions which they will build into their models.

Hence, those kids who worried about never letting anything negative about themselves appear anywhere might actually be the ones at a disadvantage.  But the problem is no one will really ever know.  Unless the models are made public (which is not likely unless the current laws change), the best we can do is live our lives in a way that is respectful to others and society in general, accept our mistakes, and keep living our lives.

In a world where transparency rules, trying to be abnormally good makes one less human and hence less desirable.  It’s almost as though never having made any mistakes brings suspicion on a person that they’re either gaming the system or likely to be more destructive in the future, though one can still imagine some less progressive companies deciding to eliminate candidates with any blemishes.  In other words, no different than the situation we have today.  This actually reminds me of a college friend who didn’t drink while we were in college; he got married right after graduation but within 5 years was divorced and an alcoholic.  He kept a natural impulse bottled up inside him and it cost him more dearly when he finally succumbed.

Sorry for the long response/comment, but I thought it worth sharing a position on how things might change/evolve in the future.


Expectation of Privacy

How long before the Fourth Amendment to the United States Constitution becomes just a collection of empty words?  Recently there have been a couple of court cases that have provided conflicting guidance on the legal test for the applicability of privacy protections.  It’s worth reading the Wikipedia entry for Expectation of Privacy to get a very basic understanding of the legal tenets behind this.  In both cases the issue stems from the FBI’s use of GPS devices on suspects’ cars without a warrant.  In the first case, United States v. Maynard, the U.S. Court of Appeals for the District of Columbia decided against “always-on” surveillance and upheld that there had been a Fourth Amendment violation.  In the second case, United States v. Pineda-Moreno, the U.S. Court of Appeals for the Ninth Circuit decided that similar GPS tracking was not in violation of the Fourth Amendment.  Clearly, this looks like an issue that may need to reach the U.S. Supreme Court to be fully resolved.

However, what has gotten my attention in these cases is not so much the expectation of privacy issue per se, but that we may be losing this expectation by virtue of some of the technologies that in some cases are being mounted on our property (e.g. GPS devices under cars), and the question of where that ends.  With the continuing growth of smartphones, most of which have some form of GPS embedded, does the U.S. v. Pineda-Moreno case imply that we are all slowly giving up our Fourth Amendment rights through the use of various technologies?

Recently, there has been a spate of stories about how RFID chips are being put to use for tracking various groups of people’s activities.  The first announcement was about how a school in California was going to provide jerseys to kids with RFID chips embedded in them to reduce the cost of tracking them, under a program called the Child Location, Observation and Utilization Data System (a.k.a. CLOUDS).  Note that the problem Contra Costa County is trying to solve seems valid and legitimate, but the unintended (or ignored) consequences could be significant (if my kid wears her jersey home and we go out for dinner, does that imply that I have given up my expectation of privacy?).

The next program is being deployed at Northern Arizona University, to track student attendance.  While I think the administrators here have forgotten that this is a university, where the incentive to learn and graduate should have nothing to do with attendance, at issue is the fact that students are being tracked.  While the use here is clearly more speculative, one could rationalize its value to the university in terms of better understanding resource deployment and utilization (e.g. use smaller rooms for classes that most students tend not to attend).

The third program, which is deployed at a senior citizen caring facility in Milwaukee, WI, is meant to “allow designated officials caring for these senior citizens to know their whereabouts and activities”.  Can anyone argue with this use?  Probably not on the face of it.

All of these programs raise the concern of how the use of this information against the very citizens these applications are intended to serve will be treated by the courts in support (or not) of our protections under the Fourth Amendment.  While I’m sure there is a certain minimal set of security standards that these systems are being held to, clearly the applications they are being used for would not warrant the sort of security one might expect from an RFID deployment on passports.  But to the extent that information from these devices can be used against us, the cost and the need for paramount security rise accordingly.  As most security experts know, RFID chips can be tampered with.  If you don’t believe that, just ask the Germans, who recently had the technology they are proposing to use for their citizens’ ID cards compromised.

My point here is that we put a lot of trust into technologies for specific applications, but when the data that emanates from these systems is used for unintended purposes, the consequences can be severe.  Data from applications that make use of RFID chips suffers the same privacy issues raised by how our other on- and offline information is collected, stored, combined and disseminated.  While there are loads of issues around those, when we begin to add the legal ramifications around use of this information, we really need to step back and more fully consider the consequences of these supposedly useful applications.

UPDATE: Looks like this topic is getting plenty of attention.  Just caught the following ABC News report titled “What Info Can Uncle Sam Dig Up About You?”

UPDATE: Here’s another great analysis titled “GPS Monitoring Revisited” which goes over the rulings in the two cases listed above in great detail.

UPDATE: Excellent analysis by Prof. Susan Freiwald comparing and contrasting the conflicting GPS tracking rulings mentioned above.

More thoughts on Privacy

While it sometimes shocks me that people pay so little attention to the privacy issues that are quickly overtaking us, I also realize that it’s no one’s fault.  It can be hard to see the imminent harm if you’re not spending time reading and understanding what’s happening with information about you.  We all have busy lives and the issues here frequently require more than a five-minute primer to appreciate.  Who has the time?  More importantly, who cares if you’ve done nothing wrong or have nothing to be ashamed of?  Heck, if you are careful about what you say, the views and pictures you share, and appropriately set your various privacy settings online, what’s there to worry about?  There’s also, of course, the refrain of “so what if marketers know information about me to try to sell me stuff?”  Few among us can point to anything having gone wrong with our online identities, so what’s there really to worry about?

These questions reflect the refrains that are most frequently used around privacy issues, but are unfortunately narrow in their perspectives.

I suspect that most people lock the doors to their homes.  Not everyone does, but many people do.  Why is that?  Well, one might say it’s more of a security issue, protecting one’s home against burglars.  Another reason might be not wanting neighbors to barge in unannounced, making it a bit more of a privacy issue.  The reality however is that most people know very few people who have had their homes burglarized (whether they were home or not), and it’s infrequent for neighbors these days to stop by unannounced.  And yet, we still lock our doors.  Yes, privacy and security remain important and we want some control over them.  You never know when crime will strike at your home, so better safe than sorry, right?

When meeting someone for the first time, whether in a social or professional environment, if you were asked for your home address and social security number, it’s not likely that in most circumstances you would share this information.  Heck, sometimes we’re even nervous about giving our full names.  Why?  Well, for one thing, meeting someone for the first time doesn’t give you enough information to assess how they might use that information.  It’s a trust thing.  It’s also a privacy and security thing.  I wouldn’t want this new acquaintance knowing where I live, nor would I see any reason why they should have my social security number other than for nefarious uses.  As for my name, there’s some practical obscurity in the fact that if I didn’t provide much else, they might still have a difficult time finding me (*if* I didn’t want to be found by that person).

OK, now let’s explore three different types of scenarios.  In the first one, let’s say you had a perfect credit record to date and applied for a home loan.  After a few days the bank came back to you and said that they declined your loan but offered no real reason for doing so other than that you didn’t fit their risk profile.  You weren’t part of a minority group and you’ve never been arrested, so what else could possibly be at issue here?  Could it be that people who visit the same types of Web sites you do tend to default on their home loans more frequently?

In the second scenario, imagine your health insurance plan raising your rates (oh yeah, that would be a surprise ;).  What if it turns out that the health insurance provider has been buying data about their customers’ transactions and correlating these with local loyalty program data for “marketing” purposes?  Further, what if as part of their analysis they also realized that based on the foods you buy (mmm, chocolate) you fall into a high-risk group for health issues in the next few years of your life?  This example could also work at a higher aggregated level, in other words people in your zip code tend to be less healthy than those in a neighboring zip code, but who knew?  More importantly, who would tell you, or how would you find out about these practices?

In the third scenario, you are interviewing for a job and share everything you think is relevant with the prospective employer.  But do you know if that’s all *they* find relevant?  Well, you can go here to see all of the services that offer credit reports and other forms of screening for prospective employees.  There are some efforts to curb the allowed uses of job seekers’ credit reports, but with the amount of new data becoming available about people, credit reports may no longer be necessary for prospective employers to dig into whatever parts of your life they determine to be relevant.

One more example of yet a different sort.  Imagine you have been able to successfully keep yourself from joining any social networks despite the urging of many of your friends and a general social pressure to do so.  But you remain a social person with friends and neighbors and have been known to enjoy going to a BBQ or two.  On several occasions, as people are apt to do these days, pictures are taken and you appear in several.  No more than pictures of you at these BBQs.  However, your friends tag you even though you’re not on the social network, which means you never find out that your picture is up online.  Now, because you’ve been tagged in a section designated for people’s names, the service knows you’re a person and that you’re related in some way to the other people who appear in the pics.  Given how much the service knows about those other people, how hard do you think it would be to determine who you are by looking you up in one of several people search engines or using services like Rapleaf, Lexis-Nexis, Experian, et al.?  Because of the information that is available about the other people around you, several inferences can begin to take place, perhaps simply looking for people with your name in the same city and state as the others.  In other words, privacy isn’t just an issue that affects those who are online; the ability to collect, store, combine and disseminate data so easily is proving a good way to capture everyone into these databases, whether they have chosen to participate in the open sharing of their information or not.
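To make that inference concrete, here is a minimal, entirely hypothetical sketch of how a tagged name plus the known locations of the people around you can narrow a people-search result to a single record (all names and records below are invented):

```python
# Hypothetical sketch: a photo tag gives a name; the tagged friends'
# profiles give a likely location; joining the two against a
# people-search index can collapse many candidates into one.

# A (fake) people-search index with several people of the same name.
people_search = [
    {"name": "Pat Smith", "city": "Austin", "state": "TX"},
    {"name": "Pat Smith", "city": "Denver", "state": "CO"},
    {"name": "Pat Smith", "city": "Boston", "state": "MA"},
]

# The tagged friends in the photo, whose profiles list their location.
friends_in_photo = [
    {"name": "J. Doe", "city": "Denver", "state": "CO"},
    {"name": "A. Lee", "city": "Denver", "state": "CO"},
]

tag_name = "Pat Smith"                   # the name someone typed on the photo
likely_city = friends_in_photo[0]["city"]  # assume you live near your friends

matches = [p for p in people_search
           if p["name"] == tag_name and p["city"] == likely_city]
print(matches)
```

With three candidates and one weak location assumption, a single record survives the join, and the untagged person is effectively identified without ever having joined the network.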

These examples point to the crux of some of our privacy problems: the high level of trust that many companies are putting in technologies like network analysis, collaborative filtering and behavioral analysis.  An interesting piece in The Economist talks about how some of these technologies are being put to use.  What’s challenging, however, is that the statistics don’t account for the original purpose each data set was collected to serve, and because they remain devoid of the original context of the data collected, they miss obvious mistakes.  In the context of ad targeting, it doesn’t really matter if the wrong soap was advertised to me, but in areas that are much more meaningful to our lives, things like home ownership, health concerns, safety and security, these technologies do not yet have the necessary throttles and easy ways for user intervention that would let us keep their results in check.  Too much faith is being put into the hands of these technologies without consideration for the human impact.

Equally important, however, is the lack of transparency around all of these uses.  Today’s privacy regulations don’t enable individuals to understand the extent to which data about them is sent to other services, or how their existing services are using data from other sources in making decisions that will affect them.  Some of these effects will be financial; others will be in the sorts of opportunities made available to them.

My goal here is not to sound conspiratorial, as I don’t believe a conspiracy is going on.  What I believe is that lots of people are doing their jobs, but due to a lack of coordination and balancing between the interests of users and the interests of organizations, we’re on a collision course that most people are not prepared for.  A rude awakening of sorts.  When I think of the person at the bank who decides to explore the correlations between Web viewing habits and loan-defaulting deadbeats, I believe that person is doing their job.  They are trying to reduce the risk for their employer.  They are thinking “out of the box” in terms of what other possible signals they could use to increase the likelihood that the bank gets back the money it loans out.  Same with the analyst at the insurance company who begins to dabble with transactional and loyalty program data.  In both cases, these folks are dealing with esoteric statistics and data clustering problems, but the impact this information has on us is significant, and we need to bring the same sort of transparency that is being demanded of our lives to the organizations and the processes they use to analyze information about us.

Clearly, all this gets even more unnerving when we hear of government making use of similar commercial data sources and technologies in trying to predict who among us is an “evil doer”.  OK, that’s enough for this episode.  I hope to share a little bit about how we got to this point and why things have changed so much.  As well, I see a common thread between privacy, security, intellectual property issues and organizational transparency, all enabled by the fact that so much is turning into easily copyable and transportable bits of information, which I also want to explore further.

“Tweeting Out The Dead”, really Forbes, this was worth writing?

Forbes writer Mike Isaac wrote a piece last week titled "Tweeting Out The Dead", which has been bothering me all week.  Maybe not bothering me so much as annoying me like a gnat buzzing about my ear.  In this Twittermania segment, Isaac invokes "Miss Manners" then goes on to discuss "condolence tweets".  For anyone who has used Twitter for even 10 seconds, it wouldn't take long to understand that it's purely a broadcast platform where people push out messages of a promotional, informational, and in some cases directly communicative, nature.  Those who use it to communicate with one person publicly (as opposed to using the "Direct Message" capability) do so because they know that person is reading their tweets and they do not wish to keep that tweet private.

For some reason, Isaac has chosen to see tweets where people seem to extend public condolences as being "condolence tweets" and therefore raises the issue of whether this is a proper way to extend sympathies to the deceased's loved ones.  Clearly Mike needs to spend some more time on Twitter if he's going to write about that medium.  These tweets by celebrities or others are not real condolences, as in those one might extend to close family, but rather "shout outs", making others aware that they cared about the deceased, and in some cases simply letting others know that someone they cared about has just passed.  While the family may see this, that's not the intended audience for this message…and frankly, Isaac should have understood that.  Making snide comments about Ashton Kutcher's tweet about Brittany Murphy's passing clearly shows that Isaac is out of touch.  This was one way that Ashton chose to let the world know he noticed her passing, irrespective of the other ways he might have reached out to her family directly.  This message was intended for his followers, not for the family per se.

From here, Isaac goes on to quote various folks (a doctor from UCSF and an analyst from Forrester Research) about the process of grieving and how it's moving onto Twitter.  I won't stand here and pretend to understand the full extent of people's grief nor why they might tweet about someone's passing, but to the extent that condolences remain "an expression of sympathy to someone who has experienced grief arising from death, deep mental anguish, or misfortune", I'll say that what's happening on Twitter is not "condolence tweets", but the simple sharing of expressions of grief with one's followers at large.  So Mr. Isaac, consider spending more time on Twitter if you need topics to make your publishing deadline 😉

End of Privacy

A little rambling on some thoughts that have been on my mind in relation to privacy issues.  In saying the "end of privacy", I'm trying to imagine the effect of losing what we understand today as control over our personal information.  Losing control in effect means opening up the can of externalities that take place as soon as we can no longer determine who is seeing our information or what uses our information is put to.  Imagine that anything I write to anyone can be seen by everyone, anywhere I go can be known to anyone at any time, anything I say can make it into the public record, and as soon as someone believes they can actually map my thoughts, those too will be knowable to others.  If you think about it, many of the technologists working to make these things happen are moving us closer and closer to this new world.  While it would be nice to have all of this coordinated by a rational independent authority so that we stay focused on the impact to the individual first, because there's no master architect it's unlikely to be the primary consideration.  The focus on the individual will be the sizzle, not the steak.  All of the technologists behind the various services are busily making these things happen, and each of course is doing so with the best of intentions.  Not to mention that I'm not sure I see the business in privacy, as I stated in a previous post, versus the economic interests aligned against it.

The impact of having comments shared that were meant for one intended recipient manifested itself with email many moons ago, and we quickly realized that if it's written electronically it can easily be copied or forwarded and shared with others.  Actually, email used to have another more devilish feature (which some email readers still support) similar to "Forward" called "Redirect", where you could redirect a document to another person, but any response from that person would go directly back to the originator of the email.  This created much confusion, and more than one "faux pas" or snarky response intended for the person who redirected the email actually went to the original sender…d'oh!…but I digress.

Facebook broke us down some more by creating an environment they purported to be safe for communicating with only people you know in the physical world, but then extended (exploited) our use of their service by opening our comments and pictures up to people we didn't know and didn't necessarily want to share these with.  Eventually they allowed more and more of our information, that which we might have considered to be private, to be shared with total strangers, making us more discoverable in the process than most people had intended to be.  In Mark Zuckerberg's opinion, it's what we want; in the opinion of people who had legitimate concerns for their safety and security, or in some cases their job prospects, this has been disastrous.  There is a tinge of irony here since what we supposedly want also lines up with furthering Facebook's business model, but I'll leave that for another time.

Location-based services have been the next front in the assault on our privacy.  In some cases these are "opt-in", in others they are "opt-out" (and in some cases you have no choice), and the applications use game mechanics to make them more fun, keeping us distracted from the data collection they are doing.  Many of these so-called "free" services are not free at all; they just monetize our data in ways not immediately evident to users.  Other methods used to get at this information have been through services offered by the wireless phone providers to protect "the children".  A friend recently explained to me how he busted his son for lying about his whereabouts and going to a party he had not received permission to attend.  I laughed and asked why his son hadn't figured out he would get caught, to which he explained that the phone is too integral a part of his son's life to be away from it, so he was willing to take the chance that his parents wouldn't check up on him.  If you can do this with your kids, it won't be long until friends do it (that's what Loopt and Google Latitude are all about).  Heck, why stop there; it shouldn't be technologically difficult to enable even acquaintances that sort of access, and then it's probably just easier to open it to everyone.  This certainly follows the logic of the social networks.

So where am I going with this?  Well, in a world without privacy it occurs to me that we would have to become fully accountable for all of our actions.  There would be no hiding behind the veil of politeness, no more little white lies for us to get away with, no tailoring of our conversation to the audience we're addressing.  We in effect become fully transparent.  The context of our statements and of our locations would be difficult if not impossible to incorporate, which is a considerable downside, but for all of the discussion around this issue, I see no slowdown in the adoption of services that dispense with our privacy or in the feverish pace of new services emerging.  Clearly, lack of context is not holding anyone back from opening up our information.  Perhaps this is what's needed at a time when trust is at an all-time low; call it the shock treatment of trust enabling.  A high level of trust is important to a functioning society, be it on an economic level or a political one, and increasing transparency may force the matter.  People would become trustworthy, not necessarily because they want to be but because they have to be.

For all of this redefining (or dismantling) of privacy, there are clearly some other benefits, which is what is enticing so many people to use these services.  Knowing that my friends are nearby, looking for a product or service near where I am *now*, making a coupon or an offer available to me when I'm near the establishment, keeping the kids safe, and locating my lost phone are all certainly valuable services.  The fact that these come at a price that has not been fully digested by most is what worries me.  But a part of me also feels that perhaps the transparency pendulum has to swing to an extreme so that we all become aware of the risks of too little privacy.  Already Facebook has been making some changes and enables users to post links, photos and status updates to user-created lists rather than to everyone.  As of today, they are also apparently testing the ability to delete accounts (not just deactivate them, as is currently possible).  Perhaps we are going through the bumps in the road necessary to learn how to deal with these new issues.

This weekend's release of classified U.S. war-related documents by Wikileaks demonstrates the potential effect transparency can have on accountability.  While this was about transparency in government, I believe we are on the threshold of similar things happening to individuals.  Both Jeff Jarvis on his blog BuzzMachine and Jay Rosen on PressThink raise some very interesting points about organizational transparency.  Jeff discusses the struggle between transparency and keeping secrets when trust is low.  Jay talks more about the world's first stateless news organization and the impact it will have on news reporting.  What was interesting about the Wikileaks document release is that they made a lot of documents available, hence enabling some level of context setting to occur.  Had they just released a few choice documents, the context might have been blurred.

So the question I struggle with is, will the loss of our individual privacy be part of the age of accountability?  Sure, people claim to be accountable, but having watched the past two years' happenings in government, the financial industry, the oil and gas industry and several other areas, it feels like our systems are so complex that responsibility has been diluted.  In other words, no one is singularly responsible for anything any more.  Take the recent economic crisis stemming from mortgage-backed securities.  There were so many institutional players (and individuals within each of these) involved, and a system built around passing "the hot potato", that it's difficult to point to any single actor or organization as being guilty of fraud or any other misdeeds.  Even the homeowners who were sold on the harmlessness of the mortgages they were signing up for had a role to play.  Transparency in this system might have helped us better assess where it all started to go wrong and the relative roles of all the players in this Theater of the Absurd.

Anyway, lots to digest here and clearly a need to break down the issues further, but I thought this brain dump might be an interesting way to get thoughts out of my head 🙂

Freedom isn’t Free and There’s No Business in Privacy

As I was walking up one of the streets near my house today, I passed an old Blazer with a bumper sticker that read, “Freedom isn’t Free”.  This forced me to pause a bit and think about that.  I mean, sure the word “free” appears in freedom, but the point of this bumper sticker was in support of having wars to…how do they say this again?…protect our freedoms.  Yes, I have always seen this as a dubious claim and my sarcasm here is intended.  But this sentence forced one of those word associations or semantic links in my head, and the next thing I know, as has happened quite frequently lately, the thought of privacy popped in.

Earlier today, I caught a New York Times article on a startup called Bynamite.  This startup sounded really cool to me until I got to the bottom of the article, specifically:

“Like most start-ups, Bynamite faces long odds. To succeed, it must be easy to use, and users must trust it as a reliable middleman handling their data. It has no business model yet, though it could offer product recommendations, based on interests, and collect fees on resulting sales from merchants. It hasn’t ruled out accepting ads itself.”

This begged the question: how could I trust a company that has yet to define its business model and is asking that I trust it with my information?  Worse, they may even try to position themselves as deliverers of ads and product offers by knowing more about me, by receiving explicit permission to seek out all the information that ad networks and sites have about me.  But in thinking through this further I also began to realize that in fact there really is no business in helping people maintain their privacy.  There’s no market for it either.  Look at what’s happening today and you’ll see how little people actually care about their privacy, no matter what they say.  Look no further than the success of Apple’s revised Terms of Use for iTunes and the App Store, where users give up their rights to manage and determine who gets their location data.  Apple basically says that not only will they have access to it, but any third-party developer can get it as well.  Twitter enabled location data to be part of the metadata associated with tweets, and the faithful, like Apple’s, have embraced this capability.  A more powerful example comes in the form of the flourishing ecosystems around location-based social networks and games.  While today these require explicit “check-ins”, those using these services (and they are growing every day) are becoming such enthusiastic users that it will soon be easier for them to have the check-ins occur automatically.  This is one group of privacy-diluting activities that are helping to get us used to the idea that our sensitive location information should be made public.  Good news for the generation of thieves.

We also see another set of activities taking place around media properties that offer free access to their content. Some of these sites try to offer paywalls, but these offerings are often challenged by users’ mantra of not wanting to pay for content. So instead, these same people register for the media site’s free offering in exchange for being bombarded by advertisements. Of course, many of these ads come from ad networks, all of whom get to place a “cookie” on the user’s machine to track their activities as they navigate around the Web, making it easier to better target offers to them…sort of. Software-as-a-Service applications like email also show some disregard for privacy consciousness, because the laws around emails being private while sitting on an ISP’s servers have yet to be clarified. In other words, without any certainty about what can happen to our emails, people have jumped on the use of hosted email services. Yes, I often hear the cop-out refrain of “don’t say anything in email that you would regret anyone seeing,” and yet this has never really slowed anyone down from writing something that they wouldn’t want at least one other person seeing. It’s in the nature of our personal intercommunications.

Where individuals may not be prepared or willing to pay to keep their information truly private, or at least in their control, the fact is that marketers, media companies (who have advertiser clients), governments, and investigators are all willing to pay for your information, and frequently quite a bit for it. In effect, you don’t make enough money from keeping your information secret (or in your control) to justify paying more than a Mercedes-Benz dealership that wants to know your income and that you’re looking for a new car now. When you think about how many online services are free to end-users, you begin to realize the value your information must bring them in order for them to afford to provide those free services. Google is a simple but great example. In exchange for providing you access to the WORLD’s information, they get to place ads next to their search listings with offers to entice you. Google constantly refines how they decide which offers should appear as they figure out how to put the information they have about you to better and better use, whether it be your search history, what you last clicked on, or what people you know have clicked on. Would you pay to use a search engine today? I thought not.

As I read what Bynamite was offering, my first take was “this is great, about time someone helped us all understand what information all of these ‘trackers’ have about us.” But as I realized that few people would likely pay for such an offering, it dawned on me that, in the end, Bynamite may just be wrapping themselves in the privacy flag to position a better mousetrap for getting access to a higher-quality version of our data. Once in hand, they can resort to the same business models that abound today for many media companies.

While I consider myself a pretty strong privacy advocate, I feel that the old adage “you can get any information from people by just offering a free t-shirt” will make it difficult to enlist much of a following to support our need to control our own information, and to understand the externalities that open up once we no longer control it.

Identity to Activity Stream: “Baby, I’m breaking up with you”

The recently proposed TOU (terms of use) changes at Facebook and the ensuing blog posts on TechCrunch and ReadWriteWeb on this subject really frustrated the heck out of me. It was that sinking feeling you get after having been duped or had something of value stolen from you, like you’ve been emotionally violated. I recall working at Tribe when Facebook emerged on the scene with a very simple value proposition that resonated really well. Facebook claimed to be a service for making it easier to keep up with the people you already know and interact with in the physical world. Mark Zuckerberg went to great lengths to explain this differentiation from other social networking services like MySpace, where MySpace was more like a nightclub than an engaging, safe place for your existing relationships.

Starting with the Facebook Beacon fiasco, which was recently settled for a paltry sum and has created no deterrence against such behavior in the future, Facebook has slowly moved closer and closer to basically giving its users the finger and usurping all of their information for whatever uses it deems they will want or need, with no consideration for what they actually might want or need. The EFF had a good piece on the privacy default-settings changes that Facebook made at the end of last year to start us down this slippery road. danah boyd, an ethnographer who has studied and written much about social networks and how users interact in them, has published a wonderful paper from her talk at SXSW titled “Making Sense of Privacy and Publicity” that really gets to the meat of what the privacy issue is all about; it’s worth reviewing.

So all of this got me thinking about who has control of our information and what could be done to put users back in charge of it. The trick, of course, is making this as easy and seamless for anyone to use as Facebook is today. There have been many efforts to open up these platforms with initiatives like OpenSocial and Facebook’s pseudo-open efforts around Facebook Connect, but the model in these services still holds that ownership of the user’s info rests with the service and not the user (or it’s shared ownership, but with no way to get your data out when you want it back ;). This is further exacerbated by the fact that Facebook won’t let you delete your profile, under the guise that doing so would make other users’ experience unappealing (though you can disable it). This is actually partially true. While at Tribe we did allow users to completely remove their profiles, which made for awkward experiences when reading threads in tribes (discussion groups): the comments remained, but the author’s profile pic was a question mark and their name was “unknown.” Still, the comments remained, which is how we at least kept the thread flowing even without the author or commenter in the loop any longer. Suffice it to say, Facebook gets away with this because it can claim ownership of user information on the basis of needing this right in order to provide its various services back to users. It’s also what enabled them to create F8, since they had secured the rights they needed to pass user data along to the various application providers that wanted to offer services on the Facebook platform. At first they only allowed the info to be kept for 24 hours; now, after repeated appeals from app developers, it looks like the limits on how long data can be held by an app have been considerably relaxed.

To say that all of this is tricky would be an understatement, but when Facebook starts crossing the line by opening up content it has usurped from its members through a change in the TOU, beyond the original agreement that people entered into to participate on the service, something has got to give.

Let’s take a simplistic re-imagination of this service. First, while sometimes seen as a fringe, lonely voice in the world of all things social media, Marc Canter had envisioned much of what I’m going to touch on here. The challenge I’ve frequently had with his version of things in the past is its seeming aversion to a business model to support it. With that said, I think that business models are catching up to Marc’s utopian ideas, so it’s time to try to fit these two things together. You should read his post today, as it raises good points on these matters and motivated me to finish off this post.

Cost and Ownership
Users who believe that they should be able to have a place online to store their information for free need to ask themselves why they shouldn’t be paid a salary to sit home and do nothing. If a company offers a service for “free” but hires employees and buys hardware and software to render that service, then unless those people aren’t getting paid either and the equipment is donated, the company will have to find a way to make money somehow. Even in Open Source software, those developing code are doing it as a hobby or as a sideline to a day job where they are getting paid for their work. Until we live in a society where you can survive by doing nothing, I’ll argue that there’s no such thing as a free business model. Seemingly free business models generally mean that one constituency gets services for free while another constituency pays. The innovation in Web-based business models was that advertising became a source of revenue for the unlikeliest of services, which then enabled services to be provided to end-users for…wait for it…“free”!

Freemium (as coined by one of Fred Wilson’s blog readers, from Alacra) business models in effect segment the end-user community into two groups: those who would not likely pay for the service, and those who find immediate value from heavy usage and would easily pay for it. Even where the ratio might be something like 90:10 (non-paying:paying), that paying group is able to keep the service afloat for 100% of the users (though in the case of Ning, this does not appear to have been the case). The bonus comes when some of the 90% who get accustomed to the service and like it grow their use of it to the point that they need to upgrade to the paid version. At that point, they don’t usually have a problem paying, because the value has been proven to them. Flickr was a great example of this for me: basically free until I was too invested to leave but needed more storage capacity, and I had no problem paying for it.
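As a back-of-the-envelope sketch of that arithmetic (every figure here is illustrative, not drawn from any real service): at a 90:10 split, the paying minority has to cover the cost of serving the entire user base.

```python
# Illustrative freemium arithmetic; all numbers are hypothetical.
def breakeven_monthly_fee(total_users, paying_fraction, cost_per_user):
    """Fee each paying user must cover per month so that serving
    the whole user base breaks even."""
    paying_users = total_users * paying_fraction
    total_cost = total_users * cost_per_user
    return total_cost / paying_users

# 1M users, a 90:10 non-paying:paying split, $0.50/user/month in costs:
fee = breakeven_monthly_fee(1_000_000, 0.10, 0.50)
print(f"${fee:.2f}/month per paying user")  # $5.00/month per paying user
```

In other words, each paying user effectively subsidizes nine free riders, which is why the ratio matters so much: a Ning-style shortfall is what happens when per-user costs outrun that subsidy.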

Now that users have grown accustomed to the services offered by social networks like Facebook and LinkedIn and the value proposition they deliver, it becomes interesting to contemplate which aspects of those services end-users might be willing to pay for in order to exert more control. Imagine, for example, if the end-user could actually control their own information. I mean full portability, as well as the ability to remove other services’ access to it, or to decide exactly how and when it could be used. Would that be worth paying for? Would it be worth paying $1 to $5 per year? Today, the credit reporting services that let you track any activity against your credit report charge anywhere from $6 to $15 per month. What if the profile repository that you controlled could provide a similar service in terms of how and where your profile was being distributed, and enable you to control that access? What’s often forgotten about Facebook is that sometimes it’s not because you have given an app access to your info that it’s out there; more frequently, it’s because one of your friends has downloaded an app that then gets all of their friends’ information. How is one ever to keep up with that sort of information dissemination?
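To make the idea concrete, here is a minimal sketch of what such a user-controlled profile repository might look like: the user grants and revokes access per consumer, and every read is logged so distribution can be audited. The class and method names are invented for illustration and don’t come from any real service.

```python
# Hypothetical user-controlled profile store with per-consumer
# grants, revocation, and an audit log of every read.
class ProfileRepository:
    def __init__(self, profile):
        self.profile = profile
        self.grants = set()     # consumers currently allowed to read
        self.access_log = []    # (consumer, field) for each read

    def grant(self, consumer):
        self.grants.add(consumer)

    def revoke(self, consumer):
        self.grants.discard(consumer)

    def read(self, consumer, field):
        if consumer not in self.grants:
            raise PermissionError(f"{consumer} has no access")
        self.access_log.append((consumer, field))
        return self.profile[field]

repo = ProfileRepository({"name": "Alice", "email": "a@example.com"})
repo.grant("photo-app")
repo.read("photo-app", "name")   # allowed, and recorded in the log
repo.revoke("photo-app")         # later: take access back entirely
```

The point of the sketch is the audit log: unlike today’s social networks, the user could see exactly which consumer touched which field, analogous to a credit-monitoring alert.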

With users willing to pay for services, ownership of their information could be bestowed on them, since a service for their benefit could be maintained. In other words, by applying a business model that does not depend on advertising, users’ interests could be kept at the center of the value proposition, not a distant second or third as on the various social networks today. Around the time that Marc Andreessen joined the board of directors at Facebook, before the company was generating meaningful revenues, he was asked why he was so bullish on its prospects. His analysis was simple: if the company is able to generate $5 per member per year (which he thought was an understatement), it could do incredibly well. In other words, at the numbers we’re talking about, the service doesn’t have to cost users much to be a viable business.
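Taking Andreessen’s $5 figure at face value (the membership count below is purely illustrative), the arithmetic is straightforward:

```python
# Hypothetical back-of-the-envelope: revenue at $5 per member per year.
members = 200_000_000          # illustrative membership count
revenue_per_member = 5         # dollars per member per year
annual_revenue = members * revenue_per_member
print(f"${annual_revenue / 1e9:.1f}B/year")  # $1.0B/year
```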

Separate Identity from Applications and Activity Stream
There has been lots of work going on around identity issues and standards, but sadly much of it has been moving very slowly or is too complicated for the average person to care about. Among the most promising that I’ve seen is OpenID, which is supported by the likes of Google, Yahoo!, and other services. Today, both Facebook Connect and Twitter enable their respective identity systems to be used for basic authentication on other services. While this latter case is interesting and a step in the right direction, the fact that Facebook (more of an issue than Twitter) owns so much of your identity and activity streams means that it’s not a user-beneficial service; it primarily benefits Facebook, which collects and monetizes the information because of the rights it has usurped from the users who joined its service.

So what if we were able to architect identity as a separate *paid* service, where for any site that a user logs into using this identity service, the site can offer to broadcast the user’s activity if the user opts in? The broadcast format could be some modification of something like PubSubHubbub with a user ID of some sort (I’ll leave the details to the more technically inclined). Like any publish-subscribe model, any user could subscribe to different types of feeds from different users, like “only show me John’s photos, but show me all of Marsha’s activities.” In other words, by separating identity from applications, we can start to move to a place where people control their identities and control what activities are published. “Like” as an app could also work on this model, since I could subscribe to what some of my friends “Like” (or whatever the right metaphor is here: “Want,” “Love,” “Have,” etc.). This would also go a long way toward helping those wanting to keep up with you to select only those aspects of you that they really care about. The identity system could also be used as a messaging platform (or integrated with other messaging platforms) in order to establish what sorts of information you are willing to share and with whom. Consider this analogous to your current “friends lists.” The ability to message specific groups would also be critical, so you no longer needed to broadcast everything to the world when you knew it would only interest one of your social groups.
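A toy sketch of that subscription model (the names here are invented; a production version would ride on something like PubSubHubbub): subscribers pick which activity types they want from which identities, and the hub delivers only what was asked for.

```python
from collections import defaultdict

# Toy publish-subscribe hub: "only John's photos, but all of
# Marsha's activities." Purely illustrative, not a real protocol.
class ActivityHub:
    def __init__(self):
        # (publisher, activity_type) -> set of subscriber IDs
        self.subs = defaultdict(set)
        self.inboxes = defaultdict(list)

    def subscribe(self, subscriber, publisher, activity_type="*"):
        self.subs[(publisher, activity_type)].add(subscriber)

    def publish(self, publisher, activity_type, payload):
        # deliver to exact-type subscribers and wildcard subscribers
        targets = (self.subs[(publisher, activity_type)]
                   | self.subs[(publisher, "*")])
        for subscriber in targets:
            self.inboxes[subscriber].append(
                (publisher, activity_type, payload))

hub = ActivityHub()
hub.subscribe("me", "john", "photo")    # only John's photos
hub.subscribe("me", "marsha", "*")      # everything Marsha does
hub.publish("john", "photo", "beach.jpg")
hub.publish("john", "status", "hello")  # not delivered to "me"
hub.publish("marsha", "like", "an article")
```

After those three publishes, “me” receives John’s photo and Marsha’s like, but never sees John’s status update: the filtering happens at the hub, under the subscriber’s control, rather than at a social network’s discretion.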

I could see a new class of social sites emerging, analogous to RSS readers but topic- or purpose-focused, capturing only the feeds relevant to their mission.  Users might also use services that offer general feed capture from all of the people the user is connected to through the identity system: in effect, an aggregator of all feeds, made available according to the appropriate access controls.

In effect, what I’m saying is: separate identity control from the information aggregators and the apps. The incentives around these two sets of activities are very different and should not be kept together, as they conflict. The objective in managing information published to a service is to gain the broadest distribution for it; not to mention that, as a publishing model, its true customers are advertisers, not individual users. Hence, if an info aggregator is managing identity, it will do everything possible to keep users from containing, or retaining control over, information they might not want widely disseminated. On the flip side, an identity business is much more focused on the needs and security of end-users, in the same way that banks used to be when it came to helping secure our money and other valuables 😉 It would also need to be a more highly regulated business, given the sensitivity of the control it would be providing its users.

Now that we’ve seen what’s possible, and the benefits that social media and social applications can bring, I believe it’s time we rethink the control infrastructure so that we’re no longer at the mercy and the whims of companies with interests that are wildly divergent from the community they claim to serve…you and me.