Book Review: The Future of the Internet, and How to Stop It

[A version of this review appeared in the EPIC Alert, 15.09. Sign up for the EPIC Alert here.]

Professor Zittrain’s modestly titled “The Future of the Internet — And How To Stop It” elucidates what has made the Internet so successful and so creative, and yet has also placed it in danger. Zittrain finds the solution by isolating the ways these same key ingredients can be used to address the rising problems. From the punch-card census to Wikipedia; from the Internet worm to massive botnets run by mobsters; from government mainframes to embarrassing user-generated viral videos, Zittrain covers the gamut of Internet history and experience, tying it together under his model of the competition between generative networks and controlled, limited appliances and networks.

As Zittrain explains, the Internet is a generative network — it fosters innovation and disruption — in contrast to appliancized networks such as the old America Online or CompuServe, which greatly limited innovation in favor of control. This generativity works at several “layers” of the Internet — from basic IP, or Internet Protocol, to devices, operating systems, and even the content or social aspects of the Internet. Generativity has allowed the explosion of the Internet and its various uses: any device can be made to speak IP; application protocols such as FTP and HTTP can be made to work on top of IP; websites and email services run on those protocols; computer components can run many operating systems; operating systems can run any software; wikis and other software allow anyone to modify websites without needing to learn HTML. This he calls the “hourglass structure” of the Internet.

While generativity has brought the Internet’s benefits into existence, it has also brought a new breed of problems. Computers that run any code can easily fall victim to viruses and become sources of annoyance or malfeasance to the rest of the Internet. Compromised computers can launch spam, phishing, and denial of service attacks. One answer to these problems is to reduce the generativity, to create more “tethered appliances.” In some ways the next generation does not experience the generativity that the Internet previously had. Youth communicate via instant messaging, texting, and social network sites, avoiding e-mail as too filled with spam, viruses, and phishing attempts. Zittrain considers these “contingently generative” services — you’re free to do a lot, to create, but this license may be withdrawn.

But centralized, contingently generative devices raise other problems, of control and information collection. An automobile with a navigation device under control of a provider can have that device hijacked for eavesdropping by law enforcement. A digital video recorder that receives updates from its central server can be updated according to a court order, disabling functions that users were expecting.

Zittrain offers a different solution from the tethered appliance model. The answer is to promote solutions that preserve, and indeed depend on, the generative features. At the technical level, computers can be configured to recover easily from user mistakes — undoing virus installations, for example. Or users can share simple statistics about their computers, allowing the creation of systems that decide whether new code is safe.

Privacy is the subject of one chapter, with Zittrain proposing that the solution to generative privacy problems lies in the “social layer.” Privacy regulations, based on the 1973 principles of Fair Information Practices (FIPs), are appropriate to “privacy 1.0” threats of centralized information collection. Privacy 2.0 sees the dangers of ubiquitous sensors, of peer production and reputation systems. Zittrain would promote code-backed norms, so that one could list one’s privacy preferences much as Creative Commons lets one list copyright licensing preferences. Or users could be enabled to contextualize their data online, or to enact “reputation bankruptcies” that would expire some of their older activity. But these ideas are still reminiscent of the Fair Information Practices, still focused on the privacy 1.0 principles. Code-backed norms are simply another way to talk about user control and consent. Contextualizing one’s data online is similar to the FIP of being able to amend or correct a record. Reputation bankruptcy is akin to deleting a record. And lastly, social networks are not quite distributed — YouTube is a centralized place from which to take down videos; Facebook and MySpace can both surveil and exclude the content on them.
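To make the "code-backed norms" idea concrete, here is a minimal sketch of what a machine-readable privacy-preference tag might look like, in the spirit of a Creative Commons license. Every field name and function here is invented for illustration; this is not any real standard.

```python
# Hypothetical machine-readable privacy preferences attached to a piece
# of user content, in the spirit of a Creative Commons license tag.
# All field names below are invented for illustration.

PREFERENCES = {
    "indexable": False,        # may search engines index this content?
    "retention_days": 365,     # expire/de-link activity older than this
    "share_with_partners": False,
}

def may_index(prefs):
    """A consuming site checks the tag before acting on the data."""
    return prefs.get("indexable", False)

def is_expired(prefs, age_days):
    """'Reputation bankruptcy' modeled as an expiry check on old activity."""
    return age_days > prefs.get("retention_days", float("inf"))

print(may_index(PREFERENCES))        # indexing not permitted
print(is_expired(PREFERENCES, 400))  # 400-day-old activity has expired
```

As the paragraph above notes, a tag like this still boils down to user control and consent — the machine-readable format changes how the preference is expressed, not what kind of protection it is.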

Norms and methods for expressing privacy preferences may help, but ultimately privacy 1.0 harms will remain and may indeed grow with the Internet. Privacy 1.0 solutions are still out there; we will simply need new ways of enacting them. Traditional concepts of privacy protection will still be relevant – they just need to be re-thought, engineered in. Perhaps even re-generated.

Posted: May 29, 2008

Social Networks as Regulated Utilities?

At CFP’s panel on “Privacy, Reputation, and the Management of Online Communities,” Professor Frank Pasquale mentioned the idea of treating social networking service providers as regulated utilities. He may or may not have read about the Facebook VP who described Facebook as akin to a cable company — one carrying social data:

We view ourselves as a technology company at our core. We’re the cable company creating the pipes, and what they carry is social information and engagement information about people.

So they carry your social information — your social relationships and identity, contextualized and with that social meaning, not just at the level of Internet packets. Facebook knows the meaning of what it is carrying, unlike your telecommunications company. It thinks of itself as a carrier that sets up the structure that knows that meaning and allows it to be communicated.

CPNI and Content?

This sort of thinking opens up lots of neat new analogies. Let’s think about privacy. Under the Telecommunications Act, telephone companies have to protect the privacy of your “Customer Proprietary Network Information,” or CPNI. This is basically the information the company needs to provide the service — the numbers you dial, the location you dial from, and so on. Importantly, it does not include the content of your communication — that already has strong protection. The legal definition includes:

information that relates to the quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service subscribed to by any customer of a telecommunications carrier, and that is made available to the carrier by the customer solely by virtue of the carrier-customer relationship

Telephone companies have a duty to protect this information but can use it in advertising, or sell it to joint venture partners with consent. EPIC has more information on protection for CPNI.

So what is the equivalent in the social networking space? The people you send messages to would be protected. So would your browsing, and just about all of your social actions. But there’s an important leap here. If Facebook thinks of itself as a pipe for social information, then your connections — your social graph — would be more like the content. That’s the change: your connections are your social messages, rather than merely the recipients of your messages. That would give your social graph quite a bit of protection, and would bar Facebook from reading it.

Common Carriers

Utilities are also sometimes viewed as common carriers. Common carriers can’t discriminate in what they carry, and further are absolutely liable for it. I analogize the first principle to the idea that Facebook won’t judge my social graph, and thus can’t discriminate based on how I am socially relating. This means I should be free to move my data around, and even to download my social graph from Facebook. I previously blogged about the potential for privacy enhancement from so-called “data portability.” It would be discriminatory, and a violation of common carrier principles, for Facebook to prohibit certain uses of the social graph.

Posted: May 27, 2008

Computers, Freedom and Privacy 2008

Today I arrived at the CFP 2008: Technology Policy ’08 conference. Tomorrow I’ll be presenting on what could be a hopeful new direction in spyware policy: the stalker spyware complaint EPIC filed earlier this year.

In this digital age, spyware is used by employers and parents, as well as by stalkers and perpetrators of abuse. This workshop will discuss whether anti-spyware policy and technology is appropriately tailored to spyware uses in the social context of abuse: the misuse of power and control. The essence of spyware is to spy, to monitor, to watch someone – all without their knowledge. How do we identify and respond to harmful, inappropriate use? What are the challenges faced by policymakers and anti-spyware technology providers when dealing with abusive uses of spyware? This workshop will explore the varying opinions on spyware policy and practice as it intersects with privacy and safety.

Other program topics that look interesting include network neutrality, reputation systems, and social networks. The whole program is here.

Posted: May 21, 2008

As the Web Goes Social, Where Is Privacy?

Google, MySpace, and Facebook have recently announced initiatives to share social networking information with third party sites. Google’s announcement describes Google Friend Connect:

This new service, announced as a preview release tonight at Campfire One, lets non-technical site owners sprinkle social features throughout their websites, so visitors will easily be able to join with their AOL, Google, OpenID, and Yahoo! credentials. You’ll be able to see, invite, and interact with new friends or, using secure authorization APIs, with existing friends from social sites on the web like Facebook, Google Talk, hi5, LinkedIn, orkut, Plaxo, and others.

Facebook similarly describes its initiative:

Facebook Connect is the next iteration of Facebook Platform that allows users to “connect” their Facebook identity, friends and privacy to any site. This will now enable third party websites to implement and offer even more features of Facebook Platform off of Facebook – similar to features available to third party applications today on Facebook.

It adds that key features will be: “Trusted authentication; Real Identity; Friends Access; and Dynamic Privacy.” Myspace’s launch includes some partner sites already:

LOS ANGELES—May 8, 2008—MySpace, the world’s most popular social network, alongside Yahoo!, eBay, Photobucket, and Twitter, today announced the launch of the MySpace ‘Data Availability’ initiative, a ground-breaking offering to empower the global MySpace community to share their public profile data to websites of their choice throughout the Internet. Today’s announcement throws open the doors to traditionally closed networks by putting users in the driver’s seat of their data and Web identity.

Data Portability

These are being referred to as advances in “data portability” (see here, and here, for example). Data portability is the name given to the idea that data a user has generated with one vendor can easily be moved to or manipulated by another vendor, without the need for any pre-existing relationships.
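The data portability idea can be sketched in a few lines: a user's social graph serialized to a neutral format that any other provider could import, with no pre-existing relationship between the two vendors. The schema below is invented for illustration; real 2008-era efforts such as FOAF and Portable Contacts differed in detail.

```python
import json

# A toy "portable social graph": the user's connections in a neutral,
# self-describing format. The schema here is invented for illustration.
graph = {
    "owner": "alice",
    "connections": [
        {"id": "bob",   "relationship": "friend"},
        {"id": "carol", "relationship": "colleague"},
    ],
}

def export_graph(g):
    """Provider A hands the user their own data as a portable blob..."""
    return json.dumps(g)

def import_graph(blob):
    """...and provider B reconstructs it, with no prior relationship
    to provider A required."""
    return json.loads(blob)

moved = import_graph(export_graph(graph))
print(moved["owner"])             # alice
print(len(moved["connections"]))  # 2
```

The privacy question the rest of this post takes up is who controls that export step — the user, or the provider.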

There was some promise that data portability might improve privacy. Timothy Lee at Techdirt blogged on how data portability could mitigate privacy issues. I previously blogged about a position paper from ENISA on social networking security recommendations. They noted (pdf):

Many of the threats . . . in particular those relating to data privacy, have arisen because SNSs [Social Network Sites] are extremely centralized (i.e., high numbers of users with few providers). Where users were previously protected by spreading their data over many mutually inaccessible repositories, it is now collected in a single place. It is currently very difficult to transfer your social network from one provider to another, or to interact between providers. . . . While there are clear commercial reasons behind these trends, the security and usability implications of a centralized and closed data storage model should not be ignored. A possible solution to this problem is portable social networks, which allow users to control and syndicate their own ‘social graph’. . . . At a minimum, it should be possible to export the social graph and its preferences from one provider to another and, ideally, users would have the possibility of complete control over their own social data, syndicating it to providers which created added-value ‘mashup’ applications.

The Promise of Privacy?

So portability holds great promise — users are able to easily move between providers; no one provider is a central point of tracking; and users control where their data goes and presumably who has access to it.

But what is now being billed as “portability” looks quite far from that promise. These systems look like they will let providers track you as you use several sites, rather than let you leave existing social networks with your data. That’s not really allowing data to move around — that’s just SNSs giving you a long leash. It looks like more, not less, centralization. Instead of the security and privacy of having different accounts and different personae, you’ll have one single logon for several web services. In fact, Facebook seems to tout as an advantage that people will no longer be anonymous, that they’ll bring their entire social graph to new ventures. When privacy activists are telling users to use pseudonyms and different logins, this new development is going in the opposite direction.

I suspect these companies want your entire web experience to be “social.” But more importantly, they want it all to happen while you are logged into them, a captive audience for their ads, all while they build up their profiles of personal information so that they can market to you.

Posted: May 20, 2008

BBC Creates Data-Mining Facebook Application

I earlier blogged about the civil liberties dangers that law enforcement Facebook applications pose. The problem: by default, applications have access to much of your and your friends’ data.

The BBC has written an application that shows how easy data collection can be.

We wrote an evil data mining application called Miner, which, if we wanted, could masquerade as a game, a test, or a joke of the day. It took us less than three hours.

But whatever it looks like, in the background, it is collecting personal details, and those of the users’ friends, and e-mailing them out of Facebook, to our inbox.

When you add an application, unless you say otherwise, it is given access to most of the information in your profile. That includes information you have on your friends even if they think they have tight security settings.

Did you know that you were responsible for other people’s security?

Facebook responded:

Users are strongly encouraged to report any suspected misuse of information to Facebook. Additionally, users can block individual applications from accessing any of their data, block all applications, or block individual types of information.

We have sophisticated technology and a dedicated team to address inappropriate activity by applications. Access by applications to Facebook user data is strictly regulated and if we find that an application is in violation of our terms and policies, we take appropriate action to bring it into compliance or remove it entirely.

I hope this means that Facebook has automated processes for detecting when applications are accessing too much data, and that this triggers a review. But overall I don’t see how users can be careful when adding an application: they have no way of knowing what it does.
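The design flaw the BBC demonstration exposes is a choice of defaults: grant an application everything the user has not blocked, versus grant only what the user has explicitly approved. The sketch below contrasts the two policies in a simplified model — nothing here reflects Facebook's actual API, and all names are invented.

```python
# Simplified model contrasting two permission policies for third-party
# apps. This is NOT Facebook's real API; the names are invented.

PROFILE = {"name": "Alice", "birthday": "1980-01-01", "friends_list": ["Bob"]}

def default_allow(requested, blocked=()):
    """2008-era model: the app gets every profile field the user has not
    blocked -- the request scope is effectively ignored."""
    return {k: v for k, v in PROFILE.items() if k not in blocked}

def default_deny(requested, granted=()):
    """Least-privilege model: the app gets only the fields it requested
    AND the user explicitly granted."""
    return {k: PROFILE[k] for k in requested if k in granted and k in PROFILE}

# An over-curious app asks for everything:
ask = list(PROFILE)
print(sorted(default_allow(ask)))                    # every field leaks
print(sorted(default_deny(ask, granted=["name"])))   # only 'name' is shared
```

Under the first policy, a "game" that needs nothing but your name still walks away with your birthday and your friends list — which is exactly what made the three-hour Miner demonstration possible.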

Posted: May 2, 2008