Currently, the leading academic institutions researching “Internet and Society” are Anglo-Saxon affairs, notably at Harvard, Stanford, Yale, Toronto and Oxford. This has prompted the question: Where is mainland Europe’s counterweight in this fast-growing and important area of study?
Perhaps language is a barrier to the wider exposure of continental research, or maybe a clash of academic cultures is impeding cross-fertilization. Public universities in Europe might also face funding constraints that work against the fast founding of topical new research centers. Smaller centers do exist, such as one in Turin, and individual universities may have a faculty or lab that innovates in its niche. Whatever the reason, these efforts have not yet managed to steer the global debate on Internet and society, or to match the impact of results-oriented projects such as the OpenNet Initiative.
The lack of European institutions with the caliber of a Berkman Center has been keenly felt, however, and so several initiatives are in the works. In Lund, plans are afoot to set up the Lund University Internet Institute (LUII). And in Berlin, Humboldt University’s Google-funded Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG) has just launched, with a symposium to mark the occasion.
In a sign of how en vogue the topic is, that week there were at least two more conferences in the same vein — the corporate-sponsored Silicon Valley Human Rights Conference (#rightscon) in San Francisco, and the Swedish government-funded conference on Internet and democratic change (#net4change) in Stockholm. One speaker, Rebecca MacKinnon, even managed to headline two of them, in San Francisco and Berlin.
The audiences at these conferences varied. In San Francisco we saw civil society and corporations getting together for an “outcome-oriented” event aimed at using ICT to do good. Stockholm had NGOs, entrepreneurs and net activists comparing experiences in the trenches and building networks. Both conferences had strong representation from the Arab world.
In Berlin, in contrast, the audience was resolutely academic and first-world, with expertise concentrated in the social sciences and law. The focus, too, was not on outcomes or actions but on the research questions the fledgling institute might pursue. These are not criticisms, but they do point to a big divergence in motivation: Participants in Stockholm and San Francisco approached the issues from a user perspective, and tended to place themselves in opposition to the perceived paternalism of state actors. This group’s default stance toward regulatory initiatives is mistrust; at best, they see regulation as a necessary evil.
Meanwhile, in Berlin, regulation — whether national or even international — was far more openly mooted as a desirable means to protect society from the ill effects of Internet-mediated change.
This contrast of approaches was most visible in the two keynote speeches. Rebecca MacKinnon was clearly an emissary of the regulation skeptics, and her talk was a well-argued and illustrated cautionary tale of unintended consequences and slippery slopes. She drew a direct comparison between Chinese corporate self-censorship and the West’s regulatory tack towards intermediary liability, with its attendant chilling effects.
Phillip Mueller’s keynote on open statecraft, by contrast, was a far more academic and abstract treatment by a public policy professor. Machiavelli and Martin Luther were invoked (the latter as a proto-blogger), governance and social production models were contrasted, and distinctions were drawn between one-to-many, many-to-many and few-to-few media.
The overall effect was that of a public policy professional sizing up the Internet. MacKinnon, on the other hand, came across as a digital native sizing up public policy. It’s a subtle distinction, and both perspectives are valuable, but as an Internet user, I find myself hoping HIIG’s ethos doesn’t default solely to Mueller’s approach.
Privacy: How might a digital native’s approach to research questions differ? I think it could affect some of the underlying assumptions. An example: In the workshop on “Internet Legislation and Regulation through the Eyes of Constitution” [sic] there was some talk about how constitutional rights such as privacy or free expression must continue to be robustly protected as the Internet comes to permeate society. This is true, though privacy and free expression often stand in opposition to one another, and so a balance of rights needs to be found that corresponds to a society’s needs and expectations — that’s the job of judges and legislators.
What’s evident is that over time, the march of technology will naturally favor some rights at the expense of others; in a world of cheap camera phones, Facebook and Twitter, our private sphere shrinks and smudges into various shades of semi-privacy, in part because our friends and colleagues have ever more powerful tools to freely express themselves about us.
A conventional policy reaction to this technology-mediated erosion of privacy might be to legislate ever stronger protection in a valiant attempt to freeze privacy norms at pre-Internet levels. A digital native’s policy reaction would be to embrace this shifting natural balance, and focus instead on enabling emerging norms for privacy management. Privacy is a mutable social norm, and it always has been, waxing and waning over the centuries. The new norms need to accommodate this dynamism.
The Berkman Center’s Executive Director Urs Gasser, in his contribution to the workshop, made room for the digitally native response. He pointed out that policy responses to the Internet could range from enacting wholly new legislation, to the subsumption of old legislation into a new more relevant legal framework, to doing nothing at all. He warned against legislating too soon: Knee-jerk legislation produced the US Patriot Act, after all. And finally, he betrayed an engineer’s sensibility, suggesting that the online effects of legislation should be measurable, enabling feedback loops that would allow the legal system to learn.
Public Domain: In the workshop “The Digital Public Domain Between Regulation And Innovation” there was a similar recognition that traditional methods of rewarding creativity through intellectual property protection are being made obsolete by technological innovation. To digital natives, the concept of “buying” digital content is an increasingly anachronistic metaphor, and yet regulatory activity has focused almost exclusively on extending the notions of property, and hence of theft, into the digital age. Meanwhile, technology makes it ever easier to duplicate digital content with impunity.
A digitally native policy approach, in contrast, appreciates that social practices are shifting just as much in the creation of content as in its consumption. The lone-author notion of content creation that traditional IP law has catered to is now just one extreme in a spectrum of increasingly collaborative and iterative creative processes. This new reality has triggered a Cambrian explosion of more apt content use schemes: Licensing models such as Creative Commons and the GNU GPL, voluntary micropayment reward schemes such as Flattr and Readability, and flat-rate consumption schemes such as Spotify and Netflix.
All of these innovations are blurring the boundaries of the public domain, and they constitute a de facto assault on IP orthodoxy. What they also share is a bottom-up, evolutionary genesis, born of disparate social movements and entrepreneurial initiatives, as opposed to the more deliberate, top-down approach championed by University of Haifa Dean Niva Elkin-Koren, who was present at the workshop. As she put it, “we need to start from the purpose of the public domain and then derive norms.”
I certainly approve of this sentiment, though I suspect such a project would lack crucial support among copyright incumbents. In the meantime, the best we can do is let these emerging use schemes reshape the public domain in an ad hoc way; the net effect so far has been positive. Elkin-Koren has a point, however, which she has long argued: Such an evolutionary process does not guarantee a positive outcome.
So, even among digital natives, the tactics may differ while the strategies align. Fortunately, the two approaches are not mutually exclusive. And perhaps the specter of a Darwinian evolution of content-use norms will push the incumbents towards a system that looks more holistically at how to maximize creativity with a minimum of constraints — something which ACTA demonstrably fails to do.
With all the great people at the workshops and on the sidelines, HIIG looks set to bring a strong European voice to the “Internet and Society” debate. And with MacKinnon, Gasser and Elkin-Koren contributing to the launch symposium, here’s hoping that voice also embraces the digitally native view.