
Fake participation fatigue

Two items make a trend, right?

1. When the UK government hosted an international conference on cyber-security last week, commingling foreign ministers from all over with industry representatives and, daringly, Wikipedia’s Jimmy Wales, it was the backdrop that struck me as incongruous.

Perhaps the organizers were under the impression that getting the #LondonCyber Twitter hashtag to trend would be a sufficient proxy for civil society participation in an otherwise closed talking shop. No doubt they anticipated the criticism, and some tech-savvy mandarin came up with the “Let them tweet hashtags” solution.

And never mind the audacity of David Cameron fishing for tweets so publicly just months after the London riots had him running to sacrifice social media on the altar of public security.

2. WhiteHouse.gov, in its zeal to embrace participatory media, now allows people to start petitions, promising an official response once enough signatures are gathered. The problem is that these petitions lead not to policy change but to rote copy-paste responses that rehash the administration’s line (exhibits one and two).

It was inevitable that the following would happen:

In case it disappears from the website, here is the petition text:

We demand a vapid, condescending, meaningless, politically safe response to this petition.
Since these petitions are ignored apart from an occasional patronizing and inane political statement amounting to nothing more than a condescending pat on the head, we the signers would enjoy having the illusion of success. Since no other outcome to this process seems possible, we demand that the White House immediately assign a junior staffer to compose a tame and vapid response to this petition, and never attempt to take any meaningful action on this or any other issue. We would also like a cookie.

Last I checked, over 10,000 had signed, with a goal of 25,000 looking well within reach.

Yes, people can and do set policy — via democratic elections and referenda. One day, the ability to vote online in binding elections or referenda will become commonplace. Until then, administrations that imply participatory media lets citizens take part in anything more meaningful than government PR campaigns do so at the risk of being ridiculed. (h/t Felix)

Europe arrives? Berlin’s Humboldt Institute for Internet and Society launches

Currently, the leading academic institutions researching “Internet and Society” are Anglo-Saxon affairs, notably at Harvard, Stanford, Yale, Toronto and Oxford. This has prompted the question: Where is mainland Europe’s counterweight in this fast-growing and important area of study?

Perhaps language is a barrier to the wider exposure of continental research, or maybe a clash of academic cultures is impeding cross-fertilization. Public universities in Europe might also be facing funding challenges that conspire against the fast founding of topical new research centers. Smaller centers do exist, such as in Turin, and individual universities might have a faculty or lab that innovates in its niche. Whatever the reason, these efforts have not yet managed to steer the global debate on the Internet and society, or to match the impact of results-oriented projects such as the OpenNet Initiative.

The lack of European institutions of the caliber of the Berkman Center has been keenly felt, however, and so several initiatives are in the works. In Lund, plans are afoot to set up the Lund University Internet Institute (LUII). And in Berlin, Humboldt University’s Google-funded Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG) has just launched, with a symposium to mark the occasion.

I attended this First Berlin Symposium on Internet and Society (#BSIS11) on Oct 26-28. Below are some notes on the event and some wider thoughts on its context.

In a sign of how en vogue the topic is, that week there were at least two more conferences in the same vein — the corporate-sponsored Silicon Valley Human Rights Conference (#rightscon) in San Francisco, and the Swedish government-funded conference on Internet and democratic change (#net4change) in Stockholm. One speaker, Rebecca MacKinnon, even managed to headline two of them, in San Francisco and Berlin.

The audiences at these conferences varied. In San Francisco we saw civil society and corporations getting together for an “outcome-oriented” event aimed at using ICT to do good. Stockholm had NGOs, entrepreneurs and net activists comparing experiences in the trenches and building networks. Both conferences had strong representation from the Arab world.

In Berlin, in contrast, the audience was resolutely academic and first-world, weighted toward expertise in the social sciences and law. The focus, too, was not on outcomes or actions but on the research questions the fledgling institute might pursue. These are not criticisms, but they do point to a big divergence in motivation: participants in Stockholm and San Francisco approached the issues from a user perspective, and tended to place themselves in opposition to the perceived paternalism of state actors. Their default stance toward regulatory initiatives is mistrust; they tend to see regulation as a necessary evil.

Meanwhile, in Berlin, regulation — whether national or even international — was far more openly mooted as a desirable means to protect society from the ill effects of Internet-mediated change.

This contrast of approaches was most visible in the two keynote speeches. Rebecca MacKinnon was clearly an emissary of the regulation skeptics, and her talk was a well-argued and illustrated cautionary tale of unintended consequences and slippery slopes. She drew a direct comparison between Chinese corporate self-censorship and the West’s regulatory tack towards intermediary liability, with its attendant chilling effects.

Phillip Mueller’s keynote on open statecraft, by contrast, was a far more academic and abstract treatment by a public policy professor. Machiavelli and Martin Luther were invoked (the latter as a proto-blogger), governance and social production models were contrasted, and distinctions were drawn between one-to-many, many-to-many and few-to-few media.

The overall effect was that of a public policy professional sizing up the Internet. MacKinnon, on the other hand, came across as a digital native sizing up public policy. It’s a subtle distinction, and both perspectives are valuable, but as an Internet user, I find myself hoping HIIG’s ethos doesn’t default solely to Mueller’s approach.

Privacy: How might a digital native’s approach to research questions differ? I think it could affect some of the underlying assumptions. An example: In the workshop on “Internet Legislation and Regulation through the Eyes of Constitution” [sic] there was some talk about how constitutional rights such as privacy or free expression must continue to be robustly protected as the Internet comes to permeate society. This is true, though privacy and free expression often stand in opposition to one another, and so a balance of rights needs to be found that corresponds to a society’s needs and expectations — that’s the job of judges and legislators.

What’s evident is that over time, the march of technology will naturally favor some rights at the expense of others; in a world of cheap camera phones, Facebook and Twitter, our private sphere shrinks and smudges into various shades of semi-privacy, in part because our friends and colleagues have ever more powerful tools to freely express themselves about us.

A conventional policy reaction to this technology-mediated erosion of privacy might be to legislate ever stronger protection in a valiant attempt to freeze privacy norms at pre-Internet levels. A digital native’s policy reaction would be to embrace this shifting natural balance, and focus instead on enabling emerging norms for privacy management. Privacy is a mutable social norm, and it always has been, waxing and waning over the centuries. The new norms need to accommodate this dynamism.

The Berkman Center’s Executive Director Urs Gasser, in his contribution to the workshop, made room for the digitally native response. He pointed out that policy responses to the Internet could range from enacting wholly new legislation, to subsuming old legislation into a new, more relevant legal framework, to doing nothing at all. He warned against legislating too soon: Knee-jerk legislation produced the US Patriot Act, after all. And finally, he betrayed an engineer’s sensibility, suggesting that the online effects of legislation should be measurable, enabling feedback loops that would allow the legal system to learn.

Public Domain: In the workshop “The Digital Public Domain Between Regulation And Innovation” there was a similar recognition that traditional methods of rewarding creativity through intellectual property protection are being made obsolete by technological innovation. To digital natives, the concept of “buying” digital content is an increasingly anachronistic metaphor, and yet regulatory activity has focused almost exclusively on extending the notion of property, and hence theft, into the digital age. Meanwhile, technology makes it ever easier to duplicate digital content with impunity.

A digitally native policy approach, in contrast, appreciates that social practices are shifting just as much in the creation of content as in its consumption. The lone-author notion of content creation that traditional IP law has catered to is now just one extreme in a spectrum of increasingly collaborative and iterative creative processes. This new reality has triggered a Cambrian explosion of more apt content-use schemes: licensing models such as Creative Commons and the GNU GPL, voluntary micropayment reward schemes such as Flattr and Readability, and flat-rate consumption schemes such as Spotify and Netflix.

All of these innovations are blurring the boundaries of the public domain, and constitute a de facto assault on IP orthodoxy. What they also share is a bottom-up, evolutionary genesis, born of disparate social movements and entrepreneurial initiatives, as opposed to a more deliberate, top-down approach championed by University of Haifa Dean Niva Elkin-Koren, who was present at the workshop. Her wish was that “we need to start from the purpose of the public domain and then derive norms.”

I certainly approve of this sentiment, though I suspect such a project would crucially lack broader support among copyright incumbents. In the meantime, the best we can do is let these emerging use schemes reshape the public domain in an ad hoc way, with the net effect so far being positive. Elkin-Koren has a point, however, which she has long argued: the evolutionary nature of this process does not guarantee a positive outcome.

So, even among digital natives, the tactics may differ while the strategies align. Fortunately, these two approaches are not mutually exclusive. And perhaps the specter of a Darwinian evolution of content use norms will push the incumbents towards a system that more holistically looks at how to maximize creativity with a minimum of constraints — something which ACTA demonstrably fails to do.

With all the great people at the workshops and on the sidelines, HIIG looks set to bring a strong European voice to the “Internet and Society” debate. And with MacKinnon, Gasser and Elkin-Koren contributing to the launch symposium, here’s hoping that voice also embraces the digitally native view.

The Wikileaks blame game — who released what, exactly?

The story of how the unredacted version of the US diplomatic cables ended up in the wild really is a disgraceful farce — a tragedy of errors, with plenty of blame to go round. Nigel Parry has a great round-up and Micah Sifry also weighs in.

There’s one thing I am not inclined to blame Assange for, however: It’s been misreported just about everywhere that Assange and Wikileaks released the fully unredacted cables themselves, as per their tweet:

WIKILEAKS RELEASE: Full Cablegate2 database file (torrent) http://file.wikileaks.org/torrent/cable_db_full.7z.torrent

That torrent delivers a SQL database file. Wikileaks also made a 60GB HTML version of the cables available. I downloaded both files and searched them for previously redacted cables which I had read when they were first released. In these BitTorrent files, those cables are still redacted; it is only the remaining, previously unreleased cables that are unredacted.
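This kind of check is easy to script, incidentally. Below is a minimal sketch, assuming the HTML dump unpacks to one file per cable and that redactions appear as placeholder runs of X’s (as in the “Xxx himself…” cable quoted further down); both the layout and the placeholder convention are assumptions here, and the example path is hypothetical.

```python
# Minimal sketch: does a cable file still contain redaction
# placeholders? Assumes redactions are marked with runs of three
# or more X's; the dumps' actual convention may differ.
import re

REDACTION = re.compile(r"\bx{3,}\b", re.IGNORECASE)

def is_redacted(path: str) -> bool:
    with open(path, encoding="utf-8", errors="replace") as f:
        return REDACTION.search(f.read()) is not None

# Hypothetical filename -- the dump's real naming scheme may differ:
print(is_redacted("cable/2006/google_earth_cable.html"))
```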

The online resources showing this partially redacted version of the cables are cablesearch.org and cablegatesearch.net. (Both links go to the same 2006 cable, in which a Chinese official asks the US, without success, to censor Google Earth imagery of China; in both versions, the name of the official is redacted.)

The completely unredacted version of the cables is the file hosted by Cryptome, and it is this version which the cables.mrkva.eu online search tool queries. (Here is the same cable about Google Earth as before. It reveals the name of the official.)

So it appears that Wikileaks is not directly responsible for “unredacting” the previously redacted material that is now floating around. The source of that lies elsewhere. Is this a distinction without a difference? I’m not sure; Wikileaks did after all tweet a link to the cables.mrkva.eu search tool. Perhaps in the current chaos Wikileaks is not even sure what it is releasing.

So who bears the preponderance of the blame, then? Right now I’m leaning towards The Guardian’s David Leigh for his apparent technological ineptitude in not knowing that encrypted files don’t come with temporary passwords — perhaps he watched too many James Bond films, where messages self-destruct on camera. By Parry’s account, Leigh couldn’t even unzip a file on his own. It’s not surprising then that he’d put the password to the unredacted original trove of cables in his book, published in February 2011. It’s colossal cluelessness, and I hope he’s sleeping badly for all the vulnerable people he’s put at risk.
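For the non-technical among us: a passphrase on an encrypted file is not a session credential that can lapse; it is the key material itself. A minimal sketch of passphrase-based symmetric encryption makes the point (it uses Python’s cryptography library rather than the PGP setup actually involved here, and the passphrase is a placeholder):

```python
# Sketch of passphrase-based symmetric encryption. The passphrase is
# fed through a key-derivation function to produce the key; there is
# no server to revoke it and no clock to expire it.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)
passphrase = b"placeholder, not the password from the book"
ciphertext = Fernet(key_from_passphrase(passphrase, salt)).encrypt(b"the cables")

# Months or years later, the same passphrase still decrypts --
# nothing has "expired" in the meantime:
assert Fernet(key_from_passphrase(passphrase, salt)).decrypt(ciphertext) == b"the cables"
```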

Julian Assange is also to blame, primarily for being so cavalier with the information, entrusting it to people who are not capable of keeping it safe (or perhaps not being clear enough to Leigh about the nature of the file in question). As a result, intelligence agencies have likely had access to the unredacted cables for some time now.

The damage this has done is real. I would not want to be the Chinese person in this 2010 cable, a nephew of a Politburo Standing Committee member, who told US diplomats that cyber attacks against Google in China were being coordinated by his government. That cable was previously redacted, but now shows his name.

Nor would I want to be the Chinese person in this 2008 cable, published by Wikileaks a few days ago in its unredacted form, where he tells US diplomats:

Xxx himself is a leader of an underground church in Shanghai. He recently returned from a secret meeting of leaders of underground churches from Beijing, Shanghai, Tianjin, Wuhan, and Nanjing. Participants in the meeting reported that there has been an increase of governmental scrutiny and pressure because of the Olympics.

Getting his real name is now as easy as clicking on the link above. If these people, and others like them, find themselves harassed or arrested as a result of this débâcle, then I’m afraid that on balance, the Wikileaks experiment in radical transparency has made the world a worse place — and all through the sheer ineptitude of all parties concerned.

Wikileaks’ own leak ushers in the era of radical transparency

As the story emerges of how Wikileaks’ US diplomatic cables came to be available in unredacted, unencrypted form this week, potentially endangering many informants and other vulnerable people, one obvious lesson is again in evidence:

People are the weakest link in any encryption system.

The potential for human error is ever-present, and so the odds were always that eventually, somehow, somebody would screw up — even somebody as security-obsessed as Julian Assange was not exempt. It’s a cliché to posit that “information wants to be free”; perhaps it is more accurate to say that for information, being encrypted is an unstable state — either the password is soon forgotten or taken to the grave and the information disappears from the universe, or else some blunder eventually allows it to escape to the world at large. For information, in the long run, it’s more “Live free or die”; there is no stable intermediate state. Conspiracies are short-lived at best because humans are fallible; Knights Templar successfully defending the Holy Grail across the centuries exist only in bad fantasy fiction.

The 500MB BitTorrent file that contains all the cables unzips to around 60GB of HTML files — my computer’s been at it for over 8 hours and counting. I can’t not rifle through this trove now that it is in the wild, of course. Previously, I was frustrated that I couldn’t just run text searches across all the content for my own ad hoc investigative reporting, although I understood and approved of the reason why. Now that this information is in the open, we can’t just leave it to those with nefarious motives to read it — we all need to read up, so that there is some hope of a silver lining. (If I find anything relevant to Dliberation’s remit, I’ll blog it, of course.)
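For the curious, the ad hoc searching I have in mind takes only a few lines of code. A minimal sketch, assuming the archive unpacks to a directory tree of HTML files (the actual layout may differ):

```python
# Stream through the unpacked cable files and print the paths of
# those containing a search term. Only one file is held in memory
# at a time, so the 60GB total size is no obstacle.
import os
import sys

def search_cables(root: str, needle: str):
    needle = needle.lower()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".html"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                if needle in f.read().lower():
                    yield path

if __name__ == "__main__":
    # e.g. python search_cables.py ./cable "google earth"
    for hit in search_cables(sys.argv[1], sys.argv[2]):
        print(hit)
```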

It looks like we will after all have to adjust to living in a global society where radical transparency is an expected outcome, whether from customer database leaks or whistleblower actions. For a while, as The Guardian and others released redacted versions of the cables, we thought Pandora’s box could be opened just a sliver. We were wrong.

Flash mob rule

Much has already been said about the looting spree that afflicted London and other British cities last week, so I’ll stick to just one observation:

These incidents were traditional flash mobs in every sense but for their destructive intent. All flash mobs — be it a “spontaneous” pillow fight in central Stockholm or a frozen Grand Central Station in New York — share the same dynamic: Social (or semi-social) media are used to gather a group at a pre-defined semi-secret location to engage in a common synchronized activity.

In the case of the London incidents, the looters discovered that this dynamic can be co-opted to overwhelm local law enforcement through sheer numbers at a certain place and for a certain time, thus facilitating looting.

Law enforcement has always been a little skittish about flash mob projects, precisely because there was that “what if” scenario looming — what if the group act was anti-social in its intent, instead of social? Now we know: it works very well. And so do the looters.

PDF 2011, and a first post

The seeds of this blog were planted over a year ago, when I found myself more and more fascinated by the implications of a global society in which almost all content is digitally stored and transmitted. At the time, the topic felt a little niche, but in the intervening year the news has been invaded by Wikileaks, cyber attacks against major corporations, tightening Internet censorship in China and elsewhere, and the emergence of social media-savvy revolutionaries in the Middle East.

Ironically, the topic is itself now ripe for close and constant surveillance; this is what Dliberation.org is for. And there is no better time to start such a project than at Personal Democracy Forum, edition 2011.

Forget Twitter and Facebook; this is a satellite TV revolution

(Originally posted pseudonymously on Ultimi Barbarorum, January 28, 2011.)

Today’s lesson: The Internet and mobile telephony are not robust technologies when it comes to withstanding state intervention. States can and do pull the plug on them when they sense an existential threat. China turned off the Internet in restless Xinjiang for 9 months in 2009-2010, and Iran and other countries turn off SMS and mobile Internet use when it suits them. Today, Egypt’s authorities tried to dampen a popular uprising by shutting down both its Internet and mobile telephony.

This is sobering, but points the way to how such draconian measures can be circumvented by those intent on accessing independent news: By not relying at all on terrestrial infrastructure such as cell towers and Internet cabling, falling back instead on direct satellite communications.

By necessity, this set-up reverts to a broadcast/receiver relationship, with international broadcasters like the BBC and Al Jazeera able to invest in satellite video phones as a back-up in case authorities turn off other means of broadcasting live. The Egyptian people, meanwhile, have ubiquitous access to satellite television — as anyone who’s been to Cairo can attest after just a brief glance across the rooftops:

Satellite dishes on Cairo rooftops.

There is no way to restrict the reception of such broadcasts: short of turning off the electricity, Mubarak cannot prevent Egyptians from watching Al Jazeera by satellite. This fall-back on satellite reception is not available in all countries, however. In China, for example, it is cable television that is ubiquitous — a terrestrial mode of communication, which can be and is blacked out at will by the Chinese authorities, most recently whenever CNN broadcast news of Liu Xiaobo’s Nobel Peace Prize.

While I am sure that much of Egypt’s older cohorts are glued to their televisions tonight, I wonder if turning off the Internet and mobile telephony earlier today didn’t have an effect opposite to what Mubarak’s regime intended: Egypt’s urban youth, suddenly without their main means of diversion or entertainment, had only the streets to go to. For once, there was no Twitter or Facebook or YouTube to distract them. All that was left to do was to go out and vent their rage.