net.wars: This is not (just) about Google
by Wendy M Grossman | posted on 21 September 2012
We had previously glossed over the news, in February, that Google had circumvented the privacy settings in Apple's Safari Web browser - which blocks third-party tracking cookies by default - used on both its desktop and mobile machines. For various reasons, Do Not Track is itself a divisive issue, pitting those who favor user control over their privacy against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google should not have overridden users' settings.
In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.
On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?
The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers analyzing the law but empirically, by examining what companies actually do, how they do it, and to how many millions of people.
"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.
This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicit corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."
And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the most dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr., was in that window when he crashed.)
Programmers are rarely brought into these kinds of discussions, yet they are the people at the coalface who must transpose laws, regulations, and policies written in human language into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.
As the LSE's Chrisanthi Avgerou said, these issues apply across the board: not just to major corporations like Google, but also to government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry with as many software holes as we've seen in the last 30 years - really isn't enough.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars(at)skeptic.demon.co.uk (but please turn off HTML).