
net.wars: Debating the robocalypse

by Wendy M Grossman | posted on 02 December 2011


"This House fears the rise of artificial intelligence."


This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers – neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell – honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying – I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts – but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners even if later that night, while raiding the icebox, they think, "Well, hang on…" This is where politicians and the Dáil/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, is that if AI emerges it will appear first in some giant, boxy system (that humans can unplug), and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and "We have met the enemy and he is us." In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience – if Ms Jones is unavailable anyone else can help us – but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza.

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agendas and/or competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), they'd better be prepared to either a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.



Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter or send email to netwars(at) skeptic.demon.co.uk (but please turn off HTML).