
net.wars: A really fancy hammer with a gun

by Wendy M Grossman | posted on 24 April 2012


Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against unreasonable search and seizure) or the Fifth Amendment (you cannot be compelled to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.


Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad, you're unlikely to feel the anthropomorphic sense of betrayal you likely would if the knife were brandished by that robot butler above, the one that runs your life while behaving impeccably like your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (also Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, fostering the anthropomorphic response that wrongly attributes to them the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting onto it human capacities such as cognition, intent, and emotion. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and call it "cruel" to send a six-legged landmine-defusing robot that has lost a leg or two back to work (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us the wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets, is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), which have two screens: one housing a robointerrogator, the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here – and especially in medicine, where liability will be a huge issue – our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex for that.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand the algorithms that have consequences for the fates of millions of people? It's bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.



Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars(at) skeptic.demon.co.uk (but please turn off HTML).