
net.wars: Human factors

by Wendy M Grossman | posted on 17 July 2009


For the last several weeks I've been mulling over the phrase security fatigue. If I tell you that I missed a day at Wimbledon because of this, you'll get an idea of its power.


It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to account for that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat.

Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour.

Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others.

You can't, for example, tell your online broker that you just can't cope with making another support phone call; can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted the paper behind that last statistic, by Joseph Bonneau and Soeren Preibusch at Cambridge's Computer Laboratory, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns, the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing, and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're Facebook, a frequent-flyer program, or Google, it means it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people.

Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel… tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue.

In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with.

E-commerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.




Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, as well as an archive of all the earlier columns in this series. Readers are welcome to post comments here or at the net.wars home page, follow her on Twitter, or send email to netwars(at) skeptic.demon.co.uk (but please turn off HTML).