Privacy and the Human in the Loop

When considering any system, we sometimes forget that there’s a ‘Human in the Loop’. I’ve just finished reading a great white paper by Lorrie Cranor (A Framework for Reasoning About the Human in the Loop), and whilst the paper talks about security, its kissing cousin is Privacy. So a lot of the ideas presented here are interchangeable.

In the paper, Lorrie argues for keeping humans out of the loop when it comes to security unless it’s absolutely unavoidable; where it is unavoidable, she offers a framework that can be used to identify problem areas before a system is built.

Here are the components in her framework:

  1. Communication: How are you communicating with the user (notices, warnings, status lights)?
  2. Communication impediments: Can communications be interfered with (e.g. by malicious third parties)?
  3. Personal variables: What behaviour and relevant knowledge about the system does the user bring?
  4. Intentions: Can the system be trusted and are users motivated to take appropriate action?
  5. Capabilities: Are users capable of taking the appropriate action?

So what has all of this got to do with the proposed Do Not Track standard? Well actually, a lot. Systems live and die based on the ‘Human in the Loop’, so if the solution is poorly designed or cannot be trusted there is little chance of it succeeding.

The currently proposed Do Not Track standard has an incredibly simple Human Interface. The user goes to the browser menu, selects Privacy, and then checks the box marked ‘Ask Web Sites Not To Track Me’. That’s it. That one check box is all the human intervention required. So what could possibly go wrong? Well, a lot.
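Under the covers, that one check box boils down to a single extra header on every HTTP request the browser makes. Here’s a minimal sketch (in Python, with example.com standing in for any site) of what the browser effectively sends on the user’s behalf:

    import urllib.request

    # The 'Ask Web Sites Not To Track Me' check box translates to one
    # request header: DNT: 1. (example.com is just a placeholder host.)
    req = urllib.request.Request(
        "https://example.com/",
        headers={"DNT": "1"},  # the user's entire expressed intention
    )

    with urllib.request.urlopen(req) as resp:
        # Note: nothing in the response is required to say whether the
        # server saw, understood, or will honour the DNT header.
        print(resp.status)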

The standard makes it very simple for a user to communicate an intention to a Web server – and then (dare I say it) deliberately removes the need for the Web server to communicate that it ‘acknowledges and understands’ the user’s intention. Right there is the fatal design flaw. (Imagine if HTTPS worked this way.) A malicious third party can easily change the user’s intention to an alternative, undesired outcome, i.e. ‘Track Me’. As there’s no need for the Web server to acknowledge what it received, you can easily make the case that it can simply ignore everything and continue as normal. In short, there’s NO verification (as in ‘Trust but Verify’) required. So Do Not Track fails both items 1 and 2 in the framework.
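To make that concrete, here’s a hypothetical server-side handler (Python’s built-in http.server; the logic is my own illustration, not anything the standard specifies). The DNT header arrives, but nothing obliges the server to act on it, or to tell the user what it actually did:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrackingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            dnt = self.headers.get("DNT")  # "1" if the user opted out

            # A well-behaved server would disable tracking here. A badly
            # behaved one can ignore `dnt` entirely, and because no
            # acknowledgement is required, the response looks identical
            # either way. The user has no way to verify what happened.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Hello. Tracked or not? You can't tell.\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), TrackingHandler).serve_forever()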

As we go on, we see that there are similar problems with all of the other framework items as well. Humans have really NO idea how their private data is being used on the Web. They love all the FREE services but fail to understand that ‘pie is not free at the truck stop’: their data is shared in an attempt to market new services to them. So Do Not Track fails item 3.

Let’s look at the final two items, Intentions and Capabilities – again we have a ‘swing and a miss’ scenario. If I cannot verify what I sent, then I cannot trust the system. I have to trust the content provider, and due to the lack of transparency when it comes to privacy (NOT security), the Human has no idea what is really taking place under the covers. Finally, capabilities. Can I take appropriate action IF I find out my privacy is being abused? Not really – I can go to another Web site, but that might be the same as jumping from the frying pan into the fire. I cannot change my browser settings any further, so essentially I’m stuck sharing my data if I want that free service.

However, the user can fight back – and Facebook is a good example of that. Approximately 25% of Facebook users use a fake profile. That’s 250 million people all lying about who they are. And herein lies (pun intended) the real Privacy issue – where’s the motivation for both parties (Human and Content Provider) to deliver meaningful value?

It’s like everyone is stuck in the mud with the current status quo, where everything is free and everything (my privacy) is for sale. The only solution that I’ve seen that comes really, really close to meeting Lorrie’s framework guidelines is the RePriv idea from Microsoft. Why? Because it adds accountability back into the system.

As the old saying goes, 50% of all advertising is worthless – the trick is figuring out which 50%. As Microsoft shows in the RePriv paper, a better-designed system can be built, and the benefits for the ‘Human in the Loop’ are significant.

Posted in: #Choice, #mobile, #privacy, Privacy, Quality of Experience, User Experience

