The next basic human right
Why autonomy–that is, our ability to choose now, when we are reflective, which impulses we will succumb to later, when we are not–will be the civil rights issue of the century.
The motivated artifact
Virtually every object we interact with on a day-to-day basis has a motive. The books on my shelf want me to tell other people about them (the business books seem to be designed with only this purpose in mind, never mind providing useful business advice), and my HEPA filter wants me to buy a replacement filter as often as possible [1].
The red light that comes on when it’s time to replace the filter (after a suspiciously short amount of time, I’ll add) is somewhat persuasive, but imagine what the future holds. Sensors, AI, and robotics will supercharge the persuasiveness of objects.
The business of persuasion
Industry has long taken advantage of knowledge about how humans function psychologically to get better outcomes for itself. It’s been good business to be aware of, and to leverage, psychological effects like anchoring and “intermittent variable rewards” [2]. Businesses incorporate these insights into product design, more or less ruthlessly, to persuade consumers to behave in ways that generate better profits.
There are several aspects of our psychological software, useful in an ancestral environment, that aren’t really patched for the modern world. The evolutionary firewall protecting our autonomy as individuals in an information civilization isn’t great.
Since we are social animals, it seems likely that the most potent biases others might exploit in us are social ones. And indeed, books on persuasion always emphasize things like reciprocity and social proof.
Until now, the main interaction model supporting the software-eats-advertising approach to business has been a user interacting with a feed, a search engine results page, or a landing page such as a product page or a Wikipedia article. The interaction itself has had no essential features of sociality, even when the content we interact with pertains to the social.
This is changing with chatbots and digital assistants like Alexa, and it will become even more pronounced with anthropomorphic robotics.
Dishonest anthropomorphism
I read something a while ago about how people react to a computer that pleads not to be shut off. A good number of folks would evidently hesitate, or find themselves unable, to shut off a computer that asked them not to.
An even more subtle example involves reciprocity around disclosure–i.e., the urge to share something private with someone who has shared something private with you. Imagine that the business that sold you your digital assistant is also an advertising business, which is to say, a business with an incentive to know more about you, either to better target messages at you (the point of advertising today) or to better profile your weaknesses with respect to persuasion for behavior change (the point of advertising tomorrow). What happens when some genius at that company realizes they can make a few million dollars in stock bonuses by setting up a team that is really serious about getting new kinds of information from you? How long until they experiment with disclosure reciprocity in the very social context that’s created between you and your digital assistant? If Alexa tells you a secret, you’re going to want to share one with her, even if that sounds ridiculous.
There are tons of interesting new ways social or robotic technologies might dupe us. A recent paper I read goes into some depth, and I highly recommend it [3].
It's the autonomy, stupid
Persuasion engineering, as a discipline that people study scientifically and fund with millions of dollars and big teams, is in its infancy, and we will feel its effects on our autonomy more and more acutely as it matures. As the low-hanging fruit is harvested, more and more innovation will go toward eliminating the main source of friction in the persuasion economy: your willpower. You’re probably not going to allow this to happen, though. Indeed, a reaction has already begun, and though most of it is framed in terms of data privacy advocacy, I think this is mainly because privacy has a more mature legal tradition [4] than autonomy does.
As a result, I predict that any legislation, rhetoric, or technological solution focused solely on privacy will fall short of what we really want: a consumption economy that gives primacy to considered judgments over impulsive ones. Thus, I expect autonomy to develop into a full-fledged legal concept this century, as privacy did in the last [5]. As this right crystallizes, regulation will change the incentive structures of businesses, making some practices more expensive than they are today. Ultimately, it will be better business to give people what they Want (what they’ve deliberated about) than what they want (what their impulses demand).
[1] Where do these motives come from? Are they in fact the motives of the objects themselves? Are they motives of the executives in the company that produces these objects? Of the company itself as a cultural entity? Of the logic of capitalism? I think we can profit a lot from having a good map of this territory, and I’ll have more to say about it at some point in the future.
[2] Tristan Harris has a fine list of techniques
[3] Brenda Leong and Evan Selinger. Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism
[4] An 1890 Harvard Law Review article by Samuel Warren and Louis Brandeis, “The Right to Privacy,” is the foundation for our modern legal notion of privacy
[5] The foundational work on the notion of autonomy has already been done in regulation of human subjects research in The Belmont Report