Politics, designed

The most revolutionary thing you can do in today’s politics is to assume at least a few people on the other side are acting in good faith. Seeking and finding villainy among those who disagree with you is neither hard, brave, nor virtuous. It’s the path of least resistance afforded by the way our society is currently organized, and like most default choices, it doesn’t have the end user’s best interests in mind.

The use of knowledge in the tech industry

How much and what kinds of knowledge are most useful?

I want to have a somewhat abstract discussion of the use of knowledge and how people regard knowledge in various settings, but in particular I will focus on my experiences in a fairly specific cross-section of the tech industry. I will suggest a framework and pose some questions.

The first question is how knowledge pertains to success in Silicon Valley. If you want to “win” in tech, is it better to invest in knowing things rather than, say, knowing people? Or is tech a more purely social game? What is the relative value of intellectual capital vs. pure social capital? I have thoughts on this that I’ll write about later.

The three approaches to knowledge

Supposing that knowledge has some non-zero value, I can think of three approaches to obtaining and utilizing it that I’ve seen firsthand.

The most common pattern I saw was a belief that knowledge is important and that it is held by successful people. Folks like Peter Thiel or Marc Andreessen have very important knowledge, and if you can somehow get it from them, you’ll be far better off. Let’s call this the model of privileged knowledge.

Another possibility, which I’m not sure I know as many practitioners of, is that the milieu of knowledge is such that there are some fundamentals everyone needs to know, and a vanishing advantage to acquiring knowledge beyond that point. Furthermore, this knowledge is basically a commodity, so you can get it by reading business books and blog posts. Let’s call this the model of commodity knowledge.

Finally, there is the point of view represented by Peter Thiel’s notion of “secrets.” Specifically, that there is a means of deducing important knowledge through intuition or analysis, and that this is the most valuable kind of knowledge due to the need to avoid competition. Let’s call this the model of proprietary knowledge.

My experience

Anecdotally, I think the predominant view among my acquaintances in tech (mostly novice VCs and entrepreneurs) is that you should believe and profess that knowledge is useful, but actually act as though social capital is more important and optimize for that, accumulating whatever privileged knowledge accrues along the way.

If the predominant pattern of belief is in privileged knowledge, as I suggest, then venture capitalists, or anyone else who would sell mining equipment in a gold rush, would do well to position themselves as gatekeepers of an important knowledge base.

On the other hand, and to take myself as an example of what not to do: I was very attracted to the model of proprietary knowledge. However, I would caution anyone who would use this approach in Silicon Valley. One of my proprietary findings is that, despite what it would like to believe about itself, tech is unable to value or do much with contrarian or proprietary knowledge bases.


Overall, I’m interested in what the ROI of knowledge is for the various games one might play in tech, which approach to knowledge acquisition is the most useful, which is the most useful to seem to use and in what areas, and which approaches to knowledge are believed in and by whom. I’d also be glad to hear of any other approaches to knowledge.

These concepts had been floating around in my head for a while and were catalyzed by a discussion with David Lee.

Ah, The New Year

Why your resolutions probably won't work for long

Ah, the New Year: that time of year when most people get to re-learn that inflexible thinking patterns usually only get you a few weeks’ worth of success at most.

Time as a complicated dimension of UX design

Is faster always better?

We recently published a paper at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) exploring the benefits of deliberately slowing down algorithms for certain tasks.

“Faster is better” seems like a straightforward rule of thumb for designing algorithms and user experiences, and when algorithmic systems were being designed for single-user productivity applications, this rule was probably all that needed to be said about time in design.

However, modern algorithms are very different from the first word processors. In particular, algorithms are now used to make decisions of consequence, like who will be released on bail and who will receive preventative healthcare. When a user relies on an algorithm to augment a real-world decision that affects others, is it really optimal for them to get the quickest answer and accept the algorithm’s suggestion with no deliberation or consideration of nuance? Probably not.

We conducted experiments where we used an algorithm to augment human decision making on a contrived task, and we found that a slow algorithm can help users be more reflective about the task at hand and be more thoughtful when interpreting algorithmic outputs. To see more about this, take a look at the paper linked above or this Medium post by my colleague.

An additional issue related to time in UX, which our work did not address, is what role the rapid tempo of online services might play in some of their unwanted effects. My hypothesis is that fake news, polarization, and the addictiveness of apps are all intimately related to the tempo of interaction in online social media. The advertising business model needs interfaces to be fast, so that loading times are not a limiting factor in the number of page views per minute a user generates.

I think tools that slow down existing social media, or social media specifically designed with a slower tempo of interaction [1], would be more tranquil spaces where our deliberative, rather than our impulsive, selves would show up. However, this would require social media products with fundamentally different business incentives than our current, advertising-monetized ones.

[1] I’m not simply suggesting Facebook with slow page load times, even though deliberately slowing down Facebook probably does make it less addictive, and may prove more effective at curtailing addictive usage than a tool that blocks the site for specified time periods. Instead, I’m thinking of things like a comment system that delays the broadcast of comments to other users by a minute or more. It would be more difficult to have a heated argument on Twitter if there were a cooldown period in the propagation of what you’ve just posted.
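One way such a delayed-broadcast comment system could work is sketched below. This is a hypothetical design, not an existing product: the `makeQueue` helper, the one-minute constant, and the author/viewer visibility rule are all my own illustration. A comment is visible to its author immediately, but only propagates to other users once the cooldown has elapsed.

```javascript
// Hypothetical sketch of a delayed-broadcast comment queue.
// Comments appear instantly to their author but are only "broadcast"
// to everyone else after a cooldown period.

const COOLDOWN_MS = 60 * 1000; // one-minute propagation delay (illustrative)

function makeQueue(now = Date.now) {
  const comments = [];
  return {
    // Record a comment with a timestamp from the injected clock.
    post(author, text) {
      comments.push({ author, text, postedAt: now() });
    },
    // What a given viewer sees: their own comments right away,
    // everyone else's only once the cooldown has passed.
    visibleTo(viewer) {
      const t = now();
      return comments.filter(
        (c) => c.author === viewer || t - c.postedAt >= COOLDOWN_MS
      );
    },
  };
}
```

Injecting the clock (`now`) keeps the sketch testable; a real system would enforce the delay server-side so clients could not bypass it.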

Adversarial interaction

How users can win with online platforms

My research focuses on something I call adversarial interaction.

Adversarial interaction is when people use online platforms strategically and in unexpected ways. But why would a user want to interact adversarially?

Users and platforms are often in conflict. What a user wants may not be good for business. A product’s interface is a choice architecture where some outcomes are systematically favored over other theoretically-achievable outcomes. Entering into someone else’s choice architecture has ramifications for user autonomy—will they be able to make the choices they want to make when interacting with the system or will there be engineered friction to realizing certain outcomes?

Our core insight is that the design of most software products is contestable—a user downloads code and data from a platform and usually executes the code as instructed on the data, as the platform expects. However, this code, and thus the product design and the underlying choice architecture, is often just a suggestion. A user executing code in unexpected ways might co-author the product design in order to achieve outcomes they prefer more often.

We look at user-platform conflicts using tools from the fields of human-computer interaction and security, at incentives in online systems using tools from game theory and mechanism design, and at ways that users can use software to modify their interactions to favor themselves rather than platforms in these conflicts.

The most obvious example of adversarial interaction is ad blocking. Another example would be a user who wants to spend less time on Facebook, in spite of how bad this decision might be for the company’s business model. Facebook is designed to be addictive, but rendering the site according to that design is a suggestion, not a requirement. A user might use a tool where they specify how much they’d like to use Facebook. Once this usage threshold is reached, the tool might degrade addictive design patterns to make the user experience more frustrating: by washing out colors, blurring text, and making the site load more slowly.
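As a rough illustration of the threshold idea, a browser extension could compute increasingly aggressive CSS filters as the user overshoots a self-chosen budget. The `degradationCss` function and its scaling constants below are hypothetical, my own sketch rather than any existing extension’s API.

```javascript
// Hypothetical sketch: map time-over-budget to a CSS degradation rule.
// Under budget, the site is left alone; past it, colors wash out and
// text blurs in proportion to the overshoot.

function degradationCss(minutesUsed, budgetMinutes) {
  const overshoot = Math.max(0, minutesUsed - budgetMinutes);
  if (overshoot === 0) return ""; // under budget: no degradation
  // Illustrative scaling: full grayscale after 30 extra minutes,
  // up to 3px of blur after 30 extra minutes.
  const gray = Math.min(1, overshoot / 30);
  const blur = Math.min(3, overshoot / 10);
  return `body { filter: grayscale(${gray}) blur(${blur}px); }`;
}

// A content script might then inject the result into the page:
//   const style = document.createElement("style");
//   style.textContent = degradationCss(used, budget);
//   document.head.appendChild(style);
```

Gradual degradation, rather than an outright block, is the point: the user still gets access, but the design patterns that make the site compelling are progressively disarmed.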

Overall, we want to know what can be achieved by viewing the interface as a contested space, by what means, and at what cost.

Image: The Code Of Honor—A Duel In The Bois De Boulogne, Near Paris, wood-engraving after Godefroy Durand, Harper's Weekly (January 1875). Credit: Wikipedia
