Why Doesn't AI More Often Reflect End Users' Best Interests?
|Nov 11, 2019|
Someone shared a post with this title on Reddit. Based on some things I’ve learned recently, here’s my response.
AI has reached a point where certain applications require industrial scale in terms of access to data, talent, and GPUs. Deep learning has transformed the economics of building AI applications and, to a large extent, de-democratized AI: only a handful of organizations can afford to compete. Most companies with that level of access to data, talent, and processing power have only a tangential incentive to act in users' best interests.
There are probably also arguments to be made that AI is inherently at odds with the user's best interest. If, for example, data privacy is taken to be in a user's best interest, then that interest is directly opposed to a great number of AI applications, which may require wholesale invasions of privacy to succeed.
Finally, there's a question of opportunity cost. If you have the resources to develop AI applications, serving the user's best interest may be many times less profitable than developing an application that works against it.