AI Loyalty: A New Paradigm for Aligning Stakeholder Interests
IEEE Transactions on Technology and Society
Anthony Aguirre, Gaia Dempsey, Harry Surden & Peter B. Reiner, AI Loyalty: A New Paradigm for Aligning Stakeholder Interests, 1 IEEE Transactions on Tech. & Soc'y 128 (2020), available at https://doi.org/10.1109/TTS.2020.3013490.
When we consult a doctor, lawyer, or financial advisor, we assume that they are acting in our best interests. But what should we assume when we interact with an artificial intelligence (AI) system? AI-driven personal assistants, such as Alexa and Siri, already serve as interfaces between consumers and information on the Web, and users routinely rely upon these and similar systems to take automated actions or provide information. Superficially, they may appear to be acting according to user interests, but many are designed with embedded conflicts of interest. To address this problem, we introduce the concept of AI loyalty. AI systems are loyal to the degree that they minimize and make transparent conflicts of interest, and act in ways that prioritize the interests of users. Loyal AI products hold obvious appeal for the end user and could promote the alignment of the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is acting in a manner that is loyal to the user, and argue that AI loyalty should be deliberately considered during the technological design process alongside other important values in AI ethics, such as fairness, accountability, privacy, and equity.