Document Type

Article

Publication

The Yale Law Journal Forum

Year

2024

Abstract

Artificial Intelligence (AI) systems such as ChatGPT can now produce convincingly human speech, at scale. It is tempting to ask whether such AI-generated content “disrupts” the law. That, we claim, is the wrong question. It characterizes the law as inherently reactive, rather than proactive, and fails to reveal how what may look like “disruption” in one area of the law is business as usual in another. We challenge the prevailing notion that technology inherently disrupts law, proposing instead that law and technology co-construct each other in a dynamic interplay reflective of societal priorities and political power. This Essay deploys and expounds upon the method of “legal construction of technology.” By removing the blinders of technological determinism and instead performing legal construction of technology, legal scholars and policymakers can more effectively ensure that the integration of AI systems into society aligns with key values and legal principles.

Legal construction of technology, as we perform it, consists of examining the ways in which the law’s objects, values, and institutions constitute legal sensemaking of new uses of technology. For example, the First Amendment governs “speech” and “speakers” toward a number of theoretical goals, largely through the court system. This leads to a particular set of puzzles, such as the fact that AI systems are not human speakers with human intent. But other areas of the law construct AI systems very differently. Content-moderation law regulates communications platforms and networks toward the goals of balancing harms against free speech and innovation; risk regulation, increasingly being deployed to regulate AI systems, regulates risky complex systems toward the ends of mitigating both physical and dignitary harms; and consumer-protection law regulates businesses and consumers toward the goals of maintaining fair and efficient markets. In none of these other legal constructions of AI is AI’s lack of human intent a problem.

By going through each example in turn, this Essay aims to demonstrate the benefits of looking at AI-generated content through the lens of legal construction of technology, instead of asking whether the technology disrupts the law. We aim, too, to convince policymakers and scholars of the benefits of the method: it is descriptively accurate, yields concrete policy revelations, and can in practice be deeply empowering for both groups. AI systems do not in some abstract sense disrupt the law. Under a values-driven rather than technology-driven approach to technology policy, the law can do far more than just react.
