Open Access. Powered by Scholars. Published by Universities.®

Law Commons

Articles 1 - 7 of 7

Full-Text Articles in Law

Equitable Ecosystem: A Two-Pronged Approach To Equity In Artificial Intelligence, Rangita De Silva De Alwis, Amani Carter, Govind Nagubandi Apr 2023

All Faculty Scholarship

Lawmakers, technologists, and thought leaders are facing a once-in-a-generation opportunity to build equity into the digital infrastructure that will power our lives; we argue for a two-pronged approach to seize that opportunity. Artificial Intelligence (AI) is poised to radically transform our world, but we are already seeing evidence that theoretical concerns about potential bias are now being borne out in the market. To change this trajectory and ensure that development teams are focused explicitly on creating equitable AI, we argue that we need to shift the flow of investment dollars. Venture Capital (VC) firms have an outsized impact in determining …


The Disembodied First Amendment, Nathan Cortez, William M. Sage Feb 2023

Faculty Scholarship

First Amendment doctrine is becoming disembodied—increasingly detached from human speakers and listeners. Corporations claim that their speech rights limit government regulation of everything from product labeling to marketing to ordinary business licensing. Courts extend to commercial speech protections that ordinarily applied only to core political and religious speech. And now, we are told, automated information generated for cryptocurrencies, robocalling, and social media bots is also protected speech under the Constitution. Where does it end? It begins, no doubt, with corporate and commercial speech. We show, however, that heightened protection for corporate and commercial speech is built on several “artifices” - …


Regulating The Risks Of AI, Margot E. Kaminski Jan 2023

Publications

Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.

This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for …


Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski Jan 2023

Articles

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions that affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless …


Naïve Realism, Cognitive Bias, And The Benefits And Risks Of AI, Harry Surden Jan 2023

Publications

In this short piece, I comment on Orly Lobel's book on artificial intelligence (AI) and society, "The Equality Machine." Here, I reflect on the complex topic of AI and its impact on society, and on the importance of acknowledging both its positive and negative aspects. More broadly, I discuss the cognitive biases, such as naïve realism, epistemic bubbles, negativity bias, extremity bias, and the availability heuristic, that influence individuals' perceptions of AI, often leading to polarized viewpoints. Technology can both exacerbate and ameliorate these biases, and I commend Lobel's balanced approach to AI analysis as an example to emulate.

Although …


Humans In The Loop, Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II Jan 2023

Publications

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions that affect human lives, raising questions about how best to regulate these "human-in-the-loop" systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the …


Comments Of The Cordell Institute On AI Accountability, Neil M. Richards, Woodrow Hartzog, Jordan Francis Jan 2023

Scholarship@WashULaw

These comments are a response to the National Telecommunications and Information Administration's 2023 request for comment on AI accountability (AI Accountability RFC, NTIA-2023-0005).

Responding to NTIA’s recent inquiry into AI assurance and accountability, we offer two main arguments regarding the importance of substantive legal protections. First, a myopic focus on concepts of transparency, bias mitigation, and ethics (for which procedural compliance efforts such as audits, assessments, and certifications are proxies) is insufficient when it comes to the design and implementation of accountable AI systems. We call rules built around transparency and bias mitigation “AI half-measures,” because they provide the appearance …