Open Access. Powered by Scholars. Published by Universities.®

Law Commons

Science and Technology Law

Artificial intelligence

Articles 1 - 30 of 90

Full-Text Articles in Law

Ethical Algorithms: Navigating AI In Legal Practice For A Just Jurisprudence, Bree'ara Murphy, Rachel Gadra Rankin, Joseph Rios May 2024

Law Review Blog Posts

Exploring the professional obligations practitioners may face in light of developing AI technology by examining state and federal model rule language, current judicial treatment of AI, and AI best practices.


The Driving Impact Of Artificial Intelligence On Global Expansion, Aleksandra Drozd Apr 2024

Senior Honors Theses

The invention and continual growth of artificial intelligence (AI) on the global stage have significantly shaped the world’s economies, governments, societies and their cultures. The new industrial revolution and the subsequent race of the world’s leading powers have led to increased international joint efforts and exchange of information, simultaneously reducing barriers to trade and communication. Meanwhile, emerging technologies deploying AI have led to changes in human behavior and culture and challenged the traditional nation-state model. Although several implications of the proliferation of AI remain unknown, its widening application may be tied to accelerating globalization, referred to interchangeably as global expansion. …


The Right To A Glass Box: Rethinking The Use Of Artificial Intelligence In Criminal Justice, Brandon L. Garrett, Cynthia Rudin Jan 2024

Faculty Scholarship

Artificial intelligence (“AI”) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create AI models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI. A particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that …


AI-Based Evidence In Criminal Trials?, Sabine Gless, Fredric I. Lederer, Thomas Weigend Jan 2024

Faculty Publications

Smart devices are increasingly the origin of critical criminal case data. Such data, especially data generated when using modern automobiles, is likely to become even more important as increasingly complex methods of machine learning lead to AI-based evidence being autonomously generated by devices. This article reviews the admissibility of such evidence from both American and German perspectives. As a result of this comparative approach, the authors conclude that American evidence law could be improved by borrowing aspects of the expert testimony approaches used in Germany’s “inquisitorial” court system.


Constructing AI Speech, Margot E. Kaminski, Meg Leta Jones Jan 2024

Publications

Artificial Intelligence (AI) systems such as ChatGPT can now produce convincingly human speech, at scale. It is tempting to ask whether such AI-generated content “disrupts” the law. That, we claim, is the wrong question. It characterizes the law as inherently reactive, rather than proactive, and fails to reveal how what may look like “disruption” in one area of the law is business as usual in another. We challenge the prevailing notion that technology inherently disrupts law, proposing instead that law and technology co-construct each other in a dynamic interplay reflective of societal priorities and political power. This Essay instead deploys …


ChatGPT, AI Large Language Models, And Law, Harry Surden Jan 2024

Publications

This Essay explores Artificial Intelligence (AI) Large Language Models (LLMs) like ChatGPT/GPT-4, detailing the advances and challenges in applying AI to law. It first explains how these AI technologies work at an understandable level. It then examines the significant evolution of LLMs since 2022 and their improved capabilities in understanding and generating complex documents, such as legal texts. Finally, this Essay discusses the limitations of these technologies, offering a balanced view of their potential role in legal work.


Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen Jan 2024

Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


A Public Technology Option, Hannah Bloch-Wehba Dec 2023

Faculty Scholarship

Private technology increasingly underpins public governance. But the state’s growing reliance on private firms to provide a variety of complex technological products and services for public purposes brings significant costs for transparency: new forms of governance are becoming less visible and less amenable to democratic control. Transparency obligations initially designed for public agencies are a poor fit for private vendors that adhere to a very different set of expectations.

Aligning the use of technology in public governance with democratic values calls for rethinking, and in some cases abandoning, the legal structures and doctrinal commitments that insulate private vendors from meaningful …


Either The Law Will Govern AI, Or AI Will Govern The Law, Margaret Hu Nov 2023

Popular Media

No abstract provided.


The Philosophy Of AI: Learning From History, Shaping Our Future. Hearing Before The Committee On Homeland Security And Government Affairs, Senate, One Hundred Eighteenth Congress, First Session., Margaret Hu Nov 2023

Congressional Testimony

No abstract provided.


Biden's Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion, Margaret Hu Nov 2023

Popular Media

No abstract provided.


LegalBench: A Collaboratively Built Benchmark For Measuring Legal Reasoning In Large Language Models, Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel Rockmore, Diego A. Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li Sep 2023

All Papers

The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers …


On The Danger Of Not Understanding Technology, Fredric I. Lederer May 2023

Popular Media

No abstract provided.


Regulating Artificial Intelligence In International Investment Law, Mark McLaughlin Apr 2023

Research Collection Yong Pung How School Of Law

The interaction between artificial intelligence (AI) and international investment treaties is an uncharted territory of international law. Concerns over the national security, safety, and privacy implications of AI are spurring regulators into action around the world. States have imposed restrictions on data transfer, utilised automated decision-making, mandated algorithmic transparency, and limited market access. This article explores the interaction between AI regulation and standards of investment protection. It is argued that the current framework provides an unpredictable legal environment in which to adjudicate the contested norms and ethics of AI. Treaties should be recalibrated to reinforce their anti-protectionist origins, embed human-centric …


Legal Dispositionism And Artificially-Intelligent Attributions, Jerrold Soh Feb 2023

Research Collection Yong Pung How School Of Law

It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations against AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions and leads us towards …


Regulating The Risks Of AI, Margot E. Kaminski Jan 2023

Publications

Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.

This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for …


Naïve Realism, Cognitive Bias, And The Benefits And Risks Of AI, Harry Surden Jan 2023

Publications

In this short piece I comment on Orly Lobel's book on artificial intelligence (AI) and society, "The Equality Machine." Here, I reflect on the complex topic of AI and its impact on society, and the importance of acknowledging both its positive and negative aspects. More broadly, I discuss the various cognitive biases, such as naïve realism, epistemic bubbles, negativity bias, extremity bias, and the availability heuristic, that influence individuals' perceptions of AI, often leading to polarized viewpoints. Technology can both exacerbate and ameliorate these biases, and I commend Lobel's balanced approach to AI analysis as an example to emulate.

Although …


Humans In The Loop, Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II Jan 2023

Publications

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these "human-in-the-loop" systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the …


Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski Jan 2023

Articles

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human in the loop” systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless …


Open-Source Clinical Machine Learning Models: Critical Appraisal Of Feasibility, Advantages, And Challenges, Keerthi B. Harish, W. Nicholson Price II, Yindalon Aphinyanaphongs Nov 2022

Articles

Machine learning applications promise to augment clinical capabilities and at least 64 models have already been approved by the US Food and Drug Administration. These tools are developed, shared, and used in an environment in which regulations and market forces remain immature. An important consideration when evaluating this environment is the introduction of open-source solutions in which innovations are freely shared; such solutions have long been a facet of digital culture. We discuss the feasibility and implications of open-source machine learning in a health care infrastructure built upon proprietary information. The decreased cost of development as compared to drugs and …


Using Artificial Intelligence In The Law Review Submissions Process, Brenda M. Simon Nov 2022

Faculty Scholarship

The use of artificial intelligence to help editors examine law review submissions may provide a way to improve an overburdened system. This Article is the first to explore the promise and pitfalls of using artificial intelligence in the law review submissions process. Technology-assisted review of submissions offers many possible benefits. It can simplify preemption checks, prevent plagiarism, detect failure to comply with formatting requirements, and identify missing citations. These efficiencies may allow editors to address serious flaws in the current selection process, including the use of heuristics that may result in discriminatory outcomes and dependence on lower-ranked journals to conduct …


Algorithmic Governance From The Bottom Up, Hannah Bloch-Wehba Nov 2022

Faculty Scholarship

Artificial intelligence and machine learning are both a blessing and a curse for governance. In theory, algorithmic governance makes government more efficient, more accurate, and more fair. But the emergence of automation in governance also rests on public-private collaborations that expand both public and private power, aggravate transparency and accountability gaps, and create significant obstacles for those seeking algorithmic justice. In response, a nascent body of law proposes technocratic policy changes to foster algorithmic accountability, ethics, and transparency.

This Article examines an alternative vision of algorithmic governance, one advanced primarily by social and labor movements instead of technocrats and firms. …


Content Moderation As Surveillance, Hannah Bloch-Wehba Oct 2022

Faculty Scholarship

Technology platforms are the new governments, and content moderation is the new law, or so goes a common refrain. As platforms increasingly turn toward new, automated mechanisms of enforcing their rules, the apparent power of the private sector seems only to grow. Yet beneath the surface lies a web of complex relationships between public and private authorities that call into question whether platforms truly possess such unilateral power. Law enforcement and police are exerting influence over platform content rules, giving governments a louder voice in supposedly “private” decisions. At the same time, law enforcement avails itself of the affordances of …


Crossing The Rubicon: Evaluating The Use Of Artificial Intelligence In The Law And Singapore Courts, Ming En Tor Apr 2022

Research Collection Yong Pung How School Of Law

In recent years, Artificial Intelligence (“AI”) has challenged many fundamental assumptions of how organisations and industries should operate. The Courts, traditionally seen as hallowed ground graced by the best of lawyers, still remain uncharted territory for AI’s infiltration. Yet, there is growing evidence suggesting AI may soon cross this frontier to replace important court functions.

This paper critically assesses the use of AI in law and the courts. Part II will first examine the arguments for and against the adoption of AI in the legal profession. Thereafter, Part III will critically examine whether AI should …


Volume Introduction, I. Glenn Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson, Carmel Shachar Mar 2022

Other Publications

Medical devices have historically been less regulated than their drug and biologic counterparts. A benefit of this less demanding regulatory regime is facilitating innovation by making new devices available to consumers in a timely fashion. Nevertheless, there is increasing concern that this approach raises serious public health and safety concerns. The Institute of Medicine in 2011 published a critique of the American pathway allowing moderate-risk devices to be brought to the market through the less-rigorous 510(k) pathway, flagging a need for increased postmarket review and surveillance. High-profile recalls of medical devices, such as vaginal mesh products, along with reports globally …


Distributed Governance Of Medical AI, W. Nicholson Price II Mar 2022

Law & Economics Working Papers

Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, and it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference …


Ethical AI Development: Evidence From AI Startups, James Bessen, Stephen Michael Impink, Lydia Reichensperger, Robert Seamans Mar 2022

Faculty Scholarship

Artificial Intelligence startups use training data as direct inputs in product development. These firms must balance numerous trade-offs between ethical issues and data access without substantive guidance from regulators or existing judicial precedent. We survey these startups to determine what actions they have taken to address these ethical issues and the consequences of those actions. We find that 58% of these startups have established a set of AI principles. Startups with data-sharing relationships with high-technology firms; that were impacted by privacy regulations; or with prior (non-seed) funding from institutional investors are more likely to establish ethical AI principles. Lastly, startups …


New Innovation Models In Medical AI, W. Nicholson Price II, Rachel E. Sachs, Rebecca S. Eisenberg Mar 2022

Articles

In recent years, scientists and researchers have devoted considerable resources to developing medical artificial intelligence (AI) technologies. Many of these technologies—particularly those that resemble traditional medical devices in their functions—have received substantial attention in the legal and policy literature. But other types of novel AI technologies, such as those related to quality improvement and optimizing use of scarce facilities, have been largely absent from the discussion thus far. These AI innovations have the potential to shed light on important aspects of health innovation policy. First, these AI innovations interact less with the legal regimes that scholars traditionally conceive of as …


Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen Jan 2022

Law & Economics Working Papers

While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.


Terrified By Technology: How Systemic Bias Distorts U.S. Legal And Regulatory Responses To Emerging Technology, Steve Calandrillo, Nolan Kobuke Anderson Jan 2022

Articles

Americans are becoming increasingly aware of the systemic biases we possess and how those biases preclude us from collectively living out the true meaning of our national creed. But to fully understand systemic bias we must acknowledge that it is pervasive and extends beyond the contexts of race, privilege, and economic status. Understanding all forms of systemic bias helps us to better understand ourselves and our shortcomings. At first glance, a human bias against emerging technology caused by systemic risk misperception might seem uninteresting or unimportant. But this Article demonstrates how the presence of systemic bias anywhere, even in an …