Open Access. Powered by Scholars. Published by Universities.®

Law Commons

Artificial intelligence

Articles 1 - 30 of 187

Full-Text Articles in Law

Briefing Note: 45th Meeting Of The WIPO Standing Committee On Copyright And Related Rights, Sean Flynn Mar 2024


Joint PIJIP/TLS Research Paper Series

This analysis provides a historical and legal overview of the principal agenda items to be discussed at the 45th meeting of the Standing Committee on Copyright and Related Rights.


AI, Artists, And Anti-Moral Rights, Derek E. Bambauer, Robert W. Woods Jan 2024


UF Law Faculty Publications

Generative artificial intelligence (AI) tools are increasingly used to imitate the distinctive characteristics of famous artists, such as their voice, likeness, and style. In response, legislators have introduced bills in Congress that would confer moral rights protections, such as control over attribution and integrity, upon artists. This Essay argues such measures are almost certain to fail because of deep-seated, pervasive hostility to moral rights measures in U.S. intellectual property law. It analyzes both legislative measures and judicial decisions that roll back moral rights, and explores how copyright’s authorship doctrines manifest a latent hostility to these entitlements. The Essay concludes with …


ChatGPT, AI Large Language Models, And Law, Harry Surden Jan 2024


Publications

This Essay explores Artificial Intelligence (AI) Large Language Models (LLMs) like ChatGPT/GPT-4, detailing the advances and challenges in applying AI to law. It first explains how these AI technologies work at an understandable level. It then examines the significant evolution of LLMs since 2022 and their improved capabilities in understanding and generating complex documents, such as legal texts. Finally, this Essay discusses the limitations of these technologies, offering a balanced view of their potential role in legal work.


AI-Based Evidence In Criminal Trials?, Sabine Gless, Fredric I. Lederer, Thomas Weigend Jan 2024


Faculty Publications

Smart devices are increasingly the origin of critical criminal case data. Such data, especially data generated by modern automobiles, is likely to become even more important as increasingly complex methods of machine learning lead to AI-based evidence being autonomously generated by devices. This article reviews the admissibility of such evidence from both American and German perspectives. As a result of this comparative approach, the authors conclude that American evidence law could be improved by borrowing aspects of the expert testimony approaches used in Germany’s “inquisitorial” court system.


Overcoming Racial Harms To Democracy From Artificial Intelligence, Spencer A. Overton Jan 2024


GW Law Faculty Publications & Other Works

While the United States is becoming more racially diverse, generative artificial intelligence and related technologies threaten to undermine truly representative democracy. Left unchecked, AI will exacerbate already substantial existing challenges, such as racial polarization, cultural anxiety, antidemocratic attitudes, racial vote dilution, and voter suppression. Synthetic video and audio (“deepfakes”) receive the bulk of popular attention—but are just the tip of the iceberg. Microtargeting of racially tailored disinformation, racial bias in automated election administration, discriminatory voting restrictions, racially targeted cyberattacks, and AI-powered surveillance that chills racial justice claims are just a few examples of how AI is threatening democracy. Unfortunately, existing …


Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen Jan 2024


Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


A Public Technology Option, Hannah Bloch-Wehba Dec 2023


Faculty Scholarship

Private technology increasingly underpins public governance. But the state’s growing reliance on private firms to provide a variety of complex technological products and services for public purposes brings significant costs for transparency: new forms of governance are becoming less visible and less amenable to democratic control. Transparency obligations initially designed for public agencies are a poor fit for private vendors that adhere to a very different set of expectations.

Aligning the use of technology in public governance with democratic values calls for rethinking, and in some cases abandoning, the legal structures and doctrinal commitments that insulate private vendors from meaningful …


Either The Law Will Govern AI, Or AI Will Govern The Law, Margaret Hu Nov 2023


Popular Media

No abstract provided.


The Philosophy Of AI: Learning From History, Shaping Our Future. Hearing Before The Committee On Homeland Security And Government Affairs, Senate, One Hundred Eighteenth Congress, First Session., Margaret Hu Nov 2023


Congressional Testimony

No abstract provided.


Biden's Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion, Margaret Hu Nov 2023


Popular Media

No abstract provided.


Comment Letter On SEC’s Proposed Rule On Conflicts Of Interest Associated With The Use Of Predictive Data Analytics By Broker-Dealers And Investment Advisers, File Number S7-12-23, Sergio Alberto Gramitto Ricci, Christina M. Sautter Oct 2023


Faculty Works

This comment letter responds to the Securities and Exchange Commission’s proposed rule Release Nos. 34-97990; IA-6353; File Number S7-12-23 - Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers. Our comments draw on our scholarship relating to laypersons’ participation in securities markets and the corporate sector as well as on the role of technology in corporate governance.

We express concerns that the SEC’s proposed regulation undermines individuals’ ability to access capital markets in an efficient and cost-effective manner. In the era of excessive concentration of equities ownership and power, often with negative societal …


LegalBench: A Collaboratively Built Benchmark For Measuring Legal Reasoning In Large Language Models, Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel Rockmore, Diego A. Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li Sep 2023


All Papers

The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers …


When Machines Can Be Judge, Jury, And Executioner: Justice In The Age Of Artificial Intelligence (Book Review), Stacy Fowler Sep 2023


Faculty Articles

In When Machines Can Be Judge, Jury, and Executioner, former federal judge Katherine Forrest raises concerns over the pervasive use of artificial intelligence (AI) in the American justice system to produce risk and needs assessments (RNA) regarding the probability of recidivism for citizens charged with a crime. Forrest’s argument centers on AI’s primary focus on utilitarian outcomes when assessing liberty for individual citizens. This approach leads Forrest to the conclusion that in its current form, AI is “ill-suited to the criminal justice context.” Forrest contends that AI should instead be programmed to focus on John Rawls’ concept of justice as …


Here Are Ways Professional Education Leaders Can Prepare Students For The Rise Of AI, A. Benjamin Spencer Jul 2023


Popular Media

No abstract provided.


National Telecommunications And Information Administration: Comments From Researchers At Boston University And The University Of Chicago, Ran Canetti, Aloni Cohen, Chris Conley, Mark Crovella, Stacey Dogan, Marco Gaboardi, Woodrow Hartzog, Rory Van Loo, Christopher Robertson, Katharine B. Silbaugh Jun 2023


Faculty Scholarship

These comments were composed by an interdisciplinary group of legal, computer science, and data science faculty and researchers at Boston University and the University of Chicago. This group collaborates on research projects that grapple with the legal, policy, and ethical implications of the use of algorithms and digital innovation in general, and more specifically regarding the use of online platforms, machine learning algorithms for classification, prediction, and decision making, and generative AI. Specific areas of expertise include the functionality and impact of recommendation systems; the development of Privacy Enhancing Technologies (PETs) and their relationship to privacy and data security laws; …


On The Danger Of Not Understanding Technology, Fredric I. Lederer May 2023


Popular Media

No abstract provided.


Out Of Sight, Out Of Mind? Remote Work And Contractual Distancing, Nicola Countouris, Valerio De Stefano May 2023


Articles & Book Chapters

Since the Covid-19 pandemic, remote work has acquired quasi-Marmite status. It has become difficult, if not impossible, to approach the issue in a measured and dispassionate way, which is one of the reasons books such as the present one are being published. Remote work is often seen as anathema by some who associate it with laziness, low productivity and the degradation of the social fabric of firms and of their creative and collaborative potential. The notorious views of CEOs such as Tesla and Twitter’s Elon Musk or JP Morgan’s Jamie Dimon come to mind, indicative – in the view of …


Introduction To The Future Of Remote Work, Nicola Countouris, Valerio De Stefano, Agnieszka Piasna, Silvia Rainone May 2023


Articles & Book Chapters

Debates on the future of work have taken a more fundamental turn in the wake of the Covid-19 pandemic. Early in 2020, when large sections of the workforce were prevented from coming to their usual places of work, remote work became the only way for many to continue to perform their professions. What had been a piecemeal, at times truly sluggish, evolution towards a multilocation approach to work suddenly turned into an abrupt, radical and universal shift. It quickly became clear that the consequences of this shift were far more significant and far-reaching than simply changing the workplace’s address. They …


Regulating Artificial Intelligence In International Investment Law, Mark McLaughlin Apr 2023


Research Collection Yong Pung How School Of Law

The interaction between artificial intelligence (AI) and international investment treaties is an uncharted territory of international law. Concerns over the national security, safety, and privacy implications of AI are spurring regulators into action around the world. States have imposed restrictions on data transfer, utilised automated decision-making, mandated algorithmic transparency, and limited market access. This article explores the interaction between AI regulation and standards of investment protection. It is argued that the current framework provides an unpredictable legal environment in which to adjudicate the contested norms and ethics of AI. Treaties should be recalibrated to reinforce their anti-protectionist origins, embed human-centric …


Generative Artificial Intelligence And Copyright Law, Christopher T. Zirpoli Feb 2023


Copyright, Fair Use, Scholarly Communication, etc.

Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as OpenAI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other …


Human-Centered Design To Address Biases In Artificial Intelligence, Ellen W. Clayton, You Chen, Laurie L. Novak, Shilo Anders, Bradley Malin Feb 2023


Vanderbilt Law School Faculty Publications

The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but it can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. …


Legal Dispositionism And Artificially-Intelligent Attributions, Jerrold Soh Feb 2023


Research Collection Yong Pung How School Of Law

It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations against AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions and leads us towards …


Regulating Machine Learning: The Challenge Of Heterogeneity, Cary Coglianese Feb 2023


All Faculty Scholarship

Machine learning, or artificial intelligence, refers to a vast array of different algorithms that are being put to highly varied uses, including in transportation, medicine, social media, marketing, and many other settings. Not only do machine-learning algorithms vary widely across their types and uses, but they are evolving constantly. Even the same algorithm can perform quite differently over time as it is fed new data. Due to the staggering heterogeneity of these algorithms, multiple regulatory agencies will be needed to regulate the use of machine learning, each within their own discrete area of specialization. Even these specialized expert agencies, though, …


Artificial Intelligence And Contract Formation: Back To Contract As Bargain?, John Linarelli Jan 2023


Book Chapters

Some say AI is advancing quickly. ChatGPT, Bard, Bing’s AI, LaMDA, and other recent advances are remarkable, but they are talkers, not doers. Advances toward some kind of robust agency for AI are, however, coming. Humans and their law must prepare for it. This chapter addresses this preparation from the standpoint of contract law and contract practices. An AI agent that can participate as a contracting agent, in a philosophical or psychological sense, with humans in the formation of a contract will have to have the following properties: (1) AI will need the cognitive functions to act with intention and …


Regulating The Risks Of AI, Margot E. Kaminski Jan 2023


Publications

Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.

This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for …


Naïve Realism, Cognitive Bias, And The Benefits And Risks Of AI, Harry Surden Jan 2023


Publications

In this short piece I comment on Orly Lobel's book on artificial intelligence (AI) and society, "The Equality Machine." Here, I reflect on the complex topic of AI and its impact on society, and the importance of acknowledging both its positive and negative aspects. More broadly, I discuss the various cognitive biases, such as naïve realism, epistemic bubbles, negativity bias, extremity bias, and the availability heuristic, that influence individuals' perceptions of AI, often leading to polarized viewpoints. Technology can both exacerbate and ameliorate these biases, and I commend Lobel's balanced approach to AI analysis as an example to emulate.

Although …


Is Disclosure And Certification Of The Use Of Generative AI Really Necessary?, Maura R. Grossman, Paul W. Grimm, Daniel G. Brown Jan 2023


Faculty Scholarship

No abstract provided.


The Prediction Society: Algorithms And The Problems Of Forecasting The Future, Hideyuki Matsumi, Daniel J. Solove Jan 2023


GW Law Faculty Publications & Other Works

Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited …


Humans In The Loop, Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II Jan 2023


Publications

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these "human-in-the-loop" systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the …


Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski Jan 2023


Articles

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human in the loop” systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into decision making process. Regardless …