Law Commons – Science and Technology Law – Artificial Intelligence

Full-Text Articles in Law
Articles 1 - 30 of 149

The Driving Impact Of Artificial Intelligence On Global Expansion, Aleksandra Drozd Apr 2024

Senior Honors Theses

The invention and continual growth of artificial intelligence (AI) on the global stage have significantly shaped the world’s economies, governments, societies and their cultures. The new industrial revolution and the subsequent race of the world’s leading powers have led to increased international joint efforts and exchange of information, simultaneously reducing barriers to trade and communication. Meanwhile, emerging technologies deploying AI have led to changes in human behavior and culture and challenged the traditional nation-state model. Although several implications of the proliferation of AI remain unknown, its widening application may be tied to accelerating globalization, referred to interchangeably as global expansion. …


The Legal Imitation Game: Generative AI’s Incompatibility With Clinical Legal Education, Jake Karr, Jason Schultz Apr 2024

Fordham Law Review

In this Essay, we briefly describe key aspects of [generative artificial intelligence] that are particularly relevant to, and raise particular risks for, its potential use by lawyers and law students. We then identify three foundational goals of clinical legal education that provide useful frameworks for evaluating technological tools like GenAI: (1) practice readiness, (2) justice readiness, and (3) client-centered lawyering. First is “practice readiness,” which is about ensuring that students have the baseline abilities, knowledge, and skills to practice law upon graduation. Second is “justice readiness,” a concept proposed by Professor Jane Aiken, which is about teaching law students to …


Critical Data Theory, Margaret Hu Mar 2024

William & Mary Law Review

Critical Data Theory examines the role of AI and algorithmic decisionmaking at its intersection with the law. This theory aims to deconstruct the impact of AI in law and policy contexts. The tools of AI and automated systems allow for legal, scientific, socioeconomic, and political hierarchies of power that can profitably be interrogated with critical theory. While the broader umbrella of critical theory features prominently in the work of surveillance scholars, legal scholars can also deploy criticality analyses to examine surveillance and privacy law challenges, particularly in an examination of how AI and other emerging technologies have been expanded in …


ChatGPT, AI Large Language Models, And Law, Harry Surden Jan 2024

Publications

This Essay explores Artificial Intelligence (AI) Large Language Models (LLMs) like ChatGPT/GPT-4, detailing the advances and challenges in applying AI to law. It first explains how these AI technologies work at an understandable level. It then examines the significant evolution of LLMs since 2022 and their improved capabilities in understanding and generating complex documents, such as legal texts. Finally, this Essay discusses the limitations of these technologies, offering a balanced view of their potential role in legal work.


AI-Based Evidence In Criminal Trials?, Sabine Gless, Fredric I. Lederer, Thomas Weigend Jan 2024

Faculty Publications

Smart devices are increasingly the origin of critical criminal case data. Such data, especially data generated when using modern automobiles, is likely to become even more important as increasingly complex methods of machine learning lead to AI-based evidence being autonomously generated by devices. This article reviews the admissibility of such evidence from both American and German perspectives. As a result of this comparative approach, the authors conclude that American evidence law could be improved by borrowing aspects of the expert testimony approaches used in Germany’s “inquisitorial” court system.


Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen Jan 2024

Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


Code And Prejudice: Regulating Discriminatory Algorithms, Bernadette M. Coyle Dec 2023

Washington and Lee Law Review Online

In an era dominated by efficiency-driven technology, algorithms have seamlessly integrated into every facet of daily life, wielding significant influence over decisions that impact individuals and society at large. Algorithms are deliberately portrayed as impartial and automated in order to maintain their legitimacy. However, this illusion crumbles under scrutiny, revealing the inherent biases and discriminatory tendencies embedded in ostensibly unbiased algorithms. This Note delves into the pervasive issues of discriminatory algorithms, focusing on three key areas of life opportunities: housing, employment, and voting rights. This Note systematically addresses the multifaceted issues arising from discriminatory algorithms, showcasing real-world instances of algorithmic …


Our Changing Reality: The Metaverse And The Importance Of Privacy Regulations In The United States, Anushkay Raza Dec 2023

Global Business Law Review

This Note discusses the legal and pressing digital challenges that arise in connection with the growing use of virtual reality, and more specifically, the metaverse. As this digital realm becomes more integrated into our daily lives, the United States should look towards creating a federal privacy law that protects fundamental individual privacy rights. This Note argues that Congress should emulate the European Union's privacy regulations and, further, balances the potential consequences and benefits of adapting European regulations within the United States. Finally, this Note provides drafting considerations for future lawyers who will not only be dealing with the rise of …


A Public Technology Option, Hannah Bloch-Wehba Dec 2023

Faculty Scholarship

Private technology increasingly underpins public governance. But the state’s growing reliance on private firms to provide a variety of complex technological products and services for public purposes brings significant costs for transparency: new forms of governance are becoming less visible and less amenable to democratic control. Transparency obligations initially designed for public agencies are a poor fit for private vendors that adhere to a very different set of expectations.

Aligning the use of technology in public governance with democratic values calls for rethinking, and in some cases abandoning, the legal structures and doctrinal commitments that insulate private vendors from meaningful …


Either The Law Will Govern AI, Or AI Will Govern The Law, Margaret Hu Nov 2023

Popular Media

No abstract provided.


The Philosophy Of AI: Learning From History, Shaping Our Future. Hearing Before The Committee On Homeland Security And Government Affairs, Senate, One Hundred Eighteenth Congress, First Session., Margaret Hu Nov 2023

Congressional Testimony

No abstract provided.


Biden's Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion, Margaret Hu Nov 2023

Popular Media

No abstract provided.


The First Byte Rule: A Proposal For Liability Of Artificial Intelligences, Hilyard Nichols Nov 2023

William & Mary Business Law Review

Artificial Intelligences (AIs) are a relatively new addition to human civilization. From delivery robots to board game champions, researchers and businesses have found a variety of ways to apply this new technology. As it continues to grow and become more prevalent, though, so do its interactions with society at large. This will create benefits for people, through cheaper or better products and services. It also has the possibility to create harm. AIs are not perfect, and as the range of AI uses grows, so will the range of potential harms. A mistake from an AI customer service bot could fraudulently …


Regulation Priorities For Artificial Intelligence Foundation Models, Matthew R. Gaske Nov 2023

Vanderbilt Journal of Entertainment & Technology Law

This Article responds to the call in technology law literature for high-level frameworks to guide regulation of the development and use of Artificial Intelligence (AI) technologies. Accordingly, it adapts a generalized form of the fintech Innovation Trilemma framework to argue that a regulatory scheme can prioritize only two of three aims when considering AI oversight: (1) promoting innovation, (2) mitigating systemic risk, and (3) providing clear regulatory requirements. Specifically, this Article expressly connects legal scholarship to research in other fields focusing on foundation model AI systems and explores this kind of system’s implications for regulation priorities from the geopolitical and …


LegalBench: A Collaboratively Built Benchmark For Measuring Legal Reasoning In Large Language Models, Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel Rockmore, Diego A. Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li Sep 2023

All Papers

The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers …


Fair’s Fair: How Public Benefit Considerations In The Fair Use Doctrine Can Patch Bias In Artificial Intelligence Systems, Patrick K. Lin Jul 2023

Indiana Journal of Law and Social Equality

The impact of artificial intelligence (AI) expands relentlessly despite well-documented examples of bias in AI systems, from facial recognition failing to differentiate between darker-skinned faces to hiring tools discriminating against female candidates. These biases can be introduced to AI systems in a variety of ways; however, a major source of bias is found in training datasets, the collection of images, text, audio, or information used to build and train AI systems. This Article first grapples with the pressure copyright law exerts on AI developers and researchers to use biased training data to build algorithms, focusing on the potential risk …


Toward An Enhanced Level Of Corporate Governance: Tech Committees As A Game Changer For The Board Of Directors, Maria Lillà Montagnani, Maria Lucia Passador May 2023

The Journal of Business, Entrepreneurship & the Law

Although tech committees are increasingly being included in the functioning of the board of directors, a gap exists in the current literature on board committees, as it tends to focus on traditional board committees, such as nominating, auditing or remuneration ones. Therefore, this article performs an empirical analysis of tech committees adopted by North American and European listed companies in 2019 in terms of their composition, characteristics and functions. The aim of the study is to understand what “technology” really stands for in the “tech committees” label within the board, or – to phrase it differently – to ascertain what …


On The Danger Of Not Understanding Technology, Fredric I. Lederer May 2023

Popular Media

No abstract provided.


The Perks Of Being Human, Max Stul Oppenheimer Apr 2023

Washington and Lee Law Review Online

The power of artificial intelligence has recently entered the public consciousness, prompting debates over numerous legal issues raised by use of the tool. Among the questions that need to be resolved is whether to grant intellectual property rights to copyrightable works or patentable inventions created by a machine, where there is no human intervention sufficient to grant those rights to the human. Both the U.S. Copyright Office and the U.S. Patent and Trademark Office have taken the position that in cases where there is no human author or inventor, there is no right to copyright or patent protection. …


Regulating Artificial Intelligence In International Investment Law, Mark McLaughlin Apr 2023

Research Collection Yong Pung How School Of Law

The interaction between artificial intelligence (AI) and international investment treaties is an uncharted territory of international law. Concerns over the national security, safety, and privacy implications of AI are spurring regulators into action around the world. States have imposed restrictions on data transfer, utilised automated decision-making, mandated algorithmic transparency, and limited market access. This article explores the interaction between AI regulation and standards of investment protection. It is argued that the current framework provides an unpredictable legal environment in which to adjudicate the contested norms and ethics of AI. Treaties should be recalibrated to reinforce their anti-protectionist origins, embed human-centric …


Exams In The Time Of ChatGPT, Margaret Ryznar Mar 2023

Washington and Lee Law Review Online

Invaluable guidance has emerged regarding online teaching in recent years, but less so concerning online and take-home final exams. This article offers various methods to administer such exams while maintaining their integrity—after asking artificial intelligence writing tool ChatGPT for its views on the matter. The sophisticated response of the chatbot, which students can use in their written work, only raises the stakes of figuring out how to administer exams fairly.


Regulating Uncertain States: A Risk-Based Policy Agenda For Quantum Technologies, Tina Dekker, Florian Martin-Bariteau Feb 2023

Canadian Journal of Law and Technology

Many countries are taking a national approach to developing quantum strategies with a strong focus on innovation. However, societal, ethical, legal, and policy considerations should not be an afterthought that is pushed aside by the drive for innovation. A responsible, global approach to quantum technologies that considers the legal, ethical, and societal dimensions of quantum technologies is necessary to avoid exacerbating existing global inequalities. Quantum technologies are expected to disrupt other transformative technologies whose legal landscape is still under development (e.g., artificial intelligence [“AI”], blockchain, etc.). The shortcomings of global policies regarding AI and the digital context teach lessons that …


Legal Dispositionism And Artificially-Intelligent Attributions, Jerrold Soh Feb 2023

Research Collection Yong Pung How School Of Law

It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless established obligations against AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions and leads us towards …


Recognizing Operators’ Duties To Properly Select And Supervise AI Agents – A (Better?) Tool For Algorithmic Accountability, Richard Zuroff Jan 2023

Canadian Journal of Law and Technology

In November of 2020, the Privacy Commissioner of Canada proposed creating GDPR-inspired rights for decision subjects and allowing financial penalties for violations of those rights. Shortly afterward, the proposal to create a right to an explanation for algorithmic decisions was incorporated into Bill C-11, the Digital Charter Implementation Act. This commentary proposes that creating duties for operators to properly select and supervise artificial agents would be a complementary, and potentially more effective, accountability mechanism than creating a right to an explanation. These duties would be a natural extension of employers’ duties to properly select and retain human employees. Allowing victims …


The Need For An Australian Regulatory Code For The Use Of Artificial Intelligence (AI) In Military Application, Sascha-Dominik Dov Bachmann, Richard V. Grant Jan 2023

American University National Security Law Brief

Artificial Intelligence (AI) is enabling rapid technological innovation and is ever more pervasive in a global technological ecosystem that lacks suitable governance and regulation of AI-enabled technologies. Australia is committed to being a global leader in trusted, secure, and responsible AI and has escalated the development of its own sovereign AI capabilities. Military and Defence organisations have similarly embraced AI, harnessing advantages for applications supporting battlefield autonomy, intelligence analysis, capability planning, operations, training, and autonomous weapons systems. While no regulation exists covering AI-enabled military systems and autonomous weapons, these platforms must comply with International Humanitarian Law, the Law of …


Vicarious Liability For AI, Mihailis E. Diamantis Jan 2023

Indiana Law Journal

When an algorithm harms someone—say by discriminating against her, exposing her personal data, or buying her stock using inside information—who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with “off” switches, …


Regulating The Risks Of AI, Margot E. Kaminski Jan 2023

Publications

Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.

This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for …


Naïve Realism, Cognitive Bias, And The Benefits And Risks Of AI, Harry Surden Jan 2023

Publications

In this short piece I comment on Orly Lobel's book on artificial intelligence (AI) and society, "The Equality Machine." Here, I reflect on the complex topic of AI and its impact on society, and the importance of acknowledging both its positive and negative aspects. More broadly, I discuss the various cognitive biases, such as naïve realism, epistemic bubbles, negativity bias, extremity bias, and the availability heuristic, that influence individuals' perceptions of AI, often leading to polarized viewpoints. Technology can both exacerbate and ameliorate these biases, and I commend Lobel's balanced approach to AI analysis as an example to emulate.

Although …


Humans In The Loop, Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II Jan 2023

Publications

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these "human-in-the-loop" systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the …


Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski Jan 2023

Articles

From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human in the loop” systems. We make four contributions to the discourse.

First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless …