Open Access. Powered by Scholars. Published by Universities.®
Articles 1 - 30 of 138
Full-Text Articles in Law
Code And Prejudice: Regulating Discriminatory Algorithms, Bernadette M. Coyle
Washington and Lee Law Review Online
In an era dominated by efficiency-driven technology, algorithms have seamlessly integrated into every facet of daily life, wielding significant influence over decisions that impact individuals and society at large. Algorithms are deliberately portrayed as impartial and automated in order to maintain their legitimacy. However, this illusion crumbles under scrutiny, revealing the inherent biases and discriminatory tendencies embedded in ostensibly unbiased algorithms. This Note delves into the pervasive issues of discriminatory algorithms, focusing on three key areas of life opportunities: housing, employment, and voting rights. This Note systematically addresses the multifaceted issues arising from discriminatory algorithms, showcasing real-world instances of algorithmic …
Our Changing Reality: The Metaverse And The Importance Of Privacy Regulations In The United States, Anushkay Raza
Global Business Law Review
This Note discusses the pressing legal and digital challenges that arise in connection with the growing use of virtual reality, and more specifically, the metaverse. As this digital realm becomes more integrated into our daily lives, the United States should look towards creating a federal privacy law that protects fundamental individual privacy rights. This Note argues that Congress should emulate the European Union's privacy regulations and, further, balances the potential consequences and benefits of adapting European regulations within the United States. Finally, this Note provides drafting considerations for future lawyers who will not only be dealing with the rise of …
A Public Technology Option, Hannah Bloch-Wehba
Faculty Scholarship
Private technology increasingly underpins public governance. But the state’s growing reliance on private firms to provide a variety of complex technological products and services for public purposes brings significant costs for transparency: new forms of governance are becoming less visible and less amenable to democratic control. Transparency obligations initially designed for public agencies are a poor fit for private vendors that adhere to a very different set of expectations.
Aligning the use of technology in public governance with democratic values calls for rethinking, and in some cases abandoning, the legal structures and doctrinal commitments that insulate private vendors from meaningful …
Either The Law Will Govern AI, Or AI Will Govern The Law, Margaret Hu
Popular Media
No abstract provided.
The Philosophy Of AI: Learning From History, Shaping Our Future. Hearing Before The Committee On Homeland Security And Governmental Affairs, Senate, One Hundred Eighteenth Congress, First Session, Margaret Hu
Congressional Testimony
No abstract provided.
Biden's Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion, Margaret Hu
Popular Media
No abstract provided.
The First Byte Rule: A Proposal For Liability Of Artificial Intelligences, Hilyard Nichols
William & Mary Business Law Review
Artificial Intelligences (AIs) are a relatively new addition to human civilization. From delivery robots to board game champions, researchers and businesses have found a variety of ways to apply this new technology. As it continues to grow and become more prevalent, though, so do its interactions with society at large. This will create benefits for people, through cheaper or better products and services. It also has the possibility to create harm. AIs are not perfect, and as the range of AI uses grows, so will the range of potential harms. A mistake from an AI customer service bot could fraudulently …
Regulation Priorities For Artificial Intelligence Foundation Models, Matthew R. Gaske
Vanderbilt Journal of Entertainment & Technology Law
This Article responds to the call in technology law literature for high-level frameworks to guide regulation of the development and use of Artificial Intelligence (AI) technologies. Accordingly, it adapts a generalized form of the fintech Innovation Trilemma framework to argue that a regulatory scheme can prioritize only two of three aims when considering AI oversight: (1) promoting innovation, (2) mitigating systemic risk, and (3) providing clear regulatory requirements. Specifically, this Article expressly connects legal scholarship to research in other fields focusing on foundation model AI systems and explores this kind of system’s implications for regulation priorities from the geopolitical and …
LegalBench: A Collaboratively Built Benchmark For Measuring Legal Reasoning In Large Language Models, Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel Rockmore, Diego A. Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li
All Papers
The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers …
Fair’s Fair: How Public Benefit Considerations In The Fair Use Doctrine Can Patch Bias In Artificial Intelligence Systems, Patrick K. Lin
Indiana Journal of Law and Social Equality
The impact of artificial intelligence (AI) expands relentlessly despite well-documented examples of bias in AI systems, from facial recognition failing to differentiate between darker-skinned faces to hiring tools discriminating against female candidates. These biases can be introduced to AI systems in a variety of ways; however, a major source of bias is found in training datasets, the collections of images, text, audio, or information used to build and train AI systems. This Article first grapples with the pressure copyright law exerts on AI developers and researchers to use biased training data to build algorithms, focusing on the potential risk …
Toward An Enhanced Level Of Corporate Governance: Tech Committees As A Game Changer For The Board Of Directors, Maria Lillà Montagnani, Maria Lucia Passador
The Journal of Business, Entrepreneurship & the Law
Although tech committees are increasingly being included in the functioning of the board of directors, a gap exists in the current literature on board committees, as it tends to focus on traditional board committees, such as the nominating, audit, and remuneration committees. Therefore, this article performs an empirical analysis of tech committees adopted by North American and European listed companies in 2019 in terms of their composition, characteristics, and functions. The aim of the study is to understand what “technology” really stands for in the “tech committees” label within the board, or – to phrase it differently – to ascertain what …
On The Danger Of Not Understanding Technology, Fredric I. Lederer
Popular Media
No abstract provided.
The Perks Of Being Human, Max Stul Oppenheimer
Washington and Lee Law Review Online
The power of artificial intelligence has recently entered the public consciousness, prompting debates over numerous legal issues raised by use of the tool. Among the questions that need to be resolved is whether to grant intellectual property rights to copyrightable works or patentable inventions created by a machine, where there is no human intervention sufficient to grant those rights to the human. Both the U.S. Copyright Office and the U.S. Patent and Trademark Office have taken the position that in cases where there is no human author or inventor, there is no right to copyright or patent protection. …
Regulating Artificial Intelligence In International Investment Law, Mark McLaughlin
Research Collection Yong Pung How School Of Law
The interaction between artificial intelligence (AI) and international investment treaties is an uncharted territory of international law. Concerns over the national security, safety, and privacy implications of AI are spurring regulators into action around the world. States have imposed restrictions on data transfer, utilised automated decision-making, mandated algorithmic transparency, and limited market access. This article explores the interaction between AI regulation and standards of investment protection. It is argued that the current framework provides an unpredictable legal environment in which to adjudicate the contested norms and ethics of AI. Treaties should be recalibrated to reinforce their anti-protectionist origins, embed human-centric …
Exams In The Time Of ChatGPT, Margaret Ryznar
Washington and Lee Law Review Online
Invaluable guidance has emerged regarding online teaching in recent years, but less so concerning online and take-home final exams. This article offers various methods to administer such exams while maintaining their integrity—after asking artificial intelligence writing tool ChatGPT for its views on the matter. The sophisticated response of the chatbot, which students can use in their written work, only raises the stakes of figuring out how to administer exams fairly.
Legal Dispositionism And Artificially-Intelligent Attributions, Jerrold Soh
Research Collection Yong Pung How School Of Law
It is conventionally argued that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be held liable should the system's actions cause harm. Since the system cannot be liable on its own account either, existing laws expose victims to accountability gaps and need to be reformed. Recent legal instruments have nonetheless imposed obligations on AI developers and providers. Drawing on attribution theory, this paper examines how these seemingly opposing positions are shaped by the ways in which AI systems are conceptualised. Specifically, folk dispositionism underpins conventional legal discourse on AI liability, personality, publications, and inventions and leads us towards …
Recognizing Operators’ Duties To Properly Select And Supervise AI Agents – A (Better?) Tool For Algorithmic Accountability, Richard Zuroff
Canadian Journal of Law and Technology
In November of 2020, the Privacy Commissioner of Canada proposed creating GDPR-inspired rights for decision subjects and allowing financial penalties for violations of those rights. Shortly afterward, the proposal to create a right to an explanation for algorithmic decisions was incorporated into Bill C-11, the Digital Charter Implementation Act. This commentary proposes that creating duties for operators to properly select and supervise artificial agents would be a complementary, and potentially more effective, accountability mechanism than creating a right to an explanation. These duties would be a natural extension of employers’ duties to properly select and retain human employees. Allowing victims …
The Need For An Australian Regulatory Code For The Use Of Artificial Intelligence (AI) In Military Application, Sascha-Dominik Dov Bachmann, Richard V. Grant
American University National Security Law Brief
Artificial Intelligence (AI) is enabling rapid technological innovation and is ever more pervasive in a global technological ecosystem that lacks suitable governance and regulation of AI-enabled technologies. Australia is committed to being a global leader in trusted, secure, and responsible AI and has escalated the development of its own sovereign AI capabilities. Military and Defence organisations have similarly embraced AI, harnessing advantages for applications supporting battlefield autonomy, intelligence analysis, capability planning, operations, training, and autonomous weapons systems. While no regulation exists covering AI-enabled military systems and autonomous weapons, these platforms must comply with International Humanitarian Law, the Law of …
Vicarious Liability For AI, Mihailis E. Diamantis
Indiana Law Journal
When an algorithm harms someone—say by discriminating against her, exposing her personal data, or buying her stock using inside information—who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with “off” switches, …
Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski
Articles
From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse.
First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless …
Regulating The Risks Of AI, Margot E. Kaminski
Publications
Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.
This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for …
Naïve Realism, Cognitive Bias, And The Benefits And Risks Of AI, Harry Surden
Publications
In this short piece I comment on Orly Lobel's book on artificial intelligence (AI) and society, "The Equality Machine." Here, I reflect on the complex topic of AI and its impact on society, and the importance of acknowledging both its positive and negative aspects. More broadly, I discuss the various cognitive biases, such as naïve realism, epistemic bubbles, negativity bias, extremity bias, and the availability heuristic, that influence individuals' perceptions of AI, often leading to polarized viewpoints. Technology can both exacerbate and ameliorate these biases, and I commend Lobel's balanced approach to AI analysis as an example to emulate.
Although …
Humans In The Loop, Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II
Publications
From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these "human-in-the-loop" systems. We make four contributions to the discourse.
First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the …
Big Data Affirmative Action, Peter N. Salib
Northwestern University Law Review
As a vast and ever-growing body of social-scientific research shows, discrimination remains pervasive in the United States. In education, work, consumer markets, healthcare, criminal justice, and more, Black people fare worse than whites, women worse than men, and so on. Moreover, the evidence now convincingly demonstrates that this inequality is driven by discrimination. Yet solutions are scarce. The best empirical studies find that popular interventions—like diversity seminars and antibias trainings—have little or no effect. And more muscular solutions—like hiring quotas or school busing—are now regularly struck down as illegal. Indeed, in the last thirty years, the Supreme Court has invalidated …
Open-Source Clinical Machine Learning Models: Critical Appraisal Of Feasibility, Advantages, And Challenges, Keerthi B. Harish, W. Nicholson Price II, Yindalon Aphinyanaphongs
Articles
Machine learning applications promise to augment clinical capabilities and at least 64 models have already been approved by the US Food and Drug Administration. These tools are developed, shared, and used in an environment in which regulations and market forces remain immature. An important consideration when evaluating this environment is the introduction of open-source solutions in which innovations are freely shared; such solutions have long been a facet of digital culture. We discuss the feasibility and implications of open-source machine learning in a health care infrastructure built upon proprietary information. The decreased cost of development as compared to drugs and …
Using Artificial Intelligence In The Law Review Submissions Process, Brenda M. Simon
Faculty Scholarship
The use of artificial intelligence to help editors examine law review submissions may provide a way to improve an overburdened system. This Article is the first to explore the promise and pitfalls of using artificial intelligence in the law review submissions process. Technology-assisted review of submissions offers many possible benefits. It can simplify preemption checks, prevent plagiarism, detect failure to comply with formatting requirements, and identify missing citations. These efficiencies may allow editors to address serious flaws in the current selection process, including the use of heuristics that may result in discriminatory outcomes and dependence on lower-ranked journals to conduct …
Algorithmic Governance From The Bottom Up, Hannah Bloch-Wehba
Faculty Scholarship
Artificial intelligence and machine learning are both a blessing and a curse for governance. In theory, algorithmic governance makes government more efficient, more accurate, and more fair. But the emergence of automation in governance also rests on public-private collaborations that expand both public and private power, aggravate transparency and accountability gaps, and create significant obstacles for those seeking algorithmic justice. In response, a nascent body of law proposes technocratic policy changes to foster algorithmic accountability, ethics, and transparency.
This Article examines an alternative vision of algorithmic governance, one advanced primarily by social and labor movements instead of technocrats and firms. …
Content Moderation As Surveillance, Hannah Bloch-Wehba
Faculty Scholarship
Technology platforms are the new governments, and content moderation is the new law, or so goes a common refrain. As platforms increasingly turn toward new, automated mechanisms of enforcing their rules, the apparent power of the private sector seems only to grow. Yet beneath the surface lies a web of complex relationships between public and private authorities that call into question whether platforms truly possess such unilateral power. Law enforcement and police are exerting influence over platform content rules, giving governments a louder voice in supposedly “private” decisions. At the same time, law enforcement avails itself of the affordances of …
Arbitral Analytics: How Moneyball Based Litigation/Judicial Analytics Can Be Used To Predict Arbitration Claims And Outcomes, Benjamin Davies
Pepperdine Dispute Resolution Law Journal
This paper reviews, discusses, and advances the application of artificial intelligence to litigation analytics and arbitration. To better explain the weight an attorney, judge, arbitrator, or the public should give to artificial intelligence and its utilization in the legal field, this paper reviews current AI publications in the litigation analytics field, historical examples, ethical considerations for analytics, and issues surrounding the accumulation of litigation data. Thereafter, this combined knowledge and experience is applied to Financial Industry Regulatory Authority (FINRA) arbitration awards with a novel AI program designed to scrape, index, and analyze these awards …
From Negative To Positive Algorithm Rights, Cary Coglianese, Kat Hefter
William & Mary Bill of Rights Journal
We consider this issue here and suggest that the current calls for a negative right to be free from AI could very well transform over time into positive claims that demand the use of algorithmic tools by government officials. In Part I, we begin by sketching the current landscape surrounding the adoption of AI by government. That landscape is characterized by strong activist and scholarly voices expressing a pronounced aversion to the use of digital algorithms—and taking a decidedly negative rights tone. In Part II, we show that, although aversion to complex technology might be understandable, that aversion is neither …