Open Access. Powered by Scholars. Published by Universities.®

Law Commons

Articles 1 - 17 of 17

Full-Text Articles in Law

Risky Speech Systems: Tort Liability For AI-Generated Illegal Speech, Margot E. Kaminski Jan 2024

Publications

No abstract provided.


Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen Jan 2024

Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


Vicarious Liability For AI, Mihailis E. Diamantis Jan 2023

Indiana Law Journal

When an algorithm harms someone—say by discriminating against her, exposing her personal data, or buying her stock using inside information—who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with “off” switches, …


The TikTok Algorithm Is Good, But Is It Too Good? Exploring The Responsibility Of Artificial Intelligence Systems Reinforcing Harmful Ideas On Users, Julianne Gabor Jan 2023

Catholic University Journal of Law and Technology

No abstract provided.


Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen Jan 2022

Law & Economics Working Papers

While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.


Assuming The Risks Of Artificial Intelligence, Amy L. Stein Jan 2022

UF Law Faculty Publications

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense …


The Ratio Method: Addressing Complex Tort Liability In The Fourth Industrial Revolution, Harrison C. Margolin, Grant H. Frazier Oct 2021

St. Mary's Law Journal

Emerging technologies of the Fourth Industrial Revolution show fundamental promise for improving productivity and quality of life, though their misuse may also cause significant social disruption. For example, while artificial intelligence will be used to accelerate society’s processes, it may also displace millions of workers and arm cybercriminals with increasingly powerful hacking capabilities. Similarly, human gene editing shows promise for curing numerous diseases, but also raises significant concerns about adverse health consequences related to the corruption of human and pathogenic genomes.

In most instances, only specialists understand the growing intricacies of these novel technologies. As the complexity and speed of …


The Power Of The "Internet Of Things" To Mislead And Manipulate Consumers: A Regulatory Challenge, Kate Tokeley Apr 2021

The Power Of The "Internet Of Things" To Mislead And Manipulate Consumers: A Regulatory Challenge, Kate Tokeley

Notre Dame Journal on Emerging Technologies

The “Internet of Things” revolution is on its way, and with it comes an unprecedented risk of unregulated misleading marketing and a dramatic increase in the power of personalized manipulative marketing. IoT is a term that refers to a growing network of internet-connected physical “smart” objects accumulating in our homes and cities. These include “smart” versions of traditional objects such as refrigerators, thermostats, watches, toys, light bulbs, cars, and Alexa-style digital assistants. The corporations who develop IoT are able to utilize a far greater depth of data than is possible from merely tracking our web browsing in regular online environments. …


Medical Device Artificial Intelligence: The New Tort Frontier, Charlotte A. Tschider Jan 2021

Faculty Publications & Other Works

The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the …


Where We’re Going, We Don’t Need Drivers: Autonomous Vehicles And AI-Chaperone Liability, Peter Y. Kim Oct 2020

Catholic University Law Review

The future of mainstream autonomous vehicles is approaching in the rearview mirror. Yet, the current legal regime for tort liability leaves an open question on how tortious Artificial Intelligence (AI) devices and systems that are capable of machine learning will be held accountable. To understand the potential answer, one may simply go back in time and see how this question would be answered under traditional torts. This Comment tests whether the incident involving an autonomous vehicle hitting a pedestrian is covered under the traditional torts, argues that they are incapable of solving this novel problem, and ultimately proposes a new …


Non-Autonomous Artificial Intelligence Programs And Products Liability: How New AI Products Challenge Existing Liability Models And Pose New Financial Burdens, Greg Swanson Apr 2019

Seattle University Law Review

This Comment argues that the unique relationship between manufacturers, consumers, and their reinforcement learning AI systems challenges existing products liability law models. These traditional models inform how to identify and apportion liability between manufacturers and consumers while exposing litigants to low-dollar tort remedies with inherently high-dollar litigation costs. Rather than waiting for AI autonomy, the political and legal communities should be proactive and generate a liability model that recognizes how new AI programs have already redefined the relationship between manufacturer, consumer, and product while challenging the legal and financial burden of prospective consumer-plaintiffs and manufacturer-defendants.


Data-Informed Duties In AI Development, Frank A. Pasquale Jan 2019

Faculty Scholarship

Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement generalist judges. Such regulation will not only provide guidance to …


A Black Box For Patient Safety?, Nathan Cortez Jan 2019

Faculty Journal Articles and Book Chapters

Technology now makes it possible to record surgical procedures with striking granularity. And new methods of artificial intelligence (A.I.) and machine learning allow data from surgeries to be used to identify and predict errors. These technologies are now being deployed, on a research basis, in hospitals around the world, including in U.S. hospitals. This Article evaluates whether such recordings – and whether subsequent software analyses of such recordings – are discoverable and admissible in U.S. courts in medical malpractice actions. I then argue for reformulating traditional "information policy" to accommodate the use of these new technologies without losing sight of …


Lessons From Literal Crashes For Code, Margot Kaminski Jan 2019

Publications

No abstract provided.


The Road To Autonomy, Michelle Sellwood Dec 2017

San Diego Law Review

[T]his Comment discusses the background of AI and robotics, the technology behind the autonomous vehicle, and the evolution of products liability laws. Part III examines current regulations, the benefits of autonomous technology, and the need for a definitive liability framework. Part IV discusses why current tort liability laws will be ineffective in governing autonomous vehicle liability by examining the shift in liability from the driver to the owner and manufacturer. Part V proposes a short-term solution by attributing liability to the programmer, while software is still hard-coded. Finally, Part VI explores legal personhood, and proposes that the autonomous vehicle be …


Amoral Machines, Or: How Roboticists Can Learn To Stop Worrying And Love The Law, Bryan Casey Aug 2017

Northwestern University Law Review

The media and academic dialogue surrounding high-stakes decisionmaking by robotics applications has been dominated by a focus on morality. But the tendency to do so while overlooking the role that legal incentives play in shaping the behavior of profit-maximizing firms risks marginalizing the field of robotics and rendering many of the deepest challenges facing today’s engineers utterly intractable. This Essay attempts to both halt this trend and offer a course correction. Invoking Justice Oliver Wendell Holmes’s canonical analogy of the “bad man . . . who cares nothing for . . . ethical rules,” it demonstrates why philosophical abstractions like …


The Application Of Traditional Tort Theory To Embodied Machine Intelligence, Curtis E.A. Karnow Jan 2013

Curtis E.A. Karnow

This note discusses the traditional tort theories of liability such as negligence and strict liability and suggests these are likely insufficient to impose liability on legal entities (people and companies) selling or employing autonomous robots. I provide the essential working definitions of ‘autonomous’ as well as the legal notion of ‘foreseeability’ which lies at the heart of tort liability. The note is not concerned with the policy, ethics, or other issues arising from the use of robots including armed and unarmed drones, because those, as I define them, are not currently autonomous, and do not implicate the legal issues I …