Open Access. Powered by Scholars. Published by Universities.®

Law Commons
Articles 1 - 5 of 5

Full-Text Articles in Law

Locating Liability For Medical Ai, W. Nicholson Price Ii, I. Glenn Cohen Jan 2024
Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen Jan 2022
Law & Economics Working Papers

While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.


Regulating New Tech: Problems, Pathways, And People, Cary Coglianese Dec 2021
All Faculty Scholarship

New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons from regulation in the past can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to these problems. Then they must undertake efforts to shape the behavior of those in industry so that private sector managers focus on their technologies’ problems and take actions to interrupt the causal pathways. …


The Power Of The "Internet Of Things" To Mislead And Manipulate Consumers: A Regulatory Challenge, Kate Tokeley Apr 2021
Notre Dame Journal on Emerging Technologies

The “Internet of Things” revolution is on its way, and with it comes an unprecedented risk of unregulated misleading marketing and a dramatic increase in the power of personalized manipulative marketing. IoT is a term that refers to a growing network of internet-connected physical “smart” objects accumulating in our homes and cities. These include “smart” versions of traditional objects such as refrigerators, thermostats, watches, toys, light bulbs, and cars, as well as Alexa-style digital assistants. The corporations that develop IoT are able to utilize a far greater depth of data than is possible from merely tracking our web browsing in regular online environments. …


How Much Can Potential Jurors Tell Us About Liability For Medical Artificial Intelligence?, W. Nicholson Price Ii, Sara Gerke, I. Glenn Cohen Jan 2021
Articles

Artificial intelligence (AI) is rapidly entering medical practice, whether for risk prediction, diagnosis, or treatment recommendation. But a persistent question keeps arising: What happens when things go wrong? When patients are injured and AI is involved, who will be liable, and how? Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, hospitals that implement AI tools for physician use, and developers who create those tools in the first place. If physicians are shielded from liability (typically medical malpractice liability) when they use AI tools, even if patient injury results, they are more likely …