Open Access. Powered by Scholars. Published by Universities.®

Law Commons


Articles 1 - 7 of 7

Full-Text Articles in Law

Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen Jan 2024

Articles

When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …


Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen Jan 2022

Law & Economics Working Papers

While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.


Assuming The Risks Of Artificial Intelligence, Amy L. Stein Jan 2022

UF Law Faculty Publications

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense …


Medical Device Artificial Intelligence: The New Tort Frontier, Charlotte A. Tschider Jan 2021

Faculty Publications & Other Works

The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the …


Data-Informed Duties In AI Development, Frank A. Pasquale Jan 2019

Faculty Scholarship

Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement generalist judges. Such regulation will not only provide guidance to …


When AIs Outperform Doctors: Confronting The Challenges Of A Tort-Induced Over-Reliance On Machine Learning, A. Michael Froomkin, Ian Kerr, Joelle Pineau Jan 2019

Articles

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and in the long run for the quality of medical diagnostics itself?

This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented …


Law's Halo And The Moral Machine, Bert I. Huang Jan 2019

Faculty Scholarship

How will we assess the morality of decisions made by artificial intelligence, and will our judgments be swayed by what the law says? Focusing on a moral dilemma in which a driverless car chooses to sacrifice its passenger to save more people, this study offers evidence that our moral intuitions can be influenced by the presence of the law.