Keyword
- Artificial intelligence (7)
- AI (3)
- Liability (3)
- Medical AI (3)
- Machine learning (2)
- Malpractice (2)
- Medical artificial intelligence (AI) (2)
- Medical device regulation (2)
- Regulation (2)
- A-Hohfeld Language (1)
- AI Governance (1)
- AI Liability insurance (1)
- AI bias and inequality (1)
- AI in finance (1)
- AI in healthcare (1)
- Adoption of AI in medical practice (1)
- Algorithmic trading (1)
- Artificial Intelligence (1)
- Artificial intelligence (AI) (1)
- Artificial intelligence and emerging technology (1)
- Autonomous trading agent (1)
- Clinical practice (1)
- Data discrimination (1)
- Drug regulators (1)
- Enterprise liability (1)
- Ethics (1)
- Financial benchmark manipulation (1)
- Food and Drug Administration (FDA) (1)
- Health policy (1)
- Healthcare (1)
Articles 1–14 of 14
Full-Text Articles in Law
Locating Liability For Medical AI, W. Nicholson Price II, I. Glenn Cohen
Articles
When medical AI systems fail, who should be responsible, and how? We argue that various features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for the safe and effective use of medical AI. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed …
Humans In The Loop, Nicholson Price II, Rebecca Crootof, Margot Kaminski
Articles
From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions that affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse.
First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into the decision-making process. Regardless …
Open-Source Clinical Machine Learning Models: Critical Appraisal Of Feasibility, Advantages, And Challenges, Keerthi B. Harish, W. Nicholson Price II, Yindalon Aphinyanaphongs
Articles
Machine learning applications promise to augment clinical capabilities and at least 64 models have already been approved by the US Food and Drug Administration. These tools are developed, shared, and used in an environment in which regulations and market forces remain immature. An important consideration when evaluating this environment is the introduction of open-source solutions in which innovations are freely shared; such solutions have long been a facet of digital culture. We discuss the feasibility and implications of open-source machine learning in a health care infrastructure built upon proprietary information. The decreased cost of development as compared to drugs and …
AI Insurance: How Liability Insurance Can Drive The Responsible Adoption Of Artificial Intelligence In Health Care, Ariel Dora Stern, Avi Goldfarb, Timo Minssen, W. Nicholson Price II
Articles
Despite enthusiasm about the potential to apply artificial intelligence (AI) to medicine and health care delivery, adoption remains tepid, even for the most compelling technologies. In this article, the authors focus on one set of challenges to AI adoption: those related to liability. Well-designed AI liability insurance can mitigate predictable liability risks and uncertainties in a way that is aligned with the interests of health care’s main stakeholders, including patients, physicians, and health care organization leadership. A market for AI insurance will encourage the use of high-quality AI, because insurers will be most keen to underwrite those products that are …
Volume Introduction, I. Glenn Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson, Carmel Shachar
Other Publications
Medical devices have historically been less regulated than their drug and biologic counterparts. A benefit of this less demanding regulatory regime is facilitating innovation by making new devices available to consumers in a timely fashion. Nevertheless, there is increasing concern that this approach raises serious public health and safety concerns. The Institute of Medicine in 2011 published a critique of the American pathway allowing moderate-risk devices to be brought to the market through the less-rigorous 510(k) pathway, flagging a need for increased postmarket review and surveillance. High-profile recalls of medical devices, such as vaginal mesh products, along with reports globally …
Part I - AI And Data As Medical Devices, W. Nicholson Price II
Other Publications
It may seem counterintuitive to open a book on medical devices with chapters on software and data, but these are the frontiers of new medical device regulation and law. Physical devices are still crucial to medicine, but they – and medical practice as a whole – are embedded in and permeated by networks of software and caches of data. Those software systems are often mind-bogglingly complex and largely inscrutable, involving artificial intelligence and machine learning. Ensuring that such software works effectively and safely remains a substantial challenge for regulators and policymakers. Each of the three chapters in this part examines …
Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen
Law & Economics Working Papers
While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.
Exclusion Cycles: Reinforcing Disparities In Medicine, Ana Bracic, Shawneequa L. Callier, Nicholson Price
Articles
Minoritized populations face exclusion across contexts from politics to welfare to medicine. In medicine, exclusion manifests in substantial disparities in practice and in outcome. While these disparities arise from many sources, the interaction among institutions, dominant-group behaviors, and minoritized responses shapes the overall pattern and is key to improving it. We apply the theory of exclusion cycles to medical practice, the collection of medical big data, and the development of artificial intelligence in medicine. These cycles are both self-reinforcing and other-reinforcing, leading to dismayingly persistent exclusion. The interactions between such cycles offer lessons and prescriptions for effective policy.
The Promise And Limits Of Lawfulness: Inequality, Law, And The Techlash, Salomé Viljoen
Articles
In response to widespread skepticism about the recent rise of “tech ethics”, many critics have called for legal reform instead. In contrast with the “ethics response”, critics consider the “lawfulness response” more capable of disciplining the excesses of the technology industry. In fact, both are simultaneously vulnerable to industry capture and capable of advancing a more democratic egalitarian agenda for the information economy. Both ethics and law offer a terrain of contestation, rather than a predetermined set of commitments by which to achieve more democratic and egalitarian technological production. In advancing this argument, the essay focuses on two misunderstandings common …
How Much Can Potential Jurors Tell Us About Liability For Medical Artificial Intelligence?, W. Nicholson Price II, Sara Gerke, I. Glenn Cohen
Articles
Artificial intelligence (AI) is rapidly entering medical practice, whether for risk prediction, diagnosis, or treatment recommendation. But a persistent question keeps arising: What happens when things go wrong? When patients are injured, and AI was involved, who will be liable and how? Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, hospitals that implement AI tools for physician use, and developers who create those tools in the first place. If physicians are shielded from liability (typically medical malpractice liability) when they use AI tools, even if patient injury results, they are more likely …
An Agent-Based Model Of Financial Benchmark Manipulation, Gabriel Virgil Rauterberg, Megan Shearer, Michael Wellman
Articles
Financial benchmarks estimate market values or reference rates used in a wide variety of contexts, but are often calculated from data generated by parties who have incentives to manipulate these benchmarks. Since the London Interbank Offered Rate (LIBOR) scandal in 2011, market participants, scholars, and regulators have scrutinized financial benchmarks and the ability of traders to manipulate them. We study the impact on market quality and microstructure of manipulating transaction-based benchmarks in a simulated market environment. Our market consists of a single benchmark manipulator with external holdings dependent on the benchmark, and numerous background traders unaffected by the benchmark. …
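The abstract does not specify the model's mechanics, but the core intuition of transaction-based benchmark manipulation can be sketched in a few lines. The details below are assumptions for illustration only: a VWAP-style benchmark, a fundamental value of 100, and hypothetical trade sizes stand in for the paper's actual simulation.

```python
import random

random.seed(7)

def benchmark(trades):
    """Transaction-based benchmark: volume-weighted average price (VWAP)."""
    total_qty = sum(q for _, q in trades)
    return sum(p * q for p, q in trades) / total_qty

# Background traders transact near an assumed fundamental value of 100
# and are unaffected by the benchmark.
background = [(100 + random.gauss(0, 1), random.randint(1, 10)) for _ in range(50)]

# A manipulator whose external holdings pay off with the benchmark
# buys at inflated prices to drag the VWAP upward.
manipulator = [(105.0, 20) for _ in range(5)]

honest = benchmark(background)
manipulated = benchmark(background + manipulator)

print(f"benchmark without manipulator: {honest:.2f}")
print(f"benchmark with manipulator:    {manipulated:.2f}")
```

Because the benchmark averages over all transactions, even a small number of off-market trades shifts it, which is exactly why a trader with benchmark-linked external holdings may profit from losing money on the trades themselves.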
Risks And Remedies For Artificial Intelligence In Healthcare, W. Nicholson Price II
Other Publications
Artificial intelligence (AI) is rapidly entering health care and serving major roles, from automating drudgery and routine tasks in medical practice to managing patients and medical resources. As developers create AI systems to take on these tasks, several risks and challenges emerge, including the risk of injuries to patients from AI system errors, the risk to patient privacy of data acquisition and AI inference, and more. Potential solutions are complex but involve investment in infrastructure for high-quality, representative data; collaborative oversight by both the Food and Drug Administration and other health-care actors; and changes to medical education that will prepare …
A-Hohfeld: A Language For Robust Structural Representation Of Knowledge In The Legal Domain To Build Interpretation-Assistance Expert Systems, Layman E. Allen, Charles S. Saxon
Book Chapters
The A-Hohfeld language is presented as a set of definitions; it can be used to precisely express legal norms. The usefulness of the A-Hohfeld language is illustrated in articulating 2560 alternative structural interpretations of the four-sentence 1982 Library Regulations of Imperial College and constructing an interpretation-assistance legal expert system for these regulations by means of the general-purpose Interpretation-Assistance legal expert system builder called MINT. The logical basis for A-Hohfeld is included as an appendix.
Computer-Aided Normalizing And Unpacking: Some Interesting Machine-Processable Transformations Of Legal Rules, Layman E. Allen, Charles S. Saxon
Book Chapters
One way of dealing with an important aspect of the natural language barrier that researchers in artificial intelligence have been wrestling with for more than two decades is to normalize the expression of the logical structure of legal rules.
The computer program, NORMALIZER, will enable a legal analyst to automatically generate Normalized Versions of legal rules and Outlines of them from Parenthesized Logical Expressions of their structure and Marked Versions of the Original Text of the rules. In brief:
Parenthesized Logical Expression & Marked Version ==> Outline & Normalized Version.
The Parenthesized Logical Expression of a normalized rule is …
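The excerpt breaks off before NORMALIZER's actual input format is defined, but the idea of turning a parenthesized logical expression into an outline can be miniaturized. The encoding below (nested tuples of a connective and its operands, with a made-up library rule) is a hypothetical stand-in, not the program's real Parenthesized Logical Expression syntax.

```python
def outline(expr, depth=0):
    """Render a nested (connective, operand, ...) expression as an indented outline."""
    indent = "  " * depth
    if isinstance(expr, str):
        # A leaf: an atomic clause of the rule.
        return [indent + expr]
    connective, *operands = expr
    lines = [indent + connective.upper()]
    for operand in operands:
        lines.extend(outline(operand, depth + 1))
    return lines

# Hypothetical normalized rule: IF (the borrower is registered AND
# the book is on open shelves) THEN the borrower may remove the book.
rule = ("if",
        ("and",
         "the borrower is registered",
         "the book is on open shelves"),
        "the borrower may remove the book")

print("\n".join(outline(rule)))
```

The outline makes each scope of each connective visible at a glance, which is the point of normalizing: alternative parenthesizations of the same original text yield visibly different outlines, and thus distinguishable structural interpretations.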