Articles 1 - 27 of 27
Full-Text Articles in Law
Creating Data From Unstructured Text With Context Rule Assisted Machine Learning (CRAML), Stephen Meisenbacher, Peter Norlander
School of Business: Faculty Publications and Other Works
Popular approaches to building data from unstructured text come with limitations, such as scalability, interpretability, replicability, and real-world applicability. These can be overcome with Context Rule Assisted Machine Learning (CRAML), a method and no-code suite of software tools that builds structured, labeled datasets which are accurate and reproducible. CRAML enables domain experts to access uncommon constructs within a document corpus in a low-resource, transparent, and flexible manner. CRAML produces document-level datasets for quantitative research and makes qualitative classification schemes scalable over large volumes of text. We demonstrate that the method is useful for bibliographic analysis, transparent analysis of proprietary data, …
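The keyword-plus-context idea summarized above can be sketched in a few lines. The rule format, field names, and `label_chunk` function below are our own illustration of context-rule labeling, not CRAML's actual rule language or tooling:

```python
import re

# Hypothetical context rules: a label applies only when a keyword match
# is accompanied by supporting context in a surrounding text window.
RULES = [
    {"label": "noncompete", "keyword": r"non-?compete",
     "context": r"agree|clause|covenant"},
    {"label": "training", "keyword": r"training",
     "context": r"provide|paid|tuition"},
]

def label_chunk(text, window=60):
    """Return labels whose keyword matches with supporting context nearby."""
    labels = set()
    for rule in RULES:
        for m in re.finditer(rule["keyword"], text, re.IGNORECASE):
            lo, hi = max(0, m.start() - window), m.end() + window
            if re.search(rule["context"], text[lo:hi], re.IGNORECASE):
                labels.add(rule["label"])
    return sorted(labels)

doc = "Employee agrees to a non-compete clause; paid training is provided."
print(label_chunk(doc))  # → ['noncompete', 'training']
```

A real pipeline of this kind would apply such rules over chunks of a large corpus and emit document-level labeled records for quantitative analysis; the transparency comes from the rules being human-readable.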
Forks Over Knives: Predictive Inconsistency In Criminal Justice Algorithmic Risk Assessment Tools, Travis Greene, Galit Shmueli, Jan Fell, Ching-Fu Lin, Han-Wei Liu
Research Collection Yong Pung How School Of Law
Big data and algorithmic risk prediction tools promise to improve criminal justice systems by reducing human biases and inconsistencies in decision-making. Yet different, equally justifiable choices when developing, testing and deploying these socio-technical tools can lead to disparate predicted risk scores for the same individual. Synthesising diverse perspectives from machine learning, statistics, sociology, criminology, law, philosophy and economics, we conceptualise this phenomenon as predictive inconsistency. We describe sources of predictive inconsistency at different stages of algorithmic risk assessment tool development and deployment and consider how future technological developments may amplify predictive inconsistency. We argue, however, that in a diverse and …
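Predictive inconsistency can be shown in miniature: the two toy scoring rules below (entirely hypothetical, not drawn from any deployed tool) classify a small "training set" identically, so both choices look equally justifiable, yet they assign opposite risk labels to a new individual:

```python
# Two toy risk scores over features (prior_offenses, age).
# Both achieve perfect accuracy on the toy data, yet disagree
# on a new case — "predictive inconsistency" in miniature.
def score_a(priors, age):
    return 1 if priors >= 2 else 0

def score_b(priors, age):
    return 1 if age < 25 else 0

train = [((3, 20), 1), ((2, 22), 1), ((0, 40), 0), ((1, 50), 0)]
acc_a = sum(score_a(*x) == y for x, y in train) / len(train)
acc_b = sum(score_b(*x) == y for x, y in train) / len(train)

new_person = (1, 23)  # one prior offense, age 23
print(acc_a, acc_b)                                # both 1.0 here
print(score_a(*new_person), score_b(*new_person))  # 0 vs 1: inconsistent
```

The same divergence arises in practice from defensible but different choices of features, thresholds, training samples, or random seeds.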
Open-Source Clinical Machine Learning Models: Critical Appraisal Of Feasibility, Advantages, And Challenges, Keerthi B. Harish, W. Nicholson Price II, Yindalon Aphinyanaphongs
Articles
Machine learning applications promise to augment clinical capabilities and at least 64 models have already been approved by the US Food and Drug Administration. These tools are developed, shared, and used in an environment in which regulations and market forces remain immature. An important consideration when evaluating this environment is the introduction of open-source solutions in which innovations are freely shared; such solutions have long been a facet of digital culture. We discuss the feasibility and implications of open-source machine learning in a health care infrastructure built upon proprietary information. The decreased cost of development as compared to drugs and …
LawBreaker: An Approach For Specifying Traffic Laws And Fuzzing Autonomous Vehicles, Yang Sun, Christopher M. Poskitt, Jun Sun, Yuqi Chen, Zijiang Yang
Research Collection School Of Computing and Information Systems
Autonomous driving systems (ADSs) must be tested thoroughly before they can be deployed in autonomous vehicles. High-fidelity simulators allow them to be tested against diverse scenarios, including those that are difficult to recreate in real-world testing grounds. While previous approaches have shown that test cases can be generated automatically, they tend to focus on weak oracles (e.g. reaching the destination without collisions) without assessing whether the journey itself was undertaken safely and satisfied the law. In this work, we propose LawBreaker, an automated framework for testing ADSs against real-world traffic laws, which is designed to be compatible with different scenario …
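The core idea of testing against a specified law rather than a weak oracle can be illustrated with a toy sketch. The trace format, the red-light rule, and the random fuzzer below are our own simplifications, not LawBreaker's specification language or test generator:

```python
import random

# Toy driving trace: a list of per-timestep (speed_mps, light) observations.
def violates_red_light(trace):
    """A toy 'law': the vehicle must be stopped whenever the light is red."""
    return any(speed > 0.1 and light == "red" for speed, light in trace)

def random_scenario(rng, steps=10):
    """Fuzz a scenario: random speeds and traffic-light phases."""
    return [(rng.uniform(0, 15), rng.choice(["red", "green"]))
            for _ in range(steps)]

rng = random.Random(0)
violations = sum(violates_red_light(random_scenario(rng)) for _ in range(100))
print(f"{violations}/100 random scenarios violate the red-light rule")
```

A realistic framework instead evaluates rich temporal-logic-style law specifications over high-fidelity simulator traces and guides the fuzzer toward scenarios that maximize violation coverage.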
User Guided Abductive Proof Generation For Answer Set Programming Queries, Avishkar Mahajan, Martin Strecker, Meng Weng (Huang Mingrong) Wong
Research Collection Yong Pung How School Of Law
We present a method for generating possible proofs of a query with respect to a given Answer Set Programming (ASP) rule set using an abductive process where the space of abducibles is automatically constructed just from the input rules alone. Given a (possibly empty) set of user provided facts, our method infers any additional facts that may be needed for the entailment of a query and then outputs these extra facts, without the user needing to explicitly specify the space of all abducibles. We also present a method to generate a set of directed edges corresponding to the justification graph …
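Abduction of this general flavor can be sketched with a toy forward-chaining search. The rule encoding and the `abduce` helper below are our own minimal illustration, not the paper's ASP-based method; in particular, we pass in candidate abducibles explicitly, whereas the paper constructs that space automatically from the rules:

```python
from itertools import combinations

# Toy Horn rules: (head, body). Abduction: find the smallest sets of
# extra facts (from candidate "abducibles") that make the query derivable.
RULES = [
    ("eligible", {"resident", "adult"}),
    ("adult", {"over_18"}),
]

def derive(facts):
    """Forward-chain the rules to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def abduce(query, given, abducibles):
    """Smallest extra fact sets whose addition entails the query."""
    for k in range(len(abducibles) + 1):
        hits = [set(c) for c in combinations(abducibles, k)
                if query in derive(given | set(c))]
        if hits:
            return hits
    return []

print(abduce("eligible", {"resident"}, ["over_18", "adult"]))
```

Here either adding `over_18` (from which `adult` is derived) or adding `adult` directly suffices for the query, mirroring how an abductive proof generator surfaces alternative sets of missing facts.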
Data Vu: Why Breaches Involve The Same Stories Again And Again, Woodrow Hartzog, Daniel Solove
Shorter Faculty Works
In the classic comedy Groundhog Day, protagonist Phil, played by Bill Murray, asks “What would you do if you were stuck in one place and every day was exactly the same, and nothing that you did mattered?” In this movie, Phil is stuck reliving the same day over and over, where the events repeat in a continual loop, and nothing he does can stop them. Phil’s predicament sounds a lot like our cruel cycle with data breaches.
Every year, organizations suffer more data spills and attacks, with personal information being exposed and abused at alarming rates. While Phil …
Biometrics And An AI Bill Of Rights, Margaret Hu
Faculty Publications
This Article contends that an informed discussion on an AI Bill of Rights requires grappling with biometric data collection and its integration into emerging AI systems. Biometric AI systems serve a wide range of governmental purposes, including policing, border security and immigration enforcement, and biometric cyberintelligence and biometric-enabled warfare. These systems are increasingly categorized as "high-risk" when deployed in ways that may impact fundamental constitutional rights and human rights. There is growing recognition that high-risk biometric AI systems, such as facial recognition identification, can pose unprecedented challenges to criminal procedure rights. This Article concludes that a failure to recognize these …
Spotlight Report #6: Proffering Machine-Readable Personal Privacy Research Agreements: Pilot Project Findings For IEEE P7012 WG, Noreen Y. Whysel, Lisa Levasseur
Publications and Research
What if people had the ability to assert their own legally binding permissions for data collection, use, sharing, and retention by the technologies they use? The IEEE P7012 has been working on an interoperability specification for machine-readable personal privacy terms to support this ability since 2018. The premise behind the work of IEEE P7012 is that people need technology that works on their behalf—i.e. software agents that assert the individual’s permissions and preferences in a machine-readable format.
Thanks to a grant from the IEEE Technical Activities Board Committee on Standards (TAB CoS), we were able to explore the attitudes of …
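A machine-readable permissions record of the general kind motivating this work might look as follows. The field names and the compliance check are hypothetical illustrations of the idea, not the IEEE P7012 specification:

```python
import json

# A hypothetical machine-readable statement of one person's privacy terms
# (NOT the IEEE P7012 format — an illustration of the general idea).
my_terms = {
    "collect": ["email"],
    "share_with_third_parties": False,
    "retention_days": 30,
}

def request_allowed(terms, action, item=None):
    """Would a site's request comply with the person's asserted terms?"""
    if action == "collect":
        return item in terms["collect"]
    if action == "share":
        return terms["share_with_third_parties"]
    return False

print(request_allowed(my_terms, "collect", "email"))     # True
print(request_allowed(my_terms, "collect", "location"))  # False
print(json.dumps(my_terms))
```

The point of a standard format is that a software agent can assert and enforce such terms automatically, rather than the person clicking through each site's take-it-or-leave-it policy.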
Gauging The Acceptance Of Contact Tracing Technology: An Empirical Study Of Singapore Residents’ Concerns With Sharing Their Information And Willingness To Trust, Ee-Ing Ong, Wee Ling Loo
Research Collection Yong Pung How School Of Law
In response to the COVID-19 pandemic, governments began implementing various forms of contact tracing technology. Singapore’s implementation of its contact tracing technology, TraceTogether, however, was met with significant concern by its population, with regard to privacy and data security. This concern did not fit with the general perception that Singaporeans have a high level of trust in its government. We explore this disconnect, using responses to our survey (conducted pre-COVID-19) in which we asked participants about their level of concern with the government and business collecting certain categories of personal data. The results show that respondents had less concern with …
Problematic AI — When Should We Use It?, Fredric Lederer
Popular Media
No abstract provided.
The Executive’s Guide To Getting AI Wrong, Jerrold Soh
Asian Management Insights
It’s all math. Really.
Assessing Automated Administration, Cary Coglianese, Alicia Lai
All Faculty Scholarship
To fulfill their responsibilities, governments rely on administrators and employees who, simply because they are human, are prone to individual and group decision-making errors. These errors have at times produced both major tragedies and minor inefficiencies. One potential strategy for overcoming cognitive limitations and group fallibilities is to invest in artificial intelligence (AI) tools that allow for the automation of governmental tasks, thereby reducing reliance on human decision-making. Yet as much as AI tools show promise for improving public administration, automation itself can fail or can generate controversy. Public administrators face the question of when exactly they should use automation. …
AI Insurance: How Liability Insurance Can Drive The Responsible Adoption Of Artificial Intelligence In Health Care, Ariel Dora Stern, Avi Goldfarb, Timo Minssen, W. Nicholson Price II
Articles
Despite enthusiasm about the potential to apply artificial intelligence (AI) to medicine and health care delivery, adoption remains tepid, even for the most compelling technologies. In this article, the authors focus on one set of challenges to AI adoption: those related to liability. Well-designed AI liability insurance can mitigate predictable liability risks and uncertainties in a way that is aligned with the interests of health care’s main stakeholders, including patients, physicians, and health care organization leadership. A market for AI insurance will encourage the use of high-quality AI, because insurers will be most keen to underwrite those products that are …
Volume Introduction, I. Glenn Cohen, Timo Minssen, W. Nicholson Price II, Christopher Robertson, Carmel Shachar
Other Publications
Medical devices have historically been less regulated than their drug and biologic counterparts. A benefit of this less demanding regulatory regime is facilitating innovation by making new devices available to consumers in a timely fashion. Nevertheless, there is increasing concern that this approach raises serious public health and safety concerns. The Institute of Medicine in 2011 published a critique of the American pathway allowing moderate-risk devices to be brought to the market through the less-rigorous 510(k) pathway, flagging a need for increased postmarket review and surveillance. High-profile recalls of medical devices, such as vaginal mesh products, along with reports globally …
Moving Toward Personalized Law, Cary Coglianese
All Faculty Scholarship
Rules operate as a tool of governance by making generalizations, thereby cutting down on government officials’ need to make individual determinations. But because they are generalizations, rules can result in inefficient or perverse outcomes due to their over- and under-inclusiveness. With the aid of advances in machine-learning algorithms, however, it is becoming increasingly possible to imagine governments shifting away from a predominant reliance on general rules and instead moving toward increased reliance on precise individual determinations—or on “personalized law,” to use the term Omri Ben-Shahar and Ariel Porat use in the title of their 2021 book. Among the various technological, …
Trust In Robotics: A Multi-Staged Decision-Making Approach To Robots In Community, Wenxi Zhang, Willow Wong, Mark Findlay
Centre for AI & Data Governance
Pivoting on the desired outcome of social good within the wider robotics ecosystem, trust is identified as the central adhesive of the human-robot interaction (HRI) interface. However, building trust between humans and robots involves more than improving the machine’s technical reliability or trustworthiness in function. This paper presents a holistic, community-based approach to trust-building, where trust is understood as a multifaceted and multi-staged looped relation that depends heavily on context and human perceptions. Building on past literature that identifies dispositional and learned stages of trust, our proposed Decision to Trust model considers more extensively the human and situational factors influencing how trust …
Designing Respectful Tech: What Is Your Relationship With Technology?, Noreen Y. Whysel
Publications and Research
According to research at the Me2B Alliance, people feel they have a relationship with technology. It’s emotional. It’s embodied. And it’s very personal. We are studying digital relationships to answer questions like “Do people have a relationship with technology?” “What does that relationship feel like?” And “Do people understand the commitments that they are making when they explore, enter into and dissolve these relationships?” There are parallels between messy human relationships and the kinds of relationships that people develop with technology. As with human relationships, we move through states of discovery, commitment and breakup with digital applications as well. Technology …
Me2B Alliance Validation Testing Report: Consumer Perception Of Legal Policies In Digital Technology, Noreen Y. Whysel, Karina Alexanyan, Shaun Spaulting, Julia Little
Publications and Research
Our relationship with technology involves legal agreements that we either review or enter into when using a technology, namely privacy policies and terms of service or terms of use (“TOS/TOU”). We initiated this research to understand if providing a formal rating of the legal policies (privacy policies and TOS/TOUs) would be valuable to consumers (or Me-s). From our early qualitative discussions, we noticed that people were unclear on whether these policies were legally binding contracts or not. Thus, a secondary objective emerged to quantitatively explore whether people knew who these policies protected (if anyone), and if the policies were perceived …
Clinical Interactions In Electronic Medical Records Towards The Development Of A Token-Economy Model, Nicole Allison S. Co, Jason Limcaco, Hans Calvin L. Tan, Ma. Regina Justina E. Estuar, Christian E. Pulmano, Dennis Andrew Villamor, Quirino Sugon Jr, Maria Cristina G. Bautista, Paulyn Jean Acacio-Claro
Graduate School of Business Publications
The use of electronic medical records (EMRs) plays a crucial role in the successful implementation of the Universal Healthcare Law which promises quality and affordable healthcare to all Filipinos. Consequently, the current adoption of EMRs should be studied from the perspective of the healthcare provider. As most studies look into use of EMRs by doctors or patients, there are very few that extend studies to look at possible interaction of doctor and patient in the same EMR environment. Understanding this interaction paves the way for possible incentives that will increase the use and adoption of the EMR. This study uses …
Part I - AI And Data As Medical Devices, W. Nicholson Price II
Other Publications
It may seem counterintuitive to open a book on medical devices with chapters on software and data, but these are the frontiers of new medical device regulation and law. Physical devices are still crucial to medicine, but they – and medical practice as a whole – are embedded in and permeated by networks of software and caches of data. Those software systems are often mindbogglingly complex and largely inscrutable, involving artificial intelligence and machine learning. Ensuring that such software works effectively and safely remains a substantial challenge for regulators and policymakers. Each of the three chapters in this part examines …
Algorithm Vs. Algorithm, Cary Coglianese, Alicia Lai
All Faculty Scholarship
Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent. On an individual level, human decision-making suffers from memory limitations, fatigue, …
Liability For Use Of Artificial Intelligence In Medicine, W. Nicholson Price, Sara Gerke, I. Glenn Cohen
Law & Economics Working Papers
While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.
Law Library Blog (January 2022): Legal Beagle's Blog Archive, Roger Williams University School Of Law
Law Library Newsletters/Blog
No abstract provided.
From Negative To Positive Algorithm Rights, Cary Coglianese, Kat Hefter
All Faculty Scholarship
Artificial intelligence, or “AI,” is raising alarm bells. Advocates and scholars propose policies to constrain or even prohibit certain AI uses by governmental entities. These efforts to establish a negative right to be free from AI stem from an understandable motivation to protect the public from arbitrary, biased, or unjust applications of algorithms. This movement to enshrine protective rights follows a familiar pattern of suspicion that has accompanied the introduction of other technologies into governmental processes. Sometimes this initial suspicion of a new technology later transforms into widespread acceptance and even a demand for its use. In this paper, we …
Defining Smart Contract Defects On Ethereum, Jiachi Chen, Xin Xia, David Lo, John Grundy, Xiapu Luo, Ting Chen
Research Collection School Of Computing and Information Systems
Smart contracts are programs running on a blockchain. They cannot be changed once deployed, and hence cannot be patched for bugs. Thus it is critical to ensure they are bug-free and well-designed before deployment. A contract defect is an error, flaw, or fault in a smart contract that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The detection of contract defects is a method to avoid potential bugs and improve the design of existing code. Since smart contracts contain numerous distinctive features, such as the gas system and the decentralized execution environment, it is important …
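A pattern-based check for two well-known defect categories can be sketched as follows. This is illustrative only: the defect names and regexes are our own, and real detectors analyze the contract's AST and bytecode rather than grepping source text:

```python
import re

# Naive source-text checks for two widely discussed Solidity defects:
# authenticating with tx.origin, and calling send() without checking
# its return value (here approximated as "not wrapped in require(").
DEFECT_PATTERNS = {
    "tx.origin auth": r"tx\.origin",
    "unchecked send": r"(?<!require\()\b\w+\.send\(",
}

def scan(source):
    """Return the names of defect patterns found in the source text."""
    return [name for name, pat in DEFECT_PATTERNS.items()
            if re.search(pat, source)]

contract = """
function withdraw() public {
    if (tx.origin == owner) { msg.sender.send(balance); }
}
"""
print(scan(contract))  # → ['tx.origin auth', 'unchecked send']
```

The negative lookbehind lets `require(recipient.send(...))` pass while flagging a bare `recipient.send(...)`; a production tool would of course reason about control flow rather than surface syntax.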
Exclusion Cycles: Reinforcing Disparities In Medicine, Ana Bracic, Shawneequa L. Callier, Nicholson Price
Articles
Minoritized populations face exclusion across contexts from politics to welfare to medicine. In medicine, exclusion manifests in substantial disparities in practice and in outcome. While these disparities arise from many sources, the interaction between institutions, dominant-group behaviors, and minoritized responses shape the overall pattern and are key to improving it. We apply the theory of exclusion cycles to medical practice, the collection of medical big data, and the development of artificial intelligence in medicine. These cycles are both self-reinforcing and other-reinforcing, leading to dismayingly persistent exclusion. The interactions between such cycles offer lessons and prescriptions for effective policy.
Antitrust By Algorithm, Cary Coglianese, Alicia Lai
All Faculty Scholarship
Technological innovation is changing private markets around the world. New advances in digital technology have created new opportunities for subtle and evasive forms of anticompetitive behavior by private firms. But some of these same technological advances could also help antitrust regulators improve their performance in detecting and responding to unlawful private conduct. We foresee that the growing digital complexity of the marketplace will necessitate that antitrust authorities increasingly rely on machine-learning algorithms to oversee market behavior. In making this transition, authorities will need to meet several key institutional challenges—building organizational capacity, avoiding legal pitfalls, and establishing public trust—to ensure successful …