Articles 1 - 8 of 8
Full-Text Articles in Law
Critical Data Theory, Margaret Hu
William & Mary Law Review
Critical Data Theory examines the role of AI and algorithmic decisionmaking at its intersection with the law. This theory aims to deconstruct the impact of AI in law and policy contexts. The tools of AI and automated systems allow for legal, scientific, socioeconomic, and political hierarchies of power that can profitably be interrogated with critical theory. While the broader umbrella of critical theory features prominently in the work of surveillance scholars, legal scholars can also deploy criticality analyses to examine surveillance and privacy law challenges, particularly in an examination of how AI and other emerging technologies have been expanded in …
AI-Based Evidence In Criminal Trials?, Sabine Gless, Fredric I. Lederer, Thomas Weigend
Faculty Publications
Smart devices are increasingly the origin of critical criminal case data. The importance of such data, especially data generated when using modern automobiles, is likely to become even more important as increasingly complex methods of machine learning lead to AI-based evidence being autonomously generated by devices. This article reviews the admissibility of such evidence from both American and German perspectives. As a result of this comparative approach, the authors conclude that American evidence law could be improved by borrowing aspects of the expert testimony approaches used in Germany’s “inquisitorial” court system.
Either The Law Will Govern AI, Or AI Will Govern The Law, Margaret Hu
Popular Media
No abstract provided.
The Philosophy Of AI: Learning From History, Shaping Our Future. Hearing Before The Committee On Homeland Security And Government Affairs, Senate, One Hundred Eighteenth Congress, First Session, Margaret Hu
Congressional Testimony
No abstract provided.
Biden's Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion, Margaret Hu
Popular Media
No abstract provided.
The First Byte Rule: A Proposal For Liability Of Artificial Intelligences, Hilyard Nichols
William & Mary Business Law Review
Artificial intelligences (AIs) are a relatively new addition to human civilization. From delivery robots to board game champions, researchers and businesses have found a variety of ways to apply this new technology. As it continues to grow and become more prevalent, though, so do its interactions with society at large. This will create benefits for people through cheaper or better products and services. It also has the potential to create harm. AIs are not perfect, and as the range of AI uses grows, so will the range of potential harms. A mistake from an AI customer service bot could fraudulently …
On The Danger Of Not Understanding Technology, Fredric I. Lederer
Popular Media
No abstract provided.
From Negative To Positive Algorithm Rights, Cary Coglianese, Kat Hefter
William & Mary Bill of Rights Journal
We consider this issue here and suggest that the current calls for a negative right to be free from AI could very well transform over time into positive claims that demand the use of algorithmic tools by government officials. In Part I, we begin by sketching the current landscape surrounding the adoption of AI by government. That landscape is characterized by strong activist and scholarly voices expressing a pronounced aversion to the use of digital algorithms—and taking a decidedly negative rights tone. In Part II, we show that, although aversion to complex technology might be understandable, that aversion is neither …