Open Access. Powered by Scholars. Published by Universities.®

Computer Law Commons

Articles 1 - 4 of 4

Full-Text Articles in Computer Law

Submission To Canadian Government Consultation On A Modern Copyright Framework For AI And The Internet Of Things, Sean Flynn, Lucie Guibault, Christian Handke, Joan-Josep Vallbé, Michael Palmedo, Carys Craig, Michael Geist, Joao Pedro Quintais Jan 2021

Reports & Public Policy Documents

We are grateful for the opportunity to participate in the Canadian Government’s consultation on a modern copyright framework for AI and the Internet of Things. Below, we present some of our research findings relating to the importance of flexibility in copyright law to permit text and data mining (“TDM”). As the consultation paper recognizes, TDM is a critical element of artificial intelligence. Our research supports the adoption of a specific exception for uses of works in TDM to supplement Canada’s existing general fair dealing exception.

Empirical research shows that more publication of citable research takes place in countries with “open” …


Legal Risks Of Adversarial Machine Learning Research, Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert Jan 2020

Articles, Book Chapters, & Popular Press

Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. The problem of attacking ML systems is so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note on how most ML classifiers are vulnerable to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems. The US and EU likewise treat the security and safety of AI systems as a top priority.

Now, research on adversarial …
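For readers outside computer security, a small illustration may help make "adversarial manipulation" concrete. The sketch below perturbs the input to a toy logistic-regression classifier by stepping each feature in the direction of the loss gradient (a gradient-sign, FGSM-style attack); the model, data, and epsilon value are illustrative assumptions, not examples taken from the paper.

```python
import numpy as np

# Toy "classifier": a logistic regression with assumed, pretrained weights.
rng = np.random.default_rng(0)
w = rng.normal(size=64)             # weight vector for a 64-feature input
b = 0.1
x = rng.uniform(0.0, 1.0, size=64)  # a benign input
y = 1.0                             # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Model's confidence that x belongs to class 1."""
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the INPUT (not the weights):
# for logistic regression, dL/dx = (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# Gradient-sign perturbation: move every feature a small step epsilon in the
# direction that increases the loss, then clip back to the valid input range.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("confidence on clean input:    ", predict(x))
print("confidence on perturbed input:", predict(x_adv))
```

The same principle, applied to image classifiers with millions of parameters, is what lets small, often imperceptible input changes flip a model's prediction.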


Politics Of Adversarial Machine Learning, Kendra Albert, Jonathon Penney, Bruce Schneier, Ram Shankar Siva Kumar Jan 2020

Articles, Book Chapters, & Popular Press

In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference …
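To give one of the attacks named above a concrete shape, the sketch below mounts a simple confidence-threshold membership-inference attack against a deliberately overfit toy model: because the model is more confident on records it was trained on, an observer can guess whether a given record was in the training set. The data, model, and threshold are assumptions for illustration only, not the paper's examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Small, high-dimensional training set with weak regularization -> overfitting.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 20))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=40) > 0).astype(int)
X_held_out = rng.normal(size=(40, 20))          # records never seen in training
y_held_out = (X_held_out[:, 0] + 0.3 * rng.normal(size=40) > 0).astype(int)

model = LogisticRegression(C=100.0, max_iter=5000).fit(X_train, y_train)

def confidence_on_true_label(X, y):
    """The model's predicted probability for each record's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Membership inference by thresholding: records the model is very confident
# about are guessed to have been part of the training data.
threshold = 0.8
flagged_train = confidence_on_true_label(X_train, y_train) > threshold
flagged_out = confidence_on_true_label(X_held_out, y_held_out) > threshold

print("fraction flagged as members (actual training records):", flagged_train.mean())
print("fraction flagged as members (held-out records):       ", flagged_out.mean())
```

A large gap between the two printed fractions means membership in the training data leaks through the model's behavior, which is why such attacks raise the privacy and civil-liberties concerns the paper discusses.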


Ethical Testing In The Real World: Evaluating Physical Testing Of Adversarial Machine Learning, Kendra Albert, Maggie Delano, Jonathon Penney, Afsaneh Rigot, Ram Shankar Siva Kumar Jan 2020

Articles, Book Chapters, & Popular Press

This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical domain testing …