Open Access. Powered by Scholars. Published by Universities.®

Computer Law Commons



Schulich School of Law, Dalhousie University


Artificial Intelligence

Articles 1 - 2 of 2

Full-Text Articles in Computer Law

Submission To Canadian Government Consultation On A Modern Copyright Framework For Ai And The Internet Of Things, Sean Flynn, Lucie Guibault, Christian Handke, Joan-Josep Vallbé, Michael Palmedo, Carys Craig, Michael Geist, Joao Pedro Quintais Jan 2021


Reports & Public Policy Documents

We are grateful for the opportunity to participate in the Canadian Government’s consultation on a modern copyright framework for AI and the Internet of Things. Below, we present some of our research findings relating to the importance of flexibility in copyright law to permit text and data mining (“TDM”). As the consultation paper recognizes, TDM is a critical element of artificial intelligence. Our research supports the adoption of a specific exception for uses of works in TDM to supplement Canada’s existing general fair dealing exception.

Empirical research shows that more publication of citable research takes place in countries with “open” …


Legal Risks Of Adversarial Machine Learning Research, Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert Jan 2020


Articles, Book Chapters, & Popular Press

Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. The problem of attacking ML systems is so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note warning that most ML classifiers are susceptible to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems. The US and EU likewise treat the security and safety of AI systems as a top priority.

Now, research on adversarial …