Open Access. Powered by Scholars. Published by Universities.®

Computational Engineering Commons

Articles 1 - 1 of 1

Full-Text Articles in Computational Engineering

Bias And Fairness Of Evasion Attacks In Image Perturbation, Sichong Qin Jan 2021

All Master's Theses

Adversarial attack methods play a key role in protecting the privacy of personal images. They guard against unauthorized use of personal images by adding a small amount of perturbation, otherwise known as "noise", to input images. The Fawkes Clean Attack method is one such adversarial machine learning approach, aimed at protecting personal privacy against the abuse of personal images by unauthorized AI systems. By leveraging the Fawkes Evasion Attack method and running additional experiments against the Fawkes system, we were able to show that the effectiveness of …
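
The mechanism described above, adding a bounded perturbation to an image so that unauthorized recognition systems are hindered, can be sketched minimally as follows. This is an illustrative toy example using random noise on a NumPy array, not the Fawkes algorithm itself, which computes its perturbation by optimizing against a facial feature extractor.

import numpy as np

def add_bounded_perturbation(image, epsilon=8.0, seed=None):
    """Add a small, bounded random perturbation ("noise") to an image.

    Illustrative only: cloaking systems such as Fawkes derive the
    perturbation from an optimization procedure, not random sampling.
    """
    rng = np.random.default_rng(seed)
    # Uniform noise bounded by epsilon, in pixel-intensity units.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Keep the perturbed image inside the valid [0, 255] pixel range.
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# Example: perturb a synthetic 64x64 RGB image.
original = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = add_bounded_perturbation(original, epsilon=8.0, seed=0)
print(np.abs(cloaked.astype(int) - original.astype(int)).max())  # at most 8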