
Dissertations

2023

Adversarial Training


Full-Text Articles in Artificial Intelligence and Robotics

Fortifying Robustness: Unveiling The Intricacies Of Training And Inference Vulnerabilities In Centralized And Federated Neural Networks, Guanxiong Liu Aug 2023


Neural network (NN) classifiers have gained significant traction in diverse domains such as natural language processing, computer vision, and cybersecurity, owing to their remarkable ability to approximate complex latent distributions from data. Nevertheless, the conventional assumption of an attack-free operating environment has been challenged by the emergence of adversarial examples. These perturbed samples, typically imperceptible to human observers, can cause NN classifiers to misclassify. Moreover, recent studies have shown that poisoned training data can produce Trojan-backdoored classifiers, which misclassify any input containing a predefined trigger pattern.
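The adversarial examples described above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM) against a linear softmax classifier. This is an illustrative example, not the dissertation's method: the classifier, weights, and epsilon below are assumptions chosen to keep the code self-contained.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(W, b, x, y, epsilon=0.03):
    """One FGSM step: shift x by epsilon in the sign of the input-gradient
    of the cross-entropy loss, so the (small) perturbation raises the loss."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)  # d(cross-entropy)/dx for a linear model
    # Bounded perturbation, then clip back to the valid input range [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Toy 4-feature, 3-class classifier with random (hypothetical) weights.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x = rng.random(4)
x_adv = fgsm_perturb(W, b, x, y=2)
assert np.abs(x_adv - x).max() <= 0.03 + 1e-9  # perturbation stays epsilon-bounded
```

The key property, as the abstract notes, is that the perturbation is bounded (here by epsilon in the infinity norm) and thus typically imperceptible, while still moving the input against the gradient of the true-label loss.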

In recent years, significant research efforts have been …