Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Computer Sciences

2023

Adversarial attacks

Articles 1 - 2 of 2

Full-Text Articles in Engineering

RobustEmbed: Robust Sentence Embeddings Using Self-Supervised Contrastive Pre-Training, Javad Asl, Eduardo Blanco, Daniel Takabi Jan 2023

School of Cybersecurity Faculty Publications

Pre-trained language models (PLMs) have demonstrated their exceptional performance across a wide range of natural language processing tasks. The utilization of PLM-based sentence embeddings enables the generation of contextual representations that capture rich semantic information. However, despite their success with unseen samples, current PLM-based representations suffer from poor robustness in adversarial scenarios. In this paper, we propose RobustEmbed, a self-supervised sentence embedding framework that enhances both generalization and robustness in various text representation tasks and against diverse adversarial attacks. By generating high-risk adversarial perturbations to promote higher invariance in the embedding space and leveraging the perturbation within a novel contrastive …
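The idea sketched in the abstract — perturb an embedding adversarially, then treat the perturbed view as the positive in a contrastive objective — can be illustrated with a minimal, dependency-free sketch. The function names (`fgsm_perturb`, `info_nce`) and all numeric settings here are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fgsm_perturb(emb, grad, eps=0.05):
    # FGSM-style step in embedding space: move each coordinate by
    # eps in the direction of the gradient's sign (hypothetical stand-in
    # for the paper's "high-risk adversarial perturbations")
    return [e + eps * (1.0 if g >= 0 else -1.0) for e, g in zip(emb, grad)]

def info_nce(anchors, positives, temperature=0.05):
    # InfoNCE-style contrastive loss: anchor i should match its own
    # perturbed view (positives[i]); the other perturbed views act as
    # in-batch negatives
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [math.exp(cosine(a, p) / temperature) for p in positives]
        loss += -math.log(sims[i] / sum(sims))
    return loss / len(anchors)

random.seed(0)
anchors = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
grads = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
positives = [fgsm_perturb(e, g) for e, g in zip(anchors, grads)]
loss = info_nce(anchors, positives)
```

Minimizing such a loss pushes each embedding toward its adversarially perturbed view and away from other examples, which is one way to read the abstract's goal of "higher invariance in the embedding space."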


Defending AI-Based Automatic Modulation Recognition Models Against Adversarial Attacks, Haolin Tang, Ferhat Ozgur Catak, Murat Kuzlu, Evren Catak, Yanxiao Zhao Jan 2023

Engineering Technology Faculty Publications

Automatic Modulation Recognition (AMR) is one of the critical steps in the signal processing chain of wireless networks, which can significantly improve communication performance. AMR detects the modulation scheme of the received signal without any prior information. Recently, many Artificial Intelligence (AI) based AMR methods have been proposed, inspired by the considerable progress of AI methods in various fields. On the one hand, AI-based AMR methods can outperform traditional methods in terms of accuracy and efficiency. On the other hand, they are susceptible to new types of cyberattacks, such as model poisoning or adversarial attacks. This paper explores the vulnerabilities …
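The kind of adversarial attack the abstract refers to can be shown on a toy model. The sketch below uses a hypothetical linear "classifier" over a few received-signal samples: for a linear score, the input gradient is simply the weight vector, so an FGSM-style step against the gradient's sign flips the predicted modulation class with a small perturbation. All names and values are illustrative, not from the paper:

```python
def predict(weights, signal):
    # toy linear AMR score: positive -> class A (say, BPSK),
    # negative -> class B (say, QPSK)
    return sum(w * s for w, s in zip(weights, signal))

def fgsm_attack(signal, weights, eps):
    # for a linear model the gradient of the score w.r.t. the input
    # equals `weights`; stepping against its sign drives the score
    # toward the opposite class (FGSM)
    return [s - eps * (1.0 if w >= 0 else -1.0)
            for s, w in zip(signal, weights)]

weights = [0.5, -0.3, 0.8]          # hypothetical trained weights
signal = [0.2, 0.1, 0.1]            # clean received samples, class A
adv = fgsm_attack(signal, weights, eps=0.1)

clean_score = predict(weights, signal)   # > 0: class A
adv_score = predict(weights, adv)        # < 0: misclassified as class B
```

Even this tiny example shows why AI-based AMR is fragile: a perturbation of magnitude 0.1 per sample, far below typical channel noise, is enough to change the decision.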