Open Access. Powered by Scholars. Published by Universities.®

Research Collection School Of Computing and Information Systems

Articles 1 - 2 of 2

Full-Text Articles in Physical Sciences and Mathematics

Differential Privacy Protection Over Deep Learning: An Investigation Of Its Impacted Factors, Ying Lin, Ling-Yan Bao, Ze-Minghui Li, Shu-Sheng Si, Chao-Hsien Chu Dec 2020

Deep learning (DL) has been widely applied and has achieved promising results in many fields, but various privacy concerns remain. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the DP stochastic gradient descent (DP-SGD) method, which has been used by several algorithms and systems and achieves good privacy protection. However, several factors, such as the order in which noise is added and the model used, may affect its performance to varying degrees. We empirically show that adding noise first and clipping second will not only …
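The factor the abstract highlights is the ordering of clipping and noise addition. The standard DP-SGD step (clip each per-example gradient first, then add Gaussian noise to the average) can be sketched as follows; this is a generic illustration of the baseline method, not the paper's experimental code, and the function name and signature are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update in the standard clip-then-noise order.

    Each per-example gradient is rescaled so its L2 norm is at most
    clip_norm; the clipped gradients are averaged, and Gaussian noise
    with standard deviation noise_multiplier * clip_norm / batch_size
    is added to the average.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the norm exceeds the clipping threshold.
        clipped.append(g / max(1.0, norm / clip_norm))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return mean_grad + noise
```

Reversing the two operations (noising first, then clipping), as the paper investigates, changes both the privacy analysis and the utility of the resulting model.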


Secure And Verifiable Inference In Deep Neural Networks, Guowen Xu, Hongwei Li, Hao Ren, Jianfei Sun, Shengmin Xu, Jianting Ning, Haoming Yang, Kan Yang, Robert H. Deng Dec 2020

Outsourced inference services have greatly promoted the popularity of deep learning and helped users customize a range of personalized applications. However, they also entail a variety of security and privacy issues introduced by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL to protect the model's integrity and the user's privacy in the Deep Neural Network (DNN) inference process. In SecureDL, we first transform the complicated non-linear activation functions of DNNs into low-degree …
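The abstract is truncated, but replacing non-linear activations with low-degree polynomials is a common step in privacy-preserving inference, since cryptographic primitives such as homomorphic encryption typically support only additions and multiplications. A minimal sketch of the idea, using a least-squares quadratic fit to ReLU (a generic illustration, not SecureDL's actual construction):

```python
import numpy as np

# Approximate ReLU on [-1, 1] with a degree-2 polynomial, so the
# activation becomes a sum of additions and multiplications that
# crypto-friendly inference protocols can evaluate.
xs = np.linspace(-1.0, 1.0, 201)
relu = np.maximum(xs, 0.0)

# Least-squares fit: coeffs has 3 entries (degree 2, 1, 0).
coeffs = np.polyfit(xs, relu, deg=2)
approx = np.polyval(coeffs, xs)

# Worst-case gap between ReLU and its polynomial surrogate on [-1, 1].
max_err = np.max(np.abs(approx - relu))
```

The quality of such approximations on the input range seen by each layer is what determines how much accuracy the transformed network loses relative to the original.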