On Adversarial Robustness of Quantized Neural Networks Against Direct Attacks

Volume 9, Issue 6, Page No 30-46, 2024

Authors: Abhishek Shrestha, Jürgen Großmann

Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS, System Quality Center (SQC), Critical Systems Engineering, Berlin, 10589, Germany

a) Author to whom correspondence should be addressed. E-mail: abhishek.shrestha@fokus.fraunhofer.de

Adv. Sci. Technol. Eng. Syst. J. 9(6), 30-46 (2024); DOI: 10.25046/aj090604

Keywords: Deep neural networks, Quantization, Adversarial attacks


Deep Neural Networks (DNNs) are susceptible to synthetically generated samples, so-called adversarial examples. Adversarial examples induce misclassifications by optimizing a perturbation matched to the specific input. With the increasing use of deep learning on embedded devices and the resulting use of quantization techniques to compress deep neural networks, it is critical to investigate the adversarial vulnerability of quantized neural networks. In this paper, we perform an in-depth study of the adversarial robustness of quantized networks against direct attacks, where adversarial examples are both generated and applied on the same network. Our experiments show that quantization makes models resilient to the generation of adversarial examples, even for attacks that otherwise demonstrate a high success rate, indicating that it offers some degree of robustness against these attacks. Additionally, we open-source the Adversarial Neural Network Toolkit (ANNT) to support the replication of our results.
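
To make the notion of a direct attack concrete, the sketch below shows FGSM-style adversarial example generation in which the same network is both the source and the target of the attack. This is a minimal illustration in TensorFlow; the model, inputs, labels, and perturbation budget (epsilon) are hypothetical placeholders, not the paper's experimental setup.

    # Minimal FGSM "direct attack" sketch: the adversarial example is
    # generated on and applied to the same network. Model, inputs, and
    # epsilon are illustrative assumptions.
    import tensorflow as tf

    def fgsm_direct_attack(model, x, y, epsilon=0.03):
        """Perturb inputs x (with integer labels y) against model itself."""
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            logits = model(x)  # model is assumed to output raw logits
            loss = tf.keras.losses.sparse_categorical_crossentropy(
                y, logits, from_logits=True)
        grad = tape.gradient(loss, x)
        # One signed-gradient step, clipped back to the valid input range.
        return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)

    # A direct attack "succeeds" on a sample when the same model now
    # misclassifies the perturbed input:
    # x_adv = fgsm_direct_attack(model, x_batch, y_batch)
    # fooled = tf.argmax(model(x_adv), axis=1) != y_batch

Note that fully quantized, integer-only deployments (e.g. TensorFlow Lite models) do not expose gradients, so gradient-based attacks on quantized networks are in practice computed through a differentiable forward pass, such as a fake-quantized version of the model; the details depend on the quantization scheme used.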

Received: 14 October 2024 | Revised: 10 December 2024 | Accepted: 11 December 2024 | Online: 23 December 2024

