Efficient Defense Against Adversarial Attacks

Authors

  • Amruta Mankawade, Assistant Professor, Department of Artificial Intelligence and Data Science Engineering, Vishwakarma Institute of Technology, Pune
  • Pavitha Nooji, Assistant Professor, Department of Artificial Intelligence, Faculty of Science and Technology, Vishwakarma University, Pune
  • Aditya Kulkarni, Student, Department of Artificial Intelligence and Data Science Engineering, Vishwakarma Institute of Technology, Pune
  • Shravani Dhamne, Student, Department of Artificial Intelligence and Data Science Engineering, Vishwakarma Institute of Technology, Pune
  • Raj Dharmale, Student, Department of Artificial Intelligence and Data Science Engineering, Vishwakarma Institute of Technology, Pune
  • Jayesh Chaudhari, Student, Department of Artificial Intelligence and Data Science Engineering, Vishwakarma Institute of Technology, Pune

Keywords

Attacks, Adversarial Training, Defenses, Neural Network

Abstract

Adversarial attacks are a significant vulnerability for deep learning models, particularly Convolutional Neural Networks (CNNs), which are widely employed in image classification and object detection. These attacks craft imperceptible perturbations to input data that mislead CNNs into making incorrect predictions, posing risks in critical areas such as autonomous driving, security, and healthcare. This paper focuses on understanding the nature of adversarial attacks on CNNs, including white-box attacks, where attackers have full knowledge of the model's parameters, and black-box attacks, where attackers have limited or no access to the model's architecture. Common attack techniques, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), are reviewed to illustrate how CNNs can be compromised. In response to these threats, we explore various defense mechanisms aimed at increasing CNN robustness. Adversarial training, which incorporates adversarial examples during the training process, is a prominent defense. Other approaches, such as input preprocessing, gradient obfuscation, and randomization techniques, are also discussed. This work emphasizes the trade-off between robustness and efficiency, examining how well each defense protects CNNs without significantly increasing computational cost.
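
To make the attack and defense families named in the abstract concrete, below is a minimal PyTorch sketch of FGSM, PGD, and a single adversarial-training step. This is an illustrative sketch, not code from the paper: the function names, the `epsilon`/`alpha`/`steps` values, and the generic `model`, `optimizer`, `x`, `y` objects are all assumptions for the example.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """FGSM: a single signed-gradient step of size epsilon in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    # Gradient w.r.t. the input only; model weights are left untouched.
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0)  # keep pixels in the valid image range


def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """PGD: iterated FGSM-style steps, each projected back into the
    L-infinity ball of radius epsilon around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the epsilon-ball, then onto the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv


def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training update: generate adversarial examples
    for the current batch, then train on them instead of the clean batch."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()  # discard gradients accumulated during the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Swapping `pgd_attack` for `fgsm_attack` in the training step yields the cheaper FGSM-based adversarial training, which illustrates the efficiency/robustness trade-off the abstract highlights: PGD-based training is typically more robust but adds roughly `steps` extra forward/backward passes per batch.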

Published

2024-09-09

How to Cite

Amruta Mankawade, Pavitha Nooji, Aditya Kulkarni, Shravani Dhamne, Raj Dharmale, & Jayesh Chaudhari. (2024). Efficient Defense Against Adversarial Attacks. Journal of Computational Analysis and Applications (JoCAAA), 33(08), 456–462. Retrieved from http://eudoxuspress.com/index.php/pub/article/view/1339

Issue

Vol. 33 No. 08 (2024)

Section

Articles
