About
Explore adversarial attacks such as the Fast Gradient Sign Method (FGSM) and the defence strategies that counter them. Learn to secure AI models against manipulation and to understand vulnerabilities in CNNs and LLMs.
After completing this Pathway, you will be able to:
- Implement the Fast Gradient Sign Method (FGSM) to generate adversarial examples that mislead target models
- Assess the robustness of different neural architectures (CNNs, LLMs) against common evasion attacks
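To give a flavour of the first objective, here is a minimal sketch of FGSM applied to a toy logistic-regression model in NumPy. The model, weights, and `epsilon` value are all illustrative assumptions, not material from the Pathway itself; the only thing taken from the text is the FGSM rule x_adv = x + ε · sign(∇ₓ L).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """One FGSM step: x_adv = x + epsilon * sign(grad of loss wrt x).

    Toy logistic-regression model (illustrative only):
    p = sigmoid(w @ x + b), binary cross-entropy loss against label y.
    The gradient of that loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical model and input, chosen so the clean input is classified 1.
w = np.array([2.0, -3.0])
b = 0.0
x = np.array([0.5, -0.5])   # w @ x = 2.5 -> p > 0.5 -> predicted class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=1.0)
# x_adv differs from x by at most epsilon per feature, yet the model's
# prediction flips: w @ x_adv = -2.5 -> p < 0.5 -> predicted class 0.
```

The key property FGSM demonstrates is that a perturbation bounded by a small `epsilon` in the L∞ norm, aligned with the sign of the loss gradient, can be enough to flip a model's prediction.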