Is our AI model resilient to evasion attacks?
Evasion attacks modify inputs at inference time so that the model fails to detect or correctly classify them. They can be used to bypass security systems such as intrusion detection systems or spam filters. Example: malware is specifically crafted to avoid being flagged by a machine-learning-based antivirus.
If you answered No, then you are at risk
If you are not sure, then you might be at risk too
Recommendations
- Develop anomaly detection systems to monitor deviations in input distributions and flag suspicious patterns (see the drift-detection sketch after this list).
- Integrate robust logging mechanisms to analyze and mitigate the impact of detected attacks.
- Train models with diverse and adversarial data, including known evasion techniques (see the adversarial-training sketch after this list).
- Implement ensemble modeling to reduce susceptibility to evasion attacks (see the ensemble sketch after this list).
- Ensure that thresholds and rules are periodically reviewed to adapt to evolving evasion techniques.
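A minimal sketch of the drift-detection recommendation, assuming tabular numeric features and using scikit-learn's IsolationForest; the synthetic data, the `screen_inputs` helper and the contamination rate are illustrative choices, not part of the card.

```python
# Minimal sketch: flag inputs that deviate from the training distribution.
# The data, contamination rate and helper name are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))  # stand-in for "normal" training inputs

detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

def screen_inputs(X_batch):
    """Return a mask marking inputs that look out-of-distribution, plus anomaly scores."""
    scores = detector.decision_function(X_batch)   # lower = more anomalous
    flagged = detector.predict(X_batch) == -1      # -1 means anomaly
    return flagged, scores

# At inference time, route flagged inputs to logging or manual review.
X_live = rng.normal(loc=3.0, scale=1.0, size=(20, 10))      # shifted distribution
flagged, scores = screen_inputs(X_live)
print(f"{flagged.sum()} of {len(X_live)} inputs flagged as suspicious")
```

Flagged inputs can then feed the logging and review process described in the second recommendation.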
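A minimal sketch of the adversarial-training recommendation, assuming a PyTorch classifier and using FGSM-style perturbations as one example of a known evasion technique; the toy model, epsilon value and random data are illustrative only.

```python
# Minimal sketch: mix FGSM adversarial examples into each training batch.
# The model, epsilon and synthetic data are illustrative placeholders; real
# pipelines should use evasion techniques relevant to their own domain.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_examples(x, y, epsilon=0.1):
    """Craft Fast Gradient Sign Method perturbations of a clean batch."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 20)              # stand-in for real features
    y = torch.randint(0, 2, (32,))       # stand-in for real labels
    x_adv = fgsm_examples(x, y)
    optimizer.zero_grad()
    # Train on clean and adversarial inputs together.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```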
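A minimal sketch of the ensemble recommendation, combining heterogeneous scikit-learn models with soft voting so that an input crafted to evade one learner is less likely to fool them all; the chosen estimators and synthetic dataset are illustrative assumptions.

```python
# Minimal sketch: a heterogeneous voting ensemble as a hedge against evasion.
# The estimators and synthetic data are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",   # average predicted probabilities across the models
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```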
Interesting resources/references
- Threat Modelling AI/ML Systems and Dependencies, Microsoft
- Adversarially Robust Malware Detection Using Monotonic Classification
- Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
- Feature Denoising for Improving Adversarial Robustness
- Securing Machine Learning Algorithms, ENISA
- STRIDE-AI: An Approach to Identifying Vulnerabilities of Machine Learning Assets
- Stride-ML Threat Model
- MITRE ATLAS™ - Adversarial Threat Landscape for Artificial-Intelligence Systems