Are we protected from poisoning attacks?
- In a poisoning attack, the attacker's goal is to contaminate the machine learning model generated in the training phase, so that predictions on new data are modified in the testing phase. This attack can also be carried out by insiders.
- Example: in a medical dataset where the goal is to predict medicine dosage from demographic information, researchers introduced malicious samples at an 8% poisoning rate, which changed the dosage by 75.06% for half of the patients (a toy illustration of the effect follows below).
Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.
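The mechanism can be illustrated with a minimal, self-contained sketch in Python/NumPy. The data, model, and poisoning rate below are hypothetical and are not taken from the Microsoft example; the point is only that a small fraction of maliciously labelled training samples is enough to shift a fitted model's predictions for new inputs.

```python
# Toy poisoning sketch (hypothetical data, ordinary least squares):
# ~8% of training labels are maliciously set to a high dosage, and the
# fitted model's predictions shift as a result.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: dosage roughly proportional to a demographic feature (e.g. age).
X_clean = rng.uniform(20, 80, size=(500, 1))
y_clean = 0.5 * X_clean[:, 0] + rng.normal(0, 2, 500)

# Attacker injects ~8% poisoned samples with a maliciously high dosage label.
n_poison = int(0.08 * len(X_clean))
X_poison = rng.uniform(20, 80, size=(n_poison, 1))
y_poison = np.full(n_poison, 150.0)

X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, y_poison])

def fit_linear(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

w_clean = fit_linear(X_clean, y_clean)
w_poisoned = fit_linear(X_train, y_train)

# Predicted dosage for a new patient under both models.
x_new = np.array([50.0, 1.0])  # [feature value, intercept term]
print("clean model prediction:   ", x_new @ w_clean)
print("poisoned model prediction:", x_new @ w_poisoned)
```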
Other scenarios:
- Data tampering: actors such as AI/ML designers and engineers can deliberately or unintentionally manipulate and expose data. Data can also be manipulated while it is stored, or through processes such as feature selection. Besides interfering with model inference, this type of threat can also introduce bias and cause severe discriminatory issues. Source: ENISA
- An attacker who knows how a raw data filtration scheme is set up may be able to leverage that knowledge to craft malicious input later, once the system is deployed. Source: BerryVilleiML
- Adversaries may fine-tune hyper-parameters and thus influence the AI system's behaviour. Hyper-parameters can also be a vector for accidental overfitting. In addition, hard-to-detect changes to hyper-parameters would make an ideal insider attack (see the integrity-check sketch below). Source: ENISA
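One pragmatic way to notice this kind of tampering is to fingerprint both the training data and the hyper-parameter configuration when they are reviewed, and to re-check those fingerprints before training or deployment. The sketch below is a minimal illustration of that idea, not a complete control; the parameter names and where the approved hashes would be stored (e.g. a signed manifest) are assumptions.

```python
# Minimal integrity-check sketch: hash the reviewed hyper-parameters (and,
# analogously, the training data files) so hard-to-detect changes stand out.
import hashlib
import json

def fingerprint_file(path: str) -> str:
    """SHA-256 hash of a training data file (call once per dataset file)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def fingerprint_hyperparams(params: dict) -> str:
    """SHA-256 hash of a canonical JSON serialisation of the hyper-parameters."""
    canonical = json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# At review time: record the approved fingerprint somewhere attackers cannot
# silently modify (e.g. a signed manifest or a separate audit system).
reviewed_params = {"learning_rate": 0.01, "max_depth": 6, "n_estimators": 200}
approved_hash = fingerprint_hyperparams(reviewed_params)

# Before training or deployment: re-compute and compare.
current_params = {"learning_rate": 0.01, "max_depth": 6, "n_estimators": 200}
if fingerprint_hyperparams(current_params) != approved_hash:
    print("WARNING: hyper-parameter configuration differs from the reviewed version")
else:
    print("Hyper-parameter configuration matches the reviewed version")
```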
If you answered No, then you are at risk
If you are not sure, then you might be at risk too
Recommendations
- Define anomaly sensors that look at the data distribution on a day-to-day basis and alert on variations (see the sketch after this list).
- Measure training data variation on a daily basis, with telemetry for skew/drift.
- Apply input validation, both sanitization and integrity checking.
Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.
- Implement measures against insider threats.
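As a concrete starting point for the first two recommendations, the sketch below compares today's batch of a feature against a reference sample drawn at training time and alerts when the distributions diverge. The feature, the reference data, the use of a two-sample Kolmogorov–Smirnov test (via SciPy), and the alerting threshold are all assumptions to be tuned per system.

```python
# Daily distribution check sketch (hypothetical feature and threshold):
# alert when today's data drifts away from the training-time reference.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

reference = rng.normal(50, 10, size=5000)     # distribution observed at training time
todays_batch = rng.normal(55, 10, size=1000)  # today's incoming data (shifted here)

statistic, p_value = ks_2samp(reference, todays_batch)

P_VALUE_THRESHOLD = 0.01  # assumed alerting threshold; tune per feature
if p_value < P_VALUE_THRESHOLD:
    print(f"ALERT: distribution shift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant distribution shift detected today")
```

In practice this check would run per feature in a scheduled job, with alerts wired into whatever monitoring and telemetry the team already uses.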
Interesting resources/references
- Microsoft, Threat Modelling AI/ML Systems and Dependencies
- Securing Machine Learning Algorithms, ENISA
- STRIDE-AI: An Approach to Identifying Vulnerabilities of Machine Learning Assets
- Stride-ML Threat Model
- Robustness Techniques & Toolkits for Applied AI
- MITRE ATLAS™ - Adversarial Threat Landscape for Artificial-Intelligence Systems