Are we protected from membership inference attacks?
- In a membership inference attack, an attacker can determine whether a given data record was part of the model's training dataset.
- Example: researchers were able to predict a patient's main procedure (e.g. the surgery the patient underwent) based on attributes such as age, gender and hospital. A minimal sketch of this kind of attack follows the source note below.
Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
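To make the threat concrete, the following is a minimal sketch of the simplest variant of this attack, confidence thresholding: the attacker assumes the model is more confident on records it was trained on. It assumes a scikit-learn-style classifier exposing predict_proba; the function name and the default threshold are illustrative assumptions, not part of the cited source.

```python
import numpy as np

def confidence_threshold_attack(model, records, labels, threshold=0.9):
    """Flag records as likely training-set members when the model's
    confidence in the true label exceeds `threshold`.

    `model` is any classifier exposing a scikit-learn-style
    predict_proba; the default threshold is a placeholder that an
    attacker would tune, e.g. on shadow models.
    """
    probs = model.predict_proba(records)                  # (n, n_classes)
    conf_in_true_label = probs[np.arange(len(labels)), labels]
    return conf_in_true_label > threshold                 # True = likely member
```

A large gap between this attack's success rate on training records and on held-out records is itself a useful self-test: it indicates that the model leaks membership information.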
If you answered No, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Some research papers indicate that Differential Privacy would be an effective mitigation; a sketch of the underlying DP-SGD mechanism follows the source note below. For more information, check Threat Modeling AI/ML Systems and Dependencies.
- Neuron dropout and model stacking can be effective mitigations to an extent. Using neuron dropout not only increases the resilience of a neural network to this attack, but also increases model performance (see the dropout sketch below).
Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
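As an illustration of the Differential Privacy recommendation, here is a minimal sketch of one DP-SGD step (per-example gradient clipping plus Gaussian noise, as in Abadi et al., 2016) for binary logistic regression. All hyperparameter values are placeholders, and the privacy accounting needed to report an actual (epsilon, delta) guarantee is omitted; in practice, a maintained library such as Opacus or TensorFlow Privacy is the safer choice.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step for binary logistic regression.

    Each example's gradient is clipped to `clip_norm` and Gaussian noise
    is added to the summed gradient, bounding the influence any single
    record can have on the model. Hyperparameters are placeholders.
    """
    preds = 1.0 / (1.0 + np.exp(-X_batch @ w))                # sigmoid
    per_example_grads = (preds - y_batch)[:, None] * X_batch  # shape (n, d)

    # Clip each per-example gradient so no single record dominates the update.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Add calibrated Gaussian noise before averaging.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X_batch)
    return w - lr * noisy_grad
```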
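And for the dropout recommendation, a minimal PyTorch sketch; the layer sizes and dropout probability are arbitrary placeholders.

```python
import torch.nn as nn

# Small classifier with neuron dropout between layers. During training,
# nn.Dropout randomly zeroes activations, which reduces memorisation of
# individual training records and thus membership leakage.
model = nn.Sequential(
    nn.Linear(32, 64),   # input size 32 is a placeholder
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout probability is a placeholder
    nn.Linear(64, 2),
)

model.train()  # dropout active during training
# ... training loop ...
model.eval()   # dropout disabled at inference time
```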