Are we protected from membership inference attacks?
In a membership inference attack (MIA), the attacker can determine whether a given data record was part of the model's training dataset. Example: researchers attacked a model that predicts a patient's main procedure (e.g., the surgery the patient underwent) from attributes such as age, gender, and hospital, and could tell whether a given patient's record had been used to train it.
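Below is a minimal sketch of one common form of this attack, a loss-threshold membership inference test: records on which the model's loss is unusually low are guessed to be training members. The dataset, model choice, and threshold-free AUC evaluation are illustrative assumptions, not part of the original card.

```python
# Minimal loss-threshold membership inference sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Split records into "members" (used for training) and "non-members" (held out).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model trained only on the member records; overfitting makes the attack easier.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    """Cross-entropy loss of the model on each individual record."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

# The attacker guesses "member" when the loss is unusually low, because training
# records tend to be fit more tightly than records the model has never seen.
losses = np.concatenate([per_record_loss(target, X_mem, y_mem),
                         per_record_loss(target, X_non, y_non)])
is_member = np.concatenate([np.ones(len(X_mem)), np.zeros(len(X_non))])

# AUC near 0.5 means membership does not leak; higher values indicate leakage.
print("membership-inference AUC:", roc_auc_score(is_member, -losses))
```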
If you answered No, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Differential Privacy has been shown to be an effective mitigation in some studies (see the DP-SGD sketch after this list).
- Using neuron dropout and model stacking can be effective mitigations to an extent. Dropout not only increases a neural network's resilience to this attack, but can also improve model performance (a minimal dropout example follows below).
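For the Differential Privacy recommendation, one common approach is DP-SGD, which clips per-sample gradients and adds calibrated noise during training. The sketch below assumes a PyTorch pipeline and the Opacus library (its `make_private` API from Opacus ≥ 1.0); the toy data, model, and hyperparameters such as `noise_multiplier` and `max_grad_norm` are illustrative assumptions, not values prescribed by this card.

```python
# Minimal DP-SGD training sketch with Opacus (illustrative assumptions only).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data and model standing in for a real training pipeline.
X = torch.randn(2048, 20)
y = torch.randint(0, 2, (2048,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model/optimizer/loader so every step clips per-sample gradients
# and adds Gaussian noise (the DP-SGD mechanism).
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # more noise -> stronger privacy, lower accuracy
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```

Stronger noise (a higher `noise_multiplier`) gives a smaller epsilon and more protection against membership inference, at the cost of model accuracy; the right trade-off depends on the use case.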
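For the dropout recommendation, the sketch below shows what adding neuron dropout to a classifier could look like in PyTorch. The architecture and dropout rate are illustrative assumptions; model stacking would additionally mean combining the predictions of several independently trained models rather than relying on a single one.

```python
# Minimal sketch of a classifier with neuron dropout (illustrative assumptions only).
import torch
from torch import nn

class DropoutClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Dropout(p),          # randomly zeroes activations during training
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DropoutClassifier(n_features=20, n_classes=2)
model.train()   # dropout active during training: limits memorization of individual records
model.eval()    # dropout disabled at inference time
```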