Are we protected from model inversion attacks?


Security Category
Design Phase | Input Phase | Model Phase | Output Phase
  • In a model inversion attack, an attacker who already holds some personal data about specific individuals in the training set can infer further personal information about those same individuals by observing the inputs and outputs of the ML model.
  • Model inversion can also recover the private features used by a machine learning model, up to and including reconstructing private training data that the attacker should not have access to.
  • Example: an attacker recovers the secret features used in the model through carefully crafted queries (a sketch of such an attack follows below).

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.
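To make the threat concrete, here is a minimal sketch of a gradient-based (white-box) model inversion attack against a classifier. The model, the invert_class helper, and all dimensions are illustrative assumptions, not taken from the Microsoft guidance cited above.

```python
# Minimal, illustrative sketch of a gradient-based model inversion attack.
# The classifier and helper names are hypothetical stand-ins.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Stand-in for a classifier trained on sensitive data (e.g. identities)."""
    def __init__(self, input_dim=64 * 64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def invert_class(model, target_class, shape=(1, 1, 64, 64), steps=500, lr=0.1):
    """Reconstruct an input that the model strongly associates with target_class."""
    model.eval()
    x = torch.zeros(shape, requires_grad=True)   # start from a blank input
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximise the target class's confidence (minimise its negative log-probability)
        loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                  # keep the reconstruction in a valid range
    return x.detach()

model = SmallClassifier()                        # in practice: the victim's trained model
reconstruction = invert_class(model, target_class=3)
print(reconstruction.shape)                      # an input resembling class 3's training data
```

With query-only (black-box) access, attackers approximate the same objective by estimating gradients from the confidence scores returned per query, which is why limiting the information returned by the model matters (see the recommendations below).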

If you answered No, then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Interfaces to models trained with sensitive data need strong access control.
  • Implement rate-limiting on the queries allowed by the model.
  • Implement a gate between users/callers and the actual model: perform input validation on all proposed queries, reject anything that does not meet the model’s definition of input correctness, and return only the minimum amount of information needed to be useful (see the sketch after this list).

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.
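
The sketch below shows one way these recommendations can be combined in front of a sensitive model: per-caller rate limiting, strict input validation, and returning only a label instead of raw confidence scores. The class and method names (QueryGate, predict_label) and the specific validation rules are hypothetical assumptions for illustration, not part of the cited Microsoft guidance.

```python
# Illustrative query gate in front of a sensitive model: validation, rate limiting,
# and minimal output. Names and thresholds are hypothetical.
import time
from collections import defaultdict, deque

class QueryGate:
    def __init__(self, model, max_queries_per_minute=30):
        self.model = model
        self.max_queries = max_queries_per_minute
        self.history = defaultdict(deque)   # caller_id -> timestamps of recent queries

    def _rate_limited(self, caller_id):
        now = time.monotonic()
        window = self.history[caller_id]
        while window and now - window[0] > 60:
            window.popleft()                # drop queries older than one minute
        if len(window) >= self.max_queries:
            return True
        window.append(now)
        return False

    def _valid_input(self, features):
        # Reject anything not meeting the model's definition of input correctness;
        # here: a fixed-length sequence of floats in an expected range.
        return (
            isinstance(features, (list, tuple))
            and len(features) == 16
            and all(isinstance(v, (int, float)) and 0.0 <= v <= 1.0 for v in features)
        )

    def predict_label(self, caller_id, features):
        if self._rate_limited(caller_id):
            raise PermissionError("rate limit exceeded")
        if not self._valid_input(features):
            raise ValueError("malformed query rejected")
        scores = self.model(features)
        # Return only the predicted label, not the raw confidence scores,
        # to limit the signal available to inversion attacks.
        return max(range(len(scores)), key=scores.__getitem__)

# Usage with a stub model that returns per-class scores
gate = QueryGate(model=lambda feats: [sum(feats), 1.0, 0.5])
print(gate.predict_label("caller-42", [0.1] * 16))
```

Returning only the top label trades some utility (callers lose calibrated confidences) for a much smaller signal per query, which directly limits what an attacker can reconstruct; authentication and access control on callers should sit in front of such a gate rather than replace it.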