Are we protected from model stealing attacks?

Category: Security
Phases: Design, Input, Model, Output

  • In model stealing, attackers recreate the underlying model by legitimately querying it: the recreated model replicates the functionality of the original (a minimal sketch of such an attack follows the source below).
  • Example: in the BigML case, researchers recovered the model used to predict whether someone is a good or bad credit risk with 1,150 queries, within 10 minutes.

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
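
To make the threat concrete, here is a minimal sketch, assuming a hypothetical /predict endpoint, of how such an extraction could look in Python. The API_URL, the response schema, the input dimensionality, and the surrogate architecture (a scikit-learn logistic regression) are all illustrative assumptions, not details from the BigML case.

```python
# Minimal model-stealing sketch against a hypothetical prediction API.
# The attacker needs only legitimate query access: send inputs, record
# the returned predictions, and fit a surrogate on the collected pairs.
import numpy as np
import requests
from sklearn.linear_model import LogisticRegression

API_URL = "https://victim.example.com/predict"  # hypothetical endpoint

# 1. Probe the victim model with attacker-chosen inputs
#    (1,150 queries sufficed in the BigML case cited above).
queries = np.random.uniform(0.0, 1.0, size=(1150, 20))
labels = []
for x in queries:
    response = requests.post(API_URL, json={"features": x.tolist()})
    labels.append(response.json()["prediction"])  # assumed response schema

# 2. Train a surrogate on the (input, output) pairs; it approximates the
#    victim's behaviour without ever touching its parameters or training data.
surrogate = LogisticRegression(max_iter=1000).fit(queries, np.array(labels))
```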

If you answered No, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

  • Minimize or obfuscate the details returned by prediction APIs while still keeping them useful to honest applications.
  • Define a well-formed query format for your model inputs, and only return results for complete, well-formed inputs matching that format (see the sketch after this list).
  • Return rounded confidence values: most legitimate callers do not need multiple decimal places of precision.

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
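
The sketch below, in Python with Flask, combines these recommendations in a single hardened endpoint. The feature schema, the model_predict stub, and the two-decimal rounding are illustrative assumptions; adapt them to your own model and callers.

```python
# Minimal hardened prediction endpoint (Flask): reject malformed queries
# and return only a rounded confidence value.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

EXPECTED_FEATURES = {"age", "income", "loan_amount"}  # illustrative schema


def model_predict(features: dict) -> tuple[str, float]:
    """Stand-in for the real inference call; replace with your own model."""
    return "good", 0.7312894  # placeholder label and raw confidence


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(silent=True)
    # Only answer complete, well-formed inputs matching the expected format.
    if payload is None or set(payload) != EXPECTED_FEATURES:
        abort(400, description="Request must contain exactly the expected features.")
    if not all(isinstance(payload[f], (int, float)) for f in EXPECTED_FEATURES):
        abort(400, description="All feature values must be numeric.")

    label, confidence = model_predict(payload)
    # Coarse confidence values leak less per query: honest callers rarely
    # need full precision, but attackers use it to map decision boundaries.
    return jsonify({"label": label, "confidence": round(confidence, 2)})
```

Rejecting malformed queries raises the cost of automated probing, and rounding confidences reduces the information an attacker extracts per query, without degrading the service for honest applications.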