Are we protected from model stealing attacks?
In model stealing, attackers recreate the underlying model by legitimately querying it; the stolen copy replicates the functionality of the original. Example: in the BigML case, researchers recovered the model used to predict whether someone is a good or bad credit risk with 1,150 queries, in under 10 minutes.
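The BigML-style attack works by equation solving: for a model whose output is a linear score, each query yields one equation in the unknown weights. The sketch below illustrates this with a hypothetical secret linear model (`target_predict` and its weights are invented for illustration; a real attacker would only have API access to the scores):

```python
# Sketch of an equation-solving model stealing attack against a model
# that exposes its raw score. The "target" here is a hypothetical secret
# linear model; in a real attack only query access to it exists.
def target_predict(x):
    # secret linear model: returns the raw score w.x + b
    w, b = [2.0, -1.5, 0.5], 0.25
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def steal_linear_model(predict, n_features):
    # query at the origin to recover the bias term
    b = predict([0.0] * n_features)
    # query at each unit vector to recover one weight per query
    w = []
    for i in range(n_features):
        e = [0.0] * n_features
        e[i] = 1.0
        w.append(predict(e) - b)
    return w, b  # n_features + 1 queries suffice for an exact copy

stolen_w, stolen_b = steal_linear_model(target_predict, 3)
print(stolen_w, stolen_b)  # recovers [2.0, -1.5, 0.5] and 0.25 exactly
```

With n+1 features, n+1 queries reproduce the model exactly; this is why the recommendations below focus on limiting what prediction APIs return.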
If you answered No, then you are at risk
If you are not sure, then you might be at risk too
Recommendations
- Minimize or obfuscate the details returned in prediction APIs while still maintaining their usefulness to 'honest' applications.
- Define a well-formed query for your model inputs and only return results in response to completed, well-formed inputs matching that format.