Are we protected from malicious AI/ML providers who could recover training data?


Category: Security
Phases: Design Phase | Input Phase | Model Phase | Output Phase
  • When model training is fully or partially outsourced, a malicious ML provider can deliver a trained model that contains a backdoor and later query the model deployed by the customer to recover that customer’s training data.
  • Example: researchers demonstrated a malicious provider supplying a backdoored training algorithm from which the private training data could be recovered; given only the model, they were able to reconstruct faces and texts (a minimal sketch of this kind of reconstruction follows below).

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
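
The reconstruction described above is essentially a model-inversion attack: with access to the trained model, an attacker optimises an input until the model scores it highly for a chosen class, recovering what the model "remembers" about that class. The sketch below is only an illustration of the general technique, not the exact procedure from the cited research; it assumes PyTorch and a hypothetical 28x28 grayscale image classifier (trained_model), both of which are assumptions.

  import torch

  def invert_class(model, target_class, steps=500, lr=0.1):
      # Gradient-ascent reconstruction: find an input the model scores highly
      # for target_class. Shapes assume a 1x28x28 grayscale classifier.
      model.eval()
      x = torch.zeros(1, 1, 28, 28, requires_grad=True)
      optimizer = torch.optim.Adam([x], lr=lr)
      for _ in range(steps):
          optimizer.zero_grad()
          logits = model(x)
          loss = -logits[0, target_class]   # maximise the target-class logit
          loss.backward()
          optimizer.step()
          x.data.clamp_(0.0, 1.0)           # keep pixel values in a valid range
      return x.detach()

  # reconstruction = invert_class(trained_model, target_class=3)

Note that this only requires access to the trained model itself, not the original data, which is what makes an outsourced or backdoored model dangerous.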

If you answered No, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

  • Research papers demonstrating the viability of this attack indicate that homomorphic encryption could be an effective mitigation; a minimal encrypted-scoring sketch is included at the end of this card. For more information, see Microsoft's Threat Modeling AI/ML Systems and Dependencies.
  • Train all sensitive models in-house.
  • Catalog training data or ensure it comes from a trusted third party with strong security practices.
  • Threat model the interaction between the MLaaS provider and your own systems.

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
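
As a rough illustration of the homomorphic-encryption recommendation, the sketch below uses the python-paillier (phe) package, which is additively homomorphic, so an untrusted provider can score a simple linear model on encrypted features without ever seeing them in plaintext. The package choice, key size, weights and feature values are illustrative assumptions, not part of the Microsoft guidance; non-linear models would need a fuller HE scheme (for example CKKS via a library such as TenSEAL), at a significant performance cost.

  from phe import paillier

  # Keys stay with the data owner; only the public key is shared with the provider.
  public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

  features = [0.7, 1.3, -0.2]                           # sensitive client-side data
  encrypted = [public_key.encrypt(v) for v in features]

  # --- untrusted provider: computes on ciphertexts only ---
  weights, bias = [0.5, -1.1, 2.0], 0.3
  encrypted_score = public_key.encrypt(bias)
  for w, enc_x in zip(weights, encrypted):
      encrypted_score = encrypted_score + enc_x * w     # homomorphic multiply-and-add

  # --- data owner: the only party able to decrypt the result ---
  score = private_key.decrypt(encrypted_score)
  print(score)  # equals the plaintext dot product plus bias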