Could third-party AI/ML providers compromise our training data or insert backdoors?
A malicious ML provider could query the model it trained for a customer and recover the customer's private training data. Similarly, when the training process is fully or partially outsourced, a malicious third party could hand back a trained model that contains a backdoor. Example: researchers demonstrated such an attack with a backdoored training algorithm that allowed the private training data to be recovered; given only the model, they were able to reconstruct faces and texts from the training set.
If you answered Yes, then you are at risk
If you are not sure, then you might be at risk too
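As an illustration of the backdoor scenario, the sketch below probes a returned model for a simple patch-style trigger. The `predict(batch)` callable, the image shape, and the patch placement are assumptions for the example, not part of the card and not a general detection method: it merely checks whether stamping a small, meaningless patch onto clean inputs makes predictions collapse onto a single class, a typical signature of a trojaned classifier.

```python
# Heuristic probe for a patch-style backdoor in a third-party model.
# Assumptions (hypothetical): `predict(batch)` returns class ids for a batch of
# images shaped (N, H, W, C) with values in [0, 1].
import numpy as np

def stamp_trigger(images: np.ndarray, size: int = 4, value: float = 1.0) -> np.ndarray:
    """Place a small square patch in the bottom-right corner of each image."""
    patched = images.copy()
    patched[:, -size:, -size:, :] = value
    return patched

def backdoor_signal(predict, clean_inputs: np.ndarray) -> float:
    """Score how strongly a meaningless patch redirects predictions.

    Returns the flip rate weighted by how concentrated the flipped
    predictions are on one target class; values near 1.0 are a red flag.
    """
    clean_preds = np.asarray(predict(clean_inputs))
    trigger_preds = np.asarray(predict(stamp_trigger(clean_inputs)))
    flipped = trigger_preds != clean_preds
    if flipped.any():
        _, counts = np.unique(trigger_preds[flipped], return_counts=True)
        concentration = counts.max() / flipped.sum()
    else:
        concentration = 0.0
    return float(flipped.mean() * concentration)
```

A clean model should score near 0; a score approaching 1 means almost every stamped input is pushed to the same class and the model deserves further scrutiny before deployment.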
Recommendations
- Research papers demonstrating the viability of this attack indicate that homomorphic encryption could be an effective mitigation, since the provider then computes on data it cannot read (see the encrypted-scoring sketch after this list).
- Train all sensitive models in-house.
- Catalog your training data, or ensure it comes from a trusted third party with strong security practices (see the checksum-manifest sketch after this list).
- Threat model the interaction between the MLaaS provider and your own systems.
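To illustrate the homomorphic-encryption recommendation, here is a minimal sketch using the python-paillier (`phe`) package, which is additively homomorphic. The linear model, weights, feature values, and key size are illustrative assumptions; deeper models would require a fully homomorphic scheme (e.g. CKKS), but the principle is the same: the provider scores ciphertexts and never sees the raw data.

```python
# Encrypted linear scoring with an additively homomorphic scheme (Paillier).
from phe import paillier

# --- client side: generate keys and encrypt the sensitive feature vector ---
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
features = [0.7, 1.3, -0.2]                      # private data, never sent in plaintext
encrypted_features = [public_key.encrypt(x) for x in features]

# --- provider side: compute a score on ciphertexts only ---
weights, bias = [0.5, -1.1, 2.0], 0.3            # provider's (hypothetical) model
terms = [w * x_enc for w, x_enc in zip(weights, encrypted_features)]
encrypted_score = terms[0]
for term in terms[1:]:
    encrypted_score = encrypted_score + term      # ciphertext + ciphertext
encrypted_score = encrypted_score + bias          # ciphertext + plaintext scalar

# --- client side: only the key holder can read the result ---
print(round(private_key.decrypt(encrypted_score), 4))
```

Note that Paillier only supports addition of ciphertexts and multiplication by plaintext scalars, which is enough for linear scoring but not for arbitrary model architectures.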
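To support cataloging of training data, the following sketch records SHA-256 checksums of every dataset file in a manifest and later verifies them, so silent tampering (for example, poisoned samples swapped in by a third party) becomes detectable. The directory layout and manifest filename are assumptions for the example.

```python
# Build and verify a checksum manifest for a training-data directory.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str, manifest_path: str = "training_data_manifest.json") -> None:
    """Snapshot checksums of every file under the training-data directory."""
    manifest = {str(p): sha256_of(p)
                for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str = "training_data_manifest.json") -> list[str]:
    """Return the files whose contents no longer match the recorded checksum."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [path for path, expected in manifest.items()
            if not Path(path).exists() or sha256_of(Path(path)) != expected]
```

Building the manifest before handing data to a provider, and verifying it before every training run, gives a simple tamper-evidence check that complements stronger provenance controls.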