Are we protected from attacks on the AI/ML Supply Chain?
- Owing to the large resources (data and computation) required to train models, the current practice is to reuse models trained by large corporations and modify them slightly for the task at hand. These models are curated in a Model Zoo. In this attack, the adversary compromises the models hosted in the Model Zoo, thereby poisoning the well for everyone who downloads them.
- Example: researchers showed how an attacker could insert malicious code into a popular pre-trained model. An unsuspecting ML developer then downloaded this model and used it as part of the image recognition system in their code.
Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
If you answered No, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Minimize 3rd-party dependencies for models and data wherever possible.
- Incorporate these dependencies into your threat modeling process.
- Leverage strong authentication, access control and encryption between 1st/3rd-party systems.
- Perform integrity checks where possible to detect tampering.
Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
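The integrity-check recommendation can be sketched as a simple digest comparison. This is a minimal example, assuming you have obtained and pinned a trusted SHA-256 digest for the exact model artifact you reviewed; the `EXPECTED_SHA256` value below is a placeholder, not a real digest:

```python
import hashlib
from pathlib import Path

# Placeholder digest for illustration only: in practice, pin the SHA-256 of
# the exact model artifact you audited, obtained over a trusted channel.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected: str = EXPECTED_SHA256) -> bool:
    """Return True only if the downloaded artifact matches the pinned digest."""
    return sha256_of(path) == expected
```

A hash check like this only detects tampering after publication of the digest; it does not vouch for the model itself, so combine it with the other recommendations (vetting the source and restricting who can publish models).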