Are we protected against model sabotage?


Security Category
Design Phase · Input Phase · Model Phase · Output Phase

Model sabotage is a threat in which attackers exploit or physically damage the libraries, infrastructure, and machine learning platforms that host or supply AI/ML services and systems. Source: ENISA

If you answered No, then you are at risk.

If you are not sure, then you might also be at risk.

Recommendations

  • Implement security measures, such as access controls, artifact integrity checks, and monitoring, to protect your models against sabotage.
  • Assess the security posture of third-party tooling and providers.
  • Put a disaster recovery plan in place with mitigation measures for this type of attack.