Have we implemented safeguards to detect and prevent insider threats to our AI systems?


Cybersecurity Category
Design Phase, Deploy Phase, Monitor Phase

AI designers and developers may deliberately expose data and models for a variety of reasons, such as revenge or extortion. The main security properties impacted are integrity, data confidentiality, and trustworthiness. Source: ENISA

If you answered No, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

  • Implement onboarding and offboarding procedures to ensure the trustworthiness of internal and external personnel.
  • Enforce separation of duties and the principle of least privilege.
  • Enforce the usage of managed devices with appropriate policies and protective software.
  • Implement awareness training.
  • Implement strict access control and audit trail mechanisms (see the sketch after this list).
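
As a minimal sketch of the last two points, the Python snippet below combines role-based, least-privilege access checks with an append-only audit trail for model and dataset access. The role names, actions, resources, and log destination are illustrative assumptions, not part of any specific product or of the ENISA guidance.

```python
import json
import logging
from datetime import datetime, timezone

# Audit trail: every access decision is appended to a log file (illustrative path).
logging.basicConfig(
    filename="model_access_audit.log",
    level=logging.INFO,
    format="%(message)s",
)

# Least privilege: each role is granted only the actions it needs (hypothetical roles).
ROLE_PERMISSIONS = {
    "data-scientist": {"model:read", "dataset:read"},
    "ml-engineer": {"model:read", "model:deploy"},
    "auditor": {"audit:read"},
}


def check_access(user: str, role: str, action: str, resource: str) -> bool:
    """Return True if the role permits the action, and log every decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    }))
    return allowed


if __name__ == "__main__":
    # Separation of duties: a data scientist may read a model but not deploy it.
    print(check_access("alice", "data-scientist", "model:read", "fraud-model-v3"))    # True
    print(check_access("alice", "data-scientist", "model:deploy", "fraud-model-v3"))  # False
```

In practice the role definitions would live in your identity provider or IAM system and the audit log would be shipped to tamper-resistant storage; the point of the sketch is that every decision is both constrained by role and recorded for later review.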