Can human operators safely interrupt or override the AI system at any time?

Categories: Accountability & Human Oversight, Safety & Environmental Impact
Phases: Design, Deploy, Monitor
  • High-risk AI systems must provide natural persons with the means to stop or override the system when necessary. This includes mechanisms such as a 'stop button' or fallback procedures that bring the system to a safe state.
  • A lack of override capabilities could lead to harm, especially in autonomous systems where malfunction or misalignment may go unnoticed without human intervention.

If you answered No, then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Design systems with built-in override or halt capabilities (an illustrative sketch follows this list).
  • Ensure that these mechanisms are tested regularly and accessible to responsible personnel.
  • Document override procedures clearly and provide training to relevant users.
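
As a concrete illustration of the recommendations above, the sketch below shows one way a built-in override can be wired into a control loop: every action is gated on an operator-controlled stop flag, and the system falls back to a defined safe state once the flag is set. All names used here (OverridableController, request_stop, safe_state, the 0.1 s loop interval) are illustrative assumptions, not part of any specific framework or regulation.

```python
import threading
import time


class OverridableController:
    """Illustrative controller with a human-operated stop/override path."""

    def __init__(self):
        # Set by a human operator; acts as the "stop button".
        self._stop = threading.Event()

    def request_stop(self):
        """Called from the operator interface to interrupt the system."""
        self._stop.set()

    def safe_state(self):
        """Fallback procedure: bring the system to a known safe state."""
        print("Entering safe state: halting actions, returning control to the operator.")

    def run(self, decide_action, apply_action):
        """Main loop: every action is gated on the operator's stop flag."""
        while not self._stop.is_set():
            action = decide_action()
            # Re-check just before acting, so an override takes effect even if
            # it arrived while the decision was being computed.
            if self._stop.is_set():
                break
            apply_action(action)
            time.sleep(0.1)
        self.safe_state()


if __name__ == "__main__":
    controller = OverridableController()
    # Simulate an operator pressing the stop button after one second.
    threading.Timer(1.0, controller.request_stop).start()
    controller.run(decide_action=lambda: "steady",
                   apply_action=lambda a: print(f"applying action: {a}"))
```

In a real deployment the stop signal would come from a physical control or an operator console rather than a timer, and the safe-state routine would be exercised regularly as part of the testing and training called for above.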

Interesting resources/references