This page is a fallback for search engines and for cases where JavaScript fails or is disabled.
Please view this card in the library, where you can also find the rest of the plot4ai cards.
Are users informed about the AI system's reliability, limitations, and risks in a way that enables safe and effective use?
Users need to understand what the AI system can and cannot do, including its intended use, reliability, limitations, and potential risks. Without clear communication, users may place unwarranted trust in the system, misuse it, or be harmed by misleading outputs. This undermines transparency, fairness, safety, and user autonomy. For example, failing to disclose error rates, decision logic, or appropriate use contexts can lead to over-reliance or unsafe behavior, especially in sensitive domains.
If you answered No, then you are at risk.
If you are not sure, you might be at risk too.
Recommendations
- Clearly communicate the system's intended use, benefits, limitations, and potential risks.
- Provide timely, accessible information on accuracy levels, error rates, interpretability, and system updates.
- Ensure users understand when and how to rely on the system, and when human judgment is needed.
- Use interpretability tools appropriate to the impact of the system, especially if it is a black-box model.
- Follow accessibility best practices to ensure all users, including those with disabilities, can understand the system.
- Incorporate feedback loops such as surveys to verify that users actually understand how the system works and what they can expect.
- Treat this as part of compliance with the GDPR transparency principle, and as good practice for system safety and usability.
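One way to operationalise the first two recommendations is to generate a plain-language disclosure from the metrics you already track. The sketch below is a minimal, hypothetical example (the `Disclosure` class and its fields are assumptions, not part of any standard or library); it shows the kind of information users should see: intended use, accuracy, error rate, limitations, and when to defer to human judgment.

```python
# Minimal sketch of a user-facing disclosure summary, assuming you already
# track evaluation metrics for your AI system. All names here (Disclosure,
# render) are hypothetical and for illustration only.
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    intended_use: str
    accuracy: float                 # measured on a held-out test set
    error_rate: float               # share of outputs expected to be wrong
    limitations: list = field(default_factory=list)
    last_updated: str = "unknown"

    def render(self) -> str:
        """Plain-language summary suitable for end users."""
        lines = [
            f"Intended use: {self.intended_use}",
            f"Accuracy: {self.accuracy:.0%} "
            f"(about {self.error_rate:.0%} of outputs may be wrong)",
            f"Last updated: {self.last_updated}",
            "Known limitations:",
        ]
        lines += [f"  - {item}" for item in self.limitations]
        lines.append("For high-stakes decisions, verify outputs with a human expert.")
        return "\n".join(lines)

# Example values are illustrative, not from a real system.
card = Disclosure(
    intended_use="Triage support for customer emails; not for legal or medical advice.",
    accuracy=0.92,
    error_rate=0.08,
    limitations=["Trained on English text only",
                 "Performance degrades on very long messages"],
    last_updated="2024-06-01",
)
print(card.render())
```

Keeping the disclosure in code next to the evaluation pipeline makes it easier to satisfy the "timely" part of the recommendation: regenerate it whenever metrics or model versions change, rather than maintaining a separate document that drifts out of date.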
