This page is a fallback for search engines and cases when javascript fails or is disabled.
Please view this card in the library, where you can also find the rest of the plot4ai cards.
Could our AI system oversimplify real-world problems?
AI systems can overlook the social contexts in which they operate, leading to unintended consequences. Specifically, watch out for these types of abstraction traps:
- The formalism trap: focusing too narrowly on technical aspects without considering real-world context.
- The ripple effect trap: ignoring how an AI system might alter behaviors within a social system, causing unforeseen impacts.
- The solutionism trap: over-relying on AI as the answer to all problems, neglecting simpler, more ethical, or more effective alternatives.
- The framing trap: failing to account for the broader context or related factors within which the system operates, leading to inaccurate outcomes.
- The portability trap: applying AI systems outside their original context, potentially resulting in errors or harm. For example, self-driving cars trained in one country may struggle with different traffic rules and conditions elsewhere.
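One concrete way to guard against the portability trap is to check whether inputs in a new deployment context still resemble the data the model was trained on. Below is a minimal, illustrative sketch (the function names `drift_score` and `flag_portability_risk`, and the threshold of 2.0, are assumptions for this example, not part of any plot4ai tooling) that flags new-context data whose mean has drifted far from the training distribution:

```python
from statistics import mean, stdev

def drift_score(train_values, new_values):
    """Shift of the new data's mean, measured in units of the
    training data's standard deviation (a simple z-style check)."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(new_values) - mu) / sigma

def flag_portability_risk(train_values, new_values, threshold=2.0):
    """Flag when new-context inputs drift far beyond what the
    model saw during training (a possible portability trap)."""
    return drift_score(train_values, new_values) > threshold

# Example: average speeds (km/h) seen during training in one country
train = [95, 100, 105, 98, 102]
# A new deployment context with much lower typical speeds
new_context = [60, 62, 58]
print(flag_portability_risk(train, new_context))  # True: distribution has shifted
```

A real system would compare many features with more robust statistics (e.g. population stability index or KS tests), but even a crude check like this can surface a context mismatch before it causes harm.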
If you answered Yes, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Align the problem formulation with the relevant social context to avoid oversimplification. Ensure all actors and factors within the system are considered to account for the broader context in which the AI operates.
- Evaluate potential shifts in power dynamics and unintended consequences as the system interacts with other components. Consider how geographical, cultural, or temporal differences might affect its performance when applied to new contexts.
- Critically assess if AI is truly the best solution, or if simpler alternatives might serve the same purpose more effectively.