Could the AI system’s design choices lead to unfair outcomes?

Category: Bias, Fairness & Discrimination
Phases: Design, Input, Model, Output, Monitor

Biases can emerge from an AI model’s design and training, even if the dataset is unbiased. Design choices and development processes can introduce various biases that affect fairness and accuracy.

  • Algorithmic bias: Introduced by design decisions, such as the choice of optimization function or regularization technique, which can distort predictions and lead to unfair outcomes.
  • Aggregation bias: Occurs when a model assumes all data follows the same distribution, failing to account for group differences and producing inaccurate results for some groups (see the sketch after this list).
  • Omitted-variable bias: Happens when key factors are left out of the model, distorting the relationships between features and outcomes. For instance, a forecasting model that omits a new competitor's market entry will attribute the resulting shift in customer behaviour to the wrong factors.
  • Learning bias: Arises when a model prioritizes one objective, like accuracy, over others, like fairness, leading to skewed outcomes that benefit certain groups.
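
To make aggregation bias concrete, here is a minimal sketch on synthetic data, assuming scikit-learn and NumPy are available: two subgroups follow opposite trends, so a single pooled model fits neither, while per-group models fit both. All names and data are illustrative, not a prescription.

```python
# Illustrative only: two synthetic groups with opposite trends.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Group A follows y = 2x; group B follows y = -2x.
x_a = rng.normal(size=(500, 1))
y_a = 2.0 * x_a[:, 0] + rng.normal(scale=0.1, size=500)
x_b = rng.normal(size=(500, 1))
y_b = -2.0 * x_b[:, 0] + rng.normal(scale=0.1, size=500)

# A pooled model assumes one distribution for everyone and
# learns a slope near zero, which is wrong for both groups.
pooled = LinearRegression().fit(np.vstack([x_a, x_b]),
                                np.concatenate([y_a, y_b]))

for name, x, y in [("A", x_a, y_a), ("B", x_b, y_b)]:
    per_group = LinearRegression().fit(x, y)
    print(f"group {name}: pooled MSE = "
          f"{mean_squared_error(y, pooled.predict(x)):.2f}, "
          f"per-group MSE = "
          f"{mean_squared_error(y, per_group.predict(x)):.2f}")
```

The pooled model's error is large for both groups even though its overall training loss can look acceptable, which is exactly the failure mode described above.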

If you answered Yes, then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Assess design choices: Critically evaluate how optimization methods, loss functions, and regularization impact fairness.
  • Account for group differences: Avoid assuming uniform data distributions. Identify and model distinct subgroups where necessary.
  • Check feature relevance: Use feature importance techniques to detect and include relevant variables that could influence predictions (see the first sketch below).
  • Balance performance trade-offs: Monitor both overall accuracy and subgroup performance to prevent the model from favouring certain groups or objectives unfairly (see the second sketch below).
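
As a starting point for the feature-relevance recommendation, the first sketch below uses scikit-learn's permutation importance on a synthetic classification task; the dataset and model are placeholders chosen for brevity, not a recommendation of any particular estimator.

```python
# Illustrative only: rank candidate features by permutation importance
# before deciding which variables the model can safely omit.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Features with near-zero importance carry little signal on this data,
# but that alone does not prove a variable is irrelevant in production;
# omitted-variable bias can still appear when conditions change.
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")
```

For the trade-off recommendation, here is a minimal sketch of subgroup monitoring. It assumes you carry a sensitive attribute (the hypothetical `group` values below) alongside the test labels, and it reports accuracy overall and per group so that a gap between groups is visible rather than hidden inside one global score.

```python
# Illustrative only: report overall and per-subgroup accuracy.
import numpy as np
from sklearn.metrics import accuracy_score

def subgroup_report(y_true, y_pred, group):
    """Print overall accuracy, then accuracy for each subgroup."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    print(f"overall accuracy: {accuracy_score(y_true, y_pred):.3f}")
    for g in np.unique(group):
        mask = group == g
        acc = accuracy_score(y_true[mask], y_pred[mask])
        print(f"group {g!r}: accuracy {acc:.3f} (n = {mask.sum()})")

# Toy usage with made-up predictions and a made-up group attribute:
subgroup_report(y_true=[1, 0, 1, 1, 0, 1],
                y_pred=[1, 0, 0, 1, 0, 0],
                group=["A", "A", "A", "B", "B", "B"])
```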
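
In practice such a report would run on the real evaluation set at every retraining and monitoring cycle, so that a widening accuracy gap between subgroups is caught before it reaches production.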