Bias & Discrimination: could there be groups who might be disproportionately affected by the outcomes of the AI system?


Categories: Ethics & Human Rights | Technique & Processes
Phases: Design | Input | Model | Output
  • Could the AI system discriminate against people on any of the following grounds: sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age, gender or sexual orientation?
  • If your model learns from data specific to one cultural background, its output could discriminate against members of other cultural backgrounds.

If you answered Yes, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

  • Consider the different types of users and contexts where your product is going to be used.
  • Consider the impact of diversity in backgrounds, cultures, and other relevant attributes when selecting your input data and features, and when testing the output.
  • Assess the risk of possible unfairness towards individuals or communities to avoid discriminating against minority groups.
  • The disadvantage to people depends on the kind of harm, its severity, and its significance (how many people are put at a disadvantage compared to another group). Statistical assessments of group differences are an important tool for detecting unfair and discriminatory uses of AI.
  • Design with empathy, diversity and respect in mind.
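One common way to run the statistical assessment mentioned above is to compare favourable-outcome rates across groups (often called demographic parity). The sketch below is a minimal illustration, assuming hypothetical binary outcomes split by a protected attribute; the group names and data are invented, and the acceptable gap is a threshold you would set per context:

```python
# Minimal sketch of a group-difference check (hypothetical data).
# Demographic parity difference: the gap in favourable-outcome rates
# between the best-treated and worst-treated group.

def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(groups):
    """Largest gap in favourable-outcome rate across groups.

    `groups` maps a group label to a list of binary outcomes
    (1 = favourable decision, 0 = unfavourable).
    """
    rates = {label: positive_rate(o) for label, o in groups.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outcomes split by a protected attribute:
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% favourable
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% favourable
}

gap = demographic_parity_difference(outcomes_by_group)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A gap near 0 suggests groups receive favourable outcomes at similar rates; a large gap is a signal to investigate the data, features, and model for the kinds of bias described in this card. Libraries such as Fairlearn provide this and related metrics off the shelf.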