Could the AI system promote certain values or beliefs to users?
- Could cultural and language differences affect the ethical nuance of your algorithm? Well-meaning values can create unintended consequences.
- Must the AI system understand the world in all its different contexts?
- Could ambiguity in rules you teach the AI system be a problem?
- Can your system interact equitably with users from different cultures and with different abilities?
If you answered yes, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Consider designing for value alignment: ensure consideration of existing values and sensitivity to a wide range of cultural norms and values.
- Make sure that when you test the product you include a wide diversity of user types.
- Think carefully about what diversity means in the context where the product is going to be used.
- Remember that this is a team effort and not an individual decision.
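One concrete way to act on the testing recommendation above is to disaggregate evaluation results by user group rather than reporting a single overall score. The sketch below is a minimal, hypothetical illustration (the group names, metric, and data are invented for this example, not part of any specific toolkit): it computes accuracy per group so that a large gap between groups can surface inequitable behavior before release.

```python
# Hypothetical sketch: disaggregated evaluation of a system's accuracy
# across user groups. Group labels and records here are illustrative only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, expected) tuples."""
    hits = defaultdict(int)    # correct predictions per group
    totals = defaultdict(int)  # total predictions per group
    for group, predicted, expected in records:
        totals[group] += 1
        if predicted == expected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative test results tagged with a user-group attribute.
results = [
    ("group_a", "approve", "approve"),
    ("group_a", "deny", "approve"),
    ("group_b", "approve", "approve"),
    ("group_b", "approve", "approve"),
]
per_group = accuracy_by_group(results)
# A large gap between the best- and worst-served groups is a signal
# that the product may not interact equitably with all users.
gap = max(per_group.values()) - min(per_group.values())
```

This is only one narrow slice of equity testing; which groups to compare, and which metric matters, is exactly the kind of question the recommendations say should be decided by the team, not an individual.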
Interesting resources/references
- Freedom of thought and religion (Universal Declaration of Human Rights); Article 22, Cultural, religious and linguistic diversity, and Article 10, Freedom of thought, conscience and religion (Charter of Fundamental Rights of the European Union)
- Value alignment
- Online Ethics Canvas
- AI Values and Alignment