Can our AI system represent different norms and values without creating ambiguity?
- Can we build a model that is inclusive?
- Could cultural and language differences affect the ethical nuance of your algorithm? Even well-meaning values can create unintended consequences.
- Must the AI system understand the world in all its different contexts?
- Could ambiguity in rules you teach the AI system be a problem?
- Can your system interact equitably with users from different cultures and with different abilities?
If you answered Yes, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Consider designing with value alignment, which means ensuring consideration of existing values and sensitivity to a wide range of cultural norms and values.
- Make sure that when you test the product, you include a wide diversity of user types; see the sketch after this list for one way to check performance across user groups.
- Think carefully about what diversity means in the context where the product is going to be used.
- Remember that this is a team effort and not an individual decision!
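The testing recommendation can be made concrete with disaggregated evaluation: measuring a quality metric separately for each user group instead of only in aggregate. The sketch below is a minimal, hypothetical illustration; the `StubModel`, the record format, and the group labels are placeholder assumptions, not part of plot4ai.

```python
# Minimal sketch: disaggregated evaluation of a model across user groups.
# Assumes a (features, label, group) test record format and a model with a
# `predict` method -- both hypothetical placeholders for your own setup.

from collections import defaultdict


def accuracy_by_group(model, test_records):
    """Compute accuracy separately for each user group in the test set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, label, group in test_records:
        prediction = model.predict(features)
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}


# Example usage with a stub model and toy records:
class StubModel:
    def predict(self, features):
        return features["score"] > 0.5


records = [
    ({"score": 0.9}, True, "group_a"),
    ({"score": 0.2}, False, "group_a"),
    ({"score": 0.7}, False, "group_b"),
    ({"score": 0.4}, True, "group_b"),
]

print(accuracy_by_group(StubModel(), records))
```

Large gaps between per-group scores are a signal that the product may not interact equitably with all users and that more diverse test data, or design changes, are needed.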