Could the AI system become persuasive, causing harm to the individual?


Safety Category
Design Phase | Input Phase | Model Phase | Output Phase
  • This is of special importance in Human Robot Interaction (HRI): If the robot can achieve reciprocity when interacting with humans, could there be a risk of manipulation and human compliance?
  • Reciprocity is a social norm of responding to a positive action with another positive action, rewarding kind actions. As a social construct, reciprocity means that in response to friendly actions, people are frequently much nicer and more cooperative than the self-interest model predicts; conversely, in response to hostile actions they are frequently much nastier and even brutal. (Source: Wikipedia)

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Signals of susceptibility coming from a robot or computer can affect humans' willingness to cooperate with it or to take its advice.
  • It is important to consider and test this scenario whenever your AI system interacts with humans and some form of collaboration or cooperation is expected.
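One hedged way to test the scenario above is to compare how often humans follow a robot's advice when it uses reciprocity cues versus a neutral baseline. The sketch below is illustrative only: the trial data, the `persuasion_gap` helper, and the 0.10 risk threshold are assumptions for the example, not part of the PLOT4ai card.

```python
# Hypothetical sketch: measuring whether reciprocity cues increase human
# compliance with a robot's suggestions, from logged interaction trials.
# The data and the 0.10 threshold are illustrative assumptions.

def compliance_rate(trials):
    """Fraction of trials in which the participant followed the robot's advice."""
    return sum(t["complied"] for t in trials) / len(trials)

def persuasion_gap(reciprocal_trials, neutral_trials):
    """Difference in compliance rate attributable to reciprocity cues."""
    return compliance_rate(reciprocal_trials) - compliance_rate(neutral_trials)

# Example logged trials (hypothetical data).
reciprocal = [{"complied": True}] * 8 + [{"complied": False}] * 2
neutral = [{"complied": True}] * 5 + [{"complied": False}] * 5

gap = persuasion_gap(reciprocal, neutral)
print(f"persuasion gap: {gap:.2f}")
if gap > 0.10:  # illustrative risk threshold
    print("Flag: reciprocity cues may be nudging users toward compliance")
```

A persistent positive gap in a controlled comparison like this would suggest the manipulation risk the card describes and warrant a design review.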