If we plan to deploy a third-party AI tool, have we assessed our shared responsibility for its potential impact on users?


Categories: Accountability & Human Oversight, Cybersecurity
Phases: Design, Input, Deploy, Monitor
Even when you use a third-party tool, you may still have a responsibility towards the users it affects: employees, job applicants, patients, and so on. It is also your responsibility to make sure that the AI system you choose will not cause harm to individuals.

If you answered No, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

If personal data is involved, review which responsibilities are yours (see Articles 24 and 28 of the GDPR).

You can also start by checking:

  • That you have the right agreements in place with the third-party provider.
  • That the origin and data lineage of their datasets are verified.
  • How their models are fed: do they anonymize the data?
  • Whether you have assessed their security, ethical data handling, quality processes, and measures to prevent bias and discrimination in their AI system.
  • That you have informed users accordingly.
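The checklist above can be tracked programmatically. As a minimal sketch, the items could be encoded as boolean fields of a small record, with any open item flagging potential risk, matching the card's "No or not sure means you might be at risk" logic. All field and function names here are illustrative assumptions, not part of the PLOT4AI library.

```python
# Hypothetical sketch of the third-party AI vendor checklist above.
# Field names are assumptions chosen to mirror the bullet points.
from dataclasses import dataclass, fields


@dataclass
class VendorAssessment:
    agreements_in_place: bool = False       # right agreements with the provider
    data_lineage_verified: bool = False     # origin/lineage of their datasets
    anonymization_reviewed: bool = False    # how their models are fed
    security_and_bias_assessed: bool = False  # security, ethics, bias measures
    users_informed: bool = False            # users informed accordingly

    def open_items(self) -> list[str]:
        """Names of checklist items not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def at_risk(self) -> bool:
        """Per the card: any open ('No' or unsure) item means possible risk."""
        return bool(self.open_items())


assessment = VendorAssessment(agreements_in_place=True, users_informed=True)
print(assessment.at_risk())     # True: three items are still open
print(assessment.open_items())
```

This keeps the assessment auditable: the list of open items can be reviewed with the provider before deployment.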