Library

PLOT4ai is a library currently containing 86 threats related to AI/ML, classified into 8 different categories.

If you are new to PLOT4ai, read the HOW TO first.

Plot4ai - Phases and Categories

Threat Modeling Categories

The categories are:
  • Technique & Processes
  • Accessibility
  • Identifiability & Linkability
  • Security
  • Safety
  • Unawareness
  • Ethics & Human Rights
  • Non-compliance

The cards are listed below. (Please note: they can also be downloaded as CSV or PDF for offline usage.)

  • Is the task or assignment completely clear?
  • Can we assure that the data that we need will be complete and trustworthy?
  • Can the data be representative of the different groups/populations?
  • Did we identify all the important stakeholders needed for the project?
  • Does the model need to be explainable for the users or affected persons?
  • Once our model is running, can we keep feeding it data?
  • Is human intervention necessary to oversee the automatic decision making (ADM) process of the AI system?
  • Can the channels that we will utilize to collect real-time input become a problem?
  • Can all classes or groups be represented?
  • When datasets from external sources are updated, can we receive and process the new data on time?
  • Could the legitimacy of the data sources that we need be an issue?
  • Do we have enough dedicated resources to monitor the algorithm?
  • Can we collect all the data that we need for the purpose of the algorithm?
  • Can our system's user interface be used by those with special needs or disabilities?
  • Do we need to offer a redress mechanism to the users?
  • Do we need to implement an age gate to use our product?
  • If users need to provide consent, can we make the information (including the logic behind the algorithm) easily available?
  • Could the user perceive the message from the AI system in a different way than intended?
  • Could the learning curve of the product be an issue? Can it be difficult to use?
  • Can the data used to feed the model be linked to individuals?
  • Could actions be incorrectly attributed to an individual or group?
  • Could we be revealing information that a person has not chosen to share?
  • Do we need to red-team/pen test the AI system?
  • Are our APIs securely implemented?
  • Is our data storage protected?
  • If our AI system uses randomness, is the source of randomness properly protected?
  • Is our model suited for processing confidential information?
  • Can our AI system scale in performance from data input to data output?
  • Are we protected from an insider threat?
  • Are we protected against model sabotage?
  • Could environmental phenomena or natural disasters have a negative impact on our AI system?
  • Are we protected from perturbation attacks?
  • Are we protected from poisoning attacks?
  • Are we protected from model inversion attacks?
  • Are we protected from membership inference attacks?
  • Are we protected from model stealing attacks?
  • Are we protected from attacks that reprogram deep neural nets?
  • Are we protected from adversarial examples?
  • Are we protected from malicious ML providers who could recover training data?
  • Are we protected from attacks on the ML supply chain?
  • Are we protected from exploits of software dependencies of our ML systems?
  • Could our AI system be subject to malicious use, misuse or inappropriate use?
  • In case of system failure, could users be negatively impacted?
  • Could our AI system cause a negative impact on the environment?
  • Could deployment of our model in a different context be a problem?
  • Could the AI system become persuasive, causing harm to the individual?
  • Can we transform our RL agent's reward function to avoid undesired negative side effects on the environment?
  • Can we prevent our agents from "gaming" their reward functions?
  • Can our RL agent efficiently achieve goals for which feedback is very expensive or difficult to obtain?
  • Can our ML system be robust to changes in the data distribution?
  • Can our RL agents learn about their environment without executing catastrophic actions?
  • Do we need to inform users that they are interacting with an AI system?
  • Can we provide the necessary information to the users about possible impacts, benefits and potential risks?
  • Can users anticipate the actions of the AI system?
  • Bias & Discrimination: could there be groups who might be disproportionately affected by the outcomes of the AI system?
  • Can we expect mostly positive reactions from the users or individuals?
  • Could the AI system have an impact on human work?
  • Could the AI system have a negative impact on society at large?
  • Could the AI system limit the right to be heard?
  • Could the system have a big impact on decisions regarding the right to life?
  • Could the AI system affect the freedom of expression of its users?
  • Could the AI system affect the freedom of its users?
  • Could the AI system affect the right to a fair hearing?
  • Could children be part of our users' group?
  • Could cultural and language differences be an issue when it comes to the ethical nuance of our algorithm?
  • Could our product fail to represent current social needs and social context?
  • Could our product deny access to jobs, housing, insurance, benefits or education?
  • Could our AI system affect human autonomy by interfering with the user's decision-making process in an unintended and undesirable way?
  • Are our training data and labelling produced respecting the dignity and wellbeing of the labour force involved?
  • Are we going to collect/use behavioural data to feed the AI system?
  • Can we build a model that is inclusive?
  • Could our AI system automatically label or categorize people?
  • Is data minimisation possible?
  • Could we be processing sensitive data?
  • Do we have a lawful basis for processing the data?
  • Is the creation of the AI system proportional to the end goal?
  • Could the purpose limitation principle be an issue?
  • Can we comply with all the applicable GDPR data subjects' rights?
  • Have we considered starting with a data protection impact assessment (DPIA)?
  • If children or other vulnerable users are part of the user group, do our third-party providers need to comply in their data processing?
  • Do we need to use metadata to feed our model?
  • Will our product make automatic decisions without human intervention?
  • Could copyright restrictions on the dataset be an issue?
  • Are we planning to use a third-party AI tool?
  • Could we have geolocation restrictions for implementing the product?
  • Can we comply with the storage limitation principle?

References

I have created an overview of all the sources used for the creation of this library; it can be found under References.

Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Is the task or assignment completely clear?
  • Is it clear which problem you want to solve?
  • Is it clear what you want to produce and what the output should be?
  • Are the benefits of the solution clear?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Clearly define the problem and outcome you are optimizing for.
  • Assess if your AI system will be well-suited for this purpose.
  • Always discuss if there are alternative ways to solve the problem.
  • Define success! Working with individuals who may be directly affected can help you identify an appropriate way to measure success.
  • Make sure there is a stakeholder involved (product owner for instance) with enough knowledge of the business and a clear vision about what the model needs to do.
  • Did you try analytics first? In this context, analytics could also offer inspiring views that help you decide on the next steps. They can be a good source of information and are sometimes enough to solve the problem without the need for AI/ML.
Technique & Processes Category
Design Phase, Input Phase
Can we assure that the data that we need will be complete and trustworthy?

Can you avoid the known principle of “garbage in, garbage out”? Your AI system is only as reliable as the data it works with.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Verify the data sources.
    • Is there information missing within the dataset?
    • Are all the necessary classes represented?
    • Does the data belong to the correct time frame and geographical coverage?
    • Evaluate which extra data you need to collect/receive.
  • Carefully consider representation schemes, especially in cases of text, video, APIs, and sensors. Text representation schemes are not all the same. If your system is counting on ASCII and it gets Unicode, will your system recognize the incorrect encoding? Source: BerryVilleiML
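The encoding concern above can be sketched as a simple input guard. This is an illustrative snippet (the function name is ours, not part of PLOT4ai or BerryVilleiML):

```python
def matches_expected_encoding(raw: bytes, expected: str = "ascii") -> bool:
    """Return True if the raw input decodes cleanly with the encoding
    the pipeline was designed for; False signals a record to quarantine."""
    try:
        raw.decode(expected)
        return True
    except UnicodeDecodeError:
        return False

# Flag records that arrive in an unexpected encoding instead of
# silently corrupting downstream features.
batch = [b"plain ascii text", "caf\u00e9".encode("utf-8")]
flagged = [raw for raw in batch if not matches_expected_encoding(raw)]
```

Here the second record (UTF-8 with a non-ASCII character) is flagged for review rather than silently passed on to the model.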
Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Can the data be representative of the different groups/populations?

It is important to reduce the risk of bias and different types of discrimination. Did you consider diversity and representativeness of users/individuals in the data?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Who is covered and who is underrepresented?
  • Prevent disparate impact: when outcomes for members of a minority group are disproportionately worse because of how the group is represented. Consider measuring the accuracy for minority classes too, instead of measuring only the total accuracy. Note that adjusting the weighting factors to avoid disparate impact can result in positive discrimination, which has its own issues: disparate treatment.
  • One approach to addressing class imbalance is to randomly resample the training dataset. This technique can help rebalance the class distribution when classes are under- or over-represented:
    • random oversampling (i.e. duplicating samples from the minority class)
    • random undersampling (i.e. deleting samples from the majority class)
  • There are trade-offs when determining an AI system’s metrics for success. It is important to balance performance metrics against the risk of negatively impacting vulnerable populations.
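The two resampling strategies above can be sketched in plain Python (illustrative helpers; in practice a library such as imbalanced-learn provides equivalents):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate random samples from smaller classes until every class
    matches the size of the largest one (random oversampling)."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            j = rng.choice(idx)
            X_out.append(X[j])
            y_out.append(label)
    return X_out, y_out

def random_undersample(X, y, seed=0):
    """Randomly drop samples from larger classes until every class
    matches the size of the smallest one (random undersampling)."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = min(counts.values())
    keep = []
    for label in counts:
        idx = [i for i, lab in enumerate(y) if lab == label]
        keep.extend(rng.sample(idx, target))
    keep.sort()
    return [X[i] for i in keep], [y[i] for i in keep]
```

Note the trade-off: oversampling risks overfitting on duplicated minority samples, while undersampling discards potentially useful majority-class data.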
Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Did we identify all the important stakeholders needed for the project?
  • Do you have all the necessary stakeholders on board? Not having the right people to give the necessary input can put the design of the AI system in danger.
  • Think, for instance, of when attributes or variables need to be selected, or when you need to understand the different data contexts.
  • Data scientists should not be the only ones making assumptions about variables; it should really be a team effort.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Identify and involve the people you need in time, across the whole life cycle of the AI system. This will avoid unnecessary rework and frustration.
  • Identifying who is responsible for making the decisions, and how much control they have over the decision-making process, allows for clearer tracking of responsibility in the AI's development process.
Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Does the model need to be explainable for the users or affected persons?

Do you need to be able to give the user a clear explanation of the logic the AI system used to reach a certain decision? And can that decision have a big impact on the user?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Evaluate the types of models that you could use to solve the problem as specified in your task.
  • Consider what the impact is if certain black-box models cannot be used and interpretability tools do not offer sufficient results. You might need to evaluate a possible change in strategy.
  • Data scientists can evaluate the impact from a technical perspective and discuss it with the rest of the stakeholders. The decision remains a team effort.
Technique & Processes Category
Design Phase, Input Phase, Output Phase
Once our model is running, can we keep feeding it data?
  • Will you use the output from other models to feed the model again (looping)? Or will you use other sources?
  • Are you sure this data will be continuously available?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider how the model will keep learning. Design a strategy to prevent issues with the next steps.
  • Imagine you planned to feed your model with input obtained by mining surveys, and it appears these surveys contain a lot of free-text fields. To prepare that data and avoid issues (bias, inaccuracies, etc.) you might need extra time. Consider these types of scenarios, which could impact the whole life cycle of your product!


Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Is human intervention necessary to oversee the automatic decision making (ADM) process of the AI system?
  • Do humans need to review the process and the decisions of the AI system? Consider the impact that this could have on the organisation.
  • Do you have enough trained employees available for this role?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is important that people are available for this role and that they receive specific training on how to exercise oversight. The training should teach them how to perform the oversight without being biased by the decision of the AI system (automation bias).

Technique & Processes Category, Security Category
Design Phase, Input Phase, Output Phase
Can the channels that we will utilize to collect real-time input become a problem?
  • Are these channels trustworthy?
  • What will happen in case of failure?
  • Think about IoT devices used as sensors, for instance.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • If you are collecting/receiving data from sensors, consider estimating the impact it could have on your model if any of the sensors fail and your input data gets interrupted or corrupted.
  • Sensor blinding attacks are one example of a risk faced by poorly designed input gathering systems. Note that consistent feature identification related to sensors is likely to require human calibration. Source: BerryVilleiML
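As a minimal sketch of guarding against failing or corrupted sensor input (the helper and the valid range are hypothetical, e.g. a temperature sensor):

```python
def guard_reading(value, valid_range=(-40.0, 85.0), last_good=None):
    """Return (reading, trusted). Missing or out-of-range sensor values
    fall back to the last known good reading and are flagged as
    untrusted, so the model is never silently fed corrupted input."""
    low, high = valid_range
    if value is None or not (low <= value <= high):
        return last_good, False
    return value, True

# A stream with a dropout (None) and a corrupted spike (999.0):
last_good, cleaned = None, []
for v in [21.5, None, 999.0, 22.1]:
    reading, trusted = guard_reading(v, last_good=last_good)
    if trusted:
        last_good = reading
    cleaned.append((reading, trusted))
```

The untrusted flags can then drive alerts, or be passed to the model as an explicit "sensor degraded" feature.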
Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Can all classes or groups be represented?

When applying statistical generalisation, there is a risk of making incorrect inferences due to misrepresentation. For instance: in a postal code area where mostly young families live, the few older families living there can be discriminated against because they are not properly represented in the group.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

When using techniques like statistical generalisation, it is important to know your data well and to become familiar with who is and who is not represented in the samples. Check the samples against expectations that can be easily verified. For example, if half the population is known to be female, then you can check whether approximately half the sample is female.
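The female/male example can be turned into a simple statistical check (an illustrative helper using a normal approximation for the sampling error):

```python
from math import sqrt

def representative(sample_hits, sample_size, expected_p, z=1.96):
    """Check whether the observed share of a group in the sample lies
    within ~z standard errors of the share known for the population."""
    observed = sample_hits / sample_size
    stderr = sqrt(expected_p * (1 - expected_p) / sample_size)
    return abs(observed - expected_p) <= z * stderr

# If half the population is known to be female:
ok = representative(480, 1000, 0.5)      # 48% observed: plausible
skewed = representative(300, 1000, 0.5)  # 30% observed: clearly off
```

Checks like this only catch imbalance on attributes you already know the population distribution for; they do not replace a broader representativeness analysis.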

Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
When datasets from external sources are updated, can we receive and process the new data on time?
  • This could be especially risky in health and finance environments. How much change are you expecting in the data you receive?
  • How can you make sure to receive the updates on time?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Not only do you need to be able to trust the sources, but you also need to design a process in which data is prepared on time to be used in the model, and in which you can consider, in a timely manner, the impact it could have on the output of the model, especially when this could negatively impact the users. This process can be designed once you know how often changes in the data can be expected and how big those changes are.

Technique & Processes Category
Design Phase, Input Phase, Output Phase
Could the legitimacy of the data sources that we need be an issue?
  • Data lineage can be necessary to demonstrate trust as part of your information transparency policy, but it can also be very important when it comes to assessing impact on the data flow. If sources are not verified and legitimised, you run risks such as data being wrongly labelled.
  • Do you know where you need to get the data from? Who is responsible for its collection, maintenance and dissemination? Are the sources verified? Do you have the right agreements in place? Are you allowed to receive or collect that data? Also keep ethical considerations in mind!

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Develop a robust understanding of your relevant data feeds, flows and structures, such that if any changes occur to the model data inputs, you can assess any potential impact on model performance. In the case of third-party AI systems, contact your vendor to ask for this information.
  • If you are using synthetic data, you should know how it was created and the properties it has. Also keep in mind that synthetic data might not be the answer to all your privacy-related problems; synthetic data does not always provide a better trade-off between privacy and utility than traditional anonymisation techniques.
  • Do you need to share models and combine them? The use of Model Cards and Datasheets can help provide the source information.
Technique & Processes Category
Design Phase, Input Phase, Model Phase, Output Phase
Do we have enough dedicated resources to monitor the algorithm?

Do you already have a process in place to monitor the quality of the output and system errors? Do you have resources to do this? Not having the right process and resources in place could have an impact on the project deadline and the organisation.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Put a well-defined process in place to monitor if the AI system is meeting the intended goals.
  • Define failsafe fallback plans to address AI system errors of whatever origin and put governance procedures in place to trigger them.
  • Put measures in place to continuously assess the quality of the output data: e.g. check that prediction scores are within expected ranges, detect anomalies in the output, and review the input data that led to a detected anomaly.
  • Does the data measure what you need to measure? You could get measurement errors if data is not correctly labelled.
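The output-quality checks above can be sketched as a small monitoring helper (illustrative only; sensible thresholds and score ranges depend on your model):

```python
from statistics import mean, stdev

def monitor_output(scores, low=0.0, high=1.0, z_thresh=3.0):
    """Flag prediction scores outside the expected range, plus
    statistical outliers within range, so the corresponding input
    data can be reviewed by a human."""
    out_of_range = [i for i, s in enumerate(scores)
                    if not (low <= s <= high)]
    in_range = [s for i, s in enumerate(scores) if i not in out_of_range]
    anomalies = []
    if len(in_range) >= 2 and stdev(in_range) > 0:
        mu, sd = mean(in_range), stdev(in_range)
        anomalies = [i for i, s in enumerate(scores)
                     if i not in out_of_range and abs(s - mu) > z_thresh * sd]
    return {"out_of_range": out_of_range, "anomalies": anomalies}
```

In production this kind of check would typically run on batches of predictions and trigger the failsafe fallback plans and governance procedures the card recommends.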
Technique & Processes Category
Design Phase, Input Phase, Output Phase
Can we collect all the data that we need for the purpose of the algorithm?

Is it possible that you could face difficulties obtaining certain types of data? This could be due to different reasons such as legal, proprietary, financial, physical, etc. This could put the whole project in danger.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

In the early phases of the project (as soon as the task becomes clearer), start considering which raw data and types of datasets you might need. You might not have the definitive answer until you have tested the model, but it will already help to avoid extra delays and surprises. You might have to involve your legal and financial departments. Remember that this is a team effort.

Accessibility Category
Design Phase, Output Phase
Can our system's user interface be used by those with special needs or disabilities?
  • Is it necessary that your AI system is also accessible and usable for users of assistive technologies (such as screen readers)?
  • Is it possible to provide text alternatives for instance?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Implement Universal Design principles during every step of the planning and development process. This is not only important for web interfaces but also when AI systems/robots assist individuals.
  • Test the accessibility of your design with different users (also with disabilities).
Accessibility Category, Non-compliance Category
Design Phase, Model Phase, Output Phase
Do we need to offer a redress mechanism to the users?
  • For applications that can adversely affect individuals, you might need to consider implementing a redress by design mechanism where affected individuals can request remedy or compensation.
  • Article 22(3) GDPR provides individuals with a right to obtain human intervention if a decision is made solely by an AI system and it also provides the right to contest the decision.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Think about implementing mechanisms to effectively detect and rectify wrong decisions made by your system.
  • Provide a mechanism to ignore or dismiss undesirable features or services.
  • Wrong decisions could also have an impact on people who have not been the target of the data collection (data spillovers). Consider designing a way to offer all affected people the opportunity to contest the decisions of your system and request remedy or compensation. This mechanism should be easily accessible, and it implies that you need an internal process through which redress can be effectively executed. This also has an impact on the resources/skills needed to fulfil this process.
  • Consider this a necessary step to ensure responsibility and accountability.
Accessibility CategoryTechnique & Processes Category
Design PhaseOutput Phase
Do we need to implement an age gate to use our product?
  • Is your product not meant to be used by children? You might need to implement an age verification mechanism to prevent children from accessing the product.
  • Age verification can also be important to provide the right diagnosis (health sector).

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Clearly specify in the user instructions for which age group the application is built. Labels or symbols can be very helpful.
  • Consider which design is most appropriate for your use case, weigh the possible risks associated with your design choice against the mitigating measures you can implement to reduce that risk, and document the residual risks that you decide to accept.
  • Test the accessibility of your design with different age groups.

Accessibility CategoryNon-compliance Category
Design PhaseModel PhaseOutput Phase
If users need to provide consent, can we make the information (including the logic behind the algorithm) easily available?
  • Can the information be easily accessible and readable?
  • Do you need to build a special place for it (think of a robot, where you might need a screen for showing the text)?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • As part of privacy compliance you need to provide clear information about the processing and the logic of the algorithm. This information should be easily readable and accessible. During the design phase, consider when and how you are going to provide this information. Especially in robots using AI this can be a challenge.
  • Comply with accessibility rules.
Accessibility CategoryUnawareness Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could the user perceive the message from the AI system in a different way than intended?
  • Is the perception of the provided explanation the same as the one intended?
  • Explainability is critical for end-users in order to take informed and accountable actions.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Understanding who is going to interact with the AI system can help to make the interaction more effective. Identify your different user groups.
  • Involve communication experts and do enough user testing to reduce the gap between the intended and the perceived meaning.
Accessibility CategorySafety Category
Design PhaseModel PhaseOutput Phase
Could the learning curve of the product be an issue? Can it be difficult to use?
  • Does usage of the AI system require new (digital) skills?
  • How easily can users be expected to learn how to use the product?
  • Difficulty learning how the system works could also endanger users and have consequences for the reputation of the product or organisation.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Provide assistance, appropriate training material and disclaimers to users on how to adequately use the system.
  • The words and language used in the interface, the complexity and lack of accessibility of some features could exclude people from using the application. Consider making changes in the design of the product where necessary.
  • Consider this also when children are future users.
Identifiability & Linkability Category
Design PhaseInput PhaseModel PhaseOutput Phase
Can the data used to feed the model be linked to individuals?

Do you need to use unique identifiers in your training dataset? If personal data is not necessary for the model you would not really have a legal justification for using it.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Unique identifiers might be included in the training set when you want to be able to link the results to individuals. Consider using pseudo-identifiers or other techniques that help you protect your data, and be able to legally justify (from a technical perspective) why you think you need this.
  • Document the measures you are taking to protect the data. Consider if your measures are necessary and proportional.
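One way to act on the pseudo-identifier recommendation above is to replace direct identifiers with a keyed hash before data enters the training set. This is a minimal sketch; the field names and the key-handling convention are illustrative assumptions, not part of PLOT4ai.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudo-identifier).

    The same input always maps to the same token, so records can still be
    linked within the dataset, but reversing the mapping requires the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical usage: strip direct identifiers before training.
KEY = b"keep-this-key-outside-the-training-environment"  # assumption: a managed secret
record = {"patient_id": "P-12345", "age": 42}
record["patient_id"] = pseudonymize(record["patient_id"], KEY)
```

Using an HMAC rather than a plain hash means an attacker who obtains the dataset cannot simply hash candidate identifiers and look for matches.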
Identifiability & Linkability Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could actions be incorrectly attributed to an individual or group?

Is your AI system taking decisions that could be linked to the wrong person? Think of a facial recognition system, or a risk prediction model giving inaccurate predictions that negatively affect an individual by mistake.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Evaluate the possible consequences of inaccuracies of your AI system and implement measures to prevent these errors from happening: avoiding bias and discrimination during the life cycle of the model, ensuring the quality of the input data, implementing a strict human oversight process, ways to double check the results with extra evidence, implementing safety mechanisms and a way to redress, etc.
  • Assess the impact on the different human rights of the individual.
  • Consider not implementing such a system if you cannot mitigate the risks.
Identifiability & Linkability Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could we be revealing information that a person has not chosen to share?
  • An example of this can be location data or behaviour
  • How can you make sure the product doesn’t inadvertently disclose sensitive or private information during use (e.g., indirectly inferring user locations or behaviour)?
  • Could movements or actions be revealed through data aggregation?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Be careful when considering making data public that you think is anonymised. Location data and routes can sometimes be de-anonymised (e.g. the case where a fitness app published a heatmap that disclosed sensitive locations).
  • It is also important to offer privacy by default: offer the privacy settings by default at the maximum protection level. Let the users change the settings after having offered clear information about the consequences of reducing the privacy levels.
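A rough way to check whether aggregated data could still single people out is k-anonymity over quasi-identifiers: every combination of quasi-identifier values should be shared by at least k records. This is a simplified sketch; the field names and k value are assumptions for illustration.

```python
from collections import Counter

def violates_k_anonymity(rows, quasi_identifiers, k=5):
    """Return the quasi-identifier combinations shared by fewer than k rows.

    Any combination below k could single out individuals when published.
    """
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return [combo for combo, n in counts.items() if n < k]

# Hypothetical aggregated location/behaviour data.
rows = [
    {"zip": "1011", "age_band": "30-39", "route": "A"},
    {"zip": "1011", "age_band": "30-39", "route": "B"},
    {"zip": "9999", "age_band": "80-89", "route": "C"},  # unique combination: risky
]
rare = violates_k_anonymity(rows, ["zip", "age_band"], k=2)
```

Note that k-anonymity alone does not protect against all inference attacks; it is a first screen, not a guarantee.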
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Do we need to red-team/pen test the AI system?

Do you need to test the security of your AI system before it goes live? This could have an impact on your project deadlines.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Include the time you might need for a pen test in your project planning. Sometimes this can take weeks: you might have to hire an external party, agree on the scope, sign the corresponding agreements and even plan a retest.

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are our APIs securely implemented?

APIs connect computers or pieces of software to each other. APIs are common attack targets in security and are in some sense your public front door. They should not expose information about your system or model. Source: BerryVilleiML

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Check how you handle time and state, and how authentication is implemented in your APIs.
  • Make sure that sensitive information such as API secrets is not sent in your calls.
  • Implement encryption at rest and in transit (TLS), and test your APIs for vulnerabilities often.
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Is our data storage protected?

Is your data stored and managed in a secure way? Are you the only one with access to your data sources? Source: BerryVilleiML

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Implement access control rules.
  • Verify the security of the authentication mechanism (and of the system as a whole).
  • Consider the risk when utilizing public/external data sources.
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
If our AI system uses randomness, is the source of randomness properly protected?

Randomness plays an important role in stochastic systems. An ML system that is depending on Monte Carlo randomness to work properly may be derailed by not-really-random “randomness”. “Random” generation of dataset partitions may be at risk if the source of randomness is easy to control by an attacker interested in data poisoning. Source: BerryVilleiML

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

The use of cryptographic randomness sources is encouraged. When it comes to ML, setting weights and thresholds “randomly” must be done with care: many pseudo-random number generators (PRNGs) are not suitable, and PRNG loops can really damage system behaviour during learning. Cryptographic randomness also directly intersects with ML when it comes to differential privacy. Using the wrong sort of random number generator can lead to subtle security problems. Source: BerryVilleiML
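As a concrete illustration of the recommendation above, dataset partitioning can use an OS-backed cryptographic source instead of a seedable PRNG that an attacker interested in data poisoning might predict or control. A minimal sketch using only the standard library:

```python
import secrets

def secure_split(indices, test_fraction=0.2):
    """Partition dataset indices with a CSPRNG (secrets.SystemRandom)
    rather than a seedable PRNG whose state an attacker could reproduce."""
    rng = secrets.SystemRandom()  # draws from the OS entropy source
    shuffled = list(indices)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = secure_split(range(100))
```

`secrets.SystemRandom` cannot be seeded, which is exactly the point: the partition cannot be reproduced or steered by controlling a seed.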

Security Category
Design PhaseModel Phase
Is our model suited for processing confidential information?

Some algorithms may be unsuited for processing confidential information. For example, using a non-parametric method like k-nearest neighbours in a situation with sensitive medical records is probably a bad idea since exemplars will have to be stored on production servers. Algorithmic leakage is an issue that should be considered carefully. Source: BerryVilleiML

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

When selecting the algorithm, perform analyses and tests to rule out algorithmic leakage.

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Can our AI system scale in performance from data input to data output?

Can your algorithm scale in performance from the data it learned on to real data? In online situations, the rate at which data comes into the model may not align with the rate of anticipated data arrival. This can lead both to outright ML system failure and to a system that “chases” its own tail. Source: BerryVilleiML

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Find out what the rate would be of expected data arrival to your model and perform tests in a similar environment with similar amount of data input.
  • Implement measures to make your model scalable.
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from an insider threat?

AI designers and developers may deliberately expose data and models for a variety of reasons, e.g. revenge or extortion. Integrity, data confidentiality and trustworthiness are the main impacted security properties. Source: ENISA

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Implement on and off boarding procedures to help guarantee the trustworthiness of your internal and external employees.
  • Enforce separation of duties and least privilege
  • Enforce the usage of managed devices with appropriate policies and protective software.
  • Awareness training
  • Implement strict access control and audit trail mechanisms.
Security Category
Design PhaseModel Phase
Are we protected against model sabotage?

Sabotaging the model is a nefarious threat that refers to exploitation or physical damage of libraries and machine learning platforms that host or supply AI/ML services and systems. Source: ENISA

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Implement security measures to protect your models against sabotage.
  • Assess the security profile of third party tooling and providers
  • Consider implementing a disaster recovery plan with mitigation measures for this type of attack.
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could environmental phenomena or natural disasters have a negative impact on our AI system?
  • Examples of environmental phenomena are heating, cooling and climate change.
  • Examples of natural disasters to take into account are earthquakes, floods and fires.

Environmental phenomena may adversely influence the operation of IT infrastructure and hardware systems that support AI systems. Natural disasters may lead to unavailability or destruction of the IT infrastructure and hardware that enables the operation, deployment and maintenance of AI systems. Such outages may lead to delays in decision-making, delays in the processing of data streams and entire AI systems being placed offline. Source: ENISA

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Implement a disaster recovery plan considering different scenarios, impact, Recovery Time Objective (RTO), Recovery Point Objective (RPO) and mitigation measures.

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from perturbation attacks?
  • In perturbation style attacks, the attacker stealthily modifies the query to get a desired response.
  • Examples: (Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.)
    • Image: Noise is added to an X-ray image, which makes the predictions go from normal scan to abnormal
    • Text translation: Specific characters are manipulated to result in incorrect translation. The attack can suppress specific word or can even remove the word completely.
  • Random perturbation of labels is also a possible attack, while additionally there is the case of adversarial label noise (intentional switching of classification labels leading to deterministic noise, an error that the model cannot capture due to its generalization bias). Source: ENISA

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Reactive/Defensive Detection Actions:

  • Implement a minimum time threshold between calls to the API providing classification results. This slows down multi-step attack testing by increasing the overall amount of time required to find a successful perturbation.

Proactive/Protective Actions:

  • Develop a new network architecture that increases adversarial robustness by performing feature denoising.
  • Train with known adversarial samples to build resilience and robustness against malicious inputs.
  • Invest in developing monotonic classification with selection of monotonic features. This ensures that the adversary will not be able to evade the classifier by simply padding features from the negative class.
  • Feature squeezing can be used to harden DNN models by detecting adversarial examples.

Response Actions:

  • Issue alerts on classification results with high variance between classifiers, especially when from a single user or small group of users.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
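The minimum-time-threshold defence above can be sketched as a small per-caller throttle in front of the classification API. This is an illustrative sketch, not part of the Microsoft guidance; the class and parameter names are assumptions.

```python
import time

class QueryThrottle:
    """Enforce a minimum interval between classification queries per caller,
    slowing down the many probes a multi-step perturbation attack needs."""

    def __init__(self, min_interval_s=1.0):
        self.min_interval_s = min_interval_s
        self._last_call = {}  # caller_id -> timestamp of last accepted query

    def allow(self, caller_id, now=None):
        """Return True if the query may proceed; False if it came too soon."""
        now = time.monotonic() if now is None else now
        last = self._last_call.get(caller_id)
        if last is not None and now - last < self.min_interval_s:
            return False  # too soon: reject (or delay) the query
        self._last_call[caller_id] = now
        return True

throttle = QueryThrottle(min_interval_s=1.0)
```

In production this state would live in shared storage so the limit holds across API instances, and limits would apply per account rather than per IP to resist distribution of the attack.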

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from poisoning attacks?
  • In a poisoning attack, the goal of the attacker is to contaminate the machine model generated in the training phase, so that predictions on new data will be modified in the testing phase. This attack could also be caused by insiders.
  • Example: in a medical dataset where the goal is to predict the dosage of a medicine using demographic information, researchers introduced malicious samples at an 8% poisoning rate, which changed the dosage by 75.06% for half of the patients.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

Other scenarios:

  • Data tampering: Actors like AI/ML designers and engineers can deliberately or unintentionally manipulate and expose data. Data can also be manipulated during the storage procedure and by means of some processes like feature selection. Besides interfering with model inference, this type of threat can also bring severe discriminatory issues by introducing bias. Source: ENISA
  • An attacker who knows how a raw data filtration scheme is set up may be able to leverage that knowledge into malicious input later in system deployment. Source: BerryVilleiML
  • Adversaries may fine-tune hyper-parameters and thus influence the AI system’s behaviour. Hyper-parameters can be a vector for accidental overfitting. In addition, hard to detect changes to hyper-parameters would make an ideal insider attack. Source: ENISA

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Define anomaly sensors to look at data distribution on a day-to-day basis and alert on variations.
  • Measure training data variation on a daily basis; telemetry for skew/drift.
  • Input validation, both sanitization and integrity checking.
  • Implement measures against insider threats.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
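The daily distribution check recommended above can be as simple as flagging features whose batch mean drifts far from the training baseline. This is a deliberately crude sketch of such an anomaly sensor; the threshold and data are illustrative assumptions, and real deployments would use proper drift statistics per feature.

```python
from statistics import mean, stdev

def drift_alert(baseline, todays_batch, threshold=3.0):
    """Flag when a feature's daily mean drifts more than `threshold` baseline
    standard deviations from the training data: a crude poisoning/skew sensor."""
    mu, sigma = mean(baseline), stdev(baseline)
    shift = abs(mean(todays_batch) - mu) / (sigma or 1.0)
    return shift > threshold

# Hypothetical feature values: a stable baseline vs. an injected batch.
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]
normal_day = [10.0, 10.3, 9.7]
poisoned_day = [17.5, 18.0, 16.8]  # hypothetical malicious samples
```

An alert like this does not prove poisoning; it tells a human to look before the batch reaches retraining.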
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from model inversion attacks?
  • In model inversion, the private features used in machine learning models can be recovered. This includes reconstructing private training data that the attacker should not have access to.
  • Example: an attacker recovers the secret features used in the model through careful queries.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Interfaces to models trained with sensitive data need strong access control.
  • Implement rate-limiting on the queries allowed by the model.
  • Implement gates between users/callers and the actual model by performing input validation on all proposed queries, rejecting anything not meeting the model’s definition of input correctness and returning only the minimum amount of information needed to be useful.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
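The gating recommendation above (validate every query, return only the minimum useful information) can be sketched as a thin wrapper in front of the model. The feature names and the stand-in model are hypothetical, for illustration only.

```python
def guarded_predict(query, model_predict, allowed_features=("age", "bmi")):
    """Gate between callers and the model: reject malformed queries and
    return only the predicted label, not raw scores an attacker could mine."""
    if set(query) != set(allowed_features):
        raise ValueError("query does not match the model's input definition")
    if not all(isinstance(query[f], (int, float)) for f in allowed_features):
        raise ValueError("non-numeric feature value")
    label, _confidence = model_predict(query)
    return {"label": label}  # minimum amount of information needed to be useful

# Hypothetical stand-in model for illustration.
def toy_model(query):
    return ("high_risk" if query["bmi"] > 30 else "low_risk", 0.97)

result = guarded_predict({"age": 52, "bmi": 33.1}, toy_model)
```

Dropping the confidence score from the response is itself a mitigation: model inversion attacks typically rely on fine-grained output scores to guide reconstruction.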

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from membership inference attacks?
  • In a membership inference attack, the attacker can determine whether a given data record was part of the model’s training dataset or not.
  • Example: researchers were able to predict a patient’s main procedure (e.g. the surgery the patient went through) based on attributes (e.g. age, gender, hospital).

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Some research papers indicate that differential privacy would be an effective mitigation; for more information, check Threat Modeling AI/ML Systems and Dependencies.
  • The usage of neuron dropout and model stacking can be effective mitigations to an extent. Using neuron dropout not only increases the resilience of a neural net to this attack, but also increases model performance.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
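To make the differential privacy suggestion above concrete, the core idea is to add noise calibrated to sensitivity/epsilon so the released output barely changes whether any one record is present. This sketch shows the classic Laplace mechanism on an aggregate count (full DP training, e.g. DP-SGD, is more involved); the data and epsilon are illustrative assumptions.

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: add Laplace(sensitivity/epsilon) noise so
    the result reveals little about whether any single record is in the data."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding/removing one record changes a count by at most 1
    scale = sensitivity / epsilon
    # A Laplace sample is the difference of two exponential samples (stdlib only).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical patient ages; release a noisy count of patients over 60.
ages = [34, 71, 45, 68, 52]
noisy = dp_count(ages, lambda a: a > 60, epsilon=1.0)
```

Smaller epsilon means more noise and stronger protection against membership inference, at the cost of accuracy.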

Security Category
Design PhaseModel PhaseOutput Phase
Are we protected from model stealing attacks?
  • In model stealing, the attackers recreate the underlying model by legitimately querying the model. The functionality of the new model is the same as that of the underlying model.
  • Example: in the BigML case, researchers were able to recover the model used to predict if someone should have a good/bad credit risk using 1,150 queries and within 10 minutes.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Minimize or obfuscate the details returned in prediction APIs while still maintaining their usefulness to honest applications.
  • Define a well-formed query for your model inputs and only return results in response to completed, well-formed inputs matching that format.
  • Return rounded confidence values. Most legitimate callers do not need multiple decimal places of precision.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
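The last bullet above (return rounded confidence values) is trivial to implement and cheap to deploy. A minimal sketch, with a hypothetical response shape:

```python
def harden_response(label, confidence, decimals=2):
    """Round the confidence before returning it, so each query leaks less of
    the decision surface and model-extraction queries become less informative."""
    return {"label": label, "confidence": round(confidence, decimals)}

# Hypothetical prediction from a credit-risk model.
resp = harden_response("good_credit", 0.8731592047)
```

Two decimal places are usually plenty for honest callers, while the stripped precision forces an extraction attacker to spend many more queries per learned parameter.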

Security Category
Design PhaseModel Phase
Are we protected from a reprogramming deep neural nets attack?
  • By means of a specially crafted query from an adversary, Machine Learning systems can be reprogrammed to a task that deviates from the creator’s original intent.
  • Example: a classifier trained on ImageNet to recognise one of several categories of images was repurposed to count squares.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Configure a strong client-server mutual authentication and access control to model interfaces.
  • Takedown of the offending accounts.
  • Identify and enforce a service-level agreement (SLA) for your APIs. Determine the acceptable time-to-fix for an issue once reported, and ensure the issue no longer reproduces once the SLA expires.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from adversarial examples?
  • An adversarial example is an input/query from a malicious entity sent with the sole aim of misleading the machine learning system.
  • Example: researchers constructed sunglasses with a design that could fool image recognition systems, which could no longer recognize the faces correctly.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

These attacks manifest themselves because issues in the machine learning layer were not mitigated. As with any other software, the layer below the target can always be attacked through traditional vectors. Because of this, traditional security practices are more important than ever, especially with the layer of unmitigated vulnerabilities (the data/algo layer) being used between AI and traditional software. Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from malicious ML providers who could recover training data?
  • Malicious ML providers could query the model used by a customer and recover this customer’s training data. The training process is either fully or partially outsourced to a malicious third party who wants to provide the user with a trained model that contains a backdoor.
  • Example: researchers showed how a malicious provider presented a backdoored algorithm, wherein the private training data was recovered. They were able to reconstruct faces and texts, given the model alone.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Research papers demonstrating the viability of this attack indicate Homomorphic Encryption would be an effective mitigation. Check for more information Threat Modeling AI/ML Systems and Dependencies
  • Train all sensitive models in-house.
  • Catalog training data or ensure it comes from a trusted third party with strong security practices.
  • Threat model the interaction between the MLaaS provider and your own systems.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from attacks to the ML Supply Chain?
  • Owing to the large resources (data + computation) required to train algorithms, the current practice is to reuse models trained by large corporations and modify them slightly for the task at hand. These models are curated in a Model Zoo. In this attack, the adversary attacks the models hosted in the Model Zoo, thereby poisoning the well for everyone else.
  • Example: researchers showed how it was possible for an attacker to check in malicious code into one of the popular models. An unsuspecting ML developer downloaded this model and used it as part of the image recognition system in their code.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Minimize 3rd-party dependencies for models and data where possible.
  • Incorporate these dependencies into your threat modeling process.
  • Leverage strong authentication, access control and encryption between 1st/3rd-party systems.
  • Perform integrity checks where possible to detect tampering.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies
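An integrity check on a downloaded model artifact usually means comparing its hash against a value pinned from a trusted source. A minimal sketch; the file name and pinned hash are illustrative assumptions.

```python
import hashlib
import tempfile

def verify_model_file(path, expected_sha256):
    """Compare a downloaded model artifact against a pinned SHA-256 hash
    published by the model's maintainers; refuse to load it on mismatch."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical demo: pin the hash of a known-good artifact, then verify.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".bin")
tmp.write(b"fake model weights")
tmp.close()
pinned = hashlib.sha256(b"fake model weights").hexdigest()
ok = verify_model_file(tmp.name, pinned)
```

Hash pinning only detects tampering in transit or in the registry; it does not help if the upstream model was malicious from the start, which is why the other bullets (minimizing and threat modeling dependencies) still apply.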
Security Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we protected from exploits on software dependencies of our ML systems?
  • In this case, the attacker does NOT manipulate the algorithms, but instead exploits traditional software vulnerabilities such as buffer overflows or cross-site scripting.
  • Example: an adversary customer finds a vulnerability in a common OSS dependency that you use and uploads a specially crafted training data payload to compromise your service.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Work with your security team to follow applicable Security Development Lifecycle/Operational Security Assurance best practices. Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies

Safety CategorySecurity Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could we have a possible malicious use, misuse or inappropriate use of our AI system?

An example of abusability: A product that is used to spread misinformation; for example, a chatbot being misused to spread fake news.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Threat model your system: anticipate vulnerabilities and look for ways to hijack and weaponize your system for malicious activity.
  • Conduct red team exercises.
Safety Category
Design PhaseInput PhaseModel PhaseOutput Phase
In case of system failure, could users be negatively impacted?
  • Do you have a mechanism implemented to stop the processing in case of harm?
  • Do you have a way to identify and contact affected individuals and mitigate the adverse impacts?
  • Imagine a scenario where your AI system, a care-robot, is taking care of an individual (the patient) by performing some specific tasks and that this individual depends on this care.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Implement some kind of stop button or procedure to safely abort an operation when needed.
  • Establish a detection and response mechanism for undesirable adverse effects on individuals.
  • Define criticality levels of the possible consequences of faults/misuse of the AI system: what type of harm could be caused to the individuals, environment or organisations?
Safety Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could our AI system cause negative impact on the environment?
  • Ideally only models are used that do not demand the consumption of energy or natural resources beyond what is sustainable.
  • Your product should be designed with the dimension of environmental protection and improvement in mind.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Establish mechanisms to evaluate the environmental impact of your AI system; for example, the amount of energy used and carbon emissions.
  • Implement measures to reduce the environmental impact of the AI system throughout its lifecycle.
Safety Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could the possibility that our model is going to be deployed in a different context be a problem?

Are you testing the product in a real environment before releasing it? If the model is tested with one set of data and then deployed in a different environment that receives other types of input, there is less guarantee that it is going to work as planned. This is also the case in reinforcement learning with the so-called wrong objective function, where slight changes in the environment often require fully retraining the model.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Use different data for testing and training. Make sure diversity is reflected in the data. Specify your training approach and statistical method. Explore the different environments and contexts and make sure your model is trained with the expected different data sources. This also applies to reinforcement learning.
  • Are you considering enough aspects in the environment? Did you forget any environmental variable that could be harmful? Could limited sampling due to high costs be an issue? Document this risk and look for support in your organisation. The organisation is accountable and responsible for the mitigation or acceptance of this risk. And hopefully you get extra budget assigned.
  • Consider applying techniques such as cultural effective challenge: a technique for creating an environment where technology developers can actively participate in questioning the AI process. This better translates the social context into the design process by involving more people, and can prevent issues associated with target leakage, where the AI system trains on data that prepares it for a different job than the one it was initially intended to complete.

Safety Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could the AI system become persuasive causing harm to the individual?
  • This is of special importance in Human Robot Interaction (HRI): If the robot can achieve reciprocity when interacting with humans, could there be a risk of manipulation and human compliance?
  • Reciprocity is a social norm of responding to a positive action with another positive action, rewarding kind actions. As a social construct, reciprocity means that in response to friendly actions, people are frequently much nicer and much more cooperative than predicted by the self-interest model; conversely, in response to hostile actions they are frequently much more nasty and even brutal. Source: Wikipedia

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Signals of susceptibility coming from a robot or computer could have an impact on the willingness of humans to cooperate with it or take advice from it.
  • It is important to consider and test this possible scenario when your AI system interacts with humans and some type of collaboration/cooperation is expected.
Safety Category
Design PhaseInput PhaseModel PhaseOutput Phase
Can we transform our RL agent’s reward function to avoid undesired negative side effects on the environment?
  • Reinforcement Learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Source: Wikipedia

  • To better understand the threat consider a case where a robot is built to move an object, without manually programming a separate penalty for each possible bad behaviour. If the objective function is not well defined, the AI’s ability to develop its own strategies can lead to unintended, harmful side effects. In this case, the objective of moving an object seems simple, yet there are a myriad of ways in which this could go wrong. For instance, if a vase is in the robot’s path, the robot may knock it down in order to complete the goal. Since the objective function does not mention anything about the vase, the robot wouldn’t know to avoid it. Source: OpenAI

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

AI systems don’t share our understanding of the world. It is not sufficient to formulate the objective as “complete task X”; the designer also needs to specify the safety criteria under which the task is to be completed. A better strategy could be to define a budget for how much the AI system is allowed to impact the environment. This would help to minimize the unintended impact, without neutralizing the AI system.

Another approach would be training the agent to recognize harmful side effects so that it can avoid actions leading to such side effects. In that case, the agent would be trained for two tasks: the original task that is specified by the objective function and the task of recognizing side effects. The AI system would still need to undergo extensive testing and critical evaluation before deployment in real life settings. Source: OpenAI
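The impact-budget idea can be sketched in a few lines. This is a toy illustration only, not code from PLOT4ai or the OpenAI source; the environment representation, the impact measure, and all numbers are hypothetical:

```python
# Toy sketch of an impact-budget penalty for an RL reward. The agent is
# "charged" for changes it causes to the environment beyond a fixed budget,
# so completing the task while knocking over the vase scores worse than
# completing it cleanly.

def penalized_reward(task_reward, state_before, state_after,
                     impact_budget=5, penalty_weight=1.0):
    """Subtract a penalty when the agent's environmental impact exceeds
    the budget. Here `impact` is a toy measure: the number of state
    variables the agent's actions changed."""
    impact = sum(1 for b, a in zip(state_before, state_after) if b != a)
    overshoot = max(0, impact - impact_budget)
    return task_reward - penalty_weight * overshoot

# Moving the box changes 2 state variables (within budget):
print(penalized_reward(10.0, [0] * 10, [1, 1] + [0] * 8))  # 10.0
# Knocking over the vase along the way changes 8 (3 over budget):
print(penalized_reward(10.0, [0] * 10, [1] * 8 + [0] * 2))  # 7.0
```

In a real system the impact measure itself is the hard part; a naive count of changed variables can penalize harmless or even desirable changes.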

Safety Category
Design Phase · Input Phase · Model Phase · Output Phase
Can we prevent our agents from “gaming” their reward functions?
  • Reinforcement Learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Source: Wikipedia

  • Consider potential negative consequences from the AI system learning novel or unusual methods to score well on its objective function. Sometimes the AI can come up with some kind of “hack” or loophole in the design of the system to receive unearned rewards. Since the AI is trained to maximize its rewards, looking for such loopholes and “shortcuts” is a perfectly fair and valid strategy for the AI. For example, suppose that the office cleaning robot earns rewards only if it does not see any garbage in the office. Instead of cleaning the place, the robot could simply shut off its visual sensors, and thus achieve its goal of not seeing garbage. Source: OpenAI

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

One possible approach to mitigating this problem would be to have a “reward agent” whose only task is to mark if the rewards given to the learning agent are valid or not. The reward agent ensures that the learning agent (robot for instance) does not exploit the system, but rather, completes the desired objective. In the previous example, the “reward agent” could be trained by the human designer to check if the room has garbage or not (an easier task than cleaning the room). If the cleaning robot shuts off its visual sensors and claims a high reward, the “reward agent” would mark the reward as invalid. The designer can then look into the rewards marked as “invalid” and make necessary changes in the objective function to fix the loophole. Source: OpenAI
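A minimal sketch of such a "reward agent" check, using the cleaning-robot example (hypothetical names and logic, for illustration only):

```python
# Toy "reward agent": it independently verifies the condition the learning
# agent is claiming a reward for, so gaming the objective (shutting off the
# visual sensors) is flagged as an invalid reward for the designer to review.

def reward_agent(claimed_reward, room_has_garbage, sensors_on):
    """Return (reward, valid). The reward only stands if the room is
    actually clean AND the robot's sensors were on, i.e. it did not
    simply stop looking at the garbage."""
    valid = (not room_has_garbage) and sensors_on
    return (claimed_reward if valid else 0.0), valid

# Honest completion: the room is clean and the sensors were on.
print(reward_agent(1.0, room_has_garbage=False, sensors_on=True))   # (1.0, True)
# Gaming: sensors shut off, room still dirty -> reward marked invalid.
print(reward_agent(1.0, room_has_garbage=True, sensors_on=False))   # (0.0, False)
```

The rewards flagged `False` correspond to the "invalid" markings the designer would inspect to fix the loophole in the objective function.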

Safety Category
Design Phase · Input Phase · Model Phase · Output Phase
Can our RL agent efficiently achieve goals for which feedback is very expensive or difficult to obtain?
  • Reinforcement Learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Source: Wikipedia

  • When the agent is learning to perform a complex task, human oversight and feedback are more helpful than just rewards from the environment. Rewards are generally modelled such that they convey to what extent the task was completed, but they do not usually provide sufficient feedback about the safety implications of the agent’s actions. Even if the agent completes the task successfully, it may not be able to infer the side-effects of its actions from the rewards alone. In the ideal setting, a human would provide fine-grained supervision and feedback every time the agent performs an action (Scalable oversight). Though this would provide a much more informative view about the environment to the agent, such a strategy would require far too much time and effort from the human. Source: OpenAI

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

One promising research direction to tackle this problem is semi-supervised learning, where the agent is still evaluated on all the actions (or tasks), but receives rewards only for a small sample of those actions (or tasks).

Another promising research direction is hierarchical reinforcement learning, where a hierarchy is established between different learning agents. A supervisor agent/robot could assign work to another agent/robot and provide it with feedback and rewards. Source: OpenAI
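The semi-supervised idea — the agent acts in every episode but costly human feedback is requested for only a small sample — can be sketched as follows (a hypothetical setup, not code from the source):

```python
import random

# Toy sketch of semi-supervised reward delivery: every episode is executed
# and evaluated, but the expensive human-provided reward is only requested
# for a small random fraction of episodes.

def run_episodes(n_episodes, feedback_fraction=0.1, seed=0):
    rng = random.Random(seed)
    labelled, unlabelled = [], []
    for episode in range(n_episodes):
        outcome = episode % 2  # stand-in for whatever the agent actually did
        if rng.random() < feedback_fraction:
            labelled.append((episode, outcome))  # human supplies a reward
        else:
            unlabelled.append(episode)           # evaluated, no reward signal
    return labelled, unlabelled

labelled, unlabelled = run_episodes(100)
print(len(labelled), len(unlabelled))  # roughly 10 vs 90 with fraction 0.1
```

The research challenge is then to learn as much from the unlabelled episodes as from the sparsely labelled ones.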

Safety Category
Design Phase · Input Phase · Model Phase · Output Phase
Can our ML system be robust to changes in the data distribution?

A complex challenge for deploying AI agents in real life settings is that the agent could end up in situations that it has never experienced before. Such situations are inherently more difficult to handle and could lead the agent to take harmful actions. Source: OpenAI

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

One promising research direction focuses on identifying when the agent has encountered a new scenario so that it recognizes that it is more likely to make mistakes. While this does not solve the underlying problem of preparing AI systems for unforeseen circumstances, it helps in detecting the problem before mistakes happen. Another direction of research emphasizes transferring knowledge from familiar scenarios to new scenarios safely. Source: OpenAI
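A crude stand-in for "identifying when the agent has encountered a new scenario" is a distance-from-training-distribution check. This is an assumption for illustration (a simple z-score test on one feature), not the method the OpenAI source describes:

```python
# Toy out-of-distribution check: flag inputs that lie far from the
# training data's mean, so the system knows it is in unfamiliar territory
# and more likely to make mistakes.

def fit_stats(training_data):
    """Mean and (population) standard deviation of the training data."""
    n = len(training_data)
    mean = sum(training_data) / n
    var = sum((x - mean) ** 2 for x in training_data) / n
    return mean, var ** 0.5

def is_out_of_distribution(x, mean, std, z_threshold=3.0):
    """Flag x if it is more than z_threshold standard deviations away."""
    return abs(x - mean) > z_threshold * std

mean, std = fit_stats([9.0, 10.0, 11.0, 10.0, 10.0])
print(is_out_of_distribution(10.5, mean, std))  # False: familiar input
print(is_out_of_distribution(25.0, mean, std))  # True: novel scenario
```

Real systems use richer detectors (density models, ensemble disagreement), but the principle is the same: detect novelty before acting on it.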

Safety Category
Design Phase · Input Phase · Model Phase · Output Phase
Can our RL agents learn about their environment without executing catastrophic actions?
  • Reinforcement Learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Source: Wikipedia

  • Safe exploration: An important part of training an AI agent is to ensure that it explores and understands its environment. While exploring, the agent might also take some action that could damage itself or the environment. Source: OpenAI

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

One approach to reduce harm is to optimize the performance of the learning agent in the worst case scenario. When designing the objective function, the designer should not assume that the agent will always operate under optimal conditions. Some explicit reward signal may be added to ensure that the agent does not perform some catastrophic action, even if that leads to more limited actions in the optimal conditions. Source: OpenAI
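The trade-off described above can be made concrete with toy numbers (hypothetical rewards and probabilities, purely illustrative): an explicit large penalty for catastrophic outcomes makes the agent prefer a more limited but safe policy even when the risky one pays more under optimal conditions.

```python
# Toy worst-case-aware objective: a large explicit penalty on catastrophic
# actions shifts the optimum toward the safer, more limited policy.

def expected_return(reward_if_ok, p_catastrophe, catastrophe_penalty=-100.0):
    return (1 - p_catastrophe) * reward_if_ok + p_catastrophe * catastrophe_penalty

risky = expected_return(10.0, p_catastrophe=0.05)  # ~4.5: higher reward, some risk
safe = expected_return(6.0, p_catastrophe=0.0)     # 6.0: limited but safe
print(max(("risky", risky), ("safe", safe), key=lambda t: t[1]))  # ('safe', 6.0)
```

Without the penalty term the risky policy would win (10.0 vs 6.0), which is exactly the failure mode the recommendation is guarding against.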

Unawareness Category · Accessibility Category
Design Phase · Output Phase
Do we need to inform users that they are interacting with an AI system?
  • Are users adequately made aware that a decision, content, advice or outcome is the result of an algorithmic decision?
  • Could the AI system generate confusion for some or all users on whether they are interacting with a human or AI system?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

In the case of interactive AI systems (e.g. chatbots, robots) you should inform the users that they are interacting with an AI system instead of a human. This information should be provided at the beginning of the interaction.

Unawareness Category · Accessibility Category
Design Phase · Output Phase
Can we provide the necessary information to the users about possible impacts, benefits and potential risks?
  • Did you establish mechanisms to inform users about the purpose, criteria and limitations of decisions generated by the AI system?
  • If an AI-assisted decision has been made about a person without any type of explanation or information then this may limit that person's autonomy, scope and self-determination. This is unlikely to be fair.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Provide clear information about how and why an AI-assisted decision was made and which personal data was used to train and test the model.
  • The model you choose should be at the right level of interpretability for your use case and for the impact it will have on the decision recipient. If you use a black box model, make sure the supplementary explanation techniques you use provide a reliable and accurate representation of the system's behaviour. Source: UK ICO
  • Communicate the benefits, the technical limitations and potential risks of the AI system to users, such as its level of accuracy and/or error rates.
  • Survey/contact your users to see if they understand the decisions that your product makes.
Unawareness Category · Safety Category
Design Phase · Input Phase · Model Phase · Output Phase
Can users anticipate the actions of the AI system?

Are users aware of the capabilities of the AI system? Users need to be informed about what to expect, not only for transparency reasons but in some products also for safety precautions.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider this as part of the GDPR transparency principle.
  • Users should be aware of what the AI system can do.
  • Clear information should be provided in a timely manner and made accessible following accessibility design principles.

Ethics & Human Rights Category · Technique & Processes Category
Design Phase · Input Phase · Model Phase · Output Phase
Bias & Discrimination: could there be groups who might be disproportionately affected by the outcomes of the AI system?
  • Could the AI system potentially negatively discriminate against people on the basis of any of the following grounds: sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation?
  • If your model is learning from data specific to some cultural background then the output could be discriminating for members of other cultural backgrounds.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider the different types of users and contexts where your product is going to be used.
  • Consider the impact of diversity of backgrounds, cultures, and other important different attributes when selecting your input data, features and when testing the output.
  • Assess the risk of possible unfairness towards individuals or communities to avoid discriminating against minority groups.
  • The disadvantage to people depends on the kind of harm, severity of the harm and significance (how many people are put at a disadvantage compared to another group of people). Statistical assessments on group differences are an important tool to assess unfair and discriminatory uses of AI.
  • Design with empathy, diversity and respect in mind.
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Can we expect mostly positive reactions from the users or individuals?
  • Do the users or individuals expect this type of processing of personal data?
  • Do they expect a product functioning like this?
  • Can you roll back if people are not happy with the product?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider the different types of users and contexts in which your product is going to be used.
  • Consider diversity of backgrounds, cultures, and many other important different attributes.
  • Do enough user testing, e.g. Friendly User Pilots (FUPs).
  • Design with empathy, diversity and respect in mind.
  • Assess the risk of possible unfairness towards individuals or communities to avoid discriminating against minority groups and to prevent a bad reputation for your organisation.
Ethics & Human Rights Category
Design Phase · Output Phase
Could the AI system have an impact on human work?
  • Could the use of your AI system affect the safety conditions of employees?
  • Could the AI system create the risk of de-skilling of the workforce? (skilled people being replaced by AI systems)

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Pave the way for the introduction of the AI system in your organisation by informing and consulting with future impacted workers and their representatives (e.g. trade unions, work councils) in advance.
  • Adopt measures to ensure that the impact of the AI system on human work is well understood.
  • Ensure that workers understand how the AI system operates, which capabilities it has and does not have. Provide workers with the necessary safety instructions (e.g. when using machine-robots).
  • If you are a third-party provider of this type of system, provide information related to this possible risk to your clients. This information should be easily accessible and understandable.
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the AI system have a negative impact on society at large?
  • Could your product be used for monitoring and surveillance purposes?
  • Could the AI system affect the right to democracy by having an influence in voting selections?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider whether your product could be used or misused for these purposes. Even if that is not possible in its current form, it could become possible with adaptations.
  • Evaluate the possible scenarios and think what role you want to play based on the responsibility and accountability principle.
  • How can you prevent something like that from happening?
  • Does your organisation agree with such a use of the technology?
  • Have you evaluated what the possible impact could be for society and the world you live in?
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the AI system limit the right to be heard?

Consider for instance the risk if your system makes automatic decisions that could have a negative impact on an individual and you do not offer any way to contest that decision.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the system have a big impact on decisions regarding the right to life?

Is the output of the model accurate and fair? Consider for instance the risk if your AI system is used in the health sector for choosing the right treatment for a patient.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the AI system affect the freedom of expression of its users?

Is the output of the model accurate, fair and not discriminatory? Consider the risk that it could be used, intentionally or unintentionally, to restrict the freedom of expression of individuals, for instance by wrongly labelling text as hate speech: users would then not be able to freely express their opinions.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the AI system affect the freedom of its users?

Is the output of the model accurate, fair and not discriminatory? Consider the risk if it could be used for monitoring or surveillance purposes; for instance, a face recognition system that wrongly identifies a suspect, sending an innocent person to jail, or systems that spread fake news, putting somebody's life in danger.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could the AI system affect the right to a fair hearing?
  • Is the output of the model accurate and fair? Consider the risk if this could be used in a criminal case and the consequences if wrong information is used to condemn someone.
  • Do you have a mechanism to challenge the decisions of your AI system?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could children be part of our users’ group?
  • Could your system be used by children?
  • Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Check if an age verification mechanism is necessary.
  • Pay attention to the way of communication in the product but also in your privacy policy.
  • Implement policies to ensure the safety of children when using or being exposed to your products.
  • Implement procedures to assess and monitor the usage of your product in order to identify any dangers (mental, moral or physical) to children’s health and safety.
  • Label your product properly and provide the right instructions for the children’s safety.
  • Monitor for possible inappropriate usage of your products to abuse, exploit or harm children.
  • Implement a responsible marketing and advertising policy that prohibits harmful and unethical advertising related to children.
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could cultural and language differences be an issue when it comes to the ethical nuance of our algorithm?

Well-meaning values can create unintended consequences.

  • Must the AI system understand the world in all its different contexts?
  • Could ambiguity in rules you teach the AI system be a problem?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Consider designing with value alignment in mind, which means ensuring consideration of existing values and sensitivity to a wide range of cultural norms and values.

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could our product fail to represent current social needs and social context?

The datasets that you want to use might not be representative of the current social situation. In that case the output of the model is also not representative of the current reality. Depending on the type of product you are designing this could have a big impact on the individual or any other affected matter.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Make sure that you are using correct, complete, accurate and current data. Also make sure that you have sufficient data to represent all possible contexts that you might need.

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could our product have an impact denying access to jobs, housing, insurance, benefits or education?
  • Could your system have such an important impact on the life of people?
  • How can you be sure that the decisions of your algorithm are always fair and correct?
  • How can you prevent causing such a big harm to individuals?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

It is very important that your product complies with the key EU requirements for achieving a trustworthy AI:

  • human agency and oversight
  • robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • societal and environmental well-being
  • accountability

Remember that there are other human rights that could be affected by your product. Check the other rights in the Charter of Fundamental Rights:

Charter of Fundamental Rights of the European Union

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could our AI system affect human autonomy by interfering with the user’s decision-making process in an unintended and undesirable way?
  • Could your system affect which choices and information are made available to people?
  • Could the AI system affect human autonomy by generating over-reliance by users?
  • Could this reinforce their beliefs or encourage certain behaviours?
  • Could the AI system create human attachment, stimulate addictive behaviour, or manipulate user behaviour?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider the possibility of your product affecting the behaviour and choices of individuals.
  • Test the system with enough and varied groups of users.
  • Consult with experts; this is a team effort and it is very important that harm to individuals is prevented.
Ethics & Human Rights Category
Design Phase · Input Phase
Is our training data and labelling produced respecting dignity and wellbeing of the labour force involved?

The need for data labelling keeps growing, and unfortunately so does the number of companies providing cheap labelling services at the cost of the dignity and labour rights of their workforce. Could the data that you are going to use have been labelled under such conditions?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Verify the sources of your datasets and who has been responsible for the labelling process. Does your organisation support such unfair practices? Think of ways to help prevent this.

Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Are we going to collect/use behavioural data to feed the AI system?

Conformity behaviour can be reinforced/encouraged by framing certain behaviours in the design as positive or negative. This could become a risk of behavioural exploitation. (Imagine, for example, the impact this could have if an authoritarian government exploited a threat like this.)

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider the way you label certain behaviours and the consequences it could have on the final output and eventually on the individuals. How do you decide which behaviours are good or bad?
  • Consider diversity of opinion and possible ethical considerations.
  • Consider if you will be able to collect enough information to decide which behaviours you are aiming for and which you are trying to avoid.
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Can we build a model that is inclusive?

Can your system interact equitably with users from different cultures and with different abilities?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Make sure that when you test the product you include a large diversity in type of users.
  • Think carefully about what diversity means in the context where the product is going to be used.
  • Remember that this is a team effort and not an individual decision!
Ethics & Human Rights Category
Design Phase · Input Phase · Model Phase · Output Phase
Could our AI system automatically label or categorize people?
  • This could have an impact on the way individuals perceive themselves and society. It could constrain identity options and even contribute to erasing the real identity of individuals.
  • This threat is also important when designing robots and the way they look. For instance: do care/assistant robots need to have a feminine appearance? Is that the perception you want to give to the world? What impact does it have on society?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • It is important that you check the output of your model, not only in isolation but also when it is linked to other information. Think of different possible scenarios that could affect the individuals. Is your output categorizing people or helping to categorize them? In which way? What could be the impact?
  • Think about ways to prevent harm to the individual: provide information to the user, consider changing the design (maybe using different features or attributes?), consider ways to prevent misuse of your output, or consider not releasing the product to the market.
Non-compliance Category · Technique & Processes Category
Design Phase · Input Phase · Model Phase · Output Phase
Is data minimisation possible?

Although it appears to contradict the principle of data minimisation, not using enough data could sometimes have an impact on the accuracy and performance of the model. A low level of accuracy of the AI system could result in critical, adversarial or damaging consequences. Can you still comply with the data minimisation principle?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Sometimes data minimisation can be achieved by using fewer features and training data of good quality. However, it is not always possible to predict which data elements are relevant to the objective of the system.
  • Consider starting to train the model with less data, observing the learning curve, and adding more data only if necessary, thereby justifying why it was necessary.
  • The usage of a large amount of data could be compensated for by using pseudonymisation techniques, or techniques like perturbation, differential privacy in pre-processing, synthetic data and federated learning.
  • Try to select the right number of features with the help of experts to avoid the curse of dimensionality (where errors increase as the number of features grows).
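The learning-curve approach to data minimisation can be sketched as a simple stopping rule (the accuracy numbers and the `min_gain` threshold below are hypothetical, for illustration only):

```python
# Toy learning-curve check: keep adding training data only while it still
# improves accuracy meaningfully, and record the size at which it stopped
# helping as the justification for how much data was collected.

def minimal_sufficient_size(sizes, accuracies, min_gain=0.01):
    """Return the smallest training-set size after which additional data
    no longer improves accuracy by at least `min_gain`."""
    for i in range(1, len(sizes)):
        if accuracies[i] - accuracies[i - 1] < min_gain:
            return sizes[i - 1]
    return sizes[-1]

sizes = [1000, 2000, 4000, 8000]
accuracies = [0.71, 0.80, 0.84, 0.845]  # toy learning curve
print(minimal_sufficient_size(sizes, accuracies))  # 4000
```

Here doubling from 4000 to 8000 records gains only half a point of accuracy, so stopping at 4000 is the documented, minimisation-friendly choice.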

Non-compliance Category · Technique & Processes Category
Design Phase · Input Phase · Model Phase · Output Phase
Could we be processing sensitive data?
  • According to art. 9 GDPR you might not be allowed to process personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health data or data concerning a person’s sex life or sexual orientation.
  • You might be processing sensitive data if the model includes features that are correlated with these protected characteristics (these are called proxies)

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • If you need to use special categories of data as defined in the GDPR art. 9, then you need to check if you are allowed to do this.
  • Applying techniques like anonymisation might still not justify the fact that you first need to receive the original data. Check with your privacy/legal experts.
  • Prevent proxies that could infer sensitive data (especially from vulnerable populations).
  • Check how historical data/practices might bias your data.
  • Identify and remove features that are correlated to sensitive characteristics.
  • Use available methods to test for fairness with respect to different affected groups.
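One simple way to screen for proxies is to measure the correlation between each candidate feature and a protected attribute. This is a minimal sketch with toy data (feature names, values, and the 0.8 threshold are all hypothetical; real audits use richer tests than a single Pearson coefficient):

```python
# Toy proxy check: features strongly correlated with a protected attribute
# may act as proxies for it and deserve scrutiny or removal.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

protected = [0, 0, 1, 1, 0, 1]        # e.g. membership of a protected group
postcode_score = [1, 2, 9, 8, 1, 9]   # candidate feature: tracks the group
hours_active = [5, 7, 6, 5, 7, 6]     # candidate feature: unrelated

for name, feature in [("postcode_score", postcode_score),
                      ("hours_active", hours_active)]:
    r = pearson(protected, feature)
    flag = "possible proxy" if abs(r) > 0.8 else "ok"
    print(f"{name}: r={r:.2f} ({flag})")
```

A flagged feature is a candidate for removal or closer fairness testing, per the recommendations above; note that proxies can also arise from nonlinear combinations of features that a pairwise correlation will miss.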
Non-compliance Category
Design Phase · Input Phase · Output Phase
Do we have a lawful basis for processing the data?

Do you know which GDPR legal ground you can apply?

  • (a) Consent: the individual has given clear consent for you to process their personal data for a specific purpose.
  • (b) Contract: the processing is necessary for a contract you have with the individual, or because they have asked you to take specific steps before entering into a contract.
  • (c) Legal obligation: the processing is necessary for you to comply with the law (not including contractual obligations).
  • (d) Vital interests: the processing is necessary to protect someone’s life.
  • (e) Public task: the processing is necessary for you to perform a task in the public interest or for your official functions, and the task or function has a clear basis in law.
  • (f) Legitimate interests: the processing is necessary for your legitimate interests or the legitimate interests of a third party, unless there is a good reason to protect the individual’s personal data which overrides those legitimate interests. (This cannot apply if you are a public authority processing data to perform your official tasks.)

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

In the case of the GDPR you need to apply one of the 6 available legal grounds that provide a lawful basis for processing the data (art. 6). Check with your privacy expert whether what you want to do can really be supported by one of the available legal grounds. Not being able to do so could put the project in danger.

Please take into account though, that other laws besides the GDPR could also be applicable.

Non-compliance Category
Design Phase · Input Phase · Model Phase · Output Phase
Is the creation of the AI system proportional to the end goal?
  • Proportionality is a general principle of EU law. It requires you to strike a balance between the means used and the intended aim.
  • In the context of fundamental rights, proportionality is key for any limitation on these rights.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • More specifically, proportionality requires that the advantages gained by limiting the right are not outweighed by the disadvantages to the exercise of the right. In other words, the limitation on the right must be justified.
  • Safeguards accompanying a measure can support the justification of a measure. A pre-condition is that the measure is adequate to achieve the envisaged objective.
  • In addition, when assessing the processing of personal data, proportionality requires that only that personal data which is adequate and relevant for the purposes of the processing is collected and processed. Source: EDPS
Non-compliance Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could the purpose limitation principle be an issue?
  • Data repurposing is one of the biggest challenges. Can you use the data for this (new) purpose? This question brings us to the origin of the data collected.
  • Were the datasets that you are using originally collected for a different purpose? Did the data subjects give consent only for that specific purpose?

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Check with your privacy and/or legal department what the original purpose of the data was and whether there are any possible constraints.

Non-compliance Category
Design PhaseInput PhaseModel PhaseOutput Phase
Can we comply with all the applicable GDPR data subjects’ rights?
  • Can you implement the right to withdraw consent, the right to object to the processing and the right to be forgotten into the development of the AI system?
  • Can you provide individuals with access and a way to rectify their data?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Complying with these provisions from the GDPR (art. 15-21) could have an impact on the design of your product. What if users withdraw their consent? Do you need to delete their data used to train the model? What if you cannot identify the users in the datasets anymore? And what information should the users have access to?
  • Consider all these possible scenarios and involve your privacy experts early in the design phase.
Non-compliance Category
Design Phase
Have we considered the possibility to start with a data protection impact assessment (DPIA)?

The use of AI is more likely to trigger the requirement for a DPIA, based on the criteria in art. 35 GDPR. The GDPR and the EDPB’s Guidelines on DPIAs identify both “new technologies” and the type of automated decision-making that produces legal effects or similarly significantly affects persons as likely to result in a “high risk to the rights and freedoms of natural persons”.

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • This threat modeling library can help you to assess possible risks.
  • Remember that a DPIA is not a piece of paper that needs to be done once the product is live. The DPIA starts today in the design phase by assessing the risks, documenting them and taking the necessary actions to create a responsible product from the beginning until it’s finalized.
  • Consider the time and resources necessary if you need to start a DPIA, as it could have some impact on your project deadlines.
Non-compliance Category
Design Phase
If children or other types of vulnerable users are part of the user group, do our third party providers also need to comply when processing their data?
  • If you are processing data of children or other vulnerable groups, remember that all third parties you are dealing with that could also be processing their data should comply with regulations.
  • Your own system might be protecting the individuals, but remember to also check third party libraries, SDKs, and any other third party tooling you might be using.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Check which data your third party applications are collecting and if you have the right agreements in place.
  • Sometimes you can change the configuration of a tool to avoid sending data that is not necessary, or you can protect that data with pseudonymisation/anonymisation techniques.
  • You could also consider stopping the use of some of your third party providers, or changing provider, depending on the impact this has on your organisation. A risk-based approach can be helpful in these situations.
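The pseudonymisation mentioned above can be sketched with keyed hashing. A minimal Python sketch, assuming the secret key is managed separately from the data (the key value and field names here are illustrative):

```python
import hmac
import hashlib

# Placeholder: in practice the key must live outside the dataset
# (e.g. in a key vault), otherwise this is not pseudonymisation.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records stay
    linkable for analysis, while re-identification requires the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "clicks": 12}
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Remember that pseudonymised data is still personal data under the GDPR; only properly anonymised data falls outside its scope.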
Non-compliance Category
Design PhaseInput Phase
Do we need to use metadata to feed our model?
  • Metadata provides information about one or more aspects of the data. Think about: date, time, author, file size, etc. Source: Wikipedia
  • Metadata is also considered personal data and it can contain sensitive information.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Make sure you are allowed to use this data.
  • Verify the data sources.
  • Consider using anonymisation techniques.
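One way to avoid feeding unnecessary metadata to the model is to allow-list the fields you actually need rather than trying to enumerate every sensitive one. A minimal sketch with hypothetical field names:

```python
# Hypothetical allow-list: keep only the fields the model needs.
ALLOWED_FIELDS = {"width", "height", "format"}

def strip_metadata(record: dict) -> dict:
    """Drop everything not on the allow-list (author, GPS, timestamps, ...)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"width": 1024, "height": 768, "format": "jpeg",
       "author": "J. Doe", "gps": (52.37, 4.90), "created": "2023-01-05"}
clean = strip_metadata(raw)  # only width, height and format survive
```

Allow-listing fails safe: a new metadata field added upstream is dropped by default instead of silently flowing into the model.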
Non-compliance Category
Design PhaseInput PhaseModel PhaseOutput Phase
Will our product make automatic decisions without human intervention?

Can these decisions have an important impact on the individual? Think about someone’s legal rights, legal status, rights under a contract, or a decision with similar effects and significance (art. 22 GDPR). Automated profiling of individuals is also covered by art. 22.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Check with your privacy expert if your processing falls under art. 22 GDPR or under the exceptions. Human oversight could be a way to mitigate certain risks for individuals. Discuss this with your legal advisors and the rest of the team.
  • Article 22(3) also provides individuals with a right to obtain human intervention in a decision made by AI and the right to contest the decision.
  • Implement specific oversight and control measures to oversee (and audit) the self-learning or autonomous nature of the AI system.
  • Remember that transparency, human agency, oversight and accountability are key principles for trustworthy AI.

Non-compliance Category
Design PhaseInput PhaseModel PhaseOutput Phase
Could copyright restrictions on the dataset be an issue?

Can you use the datasets that you need or are there any legal restrictions? This could also apply to libraries and any other proprietary element you might want to use.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Consider if you also need to claim ownership or give credits to creators.
  • Think about trademarks, copyrights in databases or training data, patents, license agreements that could be part of the dataset, library or module that you are using.
  • Legal ownership of digital data can sometimes be complex and uncertain, so get proper legal advice here.
Non-compliance CategorySecurity Category
Design PhaseInput PhaseModel PhaseOutput Phase
Are we planning to use a third party AI tool?

If you are using a third party tool you might still have a responsibility towards the users. Think about employees, job applicants, patients, etc. It is also your responsibility to make sure that the AI system you choose won't cause harm to the individuals.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

Review which responsibilities are yours (look into art. 24 and 28 GDPR). You can start by checking:

  • That you have the right agreements in place with the third party provider
  • That the origin and data lineage of their datasets are verified
  • How their models are fed; do they anonymise the data?
  • That you have assessed their security, ethical handling of data, quality processes and ways to prevent bias and discrimination in their AI system.
Non-compliance Category
Design PhaseInput PhaseOutput Phase
Could we have geolocation restrictions for implementing the product?

It could be that usage of your product would not be allowed in certain countries due to certain legal restrictions.

If you answered Yes then you are at risk

If you are not sure, then you might be at risk too

Recommendations

There is no international regulatory framework for AI yet, and more and more new regulations are being enforced in different countries. Keep up to date!

Non-compliance CategoryTechnique & Processes Category
Design PhaseInput PhaseModel PhaseOutput Phase
Can we comply with the storage limitation principle?
  • Do you know how long you need to keep the data?
  • Do you need to comply with specific local, national and/or international retention rules for the storage of data?

If you answered No then you are at risk

If you are not sure, then you might be at risk too

Recommendations

  • Personal data must not be stored longer than necessary for the intended purpose (art. 5(1)(e) GDPR). In order to comply with this principle, it is important to have a clear overview of the data flow during the life cycle of the model.
  • You might receive raw data that you need to transform. Check what you are doing with this data and with all the different types of input files you might be receiving/collecting.
  • Check if you need to store that data for quality and auditing purposes.
  • Check where you are going to store the data from the data preparation, the training and test sets, the outputs, the processed outputs (when they are merged or linked to other information), metrics, etc.
  • How long should all this data be stored? What type of deletion process can you put in place? And who will be responsible for the retention and deletion of this data?
  • Implement the right retention schedules when applicable. In case you might still need a big part of the data in order to feed the model, consider anonymising the data.
  • Deleting data from a trained model can be challenging to carry out (short of retraining the model from scratch from a dataset with the deleted data removed, but that is expensive and often infeasible). Note that through the learning process, input data are always encoded in some way in the model itself during training. That means the internal representation developed by the model during learning (say, thresholds and weights) may end up being legally encumbered as well. Source: BerryvilleiML
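A retention schedule can be enforced programmatically. A minimal sketch, where the data categories and retention periods are placeholders that must come from your privacy/legal team:

```python
from datetime import date, timedelta

# Hypothetical retention schedule; real periods come from your
# legal/privacy team and applicable retention rules.
RETENTION = {
    "training_input": timedelta(days=365),
    "model_metrics": timedelta(days=730),
}

def is_expired(category: str, collected_on: date, today: date) -> bool:
    """True when a record has outlived its retention period and
    should be deleted (or anonymised)."""
    return today > collected_on + RETENTION[category]

# Training input collected on 2022-01-01 is past its 365-day period.
expired = is_expired("training_input", date(2022, 1, 1), date(2023, 6, 1))
```

A periodic job can sweep storage with such a check and log each deletion, which also gives you the audit trail for accountability.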
