Are we protected from adversarial examples?


Category: Security
Phases: Design Phase, Input Phase, Model Phase, Output Phase
Are we protected from adversarial examples?
  • An adversarial example is an input or query crafted by a malicious actor with the sole aim of misleading the machine learning system (a minimal sketch of how such an input can be crafted follows below).
  • Example: researchers constructed sunglasses with a printed pattern that fooled image recognition systems, which could then no longer recognize faces correctly.

Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.
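To make the threat concrete, here is a minimal sketch of how such an input can be crafted with the Fast Gradient Sign Method (FGSM), one common way to generate adversarial examples. It assumes a PyTorch image classifier with pixel values in [0, 1] and is an illustration only, not the technique used in the sunglasses study or one prescribed by the Microsoft source:

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method.

    A small perturbation, barely visible to a human, is added in the
    direction that most increases the model's loss, so an input that was
    classified correctly becomes misclassified.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction of the sign of the loss gradient and clamp
    # back to the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```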

If you answered No, then you are at risk

If you are not sure, then you might be at risk too

Recommendations

These attacks manifest themselves because issues in the machine learning layer were not mitigated. As with any other software, the layer below the target can always be attacked through traditional vectors. Because of this, traditional security practices are more important than ever, especially with the layer of unmitigated vulnerabilities (the data/algo layer) being used between AI and traditional software. Source: Microsoft, Threat Modelling AI/ML Systems and Dependencies.