Are we protected from reprogramming deep neural nets attacks?


Security Category
Design Phase, Input Phase, Model Phase, Output Phase
  • An adversary can craft queries that reprogram a machine learning system to perform a task that deviates from its creator’s original intent.
  • Example: image classifiers trained on ImageNet have been repurposed to count the squares in an input image (a minimal sketch of the technique follows the source below).

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
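
The attack works by learning an additive input perturbation (an "adversarial program") placed around a small task image, so that a frozen classifier's outputs can be remapped to the adversary's task. Below is a minimal PyTorch sketch, assuming an ImageNet classifier (resnet18) and a hypothetical square-counting dataset; the shapes, label mapping, and random training batch are illustrative assumptions, not part of the source:

```python
# Minimal sketch of adversarial reprogramming (after Elsayed et al., 2018).
# A frozen ImageNet classifier is repurposed to count squares by training
# only an input "program"; the model weights are never modified.
import torch
import torch.nn as nn
import torchvision.models as models

victim = models.resnet18(weights=None)  # a real attack would load pretrained weights
victim.eval()
for p in victim.parameters():
    p.requires_grad_(False)  # the attacker never touches the model itself

program = nn.Parameter(torch.zeros(1, 3, 224, 224))  # the only trainable tensor
optimizer = torch.optim.Adam([program], lr=0.05)

def reprogram(task_input):
    # Embed the small counting-task image in the centre of an ImageNet-sized
    # canvas and add the learned, bounded perturbation around it.
    canvas = torch.zeros(task_input.size(0), 3, 224, 224)
    canvas[:, :, 96:128, 96:128] = task_input   # hypothetical 32x32 task image
    return torch.tanh(program) + canvas

def adversarial_label(logits):
    # Fixed, arbitrary remapping: ImageNet classes 0..9 stand for counts 0..9.
    return logits[:, :10]

# One training step on a hypothetical batch (x: images of squares,
# y: the number of squares in each image).
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(adversarial_label(victim(reprogram(x))), y)
loss.backward()   # gradients flow to the program, not the frozen model
optimizer.step()
```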

If you answered No, then you are at risk.

If you are not sure, then you might be at risk too.

Recommendations

  • Configure strong client-server mutual authentication and access control for model interfaces (see the mutual TLS sketch at the end of this card).
  • Take down offending accounts.
  • Identify and enforce a service-level agreement (SLA) for your APIs. Determine an acceptable time-to-fix once an issue is reported, and ensure the issue can no longer be reproduced once the SLA expires.

Source: Microsoft, Threat Modeling AI/ML Systems and Dependencies.
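
As a concrete illustration of the first recommendation, here is a hedged sketch of enforcing mutual TLS on a model-serving endpoint using Python's standard library. The certificate file names (server.pem, ca.pem), the port, and the stand-in request handler are assumptions for illustration; a production deployment would use the real inference handler and a proper certificate setup:

```python
# Sketch: require client certificates (mutual TLS) on a model endpoint,
# assuming server.pem (server cert + key) and ca.pem (CA that signed the
# client certificates) already exist.
import http.server
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED          # reject clients without a valid cert
context.load_cert_chain(certfile="server.pem")   # server identity
context.load_verify_locations(cafile="ca.pem")   # trust anchor for client certs

# SimpleHTTPRequestHandler is a placeholder for the real model-inference handler.
server = http.server.HTTPServer(("0.0.0.0", 8443),
                                http.server.SimpleHTTPRequestHandler)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```

With this in place, only callers holding a certificate signed by your CA can query the model at all, which limits who can mount the high-volume querying that reprogramming attacks require and ties each query stream to an identifiable account that can be taken down.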