If our AI system uses randomness, is the source of randomness properly protected?


Security Category
Design Phase, Input Phase, Model Phase, Output Phase

Randomness plays an important role in stochastic systems. "Random" generation of dataset partitions may be at risk if an attacker interested in data poisoning can easily control or predict the source of randomness. Source: BerryVilleiML
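To illustrate the risk, the sketch below (hypothetical Python; the function name split_dataset and the fixed seed are illustrative, not from the source) shows a dataset partition driven by a guessable PRNG seed. Anyone who knows or controls that seed can reproduce the split exactly and predict which poisoned records will land in the training set.

```python
# Illustrative sketch (not from the source): a train/test split driven by a
# fixed, guessable seed. The partition is fully reproducible by an attacker.
import random

def split_dataset(samples, train_fraction=0.8, seed=42):
    """Shuffle and split samples using a fixed, predictable PRNG seed."""
    rng = random.Random(seed)      # Mersenne Twister with a known seed
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# An attacker who knows seed=42 can compute the same split offline and
# steer crafted samples into the training partition.
train, test = split_dataset(list(range(100)))
```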

If you answered No, then you are at risk

If you are not sure, then you might be at risk too

Recommendations

The use of cryptographic randomness sources is encouraged. In machine learning (ML), setting weights and thresholds "randomly" must be done with care: many pseudo-random number generators (PRNGs) are not suitable, and a PRNG that falls into a short cycle can seriously damage system behaviour during learning. Cryptographic randomness also intersects directly with ML in differential privacy, where the added noise must itself be unpredictable. Using the wrong sort of random number generator can lead to subtle security problems. Source: BerryVilleiML
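As a minimal sketch of the recommendation (assuming a Python/NumPy setup; split_dataset_secure is an illustrative name, not from the source), the partition seed can be drawn from the operating system's cryptographic randomness instead of a fixed or time-based value, so an attacker cannot predict or reproduce the split:

```python
# Minimal sketch, assuming Python + NumPy: derive the partition seed from a
# cryptographically secure source rather than a hard-coded or clock-based value.
import secrets
import numpy as np

def split_dataset_secure(samples, train_fraction=0.8):
    """Shuffle and split samples using a CSPRNG-derived seed."""
    seed = secrets.randbits(128)          # unpredictable, OS-backed entropy
    rng = np.random.default_rng(seed)     # fast PRNG seeded from a secret value
    indices = rng.permutation(len(samples))
    cut = int(len(samples) * train_fraction)
    train_idx, test_idx = indices[:cut], indices[cut:]
    return [samples[i] for i in train_idx], [samples[i] for i in test_idx]

# secrets.SystemRandom() can be used instead when a stream of secure draws is
# needed directly, e.g. for noise in differential-privacy mechanisms.
```

The design choice here is to confine the cryptographic source to seed generation, keeping the fast statistical PRNG for the bulk of the work while removing the attacker's ability to guess or influence the seed.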