Could unsafe deserialization of model artifacts lead to code execution or system compromise?
Models are serialized and transferred between systems for storage, sharing, or deployment, commonly using formats like pickle, joblib, ONNX, or TensorFlow SavedModel. This stage is vulnerable to model serialization attacks, because many serialization formats can embed executable code or unsafe object structures.
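As a concrete illustration, here is a minimal, deliberately harmless sketch of the mechanism pickle exposes; the class name and the echoed command are illustrative only:

```python
import os
import pickle

# pickle lets any object define __reduce__, which returns a callable plus
# arguments that the unpickler invokes during deserialization. An attacker
# can abuse this to run arbitrary commands the moment an artifact is loaded.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in; a real attacker could run any command here.
        return (os.system, ("echo 'code executed during unpickling'",))

tampered_artifact = pickle.dumps(MaliciousPayload())

# The victim only has to *load* the artifact -- no attribute access,
# no method call -- for the embedded command to run.
pickle.loads(tampered_artifact)
```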
If an attacker tampers with a serialized model artifact and it is later deserialized without validation, they may achieve:
- Remote Code Execution (RCE) during deserialization.
- Privilege escalation or lateral movement inside the deployment environment.
- Tampering with model behavior (e.g., inserting a backdoor or triggering silent failures).
These risks are especially severe when models are downloaded from untrusted sources, integrated via ML pipelines, or auto-loaded during CI/CD processes.
If you answered Yes, then you are at risk.
If you are not sure, then you might be at risk too.
Recommendations
- Avoid unsafe deserialization methods on untrusted inputs; prefer safer, data-only formats such as safetensors (see the first sketch after this list).
- Use model scanning tools (e.g., picklescan or ModelScan) to detect malicious payloads in serialized artifacts.
- Enforce cryptographic signing and integrity checks for all model files before deployment (see the second sketch after this list).
- Store and transport models using secure channels (e.g., signed, encrypted artifact registries).
- Load models only in sandboxed or containerized environments with minimal privileges and no internet access.
- Track model provenance throughout the development lifecycle to detect unauthorized changes.
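To make the "safer formats" recommendation concrete, here is a short sketch assuming the safetensors and torch packages are installed. The file name and tensor names are illustrative:

```python
import torch
from safetensors.torch import save_file, load_file

# safetensors stores raw tensor data plus a JSON header, with no mechanism
# for embedding executable code, so loading an untrusted file cannot
# trigger code execution the way unpickling can.
weights = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

# Loading parses only tensor metadata and buffers -- unlike pickle.loads,
# nothing in the file is ever executed.
restored = load_file("model.safetensors")
print(restored["layer.weight"].shape)
```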
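And a minimal sketch of an integrity check before deserialization, using only the standard library. A real deployment would use proper signatures (e.g., Sigstore or GPG) with the digest published out-of-band; the pinned SHA-256 comparison below is the simplest stand-in, and the helper name and file path are assumptions:

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash the raw artifact bytes without deserializing anything.
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Producer side: record the digest when the model is built and publish it
# out-of-band (a signed manifest, a release page, etc.).
artifact = Path("model.safetensors")
expected = sha256_of(artifact)

# Consumer side: recompute and compare before any deserialization.
# hmac.compare_digest avoids timing side channels in the comparison.
actual = sha256_of(artifact)
if not hmac.compare_digest(actual, expected):
    raise RuntimeError(f"Integrity check failed for {artifact}; refusing to load.")
# Only load the model after the check passes.
```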