Pickle Deserialization Rules
Detects unsafe Python pickle streams, joblib artifacts, and embedded pickle payloads in ML model files.
The PICKLE rule family turns findings on this surface into actionable records with rule ID, severity, CWE, OWASP LLM mapping, owner, release decision, and retest command.
Pickle is executable by design. A model file from a registry, notebook, or vendor can import Python symbols and run attacker-controlled code during load.
Supported inputs
- .pkl
- .pickle
- .joblib
- .pt / .pth
- NumPy object arrays with pickle payloads
Typical attack scenarios
- A malicious model calls os.system during deserialization.
- A poisoned notebook artifact hides a pickle payload inside a compressed project export.
- A dependency downloads a model checkpoint that imports unexpected Python modules.
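The first scenario can be sketched with a benign stand-in: any object's `__reduce__` can name a callable that `pickle.load` invokes during deserialization. Here `len` stands in for `os.system`, and the `Payload` class is purely illustrative:

```python
import pickle

# Sketch of scenario 1: __reduce__ lets a pickled object name a callable
# that pickle.load invokes at load time. len() is a benign stand-in for
# os.system; never run this pattern against untrusted input.
class Payload:
    def __reduce__(self):
        return (len, ("executed-on-load",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # calls len("executed-on-load") during load
```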
Detection logic
Sentinel ties PICKLE evidence to reproducible signals such as file path, metadata, opcode, AST node, manifest field, dependency, or archive entry. The same signal should disappear when the finding is closed.
Triage
Do not read PICKLE findings as scanner noise. Verify the evidence first, map the finding to a severity-based release decision, and then produce closure evidence with the same Sentinel command.
- Source: where did the file, manifest, prompt, archive, or dependency come from?
- Impact: code execution, data leakage, supply chain, or resource consumption?
- Control: allowlist, hash, sandbox, egress policy, or secret rotation?
- Evidence: does the same rule category return clean after the fix?
Remediation
Remediation should change the risk boundary, not merely silence the finding: remove executable formats, pin source or hash, narrow tool permissions, rotate secrets, or add runtime sandboxing.
CI policy
```yaml
category: PICKLE
fail_on:
  - CRITICAL
  - HIGH
ticket_on:
  - MEDIUM
retest: "sentinel artifact ./models/ --rule PICKLE"
```

Rule index
| Rule ID | Severity | Title | CWE | Fix Hint |
|---|---|---|---|---|
| PICKLE-EXEC | CRITICAL | Dangerous Pickle Execution | CWE-502 | Do not load untrusted pickle files. Convert the artifact to a non-executable format. |
| PICKLE-GLOBAL-IMPORT | HIGH | Unexpected Global Import | CWE-502, CWE-829 | Restrict allowed globals and require signed model artifacts. |
| PICKLE-STRUCT | HIGH | Pickle Opcode Structural Tampering | CWE-915 | Reject structurally abnormal pickle artifacts during intake. |
PICKLE-EXEC — Dangerous Pickle Execution
CRITICAL

| Rule ID | PICKLE-EXEC |
|---|---|
| Category | PICKLE |
| Severity | CRITICAL |
| CWE | CWE-502 |
| OWASP LLM | LLM03 — Supply Chain |
| FP Risk | LOW |
| Owner | AI/ML platform or model release owner |
| Release decision | Block release; do not promote the artifact or code path until it is isolated. |
Description
Flags pickle opcode flows that resolve dangerous Python callables such as os.system, subprocess.run, eval, exec, or loader functions that execute code.
Why it matters
Pickle is executable by design. A model file from a registry, notebook, or vendor can import Python symbols and run attacker-controlled code during load.
When it fires
Sentinel fires this rule in the PICKLE category when it sees a GLOBAL or STACK_GLOBAL opcode followed by a high-risk module/function pair. The finding should be reported with reproducible evidence such as file name, metadata, opcode, AST node, or manifest field.
Evidence format
GLOBAL or STACK_GLOBAL opcode followed by a high-risk module/function pair.
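This evidence can be approximated with a short `pickletools` pass. The risk list below is illustrative, not Sentinel's actual ruleset, and STACK_GLOBAL pairs (which arrive on the stack) would need extra tracking that is omitted here:

```python
import pickle
import pickletools

# Illustrative high-risk pairs; a real ruleset is much larger.
HIGH_RISK = {("os", "system"), ("subprocess", "run"),
             ("builtins", "eval"), ("builtins", "exec")}

def high_risk_globals(blob: bytes):
    """Flag GLOBAL opcodes whose module/function pair is high-risk."""
    hits = []
    for op, arg, pos in pickletools.genops(blob):  # disassembly only
        if op.name == "GLOBAL" and arg:
            module, name = arg.split(" ", 1)
            if (module, name) in HIGH_RISK:
                hits.append((module, name, pos))
    return hits

# Handcrafted protocol-0 stream referencing os.system; it is only
# disassembled here, never passed to pickle.load.
malicious = b"cos\nsystem\n."
```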
Expected evidence
The report should include the affected file or manifest path, observed signal, rule ID, severity, owner, and retest command required for closure.
False-positive notes
False-positive probability is low. If evidence points directly to a file, opcode, secret pattern, path, or manifest field, treat it as real and require closure evidence.
Triage
- Owner: AI/ML platform or model release owner.
- Decision: Block release; do not promote the artifact or code path until it is isolated.
- Evidence: GLOBAL or STACK_GLOBAL opcode followed by a high-risk module/function pair.
- Closure: sentinel artifact ./models/ --rule PICKLE must return clean output.
How to fix
Replace pickle with safetensors or ONNX. If pickle is unavoidable, use a restricted unpickler with an explicit allowlist and load only trusted artifacts.
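The restricted-unpickler pattern follows the example in the Python `pickle` documentation: override `find_class` so only an explicit allowlist of globals can be resolved. The allowlist here is illustrative; restrict it to the classes your artifact actually needs.

```python
import builtins
import io
import pickle

# Illustrative allowlist of safe builtins; everything else is refused.
SAFE_BUILTINS = {"range", "complex", "set", "frozenset"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Resolve only allowlisted builtins; refuse all other imports.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def restricted_loads(blob: bytes):
    return RestrictedUnpickler(io.BytesIO(blob)).load()
```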
CLI
sentinel artifact ./models/ --rule PICKLE

Policy example
```yaml
rules:
  PICKLE-EXEC:
    owner: "AI/ML platform or model release owner"
    fail_on: ["CRITICAL", "HIGH"]
    retest: "sentinel artifact ./models/ --rule PICKLE"
```

Expected output
PICKLE-EXEC CRITICAL
Dangerous Pickle Execution
Do not load untrusted pickle files. Convert the artifact to a non-executable format.

Example
```python
import pickle

# Unsafe: pickle.load executes any code embedded in the artifact.
with open("model.pkl", "rb") as file:
    model = pickle.load(file)
```

```python
# Safer: safetensors stores raw tensors, not executable objects.
from safetensors.torch import load_file

weights = load_file("model.safetensors")
```

Related rules
- PICKLE-GLOBAL-IMPORT: Unexpected Global Import
- PICKLE-STRUCT: Pickle Opcode Structural Tampering
PICKLE-GLOBAL-IMPORT — Unexpected Global Import
HIGH

| Rule ID | PICKLE-GLOBAL-IMPORT |
|---|---|
| Category | PICKLE |
| Severity | HIGH |
| CWE | CWE-502, CWE-829 |
| OWASP LLM | LLM03 — Supply Chain |
| FP Risk | MEDIUM |
| Owner | AI/ML platform or model release owner |
| Release decision | Treat as a release gate; remediation or explicit risk acceptance is required. |
Description
Detects pickle streams importing modules outside a trusted ML allowlist during artifact load.
Why it matters
Pickle is executable by design. A model file from a registry, notebook, or vendor can import Python symbols and run attacker-controlled code during load.
When it fires
Sentinel fires this rule in the PICKLE category when it sees a GLOBAL opcode referencing modules such as posix, nt, subprocess, socket, urllib, importlib, or sitecustomize. The finding should be reported with reproducible evidence such as file name, metadata, opcode, AST node, or manifest field.
Evidence format
GLOBAL opcode references modules such as posix, nt, subprocess, socket, urllib, importlib, or sitecustomize.
Expected evidence
The report should include the affected file or manifest path, observed signal, rule ID, severity, owner, and retest command required for closure.
False-positive notes
False-positive probability is medium. Verify source, expected use, and owner first; add an allowlist if needed, but do not remove evidence from the report.
Triage
- Owner: AI/ML platform or model release owner.
- Decision: Treat as a release gate; remediation or explicit risk acceptance is required.
- Evidence: GLOBAL opcode references modules such as posix, nt, subprocess, socket, urllib, importlib, or sitecustomize.
- Closure: sentinel artifact ./models/ --rule PICKLE must return clean output.
How to fix
Review the artifact provenance, pin source checksums, and allow only expected model classes and tensor containers.
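Checksum pinning can be as simple as comparing the artifact's SHA-256 digest against the value recorded in the release manifest before any load is attempted. A minimal sketch, with a hypothetical manifest entry:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches its pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Hypothetical release-manifest entry, for demonstration only.
artifact = b"model-bytes"
pinned = hashlib.sha256(artifact).hexdigest()
```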
CLI
sentinel artifact ./models/ --rule PICKLE

Policy example
```yaml
rules:
  PICKLE-GLOBAL-IMPORT:
    owner: "AI/ML platform or model release owner"
    fail_on: ["CRITICAL", "HIGH"]
    retest: "sentinel artifact ./models/ --rule PICKLE"
```

Expected output
PICKLE-GLOBAL-IMPORT HIGH
Unexpected Global Import
Restrict allowed globals and require signed model artifacts.

Example
```python
import pickle

# Unsafe: pickle.load executes any code embedded in the artifact.
with open("model.pkl", "rb") as file:
    model = pickle.load(file)
```

```python
# Safer: safetensors stores raw tensors, not executable objects.
from safetensors.torch import load_file

weights = load_file("model.safetensors")
```

Related rules
- PICKLE-EXEC: Dangerous Pickle Execution
- PICKLE-STRUCT: Pickle Opcode Structural Tampering
PICKLE-STRUCT — Pickle Opcode Structural Tampering
HIGH

| Rule ID | PICKLE-STRUCT |
|---|---|
| Category | PICKLE |
| Severity | HIGH |
| CWE | CWE-915 |
| OWASP LLM | LLM03 — Supply Chain |
| FP Risk | MEDIUM |
| Owner | AI/ML platform or model release owner |
| Release decision | Treat as a release gate; remediation or explicit risk acceptance is required. |
Description
Finds malformed stack behavior, unexpected reducers, or unusual persistent IDs that can hide execution paths from shallow scanners.
Why it matters
Pickle is executable by design. A model file from a registry, notebook, or vendor can import Python symbols and run attacker-controlled code during load.
When it fires
Sentinel fires this rule in the PICKLE category when it sees reducer opcode chains, persistent_load markers, or stack imbalance around object construction. The finding should be reported with reproducible evidence such as file name, metadata, opcode, AST node, or manifest field.
Evidence format
Reducer opcode chains, persistent_load markers, or stack imbalance around object construction.
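The reducer and persistent-ID signals can be approximated with a `pickletools` pass that records the relevant opcodes; full stack-balance tracking is beyond this sketch, and the opcode set is illustrative:

```python
import pickle
import pickletools

# Reducer and persistent-ID opcodes that warrant a closer look.
SUSPECT = {"REDUCE", "PERSID", "BINPERSID"}

def structural_flags(blob: bytes):
    """List suspicious structural opcodes present in the stream."""
    return sorted({op.name for op, _, _ in pickletools.genops(blob)
                   if op.name in SUSPECT})

class Reducer:
    # Benign reducer standing in for a hidden execution path.
    def __reduce__(self):
        return (len, ("x",))
```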
Expected evidence
The report should include the affected file or manifest path, observed signal, rule ID, severity, owner, and retest command required for closure.
False-positive notes
False-positive probability is medium. Verify source, expected use, and owner first; add an allowlist if needed, but do not remove evidence from the report.
Triage
- Owner: AI/ML platform or model release owner.
- Decision: Treat as a release gate; remediation or explicit risk acceptance is required.
- Evidence: Reducer opcode chains, persistent_load markers, or stack imbalance around object construction.
- Closure: sentinel artifact ./models/ --rule PICKLE must return clean output.
How to fix
Re-export the model from a trusted build pipeline and compare the artifact hash against a signed release manifest.
CLI
sentinel artifact ./models/ --rule PICKLE

Policy example
```yaml
rules:
  PICKLE-STRUCT:
    owner: "AI/ML platform or model release owner"
    fail_on: ["CRITICAL", "HIGH"]
    retest: "sentinel artifact ./models/ --rule PICKLE"
```

Expected output
PICKLE-STRUCT HIGH
Pickle Opcode Structural Tampering
Reject structurally abnormal pickle artifacts during intake.

Example
```python
import pickle

# Unsafe: pickle.load executes any code embedded in the artifact.
with open("model.pkl", "rb") as file:
    model = pickle.load(file)
```

```python
# Safer: safetensors stores raw tensors, not executable objects.
from safetensors.torch import load_file

weights = load_file("model.safetensors")
```

Related rules
- PICKLE-EXEC: Dangerous Pickle Execution
- PICKLE-GLOBAL-IMPORT: Unexpected Global Import