EresusSecurity
Deserialization Threats

Keras HDF5 Lambda Layer Arbitrary Code Execution

Yiğit İbrahim Sağlam, Offensive Security Specialist
April 10, 2026
Updated: April 27, 2026
5 min read

Overview

Keras is the high-level neural-network API built into TensorFlow. Before the widespread adoption of the modern, secure .keras format (v3), millions of developers saved their neural networks in the legacy HDF5 (.h5) format.

The PAIT-KERAS-100 detection fires when Eresus Sentinel intercepts a legacy Keras .h5 file that uses weaponized Lambda layers or forged custom objects to execute malicious code during a standard keras.models.load_model() call.

The core vulnerability exists because the legacy HDF5 format stores custom Python code structures alongside the model weights. When a model uses a custom Lambda layer (a layer defined by arbitrary Python code rather than standard backend matrix operations), Keras serializes the function's bytecode into the .h5 file using Python's marshal module, then blindly reconstructs and trusts that bytecode on load. An attacker can therefore replace the mathematical function with an Arbitrary Code Execution (ACE) payload.
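The mechanism can be demonstrated with the standard library alone. The sketch below is illustrative, not Keras's exact code path: it mimics the marshal-based round trip that legacy Keras helpers (func_dump/func_load) perform, using a harmless function in place of a payload.

```python
import marshal
import types

# Benign stand-in for a Lambda layer's function. In a legacy .h5 file,
# the layer's code object is stored as exactly this kind of marshalled
# bytecode and rebuilt without any validation at load time.
def activation(x):
    return x * 2  # an attacker would replace this body with os.system(...)

# "Save": serialize the raw code object, as the legacy HDF5 writer does.
payload = marshal.dumps(activation.__code__)

# "Load": reconstruct a callable from untrusted bytes. Nothing inspects
# what the bytecode actually does before it becomes executable.
code = marshal.loads(payload)
restored = types.FunctionType(code, globals(), "activation")

print(restored(21))  # prints 42 -- the deserialized bytecode runs as-is
```

Because marshal (like pickle) performs no safety checks, whatever instructions the bytecode contains are executed with the full privileges of the loading process.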

Why safe_mode Fails

Many developers assume that calling load_model(safe_mode=True) protects them universally. However, safe_mode is engineered for the newer .keras (v3) format. For legacy .h5 files, the parameter is frequently ignored entirely, so the hidden payload executes the moment the model is loaded.
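Since safe_mode cannot be relied on for HDF5, one defensive pattern is to gate on the file extension before load_model() ever runs. The wrapper below is a hypothetical sketch (guarded_load and allow_legacy are illustrative names, not a Keras API); it returns the extension here so it stays runnable without TensorFlow installed.

```python
import os

ALLOWED_FORMATS = {".keras"}  # the modern v3 format actually honors safe_mode

def guarded_load(path, allow_legacy=False):
    """Refuse legacy HDF5 models before keras.models.load_model() ever runs,
    since safe_mode=True is not enforced for .h5 files."""
    ext = os.path.splitext(path)[1].lower()
    if ext in {".h5", ".hdf5"} and not allow_legacy:
        raise ValueError(
            f"refusing legacy HDF5 model {path!r}: safe_mode is not "
            "enforced for this format; re-export it as .keras first"
        )
    if ext not in ALLOWED_FORMATS and not allow_legacy:
        raise ValueError(f"unsupported model format: {ext!r}")
    # A real implementation would now call:
    #   keras.models.load_model(path, safe_mode=True)
    return ext
```

The point of the guard is that the decision happens before any deserialization code touches the file, so a malicious .h5 never reaches the vulnerable loader.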

How The Attack Works

Cybercriminals upload seemingly legitimate deep learning models to open-source model hubs. The underlying math is correct, but one specific Lambda layer contains a reverse-shell wrapper disguised as a standard activation function.

sequenceDiagram
    participant Attacker
    participant File_Registry as Public Model Hub
    participant Keras_Backend as Data Scientist's Runtime
    participant OS as Underlying Server

    Attacker->>Attacker: Compiles Keras Model with malicious Lambda layer
    Attacker->>File_Registry: Distributes 'ImageClassifier_V1.h5'
    Keras_Backend->>File_Registry: Downloads legacy model logic
    Keras_Backend->>Keras_Backend: Starts executing `keras.models.load_model()`
    Keras_Backend->>Keras_Backend: Encounters Lambda object & deserializes its marshalled bytecode
    Keras_Backend->>OS: Reconstructed function invokes the attacker's `os.system` hook
    OS-->>Attacker: Initiates outbound backdoor connection to Attacker's C2

Key Points

  • Unsuspecting Deserialization: The attack bypasses standard behavioral security because the Keras runtime itself legitimately intends to execute the Python code extracted from the Lambda layer. To the underlying operating system, this looks like normal MLOps application behavior.
  • Ubiquity of Legacy Code: Despite being deprecated, massive archives of perfectly trained .h5 models are still aggressively circulated across academic institutions and corporate training material.
  • CVE Proliferation: Specific CVEs (e.g., CVE-2024-3660, CVE-2025-9905) track variations of Keras and TensorFlow failing to properly sanitize untrusted Lambda-layer payloads.

Impact

Executing an unvalidated .h5 model hands the full authority of the Python runtime to the threat actor. If the Keras environment is running on an AWS SageMaker node or within a corporate Google Colab profile, the attacker can immediately harvest the underlying IAM credentials and cloud-storage access keys, and hijack hardware resources (cryptojacking on high-VRAM instances). Furthermore, the malware fires during load_model() itself, rendering the model's actual predictions irrelevant.

Best Practices

  • Never Trust Untrusted .h5 Files: Explicitly prohibit the ingest of legacy Keras models produced by unknown developers or found on public message boards. Only load externally sourced models that use the modern, secure .keras format.
  • Disable Custom Objects: If loading a legacy HDF5 model is absolutely necessary, configure the load to strictly reject custom objects (pass custom_objects=None, and enforce compile=False where mathematically viable) to disrupt the deserialization pathway.
  • Air-Gapped Assessment: When pulling dependencies from open model registries, ensure initial validation always runs inside ephemeral sandboxes (such as isolated Docker containers with no ingress/egress networking).
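These controls can be supplemented with a crude pre-flight scan: legacy .h5 files embed the architecture as a JSON string, so a Lambda layer's class name is visible in the raw bytes without deserializing anything. The function and marker strings below are a hypothetical sketch; a production scanner should parse the HDF5 structure properly (e.g., with h5py) rather than grep bytes, since JSON spacing and encodings vary.

```python
# Byte markers that commonly appear in a legacy .h5 model config that
# contains a Lambda layer. Illustrative, not exhaustive.
SUSPICIOUS_MARKERS = (b'"class_name": "Lambda"', b'"function":')

def flag_lambda_layers(path):
    """Return True if the file's raw bytes contain Lambda-layer markers.

    Runs before any deserialization, so the check itself is safe even
    against a weaponized file.
    """
    with open(path, "rb") as fh:
        blob = fh.read()
    return any(marker in blob for marker in SUSPICIOUS_MARKERS)
```

A flagged file is not necessarily malicious (Lambda layers have legitimate uses), but it should never be loaded outside an air-gapped sandbox.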

Remediation

If an Eresus Sentinel PAIT-KERAS-100 alarm registers, the engine intercepted an attempt to evaluate unauthorized Python code embedded inside an HDF5 wrapper. Immediately terminate the loading script and isolate the affected machine, as portions of the payload may already be resident in memory. Purge the malicious .h5 file from the corporate network, identify its source repository, and migrate the pipeline to modern .safetensors or .keras formats.

Further Reading

Enhance your knowledge of the structural deficiencies in legacy serialization formats:


📥 Eresus Sentinel Blocks Legacy File Deserialization Traps. A deprecated file format should not bankrupt your cloud infrastructure. Eresus Sentinel decodes the sub-layers of legacy .h5 archives, scanning the internal opcodes of custom Lambda modules, and instantly severs the execution thread the moment an unverified system command attempts to deserialize inside your PyTorch or Keras pipelines. Defend your deep-learning pipelines today.

Learn more | Book a Demo

FAQ

Is this risk limited to prompt injection?

No. In AI security, prompt injection is an important starting point, but it does not tell the whole story on its own. The retrieval layer, tool permissions, trust in model artifacts, sensitive data in logs, user authorization, and integration boundaries must all be evaluated together.

What should the first technical control be?

First, map which data the system can access, which actions it can take, and under which identity those actions run. Without this map, testing rarely goes beyond a handful of prompt attempts.

When is professional support needed?

If the AI application touches customer data, internal documents, production APIs, or agent flows that take automated actions, a professional security review is required. At that point the risk is no longer the model's answer, but the organization's internal authorization and data boundaries.