Model Security
Assessment and hardening for model files, inference paths, external weights, and unsafe runtime behaviors across modern AI stacks.
Key Risks
Remote code execution through unsafe model loading paths (illustrated in the sketch below).
Stealthy model poisoning and data exfiltration from inference environments.
Supply-chain compromise inherited by every downstream environment that uses the same artifact.
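The first of these risks is straightforward to demonstrate. The sketch below (file name and shell command are illustrative, not taken from a real incident) shows how pickle-based model files can execute arbitrary code at load time via the __reduce__ hook:

```python
import pickle

class MaliciousPayload:
    # pickle's __reduce__ hook lets an object name an arbitrary callable
    # that the loader will invoke during deserialization.
    def __reduce__(self):
        import os
        return (os.system, ("echo 'code executed at model load time'",))

# An attacker serializes the payload into a "model" file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the victim triggers it simply by loading the artifact.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # runs os.system(...) during deserialization
```

This is why loading an untrusted pickle-based artifact, including through loaders built on pickle, should be treated as code execution rather than data parsing.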
Built For
MLOps teams importing third-party weights or model files.
Security teams governing AI supply chain risk.
Platform engineers exposing inference pipelines to internal or external users.
Use Cases
Audit pickle, GGUF, ONNX, Keras, and similar model artifacts.
Map unsafe deserialization and execution paths before models reach production (see the opcode-scan sketch after this list).
Build safer intake and quarantine flows for external models.
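As one concrete example of this audit and mapping work, the sketch below statically walks a pickle artifact's opcode stream without executing it and lists every global the file would import on load. The file name is a placeholder, and the STACK_GLOBAL handling is deliberately simplified; this is a rough intake check, not a complete scanner:

```python
import pickletools

# Opcodes that push string values, tracked for STACK_GLOBAL resolution.
STRING_OPS = {"UNICODE", "BINUNICODE", "SHORT_BINUNICODE", "BINUNICODE8"}

def imported_globals(path):
    """Statically list the (module, name) globals a pickle would import."""
    findings, strings = [], []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name == "GLOBAL":
                # arg is "module name", space separated.
                findings.append(tuple(arg.split(" ", 1)))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                # Simplification: assume the two most recent string pushes
                # are the module and the qualified name.
                findings.append((strings[-2], strings[-1]))
            elif opcode.name in STRING_OPS:
                strings.append(arg)
    return findings

for module, name in imported_globals("untrusted_model.pkl"):
    print(f"pickle would import {module}.{name}")
```

In an intake pipeline, anything outside a reviewed allowlist, and certainly names like os.system or builtins.eval, is grounds to quarantine the artifact before it ever reaches a loader.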
Related Content
The Overlooked Attack Surface: Hunting 0-Days in AI Model Files
When discussing cybersecurity in Artificial Intelligence, everyone fixates on API security, prompt injections, and web vulnerabilities. Meanwhile, ...
The Overlooked Threat in AI Models: Keras & Pickle File Vulnerabilities
While everyone focuses on prompt injection, a bigger threat hides in the background: AI model files (Keras, Pickle) that execute malicious code. Learn...
Critical Vulnerabilities in AI Frameworks (GGUF & MXNet): The Heap Overflow Threat
Model file formats like GGUF make running LLMs easy, but are they secure? Discover how malicious model files can trigger heap overflows and memory corruption...
Frequently Asked Questions
Do you only review deployed models?
No. We can assess pre-production model intake, artifact provenance, CI/CD controls, and runtime isolation as one workflow.
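For instance, a minimal provenance gate in CI might pin an approved digest per artifact and refuse to promote anything that does not match. This is a sketch only; the file name and digest below are placeholders, and real pipelines would store pinned digests outside the code:

```python
import hashlib

# Digests recorded when an artifact is reviewed and approved.
# The entry below is a placeholder, not a real model digest.
PINNED_DIGESTS = {
    "resnet50.onnx": "0" * 64,
}

def verify_artifact(path: str) -> None:
    """Raise if the artifact's SHA-256 does not match its pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    if PINNED_DIGESTS.get(path) != h.hexdigest():
        raise RuntimeError(f"{path}: unpinned or mismatched digest; quarantine")

# Run as a CI/CD gate before the model is promoted past intake.
verify_artifact("resnet50.onnx")
```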
Can you review open-source model imports?
Yes. That is one of the most common threat surfaces, especially when organizations rely on third-party or community artifacts.
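One common mitigation when importing community weights is to prefer formats that store raw tensors and cannot embed executable code. A minimal sketch, assuming the safetensors and torch packages are installed and using a placeholder file name:

```python
from safetensors.torch import load_file

# safetensors stores raw tensor data plus a JSON header; unlike
# pickle-based checkpoints, loading it cannot run embedded code.
state_dict = load_file("community_model.safetensors")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape), tensor.dtype)
```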
Need help validating this attack surface?
Talk with Eresus Security about scoped testing, threat modeling, and remediation priorities for this workflow.
Talk to Eresus