AI Security Hub
A practical hub for LLM, RAG, agent, MCP, model file, and MLOps security decisions before AI systems reach production.
Prompt injection that becomes data exposure or unauthorized tool use.
RAG systems that retrieve sensitive content without permission-aware controls.
Untrusted model artifacts entering production without quarantine or provenance.
What you will find here
Model risks
Unsafe file formats, model intake decisions, source trust, and supply-chain exposure.
Agent and MCP security
Tool permissions, MCP registration, identity boundaries, and unintended action paths.
RAG and data boundaries
Retrieval rules, sensitive content leakage, tenant boundaries, and audit evidence.
This first version uses Eresus resources, advisories, and tools. It does not import third-party scan results or invent model status.
Open data sources
A ProtectAI-style view should combine public risk databases, model metadata, and Eresus validation. External records stay labeled as signals until Sentinel or an Eresus review verifies them.
Open database of GPAI failure modes, reports, vulnerabilities, and measurements with evidence metadata.
Use for model, agent, and AI application risk signals.
Package and version-level vulnerability data for open-source dependencies in ecosystems like PyPI, npm, Go, and Rust.
Use for AI stack dependencies such as transformers, gradio, llama-index, langchain, torch, and tensorflow.
Model repository metadata such as tags, files, last modification date, safetensors info, and security scan status fields.
Use for inventory fields; do not treat metadata alone as a security verdict.
Tactics, techniques, mitigations, and case studies for adversarial threats against AI systems.
Use for classification and shared vocabulary, not model scan status.
Real-world AI incidents and near harms collected for learning from deployed system failures.
Use for context and trend evidence around deployed AI harms.
Model installation flow with unsafe model deserialization risk.
Crafted model archive configuration can affect model loading safety.
Scanner bypass signal around unsafe pickle globals in model files.
Prompt injection measurement across multiple model families.
Related advisories
Unauthenticated Remote Code Execution via Arbitrary Command Injection in MCPHub Server Registration
MCPHub accepts attacker-controlled command and args values during server registration and spawns them through STDIO, enabling full remote code execution on the host.
Authentication Bypass via skipAuth Configuration Grants Full Admin Access in MCPHub
When skipAuth is enabled, MCPHub bypasses both authentication and admin authorization checks, allowing any unauthenticated user to access privileged API functionality.