EresusSecurity

AI Security Hub

A practical hub for making LLM, RAG, agent, MCP, model file, and MLOps security decisions before AI systems reach production.

Risk signals

Prompt injection that escalates into data exposure or unauthorized tool use.

RAG systems that retrieve sensitive content without permission-aware controls.

Untrusted model artifacts entering production without quarantine or provenance.
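The third signal can be reduced to a gate at artifact intake. A minimal sketch, assuming a hypothetical policy in which code-executing formats are quarantined for review and everything else must match a known provenance digest (the suffix list, digest allowlist, and function name here are illustrative, not an Eresus API):

```python
import hashlib
from pathlib import Path

# Hypothetical policy: formats that can execute code on load stay
# quarantined until a human review clears them.
UNSAFE_SUFFIXES = {".pkl", ".pickle", ".pt", ".bin"}

# Hypothetical provenance record: SHA-256 digests of vetted artifacts.
APPROVED_DIGESTS = {
    # sha256 of the bytes b"test", used as a stand-in for a real artifact
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def intake_decision(path: Path) -> str:
    """Return 'quarantine', 'reject', or 'admit' for a model artifact."""
    if path.suffix in UNSAFE_SUFFIXES:
        return "quarantine"  # code-executing format: manual review first
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest not in APPROVED_DIGESTS:
        return "reject"      # no provenance record for this artifact
    return "admit"
```

The key design point is that the default outcome is never "admit": an artifact must either match a recorded digest or pass review.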

What you will find here

Model risks

Unsafe file formats, model intake decisions, source trust, and supply-chain exposure.

Agent and MCP security

Tool permissions, MCP registration, identity boundaries, and unintended action paths.
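Tool permissions of the kind described above are often enforced as per-identity allowlists checked before any tool call executes, so a prompt-injected agent cannot reach tools outside its granted set. A minimal sketch, with hypothetical identity and tool names:

```python
# Hypothetical grants: each agent identity gets an explicit tool set.
TOOL_GRANTS = {
    "support-agent": {"search_docs", "create_ticket"},
    "billing-agent": {"read_invoice"},
}

def authorize_tool_call(identity: str, tool: str) -> bool:
    """Allow a tool call only if the identity was explicitly granted it.

    Unknown identities get an empty grant set, so the default is deny.
    """
    return tool in TOOL_GRANTS.get(identity, set())
```

Because the check keys on the agent's identity rather than on anything in the prompt, injected instructions cannot widen the permission set.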

RAG and data boundaries

Retrieval rules, sensitive content leakage, tenant boundaries, and audit evidence.
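Permission-aware retrieval and tenant boundaries usually come down to filtering candidate chunks against the caller's identity and tenant before anything enters the prompt. A minimal sketch under that assumption (the `Chunk` shape and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    text: str
    tenant: str
    acl: frozenset  # principals allowed to read this chunk

def filter_retrieval(chunks, user: str, tenant: str):
    """Drop chunks the caller may not see *before* they reach the LLM.

    Both checks must pass: the chunk belongs to the caller's tenant,
    and the caller appears in the chunk's access-control list.
    """
    return [c for c in chunks if c.tenant == tenant and user in c.acl]
```

Filtering at retrieval time, rather than asking the model to withhold content, is what makes the boundary auditable: the excluded chunks never appear in the context window at all.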

Model and AI security knowledge base

This first version uses Eresus resources, advisories, and tools. It does not import third-party scan results or assert a model status it has not verified.

Open data sources

A ProtectAI-style view should combine public risk databases, model metadata, and Eresus validation. External records stay labeled as signals until Sentinel or an Eresus review verifies them.

Featured resources

Related advisories