Security Resources
SEO-first resource hubs for AI security teams, covering benchmarks, factuality, RAG evaluation, model reports, and operational references.
API Reference
Public machine-readable endpoints, discovery files, and reference surfaces currently exposed by the Eresus website.
LLM Red Teaming
A hub for prompt injection, jailbreaks, tool misuse, and the operational mindset behind adversarial testing of language models.
Foundation Model Reports
A report hub for model-specific risk notes, security posture snapshots, and practitioner-oriented interpretation of model behavior.
Language Model Security DB
A curated hub for security-relevant model issues, integration weaknesses, and recurring attack classes observed across the AI ecosystem.
Running Benchmarks
Practical guidance on operationalizing benchmark suites, setting release gates, and tracking security-relevant regressions.
Evaluating Factuality
A resource hub for measuring groundedness, answer reliability, source quality, and the operational security side of factuality failures.
Evaluating RAGs
Operational guidance for testing retrieval quality, permission boundaries, poisoning risk, and downstream answer safety in RAG systems.
Minimizing Hallucinations
Practical patterns for reducing hallucinations through retrieval design, evaluation, guardrails, and workflow-specific quality gates.
Config Validator
A resource page for configuration hygiene across prompts, retrieval, MCP servers, environment secrets, and AI deployment defaults.