Intro
Eresus Sentinel is a CLI-first security scanner for AI/LLM applications and model supply chains. It reports model artifact, prompt, agent, MCP, container, secret, and AI supply-chain risks, each with a rule ID, severity, CWE, OWASP LLM mapping, and a retest command. These docs explain what each Sentinel finding detects, why it matters, and how to fix it.
The goal is to make AI security documentation useful during engineering decisions: which file is risky, which finding should block a release, and which command to run for retest.
With Sentinel, you can:
- Find model artifact risks — pickle, PyTorch, ONNX, GGUF, safetensors, and compressed bundles.
- Test prompt and template security — prompt injection, unsafe Jinja2, RAG leakage, and guardrail bypass patterns.
- Validate agent and MCP surfaces — manifests, permissions, tool boundaries, and network exposure.
- Report in CI/CD — JSON, SARIF, JUnit, CSV, HTML, and Markdown outputs.
Who uses Sentinel?
Sentinel is not built for a single persona. The developer downloading a model, the platform team building release gates, the researcher testing an LLM app, and the security lead explaining risk all use the same finding IDs.
| Team | Question | Sentinel output |
|---|---|---|
| AI / ML | Can this model file be loaded safely? | Artifact findings, AIBOM, hash/provenance notes |
| AppSec | Can the prompt, RAG, or agent flow leak data? | Firewall, Jinja2, secret, network findings |
| Platform | Should this release stop or open an issue? | SARIF/JUnit, severity, release gate policy |
What Sentinel covers
- Artifact scanning — model files, archives, and model metadata.
- SAST — AI/ML anti-pattern checks across Python and multi-language code.
- Prompt Firewall — prompt injection, jailbreak, and output guardrail checks.
- Supply chain — dependency, HuggingFace repository, and model provenance checks.
Where Sentinel fits in the AI security landscape
Awesome AI Security lists consistently group the field around governance, attack techniques, red teaming, MCP security, model artifact scanning, guardrails, privacy, and supply-chain controls. Sentinel docs focus on the applied evidence layer: scan the asset, close the finding by rule ID, reproduce it in CI/CD, and explain risk through OWASP LLM and CWE language.
- ottosulin/awesome-ai-security
- TalEliyahu/Awesome-AI-Security
- AISecHub Awesome AI Security
- Awesome AI for Security
- Floating Pragma Awesome AI Security
How outputs are used
Use readable tables for local review, SARIF or JUnit in CI/CD, and Markdown/HTML for security reports. The same finding ID stays consistent for engineering and security teams.
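As a sketch of what consuming SARIF output in a pipeline looks like (the report shape follows the standard SARIF 2.1.0 schema; the rule IDs, messages, and the `should_block` policy below are hypothetical placeholders, not actual Sentinel output):

```python
# Minimal SARIF 2.1.0-shaped report; rule IDs and messages here are
# hypothetical placeholders, not real Sentinel findings.
sarif = {
    "runs": [{
        "results": [
            {"ruleId": "SENTINEL-ART-001", "level": "error",
             "message": {"text": "Pickle opcode allows arbitrary code execution"}},
            {"ruleId": "SENTINEL-PF-014", "level": "warning",
             "message": {"text": "Prompt template interpolates untrusted input"}},
        ]
    }]
}

def findings(report):
    """Yield (rule_id, level) for every result in every run."""
    for run in report.get("runs", []):
        for result in run.get("results", []):
            yield result.get("ruleId"), result.get("level", "warning")

def should_block(report):
    """Example gate: fail the pipeline if any finding is at SARIF 'error' level."""
    return any(level == "error" for _, level in findings(report))
```

Because the finding ID travels in `ruleId`, the same identifier your CI gate blocks on is the one engineering and security teams discuss and retest.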
Recommended workflow
- Start with a small known model directory so the team understands scanner behavior.
- Map rule IDs and severity levels to your release policy.
- Attach remediation and retest commands to CRITICAL/HIGH findings.
- Map findings to OWASP LLM, CWE, and your internal risk language.
Where to go next
- Install — prepare Sentinel locally.
- Rule Reference — all Sentinel detection rule categories.
- Prompt Firewall — test prompt, template, and tool-call boundaries.
- MCP / Agent Security — agent tool permissions and MCP surfaces.
- Severity Guide — how findings should be prioritized.
- CWE Mapping — cross-reference from CWE to Sentinel rules.
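The severity-to-policy step of the workflow above can be encoded directly. The severity names and gate actions in this sketch are an assumed convention for illustration, not Sentinel's fixed vocabulary:

```python
# Assumed severity levels and gate actions; adjust to your own release policy.
GATE_POLICY = {
    "CRITICAL": "block",   # fail the pipeline, require fix + retest
    "HIGH": "block",
    "MEDIUM": "ticket",    # open an issue, do not fail the build
    "LOW": "ticket",
    "INFO": "log",         # record only
}

def gate_action(severity: str) -> str:
    """Map a finding severity to a release-gate action, defaulting to 'ticket'."""
    return GATE_POLICY.get(severity.upper(), "ticket")
```

Keeping this mapping in one place means the answer to "should this release stop or open an issue?" is reviewable code rather than tribal knowledge.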
CLI
The shortest workflow starts with a single artifact scan, then expands to project and CI output.
```shell
sentinel artifact model.pt
sentinel artifact ./models/ -f sarif -o report.sarif
sentinel scan ./project/
```
FAQ
Does Sentinel replace a pentest?
No. Sentinel catches reproducible technical signals; live exploit chains, business-logic abuse, and risk acceptance still require manual security validation.
What search intent does Sentinel answer?
It serves teams searching for AI security scanner, LLM security scanner, prompt injection firewall, model artifact scanner, MCP security scanner, and AI supply chain security guidance.
Which command should run on day one?
Start with a small model directory: `sentinel artifact ./models/`. Then run `sentinel scan ./project/` and move SARIF output into CI.
How should findings be explained to customers?
Use OWASP LLM and business impact in the executive summary; include Sentinel rule ID, CWE, evidence, fix hint, and retest command in the technical appendix.
Eresus support
Turn the finding into an action your team can actually close.
If you need exploit evidence, prioritization, remediation direction, and retesting for an AI/LLM security program, Eresus can help scope the work with your team.