Concepts

Sentinel documentation connects rule ID, evidence, severity, and remediation into one operating model.

Definition

The Sentinel concept model treats finding, evidence, severity, owner, release decision, and closure evidence as one operational risk record.

What is a finding?

A finding is a verifiable signal that Sentinel detects in a file, prompt, manifest, dependency, or runtime configuration. A finding is not always an exploit by itself; it must be interpreted together with its severity and evidence.

Evidence model

  • Rule ID: keeps the same finding traceable across reports, CI, and retests.
  • Evidence: shows the opcode, regex, AST node, manifest field, or metadata clue.
  • Fix hint: gives engineering the first actionable remediation direction.
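The record-centric model above can be sketched as a single data structure. The field names and rule ID below are illustrative assumptions, not Sentinel's actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of a Sentinel risk record; field names are
# illustrative, not the product's real schema.
@dataclass
class Finding:
    rule_id: str          # stable ID, traceable across reports, CI, and retests
    evidence: str         # opcode, regex, AST node, manifest field, or metadata clue
    severity: str         # CRITICAL / HIGH / MEDIUM / LOW / INFO
    fix_hint: str         # first actionable remediation direction
    owner: str = ""       # assigned once triaged
    closed: bool = False  # flipped only when a retest comes back clean

f = Finding(
    rule_id="PICKLE-001",
    evidence="REDUCE opcode at offset 42",
    severity="CRITICAL",
    fix_hint="Convert the checkpoint to safetensors",
)
```

Keeping the whole lifecycle on one record is what lets a rule ID stay stable from first detection through closure.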

Risk taxonomy

Sentinel findings are read across three layers: file or code signal, AI/LLM attack surface, and business impact. Unsafe pickle is first a file-format risk, then a supply-chain risk, and finally a code-execution impact on the runner that loads it.
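Sentinel's own detector is not shown here, but the pickle example can be made concrete with Python's standard pickletools module, which enumerates the opcodes that make a pickle dangerous to load:

```python
import pickle
import pickletools

# Opcodes that can trigger code execution when a pickle is loaded.
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def unsafe_pickle_opcodes(data: bytes) -> list[str]:
    """Return the dangerous opcodes present in a pickle stream."""
    return [op.name for op, arg, pos in pickletools.genops(data)
            if op.name in DANGEROUS_OPCODES]

# A pickle of plain data contains no dangerous opcodes...
assert unsafe_pickle_opcodes(pickle.dumps({"weights": [1, 2, 3]})) == []

# ...but an object whose __reduce__ smuggles in a callable does.
class Payload:
    def __reduce__(self):
        return (print, ("malicious",))

assert "REDUCE" in unsafe_pickle_opcodes(pickle.dumps(Payload()))
```

The opcode name and stream offset are exactly the kind of evidence the three-layer reading starts from: a file-format signal that escalates to supply-chain and code-execution impact.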

Operational checklist
  • File or code signal: what did the scanner observe?
  • Attack surface: model, prompt, agent, network, container, or secret?
  • Business impact: data leakage, code execution, cost blow-up, or release trust?
  • Closure evidence: does the same command produce a clean result?
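The closure-evidence question at the end of the checklist reduces to a set comparison between two runs of the same command. A minimal sketch with hypothetical rule IDs:

```python
def is_closed(before: set[str], after: set[str], rule_id: str) -> bool:
    """Closure evidence: the rule fired in the original scan and the retest
    (same command, same target) no longer reports it."""
    return rule_id in before and rule_id not in after

original = {"PICKLE-001", "PROMPT-007"}   # first scan
retest = {"PROMPT-007"}                   # rerun of the same command

assert is_closed(original, retest, "PICKLE-001")      # remediated
assert not is_closed(original, retest, "PROMPT-007")  # still present
```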

Release decision

CRITICAL and HIGH findings usually block promotion. MEDIUM findings should get an owner and tracking item. LOW and INFO findings are used for hygiene, inventory, and policy tuning.
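That promotion policy is mechanical enough to express as a gate. This is an illustrative sketch of the rule stated above, not Sentinel's API:

```python
BLOCKING = {"CRITICAL", "HIGH"}   # usually block promotion
TRACKED = {"MEDIUM"}              # need an owner and a tracking item

def release_gate(findings: list[dict]) -> dict:
    """Apply the promotion policy described in the text (illustrative)."""
    blocked = [f for f in findings if f["severity"] in BLOCKING]
    needs_owner = [f for f in findings
                   if f["severity"] in TRACKED and not f.get("owner")]
    return {
        "promote": not blocked,
        "blocking": [f["rule_id"] for f in blocked],
        "assign_owner": [f["rule_id"] for f in needs_owner],
    }

decision = release_gate([
    {"rule_id": "PICKLE-001", "severity": "HIGH"},
    {"rule_id": "DEP-042", "severity": "MEDIUM"},
    {"rule_id": "HYGIENE-9", "severity": "LOW"},   # hygiene only, no action
])
assert decision["promote"] is False
assert decision["assign_owner"] == ["DEP-042"]
```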

OWASP AI Exchange threat taxonomy

OWASP AI Exchange classifies threats to AI systems into three categories. Each Sentinel module maps directly to one or more of these categories.

  • Threats through use (input threats): covered by Prompt Firewall, Runtime Gateway, and MCP/Agent Security. Prompt injection, indirect injection, and living-off-the-land command relay fall here.
  • Development-time threats: covered by HuggingFace Guard and Supply Chain/AIBOM. Model poisoning, rogue checkpoints, poisoned training data, and dependency vulnerabilities fall here.
  • Runtime conventional threats: covered by API/Dashboard and Runtime Gateway. Authentication issues, rate-limit bypass, unauthorized access, and log manipulation fall here.

The Red Team/Evals module can run targeted probe suites against all three categories and cross-map results to OWASP LLM Top 10 2025 categories.
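The module-to-category mapping described above can be sketched as a simple lookup. Module names follow the prose; the category keys are illustrative labels, not a Sentinel config format:

```python
# Illustrative mapping of Sentinel modules to OWASP AI Exchange categories,
# as described in the text.
MODULE_CATEGORIES = {
    "Prompt Firewall": {"threats-through-use"},
    "Runtime Gateway": {"threats-through-use", "runtime-conventional"},
    "MCP/Agent Security": {"threats-through-use"},
    "HuggingFace Guard": {"development-time"},
    "Supply Chain/AIBOM": {"development-time"},
    "API/Dashboard": {"runtime-conventional"},
    # Red Team/Evals probes all three categories.
    "Red Team/Evals": {"threats-through-use", "development-time",
                       "runtime-conventional"},
}

def modules_for(category: str) -> list[str]:
    """Which modules cover a given OWASP AI Exchange category?"""
    return sorted(m for m, cats in MODULE_CATEGORIES.items() if category in cats)

assert modules_for("development-time") == [
    "HuggingFace Guard", "Red Team/Evals", "Supply Chain/AIBOM"]
```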

Agentic AI security (OWASP Agentic AI Top 10 2026)

The OWASP Agentic AI Security Top 10 2026 documents risks specific to tool-using and multi-step AI agents. The Sentinel MCP/Agent Security module directly validates most of these risks.

  • ASI01 Prompt Injection: direct and indirect command hijacking
  • ASI02 Excessive Permissions: least-privilege violations and overbroad permission scope
  • ASI03 Memory Manipulation: hijacking via conversation and vector memory channels
  • ASI04 Supply Chain: poisoned MCP servers and malicious tool swaps
  • ASI05–ASI10: insufficient monitoring, unsafe auth, data privacy, rate-limit abuse, dependent agent trust, and tool poisoning

Governance frameworks

Sentinel findings align with multiple AI governance and security frameworks. Each framework in the table can be used for compliance tracking via finding metadata.

  • OWASP LLM Top 10 2025: primary reference; Sentinel rule IDs map directly to LLM categories (LLM01–LLM10) in SARIF output
  • MITRE ATLAS: adversarial ML threat matrix; tactic and technique IDs (AML.T*, AML.M*) surface in evidence context
  • OWASP AI Exchange: comprehensive control catalog for runtime and development-time threats; Runtime Gateway and HuggingFace Guard map explicitly
  • NIST AI RMF: Govern, Map, Measure, and Manage functions; Sentinel findings support the Measure and Manage workflows
  • ISO/IEC 42001: AI management system standard; Sentinel AIBOM output supports ISO 42001 supply chain evidence requirements
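SARIF 2.1.0 has first-class support for external taxonomies, which is how a rule ID can carry an OWASP LLM category in report output. The skeleton below shows the mechanism; the rule ID, taxon, and message are illustrative, and the exact fields Sentinel emits may differ:

```python
import json

# Minimal SARIF 2.1.0 document tagging a finding with an external taxonomy.
# Rule ID and taxon are illustrative, not Sentinel's actual output.
sarif = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "Sentinel",
                            "rules": [{"id": "PICKLE-001"}]}},
        # Declare the taxonomy once per run...
        "taxonomies": [{
            "name": "OWASP LLM Top 10 2025",
            "taxa": [{"id": "LLM03", "name": "Supply Chain"}],
        }],
        # ...then reference it from each result via `taxa`.
        "results": [{
            "ruleId": "PICKLE-001",
            "level": "error",
            "message": {"text": "Unsafe pickle: REDUCE opcode present"},
            "taxa": [{"id": "LLM03",
                      "toolComponent": {"name": "OWASP LLM Top 10 2025"}}],
        }],
    }],
}

report = json.dumps(sarif, indent=2)  # ready for upload to CI or a dashboard
```

Because the taxonomy reference travels with the result, the same SARIF file can serve as compliance evidence for any framework that keys off those category IDs.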