Runtime Forensics
Runtime forensics is the replay foundation in AISecOps v0.5 for reconstructing how an agentic AI system reached a runtime decision. It connects prompts, skills, retrieved context, memory, tool results, policy checks, approvals, and execution outcomes into replayable evidence.
What Runtime Forensics Means
AISecOps reconstructs agent execution history so security and platform teams can understand what happened, why it happened, and which instruction source influenced the decision. The goal is not only observability; it is forensic reconstruction of autonomous runtime behavior.
Why Logs Are Not Enough
Traditional logs are event records. They can show that a tool was called, a request failed, or a policy returned a result. AISecOps replay logs are decision-chain evidence: they preserve the execution plan, provenance inputs, capability result, policy result, and final runtime decision in a structured JSONL format.
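A minimal sketch of what one decision-chain line in the JSONL audit might look like. The field names follow the stages described in this document; they are assumptions for illustration, not a published AISecOps schema.

```python
import json

# Illustrative decision-chain record: one JSON object per JSONL line.
# Field names mirror the stages described above (plan, provenance,
# capability, policy, decision); the exact schema is an assumption.
event = {
    "trace_id": "run-123",
    "execution_plan_id": "plan-8f42",
    "provenance": {"user_prompt": "summarize the vendor renewal"},
    "capability_result": {"capability": "cap_contract_review", "status": "allowed"},
    "policy_result": {"policy": "external_email_recipients", "status": "approval_required"},
    "final_decision": "block_pending_approval",
}

# Append-only: each runtime decision becomes one replayable line.
line = json.dumps(event)
print(line)
```

The point of the record is that the plan, the inputs, and the decision travel together, so an investigator never has to join separate event streams after the fact.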
Replayable Audit Model
Prompt / Skill / Memory
↓
Execution Plan
↓
Capability Check
↓
Policy Evaluation
↓
Approval / Block / Execute
↓
Structured JSONL Audit
↓
Replay Engine
Each stage emits structured evidence that can be replayed without trusting the model to explain itself after the fact.
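Assuming each stage appends its evidence to an ordered trace rather than mutating earlier records, the pipeline above can be sketched as follows. The `record` helper and stage names are illustrative, not the AISecOps API.

```python
import json
from typing import Any

# Sketch of the stage pipeline above: every stage appends structured
# evidence, so the full decision chain can be replayed later.
def record(trace: list, stage: str, **evidence: Any) -> None:
    trace.append({"stage": stage, **evidence})

trace: list = []
record(trace, "execution_plan", plan_id="plan-8f42")
record(trace, "capability_check", capability="cap_contract_review", status="allowed")
record(trace, "policy_evaluation", policy="external_email_recipients", status="approval_required")
record(trace, "final_decision", decision="block_pending_approval")

# Serialize the whole chain as JSONL audit evidence.
audit_jsonl = "\n".join(json.dumps(e) for e in trace)
```

Append-only emission is what makes the trace trustworthy: later stages cannot rewrite the evidence left by earlier ones.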
Instruction Provenance
AISecOps tracks instruction and context sources as first-class provenance fields. A replay trace can identify whether a decision was influenced by:
- user_prompt
- system_prompt
- skill
- retrieval_chunk
- memory
- tool_result
- agent_message
This distinction matters during incident response because policy violations often originate from indirect instructions rather than the original user request.
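A small incident-response sketch of that distinction: split a trace's provenance into direct and indirect instruction sources. The source labels come from the list above; the flat dict shape is an assumption for illustration.

```python
# Provenance sources that come from the original request itself.
DIRECT_SOURCES = {"user_prompt", "system_prompt"}

def indirect_influences(provenance: dict) -> dict:
    """Provenance entries that did not come from the original request."""
    return {src: val for src, val in provenance.items() if src not in DIRECT_SOURCES}

provenance = {
    "user_prompt": "summarize the vendor renewal",
    "retrieval_chunk": "contract-renewal-2026.md#section-4",
    "tool_result": "crm.lookup_vendor",
}
suspects = indirect_influences(provenance)
# suspects holds the retrieval_chunk and tool_result entries to review first
```

During an incident, the indirect entries are the first candidates for injected instructions, since the user's own prompt is usually already known to responders.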
Replay CLI
The replay workflow starts from a trace identifier and the JSONL audit file produced by the runtime control plane.
aisecops-replay --trace-id run-123 --audit-file audit/events.jsonl
Example Replay Output
trace_id: run-123
provenance:
  user_prompt: "summarize the vendor renewal"
  retrieval_chunk: contract-renewal-2026.md#section-4
  skill: vendor_review
  tool_result: crm.lookup_vendor
execution_plan_id: plan-8f42
capability_result:
  capability: cap_contract_review
  status: allowed
policy_result:
  policy: external_email_recipients
  status: approval_required
  reason: recipient_not_allowlisted
final_decision: block_pending_approval
The replay output is designed for investigation workflows, not model-generated justification. It gives responders the runtime facts needed to reconstruct the decision boundary.
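A rough equivalent of the replay lookup in a few lines of Python: scan the JSONL audit file and yield every event belonging to one trace. The per-line event shape is an assumption based on the example output; this is a sketch, not the replay engine itself.

```python
import json
from typing import Iterator

def replay(audit_file: str, trace_id: str) -> Iterator[dict]:
    """Yield the audit events for one trace, in emission order."""
    with open(audit_file) as f:
        for line in f:
            event = json.loads(line)
            if event.get("trace_id") == trace_id:
                yield event

# Usage (paths are illustrative):
# for event in replay("audit/events.jsonl", "run-123"):
#     print(event.get("final_decision"))
```

Because the audit file is line-delimited JSON, replay is a sequential scan with no database dependency, which suits offline incident-response tooling.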
Enterprise Use Cases
- incident response for agent-initiated actions
- audit review of high-risk tool calls
- policy drift analysis across model, prompt, and skill changes
- approval reconstruction for regulated workflows
- agent behavior debugging across planning, evaluation, and execution
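As a sketch of the policy-drift use case above: compare the outcome of the same policy across two runs, for example before and after a model, prompt, or skill change. The event shape is illustrative.

```python
from typing import Optional

def policy_status(events: list, policy: str) -> Optional[str]:
    """Return the recorded status for a policy in a run's audit events."""
    for event in events:
        result = event.get("policy_result", {})
        if result.get("policy") == policy:
            return result.get("status")
    return None

# Two runs of the same workflow, e.g. before and after a skill change.
before = [{"policy_result": {"policy": "external_email_recipients", "status": "allowed"}}]
after = [{"policy_result": {"policy": "external_email_recipients", "status": "approval_required"}}]

drifted = policy_status(before, "external_email_recipients") != policy_status(after, "external_email_recipients")
```

A status change for the same policy across otherwise-identical runs is the drift signal worth investigating.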
Current Limits
- no replay UI yet
- no cryptographic provenance signing yet
- no distributed trace reconciliation yet
Roadmap
- replay API
- timeline UI
- execution graph visualization
- provenance signing
- distributed runtime correlation