Launch

pip install "plyra-guard[sidecar]"
plyra-guard serve
# → http://localhost:8765
Or start from code:
guard.serve(host="0.0.0.0", port=8080)
Note the port mismatch: the default sidecar port in the YAML config is 8080, while the plyra-guard serve CLI defaults to 8765. Both can be overridden via the sidecar section of your YAML config.
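The sidecar section mentioned above might look like this (the key names here are an assumption for illustration; check your generated config for the exact schema):

```yaml
# Hypothetical sidecar config sketch — key names assumed, not confirmed
sidecar:
  host: 0.0.0.0
  port: 8080   # override so the CLI and code-launched sidecar agree
```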

What you can see

  • Live action feed — every tool call as it happens, with verdict and latency
  • Policy hit rates — which rules fire most
  • Session replay — reconstruct any agent session from the audit log
  • Block details — full context on every blocked call: action type, intent, matched rule, and reason
The default CORS setting (allow_origins=["*"]) is fine for local development. Lock it down before exposing the dashboard beyond localhost.
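Session replay builds on the same audit API described below. A minimal sketch, assuming entries filtered by task_id come back in session order (both the field and the ordering are assumptions):

```python
from plyra_guard import AuditFilter

# Sketch: replay one agent session from the audit log.
# Assumes get_audit_log returns a task's entries oldest-first.
session = guard.get_audit_log(AuditFilter(task_id="task-123", limit=100))
for step, entry in enumerate(session, start=1):
    print(f"{step:>3}. {entry.action_type} -> {entry.verdict.value}")
```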

Query the audit log

from plyra_guard import AuditFilter, Verdict

# Get the 20 most recent blocked actions
entries = guard.get_audit_log(AuditFilter(
    verdict=Verdict.BLOCK,
    limit=20,
))

for entry in entries:
    print(f"{entry.action_type} | {entry.verdict.value} | {entry.duration_ms}ms")

AuditFilter fields

Field        Type              Default  Description
agent_id     str | None        None     Filter by agent
task_id      str | None        None     Filter by task
verdict      Verdict | None    None     Filter by verdict type
action_type  str | None        None     Filter by action type
from_time    datetime | None   None     Start of time range
to_time      datetime | None   None     End of time range
limit        int               100      Max entries to return
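The time-range fields compose with the others. For example, blocked actions from the last hour for a single agent (a sketch using only the fields above):

```python
from datetime import datetime, timedelta
from plyra_guard import AuditFilter, Verdict

one_hour_ago = datetime.now() - timedelta(hours=1)
recent_blocks = guard.get_audit_log(AuditFilter(
    agent_id="default",
    verdict=Verdict.BLOCK,
    from_time=one_hour_ago,  # to_time left as None: open-ended range
    limit=50,
))
```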

Explain a decision

from plyra_guard import ActionIntent

intent = ActionIntent(
    action_type="file.delete",
    tool_name="delete_file",
    parameters={"path": "/etc/passwd"},
    agent_id="default",
)

explanation = guard.explain(intent)
print(explanation)
# Human-readable description of which evaluators ran,
# which rule matched, and why the verdict was reached

Metrics snapshot

metrics = guard.get_metrics()
print(f"Total: {metrics.total_actions}")
print(f"Blocked: {metrics.blocked_actions}")
print(f"Avg risk: {metrics.avg_risk_score:.2f}")
print(f"Avg latency: {metrics.avg_duration_ms:.1f}ms")
Export as Prometheus text format:
print(metrics.to_prometheus())
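If you want Prometheus to scrape these without running the sidecar, a minimal stdlib endpoint works. This sketch assumes to_prometheus() returns the standard text exposition format as a str:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Serve a fresh metrics snapshot on every /metrics scrape."""

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = guard.get_metrics().to_prometheus().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 9100), MetricsHandler).serve_forever()
```

Port 9100 is just a conventional choice for exporters; pick any free port and point your Prometheus scrape config at it.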