
ActionGuard

The main entry point. Assembles the evaluation pipeline, audit log, rollback system, and multi-agent trust ledger.

Constructor

from plyra_guard import ActionGuard

guard = ActionGuard(config=None)
config (GuardConfig | None, default: None): A GuardConfig Pydantic model. If None, uses GuardConfig() defaults.

ActionGuard.default()

Create an instance with sensible defaults. No config file needed — good for quick starts and testing.
guard = ActionGuard.default()

ActionGuard.from_config()

Create an instance from a YAML config file.
guard = ActionGuard.from_config("guard_config.yaml")
path (str, required): Path to a YAML configuration file.
Returns (ActionGuard): A configured ActionGuard instance.

guard.protect()

Decorator to protect a function with ActionGuard.
import os

from plyra_guard import RiskLevel

@guard.protect("file.delete", risk_level=RiskLevel.HIGH)
def delete_file(path: str) -> str:
    os.remove(path)
    return f"Deleted {path}"
action_type (str, required): Hierarchical action descriptor (e.g. "file.delete", "db.execute").
risk_level (RiskLevel, default: RiskLevel.MEDIUM): Baseline risk level for this action. One of LOW, MEDIUM, HIGH, CRITICAL.
rollback (bool, default: True): Whether to capture a snapshot before execution for rollback support.
tags (list[str] | None, default: None): Optional tags for categorization and filtering.
Returns (Callable): Decorator function. The wrapped function gains _plyra_guard_protected = True.

guard.wrap()

Wrap framework-native tools with ActionGuard protection. Auto-detects the framework and routes to the appropriate adapter.
safe_tools = guard.wrap([read_file, write_file, delete_file])
tools (list[Any], required): List of framework-native tool objects (LangChain tools, plain functions, etc.).
Returns (list[Any]): Wrapped tools in their native format.
Do not use guard.wrap() with LangGraph. See framework integrations for the required pattern.

guard.evaluate()

Evaluate an ActionIntent without executing (dry-run).
result = guard.evaluate(intent)
intent (ActionIntent, required): The action to evaluate.
Returns (EvaluatorResult): The final result, carrying the most restrictive verdict from all evaluators.
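The "most restrictive verdict wins" aggregation can be illustrated with a standalone sketch. The severity ranking below is an assumption for illustration, not necessarily the library's internal ordering:

```python
# Stand-in severity ranking (higher rank = more restrictive).
# This ordering is an assumption for illustration only.
SEVERITY = {"ALLOW": 0, "WARN": 1, "DEFER": 2, "ESCALATE": 3, "BLOCK": 4}

def most_restrictive(verdicts: list[str]) -> str:
    """Pick the most restrictive verdict among evaluator results."""
    return max(verdicts, key=SEVERITY.__getitem__)

print(most_restrictive(["ALLOW", "WARN", "BLOCK"]))  # BLOCK
print(most_restrictive(["ALLOW", "WARN"]))           # WARN
```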

guard.evaluate_async()

Async version. Runs the evaluation pipeline in a thread to avoid blocking the event loop.
result = await guard.evaluate_async(intent)
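The thread-offload pattern this method describes looks roughly like the following standalone sketch, where run_pipeline is a hypothetical stand-in for the synchronous evaluation pipeline:

```python
import asyncio

def run_pipeline(intent: str) -> str:
    # Hypothetical stand-in for the blocking synchronous pipeline.
    return f"evaluated:{intent}"

async def evaluate_async(intent: str) -> str:
    # Offload the blocking work to a worker thread so the event loop
    # stays responsive -- the pattern the method describes.
    return await asyncio.to_thread(run_pipeline, intent)

print(asyncio.run(evaluate_async("file.delete")))  # evaluated:file.delete
```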

guard.explain()

Run the full evaluation pipeline in dry-run mode and return a rich, human-readable explanation string. Never executes the action.
explanation = guard.explain(intent)
print(explanation)
intent (ActionIntent, required): The action to explain.
Returns (str): Human-readable explanation of which evaluators ran and why.

guard.explain_async()

Async version of explain().

guard.get_audit_log()

Query the audit log with optional filters.
entries = guard.get_audit_log(AuditFilter(verdict=Verdict.BLOCK, limit=20))
filters (AuditFilter | None, default: None): Optional filter criteria. If None, returns all entries up to the default limit.
Returns (list[AuditEntry]): Matching audit entries, newest first.

guard.get_metrics()

Get a snapshot of aggregate metrics.
metrics = guard.get_metrics()
print(metrics.to_prometheus())
Returns (GuardMetrics): Aggregate statistics for all actions evaluated by this guard instance.

guard.add_exporter()

Register an audit log exporter. Exporters receive every AuditEntry as it is written.
from plyra_guard.observability.exporters import OTelExporter

guard.add_exporter(OTelExporter())
exporter (Any, required): An object implementing export(entry: AuditEntry) -> None.
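A minimal custom exporter only needs to satisfy the export(entry) protocol. The sketch below writes one JSON line per entry; the model_dump() fallback is an assumption about AuditEntry's Pydantic interface, not a documented guarantee:

```python
import json
import os
import tempfile

class JsonlExporter:
    """Custom exporter sketch: writes each audit entry as one JSON line.

    The model_dump() fallback is an assumption about AuditEntry's
    Pydantic interface, not a documented guarantee.
    """

    def __init__(self, path: str):
        self.path = path

    def export(self, entry) -> None:
        # Accept either a Pydantic-style model or a plain mapping.
        record = entry.model_dump() if hasattr(entry, "model_dump") else dict(entry)
        with open(self.path, "a") as f:
            f.write(json.dumps(record, default=str) + "\n")

# Exercise the sketch with a plain dict standing in for an AuditEntry.
fd, path = tempfile.mkstemp(suffix=".jsonl")
os.close(fd)
JsonlExporter(path).export({"action_id": "a1", "verdict": "ALLOW"})
print(open(path).read().strip())
```

From there, guard.add_exporter(JsonlExporter("audit.jsonl")) registers it like any other exporter.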

guard.register_agent()

Register an agent with a trust level for multi-agent systems.
guard.register_agent("researcher", trust_level=TrustLevel.PEER)
agent_id (str, required): Unique agent identifier.
trust_level (TrustLevel, required): Trust classification for this agent.

guard.rollback()

Roll back a single action by its ID.
success = guard.rollback("action-uuid-here")
action_id (str, required): The action_id from an ActionIntent or AuditEntry.
Returns (bool): True if the rollback succeeded.

guard.rollback_last()

Roll back the last N actions.
results = guard.rollback_last(n=3, agent_id="researcher")
n (int, default: 1): Number of actions to roll back.
agent_id (str | None, default: None): Optionally filter to a single agent.
Returns (list[bool]): Per-action rollback results.

guard.rollback_task()

Roll back all actions for a task across all agents.
report = guard.rollback_task("task-uuid")
print(report.success)        # True if all rolled back
print(report.rolled_back)    # list of action IDs
print(report.failed)         # list of action IDs that failed
task_id (str, required): The task identifier.
Returns (RollbackReport): Summary with task_id, total_actions, and the rolled_back, failed, and skipped lists.

guard.serve()

Start the HTTP sidecar server (dashboard + REST API).
guard.serve(host="0.0.0.0", port=8080)
host (str, default: "0.0.0.0"): Bind address.
port (int, default: 8080): Port number.
Requires pip install "plyra-guard[sidecar]". Raises ImportError if FastAPI or uvicorn are not installed.

Data classes

ActionIntent

The primary data structure that flows through the evaluation pipeline.
from plyra_guard import ActionIntent
action_type (str): Hierarchical action descriptor, e.g. "file.delete"
tool_name (str): Name of the tool being invoked
parameters (dict[str, Any]): Arguments to the tool call
agent_id (str): Identity of the calling agent
task_context (str, default: ""): Human-readable description of what the agent is doing
action_id (str, default: uuid4()): Unique ID for this intent (auto-generated)
task_id (str | None, default: None): Optional task grouping for multi-step workflows
timestamp (datetime, default: now(UTC)): When the intent was created
estimated_cost (float, default: 0.0): Estimated monetary cost in USD
risk_level (RiskLevel, default: MEDIUM): Pre-declared risk classification
instruction_chain (list[AgentCall], default: []): Full delegation chain for multi-agent provenance
metadata (dict[str, Any], default: {}): Arbitrary metadata bag for extensibility

EvaluatorResult

The output of a single evaluator in the pipeline.
from plyra_guard import EvaluatorResult
verdict (Verdict): The evaluator's decision
reason (str): Human-readable explanation
confidence (float, default: 1.0): Confidence score (0.0–1.0)
evaluator_name (str, default: ""): Name of the evaluator class
suggested_action (str | None, default: None): Optional remediation suggestion
metadata (dict[str, Any], default: {}): Arbitrary metadata

AuditEntry

Immutable audit record written for every action evaluated.
from plyra_guard import AuditEntry
action_id (str): Matches the originating ActionIntent
agent_id (str): Agent that initiated the action
action_type (str): Hierarchical action descriptor
verdict (Verdict): Final verdict
risk_score (float, default: 0.0): Computed risk score
task_id (str | None, default: None): Task grouping
policy_triggered (str | None, default: None): Name of the policy that matched
evaluator_results (list[EvaluatorResult], default: []): Results from all evaluators that ran
instruction_chain (list[AgentCall], default: []): Delegation chain
parameters (dict[str, Any], default: {}): Tool call parameters
duration_ms (int, default: 0): Execution time in milliseconds
timestamp (datetime, default: now(UTC)): When the action was evaluated
rolled_back (bool, default: False): Whether this action was rolled back
error (str | None, default: None): Error message, if any

AuditFilter

Filter criteria for querying the audit log.
agent_id (str | None, default: None): Filter by agent
task_id (str | None, default: None): Filter by task
verdict (Verdict | None, default: None): Filter by verdict
action_type (str | None, default: None): Filter by action type
from_time (datetime | None, default: None): Start of time range
to_time (datetime | None, default: None): End of time range
limit (int, default: 100): Max entries to return

GuardMetrics

Prometheus-style metrics snapshot.
total_actions (int, default: 0): Total actions evaluated
allowed_actions (int, default: 0): Actions with ALLOW verdict
blocked_actions (int, default: 0): Actions with BLOCK verdict
escalated_actions (int, default: 0): Actions escalated
warned_actions (int, default: 0): Actions warned
deferred_actions (int, default: 0): Actions deferred
rollbacks (int, default: 0): Successful rollbacks
rollback_failures (int, default: 0): Failed rollbacks
total_cost (float, default: 0.0): Cumulative cost in USD
avg_risk_score (float, default: 0.0): Average risk score
avg_duration_ms (float, default: 0.0): Average evaluation duration
actions_by_agent (dict[str, int], default: {}): Per-agent action counts
actions_by_type (dict[str, int], default: {}): Per-action-type counts
verdicts_by_policy (dict[str, int], default: {}): Per-policy verdict counts

RollbackReport

Summary of a batch rollback operation.
task_id (str): Task that was rolled back
total_actions (int, default: 0): Total actions in the task
rolled_back (list[str], default: []): Action IDs successfully rolled back
failed (list[str], default: []): Action IDs where rollback failed
skipped (list[str], default: []): Action IDs skipped
Property: report.success is True if there are no failures and at least one rollback.
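The success rule can be expressed as a one-line predicate. This is a standalone sketch of the documented semantics, not the library's code:

```python
def rollback_success(rolled_back: list[str], failed: list[str]) -> bool:
    # Mirrors the documented rule: no failures and at least one rollback.
    return not failed and len(rolled_back) > 0

print(rollback_success(["a1", "a2"], []))  # True
print(rollback_success([], []))            # False
print(rollback_success(["a1"], ["a2"]))    # False
```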

AgentCall

One hop in a multi-agent delegation chain.
agent_id (str): Agent making the call
trust_level (float): Numeric trust score (0.0–1.0)
instruction (str): Instruction given to this agent
timestamp (datetime, default: now(UTC)): When this delegation occurred

Enums

Verdict

from plyra_guard.core.verdict import Verdict
Verdict.ALLOW: Action may proceed
Verdict.BLOCK: Action is denied
Verdict.ESCALATE: Requires higher authority or human approval
Verdict.DEFER: Deferred for async approval
Verdict.WARN: May proceed with a warning logged
Helper methods:
  • verdict.is_permissive(): True for ALLOW and WARN
  • verdict.is_blocking(): True for BLOCK, ESCALATE, and DEFER

RiskLevel

from plyra_guard import RiskLevel
RiskLevel.LOW: 0.1
RiskLevel.MEDIUM: 0.3
RiskLevel.HIGH: 0.6
RiskLevel.CRITICAL: 0.9
Helper: risk_level.base_score() → returns the float value.

TrustLevel

from plyra_guard import TrustLevel
TrustLevel.HUMAN: 1.0
TrustLevel.ORCHESTRATOR: 0.8
TrustLevel.PEER: 0.5
TrustLevel.SUB_AGENT: 0.3
TrustLevel.UNKNOWN: 0.0
Helper: trust_level.score() → returns the float value.
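As a sketch of how these scores might gate an action (the threshold comparison is an assumption for illustration; only the score values come from the table above):

```python
# Documented trust scores, used in a hypothetical gate; the >= comparison
# is an assumption for illustration, not the library's exact rule.
TRUST = {"HUMAN": 1.0, "ORCHESTRATOR": 0.8, "PEER": 0.5, "SUB_AGENT": 0.3, "UNKNOWN": 0.0}

def meets_trust(agent_level: str, required: float) -> bool:
    """An agent passes the gate when its trust score meets the threshold."""
    return TRUST[agent_level] >= required

print(meets_trust("PEER", 0.5))       # True
print(meets_trust("SUB_AGENT", 0.5))  # False
```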

Exceptions

All exceptions inherit from ActionGuardError.

Execution exceptions

ExecutionBlockedError: Action blocked by the evaluation pipeline. Has verdict, reason, what_happened, policy_triggered, how_to_fix.
ActionEscalatedError: Action requires escalation. Has reason, escalate_to.
ActionDeferredError: Action deferred. Has reason, defer_seconds.
ExecutionTimeoutError: Action execution exceeded the timeout.
TrustViolationError: Agent trust is too low. Has agent_id, required_trust, actual_trust.
CascadeDepthExceededError: Delegation chain is too deep. Has current_depth, max_depth.

Policy exceptions

PolicyError: Base class for policy errors
PolicyParseError: YAML policy file cannot be parsed
PolicyConditionError: Policy condition expression is invalid

Config exceptions

ConfigError: Base class for config errors
ConfigFileNotFoundError: Config file not found
ConfigValidationError: Config values fail validation
ConfigSchemaError: Config structure doesn't match the schema

Budget and rate limit exceptions

RateLimitExceededError: Agent or tool exceeds its rate limit. Has agent_id, tool_name, limit.
BudgetExceededError: Action would exceed the budget. Has current_spend, budget_limit.
HumanApprovalTimeoutError: Human-in-the-loop approval timed out.

Other exceptions

RollbackError: Base class for rollback errors
RollbackHandlerNotFoundError: No handler registered for the action type
SnapshotNotFoundError: Snapshot not found for the action
RollbackFailedError: Rollback failed to restore state
AgentNotRegisteredError: An unregistered agent_id was referenced
AdapterNotFoundError: No adapter for the given tool type
AdapterWrappingError: Tool wrapping failed
ExporterError: Exporter error
SidecarError: Sidecar server error
SidecarStartupError: Sidecar failed to start

Structured error messages

All blocking exceptions (ExecutionBlockedError, ActionEscalatedError, RateLimitExceededError, BudgetExceededError, TrustViolationError, CascadeDepthExceededError, ActionDeferredError) provide three structured fields:
  • what_happened — clear plain-English description
  • policy_triggered — name of the policy or evaluator
  • how_to_fix — concrete, actionable steps
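A standalone sketch of how those three fields surface at the call site (BlockedSketch and the message strings are hypothetical stand-ins, not the real exception class or real policy output):

```python
class BlockedSketch(Exception):
    """Hypothetical stand-in showing the three structured fields that the
    library documents on its blocking exceptions."""

    def __init__(self, what_happened: str, policy_triggered: str, how_to_fix: str):
        super().__init__(what_happened)
        self.what_happened = what_happened
        self.policy_triggered = policy_triggered
        self.how_to_fix = how_to_fix

try:
    # Field values below are invented for illustration.
    raise BlockedSketch(
        "delete_file blocked: risk score exceeded the configured maximum",
        "risk_scorer",
        "Lower the action's risk level or raise global.max_risk_score.",
    )
except BlockedSketch as err:
    print(f"[{err.policy_triggered}] {err.what_happened}")
    print(f"Fix: {err.how_to_fix}")
```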

Environment variables

PLYRA_SNAPSHOT_PATH (default: ~/.plyra/snapshots.db): SQLite database path for action snapshots

Configuration schema

The YAML config file maps to the GuardConfig Pydantic model:
version: "1.0"

global:
  default_verdict: ALLOW         # ALLOW | BLOCK
  max_risk_score: 0.85           # 0.0–1.0
  max_delegation_depth: 4
  max_concurrent_delegations: 10

budget:
  per_task: 5.00                 # USD
  per_agent_per_run: 1.00
  currency: USD

rate_limits:
  default: "60/min"
  per_tool:
    file.delete: "10/min"

policies: []                     # list of PolicyConfig

agents:                          # list of AgentConfig
  - id: researcher
    trust_level: 0.5
    can_delegate_to: []
    max_actions_per_run: 100

evaluators:
  schema_validator:
    enabled: true
  policy_engine:
    enabled: true
  risk_scorer:
    enabled: true
  rate_limiter:
    enabled: true
  cost_estimator:
    enabled: true
  human_gate:
    enabled: false

rollback:
  enabled: true
  snapshot_dir: null
  max_snapshots: 1000

observability:
  exporters: ["stdout"]
  audit_log_max_entries: 10000

sidecar:
  host: "0.0.0.0"
  port: 8080