AI agents execute real actions: deleting records, calling APIs, sending emails. There is no standard safety layer between the LLM's decision and execution. plyra-guard is that layer. It intercepts every tool call your agent makes, evaluates it against your policy, and blocks, logs, or escalates before anything irreversible happens.
Documentation Index
Fetch the complete documentation index at: https://docs.plyra.dev/llms.txt
Use this file to discover all available pages before exploring further.
plyra-guard runs in-process — no sidecar, no network hop.
Every evaluation completes in under 2ms.
How it works
Every tool call passes through plyra-guard before execution:
- Your agent decides to call a tool
- plyra-guard intercepts the call in-process
- The call is evaluated against your policy — sub-2ms, no network hop
- Verdict: ALLOW, BLOCK, ESCALATE, DEFER, or WARN
- The decision is written to the audit log
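
To make the flow concrete, here is a minimal sketch of the intercept, evaluate, verdict pattern in plain Python. The names (Guard, Verdict, guarded_call) are illustrative assumptions, not plyra-guard's actual API; see the Quickstart below for real usage.

```python
# Minimal sketch of the intercept -> evaluate -> verdict flow.
# Guard, Verdict, and guarded_call are illustrative, not the real API.
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"
    DEFER = "defer"
    WARN = "warn"


class Guard:
    def __init__(self, policy):
        self.policy = policy    # a callable: (tool_name, args) -> Verdict
        self.audit_log = []     # every decision is recorded, allowed or not

    def evaluate(self, tool_name: str, args: dict) -> Verdict:
        # In-process policy check: no network hop, so it stays fast.
        verdict = self.policy(tool_name, args)
        self.audit_log.append((tool_name, args, verdict))
        return verdict


def guarded_call(guard: Guard, tool, **args):
    verdict = guard.evaluate(tool.__name__, args)
    if verdict is Verdict.BLOCK:
        raise PermissionError(f"blocked by policy: {tool.__name__}")
    if verdict is Verdict.ESCALATE:
        raise RuntimeError(f"needs human approval: {tool.__name__}")
    return tool(**args)  # ALLOW (or WARN, after the decision is logged)


# Example: a policy that blocks every delete tool and allows the rest.
guard = Guard(lambda name, args: Verdict.BLOCK if name.startswith("delete")
              else Verdict.ALLOW)
```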
Near-zero latency overhead
Evaluation runs in-process. No network hop. Sub-2ms per call.
Framework agnostic
Works with LangGraph, AutoGen, CrewAI, LangChain, OpenAI, Anthropic,
and plain Python.
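
Framework agnosticism works because the interception point is an ordinary Python callable: anything a framework registers as a tool can be wrapped before registration. The decorator below is a generic illustration of that pattern, not plyra-guard's actual integration API.

```python
import functools


def guarded(check):
    """Run `check` before every invocation of the wrapped tool (generic pattern)."""
    def decorator(tool):
        @functools.wraps(tool)
        def wrapper(*args, **kwargs):
            check(tool.__name__, kwargs)  # raises on a blocked call
            return tool(*args, **kwargs)
        return wrapper
    return decorator


def deny_sends(name: str, kwargs: dict) -> None:
    """Toy policy check: refuse any tool whose name starts with 'send'."""
    if name.startswith("send"):
        raise PermissionError(f"blocked by policy: {name}")


# The same wrapper applies to a plain function, a LangChain tool's func,
# or a callable handed to OpenAI tool-calling: all are Python callables.
@guarded(deny_sends)
def send_email(to: str, body: str) -> None:
    print(f"emailing {to}")  # never reached: the policy blocks send_* tools
```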
Policy as code
Rules live in your repo, reviewed in PRs, tested in CI.
YAML or Python — your choice.
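
For a feel of the Python flavor, here are two rules matching the risks named above (bulk deletes, outbound email). The rule signature is an assumption for illustration; the Policy reference below documents the real syntax.

```python
# Hypothetical Python rules; the real rule signature may differ.
def no_bulk_deletes(tool_name: str, args: dict) -> str:
    """Block any delete tool that targets more than one record."""
    if tool_name.startswith("delete") and len(args.get("ids", [])) > 1:
        return "BLOCK"
    return "ALLOW"


def escalate_external_email(tool_name: str, args: dict) -> str:
    """Require human sign-off before emailing outside the org (example domain)."""
    if tool_name == "send_email" and not args.get("to", "").endswith("@example.com"):
        return "ESCALATE"
    return "ALLOW"
```

Because these rules are plain functions in your repo, they can be unit-tested in CI like any other code.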
Full audit log
Every decision logged — allowed and blocked. Ships to OTEL,
Datadog, or your own sink.
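
The sink wiring is not shown here; the sketch below gives one plausible OTEL shape using the standard opentelemetry-api package, emitting each decision as a span with attributes. plyra-guard's actual exporter configuration may differ.

```python
# One plausible OTEL sink: each decision becomes a span with attributes.
# Requires opentelemetry-api/sdk; plyra-guard's real wiring may differ.
from opentelemetry import trace

tracer = trace.get_tracer("plyra-guard.audit")


def record_decision(tool_name: str, verdict: str, reason: str) -> None:
    """Emit one audit entry per evaluated tool call."""
    with tracer.start_as_current_span("tool_call.evaluation") as span:
        span.set_attribute("tool.name", tool_name)
        span.set_attribute("guard.verdict", verdict)  # ALLOW, BLOCK, ...
        span.set_attribute("guard.reason", reason)
```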
What’s available today
v0.1.9 on PyPI
Install with pip install plyra-guard. Stable beta API. Python 3.10+.
Apache 2.0
No telemetry. No usage tracking. Run it on-prem, in a container,
or embedded in your agent loop.
LangGraph + 6 more
Native support for LangGraph, LangChain, AutoGen, CrewAI, OpenAI,
Anthropic, and plain Python.
Quickstart
Protect your first tool call in 60 seconds.
Policy reference
Write rules that match your threat model.