DRD's rule-based policy engine is the enforcement backbone for AI agent governance. Define declarative policies that control what agents can and cannot do, evaluate actions in real time, and maintain a complete audit trail of every decision.
Every action an AI agent attempts passes through the policy engine before execution. The engine evaluates the action against all applicable policies, applies priority-based conflict resolution, and returns a decision in single-digit milliseconds. Policies are version-controlled, hot-reloadable, and extensible with custom WASM evaluation logic.
- **Sub-5ms decisions**: Policy decisions are returned in under 5 milliseconds, keeping agent workflows fast.
- **Policies as code**: Manage policies as code in your repository; every change is versioned and auditable.
- **WASM extensibility**: Write custom evaluation logic in any language that compiles to WebAssembly.
A policy is a named collection of rules that govern agent behavior. Each policy contains the following fields:
| Field | Type | Description |
|---|---|---|
| name | string | Human-readable name for the policy (unique within workspace) |
| description | string | Optional explanation of the policy's purpose and scope |
| rules | Rule[] | Ordered array of rule objects that define the governance logic |
| priority | number | Global priority (0-1000). Higher values take precedence in conflicts |
| enabled | boolean | Toggle to activate or deactivate the policy without deleting it |
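As a rough sketch, the schema maps to TypeScript shapes like the following. The type names are illustrative rather than confirmed SDK exports; the `Rule` fields mirror the YAML and SDK examples later on this page.

```ts
// Illustrative types for the policy schema above; names are
// assumptions, not confirmed SDK exports.
type Effect = "allow" | "deny" | "flag" | "require-approval";

interface Rule {
  id: string;
  effect: Effect;
  action: string;      // exact action name or glob pattern, e.g. "email.*"
  condition?: string;  // optional expression evaluated against the request context
  priority: number;    // rule-level priority within the policy
  reason?: string;     // surfaced in decisions and the audit trail
}

interface Policy {
  name: string;        // unique within the workspace
  description?: string;
  rules: Rule[];       // ordered array of governance rules
  priority: number;    // global priority, 0-1000; higher wins in conflicts
  enabled: boolean;    // toggle without deleting the policy
}
```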
Each rule within a policy specifies an effect that determines how the engine responds when the rule matches an incoming action:

- **allow**: Permits the action to proceed. If multiple rules match, the highest-priority allow rule wins unless a higher-priority deny exists.
- **deny**: Blocks the action immediately. Deny rules are evaluated first at each priority level, making them the strongest enforcement primitive.
- **flag**: Allows the action but creates a flag event in the audit trail. Use this for monitoring suspicious patterns without blocking agent operations.
- **require-approval**: Pauses the action and creates an approval request. A human reviewer must approve or deny the action before execution continues.
Rules match incoming actions using either exact match or wildcard patterns. The action field in a rule is compared against the action identifier in the evaluation request.
| Pattern | Matches | Example |
|---|---|---|
| send_email | Exact action name only | send_email |
| email.* | Any action starting with email. | email.send, email.draft, email.schedule |
| * | All actions (global rule) | Any action the agent attempts |
| content.publish.* | Nested wildcard | content.publish.blog, content.publish.social |
Note: Wildcards use glob-style matching. A single * matches any sequence of characters within a single segment. For multi-segment matching, the engine evaluates dot-separated segments left to right.
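To make the matching concrete, here is a minimal sketch of segment-wise glob matching under the reading described in the note above. The engine's actual matcher may differ at the edges, for example on patterns that should span multiple segments.

```ts
// Sketch: glob-style action matching, assuming strict left-to-right,
// segment-by-segment comparison of dot-separated identifiers.
function matchAction(pattern: string, action: string): boolean {
  if (pattern === "*") return true; // global rule matches everything
  const patSegs = pattern.split(".");
  const actSegs = action.split(".");
  if (patSegs.length !== actSegs.length) return false;
  return patSegs.every((seg, i) => {
    if (seg === "*") return true; // wildcard segment
    // Translate "*" inside a segment into "any characters within one segment".
    const re = new RegExp(
      "^" + seg.split("*").map(escapeRegExp).join("[^.]*") + "$"
    );
    return re.test(actSegs[i]);
  });
}

function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// matchAction("email.*", "email.send")                     -> true
// matchAction("content.publish.*", "content.publish.blog") -> true
// matchAction("send_email", "send_email")                  -> true
```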
When an agent attempts an action, the policy engine follows a deterministic evaluation pipeline:

1. The SDK or API sends an evaluation request containing the action, agent context, and environment metadata.
2. The engine scans all enabled policies and collects every rule whose action pattern matches the requested action.
3. Matched rules are sorted by priority (highest first). At each priority level, deny rules are checked before allow rules.
4. The first matching rule at the highest priority level determines the decision; deny always wins over allow at equal priority.
5. The engine returns the effect (allow, deny, flag, or require-approval) along with a decision trace for auditing.
```ts
// Evaluation flow (simplified pseudocode)
function evaluate(request: EvalRequest): Decision {
  const matchedRules = policies
    .filter(p => p.enabled)
    .flatMap(p => p.rules)
    .filter(r => matchAction(r.action, request.action))
    // Highest priority first; deny outranks allow at the same priority.
    .sort((a, b) =>
      b.priority - a.priority ||
      (b.effect === "deny" ? 1 : 0) - (a.effect === "deny" ? 1 : 0)
    );

  for (const rule of matchedRules) {
    if (rule.condition && !evaluateCondition(rule.condition, request.context)) {
      continue;
    }
    return { effect: rule.effect, rule: rule.id, trace: buildTrace(matchedRules) };
  }

  return { effect: "deny", reason: "No matching rule (default deny)" };
}
```

DRD ships with production-ready policy templates that cover common governance scenarios. Apply a template with a single API call, or customize it to fit your requirements.
- **Content access**: Controls which content types agents can read, create, or modify. Supports MIME-type filters and ownership checks.
- **Data privacy**: Enforces PII redaction, data classification, and export restrictions. Maps to GDPR data processing requirements.
- **Rate limiting**: Prevents agent abuse with per-action, per-agent, and per-workspace rate limits. Sliding-window and token-bucket modes.
- **Compliance**: Pre-configured rules for EU AI Act, SOC 2, HIPAA, and ISO 27001 control mappings. Audit-ready evidence generation.
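The template API surface is not shown on this page; as a hedged sketch, applying and customizing a template might look like the following, where `applyTemplate` and the `"rate-limiting"` identifier are assumptions rather than confirmed SDK methods.

```ts
// Hypothetical sketch: `applyTemplate` and the "rate-limiting" template
// id are assumptions, not confirmed SDK surface. Check your SDK version.
const rateLimitPolicy = await drd.policies.applyTemplate({
  template: "rate-limiting",   // assumed template identifier
  overrides: {
    priority: 200,             // raise precedence for this workspace
    enabled: true,
  },
});
```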
Before deploying policies to production, test them in simulation mode. Simulated evaluations return the same decision the engine would produce without actually enforcing it. The simulation results include a full decision trace so you can verify rule matching and priority resolution.
```ts
// Simulate a policy evaluation without enforcement
const simulation = await drd.policies.simulate({
  action: "content.publish.blog",
  context: {
    contentType: "text/markdown",
    wordCount: 2400,
    author: "agent-019abc",
    sensitivityLevel: "internal",
  },
  agentId: "01956abc-def0-7890-abcd-1234567890ab",
});

console.log(simulation);
// {
//   effect: "require-approval",
//   matchedRule: "rule-content-review",
//   policyName: "Content Moderation Policy",
//   trace: {
//     policiesEvaluated: 4,
//     rulesMatched: 2,
//     winningPriority: 20,
//     evaluationTimeMs: 2.8
//   },
//   simulated: true // Not enforced
// }
```

Policies can be defined as YAML or JSON files in your repository and synced to DRD automatically. This enables pull-request-based review workflows, change history, and rollback capabilities.
```yaml
# .drd/policies/content-moderation.yaml
apiVersion: drd.io/v1
kind: Policy
metadata:
  name: content-moderation
  description: Blocks unsafe content from being published
spec:
  priority: 100
  enabled: true
  rules:
    - id: block-unsafe
      effect: deny
      action: "content.publish.*"
      condition: "safety.score < 0.7"
      priority: 10
      reason: "Content safety score below threshold"
    - id: review-borderline
      effect: require-approval
      action: "content.publish.*"
      condition: "safety.score >= 0.7 && safety.score < 0.9"
      priority: 20
      reason: "Content requires human review"
    - id: allow-safe
      effect: allow
      action: "content.publish.*"
      condition: "safety.score >= 0.9"
      priority: 30
```

Use `drd policies sync` to push local policy files to the DRD platform, or configure a GitHub Action to sync automatically on merge to your default branch.
Policy updates take effect immediately without restarting agents or redeploying services. When a policy is created, updated, or deleted via the API or GitOps sync, the policy engine invalidates its in-memory cache and loads the new policy set. All subsequent evaluations use the updated rules.
- **Write-through cache**: On policy mutation, the cache entry is updated atomically before the API response is returned.
- **No dropped requests**: In-flight evaluations complete against the previous policy version; the next evaluation uses the new version.
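As a rough illustration of this behavior (the engine's internal cache is not documented here, so this is a sketch, not the actual implementation), an atomic snapshot swap gives both properties:

```ts
// Sketch: write-through cache as an atomic snapshot swap, reusing the
// illustrative `Policy` shape from earlier. Not the engine's real code.
interface PolicySnapshot {
  version: number;
  policies: Policy[];
}

let current: PolicySnapshot = { version: 1, policies: [] };

// Write-through: the snapshot is replaced before the mutation API responds,
// so the next evaluation is guaranteed to see the new policy set.
function applyMutation(policies: Policy[]): void {
  current = { version: current.version + 1, policies };
}

// Each evaluation captures the snapshot once at the start; in-flight
// evaluations finish against the version they began with.
function snapshotForEvaluation(): PolicySnapshot {
  return current;
}
```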
For evaluation logic that goes beyond declarative conditions, you can write custom policy functions in any language that compiles to WebAssembly (Rust, Go, AssemblyScript, etc.). WASM extensions run in a sandboxed environment with memory limits and execution timeouts.
```rust
// Example: custom WASM policy extension (Rust)
// Compiled to .wasm and uploaded via `drd policies upload-wasm`
#[no_mangle]
pub extern "C" fn evaluate(action_ptr: *const u8, action_len: u32,
                           context_ptr: *const u8, context_len: u32) -> i32 {
    // SAFETY: the host passes valid buffers of the stated lengths that
    // remain live for the duration of the call.
    let action = unsafe {
        std::str::from_utf8_unchecked(
            std::slice::from_raw_parts(action_ptr, action_len as usize))
    };
    let context: serde_json::Value = serde_json::from_slice(unsafe {
        std::slice::from_raw_parts(context_ptr, context_len as usize)
    }).unwrap();

    // Custom logic: deny transfers over $10,000 outside business hours
    if action == "transfer_funds" {
        let amount = context["amount"].as_f64().unwrap_or(0.0);
        let hour = context["hour"].as_u64().unwrap_or(12);
        if amount > 10_000.0 && (hour < 9 || hour > 17) {
            return -1; // DENY
        }
    }
    1 // ALLOW
}
```

```ts
// Create a policy using the DRD TypeScript SDK
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const policy = await drd.policies.create({
  name: "Email Rate Limiter",
  description: "Prevents agents from sending more than 50 emails per hour",
  priority: 100,
  enabled: true,
  rules: [
    {
      id: "deny-excessive-emails",
      effect: "deny",
      action: "send_email",
      condition: "context.emailCount > 50",
      priority: 10,
      reason: "Hourly email limit exceeded",
    },
    {
      id: "flag-high-volume",
      effect: "flag",
      action: "send_email",
      condition: "context.emailCount > 30",
      priority: 5,
      reason: "Agent approaching email rate limit",
    },
    {
      id: "allow-normal-email",
      effect: "allow",
      action: "send_email",
      condition: "context.emailCount <= 30",
      priority: 1,
    },
  ],
});

console.log(policy.id);
// "019policy-5678-abcd-ef01-234567890def"
```

```ts
// Evaluate an action using the guard endpoint
const decision = await drd.guard.evaluate({
  action: "send_email",
  context: {
    target: "user@example.com",
    emailCount: 12,
    sensitivityLevel: "normal",
  },
  agentId: "01956abc-def0-7890-abcd-1234567890ab",
});

console.log(decision);
// {
//   allowed: true,
//   effect: "allow",
//   matchedRule: "allow-normal-email",
//   policyName: "Email Rate Limiter",
//   reason: null,
//   decisionTrace: {
//     policiesEvaluated: 3,
//     rulesMatched: 1,
//     evaluationTimeMs: 2.1
//   }
// }
```
```ts
// Cedar-style evaluation with full context
const cedarDecision = await drd.policies.evaluate({
  principal: { type: "agent", id: "01956abc-..." },
  action: { type: "governance", id: "send_email" },
  resource: { type: "communication", id: "email-019..." },
  environment: {
    time: "2026-02-14T10:00:00Z",
    region: "us-east-1",
    emailCount: 12,
  },
});
console.log(cedarDecision.effect); // "allow"
```

- **GenAI Safety**: Content provenance, deepfake detection, and EU AI Act compliance.
- **Event Sourcing**: Hash-chained audit trail for every policy decision and agent action.
- **Federated Trust**: Share policy evaluations across organizations without exposing raw data.
- **Enforcement**: Automated takedown actions driven by policy decisions.