DRD provides comprehensive governance for fleets of autonomous AI agents. Register, monitor, and control hundreds of agents with real-time trust scoring, automated enforcement, and instant kill switches for rogue behavior.
Every agent in DRD follows a well-defined lifecycle from registration to potential revocation. Each state transition is recorded as an event in the audit trail.
Agent is registered in the workspace with its metadata, capabilities, and initial configuration. A DID is created and an API key is issued.
Agent is activated and begins operating. Trust score starts at 50. All actions are logged to the event-sourced audit trail.
Real-time monitoring via heartbeats, action logs, and anomaly detection. Trust score adjusts based on observed behavior.
Agent is temporarily suspended due to policy violation, anomaly detection, or manual intervention. All API calls are rejected.
Agent is permanently revoked. API keys are invalidated, credentials are revoked, and the DID document is deactivated.
Agents can transition from Monitor back to Activate after suspension review. Revocation is permanent.
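The lifecycle above can be sketched as a small state machine. The state names and transition table below are illustrative, not the SDK's actual identifiers:

```typescript
// Illustrative agent lifecycle states and allowed transitions.
type AgentState = "registered" | "active" | "suspended" | "revoked";

const transitions: Record<AgentState, AgentState[]> = {
  registered: ["active"],           // activated after registration
  active: ["suspended", "revoked"], // policy violation, anomaly, or manual action
  suspended: ["active", "revoked"], // reinstated after review, or revoked
  revoked: [],                      // revocation is permanent
};

function canTransition(from: AgentState, to: AgentState): boolean {
  return transitions[from].includes(to);
}
```

Note that `revoked` has no outgoing transitions, mirroring the rule that revocation is permanent.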
Every agent starts with a trust score of 50 and the score adjusts dynamically based on observed behavior. The algorithm combines three weighted components to produce a score from 0 to 100.
Compliance (40%): Policy adherence rate, rule violations, audit completeness
Reliability (35%): Uptime, error rate, response consistency, heartbeat regularity
Transparency (25%): Logging coverage, explainability of decisions, metadata completeness
// Trust Score = weighted sum of component scores
trustScore = (compliance * 0.40)
+ (reliability * 0.35)
+ (transparency * 0.25)
// Example: Agent "Content Scanner v2"
trustScore = (92 * 0.40) + (85 * 0.35) + (84 * 0.25)
= 36.8 + 29.75 + 21.0
= 87.55 // truncated to 87
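The weighted sum can be expressed directly in code. A minimal sketch (the function name is illustrative; each component score is assumed to be on a 0-100 scale):

```typescript
// Weighted trust score: compliance 40%, reliability 35%, transparency 25%.
function computeTrustScore(
  compliance: number,
  reliability: number,
  transparency: number
): number {
  const raw = compliance * 0.4 + reliability * 0.35 + transparency * 0.25;
  return Math.floor(raw); // 87.55 -> 87, matching the worked example above
}

// Agent "Content Scanner v2":
computeTrustScore(92, 85, 84); // 87
```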
// Score adjustments (examples):
// +2 Policy evaluation passed (no violations)
// +1 Heartbeat received on schedule
// -5 Policy violation detected
// -10 Anomaly detected (unusual behavior)
// -20 Kill switch triggered

Trust score tiers cover the ranges 60-69, 70-84, 85-94, and 95-100 (a score of 87, for example, falls in the Gold tier).

DRD continuously monitors all active agents using heartbeats, action logs, and anomaly detection. Monitoring data flows into the event-sourced audit trail and feeds real-time dashboards and alerting.
Agents send periodic heartbeats (default: 30s). Missing heartbeats trigger alerts and can auto-suspend agents.
Every API call, data access, and content generation is logged with full context and policy evaluation results.
Statistical models detect unusual behavior patterns: sudden rate spikes, off-hours activity, new action types.
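One simple flavor of such detection is a z-score check against a rolling baseline. A minimal sketch (the threshold and the baseline standard deviation are assumptions, not documented DRD parameters):

```typescript
// Z-score of an observed metric sample against a baseline mean and stdDev.
function zScore(observed: number, mean: number, stdDev: number): number {
  return (observed - mean) / stdDev;
}

// Flag samples more than `threshold` standard deviations from the mean.
function isAnomalous(
  observed: number,
  mean: number,
  stdDev: number,
  threshold = 3
): boolean {
  return Math.abs(zScore(observed, mean, stdDev)) > threshold;
}

// A rate spike like the one in the event stream example: expected ~5
// actions/min, observed 47, with an assumed baseline stdDev of 10.
zScore(47, 5, 10); // 4.2
```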
// Server-Sent Events (SSE) for real-time monitoring
// GET /api/v1/events/stream
event: agent.heartbeat
data: {
"agentId": "01956abc-...",
"status": "active",
"uptime": 86400,
"actionsLastHour": 142,
"trustScore": 87
}
event: agent.anomaly
data: {
"agentId": "01956bcd-...",
"anomalyType": "rate_spike",
"severity": "warning",
"details": {
"metric": "actions_per_minute",
"expected": 5,
"observed": 47,
"zScore": 4.2
}
}

The kill switch provides instant suspension of rogue agents. When triggered, the agent's API key is immediately invalidated, all pending actions are cancelled, and the agent's status is set to suspended. The kill switch can be triggered manually or automatically by policy rules.
The kill switch is designed for sub-100ms response time. API key invalidation propagates to all edge nodes within 50ms. Agents attempting to make API calls after suspension receive a 403 FORBIDDEN response with a clear reason code.
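Agent-side code should treat that 403 as terminal rather than retrying. A sketch of the distinction; the error shape and reason code value here are assumptions, not the documented API:

```typescript
// Hypothetical shape of the rejection an agent sees after suspension.
interface DRDApiError {
  status: number;
  reasonCode?: string; // e.g. "AGENT_SUSPENDED" (illustrative value)
}

// A 403 after the kill switch fires will not resolve on retry: the API key
// has been invalidated across all edge nodes. Anything else may be transient.
function shouldStopAgent(err: DRDApiError): boolean {
  return err.status === 403;
}
```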
Define governance policies that apply across your entire agent fleet. Swarm policies control rate limits, action restrictions, data access controls, and resource quotas at the workspace level.
Per-agent and per-workspace rate limits with sliding window enforcement. Configurable by action type.
Allowlist or denylist specific actions. Restrict by time-of-day, geography, or risk level.
Fine-grained permissions for which data resources agents can read, write, or delete.
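The sliding-window enforcement mentioned above can be sketched with a timestamp log per agent. This is an illustrative in-memory version, not DRD's server-side implementation:

```typescript
// In-memory sliding-window rate limiter keyed by agent ID.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(agentId: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(agentId) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(agentId, recent);
      return false; // over the per-window limit -> deny
    }
    recent.push(now);
    this.hits.set(agentId, recent);
    return true;
  }
}

// e.g. the policy below: at most 1000 actions per hour per agent.
const limiter = new SlidingWindowLimiter(1000, 3_600_000);
```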
{
"name": "Production Swarm Policy",
"scope": "workspace",
"enabled": true,
"rules": [
{
"id": "rate-limit-global",
"action": "deny",
"condition": "agent.actionsPerHour > 1000",
"effect": "suspend_agent",
"priority": 1
},
{
"id": "restrict-off-hours",
"action": "require_approval",
"condition": "env.hour < 6 || env.hour > 22",
"resourceType": "financial",
"priority": 5
},
{
"id": "block-high-risk",
"action": "deny",
"condition": "agent.trustScore < 60 && action.riskLevel == 'high'",
"priority": 10
},
{
"id": "data-access-pii",
"action": "deny",
"condition": "resource.containsPII && !agent.capabilities.includes('pii-handler')",
"priority": 2
}
]
}

Test new agents safely before deploying them to production. The sandbox provides an isolated environment where agents can operate with simulated data and restricted permissions.
Sandbox agents cannot access production data or affect live systems. All actions are contained within the sandbox namespace.
Monitor agent behavior patterns, resource usage, and policy compliance before promoting to production.
Run predefined test scenarios to evaluate agent behavior under edge cases, policy conflicts, and adversarial conditions.
When an agent passes sandbox evaluation, it can be promoted to production with a one-click approval workflow.
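The promotion gate can be sketched as a simple predicate over the sandbox evaluation results. The report shape and the threshold values below are assumptions for illustration, not DRD's documented criteria:

```typescript
// Illustrative summary of a sandbox evaluation run.
interface SandboxReport {
  scenariosPassed: number;
  scenariosTotal: number;
  policyViolations: number;
  trustScore: number;
}

// An agent qualifies for production only with a clean sandbox run.
function readyForProduction(r: SandboxReport): boolean {
  return (
    r.scenariosPassed === r.scenariosTotal && // every test scenario passed
    r.policyViolations === 0 &&               // no policy violations observed
    r.trustScore >= 70                        // assumed minimum score gate
  );
}
```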
Discover and deploy pre-verified agents from the DRD Agent Marketplace. All marketplace agents have been evaluated for compliance, security, and reliability, and carry verified trust badges.
Every marketplace agent has been evaluated and carries a verified DRD trust badge with minimum Silver tier.
Agents are rated by the community based on reliability, accuracy, and ease of integration.
Deploy marketplace agents to your workspace with pre-configured policies and monitoring.
For critical enforcement decisions, DRD can use multi-model consensus. Multiple AI models independently evaluate the situation, and a decision is made only when a quorum agrees. This prevents single-model bias and increases decision reliability.
{
"consensusPolicy": {
"enabled": true,
"quorumSize": 3,
"minimumAgreement": 2,
"models": [
{ "provider": "anthropic", "model": "claude-opus-4-6", "weight": 1.0 },
{ "provider": "openai", "model": "gpt-5.2", "weight": 1.0 },
{ "provider": "google", "model": "gemini-3-pro", "weight": 0.8 }
],
"timeout": 5000,
"fallback": "deny",
"appliesTo": [
"enforcement.create",
"agent.revoke",
"trust.override"
]
}
}

Configurable quorum size and agreement threshold. Default: 2-of-3 models must agree.
If consensus is not reached within the timeout, the configurable fallback action applies (default: deny).
Each model's individual assessment and the consensus result are recorded in the audit trail.
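The quorum logic in the configuration above can be sketched as a small vote counter. This minimal version uses a plain count of agreeing verdicts and ignores the per-model weights; how weights factor into agreement is an assumption left out here:

```typescript
type Verdict = "allow" | "deny";

interface ModelVote {
  model: string;
  verdict: Verdict;
}

// At least `minimumAgreement` models must return the same verdict;
// otherwise the configured fallback applies (default: deny).
function consensus(
  votes: ModelVote[],
  minimumAgreement: number,
  fallback: Verdict = "deny"
): Verdict {
  const allowVotes = votes.filter((v) => v.verdict === "allow").length;
  const denyVotes = votes.length - allowVotes;
  if (allowVotes >= minimumAgreement) return "allow";
  if (denyVotes >= minimumAgreement) return "deny";
  return fallback; // no quorum reached (e.g. timeout or split vote)
}
```

A timed-out model simply contributes no vote, so a 1-1 split among the remaining two falls through to the fallback.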
Use the DRD SDK to register a new agent in your workspace. The agent receives a DID, an API key, and an initial trust score of 50.
import { DRDClient } from "@drd.io/sdk";
const drd = new DRDClient({
apiKey: process.env.DRD_API_KEY!,
workspaceId: "ws-019...",
});
// Register a new agent
const agent = await drd.agents.create({
name: "Content Scanner v2",
description: "Scans uploaded content for policy compliance",
metadata: {
provider: "anthropic",
model: "claude-opus-4-6",
version: "2.1.0",
},
capabilities: ["content-analysis", "image-classification"],
tags: ["production", "content-team"],
});
console.log(`Agent registered: ${agent.id}`);
console.log(`DID: ${agent.did}`);
console.log(`API Key: ${agent.apiKey}`);
console.log(`Trust Score: ${agent.trustScore}`);
// => "Agent registered: 01956abc-def0-7890-abcd-1234567890ab"
// => "DID: did:drd:01956abc-def0-7890-abcd-1234567890ab"
// => "API Key: drd_agent_sk_1a2b3c4d5e6f..."
// => "Trust Score: 50"

Query an agent's current status, trust score, and detailed score breakdown.
import { DRDClient } from "@drd.io/sdk";
const drd = new DRDClient({
apiKey: process.env.DRD_API_KEY!,
workspaceId: "ws-019...",
});
// Get agent details with trust score breakdown
const agent = await drd.agents.get("01956abc-def0-7890-abcd-1234567890ab");
console.log(`Agent: ${agent.name}`);
console.log(`Status: ${agent.status}`);
console.log(`Trust Score: ${agent.trustScore}`);
// => "Agent: Content Scanner v2"
// => "Status: active"
// => "Trust Score: 87"
// Get detailed trust score breakdown
const trust = await drd.agents.getTrustScore(agent.id);
console.log(`Tier: ${trust.tier}`);
console.log(`Compliance: ${trust.breakdown.compliance.score}`);
console.log(`Reliability: ${trust.breakdown.reliability.score}`);
console.log(`Transparency: ${trust.breakdown.transparency.score}`);
// => "Tier: gold"
// => "Compliance: 92"
// => "Reliability: 85"
// => "Transparency: 84"
// List all agents with filtering
const agents = await drd.agents.list({
status: "active",
minTrustScore: 70,
limit: 50,
});
console.log(`Active agents (score >= 70): ${agents.data.length}`);
// => "Active agents (score >= 70): 12"

Evaluate an agent's action against workspace policies using the guard endpoint. The SDK integrates this into the agent's workflow for seamless enforcement.
import { DRDClient } from "@drd.io/sdk";
const drd = new DRDClient({
apiKey: process.env.DRD_API_KEY!,
workspaceId: "ws-019...",
});
// Evaluate an action against swarm policies
const decision = await drd.guard.evaluate({
action: "send_email",
agentId: "01956abc-def0-7890-abcd-1234567890ab",
context: {
target: "user@example.com",
subject: "Order Confirmation",
emailCount: 12,
riskLevel: "low",
},
});
if (decision.allowed) {
console.log("Action allowed:", decision.reason);
// Proceed with the action
await sendEmail(decision.context);
} else {
console.log("Action denied:", decision.reason);
// Handle denial (log, alert, fallback)
}
// => "Action allowed: Passed all policies (3 evaluated in 4.2ms)"
// Cedar-style evaluation for complex policies
const cedarDecision = await drd.policies.evaluate({
principal: { type: "agent", id: "01956abc-..." },
action: { type: "governance", id: "publish_content" },
resource: { type: "content", id: "019content-..." },
environment: {
time: new Date().toISOString(),
region: "eu-west-1",
riskLevel: "medium",
},
});
console.log(`Effect: ${cedarDecision.effect}`);
console.log(`Matched rules: ${cedarDecision.matchedRules.length}`);
console.log(`Evaluation time: ${cedarDecision.evaluationTimeMs}ms`);
// => "Effect: allow"
// => "Matched rules: 1"
// => "Evaluation time: 3.2ms"