Get your first AI agent registered, governed, and monitored in under 5 minutes. This guide walks you through the essential steps from account creation to live enforcement.
Before you begin, make sure you have Node.js 18+ (or a recent Deno or Bun runtime) installed.
Step 1: Create your account and workspace
Head to app.drd.io/sign-up and create your account. After signing in, you will be prompted to create your first workspace. Workspaces are isolated environments where agents, policies, and events are scoped.
Each workspace has its own API keys, agents, and policies. Use separate workspaces for development, staging, and production.
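For example, a small helper can select the right key for the current environment. This is a sketch only; the env-var names below are an assumed naming convention, not something the platform mandates:

```typescript
// Pick the workspace API key env-var for the current environment.
// (Assumed convention: one key per workspace, one workspace per env.)
type Environment = "development" | "staging" | "production";

function apiKeyEnvVar(env: Environment): string {
  const names: Record<Environment, string> = {
    development: "DRD_API_KEY_DEV",
    staging: "DRD_API_KEY_STAGING",
    production: "DRD_API_KEY_PROD",
  };
  return names[env];
}

console.log(apiKeyEnvVar("production")); // "DRD_API_KEY_PROD"
```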
Step 2: Install the SDK
Install the DRD TypeScript SDK from npm:
npm install @drd/sdk

The SDK works in Node.js (18+), Deno, Bun, and modern browsers. It includes full TypeScript type definitions.
Step 3: Initialize the client
Create a DRD client instance with your workspace API key. You can find your API key in the dashboard under Settings → API Keys.
import { DRDClient } from "@drd/sdk";

const drd = new DRDClient({
  apiKey: process.env.DRD_API_KEY!, // drd_ws_...
  // baseUrl: "https://api.drd.io/v1" (default)
});

Security: Never hardcode API keys in source code. Use environment variables or a secrets manager. See the Authentication guide for best practices.
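In that spirit, it also helps to fail fast when the key is missing rather than sending unauthenticated requests later. A minimal sketch (`requireEnv` is a hypothetical helper, not part of the SDK):

```typescript
// Read a required environment variable, throwing a clear error
// at startup if it is not set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return value;
}

// Usage (sketch):
// const drd = new DRDClient({ apiKey: requireEnv("DRD_API_KEY") });
```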
Step 4: Register an agent
Register an AI agent with the platform. Each agent gets a unique DID (Decentralized Identifier) and starts with a trust score of 50.
const agent = await drd.agents.create({
  name: "Content Scanner v1",
  description: "Scans uploaded content for policy compliance",
  metadata: {
    provider: "anthropic",
    model: "claude-opus-4-6",
    version: "1.0.0",
  },
  capabilities: ["content-analysis", "image-classification"],
  tags: ["production", "content-team"],
});

console.log("Agent ID:", agent.id);
console.log("Agent DID:", agent.did);
console.log("Trust Score:", agent.trustScore); // 50 (default)

Step 5: Create a governance policy
Create a governance policy that controls what your agent can do. Policies contain rules that are evaluated in priority order.
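Conceptually, priority-ordered evaluation can be sketched in plain TypeScript. This sketch assumes lower priority numbers are checked first and the first matching rule wins, with a default of allow when nothing matches; the platform's exact semantics may differ:

```typescript
// Minimal model of priority-ordered rule evaluation (assumption:
// ascending priority, first match wins, default "allow").
interface Rule {
  action: "deny" | "require_approval" | "allow";
  priority: number;
  matches: (ctx: { safety: { score: number } }) => boolean;
}

function evaluateRules(rules: Rule[], ctx: { safety: { score: number } }): string {
  const ordered = [...rules].sort((a, b) => a.priority - b.priority);
  for (const rule of ordered) {
    if (rule.matches(ctx)) return rule.action;
  }
  return "allow"; // assumed default when no rule matches
}

const rules: Rule[] = [
  { action: "deny", priority: 10, matches: (c) => c.safety.score < 0.7 },
  {
    action: "require_approval",
    priority: 20,
    matches: (c) => c.safety.score >= 0.7 && c.safety.score < 0.9,
  },
];

console.log(evaluateRules(rules, { safety: { score: 0.65 } })); // "deny"
console.log(evaluateRules(rules, { safety: { score: 0.85 } })); // "require_approval"
console.log(evaluateRules(rules, { safety: { score: 0.95 } })); // "allow"
```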
const policy = await drd.policies.create({
  name: "Content Moderation Policy",
  description: "Blocks unsafe content from being published",
  enabled: true,
  rules: [
    {
      action: "deny",
      condition: "safety.score < 0.7",
      resourceType: "content",
      priority: 10,
    },
    {
      action: "require_approval",
      condition: "safety.score < 0.9 && safety.score >= 0.7",
      resourceType: "content",
      priority: 20,
    },
  ],
});

console.log("Policy ID:", policy.id);
console.log("Rules:", policy.ruleCount);

Step 6: Guard an action
Before your agent performs an action, call the guard endpoint to check if the action is allowed by your policies.
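A common pattern is to wrap the check-then-act sequence in a small helper so the action can only run after an allowed decision. This is a sketch; `check` and `action` are hypothetical placeholders, not SDK methods:

```typescript
// Run `action` only when `check` returns an allowed decision.
type Decision = { allowed: boolean; reason?: string };

async function guarded<T>(
  check: () => Promise<Decision>,
  action: () => Promise<T>,
): Promise<T | null> {
  const decision = await check();
  if (!decision.allowed) {
    console.warn("Blocked:", decision.reason);
    return null; // caller handles the denial
  }
  return action();
}
```

In the steps below, `check` would call `drd.policies.evaluate(...)` and `action` would perform the guarded operation.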
const decision = await drd.policies.evaluate({
  agentId: agent.id,
  action: "publish_content",
  context: {
    contentType: "image",
    safety: { score: 0.85 },
    region: "eu-west-1",
  },
});

if (decision.allowed) {
  console.log("Action allowed:", decision.reason);
  // Proceed with the action
} else {
  console.log("Action denied:", decision.reason);
  // Handle denial
}

Step 7: Log events to the audit trail
Log your agent’s actions to the immutable, hash-chained audit trail. Events are used to compute trust scores and generate compliance reports.
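The chaining idea itself can be illustrated in plain TypeScript with Node's built-in `crypto` module: each event's chain hash covers its payload plus the previous hash, so tampering with any earlier event invalidates every later one. This illustrates the concept only; the platform's actual hashing scheme is not specified here:

```typescript
import { createHash } from "node:crypto";

// Hash-chain sketch: chainHash(n) = sha256(chainHash(n-1) || event(n)).
function chainHash(prevHash: string, event: object): string {
  return createHash("sha256")
    .update(prevHash)
    .update(JSON.stringify(event))
    .digest("hex");
}

let prev = "0".repeat(64); // genesis value (assumed)
const events = [
  { type: "content.published", approved: true },
  { type: "content.published", approved: false },
];
for (const ev of events) {
  prev = chainHash(prev, ev);
  console.log(prev); // each hash depends on all events before it
}
```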
const result = await drd.events.ingest([
  {
    type: "content.published",
    agentId: agent.id,
    data: {
      contentId: "019content-abcd-...",
      title: "Product Image - Summer Collection",
      contentType: "image",
      approved: true,
    },
    timestamp: new Date().toISOString(),
  },
]);

console.log("Events ingested:", result.ingested);
console.log("Chain hash:", result.events[0].chainHash);

Step 8: Check the trust score
Check your agent’s trust score as it builds a track record of compliant behavior.
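If the overall score is the weighted sum of the component scores (an assumption based on the example weights in the breakdown below, not a documented formula), it can be recomputed like this:

```typescript
// Recompute an overall score as a weighted sum of the breakdown
// components (assumed formula; weights sum to 1.0).
const breakdown = {
  compliance: { score: 92, weight: 0.4 },
  reliability: { score: 85, weight: 0.35 },
  transparency: { score: 84, weight: 0.25 },
};

const overall = Object.values(breakdown).reduce(
  (sum, { score, weight }) => sum + score * weight,
  0,
);

console.log(overall.toFixed(2)); // "87.55"
```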
const trust = await drd.trust.getScore(agent.id);

console.log("Trust Score:", trust.trustScore); // 50-100
console.log("Tier:", trust.tier); // bronze | silver | gold | government
console.log("Breakdown:", trust.breakdown);
// {
//   compliance: { score: 92, weight: 0.40 },
//   reliability: { score: 85, weight: 0.35 },
//   transparency: { score: 84, weight: 0.25 },
// }

Here is the full workflow in a single file:
import { DRDClient } from "@drd/sdk";

const drd = new DRDClient({
  apiKey: process.env.DRD_API_KEY!,
});

async function main() {
  // 1. Register agent
  const agent = await drd.agents.create({
    name: "My First Agent",
    description: "Demo agent for quick start guide",
    capabilities: ["content-analysis"],
  });

  // 2. Create policy
  await drd.policies.create({
    name: "Basic Safety Policy",
    enabled: true,
    rules: [
      { action: "deny", condition: "safety.score < 0.5", resourceType: "content", priority: 10 },
    ],
  });

  // 3. Guard an action
  const decision = await drd.policies.evaluate({
    agentId: agent.id,
    action: "publish_content",
    context: { safety: { score: 0.95 } },
  });

  if (decision.allowed) {
    // 4. Log the event
    await drd.events.ingest([{
      type: "content.published",
      agentId: agent.id,
      data: { action: "publish_content", approved: true },
      timestamp: new Date().toISOString(),
    }]);
  }

  // 5. Check trust score
  const trust = await drd.trust.getScore(agent.id);
  console.log(`Agent ${agent.name}: Trust Score ${trust.trustScore} (${trust.tier})`);
}
main();

Next steps

- REST API Reference: full endpoint documentation with request/response examples
- TypeScript SDK: a detailed SDK guide with advanced patterns and error handling
- Authentication: API key types, JWT authentication, and OAuth2 integration
- API Playground: an interactive endpoint explorer with live request simulation