The Identity Problem for AI Agents
AI agents are proliferating. They send emails, write code, manage infrastructure, and make purchasing decisions. But how do you verify who an agent is, what it's authorized to do, and whether its claims are real? API keys authenticate access. They don't establish identity. An API key tells you 'this request is authorized' — it doesn't tell you 'this agent is who it claims to be, has passed compliance checks, and is certified to operate in this domain.'
W3C Verifiable Credentials: The Standard
W3C Verifiable Credentials (VC 2.0) are a standard for expressing credentials on the web in a cryptographically verifiable way. A VC involves three roles: the Issuer (who makes the claim — e.g., DRD), the Subject (who the claim is about — e.g., an AI agent), and the Holder (who presents the credential, often the subject itself). VCs carry cryptographic proofs, typically Ed25519 or ES256 signatures, so anyone can verify a credential without contacting the issuer. This is the same technology behind digital diplomas, professional licenses, and government-issued digital IDs.
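In JSON terms, a minimal VC 2.0-shaped credential looks roughly like this (the DIDs and claim values here are illustrative, not real DRD identifiers):

```typescript
// A minimal W3C VC 2.0-shaped credential object (values are illustrative).
const credential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential"],
  issuer: "did:web:drd.example",        // the Issuer: who made the claim
  credentialSubject: {
    id: "did:key:z6Mk...agent",         // the Subject: who the claim is about
    certificationTier: "Gold",
  },
  // A real credential also carries a cryptographic proof block, e.g.:
  // proof: { type: "Ed25519Signature2020", proofValue: "...", ... }
};

console.log(credential.issuer); // the verifier reads the issuer, then checks the proof
```

The `@context` URL is the real VC 2.0 context; everything else is a sketch of the shape a verifier would parse.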
Applying VCs to AI Agents
DRD issues Verifiable Credentials to AI agents that pass governance requirements. A DRD-issued VC might assert: 'Agent content-guardian-v3 has a DRD Score of 97, holds Gold tier certification, and has maintained compliance for 180 consecutive days.' This credential can be presented to any platform, API, or counterparty that wants to verify the agent's trustworthiness — without calling DRD's API. The verification is purely cryptographic.
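Once a counterparty has verified the credential's signature, acting on the claims is a local policy decision. A hypothetical sketch (the claim names and policy thresholds below are illustrative, not a DRD API):

```typescript
// Hypothetical claim shape carried in a DRD-issued credential.
type DrdClaims = { drdScore: number; tier: string; compliantDays: number };

// Example local policy: accept Gold-tier agents scoring 95 or above.
// No call to DRD is needed -- the claims arrived inside a signed credential.
function meetsPolicy(claims: DrdClaims): boolean {
  return claims.tier === "Gold" && claims.drdScore >= 95;
}

console.log(meetsPolicy({ drdScore: 97, tier: "Gold", compliantDays: 180 })); // true
console.log(meetsPolicy({ drdScore: 80, tier: "Silver", compliantDays: 30 })); // false
```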
DID Methods and Resolution
Each agent gets a Decentralized Identifier (DID) — a globally unique, self-sovereign identifier that doesn't depend on any central registry. DRD supports did:web (resolved via HTTPS, simple to deploy), did:key (self-contained, no resolution needed), and did:ion (anchored to Bitcoin for maximum decentralization). The DID document contains the agent's public keys, service endpoints, and links to its Verifiable Credentials.
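For did:web, resolution is just a URL transformation defined by the method spec: the domain hosts the DID document over HTTPS. A minimal resolver sketch (the example domain is illustrative):

```typescript
// did:web resolution per the did:web method spec:
// did:web:example.com           -> https://example.com/.well-known/did.json
// did:web:example.com:path:a    -> https://example.com/path/a/did.json
function didWebToUrl(did: string): string {
  const id = did.replace(/^did:web:/, "");
  const parts = id.split(":").map(decodeURIComponent); // decode e.g. encoded ports
  return parts.length === 1
    ? `https://${parts[0]}/.well-known/did.json`
    : `https://${parts[0]}/${parts.slice(1).join("/")}/did.json`;
}

console.log(didWebToUrl("did:web:drd.example"));
// → https://drd.example/.well-known/did.json
```

did:key needs no such step — the public key is encoded in the identifier itself — while did:ion resolution walks a Sidetree anchoring chain.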
Trust Chain: From Badge to Proof
DRD's trust badges — Bronze, Silver, Gold, Government — are backed by VCs under the hood. When you see a 'PROTECTED BY DRD' badge, it's not just a PNG image. It's a link to a verifiable credential chain: the badge links to a VC, the VC links to the issuer's DID, and the issuer's DID resolves to DRD's public keys. Anyone can verify the entire chain in milliseconds.
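The chain walk above can be sketched as follows. In-memory maps stand in for the network fetches, and the function names are hypothetical — this is not a real DRD SDK:

```typescript
// Illustrative badge -> credential -> issuer-DID -> public-key chain walk.
type Credential = { issuer: string; claims: Record<string, unknown>; signature: string };
type DidDocument = { id: string; publicKey: string };

// Stubs standing in for HTTPS fetches and DID resolution.
const credentialStore: Record<string, Credential> = {
  "https://badge.example/agent-1": {
    issuer: "did:web:drd.example",
    claims: { tier: "Gold" },
    signature: "sig",
  },
};
const didRegistry: Record<string, DidDocument> = {
  "did:web:drd.example": { id: "did:web:drd.example", publicKey: "pk" },
};

// Stand-in for a real Ed25519 signature check.
function verifySignature(cred: Credential, key: string): boolean {
  return cred.signature === "sig" && key === "pk";
}

function verifyBadge(badgeUrl: string): boolean {
  const cred = credentialStore[badgeUrl];         // badge links to a VC
  const didDoc = didRegistry[cred.issuer];        // VC links to the issuer's DID
  return verifySignature(cred, didDoc.publicKey); // DID document holds the keys
}

console.log(verifyBadge("https://badge.example/agent-1")); // true
```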
Custom Implementation with @noble/curves
DRD uses a custom W3C VC 2.0 implementation built on @noble/curves for Ed25519 signatures — zero dependency on heavy frameworks. The implementation provides DID creation and resolution, VC issuance with Ed25519Signature2020 proofs, credential storage and retrieval, and selective disclosure (reveal only what's needed). This means agents can present proof of their trust score without revealing their full compliance history — privacy-preserving trust verification.
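The core signing primitive is small. The sketch below shows Ed25519 issue-and-verify in miniature; it uses Node's built-in crypto so it runs with zero dependencies, whereas DRD's implementation uses @noble/curves for the same operation:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer key pair (in practice, DRD's long-lived signing key).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The claim payload to be signed (values are illustrative).
const payload = Buffer.from(JSON.stringify({ subject: "did:key:agent", drdScore: 97 }));

// Ed25519 takes no digest algorithm, hence the null first argument.
const signature = sign(null, payload, privateKey);      // issuer signs the claim
const ok = verify(null, payload, publicKey, signature); // any holder of the public key verifies

console.log(ok); // true
```

This is why verification needs no call to DRD: the public key published in DRD's DID document is enough to check any credential it has issued.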
Ready to protect your digital rights?
Get started with DRD — governance, enforcement, and trust for AI agents and digital content.