Comprehensive protection against generative AI risks. DRD provides content provenance tracking, deepfake detection, regulatory compliance automation, bias auditing, and post-quantum cryptography -- all integrated into your agent governance pipeline.
As generative AI capabilities expand, so do the risks: synthetic content that erodes trust, models trained on unauthorized data, biased outputs that create liability, and evolving regulations that demand documented compliance. DRD's GenAI Safety suite addresses each of these vectors with automated detection, classification, and evidence generation.
C2PA v2.2 Content Credentials with full provenance chain and tamper-evident signatures.
EU AI Act compliance engine with risk classification, evidence packs, and deadline tracking.
ML-KEM key encapsulation plus ML-DSA and SLH-DSA digital signatures that resist attacks from quantum computers.
DRD implements the Coalition for Content Provenance and Authenticity (C2PA) v2.2 standard to embed tamper-evident Content Credentials into images, video, audio, and documents. Every piece of content registered through DRD receives a cryptographically signed manifest that records its origin, creation tools, and modification history.
Content manifests are signed with Ed25519 (default) or ECDSA P-384 keys issued by the DRD Platform CA. Signatures are verifiable by any C2PA-compliant tool.
Each edit, transformation, or AI-generation step adds a new assertion to the provenance chain. The full history is embedded in the content file.
Supported formats: JPEG, PNG, WebP, HEIF, TIFF, MP4, WebM, WAV, MP3, PDF, and OOXML (Word, Excel, PowerPoint).
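Verification can run through any C2PA-compliant tool; the sketch below shows what an SDK-side check might look like, assuming a hypothetical `drd.content.verify` method and response shape (neither is confirmed by the documentation above):

```typescript
// Minimal verification sketch. `drd.content.verify` and the response
// fields are assumptions for illustration, not a documented DRD API.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const result = await drd.content.verify({
  contentUrl: "https://cdn.example.com/images/photo.jpg",
});

console.log(result.signatureValid);  // true if the manifest signature validates
console.log(result.provenanceChain); // ordered assertions: origin, edits, AI steps
```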
The deepfake detection pipeline analyzes visual and audio content for synthetic generation artifacts. Multiple detection models run in parallel, and their results are fused into a single confidence score.
Detects GAN artifacts in the frequency domain (DCT/DFT analysis) that are invisible in the spatial domain.
Checks for lighting inconsistencies, boundary artifacts, and temporal coherence in video face regions.
Analyzes mel-frequency spectrograms for synthetic speech patterns including vocoder artifacts and unnatural prosody.
Cross-references EXIF/XMP metadata against known synthetic content patterns and generation tool signatures.
Fuses results from all detectors using a weighted ensemble. Returns a composite confidence score (0-1).
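To make the pipeline concrete, here is a minimal sketch of invoking detection and reading the fused score; the `drd.detection.analyze` method, the detector identifiers, and the response fields are assumptions for illustration:

```typescript
// Hypothetical detection call: method name, detector identifiers, and
// response shape are illustrative assumptions, not a documented API.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const report = await drd.detection.analyze({
  contentUrl: "https://cdn.example.com/videos/interview.mp4",
  detectors: ["frequency", "face-forensics", "voice", "metadata"],
});

console.log(report.detectors);  // per-detector scores, e.g. { frequency: 0.12, ... }
console.log(report.confidence); // weighted-ensemble composite score in [0, 1]
```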
Track the provenance of training data and model lineage from dataset curation through fine-tuning to production deployment. DRD maintains a chain of custody that satisfies regulatory requirements for AI model transparency.
| Capability | Description |
|---|---|
| Dataset Registration | Register training datasets with content hashes, licenses, and consent records |
| License Compliance | Automated checking of training data licenses against model usage (CC, Apache, proprietary) |
| Model Lineage Graph | Visual DAG showing base model, fine-tuning datasets, RLHF data, and deployment checkpoints |
| Consent Verification | Cross-reference data subjects against consent records and opt-out registries |
| Data Poisoning Detection | Statistical analysis for distribution anomalies and adversarial samples in training sets |
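A registration sketch under stated assumptions: `drd.datasets.register` and its parameters are hypothetical names chosen to mirror the capabilities in the table above, not a documented API.

```typescript
// Hypothetical dataset registration: method and parameter names are
// assumptions that mirror the capability table above.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const dataset = await drd.datasets.register({
  name: "support-tickets-2025",
  contentHash: "sha256:0f1e2d3c...",
  license: "CC-BY-4.0",                     // checked against intended model usage
  consentRecordIds: ["consent-batch-0042"], // cross-referenced with opt-out registries
  lineage: { baseModel: "base-llm-v3", stage: "fine-tuning" },
});
```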
Embed imperceptible watermarks in AI-generated content to enable downstream identification. DRD watermarks survive common transformations including compression, cropping, scaling, and format conversion.
Frequency-domain watermarking using DWT (Discrete Wavelet Transform). Imperceptible at PSNR > 42 dB. Survives JPEG compression down to quality 20.
Statistical token-level watermarking that modifies LLM sampling probabilities. Detectable with high confidence (>99.9%) on passages of 200+ tokens.
Spread-spectrum watermarking in the mel-frequency domain. Survives transcoding, noise addition, and pitch shifting within +/- 2 semitones.
Per-frame DWT watermarking with temporal redundancy. Each frame carries the full watermark payload for robustness against frame dropping.
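Downstream identification is the payoff; a detection sketch follows, with `drd.watermark.detect` and its response fields assumed for illustration:

```typescript
// Hypothetical watermark detection: method name and response shape assumed.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const wm = await drd.watermark.detect({
  contentUrl: "https://cdn.example.com/images/suspect.jpg",
});

console.log(wm.detected); // true if a watermark survived any transformations
console.log(wm.payload);  // recovered payload, e.g. "acme-corp:product-launch:2026-02-14"
```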
DRD automates compliance with the EU Artificial Intelligence Act by classifying AI systems, generating required documentation, and monitoring ongoing obligations. The compliance engine maps your agent configurations to the Act's requirements and identifies gaps.
Every registered AI agent is automatically classified into one of four risk tiers based on its capabilities, deployment context, and data access patterns.
Unacceptable risk. Prohibited uses under Article 5: social scoring, real-time biometric identification (with narrow exceptions), and manipulation of vulnerable groups. DRD blocks registration of agents in this category.
High risk. Annex III systems: biometric categorization, critical infrastructure, education, employment, law enforcement, and migration. These require conformity assessment, risk management, and ongoing monitoring.
Limited risk. Systems with transparency obligations: chatbots, emotion recognition, and deepfake generation. These must disclose AI involvement to users and label AI-generated content.
Minimal risk. Low-risk applications with no specific regulatory obligations. DRD recommends voluntary compliance with codes of practice for trust score benefits.
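A classification preview can be useful before full registration; the sketch below assumes a hypothetical `drd.compliance.classify` method (the complete compliance check appears in the code example at the end of this page):

```typescript
// Hypothetical standalone classification call; the method name and
// response fields are assumptions. See the full compliance check below.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const tier = await drd.compliance.classify({
  agentId: "01956abc-def0-7890-abcd-1234567890ab",
  framework: "eu-ai-act",
});

console.log(tier.level);  // "unacceptable" | "high" | "limited" | "minimal"
console.log(tier.reason); // capability and deployment-context rationale
```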
When regulators request documentation, DRD generates a complete evidence pack containing all artifacts needed to demonstrate compliance. Evidence packs are cryptographically signed and include hash-chain verification for tamper evidence.
| Artifact | Contents |
|---|---|
| Risk Assessment Report | Risk classification rationale, capability analysis, and deployment context evaluation |
| Technical Documentation | System architecture, data flow diagrams, model cards, and API specifications |
| Audit Trail Export | Complete event history with hash-chain verification for the assessment period |
| Bias Audit Results | Statistical analysis of model outputs across protected categories |
| Policy Configuration | Active governance policies with version history and evaluation statistics |
| Incident Reports | Any policy violations, enforcement actions, and resolution records |
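A request sketch, assuming hypothetical `drd.compliance.evidence.generate` parameters; the signed hash and hash-chain verification described above are what a regulator would check on receipt:

```typescript
// Hypothetical evidence pack request: method and parameters assumed.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const pack = await drd.compliance.evidence.generate({
  agentId: "01956abc-def0-7890-abcd-1234567890ab",
  framework: "eu-ai-act",
  period: { from: "2026-01-01", to: "2026-02-14" },
});

console.log(pack.signedHash);  // verify this hash to confirm the pack is untampered
console.log(pack.downloadUrl); // e.g. "/api/v1/compliance/evidence/..."
```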
The compliance calendar tracks all regulatory deadlines, audit schedules, and renewal dates. Automated reminders are sent via webhook, email, or Slack integration at configurable intervals before each deadline.
EU AI Act phased deadlines, SOC 2 audit windows, GDPR DPIA reviews, and custom organizational milestones. Color-coded by urgency (green/amber/red).
Configurable reminders at 90, 60, 30, 14, and 7 days before deadlines. Escalation chains ensure critical deadlines are never missed.
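A configuration sketch under assumed names (`drd.compliance.calendar.configure` and its fields are hypothetical), matching the intervals and channels described above:

```typescript
// Hypothetical reminder configuration: method and fields assumed.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

await drd.compliance.calendar.configure({
  deadline: "gdpr-dpia-renewal",
  remindDaysBefore: [90, 60, 30, 14, 7],
  channels: ["email", "slack", "webhook"],
  escalation: { afterDays: 14, notify: ["compliance-lead@acme.com"] },
});
```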
The bias audit framework systematically tests LLM outputs for differential treatment across protected categories including race, gender, age, disability, and national origin. Audits run on configurable schedules and produce detailed statistical reports.
Measures whether positive outcomes are distributed equally across demographic groups. Reports the maximum disparity ratio.
Checks that true positive and false positive rates are consistent across groups. Identifies groups with disproportionate error rates.
Tests whether changing a protected attribute in the prompt changes the output. Measures sensitivity to identity-related terms.
Detects outputs that reinforce harmful stereotypes. Uses a curated taxonomy of stereotypical associations.
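The demographic parity metric reduces to simple arithmetic: the maximum disparity ratio is the lowest group-level positive-outcome rate divided by the highest, where 1.0 is perfect parity and values below a threshold such as 0.8 are commonly flagged. A minimal sketch:

```typescript
// Demographic parity disparity ratio: the minimum positive-outcome rate
// across groups divided by the maximum. 1.0 = perfect parity; values
// below a threshold (commonly 0.8) are flagged.
function disparityRatio(positiveRates: Record<string, number>): number {
  const rates = Object.values(positiveRates);
  return Math.min(...rates) / Math.max(...rates);
}

// Example: per-group positive-outcome rates from an audit run.
console.log(disparityRatio({ groupA: 0.48, groupB: 0.5 })); // 0.96
```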
DRD implements NIST-standardized post-quantum cryptographic algorithms alongside classical algorithms in a hybrid configuration. This ensures that content provenance, trust signatures, and audit chains remain secure even against future quantum computing attacks.
| Algorithm | Type | Use Case | NIST Standard |
|---|---|---|---|
| ML-KEM | Key Encapsulation | Federation key exchange and encrypted trust score transport | FIPS 203 |
| ML-DSA | Digital Signature | Content credential signing and audit chain signatures | FIPS 204 |
| SLH-DSA | Digital Signature (stateless) | Long-lived certificate signatures and root CA keys | FIPS 205 |
Hybrid mode: DRD uses a hybrid signing scheme that produces both a classical Ed25519 signature and a post-quantum ML-DSA signature. Verifiers can validate either or both, providing backwards compatibility with existing C2PA tooling.
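A verification sketch for hybrid mode, with `drd.content.verifySignature` and its options assumed for illustration; a classical-only verifier would accept the Ed25519 signature, while a PQ-aware verifier can check ML-DSA as well:

```typescript
// Hypothetical hybrid-signature verification: method and options assumed.
import { DRD } from "@drd-io/sdk";

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

const check = await drd.content.verifySignature({
  manifestId: "urn:uuid:019manifest-...",
  require: "either", // "either" | "classical" | "post-quantum" | "both"
});

console.log(check.ed25519Valid); // classical signature result
console.log(check.mldsaValid);   // ML-DSA-65 post-quantum signature result
```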
```typescript
// Register content with C2PA Content Credentials
import { DRD } from "@drd-io/sdk";
const drd = new DRD({ apiKey: process.env.DRD_API_KEY });
const content = await drd.content.register({
title: "Product Launch Announcement",
contentType: "image",
contentUrl: "https://cdn.acme.com/images/product-launch-2026.jpg",
contentHash: "sha256:a1b2c3d4e5f6...",
// C2PA options
c2pa: {
enabled: true,
assertions: [
{
label: "c2pa.created",
data: {
softwareAgent: "DRD.io Content Protection v1.0",
when: new Date().toISOString(),
},
},
{
label: "stds.schema-org.CreativeWork",
data: {
author: [{ name: "Acme Corp Marketing" }],
copyrightNotice: "Copyright 2026 Acme Corp. All rights reserved.",
},
},
],
// Use hybrid signing (Ed25519 + ML-DSA)
signatureAlgorithm: "hybrid-ed25519-mldsa65",
},
// Watermarking options
watermark: {
enabled: true,
payload: "acme-corp:product-launch:2026-02-14",
strength: "medium", // imperceptible but robust
},
});
console.log(content);
// {
// id: "019content-abcd-1234-ef56-7890abcdef01",
// title: "Product Launch Announcement",
// c2pa: {
// manifestId: "urn:uuid:019manifest-...",
// status: "signed",
// signatureAlgorithms: ["Ed25519", "ML-DSA-65"],
// assertions: 2,
// },
// watermark: {
// embedded: true,
// algorithm: "dwt-spread-spectrum",
// detectable: true,
// },
// registeredAt: "2026-02-14T10:00:00Z"
// }
```

```typescript
// Run an EU AI Act compliance check on an agent
const complianceReport = await drd.compliance.check({
agentId: "01956abc-def0-7890-abcd-1234567890ab",
frameworks: ["eu-ai-act", "gdpr"],
// Optional: include bias audit
biasAudit: {
enabled: true,
categories: ["gender", "race", "age", "disability"],
sampleSize: 1000,
},
});
console.log(complianceReport);
// {
// agentId: "01956abc-def0-7890-abcd-1234567890ab",
// agentName: "Content Scanner v2",
// overallStatus: "compliant_with_findings",
//
// riskClassification: {
// tier: "limited",
// reason: "Agent performs content analysis (transparency obligations apply)",
// requiredActions: [
// "Disclose AI involvement to end users",
// "Label AI-generated analysis results",
// ],
// },
//
// frameworks: {
// "eu-ai-act": {
// status: "compliant",
// findings: 0,
// articles: ["Article 50 - Transparency (satisfied)"],
// },
// "gdpr": {
// status: "finding",
// findings: 1,
// details: [{
// article: "Article 35 - DPIA",
// finding: "Data Protection Impact Assessment due for renewal",
// severity: "medium",
// deadline: "2026-04-15T00:00:00Z",
// }],
// },
// },
//
// biasAudit: {
// status: "pass",
// metrics: {
// demographicParity: { score: 0.96, threshold: 0.8, pass: true },
// equalizedOdds: { score: 0.93, threshold: 0.85, pass: true },
// counterfactualFairness: { score: 0.91, threshold: 0.85, pass: true },
// stereotypeReinforcement: { detected: 0, threshold: 5, pass: true },
// },
// sampleSize: 1000,
// completedAt: "2026-02-14T10:05:00Z",
// },
//
// evidencePack: {
// id: "evidence-019abc-...",
// generatedAt: "2026-02-14T10:05:30Z",
// signedHash: "sha256:f1e2d3c4b5a6...",
// downloadUrl: "/api/v1/compliance/evidence/evidence-019abc-...",
// },
// }
```

Policy Engine: Configure governance rules that enforce safety and compliance policies.
Federated Trust: Share safety signals across organizations in the federated trust network.
Event Sourcing: Hash-chained audit trail that backs compliance evidence packs.
Trust Alerts: Real-time alerting on safety threshold breaches.