Generate human-readable explanations for every AI decision on your platform. DRD's explainability engine translates complex policy evaluations, trust score changes, and enforcement actions into clear, auditable narratives. Required for EU AI Act compliance and essential for building user trust.
- **Regulatory Compliance:** EU AI Act Article 13 requires high-risk AI systems to be transparent enough for deployers to interpret their output, and GDPR Article 22 (read with Recital 71) is widely interpreted as giving data subjects a right to meaningful information about automated decisions.
- **Trust Building:** Users trust AI systems more when they understand how decisions are made. Explanations increase adoption and reduce dispute rates.
- **Auditability:** Regulators and auditors need to understand AI decision-making. Explanations create human-readable audit trails.
- **Debugging:** Engineers use explanations to identify policy misconfigurations, model drift, and unexpected behavior patterns.
DRD generates explanations for decisions made by agents running on these model providers. The explanation engine adapts its analysis based on the model's architecture and capabilities.
- Anthropic Claude for nuanced, safety-aware explanations.
- OpenAI GPT for broad knowledge explanations.
- Google Gemini for multimodal context explanations.
- Self-hosted models for privacy-sensitive explanations.
DRD supports multiple explanation formats tailored to different audiences. Request the format that best fits your use case.
| Format | Audience | Description |
|---|---|---|
| `natural_language` | End users, customers | Plain English explanation suitable for end users and non-technical stakeholders. |
| `technical` | Engineers, developers | Detailed technical explanation including policy rules matched, evaluation logic, and confidence calculations. |
| `regulatory` | Auditors, regulators | Structured explanation formatted for regulatory compliance, with legal basis, proportionality, and appeal info. |
| `counterfactual` | Product managers, users | Explains what would need to change for a different outcome. Useful for remediation guidance. |
| `summary` | Operations, monitoring | One-line summary of the decision and its primary reason. Suitable for dashboards and notifications. |
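For example, a dashboard widget would request the `summary` format. A minimal SDK sketch (the decision ID is a placeholder, and it assumes the one-line text is returned in the same `explanation.summary` field as the natural-language response shown below):

import { DRDClient } from "@drd/sdk";

const drd = new DRDClient({ apiKey: process.env.DRD_API_KEY! });

// One-line explanation for a dashboard or notification
const dashboardSummary = await drd.explanations.generate({
  decisionId: "dec_01abc...", // placeholder decision ID
  format: "summary",
  language: "en",
});
console.log(dashboardSummary.explanation.summary);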
Generate explanations for any decision via the REST API or SDK.
curl -X POST https://api.drd.io/v1/explanations \
-H "Authorization: Bearer drd_ws_..." \
-H "Content-Type: application/json" \
-d '{
"decisionId": "dec_01abc...",
"format": "natural_language",
"language": "en",
"options": {
"includeCounterfactual": true,
"includeConfidenceBreakdown": true,
"maxLength": 500
}
}'
Response:
{
"id": "expl_01xyz...",
"decisionId": "dec_01abc...",
"format": "natural_language",
"language": "en",
"generatedAt": "2026-02-14T10:30:05Z",
"explanation": {
"summary": "Your request to send 52 emails was denied because it exceeded the hourly rate limit of 50.",
"detail": "The Content Scanner v2 agent attempted to send an email to user@example.com, which would have been its 52nd email in the current hour. Your organization's 'Email Rate Limiter' policy sets a maximum of 50 emails per hour to prevent spam and protect sender reputation.",
"factors": [
{
"factor": "Email count in current hour",
"value": "52",
"threshold": "50",
"weight": 0.95,
"impact": "primary"
},
{
"factor": "Agent trust score",
"value": "87",
"threshold": null,
"weight": 0.05,
"impact": "secondary"
}
]
},
"counterfactual": {
"description": "This action would have been allowed if:",
"conditions": [
"The agent had sent fewer than 50 emails this hour",
"The rate limit policy threshold was increased to 55 or higher",
"The agent had a trust score of 95+ (Gold tier agents have a 20% rate limit bonus)"
]
},
"confidence": {
"overall": 0.95,
"breakdown": {
"policyMatch": 0.99,
"contextAccuracy": 0.94,
"explanationQuality": 0.92
}
},
"metadata": {
"modelUsed": "claude-opus-4-6",
"generationTimeMs": 1240,
"tokenCount": 342
}
}

import { DRDClient } from "@drd/sdk";
const drd = new DRDClient({ apiKey: process.env.DRD_API_KEY! });
// Generate a natural language explanation
const explanation = await drd.explanations.generate({
decisionId: "dec_01abc...",
format: "natural_language",
language: "en",
options: {
includeCounterfactual: true,
includeConfidenceBreakdown: true,
},
});
console.log("Summary:", explanation.explanation.summary);
console.log("Detail:", explanation.explanation.detail);
console.log("Confidence:", explanation.confidence.overall);
// Generate for end-user display
if (explanation.counterfactual) {
console.log("\nHow to resolve:");
for (const condition of explanation.counterfactual.conditions) {
console.log(" -", condition);
}
}
// Generate a regulatory explanation
const regulatoryExplanation = await drd.explanations.generate({
decisionId: "dec_01abc...",
format: "regulatory",
language: "en",
options: {
framework: "eu_ai_act",
includeAppealInfo: true,
},
});
console.log("Legal basis:", regulatoryExplanation.explanation.legalBasis);
console.log("Proportionality:", regulatoryExplanation.explanation.proportionality);
console.log("Appeal process:", regulatoryExplanation.explanation.appealProcess);Generate explanations for multiple decisions at once, or configure auto-explain to attach explanations to all new decisions.
// Generate explanations for multiple decisions
const batch = await drd.explanations.batch({
decisionIds: ["dec_01abc...", "dec_02def...", "dec_03ghi..."],
format: "technical",
language: "en",
});
for (const result of batch.results) {
if (result.status === "success") {
console.log(result.decisionId, result.explanation.summary);
} else {
console.error(result.decisionId, "Failed:", result.error);
}
}
// Auto-explain: attach explanations to all new decisions
await drd.explanations.configure({
autoExplain: true,
defaultFormat: "summary",
defaultLanguage: "en",
triggers: {
onDenied: true, // Explain all denied decisions
onEscalated: true, // Explain all escalated decisions
onLowConfidence: true, // Explain decisions with confidence < 0.8
onAllowed: false, // Don't explain routine approvals
},
});

Cost note: Explanation generation uses AI model calls internally. Summary explanations cost ~$0.001 each; natural language ~$0.005; regulatory ~$0.01. Batch generation offers a 30% discount over individual calls.
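As a back-of-the-envelope check against these list prices (a sketch only; actual pricing may vary by plan and token usage):

// Estimate: 1,000 natural-language explanations generated in batch
const unitCost = 0.005;    // ~$0.005 per natural-language explanation
const batchDiscount = 0.3; // batch generation is ~30% cheaper
const estimatedCost = 1000 * unitCost * (1 - batchDiscount);
console.log(`Estimated cost: ~$${estimatedCost.toFixed(2)}`); // ~$3.50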
Each explanation includes a confidence score indicating how accurately the explanation represents the actual decision process. Low-confidence explanations are flagged for human review.
| Component | What It Measures |
|---|---|
| policyMatch | Accuracy of identifying which policy rules led to the decision |
| contextAccuracy | Correctness of the context variables referenced in the explanation |
| explanationQuality | Clarity, completeness, and readability of the generated text |
| counterfactualValidity | Whether the suggested alternative conditions would actually change the outcome |
Quality assurance: Explanations with an overall confidence below 0.7 include a disclaimer indicating that the explanation may not fully represent the decision process. These are automatically queued for human review in compliance-sensitive workspaces.
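In application code you can apply the same 0.7 cutoff before surfacing an explanation to an end user. A minimal sketch, reusing the `drd` client from the examples above; `queueForReview` is a hypothetical helper in your own codebase, not part of the SDK:

const expl = await drd.explanations.generate({
  decisionId: "dec_01abc...",
  format: "natural_language",
  language: "en",
  options: { includeConfidenceBreakdown: true },
});

if (expl.confidence.overall < 0.7) {
  // Below the documented review threshold: keep the disclaimer visible
  // and route the explanation to a human reviewer.
  await queueForReview(expl.id); // hypothetical helper in your codebase
} else {
  console.log(expl.explanation.summary);
}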
Explanations can be generated in 20+ languages. Specify the language parameter using ISO 639-1 codes.
// German explanation for EU AI Act compliance
const deExplanation = await drd.explanations.generate({
decisionId: "dec_01abc...",
format: "regulatory",
language: "de",
options: { framework: "eu_ai_act" },
});
// Supported languages include:
// en, de, fr, es, it, pt, nl, pl, sv, da, fi, no,
// ja, ko, zh, ar, hi, ru, tr, th

Create reusable question templates with pattern matching for consistent explanations.
await drd.explainability.createTemplate({
  name: "Policy Denial",
  questionPattern: "Why was {action} denied for {agent}?",
  context: "policy_enforcement",
  promptTemplate: "Explain the policy evaluation...",
});
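The `{action}` and `{agent}` placeholders are filled in at question time. A small illustration of that substitution (plain string interpolation, independent of the DRD API):

// Fill a question pattern like "Why was {action} denied for {agent}?"
function fillPattern(pattern: string, vars: Record<string, string>): string {
  return pattern.replace(/\{(\w+)\}/g, (_, key) => vars[key] ?? `{${key}}`);
}

const question = fillPattern("Why was {action} denied for {agent}?", {
  action: "send_email",
  agent: "Content Scanner v2",
});
// => "Why was send_email denied for Content Scanner v2?"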