Explainability
Understand why AI agents make specific decisions. Human-readable explanations with factor analysis and confidence scoring.
Capabilities
Get clear, human-readable explanations for any AI agent decision. No technical jargon, just plain answers.
Generate explanations using Claude, GPT, Gemini, or self-hosted local models. Choose the right model for each context.
Ask follow-up questions about any decision. Deep-dive into specific factors or request alternative perspectives.
Create reusable question patterns for consistent explanations across similar decisions and events.
See exactly which factors were considered in each decision with weighted importance scores.
Every explanation includes a confidence level (high, medium, low) so you know how reliable the answer is.
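The weighted factors and confidence levels above can be modeled client-side. Below is a minimal TypeScript sketch; the `Factor` and `ExplanationReport` shapes are inferred from the fields shown in the Developer Integration example, and the 0–1 weight range is an assumption, not a published type definition.

```typescript
// Hypothetical shapes for an explanation report, inferred from the fields
// in the SDK example (explanation, confidence, factorsConsidered).
type Confidence = 'high' | 'medium' | 'low';

interface Factor {
  name: string;
  weight: number; // relative importance; assumed to fall in 0..1
}

interface ExplanationReport {
  explanation: string;
  confidence: Confidence;
  factorsConsidered: Factor[];
}

// Example helper: surface the most heavily weighted factor.
function topFactor(report: ExplanationReport): Factor | undefined {
  return [...report.factorsConsidered].sort((a, b) => b.weight - a.weight)[0];
}

const sample: ExplanationReport = {
  explanation: 'The request was denied because the agent lacked the required scope.',
  confidence: 'high',
  factorsConsidered: [
    { name: 'missing scope', weight: 0.7 },
    { name: 'rate limit history', weight: 0.3 },
  ],
};

console.log(topFactor(sample)?.name); // prints "missing scope"
```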
Getting Started
Select an agent decision or event and ask why it happened. Use natural language or a pre-built template.
DRD generates a detailed explanation with factor analysis, confidence scoring, and supporting evidence.
Ask follow-up questions, save useful templates, and build a library of explainability patterns.
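A reusable question template can be as simple as a pattern string filled in per event. The sketch below illustrates the idea; the `{{field}}` syntax and the `renderQuestion` helper are assumptions for illustration, not a documented DRD template format.

```typescript
// A minimal, client-side sketch of a reusable question template.
interface QuestionTemplate {
  name: string;
  pattern: string; // e.g. 'Why was {{subject}} {{outcome}}?'
}

// Fill {{field}} placeholders from a values map; unknown keys are left intact.
function renderQuestion(
  template: QuestionTemplate,
  values: Record<string, string>,
): string {
  return template.pattern.replace(
    /\{\{(\w+)\}\}/g,
    (_m: string, key: string) => values[key] ?? `{{${key}}}`,
  );
}

const denialTemplate: QuestionTemplate = {
  name: 'denial-review',
  pattern: 'Why was {{subject}} {{outcome}}?',
};

console.log(
  renderQuestion(denialTemplate, { subject: 'this request', outcome: 'denied' }),
); // prints "Why was this request denied?"
```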
Developer Integration
import { DRD } from '@drd/sdk';
const drd = new DRD({ token: 'drd_live_sk_...' });
// Generate an explanation
const report = await drd.explainability.generate({
  agentId: 'agent_abc123',
  eventId: 'evt_xyz789',
  questionText: 'Why was this request denied?',
  modelUsed: 'claude',
});
console.log(report.explanation);
console.log('Confidence:', report.confidence);
console.log('Factors:', report.factorsConsidered);

Human-readable explanations for every AI decision. Build trust through transparency.
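A follow-up question would naturally reference the report it builds on. The sketch below shows one way to compose such a request; `parentReportId` and the `followUp` helper are assumptions for illustration, not documented SDK parameters.

```typescript
// Hypothetical follow-up request, reusing the field names from the
// generate() call above. parentReportId is an assumed linking field.
interface FollowUpRequest {
  agentId: string;
  eventId: string;
  questionText: string;
  parentReportId: string;
}

function followUp(
  prev: { agentId: string; eventId: string; reportId: string },
  questionText: string,
): FollowUpRequest {
  return {
    agentId: prev.agentId,
    eventId: prev.eventId,
    questionText,
    parentReportId: prev.reportId,
  };
}

const req = followUp(
  { agentId: 'agent_abc123', eventId: 'evt_xyz789', reportId: 'rep_001' },
  'Which factor weighed most heavily?',
);
console.log(req.parentReportId); // prints "rep_001"
```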