Track every model's ancestry, training method, data sources, and compliance status. Maintain a complete audit trail from base model to production deployment.
Supervised learning: Labeled training data with explicit input-output pairs.
Fine-tuning: Adapting a pre-trained model on domain-specific data.
RLHF: Reinforcement learning from human feedback.
Distillation: Knowledge transfer from a larger teacher model.
Each training method carries different compliance implications for data licensing, consent requirements, and audit depth.
Supervised learning requires documentation of labeling methodology, annotator agreements, and label quality metrics.
Compliance requirements: Dataset licensing, annotator consent, bias auditing
Fine-tuning requires lineage of the base model, fine-tuning dataset provenance, and hyperparameter documentation.
Compliance requirements: Base model license, derivative rights, data consent
RLHF tracks reward model lineage, human preference datasets, and feedback loop iterations, along with annotator demographics.
Compliance requirements: Annotator agreements, preference data consent, bias mitigation
Distillation requires documentation of teacher model identity, distillation methodology, and capability retention metrics.
Compliance requirements: Teacher model license, capability disclosure, performance benchmarks
A code sketch of this method-to-requirements mapping follows below.
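To make the mapping concrete, here is a minimal TypeScript sketch. The type and constant names are illustrative assumptions, not part of the DRD SDK; only the 'fine_tuning' method string is confirmed by the SDK examples later in this page.

// Minimal sketch: map each training method to its compliance requirements.
// Names are illustrative assumptions, not DRD SDK identifiers.
type TrainingMethod = 'supervised' | 'fine_tuning' | 'rlhf' | 'distillation';

const complianceRequirements: Record<TrainingMethod, string[]> = {
  supervised: ['dataset licensing', 'annotator consent', 'bias auditing'],
  fine_tuning: ['base model license', 'derivative rights', 'data consent'],
  rlhf: ['annotator agreements', 'preference data consent', 'bias mitigation'],
  distillation: ['teacher model license', 'capability disclosure', 'performance benchmarks'],
};

// Example: list what a fine-tuned model must document.
console.log(complianceRequirements.fine_tuning.join(', '));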
High-risk AI systems must document the training methodologies used, including data selection, labeling, and cleaning techniques. DRD Model Lineage provides this documentation automatically.
Every dataset linked to a model lineage record includes full provenance metadata.
name (required): Human-readable dataset identifier
source (required): Origin URL or internal registry path
license (required): SPDX license identifier (e.g., CC-BY-4.0)
version (required): Semantic version or hash of the dataset snapshot
recordCount (required): Number of records/samples in the dataset
collectedAt (required): ISO 8601 timestamp of data collection
consentType (required): One of none, opt-in, opt-out, contractual
piiCategories (if applicable): Array of PII types present
biasAudit (recommended): Reference to bias assessment report
retentionPolicy (recommended): Data retention period and deletion rules
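The same fields can be expressed as a TypeScript interface. This is a minimal sketch derived from the table above, not an official DRD type:

// Sketch of dataset provenance metadata; field names follow the table above.
interface DatasetProvenance {
  name: string;                // Human-readable dataset identifier
  source: string;              // Origin URL or internal registry path
  license: string;             // SPDX license identifier, e.g. 'CC-BY-4.0'
  version: string;             // Semantic version or hash of the snapshot
  recordCount: number;         // Number of records/samples
  collectedAt: string;         // ISO 8601 timestamp of data collection
  consentType: 'none' | 'opt-in' | 'opt-out' | 'contractual';
  piiCategories?: string[];    // PII types present, if applicable
  biasAudit?: string;          // Reference to a bias assessment report
  retentionPolicy?: string;    // Retention period and deletion rules
}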
DRD automatically builds a directed acyclic graph (DAG) of model dependencies, showing the relationships between foundation models, fine-tuned variants, distilled models, and their datasets.

{
  "model": "content-guardian-v3",
  "method": "fine-tuning",
  "parent": {
    "model": "llama-3-70b",
    "method": "supervised",
    "datasets": [
      { "name": "CommonCrawl-2025", "license": "CC-BY-4.0" },
      { "name": "Wikipedia-EN-2025", "license": "CC-BY-SA-4.0" }
    ]
  },
  "datasets": [
    { "name": "drd-content-safety-v2", "license": "proprietary", "consentType": "contractual" },
    { "name": "drd-policy-violations", "license": "proprietary", "consentType": "opt-in" }
  ],
  "children": [
    {
      "model": "content-guardian-v3-lite",
      "method": "distillation"
    }
  ]
}

DRD validates license compatibility across the dependency graph. If a parent model uses a copyleft license, all derivatives are flagged for compliance review.
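To illustrate the copyleft rule, here is a hedged sketch of an ancestry walk. The license list, node shape, and function below are assumptions made for illustration, not DRD's actual validation logic:

// Illustrative only: flag a model if any ancestor carries a copyleft license.
// The license set and LineageNode shape are assumptions, not DRD internals.
const COPYLEFT = new Set(['GPL-3.0-only', 'AGPL-3.0-only', 'CC-BY-SA-4.0']);

interface LineageNode {
  model: string;
  licenses: string[];           // Licenses of the model and its datasets
  parent?: LineageNode;
}

function needsComplianceReview(node: LineageNode): boolean {
  // Walk up the parent chain; any copyleft license upstream flags the derivative.
  for (let cur: LineageNode | undefined = node; cur; cur = cur.parent) {
    if (cur.licenses.some((l) => COPYLEFT.has(l))) return true;
  }
  return false;
}

Under this sketch, content-guardian-v3 would be flagged because its parent's Wikipedia-EN-2025 dataset carries CC-BY-SA-4.0, a share-alike license.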
Register a lineage record with the DRD TypeScript SDK:

import { DRD } from '@drd/sdk';

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

// Register a lineage record for a fine-tuned model.
const record = await drd.modelLineage.createRecord({
  modelId: 'custom-classifier-v3',
  modelName: 'Custom Content Classifier',
  version: '3.0.0',
  parentModelId: 'base-model-v2-id',
  trainingMethod: 'fine_tuning',
  datasetSize: 50000,
  license: 'Apache-2.0',
  trainingDataSources: {
    datasets: ['licensed-dataset-a', 'internal-dataset-b'],
    consentVerified: true,
  },
});

Create audits to verify model compliance with licensing, data usage, and regulatory requirements.
await drd.modelLineage.createAudit({
  recordId: record.id,
  auditType: 'compliance',
  findings: {
    dataSourcesVerified: true,
    licenseCompatible: true,
    consentChainValid: true,
  },
  complianceStatus: 'compliant',
  nextAuditDue: '2026-06-01T00:00:00Z',
});

Lineage records can also be created directly via the REST API:

curl -X POST https://api.drd.io/v1/models/lineage \
-H "Authorization: Bearer drd_ws_sk_live_Abc123..." \
-H "Content-Type: application/json" \
-d '{
"modelId": "mdl_01JM7XBN4RTYP",
"name": "content-guardian-v3",
"version": "3.0.0",
"method": "fine-tuning",
"parentModelId": "mdl_00BASE70BLLAMA",
"datasets": [
{
"name": "drd-content-safety-v2",
"source": "s3://drd-datasets/content-safety-v2",
"license": "proprietary",
"version": "2.1.0",
"recordCount": 2500000,
"consentType": "contractual"
}
],
"hyperparameters": {
"learningRate": 2e-5,
"epochs": 3,
"batchSize": 32,
"warmupSteps": 500
},
"metrics": {
"accuracy": 0.96,
"f1": 0.94,
"latencyP99Ms": 45
}
}'
// Response
{
  "ok": true,
  "data": {
    "id": "lin_01JM7XBN4RTYP",
    "modelId": "mdl_01JM7XBN4RTYP",
    "name": "content-guardian-v3",
    "method": "fine-tuning",
    "parentModelId": "mdl_00BASE70BLLAMA",
    "datasetsCount": 1,
    "createdAt": "2026-02-14T12:00:00Z",
    "signedHash": "sha256:a1b2c3d4..."
  }
}

Retrieve the dependency graph for a model, up to a chosen depth:

curl "https://api.drd.io/v1/models/lineage/mdl_01JM7XBN4RTYP/graph?depth=3" \
  -H "Authorization: Bearer drd_ws_sk_live_Abc123..."
// Response
{
  "ok": true,
  "data": {
    "root": "mdl_01JM7XBN4RTYP",
    "nodes": [
      { "id": "mdl_01JM7XBN4RTYP", "name": "content-guardian-v3", "method": "fine-tuning" },
      { "id": "mdl_00BASE70BLLAMA", "name": "llama-3-70b", "method": "supervised" },
      { "id": "mdl_02LITE7XBN4RT", "name": "content-guardian-v3-lite", "method": "distillation" }
    ],
    "edges": [
      { "from": "mdl_00BASE70BLLAMA", "to": "mdl_01JM7XBN4RTYP", "relation": "parent" },
      { "from": "mdl_01JM7XBN4RTYP", "to": "mdl_02LITE7XBN4RT", "relation": "parent" }
    ]
  }
}

Register and query model lineage with the DRD TypeScript SDK.
import { DRD } from '@drd/sdk';

const drd = new DRD({ apiKey: process.env.DRD_API_KEY });

// Register a new model lineage record
const lineage = await drd.modelLineage.createRecord({
  modelId: 'mdl_01JM7XBN4RTYP',
  modelName: 'content-guardian-v3',
  version: '3.0.0',
  parentModelId: 'mdl_00BASE70BLLAMA',
  trainingMethod: 'fine_tuning',
  datasets: [{
    name: 'drd-content-safety-v2',
    source: 's3://drd-datasets/content-safety-v2',
    license: 'proprietary',
    recordCount: 2_500_000,
    consentType: 'contractual',
  }],
});

// Query the dependency graph
const graph = await drd.modelLineage.graph(lineage.modelId, { depth: 3 });
console.log(`Nodes: ${graph.nodes.length}, Edges: ${graph.edges.length}`);

// Check license compatibility
const licenseCheck = await drd.modelLineage.checkLicenses(lineage.modelId);
if (licenseCheck.conflicts.length > 0) {
  console.warn('License conflicts found:', licenseCheck.conflicts);
}

// Get full audit trail
const trail = await drd.modelLineage.auditTrail(lineage.modelId);
for (const entry of trail) {
  console.log(`${entry.timestamp}: ${entry.action} by ${entry.actor}`);
}
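As a closing usage sketch, the license check and audit trail calls above can be combined into a simple pre-deployment gate. The assertDeployable helper and its failure rules are assumptions about how a team might wire this up, not a built-in DRD feature:

// Hypothetical pre-deployment gate built from the SDK calls shown above.
async function assertDeployable(modelId: string): Promise<void> {
  const licenseCheck = await drd.modelLineage.checkLicenses(modelId);
  if (licenseCheck.conflicts.length > 0) {
    throw new Error(`License conflicts: ${JSON.stringify(licenseCheck.conflicts)}`);
  }

  // Assumption: an empty trail means the model has never been audited.
  const trail = await drd.modelLineage.auditTrail(modelId);
  if (trail.length === 0) {
    throw new Error('No audit trail found; run a compliance audit first.');
  }
}

await assertDeployable(lineage.modelId);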