Audit Trail

Every edit in Chronicle is recorded in an immutable, verifiable audit trail powered by vest-node. This provides cryptographic proof of who changed what, when.

How It Works

  1. Every CRDT operation passes through the relay
  2. The relay forwards a signed copy to vest-node
  3. vest-node appends the operation to a hash chain (each entry references the previous hash)
  4. The resulting ledger is queryable and exportable
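The append step above can be sketched in a few lines. This is a hypothetical simplification, not vest-node's actual implementation: each new entry's hash covers the previous entry's hash plus the operation payload, so altering any earlier entry invalidates every later one.

```typescript
import { createHash } from 'crypto';

// Minimal hash-chain entry (illustrative; real entries carry more fields)
interface ChainEntry {
  payload: string;
  previousHash: string;
  hash: string;
}

// Append an operation: hash the previous entry's hash together with the payload
function appendEntry(chain: ChainEntry[], payload: string): ChainEntry[] {
  const previousHash =
    chain.length > 0 ? chain[chain.length - 1].hash : '0'.repeat(64); // genesis
  const hash = createHash('sha256')
    .update(previousHash + payload)
    .digest('hex');
  return [...chain, { payload, previousHash, hash }];
}
```

Because each hash depends on its predecessor, the chain as a whole acts as the tamper-evidence mechanism described above.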

Querying the Audit Trail

By Document

import { AuditClient } from '@chronicle-hq/editor';

const audit = new AuditClient({ vestUrl: 'http://localhost:5000' });

// Get full history for a document
const history = await audit.getDocumentHistory('doc-abc-123');
// Returns: AuditEntry[]

By User

// Get all edits by a specific user
const userEdits = await audit.getUserActivity('user-1', {
  documentId: 'doc-abc-123', // optional -- omit for cross-document
  from: new Date('2025-01-01'),
  to: new Date('2025-12-31'),
});

By Time Range

// Get edits within a time window
const recentEdits = await audit.getTimeRange('doc-abc-123', {
  from: new Date(Date.now() - 24 * 60 * 60 * 1000), // last 24 hours
  to: new Date(),
});

Audit Entry Structure

Each audit entry contains:
interface AuditEntry {
  /** Unique entry identifier */
  id: string;
  /** Document this entry belongs to */
  documentId: string;
  /** User who made the change */
  userId: string;
  /** Display name at time of edit */
  displayName: string;
  /** Timestamp of the operation */
  timestamp: Date;
  /** Type of operation */
  operationType: 'insert' | 'delete' | 'format' | 'structural';
  /** Affected content range */
  range: { from: number; to: number };
  /** Hash of this entry */
  hash: string;
  /** Hash of the previous entry in the chain */
  previousHash: string;
  /** Digital signature */
  signature: string;
}

Verification

The hash chain can be verified independently:
// Verify the integrity of the audit trail
const verification = await audit.verify('doc-abc-123');

console.log(verification);
// {
//   valid: true,
//   entries: 1847,
//   firstEntry: '2025-01-15T10:00:00Z',
//   lastEntry: '2025-03-14T16:30:00Z',
//   chainIntegrity: 'intact'
// }
If any entry has been tampered with, the hash chain breaks at that point: verification returns valid: false along with details about the first corrupted entry.
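The core of verification can be illustrated with a standalone linkage check. This sketch only confirms that each entry references its predecessor's hash; the real verify() presumably also recomputes hashes and checks signatures.

```typescript
// Minimal shape needed for a linkage check (subset of a full audit entry)
interface LinkedEntry {
  hash: string;
  previousHash: string;
}

// Walk the chain and report the index of the first broken link, if any
function checkLinkage(entries: LinkedEntry[]): {
  valid: boolean;
  firstBroken?: number;
} {
  for (let i = 1; i < entries.length; i++) {
    if (entries[i].previousHash !== entries[i - 1].hash) {
      return { valid: false, firstBroken: i };
    }
  }
  return { valid: true };
}
```

A check like this can run entirely offline against an exported ledger, which is what makes the trail independently verifiable.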

Compliance Export

For regulatory or legal requirements, export the full audit trail:
// Export as JSON
const jsonExport = await audit.export('doc-abc-123', { format: 'json' });

// Export as CSV
const csvExport = await audit.export('doc-abc-123', { format: 'csv' });

// Export with full operation payloads (larger file)
const detailedExport = await audit.export('doc-abc-123', {
  format: 'json',
  includePayloads: true,
});

Integration with Timeline

The audit trail is the data source for Chronicle’s timeline navigation feature. When a user scrubs through the timeline, the editor:
  1. Queries the audit trail for the target timestamp
  2. Reconstructs document state by replaying operations up to that point
  3. Renders the historical state in read-only mode
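The replay step can be sketched for simple linear operations. Chronicle actually replays CRDT operations; the Op shape below is a hypothetical stand-in used only to show the reconstruction idea.

```typescript
// Hypothetical linear operations (not Chronicle's real CRDT format)
type Op =
  | { timestamp: number; type: 'insert'; pos: number; text: string }
  | { timestamp: number; type: 'delete'; pos: number; length: number };

// Rebuild document state by replaying all operations up to a target timestamp
function stateAt(ops: Op[], target: number): string {
  let doc = '';
  const applicable = ops
    .filter((op) => op.timestamp <= target)
    .sort((a, b) => a.timestamp - b.timestamp);
  for (const op of applicable) {
    doc =
      op.type === 'insert'
        ? doc.slice(0, op.pos) + op.text + doc.slice(op.pos)
        : doc.slice(0, op.pos) + doc.slice(op.pos + op.length);
  }
  return doc;
}
```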
See Timeline Navigation for the user-facing experience.

Retention & Storage

| Setting | Default | Description |
| --- | --- | --- |
| retention.days | unlimited | How long to keep audit entries |
| compaction.enabled | false | Whether to compact old entries |
| compaction.threshold | 10000 | Entries before compaction triggers |
| storage.backend | postgres | Storage backend (postgres, s3) |
The audit trail grows proportionally to edit volume. For a document with 100 edits per day:
  • ~3,000 entries per month
  • ~36,000 entries per year
  • ~2MB of storage per year (without payloads)
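The growth figures above are straight multiplication, which a hypothetical capacity-planning helper makes explicit:

```typescript
// Estimate audit-trail entry counts from daily edit volume
// (matches the worked example above: 100 edits/day ≈ 3,000/month, ~36,000/year)
function estimateEntries(editsPerDay: number): {
  perMonth: number;
  perYear: number;
} {
  return {
    perMonth: editsPerDay * 30,
    perYear: editsPerDay * 365,
  };
}
```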
For high-volume documents, consider:
  1. Enabling compaction to merge old entries
  2. Using S3 backend for cold storage
  3. Setting a retention policy for documents without compliance requirements

Best Practices

  1. Never disable the audit trail in production — it’s the foundation of Chronicle’s trust model
  2. Export regularly for compliance-sensitive documents
  3. Verify periodically — run audit.verify() as part of your health checks
  4. Monitor storage — audit data grows linearly; plan capacity accordingly
  5. Use time-range queries instead of full history for UI responsiveness