Overview
The Veritas API provides programmatic access to SO1’s prompt engineering infrastructure. Manage prompts, design multi-step chains, curate reusable fragments, and test prompt variations at scale.
Base URL: https://api.so1.io/v1/veritas
Veritas is SO1’s centralized prompt library that powers all agent LLM interactions.
API Architecture
The Veritas API is organized into three main categories: prompts, chains, and fragments.
What is Veritas?
Veritas is SO1’s prompt library and management system, providing:
Centralized Prompt Storage: Single source of truth for all agent prompts
Version Control: Track prompt evolution and A/B test variations
Chain Architecture: Design complex multi-step LLM workflows
Fragment System: Reusable prompt components (roles, constraints, examples)
Testing Infrastructure: Validate prompt quality before deployment
Performance Analytics: Track prompt effectiveness and token usage
Integration: All 20 SO1 agents use Veritas prompts. Changes to prompts propagate to agents automatically.
Quick Start
Create a Prompt
curl -X POST https://api.so1.io/v1/veritas/prompts \
  -H "Authorization: Bearer so1_key_abc123xyz" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "code-review-prompt",
    "description": "Reviews code for quality, security, and best practices",
    "template": "You are an expert code reviewer.\n\nReview the following code:\n\n```{{language}}\n{{code}}\n```\n\nFocus on:\n- Code quality and readability\n- Security vulnerabilities\n- Performance issues\n- Best practices\n\nProvide specific, actionable feedback.",
    "variables": ["language", "code"],
    "metadata": {
      "domain": "engineering",
      "version": "1.0.0",
      "tags": ["code-review", "quality", "security"]
    }
  }'
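The `{{variable}}` placeholders in the template are filled in from the `variables` you supply at test or run time. As a minimal client-side sketch (the substitution rules here are assumed, not taken from the Veritas reference), rendering could look like:

```typescript
// Hypothetical sketch of {{variable}} substitution in a prompt template.
// The exact escaping and error behavior of Veritas is an assumption here.
function renderTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name) => {
    if (!(name in variables)) {
      throw new Error(`Missing value for template variable "${name}"`);
    }
    return variables[name];
  });
}

const rendered = renderTemplate(
  "Review this {{language}} code:\n{{code}}",
  { language: "typescript", code: "function add(a, b) { return a + b }" }
);
```

A missing variable throws rather than silently leaving the placeholder in place, which surfaces template/variable mismatches early.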
Test a Prompt
curl -X POST https://api.so1.io/v1/veritas/prompts/code-review-prompt/test \
  -H "Authorization: Bearer so1_key_abc123xyz" \
  -H "Content-Type: application/json" \
  -d '{
    "variables": {
      "language": "typescript",
      "code": "function add(a, b) { return a + b }"
    },
    "model": "claude-sonnet-4",
    "temperature": 0.7
  }'
Create a Chain
curl -X POST https://api.so1.io/v1/veritas/chains \
  -H "Authorization: Bearer so1_key_abc123xyz" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "feature-implementation-chain",
    "description": "Multi-step chain for implementing a feature end-to-end",
    "steps": [
      {
        "order": 1,
        "promptId": "analyze-requirements",
        "outputKey": "requirements"
      },
      {
        "order": 2,
        "promptId": "design-architecture",
        "inputMapping": { "requirements": "step_1.requirements" },
        "outputKey": "design"
      },
      {
        "order": 3,
        "promptId": "generate-code",
        "inputMapping": {
          "requirements": "step_1.requirements",
          "design": "step_2.design"
        },
        "outputKey": "code"
      }
    ]
  }'
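Each step stores its result under its `outputKey`, and later steps pull those results in via `inputMapping` paths of the form `step_N.key`. A minimal sketch of that resolution (the path semantics are assumed from the example above, not from the API reference):

```typescript
// Assumed semantics: resolve "step_N.key" references in a chain step's
// inputMapping against outputs collected from earlier steps.
type StepOutputs = Record<string, Record<string, unknown>>;

function resolveInputs(
  inputMapping: Record<string, string>,
  outputs: StepOutputs
): Record<string, unknown> {
  const resolved: Record<string, unknown> = {};
  for (const [variable, path] of Object.entries(inputMapping)) {
    const [stepKey, outputKey] = path.split(".");
    resolved[variable] = outputs[stepKey]?.[outputKey];
  }
  return resolved;
}

const outputs: StepOutputs = {
  step_1: { requirements: "Users can reset passwords" },
  step_2: { design: "POST /password-reset endpoint" },
};
const inputs = resolveInputs(
  { requirements: "step_1.requirements", design: "step_2.design" },
  outputs
);
```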
Authentication
All Veritas API requests require a valid API key:
Authorization: Bearer so1_key_abc123xyz789
See Authentication for API key management.
Common Use Cases
Prompt Engineering Workflow
Create Initial Prompt
Define prompt template with variables and metadata
Test Variations
Run A/B tests with different temperatures, models, and phrasings
Analyze Performance
Review output quality, token usage, and latency
Iterate and Refine
Update prompt based on test results
Deploy to Agents
Associate refined prompt with agent workflows
Multi-Agent Chain Design
// Design a chain that coordinates multiple agents
const chain = await fetch ( 'https://api.so1.io/v1/veritas/chains' , {
method: 'POST' ,
headers: {
'Authorization' : `Bearer ${ API_KEY } ` ,
'Content-Type' : 'application/json'
},
body: JSON . stringify ({
name: 'full-stack-feature-chain' ,
steps: [
{
order: 1 ,
promptId: 'requirement-analysis' ,
agentId: 'prompt-refiner' ,
outputKey: 'analyzed_requirements'
},
{
order: 2 ,
promptId: 'backend-design' ,
agentId: 'hono-backend' ,
inputMapping: { requirements: 'step_1.analyzed_requirements' },
outputKey: 'backend_spec'
},
{
order: 3 ,
promptId: 'frontend-design' ,
agentId: 'nextjs-frontend' ,
inputMapping: { requirements: 'step_1.analyzed_requirements' },
outputKey: 'frontend_spec' ,
parallel: true // Run in parallel with backend
},
{
order: 4 ,
promptId: 'implementation' ,
agentId: 'factory-orchestrator' ,
inputMapping: {
backend: 'step_2.backend_spec' ,
frontend: 'step_3.frontend_spec'
},
outputKey: 'implementation_result'
}
]
})
});
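One plausible reading of the `parallel` flag (an assumption here, not documented API behavior) is that a step marked `parallel: true` joins the same execution wave as the step before it, and waves run sequentially. A sketch of that scheduling:

```typescript
// Sketch (assumed semantics): group chain steps into sequential "waves";
// a step with parallel: true runs alongside the step that precedes it.
interface ChainStep {
  order: number;
  promptId: string;
  parallel?: boolean;
}

function planWaves(steps: ChainStep[]): ChainStep[][] {
  const waves: ChainStep[][] = [];
  for (const step of [...steps].sort((a, b) => a.order - b.order)) {
    if (step.parallel && waves.length > 0) {
      waves[waves.length - 1].push(step); // run concurrently with previous step
    } else {
      waves.push([step]); // start a new sequential wave
    }
  }
  return waves;
}

const waves = planWaves([
  { order: 1, promptId: 'requirement-analysis' },
  { order: 2, promptId: 'backend-design' },
  { order: 3, promptId: 'frontend-design', parallel: true },
  { order: 4, promptId: 'implementation' },
]);
// waves[1] contains both backend-design and frontend-design
```

An executor could then run each wave with `Promise.all` before moving to the next.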
Fragment Reuse
// Create reusable fragments for consistent agent behavior
const fragments = [
  {
    name: 'expert-role-engineering',
    type: 'role',
    content: 'You are a senior software engineer with expertise in TypeScript, React, and Node.js.'
  },
  {
    name: 'code-quality-constraints',
    type: 'constraints',
    content: '- Follow TypeScript best practices\n- Use functional programming patterns\n- Include comprehensive error handling\n- Add JSDoc comments for all functions'
  },
  {
    name: 'api-endpoint-example',
    type: 'example',
    content: '```typescript\napp.get("/api/users/:id", async (c) => {\n  const id = c.req.param("id");\n  const user = await db.user.findUnique({ where: { id } });\n  return c.json(user);\n});\n```'
  }
];

// Compose prompt from fragments
const prompt = `
{{fragment:expert-role-engineering}}
{{fragment:code-quality-constraints}}

Example:
{{fragment:api-endpoint-example}}

Now implement: {{task_description}}
`;
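Composition replaces each `{{fragment:name}}` marker with the fragment's content, leaving ordinary `{{variable}}` placeholders for later substitution. A minimal sketch of that expansion step (the marker syntax is taken from the example above; the expansion logic is an assumption):

```typescript
// Assumed expansion rule: replace {{fragment:name}} markers with fragment
// content, leaving plain {{variable}} placeholders untouched.
interface Fragment {
  name: string;
  content: string;
}

function expandFragments(template: string, fragments: Fragment[]): string {
  const byName = new Map(fragments.map((f) => [f.name, f.content]));
  return template.replace(/\{\{fragment:([\w-]+)\}\}/g, (_match, name) => {
    const content = byName.get(name);
    if (content === undefined) throw new Error(`Unknown fragment "${name}"`);
    return content;
  });
}

const expanded = expandFragments(
  "{{fragment:expert-role-engineering}}\n\nNow implement: {{task_description}}",
  [{ name: "expert-role-engineering", content: "You are a senior software engineer." }]
);
```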
Response Format
All Veritas API responses follow a consistent structure:
Success Response
{
  "success": true,
  "data": {
    "promptId": "code-review-prompt",
    "version": 2,
    "status": "active"
  },
  "meta": {
    "requestId": "req-abc123",
    "timestamp": "2024-03-10T15:30:00.123Z"
  }
}
Error Response
{
  "success": false,
  "error": {
    "code": "PROMPT_NOT_FOUND",
    "message": "Prompt 'invalid-prompt' does not exist",
    "details": {
      "promptId": "invalid-prompt"
    }
  },
  "meta": {
    "requestId": "req-abc123",
    "timestamp": "2024-03-10T15:30:00.123Z"
  }
}
Rate Limits
Veritas API endpoints have tier-based rate limits:
| Tier | Requests/Min | Prompt Tests/Min |
|--------------|--------------|------------------|
| Free | 10 | 5 |
| Starter | 60 | 20 |
| Professional | 300 | 50 |
| Enterprise | 1,000 | 200 |
Prompt Testing: Uses LLM quotas and has additional rate limits to prevent excessive API costs.
See Rate Limits for detailed policies.
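On the client side, the per-minute limits above can be respected with a simple fixed-window counter before issuing requests. This is an illustrative sketch only (the server enforces the real limits, and the window semantics here are an assumption):

```typescript
// Fixed-window client-side rate limiter: allow at most limitPerMin
// calls per 60-second window, resetting the counter when a new window starts.
class RateLimiter {
  private count = 0;
  private windowStart = -Infinity;

  constructor(
    private limitPerMin: number,
    private now: () => number = Date.now
  ) {}

  tryAcquire(): boolean {
    const t = this.now();
    if (t - this.windowStart >= 60_000) {
      this.windowStart = t;
      this.count = 0;
    }
    if (this.count >= this.limitPerMin) return false;
    this.count++;
    return true;
  }
}

const limiter = new RateLimiter(10); // Free tier: 10 requests/min
```

Check `tryAcquire()` before each call and back off (or queue) when it returns `false`.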
SDKs and Libraries
Official SDKs include Veritas client:
TypeScript/Node.js
Python
Go
import { SO1 } from '@so1/sdk';

const client = new SO1({ apiKey: process.env.SO1_API_KEY });

// Create prompt
const prompt = await client.veritas.prompts.create({
  name: 'code-review-prompt',
  template: 'Review this code: {{code}}',
  variables: ['code']
});

// Test prompt
const result = await client.veritas.prompts.test('code-review-prompt', {
  variables: { code: 'function add(a, b) { return a + b }' },
  model: 'claude-sonnet-4'
});

console.log('Prompt output:', result.output);
Best Practices
1. Use Descriptive Variable Names
# ❌ Bad
template: "Analyze {{x}} and generate {{y}}"
# ✅ Good
template: "Analyze {{source_code}} and generate {{test_cases}}"
Clear variable names improve prompt maintainability and agent integration.
2. Version Prompts Systematically
Use semantic versioning (1.0.0, 1.1.0, 2.0.0)
Document changes in metadata
Test new versions before promoting to production
Maintain backward compatibility when possible
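Under semantic versioning, a major-version bump signals a breaking change; comparing versions numerically (rather than as strings, where "10.0.0" < "2.0.0") avoids ordering mistakes when promoting prompts. A small helper sketch:

```typescript
// Compare "major.minor.patch" version strings numerically.
// Returns < 0 if a < b, 0 if equal, > 0 if a > b.
function compareSemver(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

// A major-version increase indicates a breaking prompt change.
const isBreakingChange = (from: string, to: string): boolean =>
  Number(to.split(".")[0]) > Number(from.split(".")[0]);
```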
3. Test Prompts Thoroughly
// Test with edge cases
const testCases = [
  { code: 'function add(a, b) { return a + b }' }, // Happy path
  { code: '' }, // Empty input
  { code: 'import * as fs from "fs";\nfs.readFileSync("sensitive.txt")' }, // Security concern
  { code: '// Very long file with 10,000 lines...' } // Token limits
];

for (const testCase of testCases) {
  const result = await client.veritas.prompts.test('code-review-prompt', {
    variables: testCase
  });
  console.log('Test result:', result.output);
}
4. Leverage Fragments for Consistency
Create reusable fragments for:
Roles: Expert personas (e.g., “senior engineer”, “security auditor”)
Constraints: Consistent guidelines across prompts
Examples: Few-shot learning patterns
Output Formats: Structured response templates
API Endpoints