Overview
The Prompts API manages individual LLM prompts with versioning, testing, and performance analytics.
Base URL: https://api.so1.io/v1/veritas/prompts
Create Prompt
Endpoint
POST /v1/veritas/prompts
Request Body
name (string, required)
Unique prompt identifier (slug format: my-prompt-name)
description (string, required)
Prompt purpose and usage description
template (string, required)
Prompt template with {{variable}} placeholders
variables (array of strings, required)
List of variable names used in the template
metadata (object, optional)
Additional metadata (domain, version, tags, agentId)
Example Request
curl -X POST https://api.so1.io/v1/veritas/prompts \
-H "Authorization: Bearer so1_key_abc123xyz" \
-H "Content-Type: application/json" \
-d '{
"name": "api-endpoint-generator",
"description": "Generates Hono.js API endpoints with validation and error handling",
"template": "You are an expert backend engineer.\n\nGenerate a Hono.js API endpoint for:\n- Method: {{method}}\n- Path: {{path}}\n- Description: {{description}}\n\nInclude:\n1. Input validation using Zod\n2. Comprehensive error handling\n3. TypeScript types\n4. JSDoc comments\n\nProvide only the code, no explanations.",
"variables": ["method", "path", "description"],
"metadata": {
"domain": "engineering",
"version": "1.0.0",
"tags": ["backend", "hono", "api"],
"agentId": "hono-backend"
}
}'
Response
{
"success": true,
"data": {
"promptId": "api-endpoint-generator",
"version": 1,
"status": "draft",
"createdAt": "2024-03-10T15:30:00Z"
}
}
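The {{variable}} placeholders in template are substituted with the values supplied at test or execution time. As an illustration only, a minimal client-side sketch of that substitution (the renderTemplate helper is ours, not part of the API):

```typescript
// Substitute {{variable}} placeholders with supplied values.
// The declared-variables check mirrors the "variables" array
// registered when the prompt was created.
function renderTemplate(
  template: string,
  variables: string[],
  values: Record<string, string>
): string {
  for (const name of variables) {
    if (!(name in values)) {
      throw new Error(`Missing value for variable: ${name}`);
    }
  }
  return template.replace(
    /\{\{(\w+)\}\}/g,
    (_match: string, name: string) => values[name] ?? ""
  );
}

const rendered = renderTemplate(
  "Generate a Hono.js API endpoint for:\n- Method: {{method}}\n- Path: {{path}}",
  ["method", "path"],
  { method: "POST", path: "/api/users" }
);
// rendered now contains "Method: POST" and "Path: /api/users"
```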
Get Prompt
Endpoint
GET /v1/veritas/prompts/{promptId}
Query Parameters
version (integer, optional)
Specific version to retrieve (defaults to latest)
Response
{
"success": true,
"data": {
"promptId": "api-endpoint-generator",
"name": "API Endpoint Generator",
"description": "Generates Hono.js API endpoints",
"template": "You are an expert backend engineer...",
"variables": ["method", "path", "description"],
"version": 1,
"status": "active",
"metadata": {
"domain": "engineering",
"agentId": "hono-backend"
},
"performance": {
"totalTests": 47,
"averageTokens": 342,
"averageLatency": 2.4,
"successRate": 97.9
},
"createdAt": "2024-03-10T15:30:00Z",
"updatedAt": "2024-03-10T15:30:00Z"
}
}
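The data object above can be modeled client-side. A sketch of a TypeScript type built from the example response (field names are taken from the response; the PromptRecord name and unit comments are our assumptions):

```typescript
interface PromptPerformance {
  totalTests: number;
  averageTokens: number;
  averageLatency: number; // assumed to be seconds
  successRate: number;    // percent
}

interface PromptRecord {
  promptId: string;
  name: string;
  description: string;
  template: string;
  variables: string[];
  version: number;
  status: "draft" | "active" | "archived";
  metadata: { domain?: string; agentId?: string };
  performance: PromptPerformance;
  createdAt: string; // ISO 8601
  updatedAt: string;
}

// Typing the example response payload:
const record: PromptRecord = {
  promptId: "api-endpoint-generator",
  name: "API Endpoint Generator",
  description: "Generates Hono.js API endpoints",
  template: "You are an expert backend engineer...",
  variables: ["method", "path", "description"],
  version: 1,
  status: "active",
  metadata: { domain: "engineering", agentId: "hono-backend" },
  performance: { totalTests: 47, averageTokens: 342, averageLatency: 2.4, successRate: 97.9 },
  createdAt: "2024-03-10T15:30:00Z",
  updatedAt: "2024-03-10T15:30:00Z",
};
const successRate = record.performance.successRate; // 97.9
```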
Test Prompt
Executes a prompt with test variables and returns the LLM response.
Endpoint
POST /v1/veritas/prompts/{promptId}/test
Request Body
variables (object, required)
Variable values matching the prompt template
model (string, default: "claude-sonnet-4")
LLM model: claude-sonnet-4, claude-opus, gpt-4
temperature (number, optional)
Sampling temperature, as shown in the example request
Example Request
curl -X POST https://api.so1.io/v1/veritas/prompts/api-endpoint-generator/test \
-H "Authorization: Bearer so1_key_abc123xyz" \
-H "Content-Type: application/json" \
-d '{
"variables": {
"method": "POST",
"path": "/api/users",
"description": "Create new user with email and name"
},
"model": "claude-sonnet-4",
"temperature": 0.7
}'
Response
{
"success": true,
"data": {
"output": "import { Hono } from 'hono';\nimport { z } from 'zod';\n\nconst userSchema = z.object({\n email: z.string().email(),\n name: z.string().min(1)\n});\n\napp.post('/api/users', async (c) => {\n try {\n const body = await c.req.json();\n const validated = userSchema.parse(body);\n // Create user logic...\n } catch (error) {\n return c.json({ error: 'Invalid input' }, 400);\n }\n});",
"metadata": {
"model": "claude-sonnet-4",
"tokensUsed": 287,
"latency": 2.3,
"temperature": 0.7
},
"testId": "test-xyz789"
}
}
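A client typically assembles the test request body from variable values plus model settings. A hedged sketch of that assembly (buildTestBody is our own helper; the temperature default of 0.7 is taken from the example request, not a documented server default):

```typescript
// Build the JSON body for POST /v1/veritas/prompts/{promptId}/test.
function buildTestBody(
  variables: Record<string, string>,
  model = "claude-sonnet-4",
  temperature = 0.7
): string {
  return JSON.stringify({ variables, model, temperature });
}

const body = buildTestBody({
  method: "POST",
  path: "/api/users",
  description: "Create new user with email and name",
});
// body round-trips to the example request payload above
```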
Update Prompt
Endpoint
PUT /v1/veritas/prompts/{promptId}
Request Body
All fields are optional; only the fields you provide are updated. A versioning option controls whether the update creates a new prompt version or edits the current version in place.
Response
{
"success": true,
"data": {
"promptId": "api-endpoint-generator",
"version": 2,
"updatedAt": "2024-03-10T16:00:00Z"
}
}
List Prompts
Endpoint
GET /v1/veritas/prompts
Query Parameters
status (string, optional)
Filter by status: draft, active, archived
Response
{
"success": true,
"data": {
"prompts": [
{
"promptId": "api-endpoint-generator",
"name": "API Endpoint Generator",
"domain": "engineering",
"version": 2,
"status": "active",
"testsRun": 47,
"successRate": 97.9
}
// ... more prompts
],
"pagination": {
"total": 127,
"limit": 20,
"offset": 0,
"hasMore": true
}
}
}
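The pagination object above implies an offset-based paging scheme. A sketch of walking all pages (collectAll and the stubbed fetchPage are ours; a real client would issue a GET per page with limit/offset query parameters, which we assume exist based on the pagination fields):

```typescript
interface Page<T> {
  prompts: T[];
  pagination: { total: number; limit: number; offset: number; hasMore: boolean };
}

// Walk offset-based pages until hasMore is false.
function collectAll<T>(
  fetchPage: (limit: number, offset: number) => Page<T>,
  limit = 20
): T[] {
  const items: T[] = [];
  let offset = 0;
  for (;;) {
    const page = fetchPage(limit, offset);
    items.push(...page.prompts);
    if (!page.pagination.hasMore) break;
    offset += limit;
  }
  return items;
}

// Demo with two stubbed pages (2 items, then 1).
const pages: Page<string>[] = [
  { prompts: ["a", "b"], pagination: { total: 3, limit: 2, offset: 0, hasMore: true } },
  { prompts: ["c"], pagination: { total: 3, limit: 2, offset: 2, hasMore: false } },
];
const all = collectAll((_limit, offset) => pages[offset / 2], 2);
// all → ["a", "b", "c"]
```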
Delete Prompt
Endpoint
DELETE /v1/veritas/prompts/{promptId}
Query Parameters
version (integer, optional)
Delete a specific version (defaults to all versions)
Response
{
"success": true,
"data": {
"promptId": "api-endpoint-generator",
"versionsDeleted": 2
}
}
A/B Test Prompts
Runs two prompt variations against the same test cases and compares their performance.
Endpoint
POST /v1/veritas/prompts/ab-test
Request Body
promptA (object, required)
First prompt configuration (promptId or inline template)
promptB (object, required)
Second prompt configuration (promptId or inline template)
testCases (array, required)
Array of variable sets to run against both prompts
Example Request
curl -X POST https://api.so1.io/v1/veritas/prompts/ab-test \
-H "Authorization: Bearer so1_key_abc123xyz" \
-H "Content-Type: application/json" \
-d '{
"promptA": { "promptId": "api-endpoint-generator", "version": 1 },
"promptB": { "promptId": "api-endpoint-generator", "version": 2 },
"testCases": [
{ "method": "POST", "path": "/api/users", "description": "Create user" },
{ "method": "GET", "path": "/api/users/:id", "description": "Get user" },
{ "method": "PUT", "path": "/api/users/:id", "description": "Update user" }
]
}'
Response
{
"success": true,
"data": {
"testId": "abtest-xyz789",
"results": {
"promptA": {
"averageTokens": 287,
"averageLatency": 2.3,
"successfulTests": 3,
"failedTests": 0
},
"promptB": {
"averageTokens": 312,
"averageLatency": 2.1,
"successfulTests": 3,
"failedTests": 0
},
"recommendation": "promptB",
"reason": "Lower latency with acceptable token increase"
}
}
}
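The recommendation field is produced server-side; the API's actual ranking logic is not documented here. Purely as an illustration, a naive client-side heuristic over the two result objects might look like this (pickVariant and its tie-breaking order are our invention):

```typescript
interface VariantResult {
  averageTokens: number;
  averageLatency: number;
  successfulTests: number;
  failedTests: number;
}

// Naive heuristic: prefer fewer failures, then lower latency;
// token count only breaks remaining ties.
function pickVariant(a: VariantResult, b: VariantResult): "promptA" | "promptB" {
  if (a.failedTests !== b.failedTests) {
    return a.failedTests < b.failedTests ? "promptA" : "promptB";
  }
  if (a.averageLatency !== b.averageLatency) {
    return a.averageLatency < b.averageLatency ? "promptA" : "promptB";
  }
  return a.averageTokens <= b.averageTokens ? "promptA" : "promptB";
}

const winner = pickVariant(
  { averageTokens: 287, averageLatency: 2.3, successfulTests: 3, failedTests: 0 },
  { averageTokens: 312, averageLatency: 2.1, successfulTests: 3, failedTests: 0 }
);
// winner → "promptB" (equal failures, lower latency),
// matching the example response's recommendation
```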