Overview
This runbook covers operational procedures for managing the Veritas prompt library using SO1 prompt management agents. Veritas is the centralized prompt repository that powers all SO1 agents, ensuring consistency, quality, and versioning of AI prompts across the platform.
Purpose: Provide step-by-step instructions for creating, refining, and managing prompts in Veritas
Scope: Prompt engineering, chain architecture, fragment management, version control, quality assurance
Target Audience: Prompt engineers, AI engineers, agent developers
Prerequisites
Required Access
- Veritas prompt library access (GitHub: so1-io/veritas)
- Control Plane API access (CONTROL_PLANE_API_KEY)
- OpenAI API access (for prompt testing)
- Agent execution permissions
Required Tools
- OpenCode with prompt management agents installed
- Git CLI (for Veritas repository management)
- curl or API client
- JSON/YAML editor
- Prompt testing framework
Required Knowledge
- Understanding of prompt engineering principles
- Familiarity with chain-of-thought prompting
- Knowledge of SO1 agent architecture
- Basic understanding of LLM capabilities and limitations
Procedure 1: Refine Agent Prompt
Step 1: Identify Prompt for Refinement
Common triggers for prompt refinement:
- Agent producing inconsistent outputs
- Agent missing edge cases
- Agent not following instructions
- Agent verbosity issues (too long/short responses)
- New requirements or capabilities added
Step 2: Invoke Prompt Refiner Agent
Step 3: Review Refined Prompt
Step 4: Test Refined Prompt
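A refined prompt can be smoke-tested for consistency before deployment. The sketch below is illustrative: `run_prompt` stands in for whatever call actually executes the prompt against the model; the real testing framework's API is not shown here.

```python
def consistency_check(run_prompt, test_input, n_runs=5):
    """Run the same input several times and count distinct outputs;
    exactly one distinct output means the prompt is fully consistent."""
    outputs = [run_prompt(test_input) for _ in range(n_runs)]
    distinct = len(set(outputs))
    return {"runs": n_runs, "distinct_outputs": distinct,
            "consistent": distinct == 1}

# Deterministic stub standing in for the real model call:
report = consistency_check(lambda text: text.upper(), "summarize the incident")
```

Run the check against each test case in the regression suite; any input with more than one distinct output at your production temperature needs tighter instructions before the prompt ships.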
Step 5: Deploy Refined Prompt
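Deployment to Veritas is ultimately a Git operation: commit the refined prompt and tag it so it can be rolled back later. A minimal sketch, assuming the prompt lives as a file in a local clone of `so1-io/veritas` (the tag format and commit message are illustrative):

```python
import subprocess

def deploy_prompt(repo_dir, prompt_file, version):
    """Commit a refined prompt to the Veritas clone and tag the release
    so it can be rolled back later."""
    subprocess.run(["git", "add", prompt_file], cwd=repo_dir, check=True)
    subprocess.run(
        ["git", "commit", "-m", f"Refine {prompt_file} ({version})"],
        cwd=repo_dir, check=True)
    subprocess.run(["git", "tag", version], cwd=repo_dir, check=True)
```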
Procedure 2: Design Prompt Chain
Step 1: Define Chain Requirements
Identify when a chain is needed:
- Complex task requiring multiple reasoning steps
- Need for specialized sub-tasks
- Output from one prompt feeds into another
- Quality gating (verification/validation steps)
Step 2: Invoke Chain Architect Agent
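A chain produced by the Chain Architect can be represented as an ordered list of steps whose outputs feed later inputs. The field names below are illustrative, not the actual Veritas schema; the validator simply checks that every step's input is produced by an earlier step:

```python
# Hypothetical chain definition; field names are illustrative.
chain = {
    "chain_id": "incident-summary-chain",
    "steps": [
        {"id": "extract", "prompt": "extract-facts-v1", "output": "facts"},
        {"id": "summarize", "prompt": "summarize-facts-v1",
         "input": "facts", "output": "summary"},
        {"id": "verify", "prompt": "verify-summary-v1",
         "input": "summary", "quality_gate": {"min_score": 0.8}},
    ],
}

def validate_chain(chain):
    """Every step's input must be produced by an earlier step."""
    produced = set()
    for step in chain["steps"]:
        if "input" in step and step["input"] not in produced:
            return False
        if "output" in step:
            produced.add(step["output"])
    return True
```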
Step 3: Test Chain Execution
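Chain execution with quality gates and retries can be exercised locally with stub step functions before touching real prompts. A minimal sketch; the step/gate shape is assumed, not the production executor:

```python
def run_chain(steps, context, max_retries=1):
    """Execute steps in order; a step whose quality gate fails is
    retried before the whole chain aborts."""
    for step in steps:
        for _ in range(max_retries + 1):
            output, score = step["fn"](context)
            if score >= step.get("min_score", 0.0):
                context[step["name"]] = output
                break
        else:
            raise RuntimeError(f"quality gate failed at step {step['name']}")
    return context
```

Swapping the stub `fn` callables for real prompt invocations turns this into an end-to-end chain test; the same gate thresholds should apply in both environments.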
Step 4: Deploy Chain to Veritas
Procedure 3: Manage Prompt Fragments
Step 1: Identify Reusable Prompt Components
Common fragment types:
- System instructions: Role definitions, general behavior
- Best practices: Domain-specific guidelines
- Output formats: JSON schemas, markdown templates
- Error handling: How to handle edge cases
- Constraints: Token limits, formatting rules
Step 2: Create New Fragment with Fragment Curator
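A fragment is a small, reusable piece of prompt text with metadata describing where it applies. The record shape below is a sketch (field names assumed), together with a simple composition helper showing how fragments slot into a full prompt:

```python
# Hypothetical fragment record; the real Veritas schema may differ.
fragment = {
    "fragment_id": "output-format-json-v1",
    "type": "output_format",
    "content": "Respond with a single JSON object; no prose outside it.",
    "used_by": ["triage-agent", "summary-agent"],
}

def compose_prompt(base_instructions, fragments):
    """Assemble a full prompt from base instructions plus ordered fragments."""
    parts = [base_instructions] + [f["content"] for f in fragments]
    return "\n\n".join(parts)
```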
Step 3: Test Fragment Impact
Step 4: Deploy Fragment
Step 5: Monitor Fragment Performance
Procedure 4: Version Control and Rollback
Step 1: Tag Prompt Version
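Tags follow the semantic-versioning convention recommended under Best Practices. A small helper for computing the next tag; the `prompt-vMAJOR.MINOR.PATCH` tag format is illustrative:

```python
def bump_version(tag, part="patch"):
    """Bump a semantic version string such as 'prompt-v1.2.3'."""
    prefix, _, semver = tag.rpartition("v")
    major, minor, patch = (int(x) for x in semver.split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"{prefix}v{major}.{minor}.{patch}"
```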
Step 2: Monitor Production Performance
Step 3: Rollback if Needed
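The retention policy from Best Practices, keeping the previous three versions accessible, can be sketched as a small in-memory registry. In practice rollback means checking out a Git tag; this just illustrates the retention and revert behavior:

```python
class PromptRegistry:
    """Minimal sketch of version history with rollback (assumed
    behavior, not the actual Veritas API)."""

    def __init__(self, keep=3):
        self.keep = keep
        self.history = []  # (version, content), newest last

    def deploy(self, version, content):
        self.history.append((version, content))
        # Retain the current version plus the previous `keep` versions.
        self.history = self.history[-(self.keep + 1):]

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self.history.pop()
        return self.history[-1]
```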
When prompt performance degrades, roll back to the most recent tagged stable version.
Verification Checklist
After completing prompt management operations, verify:
- Prompt tested against representative inputs and edge cases
- New version tagged in the Veritas repository
- Previous versions still accessible for rollback
- Production performance monitored after deployment
Troubleshooting
| Issue | Symptoms | Root Cause | Resolution |
|---|---|---|---|
| Inconsistent Outputs | Same input produces different results | Temperature too high, ambiguous instructions | Lower temperature (0.3-0.5), add specific constraints |
| Prompt Too Long | Context window errors, high costs | Excessive verbosity, redundant instructions | Use fragments, remove redundant sections, increase conciseness |
| Chain Step Failures | Chain execution stops mid-way | Step dependency failure, quality gate not met | Add retry logic, review quality gate thresholds |
| Fragment Conflicts | Contradictory instructions | Multiple fragments with conflicting guidance | Review fragment order, merge conflicting fragments |
| Low Success Rate | Agent outputs don’t meet requirements | Unclear instructions, missing edge cases | Add examples, specify edge case handling |
| High Token Usage | Costs increasing rapidly | Verbose prompts, unnecessary chain steps | Optimize prompt length, remove redundant steps |
| Slow Response Times | Agent execution takes >10s | Large context, complex reasoning | Simplify prompt, use chain for multi-step reasoning |
| Rollback Failed | Previous version not available | Version not tagged, deleted | Always tag stable versions, implement retention policy |
Detailed Troubleshooting: Inconsistent Outputs
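One way to confirm that temperature, rather than prompt wording, is the cause: run the same input at several temperatures and compare how many distinct outputs appear. The stub below simulates a model that is deterministic at low temperature; replace it with a real prompt call:

```python
def diagnose_consistency(run_prompt, test_input,
                         temperatures=(0.2, 0.7, 1.0), n=10):
    """Count distinct outputs per temperature; a count that rises with
    temperature points at sampling noise, not the prompt itself."""
    return {t: len({run_prompt(test_input, t) for _ in range(n)})
            for t in temperatures}

# Stub: deterministic below temperature 0.5; every call differs above it.
def stub(text, temp):
    stub.calls = getattr(stub, "calls", 0) + 1
    return text if temp < 0.5 else f"{text} {stub.calls}"
```

If distinct counts stay high even at low temperature, the problem is ambiguous instructions rather than sampling, and the fix is tighter constraints in the prompt.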
Related Resources
- Prompt Refiner Agent: Iterative prompt optimization
- Chain Architect Agent: Multi-step prompt chain design
- Fragment Curator Agent: Reusable prompt component management
- Veritas Integration: Veritas architecture and integration patterns
Best Practices
Prompt Engineering
- Be specific and clear: Avoid ambiguous language, provide concrete examples
- Use structured output formats: JSON schemas, markdown templates
- Include constraints: Token limits, response length, formatting rules
- Test with edge cases: Empty inputs, very long inputs, malformed data
- Version everything: Tag stable versions, maintain changelog
Chain Design
- Keep chains focused: Each step should have a single, clear purpose
- Add quality gates: Validate outputs before proceeding to next step
- Implement error handling: Retry with feedback, fallback strategies
- Minimize token usage: Only pass necessary context between steps
- Monitor chain performance: Track success rates, execution times per step
Fragment Management
- Keep fragments atomic: Single responsibility per fragment
- Use descriptive names: Clear, searchable fragment IDs
- Document usage: Specify which agents should use each fragment
- Test impact: A/B test before deploying to all agents
- Regular review: Audit fragments quarterly, remove unused ones
Version Control
- Tag all stable versions: Use semantic versioning (major.minor.patch)
- Maintain rollback capability: Keep previous 3 versions accessible
- Document changes: Clear changelogs for every version
- Gradual rollout: Use canary deployments (10% → 50% → 100%)
- Monitor after deployment: Alert on performance degradation
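The canary rollout above (10% → 50% → 100%) can be implemented as deterministic hash bucketing, so a given request always gets the same cohort decision as the percentage ramps up:

```python
import hashlib

def in_canary(request_id, percent):
    """Bucket a request id into 0..99 by hashing; ids below the cutoff
    receive the new prompt version."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent
```

Because the bucket is derived from the id rather than a random draw, requests admitted at 10% remain in the cohort at 50% and 100%, which keeps per-request behavior stable during the ramp.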