
Quick Reference

| Property | Value |
|---|---|
| Domain | Prompts |
| FORGE Stage | 3 (Documentation) |
| Version | 1.0.0 |
| Primary Output | Refined Veritas-compliant prompts |
Use this agent when you need to:
  • Improve the clarity and effectiveness of draft prompts
  • Validate prompts against Veritas schema requirements
  • Document variables and placeholders in existing prompts
  • Calculate quality scores for prompt assessment

Core Capabilities

Prompt Optimization

Improves prompt clarity, specificity, and effectiveness using established engineering patterns

Schema Validation

Ensures prompts conform to Veritas JSON schema with complete metadata

Variable Documentation

Documents all placeholders, types, defaults, and usage requirements

Quality Scoring

Assesses prompts against clarity rubric (target: >0.8)

When to Use

Ideal Use Cases

Draft prompts need clarity improvements before production use
Existing prompts produce inconsistent or ambiguous outputs
Variables/placeholders lack documentation or type information
Prompts require Veritas schema validation and compliance
Quality metrics needed for prompt performance assessment

Usage Examples

Refining a Code Review Prompt

Input: Draft Prompt
Review this code and tell me if it's good.

Code: {{code}}
Agent Invocation
# Via Factory Orchestrator
curl -X POST https://api.so1.io/v1/orchestrate \
  -H "Authorization: Bearer $SO1_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "task": "refine-prompt",
    "agent": "prompt-refiner",
    "inputs": {
      "draft_prompt": "Review this code and tell me if it'\''s good.\n\nCode: {{code}}",
      "use_case": "Automated PR code review",
      "target_format": "markdown"
    }
  }'
Output: Refined Prompt
{
  "id": "vrt-a1b2c3d4",
  "name": "Code Review Analysis",
  "category": "task",
  "prompt": {
    "system": "You are a senior software engineer conducting a thorough code review. Focus on code quality, security vulnerabilities, performance issues, and maintainability. Provide specific, actionable feedback with line references.",
    "user": "Review the following {{language}} code from the {{repository}} repository:\n\n```{{language}}\n{{code}}\n```\n\nContext: {{context}}\nFocus areas: {{focus_areas}}"
  },
  "variables": [
    {
      "name": "language",
      "type": "string",
      "required": true,
      "description": "Programming language (e.g., typescript, python)"
    },
    {
      "name": "code",
      "type": "string",
      "required": true,
      "description": "The code to review"
    }
  ],
  "refinement": {
    "clarity_score": 0.92,
    "improvements": [
      "Added specific role context (senior engineer)",
      "Documented all variables with types",
      "Specified output format expectations"
    ]
  }
}

Output Format

Veritas-Compliant Prompt Schema

interface RefinedPrompt {
  type: "veritas-prompt";
  version: "1.0.0";
  generated_by: "prompt-refiner";
  timestamp: string; // ISO8601
  
  content: {
    id: string; // vrt-{sha256[:8]}
    name: string;
    category: "task" | "chain" | "fragment" | "role";
    version: string;
    status: "draft" | "reviewed" | "production";
    
    prompt: {
      system: string;
      user: string;
      assistant_prefill?: string;
    };
    
    variables: Array<{
      name: string;
      type: "string" | "number" | "array" | "object" | "boolean";
      description: string;
      required: boolean;
      default?: any;
      enum?: any[];
    }>;
    
    output_format: {
      type: "text" | "json" | "markdown";
      schema?: object;
    };
    
    metadata: {
      author: string;
      created: string;
      tags: string[];
      use_cases: string[];
    };
  };
  
  refinement: {
    original_hash: string;
    changes: string[];
    clarity_score: number; // 0.0 - 1.0
    improvements: string[];
  };
}
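The `variables` array in the schema above drives both validation and substitution. A minimal sketch of how a consumer might apply it before rendering; `renderPrompt` and `PromptVariable` are illustrative names under the assumption that defaults are filled first and missing required variables are a hard error, not part of the Veritas API itself:

```typescript
interface PromptVariable {
  name: string;
  type: "string" | "number" | "array" | "object" | "boolean";
  description: string;
  required: boolean;
  default?: any;
}

// Validate inputs against the variable schema, apply defaults, then
// substitute every {{name}} placeholder in the template. Placeholders
// with no resolved value are left untouched.
function renderPrompt(
  template: string,
  variables: PromptVariable[],
  inputs: Record<string, any>
): string {
  const resolved: Record<string, any> = {};
  for (const v of variables) {
    if (inputs[v.name] !== undefined) {
      resolved[v.name] = inputs[v.name];
    } else if (v.default !== undefined) {
      resolved[v.name] = v.default;
    } else if (v.required) {
      throw new Error(`Missing required variable: ${v.name}`);
    }
  }
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in resolved ? String(resolved[name]) : match
  );
}
```

This is why the agent requires every placeholder to carry `required` and `default` metadata: without it, a consumer cannot distinguish "safe to omit" from "hard error".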

Prompt Quality Rubric

The agent assesses prompts using a weighted scoring system:
| Component | Weight | Criteria |
|---|---|---|
| Specificity | 25% | Clear, unambiguous instructions |
| Structure | 20% | Logical organization and flow |
| Variables | 20% | Well-defined, documented placeholders |
| Output Format | 20% | Clear output expectations |
| Context | 15% | Appropriate role/persona setting |
Target Score: >0.8 for production readiness
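The rubric reduces to a weighted average of per-component scores. A minimal sketch, assuming each component is scored 0.0-1.0; the weights come from the table above, while the function name and rounding choice are illustrative:

```typescript
// Rubric weights from the table above (sum to 1.0).
const RUBRIC_WEIGHTS: Record<string, number> = {
  specificity: 0.25,
  structure: 0.20,
  variables: 0.20,
  output_format: 0.20,
  context: 0.15,
};

// Each component is scored 0.0-1.0; the clarity score is the
// weighted sum, rounded to two decimal places.
function clarityScore(components: Record<string, number>): number {
  let total = 0;
  for (const [name, weight] of Object.entries(RUBRIC_WEIGHTS)) {
    total += weight * (components[name] ?? 0); // missing component scores 0
  }
  return Math.round(total * 100) / 100;
}

// Example: strong on specificity/variables, weaker on context.
// clarityScore({ specificity: 1, structure: 0.9, variables: 1,
//                output_format: 0.9, context: 0.8 }) → 0.93
```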

Improvement Patterns

Common prompt engineering patterns applied during refinement:

Role Anchoring

"You are a senior TypeScript engineer..."
Sets clear expertise context

Output Structuring

"Respond with JSON containing: {severity, justification, actions}"
Defines expected format

Example Injection

"Here's an example of good output: ..."
Provides few-shot learning

Constraint Setting

"Keep response under 500 words, focus on actionable items"
Adds explicit boundaries
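These four patterns compose mechanically into a system prompt. A minimal sketch of that composition; the function name and option fields are illustrative, not part of the agent's API:

```typescript
// Compose a system prompt from the refinement patterns above:
// role anchoring, output structuring, example injection, constraint setting.
function composeSystemPrompt(opts: {
  role: string;            // role anchoring
  outputFields?: string[]; // output structuring
  example?: string;        // example injection
  constraints?: string[];  // constraint setting
}): string {
  const parts = [`You are ${opts.role}.`];
  if (opts.outputFields?.length) {
    parts.push(`Respond with JSON containing: {${opts.outputFields.join(", ")}}.`);
  }
  if (opts.example) {
    parts.push(`Here's an example of good output: ${opts.example}`);
  }
  if (opts.constraints?.length) {
    parts.push(opts.constraints.join("; ") + ".");
  }
  return parts.join(" ");
}
```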

FORGE Gate Compliance

Before invoking this agent, ensure:
  • Draft prompt provided: Raw prompt text to be refined
  • Use case documented: Target application or workflow context
  • Veritas schema available: Schema version for validation
  • Output format specified: Expected response format (text/JSON/markdown)
Verification: Factory Orchestrator validates inputs before agent activation
This agent completes successfully when:
  • Refined prompt delivered: Improved version with documented changes
  • Schema validation passed: Full Veritas compliance confirmed
  • Clarity score calculated: Score >0.8 achieved
  • Variables documented: All placeholders have type/description/defaults
  • Test cases defined: Coverage >80% of variable combinations
  • Decision record logged: ADR format with refinement rationale
Verification: Gatekeeper validates outputs before marking stage complete
All significant refinement decisions are logged as:
date:2024-01-15T10:30:00Z|context:Refining code review prompt for PR automation|decision:Added severity scale and structured output|rationale:Original prompt produced inconsistent severity classifications|consequences:Clarity score improved from 0.65 to 0.92|status:accepted
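A record in this format splits on `|` into fields, each a `key:value` pair. Because values such as ISO 8601 timestamps contain colons themselves, only the first colon in each field separates key from value. A minimal parsing sketch (the function name is illustrative):

```typescript
// Parse one pipe-delimited decision record into a key/value map.
// Split each field only on its first colon, since values (e.g. ISO 8601
// timestamps) may contain further colons.
function parseDecisionRecord(record: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const part of record.split("|")) {
    const sep = part.indexOf(":");
    if (sep === -1) continue; // skip malformed fields
    fields[part.slice(0, sep)] = part.slice(sep + 1);
  }
  return fields;
}
```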

Integration Points

Control Plane API

This agent does not directly interact with the Control Plane API - it operates on prompt assets managed in the Veritas repository.

Veritas Prompt Library

Consumes:
  • vrt-meta001: Meta-prompt template for prompt improvement
  • vrt-clarity01: Clarity assessment criteria rubric
  • vrt-schema01: Veritas schema validation rules
Produces:
  • Refined prompts in veritas/agent-prompts/prompts/
  • Status: draft (requires manual review before production)

Repository Integration

  • Primary: so1-io/veritas - Veritas prompt library
  • Secondary: so1-io/so1-agents - Agent prompt consumption
| Agent | Relationship | Integration Point |
|---|---|---|
| Chain Architect | Downstream | Receives refined prompts for multi-step chains |
| Fragment Curator | Peer | Provides reusable fragments during refinement |
| Factory Orchestrator | Upstream | Routes refinement requests and manages workflow |
| Documentation Agents | Consumer | Request prompt refinement for doc generation tasks |

Error Handling

Common Issues

Low Clarity Score (<0.8)
  • Cause: Ambiguous language or missing context
  • Resolution: Apply specificity patterns (role anchoring, constraint setting)

Schema Validation Failure
  • Cause: Missing required fields or incorrect types
  • Resolution: Add required fields per Veritas schema specification

Variable Extraction Failed
  • Cause: Inconsistent placeholder format (e.g., $var vs {{var}})
  • Resolution: Standardize all placeholders to {{variable}} format
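Placeholder standardization is a pair of regex substitutions. A minimal sketch, assuming the only non-standard forms encountered are `$var` and `${var}` (the function name is illustrative):

```typescript
// Normalize $var and ${var} placeholders to the {{var}} format,
// leaving existing {{var}} placeholders untouched.
function standardizePlaceholders(prompt: string): string {
  return prompt
    .replace(/\$\{(\w+)\}/g, "{{$1}}") // ${var} -> {{var}}
    .replace(/\$(\w+)/g, "{{$1}}");    // $var   -> {{var}}
}
```

Note the `${var}` form must be rewritten first; otherwise the bare `$var` rule would match inside it and leave stray braces behind.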

Escalation Path

If the agent cannot complete refinement:
  1. Log decision record with status:blocked
  2. Output partial refinement with clarity issues flagged
  3. Return control to Factory Orchestrator with blocker details

Success Metrics

| Metric | Target | Measurement |
|---|---|---|
| Clarity Score | >0.8 | Rubric-based weighted assessment |
| Schema Compliance | 100% | Veritas schema validation pass |
| Variable Coverage | 100% | All placeholders documented |
| Test Case Coverage | >80% | Tests per variable combination |

Source Files

View Agent Source

Maintained in so1-agents repository under agents/prompts/prompt-refiner.md