Summary
Baseline audit of five repositories that form the knowledge, testing, prompt-engineering, and tooling backbone of the Devarno ecosystem. The purpose is to establish measurable starting points for learning outcomes, agentic operation maturity, and marketing campaign coverage so future sessions can track the delta.

Repositories baselined
so1-content (content hub)
38 findings documents, 9 blog posts, 4 social campaign directories, 30 agent documentation pages, and 14 operational runbooks. 27 files contain marketing or campaign keywords. This is the primary publishing pipeline for technical content and campaign material.

atlas (business knowledge base)
35 learning documents across Sparki, V01T, SO1, and family products. 695 total markdown files. 14 learnings reference agentic or autonomous operations. Marketing content lives in sparki/assets/campaigns/, v01t/marketing/, and monetisation strategy docs. Atlas is the richest single source of business context across the ecosystem.
veritas (prompt engineering)
298 prompts in the index (116 graduated, 182 unreviewed). The README states 255, a discrepancy that needs reconciliation. 8 v3 agent suites with 62 XML specifications and 44 runbooks. 14 prompts are marketing-specific (campaign creation, social media, branding, SEO, landing pages). The FORGE pipeline prompts and A2A handoff chains represent the most mature agentic operations in the ecosystem.

ariel (testing framework)
4 baselines (2 active, 1 draft, 1 missing its status field). 41 test definitions. An 80% CI coverage gate. 3 files document learning outcomes. 18 agentic operation instances across 4 files. Zero marketing campaigns: ariel delegates all marketing to atlas by design.

traceo-mcp-server (MCP tooling + infrastructure)
19 production MCP tools (9 core, 10 Ariel integration). 154 test functions across 33 files. 13 Terraform configs, 28 Kubernetes manifests, 2 Helm charts, 22 CI workflows. No dedicated learning-outcome documents or marketing campaigns: this repo is pure infrastructure.

Business impact
Learning outcomes are concentrated in atlas (35) and so1-content (38), with light coverage in ariel (3) and none in traceo-mcp. Future sessions that produce learnings should always commit to atlas; findings always go to so1-content. Agentic operations are deepest in veritas (971 grep matches, the FORGE pipeline, multi-agent chains) and growing in ariel (18 instances). The MCP server enables agentic operations but does not document its own patterns, which is worth addressing as the tool count grows past 20. Marketing campaigns exist in atlas (22 files), so1-content (27 files), and veritas (14 prompts plus a reference template). Ariel and traceo-mcp have zero, which is correct for their roles. The veritas/reference/marketing-campaign-base.md template should be the canonical starting point for new campaigns.
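The repo-by-repo counts above can be captured as a snapshot so the next review can compute deltas mechanically rather than by re-reading prose. A minimal sketch, assuming a flat dict of per-repo metrics; the metric names here are illustrative, not an existing schema:

```python
# Snapshot of selected counts from this audit; metric names are illustrative.
BASELINE = {
    "atlas": {"learning_docs": 35, "marketing_files": 22},
    "so1-content": {"findings_docs": 38, "marketing_files": 27},
    "veritas": {"indexed_prompts": 298, "marketing_prompts": 14},
    "ariel": {"learning_docs": 3, "agentic_instances": 18},
    "traceo-mcp-server": {"mcp_tools": 19, "test_functions": 154},
}

def delta(old: dict, new: dict) -> dict:
    """Per-repo, per-metric change (new minus old); a metric absent from old counts as 0."""
    return {
        repo: {k: v - old.get(repo, {}).get(k, 0) for k, v in metrics.items()}
        for repo, metrics in new.items()
    }
```

Committing the snapshot alongside the audit would let the quarterly review run `delta(BASELINE, current)` and report only the metrics that moved.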
Action items
- Reconcile veritas prompt count: index.json (298) vs README (255)
- Add a status field to the ariel baseline that is missing one
- Consider adding a learning-outcomes doc pattern to traceo-mcp once tool count exceeds 20
- Use this baseline to measure delta at next quarterly review
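The first action item could be closed with a small reconciliation script. A sketch under stated assumptions: index.json is assumed to be a JSON array of prompt entries, and the README is assumed to state the count in a phrase like "255 prompts"; both are guesses at the real layout and would need adjusting.

```python
import json
import re

def reconcile_prompt_counts(index_path: str, readme_path: str) -> dict:
    """Compare the prompt count in index.json with the count the README claims.

    Assumes index.json is a JSON array of prompt entries and the README
    contains a phrase like "255 prompts"; adjust both for the real schema.
    """
    with open(index_path) as f:
        indexed = len(json.load(f))

    with open(readme_path) as f:
        match = re.search(r"(\d+)\s+prompts", f.read())
    stated = int(match.group(1)) if match else None

    return {
        "indexed": indexed,
        "stated": stated,
        "delta": None if stated is None else indexed - stated,
    }
```

Running this in CI would keep the two numbers from drifting apart again after the one-off reconciliation.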