What happened
Following the platform audit that identified 4 critical bugs and mapped all 25 repos, we scoped the complete implementation backlog: 46 GitHub issues organized into 7 phases across 8 repositories. Every issue has clear acceptance criteria and dependency chains. The platform can now be built incrementally by any agent or developer picking up the next available issue.

Business impact
The Clari platform converts raw agent work artifacts (TASKSET documents) into polished project documentation: Findings, Architecture Decision Records, Software Design Documents, and Runbooks. This session defined the full product roadmap from broken prototype to shippable product. Revenue model defined: three Polar subscription tiers scoped with exact feature boundaries.

| Tier | Price point | What it unlocks |
|---|---|---|
| FREE | $0 | Rule-based extraction, Findings only, 10 docs/month |
| PRO | TBD | AI-powered extraction (Claude), Findings + ADRs + SDDs, 100 docs/month |
| TEAM | TBD | Full synthesis across multiple documents, all doc types, unlimited |
Operational takeaways
- Execution is now unblocked: 46 issues with dependency chains mean any session can pick up the next logical unit of work. No more re-deriving “what should I build next” — the backlog answers that.
- Critical path is 4 phases: Fix bugs -> Build ingestion -> Build renderers -> Ship MCP tools. Estimated 6-8 focused sessions to MVP.
- Shared models prevent drift: A recurring problem was Go and Python implementations diverging on data models. Shared library issues (#2, #3 on clari-shared-lib) ensure all services speak the same language from day one.
- Each phase is independently deployable: F-0 fixes make sift-service usable today. F-1 makes extraction real. F-2 produces actual documents. No phase requires all others to deliver value.
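The "shared models" idea can be sketched as one canonical record serialized to JSON, so the Go and Python services exchange the same shape. The `Finding` fields below are hypothetical, not the actual clari-shared-lib definitions from issues #2 and #3; the point is the round-trip guarantee.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    """Illustrative canonical Finding record (field names assumed)."""
    id: str
    title: str
    severity: str        # e.g. "low" | "medium" | "high" | "critical"
    source_taskset: str  # TASKSET document the finding was extracted from
    summary: str

    def to_json(self) -> str:
        # sort_keys gives a stable wire format for cross-language diffing
        return json.dumps(asdict(self), sort_keys=True)

    @classmethod
    def from_json(cls, raw: str) -> "Finding":
        return cls(**json.loads(raw))
```

A Go struct generated from (or tested against) the same JSON fixtures would catch drift in CI before it reaches production, which is the failure mode the shared-library issues target.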
Action items
- Next session: Start F-0 execution: clone sift-service, fix the 4 blocking bugs (#3-#6), and verify the pipeline runs end-to-end.
- Pricing decision needed: Set PRO and TEAM price points before F-3 (Polar integration). Requires market positioning analysis.
- Agent delegation opportunity: F-0 bug fixes are mechanical — ideal for a code-focused agent. F-1 parser work is similarly well-scoped. Consider parallel agent sessions for independent subtasks.
- Customer validation: Once F-0 + F-1 + F-2 produce real Findings from real TASKSETs, test with one external user before building F-3 payment gating.