Summary
Set up the full delivery tracking for Olly Chat MVP — a feature that lets users interact with the Olly AI assistant through a web chat interface instead of the command line. Created 18 tracked issues across 3 delivery phases on GitHub, with clear dependencies and acceptance criteria for each piece.

What is being built
Olly is Nestr’s local AI assistant (runs on-device, no cloud API keys needed). Today it only works in the terminal. This MVP adds:
- An HTTP server around Olly so other apps can talk to it over the network
- A chat page in the Nestr web dashboard where users type messages and see real-time AI responses, including tool executions displayed as expandable cards
- Docker packaging so Olly can be deployed to Railway (cloud hosting) for shared access
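To make the "HTTP server around Olly" idea concrete, here is a minimal sketch of what such an adapter could look like, assuming Olly's engine exposes a single `chat(prompt)` call. The endpoint path, field names, and the `OllyEngine` interface are illustrative assumptions, not the actual Olly API.

```typescript
import * as http from "node:http";

// Stand-in for Olly's existing engine (assumed interface); the adapter
// never modifies it — it only forwards requests.
interface OllyEngine {
  chat(prompt: string): Promise<string>;
}

// Pure adapter logic, separated from transport so it is easy to test.
export async function handleChat(
  engine: OllyEngine,
  body: unknown,
): Promise<{ status: number; payload: object }> {
  const prompt = (body as { message?: unknown })?.message;
  if (typeof prompt !== "string" || prompt.trim() === "") {
    return { status: 400, payload: { error: "message must be a non-empty string" } };
  }
  const reply = await engine.chat(prompt);
  return { status: 200, payload: { reply } };
}

// Transport wiring: POST /chat forwards the JSON body to the engine.
export function serve(engine: OllyEngine, port: number): http.Server {
  return http
    .createServer(async (req, res) => {
      if (req.method !== "POST" || req.url !== "/chat") {
        res.writeHead(404).end();
        return;
      }
      let raw = "";
      for await (const chunk of req) raw += chunk;
      const { status, payload } = await handleChat(engine, JSON.parse(raw));
      res.writeHead(status, { "Content-Type": "application/json" });
      res.end(JSON.stringify(payload));
    })
    .listen(port);
}
```

Keeping `handleChat` free of transport concerns is what makes the "pure adapter" claim checkable: the engine call is the only behaviour the server adds.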
Delivery structure
| Phase | What | Depends on | Priority |
|---|---|---|---|
| Phase 1 | Olly HTTP server (5 tasks) | Nothing | Critical path |
| Phase 2 | Web chat UI (5 tasks) | Phase 1 API | High |
| Phase 3 | Docker + Railway deploy (4 tasks) | Phase 1 + 2 | Medium |
Key design decision
The HTTP server is a pure adapter — zero changes to Olly’s AI engine, tools, or safety policy. Existing behaviour carries over unchanged. Lower risk, faster delivery.

Operational takeaways
GitHub tooling gaps
- Organisation-level project boards require extra auth permissions not granted by default — needs manual browser re-auth. Note for any CI/CD automation touching GitHub Projects.
- Adding items to a project board must be done sequentially; parallel adds cause conflicts.
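The sequential-add constraint amounts to awaiting each call before starting the next, rather than firing them in parallel with `Promise.all`. A small sketch, where `addItem` is a hypothetical wrapper around whatever GitHub Projects call the automation uses:

```typescript
// Add items to a project board one at a time. `addItem` is a hypothetical
// wrapper around the GitHub Projects API call; the point is the awaited
// loop — parallel adds were observed to cause conflicts.
export async function addItemsSequentially<T>(
  items: T[],
  addItem: (item: T) => Promise<void>,
): Promise<void> {
  for (const item of items) {
    await addItem(item); // never more than one add in flight
  }
}
```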
Cross-repo tracking on .github
Using the org’s .github repo as a central tracker for multi-repo features keeps everything in one view. The project board spans all phases.
Action items
- Begin Phase 1 implementation in `nestr-tools/olly` (server package + `serve` command)
- Finalise WebSocket event contract before Phase 2 starts — it’s the critical handoff between backend and frontend
- Set up Railway project with persistent volume for model caching (4GB model download on first boot, cached thereafter)
- Add `VITE_OLLY_URL` to the Vercel environment when the Railway URL is known
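Since the WebSocket event contract is the critical Phase 1 → Phase 2 handoff, it is worth sketching what that contract might look like before it is finalised. Event names and fields below are assumptions for illustration, not the agreed contract:

```typescript
// Hypothetical event contract between the Olly server and the web chat UI.
// Discriminated-union events keep the frontend's handling exhaustive.
type OllyEvent =
  | { type: "token"; text: string }                      // streamed chunk of the reply
  | { type: "tool_start"; tool: string; args: unknown }  // rendered as an expandable card
  | { type: "tool_result"; tool: string; result: unknown }
  | { type: "done" }
  | { type: "error"; message: string };

// The UI can fold a stream of events into displayable state.
export function reduceEvents(events: OllyEvent[]): { reply: string; tools: string[] } {
  let reply = "";
  const tools: string[] = [];
  for (const ev of events) {
    if (ev.type === "token") reply += ev.text;
    else if (ev.type === "tool_start") tools.push(ev.tool);
  }
  return { reply, tools };
}
```

Agreeing on a typed union like this (in whatever form the team chooses) lets backend and frontend work in parallel against the same shape.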
Risks flagged
| Risk | Impact | Mitigation |
|---|---|---|
| AI response time (10-30s on CPU) | Users may think it’s broken | Status indicator shows “Thinking…” with animation |
| 4GB model download on cold start | First deploy takes ~4 minutes | Railway persistent volume caches after first boot |
| No authentication on Olly server | Unauthorised access possible | CORS restriction + Railway internal networking for MVP; auth planned post-MVP |
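The CORS mitigation in the last row can be sketched as an origin allowlist on the Olly server. The dashboard origin below is a placeholder assumption; the real value would be the deployed Nestr dashboard URL:

```typescript
// Origin allowlist for the MVP. The origin shown is a placeholder, not the
// real Nestr dashboard URL.
const ALLOWED_ORIGINS = new Set(["https://app.nestr.example"]);

// Returns the CORS headers to attach to a response. Unknown origins get no
// CORS headers, so browsers refuse to expose the response to that page.
export function corsHeaders(origin: string | undefined): Record<string, string> {
  if (origin !== undefined && ALLOWED_ORIGINS.has(origin)) {
    return {
      "Access-Control-Allow-Origin": origin,
      "Access-Control-Allow-Methods": "POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    };
  }
  return {};
}
```

Note that CORS only constrains browsers; it does not stop direct HTTP clients, which is why the table pairs it with Railway internal networking and why real auth is planned post-MVP.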