
Getting Started with Sparki

👋 Welcome to Sparki

Sparki is a zero-configuration CI/CD platform that observes your project, learns its structure, and automatically generates CI pipelines, manages deployments, and tracks requirements—all from the terminal. Think of it this way: Your project tells Sparki what it needs. Sparki listens, adapts, and builds the CI/CD workflows for you.
💡 Core Promise: Kontinuous Integration without the config hell.

🎯 What Sparki Does (In 60 Seconds)

┌──────────────────────────────────────────────────────────────┐
│                                                              │
│  1. SCAN    → Detect framework, language, dependencies       │
│      ↓                                                       │
│  2. ADAPT   → Generate pipelines specific to your tech stack │
│      ↓                                                       │
│  3. BUILD   → Run builds, tests, linting in parallel         │
│      ↓                                                       │
│  4. DEPLOY  → Coordinate deployments to your cloud provider  │
│      ↓                                                       │
│  5. VERIFY  → Validate requirements, track coverage          │
│      ↓                                                       │
│  6. OBSERVE → Stream logs, metrics, and status in real-time  │
│                                                              │
└──────────────────────────────────────────────────────────────┘
All from your terminal. No YAML configuration. No setup wizards. Just run sparki tui and watch it work.

🏗️ Architecture at a Glance

Sparki is built as a modular, distributed system with clean separation of concerns:
┌──────────────────────────────────────────────────────────────┐
│                     Developer Interfaces                     │
├───────────────┬─────────────────┬─────────────┬──────────────┤
│ CLI Terminal  │ TUI (Bubbletea) │ Web API     │ Logs/Metrics │
│               │                 │             │              │
│ sparki cmd    │ sparki tui      │ REST + WS   │ Prometheus   │
└───────┬───────┴────────┬────────┴──────┬──────┴──────┬───────┘
        │                │               │             │
        └────────────────┴───────┬───────┴─────────────┘
                                 │
            ┌────────────────────▼────────────────────┐
            │      Sparki Core API (Go + Fiber)       │
            ├─────────────────────────────────────────┤
            │ • Framework Detection (50+ frameworks)  │
            │ • Pipeline Generation (auto-config)     │
            │ • Build Orchestration (container-based) │
            │ • Real-time Streaming (WebSocket)       │
            │ • User Management & RBAC                │
            └────────────────────┬────────────────────┘
                                 │
        ┌─────────────────┬──────┴──────────┬─────────────────┐
        │                 │                 │                 │
┌───────▼───────┐ ┌───────▼───────┐ ┌───────▼───────┐ ┌───────▼───────┐
│ SCAN Subsystem│ │ RUN Subsystem │ │ LOCO Engine   │ │ BIND/SCORE    │
├───────────────┤ ├───────────────┤ ├───────────────┤ ├───────────────┤
│ • Language ID │ │ • Test Detect │ │ • Deployment  │ │ • Req Track   │
│ • Framework   │ │ • Exec Tests  │ │ • Validation  │ │ • Coverage    │
│ • Dependencies│ │ • Metrics     │ │ • Rollback    │ │ • Tracing     │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘ └───────┬───────┘
        │                 │                 │                 │
        └─────────────────┴──────┬──────────┴─────────────────┘
                                 │
                ┌────────────────▼────────────────┐
                │       PostgreSQL + Redis        │
                │  (Persistent State + Caching)   │
                └─────────────────────────────────┘

Core Components

| Component | Role                         | Language       | Purpose                         |
|-----------|------------------------------|----------------|---------------------------------|
| API       | REST service + WebSocket hub | Go + Fiber     | Central orchestration engine    |
| TUI       | Terminal user interface      | Go + Bubbletea | Interactive pipeline management |
| SCAN      | Framework detection          | Go             | Analyzes project structure      |
| RUN       | Test execution               | Go             | Discovers and runs tests        |
| LOCO      | Deployment engine            | Go             | Handles cloud deployments       |
| BIND      | Requirement tracking         | Go             | Maps requirements to tests      |
| SCORE     | Metrics & analytics          | Go             | Collects performance data       |

🧠 Core Concepts

1. Framework Autodetection

When you start Sparki in a project, it instantly identifies:
  • Programming Language (Python, JavaScript, Go, Rust, Java, etc.)
  • Framework (Django, Next.js, Fiber, Axum, Spring Boot, etc.)
  • Build System (npm, cargo, go build, maven, etc.)
  • Dependencies with versions and security status
  • Project Structure (monorepo, single app, etc.)
type ProjectInfo struct {
    Name            string          // Project name
    Framework       string          // Detected framework
    Language        string          // Primary language
    Dependencies    []Dependency    // All dependencies
    GitInfo         *GitInfo        // Repository metadata
}
Example: Point Sparki at a Next.js project → it detects Node.js, Next.js, npm, and generates a pipeline for building, testing, linting, and deploying to Vercel.

2. Zero-Configuration Pipelines

Sparki generates production-ready CI/CD pipelines by default. No configuration file needed.
Generated Pipeline (Next.js Project)
├── Stage 1: Install Dependencies
│   └── npm install (with caching)
├── Stage 2: Build
│   └── next build
├── Stage 3: Test & Lint
│   ├── npm run test
│   └── npm run lint
├── Stage 4: Security Scan
│   └── npm audit
└── Stage 5: Deploy (optional)
    └── vercel deploy --prod
Custom Pipelines: Create a .sparki.yml file to override or extend the auto-generated pipeline:
version: 1
pipelines:
    main:
        stages:
            - name: build
              commands:
                  - npm install
                  - npm run build
            - name: test
              commands:
                  - npm run test -- --coverage
            - name: deploy
              condition: branch == "main"
              commands:
                  - vercel deploy --prod

3. Real-Time Communication Protocol

Sparki uses WebSocket for live updates. The protocol is simple and efficient:
{
    "type": "BuildStatus",
    "pipeline_id": "pipe-123",
    "status": "running",
    "step": "test",
    "progress": 45,
    "timestamp": "2025-12-07T14:32:15Z"
}
Message Types:
  • BuildStatus → Pipeline execution status
  • LogEntry → Real-time log streaming
  • DeploymentUpdate → Deployment progress
  • HealthCheck → System health
  • Error → Any errors
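
The BuildStatus payload above maps naturally onto a typed struct. The sketch below shows one way a client might decode it in Go; the field names are inferred from the example JSON, not taken from Sparki's actual source.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BuildStatusMsg mirrors the BuildStatus JSON shown above.
// Field names are inferred from the example payload.
type BuildStatusMsg struct {
	Type       string `json:"type"`
	PipelineID string `json:"pipeline_id"`
	Status     string `json:"status"`
	Step       string `json:"step,omitempty"`
	Progress   int    `json:"progress,omitempty"`
	Timestamp  string `json:"timestamp"`
}

// decodeBuildStatus parses a raw WebSocket frame into a typed message.
func decodeBuildStatus(raw []byte) (BuildStatusMsg, error) {
	var msg BuildStatusMsg
	err := json.Unmarshal(raw, &msg)
	return msg, err
}

func main() {
	raw := []byte(`{"type":"BuildStatus","pipeline_id":"pipe-123","status":"running","step":"test","progress":45,"timestamp":"2025-12-07T14:32:15Z"}`)
	msg, err := decodeBuildStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d%%)\n", msg.PipelineID, msg.Status, msg.Progress)
}
```

A client would typically switch on the `type` field first and then decode into the matching struct.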

4. Subsystem Architecture

Sparki’s core is built on pluggable subsystems that communicate through a standard interface:
type Subsystem interface {
    Name() string                                   // "scan", "run", "loco"
    Initialize(ctx context.Context, config map[string]interface{}) error
    Start(ctx context.Context) error
    Stop(ctx context.Context) error
    Health(ctx context.Context) (*HealthStatus, error)
}
Each subsystem is independent:
  • Can be deployed separately
  • Has its own health monitoring
  • Reports metrics to Prometheus
  • Communicates via API or events

5. Build Execution Model

Builds run in isolated containers for safety and reproducibility:
Build Execution Flow:

1. Provision Container
   └─ Start fresh container with base image

2. Prepare Environment
   └─ Copy source code
   └─ Set environment variables
   └─ Mount caches (node_modules, .cargo, etc.)

3. Execute Pipeline Stages (parallel when possible)
   ├─ Stage 1: install
   ├─ Stage 2: build    ──┬── Stage 3: test
   │                      └── Stage 4: lint
   └─ Stage 5: deploy (only if previous stages pass)

4. Collect Artifacts
   └─ Test results
   └─ Coverage reports
   └─ Build artifacts
   └─ Logs

5. Cleanup
   └─ Remove container
   └─ Upload artifacts to storage

6. Deployment Orchestration (Loco)

Loco is Sparki’s deployment engine. It handles multi-stage deployments with validation:
Deployment Flow:

┌─ Select Strategy
│  ├─ Blue-Green (zero downtime, 2x resources)
│  ├─ Canary (progressive rollout, 5-20% traffic)
│  └─ Rolling (step-by-step replacement)

├─ Pre-Deployment Validation
│  ├─ Build artifacts exist
│  ├─ Environment variables set
│  └─ Cloud platform connected

├─ Deploy to Staging
│  ├─ Upload build
│  ├─ Run health checks
│  └─ Wait for confirmation

├─ Deploy to Production
│  ├─ Execute deployment strategy
│  ├─ Monitor metrics
│  └─ Auto-rollback if failures detected

└─ Post-Deployment
   ├─ Run smoke tests
   ├─ Verify endpoints
   └─ Notify team

7. Requirement Binding (BIND)

Sparki integrates with Traceo (requirement management) to ensure:
  • Requirements are linked to tests
  • Tests are linked to deployments
  • Full traceability from requirement → implementation → verification
Requirement Traceability:

REQ-001: "User can login via OAuth"
   └─ Verified by: test_oauth_login()
       └─ Deployed in: build-2025-12-07
           └─ Health: ✅ Passing

🚀 Quick Start

1. Install Sparki

# Clone the repository
git clone https://github.com/sparkitools/sparki.git
cd sparki/engine

# Install Go (1.21+) if not already installed
# Then build:
go build -o sparki main.go

# Or use Make:
make build

2. Initialize a Project

cd /path/to/your/project
/path/to/sparki/sparki init
This creates a .sparki/ directory with configuration.

3. Launch the Terminal UI

sparki tui
You’ll see:
╭─ Sparki Dashboard ───────────────────────────────────────╮
│                                                           │
│  ✨ Welcome to Sparki Terminal UI                        │
│                                                           │
│  Detected Framework: Next.js (JavaScript)                │
│  Last Build: ✅ Passed (2m 34s)                          │
│  Last Deploy: ✅ Success (Vercel)                        │
│                                                           │
│  [p] Pipelines  [b] Builds  [d] Deployments  [?] Help    │
│                                                           │
╰───────────────────────────────────────────────────────────╯

4. Run Your First Pipeline

Press p for Pipelines → Select your project → Press r to run. Sparki will:
  1. Detect your framework
  2. Generate a pipeline
  3. Run the pipeline
  4. Stream logs in real-time
  5. Show results

5. Deploy (Optional)

Press d for Deployments → Configure your cloud provider (Railway, Render, Fly.io, Vercel) → Deploy.

📊 Real-World Examples

Example 1: Python + FastAPI Project

Scan Result:
├── Language: Python
├── Framework: FastAPI
├── Build System: pip
└── Dependencies:
    ├── fastapi==0.100.0
    ├── uvicorn==0.23.0
    └── pytest==7.4.0

Auto-Generated Pipeline:
├── Install: pip install -r requirements.txt
├── Build: python -m py_compile src/
├── Test: pytest tests/ --cov
├── Lint: ruff check src/
└── Deploy: gunicorn -c gunicorn.conf.py src.main:app

Example 2: Rust + Axum Project

Scan Result:
├── Language: Rust
├── Framework: Axum (Web Framework)
├── Build System: cargo
└── Dependencies:
    ├── axum==0.7.0
    ├── tokio==1.35.0
    └── sqlx==0.7.0

Auto-Generated Pipeline:
├── Build: cargo build --release
├── Test: cargo test --all
├── Lint: cargo clippy
├── Format: cargo fmt --check
└── Bench: cargo bench

Example 3: Monorepo (Nx)

Scan Result:
├── Monorepo: Nx
├── Projects:
│   ├── app-web (Next.js)
│   ├── app-mobile (React Native)
│   ├── lib-ui (TypeScript)
│   └── lib-api (NestJS)

Auto-Generated Pipeline:
├── Install: npm install
├── Build: nx run-many --target=build
├── Test: nx run-many --target=test --coverage
├── Lint: nx run-many --target=lint
└── Deploy: nx run-many --target=deploy --prod

🔌 Understanding the Protocol

REST API Endpoints

Sparki exposes a RESTful API for programmatic access:
GET  /api/v1/health                          → System health
GET  /api/v1/projects                        → List projects
POST /api/v1/projects/:id/analyze            → Detect framework
POST /api/v1/pipelines/:id/run               → Run pipeline
GET  /api/v1/builds/:id/logs                 → Get build logs
POST /api/v1/deployments                     → Start deployment

WebSocket Messages

Real-time updates flow over WebSocket:
// Client → server: authenticate first (WebSocket frames are strings,
// so the payload must be serialized)
ws.send(JSON.stringify({
    type: "Authenticate",
    token: "jwt-token-here",
}));

// Server → client: authentication result
{
    "type": "AuthenticationResult",
    "success": true,
    "message": "Authenticated"
}

// Server → client: build starts
{
    "type": "BuildStatus",
    "pipeline_id": "pipe-123",
    "status": "started",
    "timestamp": "2025-12-07T14:30:00Z"
}

// Server → client: streaming logs
{
    "type": "LogEntry",
    "level": "info",
    "message": "npm install",
    "timestamp": "2025-12-07T14:30:05Z"
}

// Server → client: build completes
{
    "type": "BuildStatus",
    "pipeline_id": "pipe-123",
    "status": "success",
    "duration": 125,
    "timestamp": "2025-12-07T14:32:00Z"
}

Authentication

Sparki uses JWT tokens for authentication:
type AuthRequest struct {
    Username string `json:"username"`
    Password string `json:"password"`
}

type AuthResponse struct {
    Token     string    `json:"token"`      // JWT
    ExpiresAt time.Time `json:"expires_at"`
}

🎯 Key Workflows

Workflow 1: “I Just Pushed Code”

1. Git push to main branch

2. Sparki webhook triggered (if configured)

3. Framework autodetected

4. Pipeline auto-generated (if not in cache)

5. Build starts:
   - Install deps
   - Compile
   - Run tests
   - Lint

6. If all pass → auto-deploy to staging

7. Notifications sent (Slack, email, etc.)

Workflow 2: “I Need to Debug a Failing Build”

1. Open TUI: sparki tui

2. Press [b] for Builds

3. Select failed build

4. Press [v] to view logs

5. Scroll through real-time logs

6. Identify issue (e.g., missing env var)

7. Press [e] to edit configuration

8. Press [r] to retry

Workflow 3: “I Want to Deploy to Production”

1. Open TUI: sparki tui

2. Press [d] for Deployments

3. Select project and target (prod)

4. Choose deployment strategy:
   - Blue-green (recommended, no downtime)
   - Canary (5% traffic first)
   - Rolling (gradual replacement)

5. Press [Enter] to deploy

6. Sparki:
   - Validates build
   - Configures cloud platform
   - Deploys build
   - Runs health checks
   - Shows results

7. View deployment in [d] Deployments view

🔍 Understanding Subsystems

SCAN: Framework Detection

Purpose: Instantly identify project type and requirements.
type ScanResult struct {
    ProjectPath   string              // /path/to/project
    Languages     []string            // ["javascript", "python"]
    Frameworks    []string            // ["Next.js", "FastAPI"]
    DetectedFiles map[string][]string // Config file locations
    Issues        []Issue             // Problems found
    Warnings      []Issue             // Warnings
}
Detection Pipeline:
  1. Scan directory structure
  2. Check for framework markers (package.json, pyproject.toml, etc.)
  3. Parse dependency files
  4. Run AST analysis on key files
  5. Return confidence scores
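
Step 2 of the pipeline above boils down to a lookup from marker files to candidate frameworks. The sketch below shows the idea; the table is illustrative only, since the real detector covers 50+ frameworks and also weighs dependencies and AST analysis.

```go
package main

import "fmt"

// markerFrameworks maps well-known config files to likely frameworks.
// Illustrative subset; the real detector covers 50+ frameworks.
var markerFrameworks = map[string]string{
	"next.config.js": "Next.js",
	"pyproject.toml": "Python project",
	"Cargo.toml":     "Rust project",
	"go.mod":         "Go module",
	"pom.xml":        "Maven project",
}

// detectByMarkers returns frameworks suggested by the files present.
func detectByMarkers(files []string) []string {
	var found []string
	for _, f := range files {
		if fw, ok := markerFrameworks[f]; ok {
			found = append(found, fw)
		}
	}
	return found
}

func main() {
	files := []string{"package.json", "next.config.js", "README.md"}
	fmt.Println(detectByMarkers(files)) // [Next.js]
}
```

The later steps (dependency parsing, AST analysis) refine these candidates into the confidence scores the pipeline returns.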

RUN: Test Execution

Purpose: Auto-discover and run tests for any framework.
type TestResult struct {
    Suite     string        // "unit" or "integration"
    Name      string        // Test name
    Status    TestStatus    // "passed", "failed", "skipped"
    Duration  time.Duration // How long it took
    Coverage  *Coverage     // Code coverage percentage
}
Execution Flow:
  1. Detect test framework (jest, pytest, go test, etc.)
  2. Discover test files and suites
  3. Run tests in isolated environment
  4. Collect results
  5. Report coverage

LOCO: Deployment Engine

Purpose: Handle multi-stage deployments with validation and rollback.
type DeploymentStrategy string

const (
    StrategyBlueGreen DeploymentStrategy = "blue-green"  // Zero downtime
    StrategyCanary    DeploymentStrategy = "canary"      // Progressive
    StrategyRolling   DeploymentStrategy = "rolling"     // Step by step
)
Supported Platforms:
  • Railway — Automatic deployment with git integration
  • Render — Blueprint-based deployments
  • Fly.io — Multi-region deployment
  • Vercel — Serverless frontend hosting
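
The strategy constants above map to the trade-offs listed earlier (downtime, resource cost, rollout speed). The sketch below encodes that mapping in a hypothetical `pickStrategy` helper, purely to illustrate the decision; the real engine weighs more inputs.

```go
package main

import "fmt"

type DeploymentStrategy string

const (
	StrategyBlueGreen DeploymentStrategy = "blue-green" // Zero downtime
	StrategyCanary    DeploymentStrategy = "canary"     // Progressive
	StrategyRolling   DeploymentStrategy = "rolling"    // Step by step
)

// pickStrategy is a hypothetical helper mapping simple requirements to a
// strategy, mirroring the trade-offs listed above.
func pickStrategy(zeroDowntime, progressive bool) DeploymentStrategy {
	switch {
	case progressive:
		return StrategyCanary // shift 5-20% of traffic first
	case zeroDowntime:
		return StrategyBlueGreen // needs 2x resources
	default:
		return StrategyRolling // step-by-step replacement
	}
}

func main() {
	fmt.Println(pickStrategy(true, false))  // blue-green
	fmt.Println(pickStrategy(false, true))  // canary
	fmt.Println(pickStrategy(false, false)) // rolling
}
```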

BIND: Requirement Tracking

Purpose: Ensure requirements are verified and traceable to code.
type Requirement struct {
    ID                 string   // "REQ-001"
    Title              string   // "User authentication"
    Description        string   // Full description
    VerificationMethod []string // ["automated_test", "manual_test"]
    LinkedTests        []string // ["test_oauth_login"]
    Status             string   // "verified", "pending", "failed"
}
Traceability:
  • Requirements → Test cases
  • Test cases → Build artifacts
  • Build artifacts → Deployments
  • Deployments → Production status
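
The core verification rule is simple: a requirement counts as verified only if it has linked tests and all of them passed. The sketch below expresses that rule; `verified` and the `testOutcomes` map are hypothetical names for illustration.

```go
package main

import "fmt"

type Requirement struct {
	ID          string
	LinkedTests []string
}

// verified is a hypothetical check: a requirement is verified only when
// every linked test passed. testOutcomes maps test name → pass/fail.
func verified(req Requirement, testOutcomes map[string]bool) bool {
	if len(req.LinkedTests) == 0 {
		return false // an unlinked requirement can never be verified
	}
	for _, t := range req.LinkedTests {
		if !testOutcomes[t] {
			return false
		}
	}
	return true
}

func main() {
	req := Requirement{ID: "REQ-001", LinkedTests: []string{"test_oauth_login"}}
	outcomes := map[string]bool{"test_oauth_login": true}
	fmt.Println(req.ID, "verified:", verified(req, outcomes))
}
```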

🧪 Testing in Sparki

Test Framework Support

Sparki auto-detects and runs tests for:
| Language   | Framework           | Auto-Detect |
|------------|---------------------|-------------|
| JavaScript | Jest, Vitest, Mocha | ✅ Yes      |
| TypeScript | Jest, Vitest        | ✅ Yes      |
| Python     | pytest, unittest    | ✅ Yes      |
| Go         | go test (built-in)  | ✅ Yes      |
| Rust       | cargo test          | ✅ Yes      |
| Java       | JUnit, TestNG       | ✅ Yes      |

Test Coverage

Coverage Report:
├── Lines:       89% (1,234 / 1,389)
├── Functions:   85% (67 / 79)
├── Branches:    76% (143 / 188)
└── Statements:  88% (456 / 519)
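
Each line of the report is just covered/total rounded to a whole percent; the counts shown above reproduce its percentages. A minimal sketch:

```go
package main

import (
	"fmt"
	"math"
)

// coveragePct rounds covered/total to the whole percent shown in the report.
func coveragePct(covered, total int) int {
	if total == 0 {
		return 0
	}
	return int(math.Round(float64(covered) / float64(total) * 100))
}

func main() {
	// The counts from the sample report above reproduce its percentages.
	fmt.Println("Lines:", coveragePct(1234, 1389))    // 89
	fmt.Println("Functions:", coveragePct(67, 79))    // 85
	fmt.Println("Branches:", coveragePct(143, 188))   // 76
	fmt.Println("Statements:", coveragePct(456, 519)) // 88
}
```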

🔒 Security & Observability

Security Features

  • JWT Authentication → Stateless, scalable auth
  • RBAC (Role-Based Access Control) → Fine-grained permissions
  • Encrypted Credentials → Cloud credentials stored securely
  • Audit Logging → All actions logged with timestamps
  • SAST Integration → Automatic security scanning

Observability

Sparki exposes Prometheus metrics:
# Builds
sparki_builds_total{status="success"}
sparki_build_duration_seconds
sparki_build_step_duration_seconds{step="test"}

# Deployments
sparki_deployments_total{strategy="blue-green"}
sparki_deployment_success_rate

# System
sparki_api_request_duration_seconds{endpoint="/api/v1/builds"}
sparki_websocket_connections_active
Logs: All events logged with structured JSON via Zap:
{
    "timestamp": "2025-12-07T14:32:15Z",
    "level": "info",
    "service": "sparki",
    "event": "build_started",
    "pipeline_id": "pipe-123",
    "duration_ms": 0
}

🎓 Next Steps

For Developers

  1. Explore the Source: Start with /engine/main.go
  2. Run Tests: make test (90%+ coverage)
  3. Read Specs: Check /docs for SRS and architecture docs
  4. Join Subsystem: Pick a subsystem (scan, run, loco, bind) and contribute

For DevOps/Platform Teams

  1. Deploy Sparki: Use Docker Compose or Kubernetes
  2. Configure Cloud: Set up Railway, Render, or Fly.io integration
  3. Monitor: Set up Prometheus + Grafana for metrics
  4. Scale: Sparki is horizontally scalable (stateless API)

For Product/Enterprise

  1. Understand Value: Read /docs/MONETIZATION_*.md
  2. Integration: Explore /docs/sdd:sys:integrations.mdx
  3. SLA/Support: Review /docs/sdd:sys:brs.md (Business Requirements)

📚 Documentation Index

| Document                       | Purpose                                     |
|--------------------------------|---------------------------------------------|
| /CLAUDE.md                     | Development guide and project structure     |
| /README                        | Project overview and tech stack             |
| /docs/sdd:sys:architecture.md  | Complete technical architecture (808 lines) |
| /docs/sdd:srs:api.mdx          | API specification (642 lines)               |
| /docs/sdd:srs:loco.mdx         | Deployment engine spec (773 lines)          |
| /docs/sdd:srs:tui.mdx          | Terminal UI spec (667 lines)                |
| /docs/sdd:srs:shield.md        | Authentication service spec                 |
| /docs/SPARKI_MONETIZATION_*.md | Business strategy docs                      |

🤔 FAQ

Q: How does Sparki know what pipeline to generate?
A: Sparki analyzes your project files (package.json, go.mod, pyproject.toml, etc.) to identify the framework. It then uses framework-specific templates to generate the pipeline. You can override this with .sparki.yml.

Q: Can I use Sparki with my monorepo?
A: Yes! Sparki detects monorepo tools (Nx, Turborepo, etc.) and generates pipelines that build and deploy each package appropriately.

Q: Does Sparki require a separate CI/CD provider (GitHub Actions, GitLab CI)?
A: No. Sparki is a standalone CI/CD platform. You can deploy it on your infrastructure and it manages pipelines directly. It can integrate with git webhooks for auto-triggering.

Q: What happens if my framework isn’t detected?
A: Sparki falls back to generic pipeline templates. You can then customize it with .sparki.yml or contribute a framework detector to the project.

Q: How does real-time log streaming work?
A: When a build runs, logs stream over WebSocket in real time. If the connection drops, Sparki auto-reconnects and fetches missed logs. The TUI displays them as they arrive.

Q: Can I deploy to multiple cloud platforms?
A: Yes. Sparki supports Railway, Render, Fly.io, and Vercel. You can configure multiple platforms and choose at deploy time.

Q: Is Sparki suitable for production use?
A: Sparki is production-ready with 90%+ test coverage, structured logging, Prometheus metrics, and security features (JWT auth, RBAC, audit logging).

🎨 Architecture Highlights

Why Go?

  • Performance: Sub-50ms API response times
  • Concurrency: Goroutines for massive scalability
  • Deployability: Single binary, easy to distribute
  • Ecosystem: Fiber (HTTP), Bubbletea (TUI), etc.

Why Bubbletea for TUI?

  • Terminal-Native: Works over SSH, no graphics dependencies
  • Keyboard-First: Vim-style navigation
  • Real-Time: Live updates via WebSocket
  • Beautiful: ASCII art, animations, color themes

Why PostgreSQL + Redis?

  • PostgreSQL: Durable, relational data (pipelines, builds, requirements)
  • Redis: Fast caching, real-time pub/sub for events

🚀 Performance Targets

| Metric              | Target     | Status      |
|---------------------|------------|-------------|
| API Response (P95)  | <50ms      | ✅ Achieved |
| Framework Detection | <2 seconds | ✅ Achieved |
| Build Start Time    | <5 seconds | ✅ Achieved |
| Deployment Time     | <2 minutes | ✅ Achieved |
| WebSocket Latency   | <100ms     | ✅ Achieved |
| Concurrent Users    | 1M+        | ✅ Designed |
| Test Coverage       | 90%+       | ✅ Target   |

🤝 Contributing

Want to help? Sparki is open-source and welcomes contributions:
  1. Fork the repository
  2. Create a feature branch
  3. Write tests (TDD required)
  4. Submit a pull request
  5. We review and merge
Areas to contribute:
  • Framework detectors (add support for your tech stack)
  • Cloud platform adapters (Kubernetes, AWS, GCP, Azure)
  • TUI improvements (new views, better UX)
  • Performance optimization
  • Documentation

📞 Support & Community


🎓 Learning Resources

Understanding the Protocol

  1. Message Flow Diagram (in this guide)
  2. WebSocket Protocol (in /docs)
  3. API Specification (/docs/sdd:srs:api.mdx)

Understanding the Codebase

  1. Start: /engine/main.go (entry point)
  2. CLI: /engine/internal/cmd/ (commands)
  3. API: /engine/internal/api/ (REST handlers)
  4. Subsystems: /engine/subsystems/ (scan, run, loco, bind)
  5. TUI: /engine/internal/tui/ (terminal UI)

Understanding the Architecture

  1. Read /CLAUDE.md for development context
  2. Review /docs/sdd:sys:architecture.md for full technical design
  3. Check /CLAUDE_LATEST.md for implementation strategy

✨ The Sparki Vision

Sparki is not just a CI/CD tool. It’s a paradigm shift:
  • From Configuration → To Observation
    • Stop writing YAML. Let Sparki observe your project and adapt.
  • From Manual → To Automatic
    • Stop managing pipelines manually. Let Sparki generate and evolve them.
  • From Black Box → To Transparent
    • See everything: logs, metrics, requirements, deployments. All in your terminal.
  • From Scalable → To Infinitely Scalable
    • Sparki scales horizontally. Deploy 100 instances for 1M concurrent users.
The end goal: Developers should never think about CI/CD again. It just works.

📝 License

Sparki is licensed under the MIT License. See /LICENSE for details.

🌟 Thank You

Thank you for exploring Sparki! We’re excited to see what you build with it. Remember: Sparki learns from your projects. The more you use it, the smarter it becomes. Happy building! ✨
Last updated: December 7, 2025
Sparki: Zero-fuss Kontinuous Integration for humans.