
Sparki Technical Architecture Document

:::info readme
This document provides the comprehensive technical architecture for the Sparki platform, detailing the design decisions, technology choices, deployment patterns, and scalability strategies that enable Sparki to deliver zero-configuration CI/CD at massive scale. The architecture is designed around core principles of performance, reliability, scalability, and developer experience.
:::

Version: 1.0
Date: December 3, 2025
Architecture Owner: Chief Technology Officer
Last Updated: December 3, 2025

1. Architecture Overview

1.1 System Architecture Diagram

┌─────────────────────────────────────────────────────────────────┐
│                     Developer Interfaces                         │
├──────────────────┬──────────────────┬──────────────────┬────────┤
│  CLI (Sparki)    │  Terminal UI     │  Web Dashboard   │ REST   │
│  (Go Fiber)      │  (Go Bubbletea)  │  (React/Vue)     │ API    │
└────────┬──────────┴──────┬───────────┴───────┬──────────┴────┬───┘
         │                 │                   │              │
         └─────────────────┼───────────────────┼──────────────┘
                           │                   │
         ┌─────────────────┴───────────────────┴────────────┐
         │                                                   │
    ┌────▼────────────────────────────────────────────────┐ │
    │    Sparki Core Platform (Go Fiber + Axum)          │ │
    ├─────────────────────────────────────────────────────┤ │
    │  Load Balancer / API Gateway (Fiber)                │ │
    ├────┬──────────────┬──────────────┬──────┬──────────┤ │
    │    │              │              │      │          │ │
    │ ┌──▼──┐      ┌────▼──┐     ┌────▼──┐ ┌─▼──┐ ┌────▼──┐│
    │ │API  │      │Framework│    │Build  │ │Auth│ │Workspace││
    │ │Svc  │      │Detection│    │Engine │ │Svc │ │Admin  ││
    │ └──┬──┘      └────┬───┘     └──┬────┘ └─┬──┘ └───┬───┘│
    │    │              │            │        │        │    │
    └────┼──────────────┼────────────┼────────┼────────┼────┘
         │              │            │        │        │
    ┌────▼──────────────▼────────────▼────────▼────────▼────┐
    │  Loco Deployment Engine (Rust Axum/Tokio)            │
    ├───────────────────────────────────────────────────────┤
    │  Pre-Deploy Validation  │  Stage Manager              │
    │  Health Checks          │  Rollback Orchestration     │
    └──┬────────────────────────────────────────────────┬───┘
       │                                                │
     ┌──▼─────────┐ ┌───▼─────┐ ┌────────▼─────────┐ ┌──────▼───────┐
     │ PostgreSQL │ │  Redis  │ │ Cloud Adapters   │ │ Observability│
     │ (Primary)  │ │ (Cache) │ │ (Railway, Render)│ │ Stack        │
     │            │ │         │ │                  │ │ (Prom, ELK)  │
     └────────────┘ └─────────┘ └──────────────────┘ └──────────────┘

1.2 Layered Architecture

┌─────────────────────────────────────────────────────────┐
│  Presentation Layer                                     │
│  CLI, TUI, Web Dashboard, REST API                      │
├─────────────────────────────────────────────────────────┤
│  Application Layer                                      │
│  API Services (Go Fiber), Business Logic                │
├─────────────────────────────────────────────────────────┤
│  Integration Layer                                      │
│  Cloud Platform Adapters, External Service Integration  │
├─────────────────────────────────────────────────────────┤
│  Orchestration Layer                                    │
│  Loco Engine (Rust Axum), Job Scheduling, Workflows     │
├─────────────────────────────────────────────────────────┤
│  Data Layer                                             │
│  PostgreSQL, Redis, S3/Object Storage                   │
├─────────────────────────────────────────────────────────┤
│  Infrastructure Layer                                   │
│  Kubernetes, Docker, Observability Stack                │
└─────────────────────────────────────────────────────────┘

2. Core Components

2.1 Sparki API (Go Fiber)

Purpose: Central REST API providing project management, pipeline orchestration, and CI/CD operations.

Technology Stack:
  • Framework: Go Fiber v2 (ultra-high-performance HTTP framework)
  • Runtime: Go 1.22+ with goroutine-based concurrency
  • Database Driver: pgx for PostgreSQL
  • Caching: go-redis/redis
  • JSON Parsing: encoding/json (stdlib optimized)
  • Error Handling: Custom error types with structured context
Key Characteristics:
  • Zero-allocation routing and request handling via fasthttp
  • Minimal garbage collection pressure
  • Sub-50ms p95 response times at scale
  • 100K+ concurrent connections per instance
  • Horizontal scaling without session affinity
Core Modules:
cmd/
├── api/
│   ├── main.go                  # Application entry point
│   └── config.go                # Configuration management
internal/
├── api/
│   ├── handlers/
│   │   ├── projects.go          # Project CRUD operations
│   │   ├── pipelines.go         # Pipeline management
│   │   ├── builds.go            # Build orchestration
│   │   ├── deployments.go       # Deployment coordination
│   │   └── auth.go              # Authentication flow
│   ├── middleware/
│   │   ├── auth.go              # JWT validation
│   │   ├── logging.go           # Request/response logging
│   │   ├── metrics.go           # Prometheus instrumentation
│   │   └── errors.go            # Error handling
│   ├── service/
│   │   ├── project.go           # Project business logic
│   │   ├── pipeline.go          # Pipeline generation
│   │   ├── build.go             # Build orchestration
│   │   └── detector.go          # Framework detection
│   ├── repository/
│   │   ├── project.go           # Project data access
│   │   ├── pipeline.go          # Pipeline persistence
│   │   └── build.go             # Build record storage
│   └── models/
│       ├── project.go           # Project domain model
│       ├── pipeline.go          # Pipeline domain model
│       └── build.go             # Build domain model
pkg/
├── detector/                    # Framework detection engine
├── adapters/                    # Framework adapters
├── storage/                     # Storage abstraction
└── observability/               # Logging and metrics
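As an illustration of how a first-pass detector in pkg/detector might work, here is a hypothetical Go sketch that maps marker files at the repository root to frameworks. The function name and marker table are illustrative, not the actual API; the real engine would also inspect file contents and lockfiles:

```go
import "path/filepath"

// DetectFramework guesses a project's framework from well-known marker
// files at the repository root. First matching file wins.
func DetectFramework(files []string) string {
	markers := map[string]string{
		"package.json": "node",
		"go.mod":       "go",
		"Cargo.toml":   "rust",
		"pom.xml":      "java-maven",
		"manage.py":    "django",
	}
	for _, f := range files {
		if fw, ok := markers[filepath.Base(f)]; ok {
			return fw
		}
	}
	return "unknown"
}
```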

2.2 Loco Deployment Engine (Rust Axum/Tokio)

Purpose: Intelligent deployment orchestration with validation, health checks, and rollback capabilities.

Technology Stack:
  • Framework: Rust Axum web framework
  • Async Runtime: Tokio (industry-leading async runtime)
  • HTTP Client: reqwest for cloud platform APIs
  • Database: sqlx for async PostgreSQL queries
  • Caching: redis crate for Redis integration
  • Serialization: serde/serde_json for JSON
  • Error Handling: anyhow/thiserror for result chains
Key Characteristics:
  • Lock-free async with zero-copy operations
  • Memory-safe concurrency without garbage collection
  • Sub-millisecond latencies for deployment operations
  • Predictable performance under load
  • Automatic resource cleanup via RAII pattern
Core Modules:
src/
├── main.rs                      # Application entry point
├── config/
│   └── mod.rs                   # Configuration management
├── orchestrator/
│   ├── mod.rs                   # Orchestration coordinator
│   ├── validator.rs             # Pre-deployment validation
│   ├── stage_manager.rs         # Multi-stage orchestration
│   ├── health_checker.rs        # Health check execution
│   └── rollback.rs              # Rollback coordination
├── adapters/
│   ├── mod.rs                   # Platform adapter trait
│   ├── railway.rs               # Railway adapter
│   ├── render.rs                # Render adapter
│   ├── fly_io.rs                # Fly.io adapter
│   └── vercel.rs                # Vercel adapter
├── scripts/
│   ├── executor.rs              # Custom script execution
│   └── sandbox.rs               # Execution sandbox
├── models/
│   ├── deployment.rs            # Deployment domain model
│   ├── validation.rs            # Validation result model
│   └── health.rs                # Health check model
├── repository/
│   └── deployment.rs            # Async data access
└── observability/
    ├── logging.rs               # Structured logging
    ├── metrics.rs               # Prometheus metrics
    └── tracing.rs               # Distributed tracing

2.3 Terminal UI (Go Bubbletea)

Purpose: Delightful, keyboard-centric terminal interface for CI/CD management.

Technology Stack:
  • Framework: Charmbracelet Bubbletea
  • Terminal Rendering: Bubbletea's built-in renderer (termenv) for cross-platform terminal support
  • Colors: charmbracelet/lipgloss for styling
  • Input Handling: Custom keybind mapping
  • WebSocket Client: gorilla/websocket for real-time updates
  • Graphics: ASCII art and Unicode support
Core Modules:
cmd/tui/
├── main.go                      # TUI entry point
└── config.go                    # Configuration loading
internal/tui/
├── app/
│   ├── app.go                   # Main Bubbletea model
│   ├── state.go                 # Application state
│   └── theme.go                 # Theme configuration
├── views/
│   ├── overview.go              # Overview dashboard
│   ├── pipelines.go             # Pipeline list view
│   ├── builds.go                # Build monitor view
│   ├── deployments.go           # Deployment view
│   └── settings.go              # Settings view
├── components/
│   ├── pipeline_list.go         # Pipeline list widget
│   ├── build_progress.go        # Build progress bar
│   ├── log_viewer.go            # Log streaming viewer
│   └── status_bar.go            # Status indicator
├── keybinds/
│   └── keymap.go                # Keyboard shortcuts
└── client/
    ├── api.go                   # API client
    └── websocket.go             # WebSocket client
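Bubbletea applications follow the Elm architecture: a model receives messages, and Update returns the next state. A dependency-free sketch of that pattern for a build list view (types here are illustrative, not the real internal/tui models, and tea.Cmd side effects are omitted):

```go
// KeyMsg stands in for Bubbletea's key message type.
type KeyMsg string

// buildListModel holds the state of a scrollable build list.
type buildListModel struct {
	builds   []string
	cursor   int
	selected string
}

// Update advances the model in response to a key press, exactly as a
// Bubbletea Update method would.
func (m buildListModel) Update(msg KeyMsg) buildListModel {
	switch msg {
	case "up":
		if m.cursor > 0 {
			m.cursor--
		}
	case "down":
		if m.cursor < len(m.builds)-1 {
			m.cursor++
		}
	case "enter":
		m.selected = m.builds[m.cursor]
	}
	return m
}
```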

2.4 Shield Authentication Service (Django)

Purpose: Identity and access management for users, workspaces, and RBAC.

Technology Stack:
  • Framework: Django 4.2+
  • Database: PostgreSQL via psycopg2
  • Caching: Redis via django-redis
  • Password Hashing: bcrypt
  • JWT: djangorestframework-simplejwt
  • OAuth/SAML: django-allauth, python3-saml
  • Async Tasks: Celery for async operations
Core Modules:
manage.py
config/
├── settings.py                  # Django settings
├── urls.py                      # URL routing
└── wsgi.py                      # WSGI entry point
apps/
├── authentication/
│   ├── models.py                # User, tokens, refresh models
│   ├── views.py                 # Auth endpoints
│   ├── serializers.py           # Request/response serialization
│   └── permissions.py           # Custom permissions
├── workspaces/
│   ├── models.py                # Workspace, team models
│   ├── views.py                 # Workspace management endpoints
│   └── admin.py                 # Admin dashboard
├── rbac/
│   ├── models.py                # Role, permission models
│   ├── services.py              # Permission checking logic
│   └── cache.py                 # Permission cache management
└── integrations/
    ├── oauth.py                 # OAuth 2.0 handlers
    └── saml.py                  # SAML 2.0 handlers

3. Data Architecture

3.1 PostgreSQL Schema

Core Tables:
-- Users and authentication
CREATE TABLE users (
    id UUID PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(255),
    password_hash VARCHAR(255) NOT NULL,
    email_verified BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Workspaces
CREATE TABLE workspaces (
    id UUID PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    slug VARCHAR(255) NOT NULL UNIQUE,
    owner_id UUID REFERENCES users(id),
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Projects
CREATE TABLE projects (
    id UUID PRIMARY KEY,
    workspace_id UUID REFERENCES workspaces(id),
    name VARCHAR(255) NOT NULL,
    git_url VARCHAR(255) NOT NULL,
    framework VARCHAR(50),
    language VARCHAR(50),
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Pipelines
CREATE TABLE pipelines (
    id UUID PRIMARY KEY,
    project_id UUID REFERENCES projects(id),
    name VARCHAR(255) NOT NULL,
    configuration JSONB NOT NULL,
    version INT DEFAULT 1,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Builds
CREATE TABLE builds (
    id UUID PRIMARY KEY,
    pipeline_id UUID REFERENCES pipelines(id),
    commit_sha VARCHAR(40),
    branch VARCHAR(255),
    status VARCHAR(50),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Deployments
CREATE TABLE deployments (
    id UUID PRIMARY KEY,
    build_id UUID REFERENCES builds(id),
    stage VARCHAR(50),
    status VARCHAR(50),
    platform VARCHAR(50),
    strategy VARCHAR(50),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Audit logs
CREATE TABLE audit_logs (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    workspace_id UUID REFERENCES workspaces(id),
    event_type VARCHAR(50),
    resource_type VARCHAR(50),
    resource_id UUID,
    action VARCHAR(50),
    details JSONB,
    created_at TIMESTAMP DEFAULT NOW()
);
Indexes:
-- Query performance indexes
CREATE INDEX idx_projects_workspace ON projects(workspace_id);
CREATE INDEX idx_pipelines_project ON pipelines(project_id);
CREATE INDEX idx_builds_pipeline ON builds(pipeline_id);
CREATE INDEX idx_deployments_build ON deployments(build_id);
CREATE INDEX idx_audit_logs_workspace ON audit_logs(workspace_id);
CREATE INDEX idx_audit_logs_timestamp ON audit_logs(created_at);

3.2 Redis Cache Strategy

Cache Keys Pattern:
# User/auth cache
user:{user_id}                          → User object
user:{user_id}:permissions:{workspace}  → Permission set
user:{user_id}:workspaces               → Workspace list
token:refresh:{token}                   → Refresh token metadata

# Project/pipeline cache
project:{project_id}                    → Project metadata
pipeline:{pipeline_id}                  → Pipeline configuration
build:{build_id}:logs                   → Build log stream
deployment:{deployment_id}:status       → Deployment status

# Session cache
session:{session_id}                    → Session data
Cache TTLs:
  • User profiles: 1 hour
  • Permissions: 15 minutes
  • Pipelines: 5 minutes
  • Build logs: Streaming (no TTL)
  • Deployments: 10 minutes
  • Sessions: 24 hours

3.3 Object Storage (S3/MinIO)

Bucket Structure:
sparki-artifacts/
├── projects/{project_id}/
│   ├── builds/{build_id}/
│   │   ├── artifacts/
│   │   ├── logs/
│   │   └── test-reports/
│   └── deployments/{deployment_id}/
│       └── logs/

sparki-backups/
├── database-backups/
├── audit-logs/
└── deployment-snapshots/
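A small helper can keep object keys consistent with this bucket layout. Hypothetical sketch; the storage package may expose a different API:

```go
import "fmt"

// artifactKey renders projects/{project_id}/builds/{build_id}/artifacts/{name},
// matching the sparki-artifacts layout above.
func artifactKey(projectID, buildID, name string) string {
	return fmt.Sprintf("projects/%s/builds/%s/artifacts/%s", projectID, buildID, name)
}
```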

4. Storage Integration Strategy

4.1 Fiber Storage Adapter Selection

Primary Choices for Sparki:
| Adapter    | Use Case                  | Why Selected                                        |
| ---------- | ------------------------- | --------------------------------------------------- |
| PostgreSQL | Primary database          | ACID compliance, scalability, proven reliability    |
| Redis      | Caching layer             | Sub-millisecond performance, native data structures |
| S3         | Build artifacts           | Scalability, durability, cost efficiency            |
| Badger     | Embedded state (optional) | Fast embedded key-value for local caching           |
| SurrealDB  | Future real-time DB       | Multi-platform, time-series for metrics             |

4.2 Storage Adapter Implementation

import (
    "context"
    "time"

    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/jackc/pgx/v5/pgxpool"
    "github.com/redis/go-redis/v9"
)

// Fiber storage adapter interface
type StorageAdapter interface {
    // Key-value operations
    Get(ctx context.Context, key string) (string, error)
    Set(ctx context.Context, key string, value string, ttl time.Duration) error
    Delete(ctx context.Context, key string) error

    // List operations
    List(ctx context.Context, prefix string) ([]string, error)

    // Cleanup
    Close() error
}

// PostgreSQL adapter (primary). Holds a connection pool rather than a
// single *pgx.Conn, which is not safe for concurrent use.
type PostgresAdapter struct {
    pool *pgxpool.Pool
}

// Redis adapter (caching)
type RedisAdapter struct {
    client *redis.Client
}

// S3 adapter (artifacts)
type S3Adapter struct {
    client *s3.Client
}

5. Observability Architecture

5.1 Logging Stack

Logging Pipeline:

Application Logs (JSON)
  → Structured Logging Layer (Fiber middleware)
  → Log Aggregation (Filebeat/Logstash)
  → Elasticsearch (centralized storage)
  → Kibana (log visualization & analysis)

Log Format:
{
    "timestamp": "2025-12-03T10:30:45.123Z",
    "level": "INFO",
    "service": "sparki-api",
    "component": "build-executor",
    "correlation_id": "req_xyz789",
    "user_id": "user_123",
    "workspace_id": "ws_456",
    "message": "Build started successfully",
    "metadata": {
        "build_id": "build_789",
        "pipeline_id": "pipe_123",
        "duration_ms": 1234
    }
}

5.2 Metrics Stack

Prometheus Metrics:
# API Performance
sparki_api_request_duration_seconds{method, endpoint, status} - Histogram
sparki_api_request_errors_total{method, endpoint, error_type} - Counter
sparki_api_active_connections{} - Gauge

# Build Metrics
sparki_build_duration_seconds{project, framework} - Histogram
sparki_build_success_total{project, framework} - Counter
sparki_build_cache_hits_total{} - Counter

# Deployment Metrics
sparki_deployment_duration_seconds{stage, platform} - Histogram
sparki_deployment_success_total{stage, platform} - Counter
sparki_deployment_rollback_total{reason} - Counter

# System Metrics
sparki_system_memory_bytes - Gauge
sparki_system_cpu_usage{core} - Gauge
process_goroutines_count - Gauge
Visualization:
  • Grafana dashboards for real-time monitoring
  • Custom dashboards for pipeline/deployment metrics
  • Alert rules for performance/reliability thresholds

5.3 Distributed Tracing

Jaeger Integration:
Application Traces
  → Jaeger Agent (local collector)
  → Jaeger Collector (centralized)
  → Elasticsearch (trace storage)
  → Jaeger UI (trace visualization)
Trace Propagation:
  • Correlation IDs across all requests
  • Span tracking through service calls
  • Performance analysis per service

6. Deployment Architecture

6.1 Kubernetes Deployment

Service Topology:
Ingress (NGINX/Traefik)

┌─────────────────────────────┐
│ Sparki API (3+ replicas)    │
│ (Go Fiber, Auto-scaling)    │
└──────────┬────────────────┬─┘
           │                │
    ┌──────▼──────┐    ┌────▼──────┐
    │ PostgreSQL  │    │ Redis     │
    │ (HA mode)   │    │ (Cluster) │
    └─────────────┘    └───────────┘

    ┌─────────────────────────────┐
    │ Loco Engine (2+ replicas)   │
    │ (Rust Axum, Job queue)      │
    └─────────────────────────────┘

    ┌─────────────────────────────┐
    │ Shield Auth (2+ replicas)   │
    │ (Django, Session store)     │
    └─────────────────────────────┘
Resource Requests:
| Service     | CPU   | Memory | Replicas | Notes                |
| ----------- | ----- | ------ | -------- | -------------------- |
| Sparki API  | 2000m | 2Gi    | 3+       | Auto-scaling enabled |
| Loco Engine | 1000m | 1Gi    | 2+       | Job queue backed     |
| Shield Auth | 1000m | 1Gi    | 2+       | Session stateful     |
| PostgreSQL  | 2000m | 4Gi    | 1 master | HA via replication   |
| Redis       | 1000m | 2Gi    | 3 nodes  | Cluster mode         |

6.2 High Availability Strategy

Service-Level Agreements:
  • API: 99.95% uptime SLA
  • Loco: 99.95% uptime SLA
  • Auth: 99.95% uptime SLA
HA Components:
  • Multi-replica deployments with load balancing
  • Database replication and automatic failover
  • Redis cluster with sentinel monitoring
  • Cross-region deployment option

7. Security Architecture

7.1 Authentication Flow

Developer Login
  → OAuth 2.0 / SAML 2.0 / Email+Password
  → Shield Authentication Service
  → JWT Token Generation
  → Return Access Token + Refresh Token
  → Store in secure HttpOnly cookie (Web) or use in Authorization header (CLI/API)

7.2 Permission Validation Flow

Request with JWT
  → API Middleware: validate JWT signature
  → Authorization Service: check permissions
  → Cache lookup (Redis): <1ms on hit
  → Database lookup on cache miss; cache result for 15 minutes
  → Allow/Deny request
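The cache-aside logic above can be sketched in Go. This is illustrative only: a plain map stands in for Redis, and the loader callback stands in for the database lookup:

```go
import "time"

type cachedPerm struct {
	allowed bool
	expires time.Time
}

// PermissionChecker consults a cache first and falls back to a loader
// (the database path), caching the result for 15 minutes.
type PermissionChecker struct {
	cache  map[string]cachedPerm
	loader func(userID, workspaceID, action string) bool
}

func NewPermissionChecker(loader func(userID, workspaceID, action string) bool) *PermissionChecker {
	return &PermissionChecker{cache: map[string]cachedPerm{}, loader: loader}
}

func (p *PermissionChecker) Allowed(userID, workspaceID, action string) bool {
	key := userID + ":" + workspaceID + ":" + action
	if c, ok := p.cache[key]; ok && time.Now().Before(c.expires) {
		return c.allowed // cache hit: the fast path
	}
	allowed := p.loader(userID, workspaceID, action) // cache miss: database lookup
	p.cache[key] = cachedPerm{allowed: allowed, expires: time.Now().Add(15 * time.Minute)}
	return allowed
}
```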

7.3 Encryption Strategy

  • In Transit: TLS 1.3 for all external traffic
  • At Rest: AES-256 for sensitive data in database
  • Credentials: Bcrypt for password hashing, encrypted vault for API keys

8. Scalability Patterns

8.1 Horizontal Scaling

Stateless Services:
  • Sparki API: Scale horizontally without session affinity
  • Loco Engine: Scale via job queue (no local state)
  • Shield Auth: Scale via shared token store (Redis)
Stateful Services:
  • PostgreSQL: Vertical scaling + read replicas
  • Redis: Cluster mode for horizontal scaling
  • Object Storage: Inherently scalable (S3)

8.2 Performance Optimization

Caching Strategy:
  • Framework detection results: 5 minutes
  • Pipeline configurations: 5 minutes
  • User permissions: 15 minutes
  • Project metadata: 1 hour
Database Optimization:
  • Indexed queries for frequent operations
  • Connection pooling to reduce overhead
  • Query optimization and monitoring
  • Read replicas for scaling read-heavy workloads

9. Disaster Recovery

9.1 Backup Strategy

Database Backups:
  • Continuous replication to standby
  • Daily snapshots to S3
  • Weekly full backups for archives
  • 30-day retention policy
Configuration Backups:
  • Version control for all pipeline configurations
  • Immutable audit logs
  • Configuration snapshots before changes

9.2 Recovery Procedures

RTO (Recovery Time Objective): <15 minutes
RPO (Recovery Point Objective): <5 minutes

Recovery Steps:
  1. Automated failover to replica
  2. Restore from latest snapshot if needed
  3. Verify data integrity
  4. Resume operations

10. Compliance & Security

10.1 Data Privacy

  • GDPR Compliance: Data subject rights, right to be forgotten
  • HIPAA Compliance: Encryption, audit logging, access controls
  • SOC 2 Type II: Annual audit, continuous controls

10.2 Audit Logging

All authentication, authorization, and data access events logged:
  • User: Who performed action
  • Resource: What was affected
  • Action: What happened
  • Timestamp: When it occurred
  • Result: Success/failure
  • IP Address: Where from

Conclusion

Sparki’s technical architecture is designed around core principles of performance, reliability, and developer experience. The combination of Go Fiber for ultra-fast HTTP performance and Rust Axum for deterministic deployment orchestration positions Sparki as the highest-performance CI/CD platform available. The architecture supports millions of concurrent developers, delivers sub-50ms API response times, and provides 99.95% availability required for mission-critical production workloads.
Document History:
| Version | Date       | Author             | Changes                        |
| ------- | ---------- | ------------------ | ------------------------------ |
| 1.0     | 2025-12-03 | Sparki Engineering | Initial technical architecture |