Sparki.tools DevOps & CI/CD Guide

Complete guide for setting up development, testing, and production environments for Sparki.tools.

Table of Contents

  1. Local Development Setup
  2. Docker Setup
  3. Testing
  4. CI/CD Pipeline
  5. Kubernetes Deployment
  6. Monitoring & Observability
  7. Database Management

Local Development Setup

Prerequisites

  • Go 1.24+
  • Docker & Docker Compose (Compose file format 3.8+)
  • Make
  • PostgreSQL client tools (optional)

Quick Start

# Clone the repository
git clone https://github.com/sparki.tools/sparki.tools.git
cd sparki.tools

# Start development environment with all services
make dev

# Initialize database with migrations
make dev-db-init

# Run the API in development mode
make run-dev

# Run all tests
make test

Available Make Commands

# Development
make dev             # Start all development services
make dev-down        # Stop development services
make dev-logs        # Show service logs
make dev-reset       # Reset environment (clean restart)

# Testing
make test            # Run all tests
make test-unit       # Run unit tests only
make test-integration # Run integration tests
make test-coverage   # Generate coverage report
make test-benchmark  # Run benchmarks

# Building
make build           # Build API binary
make build-all       # Build all binaries
make run             # Build and run API

# Database
make dev-db-init     # Initialize database
make dev-db-seed     # Seed test data
make db-migrate-up   # Run pending migrations
make db-reset        # Reset database

# Cleanup
make clean           # Clean build artifacts
make clean-all       # Full cleanup

Docker Setup

Development Environment

The docker-compose.dev.yml file provides a complete local development environment:
# Start all services
docker-compose -f docker-compose.dev.yml up -d

# Services available:
# - PostgreSQL (dev): localhost:5432
# - PostgreSQL (test): localhost:5433
# - Redis (dev): localhost:6379
# - Redis (test): localhost:6380
# - Prometheus: http://localhost:9090
# - Grafana: http://localhost:3000 (admin/admin)
# - Jaeger: http://localhost:16686

# View logs
docker-compose -f docker-compose.dev.yml logs -f postgres

# Stop services
docker-compose -f docker-compose.dev.yml down

# Clean volumes (careful - deletes data)
docker-compose -f docker-compose.dev.yml down -v

Building Docker Images

# Build API image
make docker-build

# Push to registry
make docker-push

# Build test image
make docker-build-test

Testing

Unit Tests

# Run all unit tests
make test-unit

# Run tests for specific package
cd engine && go test -v ./internal/detection

# Run tests with coverage
make test-coverage

# View coverage report
open engine/coverage.html

Integration Tests

Requirements:
  • PostgreSQL running on localhost:5432
  • Redis running on localhost:6379
# Start test services
make dev

# Run integration tests
make test-integration

# Run specific integration test
cd engine && go test -v -run TestIntegration_ProjectWorkflow ./testing/integration

Benchmark Tests

# Run benchmarks
make test-benchmark

# Run benchmarks with specific memory stats
cd engine && go test -bench=. -benchmem ./internal/executor

Load Testing

# Run load tests
make test-load

# This runs benchmark tests for 10 seconds each

CI/CD Pipeline

GitHub Actions Workflow

The .github/workflows/ci-cd.yml workflow provides automated testing and deployment.

Triggers:
  • Push to main or develop
  • Pull requests to main or develop
  • Push to release branches
Jobs:
  1. Lint - Code quality checks with golangci-lint
  2. Test Unit - Unit tests on Go 1.24 and 1.25
  3. Test Integration - Integration tests with PostgreSQL and Redis
  4. Security - Gosec and Nancy vulnerability scanning
  5. Build Docker - Build and push Docker images
  6. Deploy Staging - Deploy to staging on develop push
  7. Deploy Production - Deploy to production on release branch

Running Locally

# Run all CI tests locally
make ci-test

# Build as CI would
make ci-build

Secrets Required in GitHub

SLACK_WEBHOOK          # For deployment notifications
GITHUB_TOKEN          # Automatic (GitHub Actions)

Kubernetes Deployment

Prerequisites

  • Kubernetes 1.24+
  • kubectl configured
  • cert-manager for TLS
  • nginx-ingress-controller

Deploying to Kubernetes

# Apply manifests
kubectl apply -f k8s/sparki-deployment.yaml

# Check deployment status
kubectl get pods -n sparki
kubectl logs -n sparki deployment/sparki-api

# Port forward for local testing
kubectl port-forward -n sparki svc/sparki-api 8080:80

# View resources
kubectl get all -n sparki

Accessing Services

# API (via ingress - requires DNS setup)
https://api.sparki.tools

# Grafana
http://localhost:3000  (port-forward required)

# Prometheus
http://localhost:9090  (port-forward required)

Scaling

# Manual scaling
kubectl scale deployment sparki-api -n sparki --replicas=5

# Check HPA status
kubectl get hpa -n sparki

Updating Deployment

# Update image
kubectl set image deployment/sparki-api -n sparki \
  api=ghcr.io/sparki/api:latest

# Check rollout status
kubectl rollout status deployment/sparki-api -n sparki

# Rollback if needed
kubectl rollout undo deployment/sparki-api -n sparki

Monitoring & Observability

Prometheus

Metrics:
  • Application metrics: :8080/metrics
  • HTTP request latency, errors, status codes
  • Go runtime metrics (goroutines, memory, GC)
  • Docker executor metrics
Access:
# Port forward
kubectl port-forward -n sparki svc/prometheus 9090:9090

# Query examples
rate(http_requests_total[5m])
engine_build_duration_seconds_bucket

Grafana

Default Login:
  • Username: admin
  • Password: admin
Dashboards included:
  • Application Overview
  • HTTP Metrics
  • Docker Executor
  • Database Performance

Jaeger Distributed Tracing

Access: http://localhost:16686

Traces include:
  • API request flows
  • Database queries
  • Docker executor operations

Health Checks

# Health check (liveness)
curl http://localhost:8080/health

# Readiness check
curl http://localhost:8080/ready

Database Management

Migrations

# Run pending migrations
make db-migrate-up

# Check migration status
make db-migrate-status

# Rollback last migration
make db-migrate-down

# Force migrate version
cd engine && go run ./cmd/migrate/main.go force <version>

Backup and Restore

# Backup PostgreSQL
docker-compose -f docker-compose.dev.yml exec postgres \
  pg_dump -U sparki sparki_dev > backup.sql

# Restore from backup
docker-compose -f docker-compose.dev.yml exec -T postgres \
  psql -U sparki sparki_dev < backup.sql

Database Reset (Development Only)

# Reset database
make db-reset

# This will:
# 1. Drop existing database
# 2. Create new database
# 3. Run all migrations
# 4. Seed test data (optional)

Environment Variables

Development

APP_ENVIRONMENT=development
APP_PORT=8080
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USER=sparki
DATABASE_PASSWORD=sparki
DATABASE_NAME=sparki_dev
REDIS_URL=localhost:6379
LOG_LEVEL=debug
LOG_FORMAT=json

Production

APP_ENVIRONMENT=production
APP_PORT=8080
DATABASE_HOST=postgres.default.svc.cluster.local
DATABASE_PORT=5432
DATABASE_USER=${DB_USER}
DATABASE_PASSWORD=${DB_PASSWORD}
DATABASE_NAME=sparki_prod
REDIS_URL=${REDIS_URL}
LOG_LEVEL=info
LOG_FORMAT=json
ENABLE_METRICS=true
ENABLE_TRACING=true

Troubleshooting

Database Connection Issues

# Check PostgreSQL is running
docker-compose -f docker-compose.dev.yml ps postgres

# Test connection
psql -h localhost -U sparki -d sparki_dev

# View logs
docker-compose -f docker-compose.dev.yml logs postgres

Redis Connection Issues

# Check Redis is running
docker-compose -f docker-compose.dev.yml ps redis

# Test connection
redis-cli -p 6379 ping

# View logs
docker-compose -f docker-compose.dev.yml logs redis

API Won’t Start

# Check logs
docker-compose -f docker-compose.dev.yml logs api

# Verify services are running
docker-compose -f docker-compose.dev.yml ps

# Check ports are available
lsof -i :8080

Tests Failing

# Check test database exists
psql -h localhost -U sparki -d sparki_test

# Reset test environment
make dev-down
make dev
make test-unit

Performance Tuning

Database Pool Settings

MaxOpenConns:    25,  // Increase for high concurrency
MaxIdleConns:    5,   // Keep connections in pool
ConnMaxIdleTime: 5 * time.Minute,

Redis Configuration

  • Use persistence: appendonly yes
  • Set eviction: maxmemory-policy allkeys-lru
  • Increase client output buffer: client-output-buffer-limit

Go Runtime

# Set GOMAXPROCS for CPU utilization
export GOMAXPROCS=8

# Enable CPU profiling
curl http://localhost:8080/debug/pprof/profile -o cpu.prof
go tool pprof cpu.prof
