Sparki Developer Toolkit
Complete guide to Sparki CLI tools and utilities
Table of Contents
- Installation
- Quick Start
- Sparki CLI Commands
- Environment Management
- Deployment Tools
- Debugging Tools
- Utility Scripts
- Advanced Usage
- Troubleshooting
- Support & Resources
Installation
Prerequisites
- Docker Desktop: 4.0+ with Docker Compose
- Kubernetes: kubectl 1.24+, Minikube or Docker Desktop K8s
- Go: 1.20+ (for backend development)
- Node.js: 18+ (for web development)
- Terraform: 1.4+ (for infrastructure)
Install Sparki CLI
# Clone repository
git clone https://github.com/sparki/sparki-tools.git
cd sparki-tools
# Make CLI executable
chmod +x scripts/sparki
chmod +x scripts/sparki-*
# Add to PATH
export PATH="$PWD/scripts:$PATH"
# Or create symlink
sudo ln -s "$PWD/scripts/sparki" /usr/local/bin/sparki
# Verify installation
sparki version
# Output: Sparki CLI v1.0.0
Quick Start
5-Minute Setup
# 1. Initialize environment
sparki init docker-compose
# Creates: ~/.sparki/ with config, certs, data directories
# 2. Start services
sparki start
# Starts: PostgreSQL, Redis, API server, workers
# 3. Check health
sparki health
# Output: Database ✓, Redis ✓, API ✓
# 4. View logs
sparki logs api
# Shows the last 100 lines of API service logs
# 5. Deploy a service
sparki deploy api v1.0.0
# Builds image and deploys to environment
Docker Compose Environment
Best for: Local development, testing, single-developer setups
# Start all services
sparki start docker-compose
# Services started:
# - PostgreSQL: localhost:5432
# - Redis: localhost:6379
# - API: localhost:3000
# - Worker: background process
# Environment file: ~/.sparki/config/dev.env
# Logs: Docker Compose logs view
# Stop services
sparki stop docker-compose
Kubernetes Environment
Best for: Multi-service deployments, production simulation, team development
# Start Kubernetes
sparki start kubernetes
# Automatically:
# - Starts Minikube (if needed)
# - Creates sparki-dev namespace
# - Deploys services via kubectl
# Check status
kubectl get pods -n sparki-dev
# View logs
kubectl logs -f deployment/api -n sparki-dev
# Stop
sparki stop kubernetes
Sparki CLI Commands
init - Initialize Environment
# Docker Compose (default)
sparki init docker-compose
# Kubernetes with Minikube
sparki init kubernetes
# Hybrid (K8s infrastructure + local services)
sparki init hybrid
# Options:
# SPARKI_HOME Set home directory (default: ~/.sparki)
# SPARKI_DEBUG Enable debug output (default: false)
# Creates:
# ~/.sparki/
# ├── config/
# │   └── dev.env
# ├── certs/
# │   ├── cert.pem
# │   └── key.pem
# ├── keys/
# ├── logs/
# └── data/
start - Start Environment
# Start with default (Docker Compose)
sparki start
# Start specific environment
sparki start docker-compose
sparki start kubernetes
sparki start hybrid
# What happens:
# 1. Checks prerequisites
# 2. Loads configuration
# 3. Starts services (pulls images if needed)
# 4. Waits for services to be healthy
# 5. Shows connection strings
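Step 4 above (waiting for services to become healthy) can be sketched as a generic retry helper. The `wait_for` function and the probed URL are illustrative, not part of the Sparki CLI:

```shell
#!/usr/bin/env bash
# wait_for: retry a command until it succeeds or attempts run out.
# Usage: wait_for <attempts> <delay-seconds> <command...>
wait_for() {
  local attempts=$1 delay=$2; shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@" >/dev/null 2>&1; then
      return 0                    # command succeeded: service is healthy
    fi
    sleep "$delay"                # back off before the next probe
  done
  return 1                        # still failing after all attempts
}

# Example probe (illustrative endpoint):
# wait_for 30 2 curl -fsS http://localhost:3000/health
```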
stop - Stop Environment
sparki stop
# Stops all services gracefully
# Data persists in volumes
status - Show Status
sparki status
# Output:
# Sparki Development Environment Status
#
# Docker Compose Services:
# NAME      STATUS    PORTS
# postgres  Up 2 min  0.0.0.0:5432->5432/tcp
# redis     Up 2 min  0.0.0.0:6379->6379/tcp
# api       Up 1 min  0.0.0.0:3000->3000/tcp
# worker    Up 1 min
#
# Configuration: ~/.sparki/config/dev.env
logs - View Service Logs
# View all logs (last 100 lines, follow)
sparki logs
# View specific service
sparki logs api
sparki logs postgres
sparki logs redis
# View specific number of lines
sparki logs api 500
# View without following
docker-compose logs -t api | head -50
deploy - Deploy Service
# Deploy with default version (latest)
sparki deploy api
# Deploy with specific version
sparki deploy api v1.2.3
# What happens:
# 1. Validates service exists
# 2. Builds Docker image
# 3. Tags image (sparki/api:v1.2.3)
# 4. Deploys to environment (Docker Compose or K8s)
# 5. Verifies deployment health
# Examples:
sparki deploy api v1.2.3 # Deploy to current environment
sparki deploy infrastructure # Deploy infra service
sparki deploy notification latest # Deploy latest version
test - Run Tests
# Run all tests (unit + integration + e2e)
sparki test all
# Run specific test type
sparki test unit # Go unit tests
sparki test integration # Docker Compose integration tests
sparki test e2e # Browser-based end-to-end tests
# Run tests for specific service
sparki test unit api
sparki test e2e web
# Examples:
sparki test unit # Fast feedback (1-2 min)
sparki test integration # Full stack (5-10 min)
sparki test e2e # Browser tests (10-15 min)
sparki test all # Complete suite (20-30 min)
lint - Lint Code
# Lint all code
sparki lint
# Lint specific path
sparki lint ./engine
sparki lint ./web
# What runs:
# - Go: golangci-lint (engine/)
# - TypeScript: eslint + prettier (web/)
# - YAML: yamllint (infrastructure/)
# Format code
sparki fmt # Auto-format all code
sparki fmt ./engine # Format Go code
db - Database Operations
# Run migrations up
sparki db migrate
# Rollback (down)
sparki db rollback
# Reset database
sparki db reset # Drop and recreate
sparki db seed # Load seed data
# Examples:
sparki db migrate # Apply all pending migrations
sparki db rollback # Rollback one migration
health - Health Check
# Quick health check
sparki health
# Detailed check
sparki health --detailed
# Output:
# Health Check
#
# Database: ✓
# Redis: ✓
# API: ✓
#
# Kubernetes Status:
# NAME              READY  STATUS   RESTARTS  AGE
# postgres-0        1/1    Running  0         5m
# redis-0           1/1    Running  0         5m
# api-abc123def456  1/1    Running  0         2m
Environment Management
Configuration
# Configuration file: ~/.sparki/config/dev.env
# Edit configuration
nano ~/.sparki/config/dev.env
# Common settings:
DB_HOST=postgres # Database host
DB_PORT=5432 # Database port
DB_NAME=sparki_dev # Database name
REDIS_HOST=redis # Redis host
API_PORT=3000 # API port
JWT_SECRET=your-secret # JWT signing key
LOG_LEVEL=debug # Log level
Multiple Environments
# Create environment for different purposes
mkdir -p ~/.sparki/{dev,staging,test}/config
# Copy config into each environment (SPARKI_HOME expects a config/ subdirectory)
cp ~/.sparki/config/dev.env ~/.sparki/dev/config/dev.env
# Use specific environment
export SPARKI_HOME=~/.sparki/dev
sparki start
# Or specify inline
SPARKI_HOME=~/.sparki/staging sparki start
Secrets Management
# Generate development secrets
sparki init
# Secrets stored in: ~/.sparki/config/dev.env
# Production secrets (never commit):
# Use environment variables or secret management tools
# - AWS Secrets Manager
# - HashiCorp Vault
# - Kubernetes Secrets
# - .env files (gitignored)
# Export secrets for deployment
export JWT_SECRET=$(grep JWT_SECRET ~/.sparki/config/dev.env | cut -d= -f2)
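Rather than grepping out one variable at a time, the whole env file can be exported in one pass. A minimal sketch; `load_env` is our helper name, not a Sparki command:

```shell
#!/usr/bin/env bash
# load_env: export every KEY=VALUE pair from an env file into this shell.
# The default path mirrors the sparki config file shown above.
load_env() {
  local file=$1
  set -a            # auto-export everything assigned while sourcing
  # shellcheck disable=SC1090
  . "$file"
  set +a
}

# load_env ~/.sparki/config/dev.env
# echo "$JWT_SECRET"
```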
Deployment Tools
Local Deployment
# Deploy to local Docker Compose
sparki deploy api v1.2.3
# Verify deployment
sparki health
sparki logs api | head -20
# Rollback to previous version: repoint the api service at the previous
# image tag (in docker-compose.yml or an override file), then recreate it
docker-compose stop api
docker-compose up -d --no-deps api
Kubernetes Deployment
# Deploy to Kubernetes cluster
sparki deploy api v1.2.3
# Monitor rollout
kubectl rollout status deployment/api -n sparki-dev
# View pods
kubectl get pods -n sparki-dev -l app=api
# Check events
kubectl describe deployment api -n sparki-dev
# Scale deployment
kubectl scale deployment/api --replicas=5 -n sparki-dev
# Rollback
kubectl rollout undo deployment/api -n sparki-dev
Blue-Green Deployment
# Deploy new version alongside old
kubectl set image deployment/api-blue "api=sparki/api:v1.2.3" -n sparki-dev
# Wait for all pods to be ready
kubectl rollout status deployment/api-blue -n sparki-dev
# Switch traffic to new version
kubectl patch service api -p '{"spec":{"selector":{"version":"blue"}}}' -n sparki-dev
# Keep old version for quick rollback
kubectl set selector service api-green "version=green" -n sparki-dev
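The steps above boil down to: update the idle color, wait for its rollout, then repoint the Service selector. A sketch wrapping them in one function; `KUBECTL` and `NS` are overridable so the flow can be dry-run, and the deployment/service names simply mirror the example:

```shell
#!/usr/bin/env bash
# switch_to: the blue-green steps above as a single function.
# KUBECTL and NS are overridable for dry runs; names are illustrative.
KUBECTL="${KUBECTL:-kubectl}"
NS="${NS:-sparki-dev}"

switch_to() {
  local color=$1 image=$2
  $KUBECTL set image "deployment/api-$color" "api=$image" -n "$NS" || return 1
  $KUBECTL rollout status "deployment/api-$color" -n "$NS" || return 1
  # Repoint the Service selector so traffic shifts to the new color.
  $KUBECTL patch service api -n "$NS" \
    -p "{\"spec\":{\"selector\":{\"version\":\"$color\"}}}"
}

# switch_to blue sparki/api:v1.2.3
```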
Debugging Tools
View Logs
# Tail live logs
sparki logs api
# Get last N lines
sparki logs api 500
# Get logs with timestamp
docker-compose logs -t api    # -t is shorthand for --timestamps
# Filter logs
docker-compose logs api | grep ERROR
# Save logs to file
docker-compose logs > logs.txt
Inspect Services
# Get service info
docker-compose ps
kubectl get pods -n sparki-dev
# Describe pod
kubectl describe pod api-abc123def456 -n sparki-dev
# Get container logs
docker logs container-id
kubectl logs pod-name -n sparki-dev -c container-name
# Execute command in container
docker-compose exec api sh
kubectl exec -it pod-name -n sparki-dev -- /bin/sh
Debug Database
# Connect to PostgreSQL
docker-compose exec postgres psql -U sparki -d sparki_dev
# Common queries (run inside psql; -- starts a SQL comment)
SELECT version();            -- Check version
\dt                          -- List tables
\du                          -- List users
SELECT COUNT(*) FROM users;  -- Row count
# Connect to Redis
docker-compose exec redis redis-cli
# Check keys
KEYS *
GET my-key
Performance Monitoring
# Monitor Docker container resources
docker stats
# Monitor Kubernetes pod resources
kubectl top pod -n sparki-dev
kubectl top node
# Check system resources
free -h # Memory
df -h # Disk space
top # Process list
Utility Scripts
Database Migration Tool
# Create migration
sparki db create-migration add_user_roles
# Generated file: migrations/20250101120000_add_user_roles.sql
# Apply migrations
sparki db migrate
# Rollback last migration
sparki db rollback 1
# Rollback multiple
sparki db rollback 5
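The generated filename above suggests a `YYYYMMDDHHMMSS` timestamp prefix. A sketch of how such a name might be produced; `new_migration` is our helper, not a CLI command:

```shell
#!/usr/bin/env bash
# new_migration: build a filename matching the pattern shown above
# (UTC timestamp prefix followed by the migration name).
new_migration() {
  printf '%s_%s.sql\n' "$(date -u +%Y%m%d%H%M%S)" "$1"
}

# new_migration add_user_roles   # e.g. 20250101120000_add_user_roles.sql
```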
Code Generation
# Generate Go types from database schema
sparki generate types
# Generate API client
sparki generate client go
sparki generate client typescript
sparki generate client python
# Generate test fixtures
sparki generate fixtures
Docker Image Management
# Build image for service
docker build -t sparki/api:v1.2.3 -f Dockerfile ./engine
# Tag image
docker tag sparki/api:v1.2.3 sparki/api:latest
# Push to registry
docker push sparki/api:v1.2.3
# View images
docker images | grep sparki
Advanced Usage
Custom Commands
# Create custom command script
cat > scripts/sparki-custom << 'EOF'
#!/bin/bash
# Custom Sparki command
source "$(dirname "$0")/sparki"
# Your custom logic here
EOF
chmod +x scripts/sparki-custom
# Use it
sparki custom
CI/CD Integration
# Run tests in CI pipeline
sparki test all
# Deploy from CI/CD
sparki deploy api $CI_COMMIT_SHA
# Health check
sparki health || exit 1
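The three CI steps above can be combined into one fail-fast gate. A minimal sketch; `ci_deploy` is our wrapper, and `SPARKI` is overridable so the gate can be exercised without the real CLI:

```shell
#!/usr/bin/env bash
# ci_deploy: test, deploy, then health-check; any failure fails the job.
SPARKI="${SPARKI:-sparki}"

ci_deploy() {
  local service=$1 version=$2
  $SPARKI test unit || return 1              # fast feedback first
  $SPARKI deploy "$service" "$version" || return 1
  $SPARKI health || return 1                 # fail the job if unhealthy
}

# ci_deploy api "$CI_COMMIT_SHA"
```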
Batch Operations
# Deploy multiple services
for service in api worker notification; do
sparki deploy $service v1.2.3
done
# Stop all services
for service in $(docker-compose ps -q); do
docker stop $service
done
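The deploy loop above continues even if one service fails. A fail-fast variant; the `DEPLOY_CMD` indirection is ours, added so the loop can be dry-run:

```shell
#!/usr/bin/env bash
# deploy_all: deploy a list of services, stopping at the first failure.
# DEPLOY_CMD defaults to "sparki deploy" and is overridable for dry runs.
deploy_all() {
  local version=$1; shift
  local svc
  for svc in "$@"; do
    if ! ${DEPLOY_CMD:-sparki deploy} "$svc" "$version"; then
      echo "deploy failed for $svc, aborting" >&2
      return 1
    fi
  done
}

# deploy_all v1.2.3 api worker notification
```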
Debugging Script
#!/bin/bash
# save as: scripts/sparki-debug
# Full environment debugging
echo "=== System Info ==="
uname -a
which docker
docker --version
echo "=== Docker Status ==="
docker-compose ps
docker images | grep sparki
echo "=== Kubernetes Status ==="
kubectl get nodes
kubectl get pods -A
echo "=== Network ==="
netstat -tlnp | grep LISTEN
echo "=== Services Health ==="
curl http://localhost:3000/health
redis-cli ping
psql -U sparki -c "SELECT 1"
Troubleshooting
Services Won’t Start
# Check Docker daemon
docker ps
# Check logs
sparki logs
# Verify configuration
cat ~/.sparki/config/dev.env
# Reset environment
sparki stop
docker-compose down -v # Remove volumes
sparki start
Port Already in Use
# Check what's using the port
lsof -i :3000
netstat -tlnp | grep 3000
# Kill process
kill -9 process-id
# Or remap the host port: change the api port mapping in
# docker-compose.yml (e.g. "3001:3000"), then restart the service
Database Connection Issues
# Test connection
docker-compose exec postgres psql -U sparki -d sparki_dev -c "SELECT 1"
# Check environment variables
env | grep DB_
# Reset database
sparki db reset
Out of Memory
# Check memory usage
docker stats
free -h
# Increase Docker memory limit
# Docker Desktop: Preferences → Resources → Memory
# Kill unused containers
docker container prune
Support & Resources
- Documentation: https://docs.sparki.io/cli
- Examples: https://github.com/sparki/sparki-tools/tree/main/examples
- Issues: https://github.com/sparki/sparki-tools/issues
- Slack: #sparki-devtools
- Discord: https://discord.gg/sparki