Sparki Monetization Implementation Guide

Document ID: SPARKI-IMPLEMENTATION-001
Version: 1.0
Date: 2025-12-04
Status: Developer Roadmap
Audience: Backend Engineers, DevOps

Executive Summary

This guide provides an 8-week, step-by-step implementation plan for Sparki’s tiered monetization system. The implementation:
  1. Preserves all existing Sparki functionality (no breaking changes)
  2. Runs alongside the open-source code (billing is an optional add-on)
  3. Enables all four tiers (Community, Team, Pro, Enterprise)
  4. Integrates with Polar for payments
  5. Can be deployed incrementally (test → staging → production)
Key Timeline: 8 weeks, 6-8 engineers, ~3,000 lines of code

Phase 1: Preparation & Infrastructure (Weeks 1-2)

Week 1: Setup Polar Account & Database

1.1 Polar Account Setup (1 engineer, 2 hours)

  1. Create Polar.sh account at https://dashboard.polar.sh
  2. Create organization: “Sparki”
  3. Create API tokens:
    • Organization Access Token (OAT) for backend API calls
    • Secret webhook key for webhook validation
  4. Store in vault:
    # .env.vault or HashiCorp Vault
    POLAR_API_KEY=oat_prod_abc123...
    POLAR_WEBHOOK_SECRET=whsec_prod_xyz789...
    POLAR_API_URL=https://api.polar.sh/v1
    

1.2 Create Polar Products (1 engineer, 3 hours)

// script: scripts/setup_polar_products.go
package main

import (
    "fmt"
    "log"
    "os"

    "sparki/pkg/polar"
)

func main() {
    client := polar.NewClient(os.Getenv("POLAR_API_KEY"))

    // Product 1: Team ($25/month)
    teamProduct, err := client.CreateProduct(&polar.Product{
        Name:        "Sparki Team",
        Description: "5 team seats, 200 concurrent jobs, private projects",
        Price:       2500, // in cents
        Recurring: &polar.Recurring{
            Interval:      "month",
            IntervalCount: 1,
        },
    })
    if err != nil {
        log.Fatalf("create Team product: %v", err)
    }

    // Product 2: Pro ($99/month)
    proProduct, err := client.CreateProduct(&polar.Product{
        Name:        "Sparki Pro",
        Description: "20 team seats, 1000 concurrent jobs, unlimited projects",
        Price:       9900, // in cents
        Recurring: &polar.Recurring{
            Interval:      "month",
            IntervalCount: 1,
        },
    })
    if err != nil {
        log.Fatalf("create Pro product: %v", err)
    }

    // Product 3: Enterprise (custom pricing, negotiated per contract)
    enterpriseProduct, err := client.CreateProduct(&polar.Product{
        Name:        "Sparki Enterprise",
        Description: "Unlimited everything, on-premises, SSO",
        Price:       0, // custom
        Recurring: &polar.Recurring{
            Interval:      "month",
            IntervalCount: 1,
        },
    })
    if err != nil {
        log.Fatalf("create Enterprise product: %v", err)
    }

    // Persist product IDs so billing handlers can map tiers to products.
    // db is the application's database handle (initialization omitted).
    config := &Config{
        PolarProductTeam:       teamProduct.ID,
        PolarProductPro:        proProduct.ID,
        PolarProductEnterprise: enterpriseProduct.ID,
    }
    if err := db.SaveConfig(config); err != nil {
        log.Fatalf("save config: %v", err)
    }

    fmt.Println("✓ Created Polar products")
}
Run:
go run scripts/setup_polar_products.go

1.3 Database Schema Migration (1 engineer, 4 hours)

File: migrations/001_monetization_schema.sql
-- Users table (add columns if not exists)
ALTER TABLE users
ADD COLUMN IF NOT EXISTS tier VARCHAR(50) DEFAULT 'community',
ADD COLUMN IF NOT EXISTS polar_order_id VARCHAR(255),
ADD COLUMN IF NOT EXISTS polar_customer_id VARCHAR(255),
ADD COLUMN IF NOT EXISTS subscription_created_at TIMESTAMP,
ADD COLUMN IF NOT EXISTS subscription_canceled_at TIMESTAMP;

-- Create billing table
CREATE TABLE IF NOT EXISTS billing_subscriptions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL UNIQUE,
    tier VARCHAR(50) NOT NULL,
    polar_order_id VARCHAR(255),
    polar_product_id VARCHAR(255),
    status VARCHAR(50),  -- active, canceled, expired
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);

-- Create usage_metrics table (TimescaleDB)
CREATE TABLE IF NOT EXISTS usage_metrics (
    time TIMESTAMP NOT NULL,
    user_id UUID NOT NULL,
    metric_type VARCHAR(50) NOT NULL,
    value INTEGER NOT NULL DEFAULT 1,
    synced_to_polar BOOLEAN DEFAULT FALSE,
    polar_event_id VARCHAR(255),
    metadata JSONB
);

SELECT create_hypertable(
    'usage_metrics', 'time',
    if_not_exists => TRUE
);

-- Create tier configuration table
CREATE TABLE IF NOT EXISTS tier_config (
    id SERIAL PRIMARY KEY,
    tier VARCHAR(50) UNIQUE NOT NULL,
    config JSONB NOT NULL,
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Create webhook logs (for debugging)
CREATE TABLE IF NOT EXISTS webhook_logs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    webhook_id VARCHAR(255),
    event_type VARCHAR(50),
    status VARCHAR(50),  -- success, failure
    request_body JSONB,
    response_body JSONB,
    error_message TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Create indexes
CREATE INDEX IF NOT EXISTS idx_users_tier ON users(tier);
CREATE INDEX IF NOT EXISTS idx_users_polar_order ON users(polar_order_id);
CREATE INDEX IF NOT EXISTS idx_subs_user ON billing_subscriptions(user_id);
CREATE INDEX IF NOT EXISTS idx_metrics_user_time ON usage_metrics(user_id, time DESC);
CREATE INDEX IF NOT EXISTS idx_metrics_unsync ON usage_metrics(synced_to_polar, time)
    WHERE synced_to_polar = FALSE;
Run migrations:
cd api
go run cmd/migrate/main.go up

Week 2: Load Tier Configuration

2.1 Tier Configuration YAML (1 engineer, 2 hours)

File: config/tiers.yaml (as per SPARKI_POLAR_INTEGRATION_TECHNICAL.md §2.2)

2.2 Load Configuration on Startup (1 engineer, 3 hours)

File: internal/config/loader.go
package config

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

type TierConfig struct {
    DisplayName  string          `yaml:"display_name"`
    PriceMonthly int             `yaml:"price_monthly"`
    Quotas       map[string]int  `yaml:"quotas"`
    Features     map[string]bool `yaml:"features"`
}

var TierConfigs = make(map[string]TierConfig)

func LoadTierConfiguration(filepath string) error {
    data, err := os.ReadFile(filepath)
    if err != nil {
        return fmt.Errorf("read tier config: %w", err)
    }

    var wrapper struct {
        Tiers map[string]TierConfig `yaml:"tiers"`
    }
    if err := yaml.Unmarshal(data, &wrapper); err != nil {
        return fmt.Errorf("parse tier config: %w", err)
    }

    TierConfigs = wrapper.Tiers
    return nil
}

// In main.go — fail fast if the config is missing or malformed
// (an init func would swallow the error):
func main() {
    if err := config.LoadTierConfiguration("config/tiers.yaml"); err != nil {
        log.Fatalf("tier config: %v", err)
    }
    // ...
}
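Once `TierConfigs` is loaded, handlers need a safe way to read quotas for a tier that may be stale or unknown (e.g. a bad value in the users table). One option is to fall back to the free tier rather than error. A sketch, with a trimmed local copy of the struct and a hypothetical `QuotaFor` helper (neither is in the codebase):

```go
package main

import "fmt"

// TierConfig mirrors the loader's struct, trimmed to what this example needs.
type TierConfig struct {
    Quotas   map[string]int
    Features map[string]bool
}

// TierConfigs would normally be populated by LoadTierConfiguration.
var TierConfigs = map[string]TierConfig{
    "community": {Quotas: map[string]int{"concurrent_jobs": 10}, Features: map[string]bool{"private_projects": false}},
    "team":      {Quotas: map[string]int{"concurrent_jobs": 200}, Features: map[string]bool{"private_projects": true}},
}

// QuotaFor looks up a quota, falling back to the community tier for unknown
// tiers so a bad DB value degrades to the free plan instead of failing.
func QuotaFor(tier, quota string) int {
    cfg, ok := TierConfigs[tier]
    if !ok {
        cfg = TierConfigs["community"]
    }
    return cfg.Quotas[quota]
}

func main() {
    fmt.Println(QuotaFor("team", "concurrent_jobs"))    // 200
    fmt.Println(QuotaFor("unknown", "concurrent_jobs")) // falls back to community: 10
}
```

Failing closed to the free tier is a design choice: it can never grant paid features by accident, at the cost of briefly under-serving a paying user if config is misloaded.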

2.3 Infrastructure Setup (DevOps, 2 hours)

  • PostgreSQL 14+ with TimescaleDB extension
  • Redis 7+ for caching
  • Prometheus for metrics
  • Create vault/secrets for Polar credentials

Phase 2: Tier Validation System (Weeks 3-4)

Week 3: API Gateway Middleware

3.1 Tier Validation Middleware (2 engineers, 5 hours)

File: internal/middleware/tier_validator.go
package middleware

import (
    "fmt"
    "time"

    "github.com/gofiber/fiber/v2"

    "sparki/internal/config"
    "sparki/pkg/database"
)

type TierContext struct {
    UserID   string
    Tier     string
    Config   config.TierConfig
}

func TierValidator(db *database.DB, cache *Cache) fiber.Handler {
    return func(c *fiber.Ctx) error {
        // Step 1: Extract API key
        apiKey := c.Get("Authorization")
        if apiKey == "" {
            return c.Status(401).JSON(fiber.Map{
                "error": "Missing Authorization header",
            })
        }

        // Step 2: Get tier (cache first, then DB)
        tier, err := getTier(apiKey, db, cache)
        if err != nil {
            return c.Status(401).JSON(fiber.Map{
                "error": "Invalid API key",
            })
        }

        // Step 3: Get tier config
        tierConfig, ok := config.TierConfigs[tier]
        if !ok {
            return c.Status(500).JSON(fiber.Map{
                "error": "Invalid tier configuration",
            })
        }

        // Step 4: Attach to context
        c.Locals("tier", tier)
        c.Locals("tier_config", tierConfig)

        // Step 5: Check rate limit
        if !checkRateLimit(apiKey, tierConfig, cache) {
            return c.Status(429).JSON(fiber.Map{
                "error": fmt.Sprintf(
                    "Rate limited: %d req/min",
                    tierConfig.Quotas["rate_limit_per_minute"],
                ),
            })
        }

        return c.Next()
    }
}

func getTier(apiKey string, db *database.DB, cache *Cache) (string, error) {
    // Try cache
    if tier, ok := cache.Get("tier:" + apiKey); ok {
        return tier.(string), nil
    }

    // Query DB
    user, err := db.GetUserByAPIKey(apiKey)
    if err != nil {
        return "", err
    }

    // Cache for 5 minutes
    cache.Set("tier:"+apiKey, user.Tier, 5*time.Minute)

    return user.Tier, nil
}

func checkRateLimit(apiKey string, config config.TierConfig, cache *Cache) bool {
    limit := config.Quotas["rate_limit_per_minute"]
    key := "ratelimit:" + apiKey

    // Fixed one-minute window: INCR the counter and set its TTL on first use.
    // Note this allows a brief burst of up to 2x the limit across a window
    // boundary; switch to a sliding window if that matters.
    count, _ := cache.Increment(key, 1)
    if count == 1 {
        cache.Expire(key, 1*time.Minute)
    }

    return count <= limit
}

3.2 Register Middleware in Router (1 engineer, 2 hours)

File: cmd/api/main.go
func main() {
    app := fiber.New()
    db := database.Connect()
    cache := NewRedisCache()

    // Register tier validator on all routes
    app.Use(middleware.TierValidator(db, cache))

    // Routes
    api := app.Group("/api/v1")

    // Projects
    api.Get("/projects", handlers.ListProjects)
    api.Post("/projects", handlers.CreateProject)

    // Pipelines
    api.Get("/pipelines", handlers.ListPipelines)
    api.Post("/pipelines", handlers.CreatePipeline)

    // Start server
    app.Listen(":8080")
}

Week 4: Quota Enforcement in Handlers

4.1 Private Projects Endpoint (1 engineer, 3 hours)

File: internal/handlers/projects.go
func (h *ProjectHandler) CreatePrivateProject(c *fiber.Ctx) error {
    tierConfig := c.Locals("tier_config").(config.TierConfig)
    user := c.Locals("user").(database.User)

    // Check feature flag
    if !tierConfig.Features["private_projects"] {
        return c.Status(403).JSON(fiber.Map{
            "error": "Private projects require Team tier",
            "upgrade": "https://sparki.tools/pricing",
        })
    }

    // Check quota
    count := h.db.CountPrivateProjects(user.ID)
    limit := tierConfig.Quotas["private_projects"]

    if count >= limit {
        return c.Status(403).JSON(fiber.Map{
            "error": fmt.Sprintf("Limit exceeded: %d/%d", count, limit),
        })
    }

    // Create project
    project := h.db.CreateProject(user.ID, true)

    return c.Status(201).JSON(project)
}

4.2 Concurrent Jobs Endpoint (1 engineer, 3 hours)

File: internal/handlers/builds.go
func (h *BuildHandler) CreateBuild(c *fiber.Ctx) error {
    tierConfig := c.Locals("tier_config").(config.TierConfig)
    user := c.Locals("user").(database.User)

    // Check concurrent job quota
    running := h.db.CountRunningBuilds(user.ID)
    limit := tierConfig.Quotas["concurrent_jobs"]

    if running >= limit {
        // Queue instead of running
        h.db.QueueBuild(user.ID)
        return c.Status(202).JSON(fiber.Map{
            "status": "queued",
            "message": fmt.Sprintf(
                "At limit (%d concurrent). Job queued.",
                limit,
            ),
        })
    }

    // Run build
    build := h.db.StartBuild(user.ID)

    return c.Status(201).JSON(build)
}

Phase 3: Usage Tracking (Week 5)

5.1 Usage Service (1 engineer, 3 hours)

File: internal/services/usage_service.go
type UsageService struct {
    db *database.DB
}

func (s *UsageService) RecordMetric(
    ctx context.Context,
    userID string,
    metric string,
    value int,
) {
    // Fire-and-forget: record in a goroutine so the request path never blocks.
    // Use a detached context — the request ctx may already be canceled by the
    // time the INSERT runs.
    go func() {
        dbCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        err := s.db.Exec(dbCtx, `
            INSERT INTO usage_metrics (time, user_id, metric_type, value)
            VALUES (NOW(), $1, $2, $3)
        `, userID, metric, value)
        if err != nil {
            log.Warnf("Failed to record metric %s: %v", metric, err)
        }
    }()
}
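One caveat with per-call goroutines: under a traffic spike and a slow database, they accumulate without bound. A bounded alternative is a buffered channel drained by a single writer; full buffers drop events, trading completeness for latency (usually acceptable for best-effort metering). A sketch with an injectable sink standing in for the INSERT (all names here are illustrative):

```go
package main

import (
    "fmt"
    "sync"
)

// metricEvent is the unit of work handed to the background writer.
type metricEvent struct {
    UserID string
    Metric string
    Value  int
}

// metricBuffer bounds in-flight work: events go into a buffered channel and a
// single goroutine drains it, so a slow DB cannot spawn unbounded goroutines.
type metricBuffer struct {
    ch   chan metricEvent
    wg   sync.WaitGroup
    sink func(metricEvent) // stands in for the INSERT; injectable for tests
}

func newMetricBuffer(size int, sink func(metricEvent)) *metricBuffer {
    b := &metricBuffer{ch: make(chan metricEvent, size), sink: sink}
    b.wg.Add(1)
    go func() {
        defer b.wg.Done()
        for ev := range b.ch {
            b.sink(ev)
        }
    }()
    return b
}

// Record enqueues without blocking; if the buffer is full the event is dropped.
func (b *metricBuffer) Record(ev metricEvent) bool {
    select {
    case b.ch <- ev:
        return true
    default:
        return false
    }
}

// Close flushes remaining events and stops the writer.
func (b *metricBuffer) Close() { close(b.ch); b.wg.Wait() }

func main() {
    var stored []metricEvent
    buf := newMetricBuffer(100, func(ev metricEvent) { stored = append(stored, ev) })
    buf.Record(metricEvent{UserID: "u1", Metric: "concurrent_jobs", Value: 1})
    buf.Close()
    fmt.Println(len(stored)) // 1
}
```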

5.2 Instrumentation (2 engineers, 5 hours)

Add usage recording to key handlers:
// In CreatePrivateProject:
usageService.RecordMetric(ctx, user.ID, "private_projects_created", 1)

// In StartBuild:
usageService.RecordMetric(ctx, user.ID, "concurrent_jobs", 1)

// In CreatePipeline:
usageService.RecordMetric(ctx, user.ID, "pipelines_created", 1)

// In TeamMember.Add:
usageService.RecordMetric(ctx, user.ID, "team_seats_used", 1)

5.3 Nightly Aggregation Job (1 engineer, 4 hours)

File: internal/jobs/meter_sync.go
type MeterSyncJob struct {
    db    *database.DB
    polar *PolarClient
}

func (j *MeterSyncJob) Run(ctx context.Context) error {
    yesterday := time.Now().Add(-24 * time.Hour).Format("2006-01-02")

    // Aggregate yesterday's unsynced usage
    rows, err := j.db.Query(ctx, `
        SELECT user_id, metric_type, SUM(value)
        FROM usage_metrics
        WHERE DATE(time) = $1 AND synced_to_polar = FALSE
        GROUP BY user_id, metric_type
    `, yesterday)
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        var userID, metric string
        var value int
        if err := rows.Scan(&userID, &metric, &value); err != nil {
            return err
        }

        // Skip users without a Polar order (free tier)
        order, err := j.db.GetPolarOrder(userID)
        if err != nil || order == nil {
            continue
        }

        // Send to Polar; on failure, leave unsynced so the next run retries
        if err := j.polar.CreateMeterEvent(order.ID, metric, value); err != nil {
            log.Warnf("meter sync for %s/%s: %v", userID, metric, err)
            continue
        }

        // Mark only this user's rows for this metric as synced
        j.db.Exec(ctx, `
            UPDATE usage_metrics
            SET synced_to_polar = TRUE
            WHERE user_id = $1 AND metric_type = $2 AND DATE(time) = $3
        `, userID, metric, yesterday)
    }

    return rows.Err()
}

// In main.go: schedule daily at 02:00 (pseudocode for whichever cron/scheduler
// library the service uses, e.g. gocron)
schedule.Every().Day().At("02:00").Do(func() {
    job.Run(context.Background())
})

Phase 4: Polar Integration (Weeks 6-7)

Week 6: Checkout & Subscription

6.1 Polar Client (1 engineer, 3 hours)

File: pkg/polar/client.go
package polar

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

type Client struct {
    apiKey  string
    baseURL string
    http    *http.Client
}

func NewClient(apiKey string) *Client {
    return &Client{
        apiKey:  apiKey,
        baseURL: "https://api.polar.sh/v1",
        http:    &http.Client{},
    }
}

// Create checkout session
func (c *Client) CreateCheckoutSession(
    productID string,
    successURL string,
) (map[string]interface{}, error) {
    body := map[string]interface{}{
        "product_id": productID,
        "success_url": successURL,
        "cancel_url":  "https://sparki.tools/billing/cancel",
    }

    return c.post("/checkouts", body)
}

// Create meter event (usage tracking)
func (c *Client) CreateMeterEvent(
    orderID string,
    metricType string,
    value int,
) error {
    body := map[string]interface{}{
        "event_type": metricType,
        "value":      value,
        "timestamp":  time.Now().Format(time.RFC3339),
    }

    _, err := c.post("/orders/"+orderID+"/meter_events", body)
    return err
}

func (c *Client) post(
    endpoint string,
    body map[string]interface{},
) (map[string]interface{}, error) {
    data, err := json.Marshal(body)
    if err != nil {
        return nil, err
    }

    req, err := http.NewRequest("POST", c.baseURL+endpoint, bytes.NewBuffer(data))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+c.apiKey)
    req.Header.Set("Content-Type", "application/json")

    resp, err := c.http.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 400 {
        return nil, fmt.Errorf("polar API %s: status %d", endpoint, resp.StatusCode)
    }

    var result map[string]interface{}
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        return nil, err
    }
    return result, nil
}

6.2 Billing Handler (1 engineer, 3 hours)

File: internal/handlers/billing.go
type BillingHandler struct {
    db    *database.DB
    polar *polar.Client
}

// Create checkout
func (h *BillingHandler) CreateCheckout(c *fiber.Ctx) error {
    user := c.Locals("user").(database.User)

    var req struct {
        Tier string `json:"tier"`
    }
    if err := c.BodyParser(&req); err != nil {
        return c.Status(400).JSON(fiber.Map{"error": "Invalid request body"})
    }

    // Map tier to the Polar product IDs stored during setup
    productID, ok := map[string]string{
        "team": "prod_team_monthly",
        "pro":  "prod_pro_monthly",
    }[req.Tier]
    if !ok {
        return c.Status(400).JSON(fiber.Map{"error": "Unknown tier: " + req.Tier})
    }

    // Create session
    session, err := h.polar.CreateCheckoutSession(
        productID,
        fmt.Sprintf("https://sparki.tools/billing/success?user=%s", user.ID),
    )
    if err != nil {
        return c.Status(502).JSON(fiber.Map{"error": "Checkout creation failed"})
    }

    return c.JSON(session)
}

// Get subscription status
func (h *BillingHandler) GetSubscription(c *fiber.Ctx) error {
    user := c.Locals("user").(database.User)

    sub, err := h.db.GetBillingSubscription(user.ID)
    if err != nil {
        return c.Status(404).JSON(fiber.Map{"error": "No subscription found"})
    }

    return c.JSON(fiber.Map{
        "tier":    sub.Tier,
        "status":  sub.Status,
        "created": sub.CreatedAt,
    })
}

Week 7: Webhooks & Event Handling

7.1 Webhook Handler (1 engineer, 4 hours)

File: internal/handlers/webhooks.go
func (h *WebhookHandler) HandlePolarWebhook(c *fiber.Ctx) error {
    // Verify signature before trusting the payload
    signature := c.Get("x-polar-signature")
    body := c.Body()

    if !verifySignature(signature, body, os.Getenv("POLAR_WEBHOOK_SECRET")) {
        return c.Status(401).JSON(fiber.Map{"error": "Invalid signature"})
    }

    // Parse event
    var event map[string]interface{}
    if err := json.Unmarshal(body, &event); err != nil {
        return c.Status(400).JSON(fiber.Map{"error": "Malformed payload"})
    }

    eventType, _ := event["type"].(string)

    switch eventType {
    case "order.created":
        return h.handleOrderCreated(event)
    case "order.subscription.canceled":
        return h.handleCanceled(event)
    default:
        // Acknowledge unhandled events so Polar does not retry them
        return c.Status(200).JSON(fiber.Map{"status": "ok"})
    }
}

func (h *WebhookHandler) handleOrderCreated(event map[string]interface{}) error {
    metadata, ok := event["metadata"].(map[string]interface{})
    if !ok {
        return fmt.Errorf("order.created event missing metadata")
    }
    userID, _ := metadata["sparki_user_id"].(string)
    tier, _ := metadata["sparki_tier"].(string)
    orderID, _ := event["id"].(string)

    // Update user
    h.db.UpdateUser(userID, map[string]interface{}{
        "tier":                    tier,
        "polar_order_id":          orderID,
        "subscription_created_at": time.Now(),
    })

    // Invalidate the cached tier so the next request sees the upgrade.
    // NOTE: the middleware caches by API key ("tier:"+apiKey), so delete the
    // entries for this user's API key(s), not the user ID.
    cache.Delete("tier:" + userID)

    // Send welcome email
    sendEmail(userID, "Welcome to Sparki "+tier)

    return nil
}

func (h *WebhookHandler) handleCanceled(event map[string]interface{}) error {
    metadata, ok := event["metadata"].(map[string]interface{})
    if !ok {
        return fmt.Errorf("cancellation event missing metadata")
    }
    userID, _ := metadata["sparki_user_id"].(string)

    // Downgrade to the free Community tier
    h.db.UpdateUser(userID, map[string]interface{}{
        "tier":                     "community",
        "polar_order_id":           "",
        "subscription_canceled_at": time.Now(),
    })

    cache.Delete("tier:" + userID)

    return nil
}

7.2 Register Webhook Endpoint (1 engineer, 1 hour)

File: cmd/api/main.go
app.Post("/api/v1/webhooks/polar", webhookHandler.HandlePolarWebhook)

Phase 5: Monitoring & Observability (Week 8)

8.1 Prometheus Metrics (1 engineer, 3 hours)

File: internal/metrics/collector.go
var (
    activeSubscriptions = prometheus.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "sparki_active_subscriptions",
            Help: "Active subscriptions by tier",
        },
        []string{"tier"},
    )

    subscriptionsCreated = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "sparki_subscriptions_created_total",
            Help: "Total subscriptions created",
        },
        []string{"tier"},
    )

    checkoutFailures = prometheus.NewCounter(
        prometheus.CounterOpts{
            Name: "sparki_checkout_failures_total",
            Help: "Total failed checkouts",
        },
    )

    webhookProcessingTime = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "sparki_webhook_processing_seconds",
            Help:    "Webhook processing latency",
            Buckets: []float64{0.1, 0.5, 1, 5},
        },
        []string{"event_type"},
    )
)

func init() {
    prometheus.MustRegister(
        activeSubscriptions,
        subscriptionsCreated,
        checkoutFailures,
        webhookProcessingTime,
    )
}

8.2 Grafana Dashboard (DevOps, 2 hours)

JSON dashboard config tracking:
  • Active subscriptions by tier (gauge)
  • New subscriptions (counter)
  • Failed checkouts (counter)
  • Webhook latency (histogram)
  • Failed webhooks (counter)
  • MRR trend (gauge)

8.3 Alerting (DevOps, 2 hours)

PagerDuty alerts:
  • Polar API down >5 min → P1
  • Webhook failures >10% → P2
  • MRR decline >5% MoM → P3

Testing Checklist

Unit Tests (2 engineers, 5 hours)

// Test tier validation
func TestTierValidator_Team(t *testing.T) {
    // Create team user
    user := db.CreateUser("team")

    // API call should succeed
    resp := makeRequest("GET", "/projects", user.APIKey)
    assert.Equal(t, 200, resp.StatusCode)
}

// Test quota enforcement
func TestQuotaEnforcement_PrivateProjects(t *testing.T) {
    // Create community user
    user := db.CreateUser("community")

    // Create private project should fail
    resp := makeRequest(
        "POST", "/projects",
        user.APIKey,
        `{"private": true}`,
    )
    assert.Equal(t, 403, resp.StatusCode)
}

// Test usage tracking
func TestUsageMetrics(t *testing.T) {
    user := db.CreateUser("team")

    // Create project
    makeRequest("POST", "/projects", user.APIKey, `{}`)

    // Metrics should be recorded
    metrics := db.GetUsageMetrics(user.ID, "private_projects_created")
    assert.Equal(t, 1, metrics[0].Value)
}

// Test Polar webhook
func TestWebhook_OrderCreated(t *testing.T) {
    webhook := `{
        "type": "order.created",
        "id": "order_123",
        "metadata": {
            "sparki_user_id": "user_123",
            "sparki_tier": "team"
        }
    }`

    resp := makeRequest("POST", "/webhooks/polar", "", webhook)
    assert.Equal(t, 200, resp.StatusCode)

    // User should be upgraded
    user := db.GetUser("user_123")
    assert.Equal(t, "team", user.Tier)
}
Run tests:
go test ./... -v

Integration Tests (2 engineers, 5 hours)

  • Signup → free tier → works
  • Checkout → Polar → webhook → tier upgrade
  • API call as team → succeeds
  • API call as community on private → fails
  • Usage recorded → nightly sync → Polar
  • Cancel subscription → webhook → tier downgrade

Manual Testing (1 engineer, 3 hours)

  1. Sandbox Environment
    • Deploy to staging
    • Test full flow with Polar sandbox
    • Verify webhook handling
  2. Production Dry-Run
    • Deploy with feature flag OFF
    • Enable for 1% of users
    • Monitor metrics
    • Gradually roll out to 100%

Deployment Checklist

Pre-Deployment (Day Before)

  • Database migrations tested on production backup
  • Polar webhook secret loaded in vault
  • Tier configuration YAML deployed
  • Metrics collectors registered
  • Alerting rules configured in PagerDuty
  • Runbook written for incident response

Deployment

# 1. Deploy code
git push origin main
kubectl apply -f deployment.yaml

# 2. Run migrations
kubectl exec sparki-api-0 -- go run migrate up

# 3. Load tier config
kubectl create configmap tier-config --from-file=config/tiers.yaml

# 4. Enable feature flag
kubectl patch deployment sparki-api -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"api","env":[{"name":"BILLING_ENABLED","value":"true"}]}]}}}}'

# 5. Monitor
watch kubectl logs -l app=sparki-api

Post-Deployment

  • Check dashboard for errors
  • Verify Polar integration working
  • Test checkout flow in production
  • Monitor billing metrics (MRR, subscriptions)
  • Check webhook lag <5 seconds

Rollback Plan

If issues occur:
# 1. Disable feature flag
kubectl set env deployment/sparki-api BILLING_ENABLED=false

# 2. Downgrade users to free (if needed)
kubectl exec sparki-api-0 -- psql -c "UPDATE users SET tier='community' WHERE tier NOT IN ('community')"

# 3. Revert deployment
git revert <commit>
kubectl apply -f deployment.yaml

# 4. Notify users
send_email("Temporary maintenance on billing system")

Success Metrics

By end of Week 8:
Functionality
  • 3 self-serve tiers fully implemented (Community, Team, Pro)
  • All endpoints enforce tier quotas
  • Usage tracking records all metrics
  • Polar integration working end-to-end
  • Webhooks processing reliably
Performance
  • API latency <100ms (tier validation overhead <5ms)
  • Webhook processing <5s
  • Nightly meter sync completes <1 hour
Reliability
  • 99.9% webhook delivery (backed by Polar's delivery guarantees)
  • 100% meter sync success (retry logic)
  • Zero data loss (transactional updates)
Observability
  • All metrics exported to Prometheus
  • Grafana dashboard complete
  • Alerts configured + tested
  • Runbook updated

Continuation: Beyond Week 8

Once monetization is deployed:
  1. Marketing Launch (Week 9-10)
    • Launch pricing page
    • Email existing users
    • Blog post on sustainable open-source
    • Demo video
  2. Customer Success (Week 11+)
    • Onboarding flow for paid tiers
    • Team collaboration features
    • Advanced security scanning
    • Support escalation for Enterprise
  3. Optimization (Month 3+)
    • Analyze churn patterns
    • Optimize tier boundaries
    • Build usage-based analytics dashboard
    • Prepare for Sparki 2.0 features (on-prem, advanced deploy)

Conclusion

This 8-week plan makes Sparki a sustainable, profitable open-source platform while keeping the core forever free. The modular design allows iterative deployment, testing at each phase, and rollback if needed. Key success factors:
  1. Team buys in to sustainable monetization
  2. Testing is thorough (unit + integration + manual)
  3. Deployment is gradual (1% → 10% → 100%)
  4. Monitoring is real-time (catch issues early)
  5. Communication is transparent (users understand tiers)