Sparki + Polar Integration: Technical Architecture & Implementation
Document ID: SPARKI-POLAR-TECHNICAL-001
Version: 1.0
Date: 2025-12-04
Status: Technical Specification
Audience: Backend Engineers, DevOps, Architects
Executive Summary
This document provides complete technical specifications for integrating Sparki with Polar.sh for subscription billing, tier enforcement, and revenue tracking. The integration ensures:
- ✅ Clean separation of billing logic (Polar) from CI/CD logic (Sparki)
- ✅ API-first enforcement of tier limits (all endpoints validate tier)
- ✅ Usage tracking (concurrent jobs, storage, seats, API calls)
- ✅ Subscription lifecycle (checkout → webhook → tier enforcement)
- ✅ Free tier works offline (Sparki runs locally without Polar)
1. Architecture Overview
1.1 System Context
┌──────────────────────────────────────────────┐
│                 SPARKI USERS                 │
│        (Developers, Teams, Enterprises)      │
└──────────────────────┬───────────────────────┘
                       │
            ┌──────────┴──────────┐
            │                     │
      ┌─────▼─────┐         ┌─────▼─────┐
      │  CLI/TUI  │         │  Web UI   │
      │   (Go)    │         │  (React)  │
      └─────┬─────┘         └─────┬─────┘
            │                     │
            └──────────┬──────────┘
                       │
  ┌────────────────────▼───────────────────────┐
  │        Sparki API Gateway (Go Fiber)       │
  │                                            │
  │  ┌────────────────────────────────────┐    │
  │  │ Middleware: Tier Validator         │    │ ← ALL requests validated
  │  │ (check API key, enforce tier)      │    │   against Polar tier
  │  └─────────────────┬──────────────────┘    │
  │                    │                       │
  │  ┌─────────────────▼──────────────────┐    │
  │  │ API Endpoints (Projects,           │    │ ← Business logic
  │  │ Pipelines, Builds, Deploys)        │    │   (Sparki subsystems)
  │  └─────────────────┬──────────────────┘    │
  │                    │                       │
  │  ┌─────────────────▼──────────────────┐    │
  │  │ Usage Tracking Service             │    │ ← Record metrics
  │  │ (concurrent jobs, storage, etc.)   │    │
  │  └────────────────────────────────────┘    │
  └────────────────────┬───────────────────────┘
                       │
       ┌───────────────┼───────────────┐
       │               │               │
┌──────▼─────┐  ┌──────▼─────┐  ┌──────▼─────┐
│ PostgreSQL │  │ Redis      │  │ TimescaleDB│
│ (Projects, │  │ (Cache:    │  │ (Usage     │
│ Users,     │  │  tier,     │  │  metrics,  │
│ Subs)      │  │  sessions) │  │ timeseries)│
└────────────┘  └────────────┘  └──────┬─────┘
                                       │
                   ┌───────────────────┘
                   │
┌──────────────────▼───────────────────┐
│ Meter Aggregation Job (Nightly)      │ ← Daily summary
│ (Read TimescaleDB, send to Polar)    │   of usage metrics
└──────────────────┬───────────────────┘
                   │ (HTTPS + Auth)
┌──────────────────▼───────────────────┐
│          POLAR.SH SYSTEM             │
│                                      │
│  ┌────────────────────────────────┐  │
│  │ Products & Prices              │  │ ← 3 tier configs
│  │ (Team $25, Pro $99, Enterprise)│  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │ Checkouts & Payments           │  │ ← Payment capture
│  │ (create session, process card) │  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │ Subscriptions & Orders         │  │ ← Billing mgmt
│  │ (active, renewal, cancel)      │  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │ Webhooks                       │  │ ← State sync
│  │ (order.created, sub.canceled)  │  │
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘
1.2 Data Flow: Complete Lifecycle
SIGNUP FLOW:
User → "Sign Up" → Create account (tier=free, no Polar ID)
User → Use free tier fully (10k free users)
User → Hit private project limit → "Upgrade to Team"
CHECKOUT FLOW:
User → "Upgrade to Team" → Create Polar checkout session
Polar → Present payment form (card/bank transfer/crypto)
User → Complete payment
Polar → Webhook: order.created
Sparki → Link user.polar_order_id → tier="team"
Sparki → Webhook response 200 OK
TIER ENFORCEMENT FLOW:
User → API request with API key
Sparki → Validate API key (lookup user.tier)
Sparki → Check if tier can access endpoint
Sparki → Allow or deny request
Sparki → Record usage metric (if allowed)
USAGE SYNC FLOW (Nightly):
TimescaleDB → Query usage_metrics (last 24h)
Sparki → Aggregate by user, metric type
Sparki → Call Polar: create_meter_events()
Polar → Record for billing (if usage-based)
Sparki → Mark as synced
SUBSCRIPTION LIFECYCLE:
Month 1 → order.created (subscription active)
Month 2 → renewal (automatic)
Month 3 → user.cancel_subscription()
Polar → Webhook: order.subscription.canceled
Sparki → tier=free (downgrade)
User → can still use free tier features
2. Tier Enforcement System
2.1 API Gateway Middleware
Location: internal/api/middleware/tier_validator.go
Every API request goes through this middleware to validate:
- API key exists and is valid
- User tier can access this endpoint
- Rate limits are not exceeded
- Usage quotas are not exceeded
// Middleware: validate tier on every request
middleware := func(c *fiber.Ctx) error {
	// Step 1: Extract API key
	apiKey := c.Get("Authorization")
	if apiKey == "" {
		return c.Status(401).JSON(fiber.Map{
			"error": "Missing API key",
		})
	}

	// Step 2: Look up user tier (cached, 5-min TTL)
	tier, err := getTierFromCache(apiKey)
	if err != nil {
		tier, err = getTierFromDB(apiKey)
		if err != nil {
			return c.Status(401).JSON(fiber.Map{"error": "Invalid API key"})
		}
		cacheSetTier(apiKey, tier, 5*time.Minute)
	}

	// Step 3: Attach tier context to the request
	c.Locals("tier", tier)
	c.Locals("tier_config", tierConfigs[tier])

	// Step 4: Check route-level tier requirement
	requiredTier := getRequiredTier(c.Route())
	if !canAccess(tier, requiredTier) {
		return c.Status(403).JSON(fiber.Map{
			"error": fmt.Sprintf(
				"This feature requires %s tier. Current: %s",
				requiredTier, tier,
			),
		})
	}

	// Step 5: Check rate limits
	rateLimit := tierConfigs[tier].RateLimit
	if !checkRateLimit(apiKey, rateLimit) {
		return c.Status(429).JSON(fiber.Map{
			"error": fmt.Sprintf(
				"Rate limit exceeded: %d requests/min",
				rateLimit,
			),
		})
	}

	// Step 6: Proceed to the handler
	return c.Next()
}

// Attach middleware to all routes
app.Use(middleware)
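The middleware relies on a tier-ordering helper. A minimal sketch of `canAccess`, assuming a simple rank-based comparison (the rank values here are illustrative, not the production configuration):

```go
package main

import "fmt"

// tierRank orders tiers from least to most privileged. A request is
// allowed when the caller's tier ranks at or above the route's
// required tier. (Illustrative ordering — keep in sync with tiers.yaml.)
var tierRank = map[string]int{
	"community":  0,
	"team":       1,
	"pro":        2,
	"enterprise": 3,
}

// canAccess reports whether userTier satisfies requiredTier.
// Unknown tiers get rank 0 (community), so access fails closed
// for privileged routes.
func canAccess(userTier, requiredTier string) bool {
	return tierRank[userTier] >= tierRank[requiredTier]
}

func main() {
	fmt.Println(canAccess("team", "community")) // true
	fmt.Println(canAccess("community", "pro"))  // false
}
```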
2.2 Tier Configuration (Declarative)
Location: config/tiers.yaml
This YAML file is the source of truth for all tier definitions. Changes here propagate to enforcement on configuration reload, without a redeploy.
tiers:
# Community (Free)
community:
display_name: "Community"
price_monthly: 0
rate_limits:
requests_per_minute: 100
burst_size: 10
quotas:
concurrent_jobs: 50
max_job_duration_minutes: 60
private_projects: 0
team_seats: 1
deployment_targets: 3
storage_gb_per_month: 1
api_calls_per_day: 1000
build_minutes_per_month: 1000
features:
public_projects: true
framework_detection: true
cli_access: true
tui_access: true
deploy_to_any_cloud: true
private_projects: false
team_collaboration: false
slack_notifications: false
rbac: false
sso: false
api_access: false
advanced_security_scanning: false
dlp: false
on_premises: false
support:
channel: "github_discussions"
response_time_hours: null # community-driven
deployment_model: "self_hosted_free" # can run locally
# Team ($25/month)
team:
display_name: "Team"
price_monthly: 25
rate_limits:
requests_per_minute: 500
burst_size: 25
quotas:
concurrent_jobs: 200
max_job_duration_minutes: 120
private_projects: 20
team_seats: 5
deployment_targets: 10
storage_gb_per_month: 10
api_calls_per_day: 10000
build_minutes_per_month: 10000
features:
public_projects: true
framework_detection: true
cli_access: true
tui_access: true
deploy_to_any_cloud: true
private_projects: true
team_collaboration: true
slack_notifications: true
rbac: true
sso: false
api_access: false
advanced_security_scanning: false
dlp: false
on_premises: false
support:
channel: "email"
response_time_hours: 24
deployment_model: "cloud_managed"
# Pro ($99/month)
pro:
display_name: "Pro"
price_monthly: 99
rate_limits:
requests_per_minute: 2000
burst_size: 100
quotas:
concurrent_jobs: 1000
max_job_duration_minutes: 480 # 8 hours
private_projects: null # unlimited
team_seats: 20
deployment_targets: null # unlimited
storage_gb_per_month: 100
api_calls_per_day: null # unlimited
build_minutes_per_month: null # unlimited
features:
public_projects: true
framework_detection: true
cli_access: true
tui_access: true
deploy_to_any_cloud: true
private_projects: true
team_collaboration: true
slack_notifications: true
rbac: true
sso: true
api_access: true
advanced_security_scanning: true
dlp: false
on_premises: false
support:
channel: "email"
response_time_hours: 1
deployment_model: "cloud_managed"
# Enterprise (Custom)
enterprise:
display_name: "Enterprise"
price_monthly: "custom"
rate_limits:
requests_per_minute: null # unlimited
burst_size: null
quotas:
concurrent_jobs: null # unlimited
max_job_duration_minutes: null
private_projects: null
team_seats: null
deployment_targets: null
storage_gb_per_month: null
api_calls_per_day: null
build_minutes_per_month: null
features:
public_projects: true
framework_detection: true
cli_access: true
tui_access: true
deploy_to_any_cloud: true
private_projects: true
team_collaboration: true
slack_notifications: true
rbac: true
sso: true
api_access: true
advanced_security_scanning: true
dlp: true
on_premises: true
support:
channel: "slack_phone"
response_time_hours: 0.25 # 15 minutes
deployment_model: "on_premises_or_dedicated"
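In Go, the `null` quota values above map naturally onto pointer fields, where `nil` means "unlimited". A minimal sketch of the quota type and an enforcement check (the field names are assumptions mirroring the YAML keys; the real structs live wherever tiers.yaml is decoded):

```go
package main

import "fmt"

// Quotas mirrors the quotas block in tiers.yaml. Pointer fields let a
// YAML `null` decode to nil, which enforcement treats as unlimited.
type Quotas struct {
	ConcurrentJobs    *int `yaml:"concurrent_jobs"`
	PrivateProjects   *int `yaml:"private_projects"`
	TeamSeats         *int `yaml:"team_seats"`
	StorageGBPerMonth *int `yaml:"storage_gb_per_month"`
}

// withinQuota reports whether current usage is below the limit.
// A nil limit means the tier has no cap on this resource.
func withinQuota(limit *int, current int) bool {
	return limit == nil || current < *limit
}

func intPtr(n int) *int { return &n }

func main() {
	team := Quotas{PrivateProjects: intPtr(20)} // team: 20 private projects
	pro := Quotas{PrivateProjects: nil}         // pro: unlimited

	fmt.Println(withinQuota(team.PrivateProjects, 20)) // false: limit reached
	fmt.Println(withinQuota(pro.PrivateProjects, 999)) // true: unlimited
}
```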
2.3 Quota Enforcement in Handlers
Location: internal/api/handlers/projects.go
Endpoints actively check quotas and deny operations when limits are exceeded.
// Handler: create a private project
func (h *ProjectHandler) CreatePrivateProject(c *fiber.Ctx) error {
	tier := c.Locals("tier").(string)
	tierConfig := c.Locals("tier_config").(TierConfig)
	user := c.Locals("user").(User)

	// Step 1: Check if tier allows private projects
	if !tierConfig.Features.PrivateProjects {
		return c.Status(403).JSON(fiber.Map{
			"error":       "Private projects require Team tier or higher",
			"upgrade_url": "https://sparki.tools/pricing/team",
		})
	}

	// Step 2: Check private project quota
	privateProjectCount := db.CountPrivateProjects(user.ID)
	if privateProjectCount >= tierConfig.Quotas.PrivateProjects {
		// Graceful error with upgrade suggestion
		nextTier := suggestNextTier(tier)
		return c.Status(403).JSON(fiber.Map{
			"error": fmt.Sprintf(
				"Private project limit (%d) reached",
				tierConfig.Quotas.PrivateProjects,
			),
			"upgrade_suggestion": fmt.Sprintf(
				"Upgrade to %s for unlimited private projects",
				nextTier,
			),
		})
	}

	// Step 3: Create the project (private = true)
	project := createProject(user, true)

	// Step 4: Record usage metric
	recordUsageMetric(user.ID, "private_projects_created", 1)

	return c.Status(201).JSON(project)
}

// Handler: start a build (concurrent job)
func (h *BuildHandler) CreateBuild(c *fiber.Ctx) error {
	tierConfig := c.Locals("tier_config").(TierConfig)
	user := c.Locals("user").(User)

	var req CreateBuildRequest
	if err := c.BodyParser(&req); err != nil {
		return c.Status(400).JSON(fiber.Map{"error": "Invalid request body"})
	}

	// Step 1: Get the current concurrent job count
	concurrentJobs := db.CountRunningBuilds(user.ID)

	// Step 2: Check against the tier limit
	if concurrentJobs >= tierConfig.Quotas.ConcurrentJobs {
		// Queue the job instead of running it immediately
		queueBuild(user, req)
		return c.Status(202).JSON(fiber.Map{
			"status": "queued",
			"message": fmt.Sprintf(
				"Tier limit (%d concurrent jobs) reached. Job queued.",
				tierConfig.Quotas.ConcurrentJobs,
			),
		})
	}

	// Step 3: Run the job
	build := startBuild(user, req)
	recordUsageMetric(user.ID, "concurrent_jobs", 1)

	return c.Status(201).JSON(build)
}
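The `suggestNextTier` helper used in the quota-denial responses can be a simple lookup; a sketch, assuming the upgrade ladder mirrors tiers.yaml:

```go
package main

import "fmt"

// suggestNextTier returns the next tier up from the current one, used
// to build upgrade hints when a quota is hit. Enterprise (and any
// unknown tier) has no suggestion and yields the empty string.
func suggestNextTier(tier string) string {
	next := map[string]string{
		"community": "team",
		"team":      "pro",
		"pro":       "enterprise",
	}
	return next[tier]
}

func main() {
	fmt.Println(suggestNextTier("team")) // pro
}
```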
3. Usage Tracking & Billing
3.1 Usage Meter Database
Location: pkg/database/schema.sql
Time-series table (TimescaleDB hypertable) for storing usage events:
-- Usage metrics (time-series)
CREATE TABLE IF NOT EXISTS usage_metrics (
time TIMESTAMP NOT NULL,
user_id UUID NOT NULL,
metric_type VARCHAR(50) NOT NULL,
value INTEGER NOT NULL DEFAULT 1,
-- Sync status
synced_to_polar BOOLEAN DEFAULT FALSE,
polar_event_id VARCHAR(255),
sync_error TEXT,
-- Context
organization_id UUID,
metadata JSONB,
CONSTRAINT fk_user FOREIGN KEY (user_id)
REFERENCES users(id),
CONSTRAINT metric_type_check
CHECK (metric_type IN (
'concurrent_jobs',
'private_projects_created',
'team_seats_added',
'storage_bytes_used',
'api_calls',
'deployments',
'builds'
))
);
-- Convert to TimescaleDB hypertable (time-series optimized)
SELECT create_hypertable(
'usage_metrics',
'time',
if_not_exists => TRUE
);
-- Indexes for common queries
CREATE INDEX idx_usage_by_user_time
ON usage_metrics (user_id, time DESC);
CREATE INDEX idx_usage_by_metric_time
ON usage_metrics (metric_type, time DESC);
CREATE INDEX idx_usage_unsync
ON usage_metrics (synced_to_polar, time)
WHERE synced_to_polar = FALSE;
-- Continuous aggregate: daily summary
CREATE MATERIALIZED VIEW usage_daily_summary
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 day', time) AS day,
user_id,
metric_type,
SUM(value) AS total_value,
COUNT(*) AS event_count
FROM usage_metrics
GROUP BY day, user_id, metric_type;
3.2 Recording Usage Metrics
Location: internal/services/usage_service.go
Fire-and-forget recording of usage events (doesn't block the request):
type UsageService struct {
	db *pgxpool.Pool // pool, not a single conn: safe for concurrent use
}

func (s *UsageService) RecordMetric(
	userID uuid.UUID,
	metricType string,
	value int,
	metadata map[string]interface{},
) {
	// Fire-and-forget: spawn a goroutine with its own context so the
	// insert survives cancellation of the originating request.
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		_, err := s.db.Exec(ctx, `
			INSERT INTO usage_metrics (
				time, user_id, metric_type, value, metadata
			) VALUES (NOW(), $1, $2, $3, $4)
		`, userID, metricType, value, metadata)
		if err != nil {
			logger.Warnf("Failed to record usage: %v", err)
			// Don't fail the main request
		}
	}()
}

// Usage in handlers, after creating a private project:
usageService.RecordMetric(
	user.ID,
	"private_projects_created",
	1,
	map[string]interface{}{"project_id": project.ID},
)
3.3 Nightly Aggregation Job
Location: internal/jobs/meter_sync_job.go
Daily job that aggregates usage and syncs to Polar:
type MeterSyncJob struct {
	db    *pgxpool.Pool
	polar *PolarClient
}

func (j *MeterSyncJob) Run(ctx context.Context) error {
	logger.Infof("Starting nightly meter sync...")

	// Step 1: Get yesterday's unsynced usage, aggregated per user and metric
	yesterdayT := time.Now().Add(-24 * time.Hour)
	yesterday := yesterdayT.Format("2006-01-02")
	rows, err := j.db.Query(ctx, `
		SELECT user_id, metric_type, SUM(value) AS total_value
		FROM usage_metrics
		WHERE DATE(time) = $1
		  AND synced_to_polar = FALSE
		GROUP BY user_id, metric_type
	`, yesterday)
	if err != nil {
		return fmt.Errorf("querying usage_metrics: %w", err)
	}
	defer rows.Close()

	syncedCount := 0

	// Step 2: For each aggregate, send a meter event to Polar
	for rows.Next() {
		var userID uuid.UUID
		var metricType string
		var totalValue int
		if err := rows.Scan(&userID, &metricType, &totalValue); err != nil {
			logger.Errorf("Error scanning row: %v", err)
			continue
		}

		// Get the user's Polar order ID
		var polarOrderID string
		err := j.db.QueryRow(ctx, `
			SELECT polar_order_id FROM users WHERE id = $1
		`, userID).Scan(&polarOrderID)
		if err != nil || polarOrderID == "" {
			// User not on a paid tier; skip
			continue
		}

		// Send the meter event to Polar
		err = j.polar.CreateMeterEvent(&PolarMeterEvent{
			OrderID:   polarOrderID,
			EventType: mapMetricToPolarType(metricType),
			Value:     totalValue,
			Timestamp: yesterdayT.Format(time.RFC3339),
		})
		if err != nil {
			logger.Errorf("Failed to sync meter for user %s: %v", userID, err)
			// Record the sync error for manual review
			if _, dbErr := j.db.Exec(ctx, `
				UPDATE usage_metrics
				SET sync_error = $1
				WHERE user_id = $2 AND DATE(time) = $3
			`, err.Error(), userID, yesterday); dbErr != nil {
				logger.Errorf("Failed to record sync error: %v", dbErr)
			}
			continue
		}

		// Mark as synced
		if _, err := j.db.Exec(ctx, `
			UPDATE usage_metrics
			SET synced_to_polar = TRUE, polar_event_id = $1
			WHERE user_id = $2 AND DATE(time) = $3
		`, polarOrderID, userID, yesterday); err != nil {
			logger.Errorf("Failed to mark metrics synced: %v", err)
			continue
		}
		syncedCount++
	}

	logger.Infof("Meter sync complete: %d aggregates synced", syncedCount)
	return rows.Err()
}
// Schedule in main.go:
schedule.Every().Day().At("02:00").Do(func() {
job.Run(context.Background())
})
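`mapMetricToPolarType` translates Sparki's internal metric names into the event names registered with Polar; a sketch, where the Polar-side names are placeholders that must match whatever the meters are actually called in the Polar dashboard:

```go
package main

import "fmt"

// mapMetricToPolarType maps internal metric_type values to the meter
// event names configured in Polar. Unknown metrics pass through
// unchanged, so a missing mapping shows up in Polar rather than being
// silently dropped.
func mapMetricToPolarType(metricType string) string {
	m := map[string]string{
		"concurrent_jobs":    "sparki.jobs.concurrent", // placeholder name
		"storage_bytes_used": "sparki.storage.bytes",   // placeholder name
		"api_calls":          "sparki.api.calls",       // placeholder name
		"builds":             "sparki.builds.count",    // placeholder name
	}
	if v, ok := m[metricType]; ok {
		return v
	}
	return metricType
}

func main() {
	fmt.Println(mapMetricToPolarType("api_calls"))
	fmt.Println(mapMetricToPolarType("deployments")) // no mapping: passes through
}
```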
4. Subscription Lifecycle
4.1 Signup Flow (Free Tier)
Endpoint: POST /api/v1/auth/signup
func (h *AuthHandler) Signup(c *fiber.Ctx) error {
	var req SignupRequest
	if err := c.BodyParser(&req); err != nil {
		return c.Status(400).JSON(fiber.Map{"error": "Invalid request body"})
	}

	// Create the user on the free tier
	user := &User{
		Email:        req.Email,
		PasswordHash: hashPassword(req.Password),
		Tier:         "community", // free tier by default
		PolarOrderID: "",          // no Polar record yet
	}
	db.CreateUser(user)

	// Generate API key
	apiKey := generateAPIKey()
	db.CreateAPIKey(user.ID, apiKey)

	return c.Status(201).JSON(fiber.Map{
		"user_id": user.ID,
		"api_key": apiKey, // shown only once!
		"tier":    "community",
	})
}
4.2 Checkout Flow (Upgrade to Team)
Endpoint: POST /api/v1/subscriptions/checkout
func (h *BillingHandler) CreateCheckout(c *fiber.Ctx) error {
	user := c.Locals("user").(User)

	var req CheckoutRequest
	if err := c.BodyParser(&req); err != nil { // {"tier": "team"}
		return c.Status(400).JSON(fiber.Map{"error": "Invalid request body"})
	}

	// Step 1: Map tier to Polar product
	//   "team" → "prod_team_monthly"
	//   "pro"  → "prod_pro_monthly"
	productID := tierToPolarProductID(req.Tier)

	// Step 2: Create the Polar checkout session
	checkoutSession, err := polarClient.CreateCheckoutSession(&polar.CheckoutSessionRequest{
		ProductID:  productID,
		SuccessURL: "https://sparki.tools/billing/success",
		FailureURL: "https://sparki.tools/billing/cancelled",
		Metadata: map[string]string{
			"sparki_user_id": user.ID.String(),
			"sparki_tier":    req.Tier,
		},
	})
	if err != nil {
		return c.Status(500).JSON(fiber.Map{"error": err.Error()})
	}

	return c.JSON(fiber.Map{
		"checkout_url": checkoutSession.URL,
		"session_id":   checkoutSession.ID,
	})
}
4.3 Webhook Handler (Payment Confirmation)
Endpoint: POST /api/v1/webhooks/polar
func (h *BillingHandler) HandlePolarWebhook(c *fiber.Ctx) error {
	// Step 1: Verify the Polar signature
	signature := c.Get("X-Polar-Signature")
	body := c.Body()
	if !verifySignature(signature, body, polarWebhookSecret) {
		return c.Status(401).JSON(fiber.Map{"error": "Invalid signature"})
	}

	// Step 2: Parse the event
	var event polar.WebhookEvent
	if err := json.Unmarshal(body, &event); err != nil {
		return c.Status(400).JSON(fiber.Map{"error": "Malformed payload"})
	}

	// Step 3: Handle the event type
	var err error
	switch event.Type {
	case "order.created":
		err = handleOrderCreated(event)
	case "order.subscription.canceled":
		err = handleSubscriptionCanceled(event)
	}
	if err != nil {
		// Non-2xx tells Polar to retry delivery
		return c.Status(500).JSON(fiber.Map{"error": err.Error()})
	}

	return c.JSON(fiber.Map{"status": "ok"})
}

func handleOrderCreated(event polar.WebhookEvent) error {
	// Extract metadata
	userID := event.Metadata["sparki_user_id"]
	tier := event.Metadata["sparki_tier"]
	polarOrderID := event.OrderID

	// Update the user in the database
	db.UpdateUser(userID, &User{
		Tier:         tier,
		PolarOrderID: polarOrderID,
	})

	// Invalidate cache (force a fresh lookup on the next request)
	cache.DeleteUserTier(userID)

	logger.Infof("User %s upgraded to %s tier", userID, tier)
	return nil
}

func handleSubscriptionCanceled(event polar.WebhookEvent) error {
	userID := event.Metadata["sparki_user_id"]

	// Downgrade to the free tier
	db.UpdateUser(userID, &User{
		Tier:         "community",
		PolarOrderID: "",
	})

	// Invalidate cache
	cache.DeleteUserTier(userID)

	logger.Infof("User %s downgraded to community tier", userID)
	return nil
}
5. Self-Hosted (Free Tier) Deployment
For users running Sparki locally (self-hosted), no Polar integration is needed:
// In the API gateway middleware:
if os.Getenv("SPARKI_DEPLOYMENT") == "self_hosted" {
	// Skip Polar tier validation; always run as the community tier
	c.Locals("tier", "community")
	c.Locals("tier_config", tierConfigs["community"])
} else {
	// Cloud mode: validate against Polar
	tier := getTierFromPolar(apiKey)
	c.Locals("tier", tier)
}
This means:
- ✅ Users can run Sparki locally without any cloud dependency
- ✅ Free tier features work completely offline
- ✅ Upgrade to a paid tier only when needed (cloud deployment)
6. Monitoring & Observability
Key Metrics to Track
Billing Metrics:
- active_subscriptions_by_tier (gauge)
- subscription_created_total (counter)
- subscription_canceled_total (counter)
- failed_checkout_total (counter)
- failed_polar_webhook_total (counter)
Usage Metrics:
- concurrent_jobs_by_user (gauge)
- private_projects_by_user (gauge)
- team_seats_by_organization (gauge)
- storage_used_by_user (gauge)
- api_calls_by_tier (counter)
Revenue Metrics:
- monthly_recurring_revenue (gauge)
- annual_recurring_revenue (gauge)
- customer_acquisition_cost (gauge)
- lifetime_value (gauge)
- churn_rate (gauge)
Alerts
critical:
- Polar API unreachable for >5 minutes
- Webhook processing failing >10% of messages
- Database sync falling behind >1000 rows
- Memory leak detected in API service
warning:
- High churn rate (>5% monthly)
- Large spike in failed checkouts
- Meter sync latency >1 hour
- Unusual usage patterns (spike in API calls)
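The critical alerts above could be expressed as Prometheus alerting rules; a sketch, where metric names such as `polar_api_up`, `failed_polar_webhook_total`, and `polar_webhook_total` are assumptions that must match whatever the metrics layer actually exports:

```yaml
groups:
  - name: sparki-billing-critical
    rules:
      - alert: PolarAPIUnreachable
        # polar_api_up is an assumed gauge: 1 when the last Polar API
        # health check succeeded, 0 otherwise.
        expr: polar_api_up == 0
        for: 5m
        labels:
          severity: critical
      - alert: WebhookFailureRateHigh
        # Assumed counters: failed vs. total processed webhook deliveries.
        expr: >
          rate(failed_polar_webhook_total[10m])
            / rate(polar_webhook_total[10m]) > 0.10
        for: 10m
        labels:
          severity: critical
```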
7. Implementation Roadmap
Phase 1: Infrastructure (Weeks 1-2)
- Create Polar account + products
- Database migrations (usage_metrics table)
- Tier configuration YAML
Phase 2: Tier Validation (Weeks 3-4)
- API gateway middleware
- Tier enforcement in handlers
- Rate limiting + quota checks
- Cache layer (Redis)
Phase 3: Usage Tracking (Week 5)
- UsageService implementation
- Metric recording in handlers
- Nightly aggregation job
- TimescaleDB setup
Phase 4: Polar Integration (Weeks 6-7)
- Checkout session creation
- Webhook handler
- Subscription lifecycle
- Testing with Polar sandbox
Phase 5: Monitoring (Week 8)
- Prometheus metrics
- Grafana dashboards
- PagerDuty alerts
- Production deployment
Conclusion
This architecture cleanly separates billing (Polar) from the CI/CD core (Sparki). Key benefits:
- ✅ Modular design: tier enforcement at the gateway; business logic untouched
- ✅ Scalable: async usage tracking; nightly batch sync to Polar
- ✅ Resilient: cache fallback if Polar is unavailable; free tier works offline
- ✅ Observable: metrics, monitoring, and alerts on all critical paths
- ✅ Testable: easy to mock Polar for unit tests

This enables Sparki to sustainably monetize while keeping the core platform free and open-source.