
Load Balancing Configuration for Nestr Backend

Overview

The Nestr backend is designed to be stateless, allowing multiple instances to run behind a load balancer for horizontal scaling. This document outlines the configuration steps to deploy the backend with a load balancer.

Prerequisites

  • Multiple instances of the Nestr backend running on different servers or ports.
  • A load balancer solution such as NGINX, HAProxy, or a cloud provider’s load balancer (e.g., AWS ELB, Google Cloud Load Balancer).
  • Access to configuration files or management consoles for the load balancer.

Configuration Steps

1. Deploy Multiple Backend Instances

Ensure that each backend instance connects to the same database and cache systems. Update config.yaml (or the equivalent environment variables) so that every instance points at the shared resources:

database:
  path: "shared_database_path"
redis:
  addr: "shared_redis_address:6379"

Start each instance on a different port or server:
# Instance 1
PORT=8080 nestr serve

# Instance 2
PORT=8081 nestr serve
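
The two commands above generalize to any number of instances. A minimal sketch, assuming `nestr serve` reads PORT from the environment as shown; the `NESTR_CMD` override is a hypothetical convenience for dry runs and is not part of Nestr itself:

```shell
# Sketch: launch N backend instances on consecutive ports.
# NESTR_CMD is a hypothetical override (handy for dry runs);
# it defaults to the real `nestr` binary.
start_instances() {
  base_port=$1
  count=$2
  i=0
  while [ "$i" -lt "$count" ]; do
    port=$((base_port + i))
    echo "starting instance on port $port"
    PORT="$port" ${NESTR_CMD:-nestr} serve &   # run each instance in the background
    i=$((i + 1))
  done
}

# Example: start_instances 8080 2   # instances on ports 8080 and 8081
```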

2. Configure Load Balancer

NGINX Example

Configure NGINX as a reverse proxy and load balancer to distribute traffic across the backend instances. Edit nginx.conf or create a new configuration file:

http {
    # Map the Connection header so WebSocket upgrades are forwarded,
    # while ordinary requests keep their connections alive.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream nestr_backend {
        server localhost:8080;
        server localhost:8081;
        # Add more instances as needed
    }

    server {
        listen 80;
        server_name api.nestr.example.com;

        location / {
            proxy_pass http://nestr_backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
Reload NGINX to apply the configuration without dropping active connections:

sudo nginx -s reload
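
HAProxy Example

Since HAProxy is listed as an option in the prerequisites, an equivalent configuration would look roughly like this (a sketch; server names are hypothetical, ports match the NGINX example above):

```
# Minimal HAProxy equivalent of the NGINX setup above.
frontend nestr_front
    bind *:80
    default_backend nestr_backend

backend nestr_backend
    balance roundrobin
    option httpchk GET /health   # assumes the /health endpoint from step 3
    server app1 localhost:8080 check
    server app2 localhost:8081 check
```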

Cloud Provider Load Balancer

If using a cloud provider, configure the load balancer through their management console or CLI:
  • AWS Elastic Load Balancing: Create an Application Load Balancer, register the backend instances in a target group, and configure a listener to forward traffic to that group.
  • Google Cloud Load Balancer: Set up an HTTP(S) Load Balancer, define backend services with the Nestr instances, and configure routing rules.

3. Health Checks

Configure health checks on the load balancer to ensure traffic is only routed to healthy instances. Use the /health endpoint (if implemented) or a simple HTTP GET to check instance availability:
  • Path: /health
  • Method: GET
  • Expected Response: 200 OK
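
The same check can be scripted as a deploy-time smoke test before adding an instance to the rotation. A sketch, assuming `curl` is available and the instance exposes the /health endpoint described above:

```shell
# Sketch: succeed only if the given URL answers with HTTP 200.
check_health() {
  url=$1
  # -s: silent; -o /dev/null: discard body; -w: print only the status code
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  [ "$status" = "200" ]
}

# Example: check_health "http://localhost:8080/health" && echo "instance healthy"
```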

4. Monitoring and Scaling

Monitor the load balancer and backend instances using the metrics provided by the Nestr backend. Adjust the number of instances based on traffic load:
  • Use auto-scaling groups in cloud environments to automatically add or remove instances based on CPU usage or request rate.
  • Monitor latency and error rates through Prometheus metrics exposed by the backend.
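
As one illustration of the Prometheus point above, a scrape configuration covering both instances might look like this (a sketch; the /metrics path is an assumption, not confirmed by this document):

```yaml
# Hypothetical Prometheus scrape config for the backend instances.
scrape_configs:
  - job_name: "nestr_backend"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080", "localhost:8081"]
```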

Notes

  • Ensure all backend instances share the same database and cache to maintain data consistency.
  • Enable session affinity (sticky sessions) only if necessary; the stateless design should make it unnecessary in most deployments.
  • Regularly update load balancer configurations when adding or removing backend instances.
For further assistance or specific configurations, refer to the documentation of your load balancer solution or contact support.