Load balancer (Networking)


Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault tolerance.


Load balancing in Nginx is configured using the upstream directive.[1]

A network load balancer can provide service for different protocols, such as TCP, UDP, HTTP, or HTTPS.
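Nginx, for example, can balance TCP and UDP traffic with its stream module. The following minimal sketch is illustrative only; the ports and backend addresses are assumptions, not part of this article:

stream {
    # Hypothetical pool of TCP backends; round robin is the default method
    upstream tcp_backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
    }

    # Hypothetical pool of UDP (DNS) backends
    upstream dns_backend {
        server 192.0.2.10:53;
        server 192.0.2.11:53;
    }

    server {
        listen 12345;            # accept TCP connections
        proxy_pass tcp_backend;
    }

    server {
        listen 53 udp;           # accept UDP datagrams
        proxy_pass dns_backend;
    }
}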


Typical options (load balancing method, weight, max_conns, backup) are shown in the Nginx configuration example below.

Nginx configuration example

upstream backend {
    # Round Robin is used when no load balancing method is specified.
    # Other methods: least_conn, ip_hash, least_time header,
    # random two least_time=last_byte
    # (slow_start, queue and the least_time-based methods require NGINX Plus)
    server backend1.example.com slow_start=30s;
    server backend2.example.com max_conns=3;
    server backend3.example.com weight=5;
    server backend4.example.com;
    #server backend5.example.com:443;   # for an HTTPS backend; additional configuration is required
    server 192.0.0.1 backup;
    #queue 100 timeout=70;              # optional when the max_conns parameter is used
}
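The upstream group alone does not handle any traffic; a server block has to proxy requests to it. A minimal sketch, assuming a plain HTTP listener on port 80 (not part of the original example):

server {
    listen 80;

    location / {
        # Requests are distributed across the servers of the
        # "backend" upstream group using the configured method
        proxy_pass http://backend;
    }
}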


HTTPS termination

HTTPS termination is supported at least by Nginx, Amazon ELB[2] and OpenStack.[3]
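A minimal Nginx sketch of HTTPS termination, assuming hypothetical certificate paths and the "backend" upstream group from the example above; TLS is decrypted at the load balancer and plain HTTP is forwarded to the backends:

server {
    listen 443 ssl;
    server_name www.example.com;

    # Hypothetical certificate and key paths
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Traffic leaves the load balancer unencrypted,
        # so the TLS session terminates here
        proxy_pass http://backend;
    }
}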
