

Nginx load balancing: from theory to practice


Some time ago I wrote a tutorial called How to configure Nginx load balancing, and even though it included some practical code, it didn’t meet the need many users (myself included) had for a full tutorial on the Nginx load balancer. So today I will share how to configure Nginx load balancing with real-life examples I used on my local network.

Network scenario: 3 servers/machines, each running an EL-based (Fedora, RHEL, CentOS) Linux system:

192.168.1.100 (master node)
192.168.1.109 (slave)
192.168.1.106 (slave)

Install Nginx on the 3 servers

yum install nginx

Configure nginx.conf on the 3 servers

On EL systems, the nginx.conf file is usually located at /etc/nginx/nginx.conf:

user  nginx;
worker_processes  1;
error_log  /var/log/nginx_error.log crit;

worker_rlimit_nofile  8192;

events {
    worker_connections  1024; # you may need to increase this for busy servers
    use epoll; # efficient event method on Linux 2.6+ kernels
}

http {
    server_names_hash_max_size 2048;
    server_names_hash_bucket_size 512;

    server_tokens off;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout  10;

    # Gzip compression
    gzip on;
    gzip_min_length  1100;
    gzip_buffers     4 32k;
    gzip_types       text/plain application/x-javascript text/xml text/css;

    ignore_invalid_headers on;
    client_max_body_size        8m;
    client_header_timeout       3m;
    client_body_timeout         3m;
    send_timeout                3m;
    connection_pool_size        256;
    client_header_buffer_size   4k;
    large_client_header_buffers 4 64k;
    request_pool_size           4k;
    output_buffers              4 32k;
    postpone_output             1460;

    # Cache the most frequently accessed static files
    open_file_cache          max=10000 inactive=10m;
    open_file_cache_valid    2m;
    open_file_cache_min_uses 1;
    open_file_cache_errors   on;

    # Include each virtual host
    include "/etc/nginx/conf.d/*.conf";
}

Configure virtual hosts

On server 2 (192.168.1.109) and 3 (192.168.1.106):

nano -w /etc/nginx/conf.d/mysite.com.conf

Then paste this inside, modifying the document root and other settings as needed:

server {
    access_log off;
    error_log  /var/log/yoursite.com-error.log;
    listen 80;
    server_name yoursite.com www.yoursite.com;

    # Note the escaped dot: without the backslash, "." matches any character
    location ~* \.(gif|jpg|jpeg|png|ico|wmv|3gp|avi|mpg|mpeg|mp4|flv|mp3|mid|js|css|wml|swf)$ {
        root /var/www/yoursite.com;
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location / {
        root  /var/www/yoursite.com;
        index index.php index.html index.htm;
    }
}

Remember to replace /var/www/yoursite.com with your real document root and site name.

On the master server (192.168.1.100), create this file:

nano -w /etc/nginx/conf.d/balancer.com.conf

Place this content inside:

upstream balancer {
    server 192.168.1.109:80;
    server 192.168.1.106:80;
}

server {
    listen 192.168.1.100:80;
    server_name yoursite.com;
    error_log /var/log/yoursite.com-error.log;
    location / {
        proxy_pass http://balancer;
    }

}

Remember to replace yoursite.com with your real site name.
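Under the hood, Nginx’s default strategy for an upstream group is simple round-robin: each incoming request goes to the next server in the list, wrapping around at the end. A minimal Python sketch of the idea (the backend list mirrors the two slaves above; this is an illustration, not Nginx’s actual implementation):

```python
from itertools import cycle

# Backend list mirroring the upstream block on the master.
backends = ["192.168.1.109:80", "192.168.1.106:80"]

# cycle() yields the list elements in order, forever.
rr = cycle(backends)

def pick_backend():
    """Return the backend that should handle the next request."""
    return next(rr)

# Four consecutive requests alternate between the two slaves.
print([pick_backend() for _ in range(4)])
# → ['192.168.1.109:80', '192.168.1.106:80', '192.168.1.109:80', '192.168.1.106:80']
```

With equal weights, each slave ends up handling roughly half of the traffic.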

Restart nginx on the three servers:

service nginx restart

DNS records

In this example, I assume you have pointed your DNS records to IP 192.168.1.100, which is the “master” server and load balancer. Your DNS entries should look like this:

yoursite.com.     IN A 192.168.1.100
www.yoursite.com. IN A 192.168.1.100

For testing purposes, you can also map the website to your local network in your /etc/hosts file instead of using a real DNS server.
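For example, a hypothetical /etc/hosts entry on a client machine in the same network could look like this:

```
192.168.1.100   yoursite.com www.yoursite.com
```

Any request for yoursite.com from that client will then hit the balancer directly, without touching DNS.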

The “master” server

I call it the master because it is the one I use as the main load balancer; however, it can also serve requests, just like either of the two slaves. In that scenario, the load balancer server serves your content as well.

Alternative scenario: using the load balancer server to serve requests too.
Here is a little “hack” you need to do. Instead of having this content on the master:

upstream balancer {
    server 192.168.1.109:80;
    server 192.168.1.106:80;
}

server {
    listen 192.168.1.100:80;
    server_name yoursite.com;
    error_log /var/log/yoursite.com-error.log;
    location / {
        proxy_pass http://balancer;
    }

}

You should have this:

server {
    access_log off;
    error_log  /var/log/yoursite.com-error.log;
    listen 127.0.0.1:80;
    server_name yoursite.com www.yoursite.com;

    location ~* \.(gif|jpg|jpeg|png|ico|wmv|3gp|avi|mpg|mpeg|mp4|flv|mp3|mid|js|css|wml|swf)$ {
        root /var/www/yoursite.com;
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location / {
        root  /var/www/yoursite.com;
        index index.php index.html index.htm;
    }
}

upstream balancer {
    server 192.168.1.109:80;
    server 192.168.1.106:80;
    server 127.0.0.1:80;
}

server {
    listen 192.168.1.100:80;
    server_name yoursite.com;
    error_log /var/log/yoursite.com-error.log;
    location / {
        proxy_pass http://balancer;
    }

}

As you can see, I made two modifications: I added a virtual host for 127.0.0.1 listening on port 80, and then added that same server to the upstream pool. This means it will also be used to serve requests, handled by the Nginx instance running on the master server’s localhost.

At this point you should have your HTTP Nginx load balancer working without problems. Of course, there are many ways to configure balancing with Nginx, and there are lots of options you should explore to improve your setup.
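For instance, the stock upstream module supports per-server parameters such as weight, max_fails, fail_timeout and backup. A sketch of how they combine (the specific values here are arbitrary examples, not recommendations):

```
upstream balancer {
    # Send roughly twice as many requests to the first slave.
    server 192.168.1.109:80 weight=2;
    # Take the second slave out of rotation for 30s after 3 failed attempts.
    server 192.168.1.106:80 max_fails=3 fail_timeout=30s;
    # Only used when all of the other servers are unavailable.
    server 127.0.0.1:80 backup;
}
```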

Be sure to read the official documentation on the Nginx.org website:

  • http://nginx.org/en/docs/http/load_balancing.html
  • http://wiki.nginx.org/HttpUpstreamModule
  • http://wiki.nginx.org/LoadBalanceExample


Esteban Borges

Linux Geek, Webperf Addict, Nginx Fan. CTO @Infranetworking

  • mk0

    Do you have any tips for load balancing across multiple regions/nodes around the world, CDN-like?
    I’m very interested in this for large-traffic websites.

  • admin

    I’ve done that using round-robin DNS, with files synchronized via rsync and cron; however, it is not a real-time sync and not a truly “balanced” solution, but it works for many cases.

  • vipin

    I think you haven’t used the second slave anywhere… In nginx.conf, don’t you think the upstream balancer IPs should be server 192.168.1.106:80 and server 192.168.1.109:80?