


Nginx Optimization – The Definitive Guide

We all love Nginx because it’s open-source, free and a high-performance solution for any website. It delivers great speed out of the box; however, you can get noticeably more out of it if you tweak it properly. In this tutorial, I’ll teach you the basics of optimizing Nginx for maximum performance.

Welcome to the Nginx Optimization Guide. Remember that this tutorial does not cover PHP or OS performance tweaks; it focuses entirely on Nginx.

12 steps to optimize Nginx for maximum performance

1) Hardware: choose your hardware platform carefully

This is a crucial decision when you are going to serve traffic with the Nginx web server. For example, if you are going to use Nginx for static files only, you won’t need a huge amount of RAM or a big CPU; however, if you run PHP with php-fpm and MySQL on the same server, you will probably need a better CPU with more RAM. I’m talking in general terms here; it all depends on how much traffic you have. Keep that in mind.

2) Use standalone Nginx, don’t use it as proxy for apache

Why use Nginx as a proxy for Apache? Today almost every application works perfectly with Nginx without the need for Apache or another web server installed as a backend. It may take some time to get everything working (especially if you use rewrite rules), but the performance you get will exceed any previous expectations. Give it a try!

3) Install Nginx from the source code with minimal required modules

If you compile Nginx from source, you get nothing beyond what you really need. That doesn’t happen when you use an rpm or deb package, which already includes lots of extra modules and configuration you probably won’t use.
Compiling from source with only the required modules reduces the memory footprint and improves server performance.

You can choose which modules are compiled when you run the ./configure command, using --with-module and/or --without-module options. The most commonly used are --with-http_gzip_static_module, --with-http_ssl_module and --with-http_stub_status_module. Example:

./configure --with-http_gzip_static_module --with-http_ssl_module --with-http_stub_status_module

And remember, if you need to add a module in the future, you can recompile again adding all the modules you need.
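If you are not sure which modules your current binary was compiled with, `nginx -V` prints the version together with the original ./configure arguments, so you can reuse them when recompiling:

```
# Show the version and the configure arguments of the installed binary
nginx -V
```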

4) worker_processes and worker_connections tuning

worker_processes tuning

This is one of the most important directives. It sets the number of worker processes Nginx spawns to handle requests; a good starting point is one worker per CPU core. To find the correct value, count how many CPU cores your server has, which can easily be done by running this command from the shell:

grep processor /proc/cpuinfo | wc -l

Example:

[user@server ~]$ grep processor /proc/cpuinfo | wc -l
8
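On most modern Linux systems, `nproc` from GNU coreutils gives you the same count with less typing:

```shell
# Print the number of processing units available to this process
nproc
```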

Then edit nginx.conf and set:

worker_processes  8;
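Since Nginx 1.3.8 (and 1.2.5), you can also let Nginx detect the core count by itself instead of hard-coding it:

```
# Autodetect the number of CPU cores (Nginx 1.3.8+ / 1.2.5+)
worker_processes auto;
```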

worker_connections tuning

This directive determines how many simultaneous connections each worker process can serve. If you have high traffic, you may need to raise it. For most sites, values between the default (768 in many distribution configs) and 1024 are just fine.

max_clients = worker_processes * worker_connections
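Note that worker_connections lives inside the events block. With the numbers above, a sketch of the resulting configuration and its theoretical capacity:

```
events {
    # each worker process can handle up to 1024 simultaneous connections
    worker_connections 1024;
}

# with worker_processes 8:
# max_clients = 8 * 1024 = 8192 simultaneous connections
```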

5) Tweaking Buffers

This is one of the most important things to tweak to avoid heavy read and write IO. If you set these buffer variables too low, Nginx will have to write to and read from disk, and that translates into lower performance and a higher load average on the box. This is just an example:

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;

Nginx official documentation explanation:

client_body_buffer_size If the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file.

client_header_buffer_size For the overwhelming majority of requests it is completely sufficient with a buffer size of 1K.

client_max_body_size Specifies the maximum accepted body size of a client request, as indicated by the request header Content-Length.

large_client_header_buffers Assigns the maximum number and size of buffers used for reading large client request headers.

6) Set proper Timeouts

These are some example timeouts; tweak them as needed to improve server performance.

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

Nginx official documentation explanation:

client_body_timeout Sets the read timeout for the request body from the client. The timeout applies only if the body is not received in one read step.

client_header_timeout Specifies how long to wait for the client to send a request header (e.g.: GET / HTTP/1.1).

keepalive_timeout The first parameter assigns the timeout for keep-alive connections with the client. The server will close connections after this time.

send_timeout Specifies the response timeout to the client. This timeout does not apply to the entire transfer but, rather, only between two successive client read operations.

7) Sendfile, tcp_nodelay and tcp_nopush

Sendfile can be activated from the main nginx.conf config file. It copies data between one file descriptor and another at the kernel level, so it’s far more efficient than the combination of read and write, which would require transferring data to and from user space.

tcp_nodelay and tcp_nopush

These two directives affect performance at a deep network level: they determine how the operating system handles the network buffers and when it flushes them to the end user.

  • tcp_nopush is only available when sendfile is used; it causes Nginx to attempt to send its HTTP response headers in one packet instead of using partial frames. This is useful for prepending headers before calling sendfile, and for throughput optimization.
  • tcp_nodelay helps you avoid buffering data-sends and is recommended when sending frequent small bursts of data in real time. It enables or disables the TCP_NODELAY socket option, and applies only to keep-alive connections.

sendfile on;
tcp_nopush on;
tcp_nodelay on;

Note: normally, using tcp_nopush along with sendfile works very well. However, there are some cases where it can slow things down (especially with caching systems), so run your own tests to find out whether it helps in your setup.

8) Enable Gzip & Expires Header

Enabling Gzip is another thing you can’t miss: gzip can give you a 50% to 75% improvement in website speed by reducing the amount of data transferred over the network. You can enable gzip using this configuration:

gzip on;
gzip_min_length  1100;
gzip_buffers  4 32k;
gzip_types    text/plain application/x-javascript text/xml text/css;
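If you want to go a bit further, a couple of related directives control the compression level and proxy-cache behavior (the level shown here is just an example; tune it to your CPU budget):

```
gzip_comp_level 5;   # 1-9: higher compresses more but uses more CPU
gzip_vary on;        # send "Vary: Accept-Encoding" so proxies cache both variants
```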

Expires header can be set from each virtual host configuration, you can place this expires directive inside http {}, server {} or location {} blocks. For example:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

That avoids unnecessary requests to your web server by keeping all static assets cached for the amount of time you need.

Note: you can also tweak this separately for each file extension. Need to know more? Check out How to enable Browser Cache Static Files on Nginx

9) Disable unnecessary logs

The access_log directive determines the file where all requests to your websites are logged. That means extra write IO whenever it is enabled. So, if you don’t need the access log for analytics, the best thing you can do is disable it:

access_log off;

You may also modify the error_log directive to log only critical errors and avoid unnecessary warnings:

error_log logs/error.log crit;
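If you still want access logs for dynamic pages but not for static assets, you can disable logging only inside specific location blocks instead of globally (the extension list here is just an example):

```
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    access_log off;   # skip logging for static assets only
    expires 365d;
}
```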

10) Configure open_file cache

A big part of a running operating system’s work consists of opening and closing files, and that can have a large impact on your server performance. That’s why open_file_cache exists. Enabling open_file_cache lets you cache open file descriptors, frequently accessed files, and file information such as size and modification time, among other things. This can significantly improve your I/O.

Tweak as you need:

open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

11) Install Google’s PageSpeed module

PageSpeed for Nginx is a module that automatically optimizes your websites in a very large number of ways (check out this link to see the complete list of features). Even though it’s in beta, you can still try it to get the best performance for your web apps. Check out this installation guide: How to install Nginx PageSpeed module

12) Setup a Nginx load balancing solution

If you have a lot of traffic and need high availability while also increasing your web server performance, remember you can always use the fantastic Nginx load balancing feature. Check out this article to learn how: How to configure Nginx load balancing

After you are done tweaking, just restart Nginx to apply the changes:

service nginx restart
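Before restarting in production, it’s a good idea to validate the configuration first: `nginx -t` checks the config syntax, and a graceful reload avoids dropping active connections:

```
# Test the configuration, then reload workers gracefully if it passes
nginx -t && service nginx reload
```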

That’s all, happy tweaking 😀


Esteban Borges

Linux Geek, Webperf Addict, Nginx Fan. CTO @Infranetworking

  • Thanks a ton for this nginx tutorial, I had plenty of struggle with my website (always running into 502 bad gateways due to high traffic).

    I think the open_file_cache + smaller timeouts actually saved my ass with the never-ending pending processes.

  • Matthew Moisen

    Hi Chris, thanks for this optimization guide. Regarding client_body_buffer_size, would it be OK to set this to be equal to client_max_body_size, which in my case is 20MB? I have an API receiving anywhere from 1K to 20MB of data per POST request, and figure that raising client_body_buffer_size to 20MB would reduce IO. Are there any downsides to this?

    • Esteban Borges

      Glad you like it.

      Can’t tell for sure, it depends on the app you are running.
      Try setting it, and let us know the results.