Optimizing Nginx for serving files bigger than 1GB

Yesterday I ran into a strange issue: I realized that nginx was not serving files larger than 1GB. After some investigation I found that the cause was the proxy_max_temp_file_size directive, which defaults to 1024 MB.

This directive sets the maximum size of the temporary file nginx writes to disk when a proxied response does not fit into the in-memory proxy buffers. Once the temporary file reaches this limit, nginx stops buffering to disk and passes the rest of the response to the client synchronously, as it arrives from the upstream server.
If you set proxy_max_temp_file_size to 0, temporary files are disabled altogether and the whole response is relayed synchronously.
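
If you would rather avoid disk buffering entirely, that zero setting looks like this. A minimal sketch; the proxy_pass target is an illustrative placeholder, not from the article:

location / {
    proxy_pass http://127.0.0.1:8080;   # illustrative backend address
    proxy_max_temp_file_size 0;         # no temp file: relay the response synchronously
}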

In this case it was enough to place the directive inside the location block, although it is also valid in the server and http blocks. With this configuration nginx can serve files larger than 1GB.

location / {
    ...
    proxy_max_temp_file_size 1924m;
    ...
}
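
The same directive can also be set once at the http or server level so it applies to every location below it. A hedged sketch, with the listen port and backend address as illustrative assumptions:

http {
    proxy_max_temp_file_size 1924m;     # inherited by every server and location below

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;   # illustrative backend address
        }
    }
}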

Restart nginx to apply the changes:

service nginx restart
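
Alternatively, you can test the configuration first and reload without dropping active connections:

nginx -t
service nginx reload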


Esteban Borges

Linux Geek, Webperf Addict, Nginx Fan. CTO @Infranetworking

  • George

    That is wrong and should not be done: it does not scale and serves no purpose except as a temporary band-aid. And if you need to cache and deliver a huge number of large files, the proposed solution will kill your disk I/O even if you are on RAID-0 with SSDs.

    What one should do instead is make sure the connection between the proxy and the backend has a large enough keepalive timeout (and is a stable one), and then set proxy_max_temp_file_size to a small amount, so a small chunk is buffered, sent to the client, and then another chunk is buffered from the backend on the same connection, as in the sketch below.
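
    A minimal sketch of that approach; the upstream name, addresses, and timeout values are illustrative assumptions, not something the commenter specified:

    upstream backend {
        server 127.0.0.1:8080;              # illustrative backend address
        keepalive 16;                       # keep idle connections to the backend open
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;         # required for upstream keepalive
            proxy_set_header Connection ""; # don't forward "Connection: close" upstream
            proxy_read_timeout 300s;        # generous timeout for large, slow transfers
            proxy_max_temp_file_size 16m;   # buffer only small chunks to disk at a time
        }
    }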