Nginx and PHP-FPM Configuration and Optimizing Tips and Tricks - Comment Page: 4

I previously wrote a guide, Howto install Nginx/PHP-FPM on Fedora 29/28, CentOS/RHEL 7.5/6.10, but that is just an installation guide, and in many cases the basic Nginx and PHP-FPM configuration is good enough. If you want to squeeze all the juice out of your VPS or web server(s) and make your maintenance work a little easier, then this guide might be useful. These tips are based entirely on my own experience, so they may not be an absolute truth, and in some situations a completely different configuration may work better. It's also good to remember to leave resources for other services if you run, for example, MySQL, PostgreSQL, MongoDB, a mail server, a name server and/or an SSH server on the same...

168 comments on “Nginx and PHP-FPM Configuration and Optimizing Tips and Tricks - Comment Page: 4”

    1. Hi,
      1. How many connections can a php-fpm child process handle? I’m trying to handle 15k concurrent connections per second, but without setting pm.max_children = 20480 it’s not working (testing with ab), and then I get high system load, I/O waits and so on…
      2. Which method would you recommend: TCP port or unix socket?


      • Hi nixoid,

        1. First I have to ask: are you running just one server for this amount of concurrent php-fpm requests per second? And next, could you set up proper caching to reduce php-fpm usage? So the question is, do you really have 15k concurrent connections per second which all need PHP?

        Let’s assume that you have 1k totally different pages to serve per second (or per 10 seconds). If you could use a 1-second cache, it could cut over 14k php-fpm requests per second, and a 10-second cache could cut about 149k php-fpm connections per 10 seconds. This is just theory.

        2. The speed difference between a socket and a port is not massive (a socket is normally a little faster in a single-machine local setup), but if you use a TCP port connection, then you can easily add several php-fpm servers to process your requests.
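As a sketch of the TCP approach, nginx can balance FastCGI requests over several php-fpm backends with an upstream block. The IP addresses, port and upstream name below are hypothetical examples, not from the original guide:

```nginx
# Several php-fpm backends listening on TCP port 9000 (example addresses)
upstream php_backend {
    server 10.0.0.11:9000;
    server 10.0.0.12:9000;
    server 127.0.0.1:9000 backup;  # local fallback if remote nodes are down
}

server {
    listen 80;
    root /var/www;
    index index.php;

    location ~* \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backend;   # load-balanced instead of a single socket
    }
}
```

With a unix socket you would instead use `fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;`, which only works on the local machine.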

    2. Yes, I really have more than 15k connections per second and all of them need PHP (running Moodle). I’m using APC opcode caching.

      • Hi again nixoid,

        Yes, I believe that you have more than 15k connections per second, but if you can’t handle the php-fpm load, then you need more real processing capacity, for example a couple of new servers to handle this load. I don’t think you can improve this situation by changing server configs, because you have too many php-fpm requests for one server. One Nginx can handle this load easily, but normally it’s possible to do some real caching: serve static content to every user until that content changes, then serve the changed content until it changes again, and so on.

    3. Yes, nginx can handle this load, but not php-fpm. What would be the best value for pm.max_children?

      JR, do you mean more new nginx+php-fpm servers or only php-fpm servers? And what about virtual servers?

      • The best pm.max_children value depends on your amount of RAM and on how much RAM PHP uses per Moodle page load.

        Virtual servers might be the best choice, because then you could easily add more RAM, CPUs or disk space on the fly, or add new nodes when you need them, or vice versa. It’s also possible to do an HA cluster setup with virtual servers, using DNS round robin and/or load/node balancer(s).

        If you want a more stable setup, then it’s wise to use multiple nginx+php-fpm servers, if possible, because then there is failover when one server goes down. Or two Nginx servers to handle the web traffic and two or three php-fpm servers to handle the PHP processing. And I mean servers with a few gigs of RAM and 4-8 CPU cores.

        Still, it’s good to check whether there is any way to cache some Moodle content: front pages, listings etc. I understand it’s impossible to cache tests or similar content which is served per user. I know what Moodle is, but I have never set up a Moodle environment, so I’m not sure whether it has some internal caching methods.

    4. I have one physical server with 4 CPUs and 96 gigs of RAM. There I have HAProxy for balancing HTTP traffic between physical and virtual servers, plus there is a MySQL database and nginx+php-fpm. Moodle uses internal caching, but it’s a very bad caching system (high I/O). I can add 5 virtual servers with 4 vCPUs and 4 GB RAM each, but I don’t know how many connections they can handle, and which services should I run on these servers: nginx+php-fpm or only php-fpm? And how should I share the web content to the remote php-fpm servers, via NFS?

      • One thing came to my mind about APC and high I/O: first check that you have apc.stat set to 0 (the PHP manual has more info about it). Of course it’s good to test that Moodle still works without any problems after that.

        Then the real problem: I just read more about Moodle from Moodle’s Performance FAQ. It looks like Moodle is the same style of monster as Magento. Based on this FAQ, one request can easily use more than 50 MB of memory, so 10-20 concurrent users use ~1 GB of RAM. In the best case, where every request uses “only” 50 MB of RAM, you need 15 000 * 50 MB = 750 000 MB = ~732 GB of RAM to handle these php-fpm requests, and then you also need more RAM for nginx and MySQL.
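The back-of-the-envelope arithmetic above can be checked with plain shell arithmetic (the 15 000 requests and 50 MB per request figures come straight from the discussion):

```shell
# Worst-case RAM estimate: concurrent php-fpm requests * memory per request (MB)
requests=15000
per_request_mb=50
total_mb=$((requests * per_request_mb))
total_gb=$((total_mb / 1024))
echo "${total_mb} MB = ~${total_gb} GB"   # 750000 MB = ~732 GB
```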

        It’s very hard to say without testing what you really need to run Moodle and serve pages to 15 000+ concurrent users without problems. The biggest problems are Moodle’s very high memory usage and the lack of proper caching.

        You can also check this page, it has just some fine tuning tips.

        But if you add more servers, then I recommend separate MySQL server(s) and nginx+php-fpm nodes behind a load balancer. One nginx server with multiple php-fpm servers is a much more error-prone setup: your site is totally down when your single nginx server goes down. Yes, you can share web content via NFS if it’s fast enough; this might work with one server, but it might also become a problem when all your nodes use the same disk and you have a 15 000+ user load. Another possible solution is to sync all data from the network disk to local disks, or maybe use some shared directories and some local directories per node.

    5. Oh no, we are using Moodle every day with 100-150 concurrent users without problems, and the hardware is a laptop with 1 CPU (4 cores) and 4 GB of RAM ))

      • Okay. Do you mean that you get 100-150 connections (which use php-fpm) per second, or do you mean that you can have 100-150 users on the site at the same time (some idle and some doing something)?

        If you mean concurrent connections, then let’s do some calculation. Let’s say 70% of your memory is used by PHP:
        4096 * 0.7 = 2867.2

        With 150 concurrent connections this means that your Moodle site is using only 2867.2 / 150 = ~19.1 MB per PHP process, which is much less than what the Moodle docs say (50 MB or much more). But if this is the case, then you don’t need so much memory to run your 15k+ site.

    6. Yes, I mean 150 concurrent connections.
      So, I have some test results. I don’t have a memory problem; everything depends on php-fpm and PHP processing capacity. But now I have 4 physical servers with 24 cores each and can’t handle more than 1000 connections per second. All cores are at 100% load, load average is 100-130, iowait 0.4-1.2, pm.max_children = 200, pm.max_requests = 500, and there are lots of errors in the php-fpm log file about nginx not being able to connect to the php-fpm IP socket.

      • Okay, how much memory do you have on your current servers?

    7. Nicely explained post. I got what I needed for my testing

    8. Thanks a lot for all these tips. I was configuring my low-memory VPS server and I noticed fpm used a lot of memory. After tuning with your posts I have half of my memory free with all websites installed :)

      • Hi Maco,

        Excellent! :)

        Yes, in principle PHP-FPM uses as much memory as you give to it. If you use large PHP frameworks or software, then some kind of proxy/cache is the best way to reduce PHP load on the server(s) and speed things up.
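A minimal sketch of such a cache, using nginx’s built-in fastcgi_cache as a short-lived micro-cache in front of PHP-FPM. The cache path, zone name and the 1-second validity are example values, not from the original guide:

```nginx
# Small FastCGI micro-cache (path and zone name are examples)
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=microcache:10m
                   max_size=100m inactive=10m;

server {
    listen 80;
    root /var/www;
    index index.php;

    location ~* \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;

        # Serve identical responses from cache for 1 second, so only
        # one request per second per URL actually hits PHP-FPM
        fastcgi_cache microcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 1s;
        fastcgi_cache_use_stale updating error timeout;
    }
}
```

Even a 1-second validity implements the “serve static content until it changes” idea from the thread, because under heavy load most requests are duplicates of a page generated less than a second ago.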

    9. ;)

      Yes, I’m using W3 Total Cache on WordPress, but I think I’ll try memcached in the next few days… even if I really don’t need it (I have 200 visits per day… so not big traffic).

      I’m not really sure how many users at the same time I can support with the configuration I made:
      pm.max_children = 4
      pm.start_servers = 2
      pm.min_spare_servers = 1
      pm.max_spare_servers = 2
      pm.max_requests = 200

      with, on nginx:
      worker_connections 1024;

      Normally nginx supports 1024 simultaneous connections with at most 4 of them using PHP… I think if PHP responses are fast, 4 could be good for most “personal” blogs on the net.

      Anyway, a stress test with BlazeMeter says the configuration is good with 50 (!!) concurrent users receiving responses in a reasonable time :)

      Thanks a lot… I’ll ping back this article in my blog post talking about all my tests…

      • Hi Marco,

        Your configuration looks reasonable.

        I have repeatedly seen configurations with very high numbers in the pm.* section. That’s okay if you have, for example, 16 cores and lots of memory, but on low-end boxes the result is not so good. In the worst case, PHP-FPM uses all the processor capacity and memory you have and tries to generate all requested pages at the same time. With reasonably low values, PHP-FPM adds requests to a queue and processes them as soon as possible. Normally every user gets their pages, but if there are traffic peaks, the waiting time might be a little longer.

        So in many cases, less is more. :)
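As a sketch, a conservative pool configuration for a small VPS might look like the following. The exact numbers are assumptions for illustration and should be sized against your own per-process memory usage, not copied blindly:

```ini
; Example php-fpm pool settings for a low-memory VPS (values are illustrative)
pm = dynamic
pm.max_children = 5        ; hard cap: roughly available RAM / RAM per PHP process
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 200      ; recycle workers periodically to limit memory leaks
```

With a low pm.max_children, excess requests wait briefly in the queue instead of pushing the box into swap, which is usually the better trade-off on small machines.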

    10. […] create this useful configuration I follow this guide. If !1 then 0 it’s an incredible technical […]

    11. Hi, do you have any good configuration for php-fpm
      for a 2 GB RAM server on a medium-high traffic site?

      thank you~

      • Hi Paddy,

        I don’t have any “ready” configuration for you, because it depends a lot on the application(s) you are running, how much RAM every page generation needs, and what else you are running on your server.

        Could you tell me a little bit more?
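As a rough rule of thumb (my own heuristic, not from the original guide), pm.max_children can be estimated by dividing the RAM you can spare for PHP by the average php-fpm process size. A hedged shell sketch, assuming 2 GB total RAM, ~512 MB reserved for the OS and other services, and ~40 MB per PHP process:

```shell
# Estimate pm.max_children from available memory (all numbers are assumptions)
total_ram_mb=2048       # total RAM on the VPS
reserved_mb=512         # leave room for the OS, MySQL, nginx, etc.
per_process_mb=40       # average php-fpm worker RSS; measure on your own box
max_children=$(( (total_ram_mb - reserved_mb) / per_process_mb ))
echo "pm.max_children = ${max_children}"   # pm.max_children = 38
```

Measure per_process_mb on the actual application: a heavy framework can easily need 2-3x more per worker, which shrinks the safe pm.max_children accordingly.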

    12. Hi Paul

      I’m currently setting up Apple Profile Manager 2 (to manage iPads and iPhones) proxied through Nginx. I’m new to Nginx, so thank you so much for this article; I think I might have had a lightbulb moment. Would something like this work?

      server {
          listen 80;
          return 301 https://$host$request_uri;
      }

      server {
          listen 443;
          root /var/www;
          index index.php;

          # Pass PHP scripts to PHP-FPM
          location ~* \.php$ {
              fastcgi_index index.php;
              #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param SCRIPT_NAME $fastcgi_script_name;
          }

          location ~ /\.ht {
              deny all;
          }
      }

      I haven’t included any certificate directives because Profile Manager has its own certificates – is this the correct way to handle this?

      I presume I have to have a fastcgi_params file?



    13. Hi JR!
      Thank you for this great post.
      However, there is something that I don’t understand: how to set pm.max_children, pm.start_servers, pm.min_spare_servers and pm.max_spare_servers…
      I have an Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz with 8 cores and 48GB of RAM.
      I have set:
      pm.max_children = 1000
      pm.start_servers = 245
      pm.min_spare_servers = 45
      pm.max_spare_servers = 350

      I want to handle at least 1000 concurrent users.
      But I get an error saying that I have too many processes…
      What’s wrong?
      Thank you very Much!

      • Hi Josh,

        First, you have an excellent server, and handling 1000 concurrent users shouldn’t be a problem, depending of course on your network and content.

        Then, I have a few questions for you:

        1. Could you post the values of the following nginx parameters:

        2. What app are you running, and how much memory does every PHP process use?

        3. Do you currently use any caching, or would it be possible to use some?

        4. What else is running on this same server (a DB, background processes etc.)?

    14. Great article, thank you!

    15. Hi.

      First of all, thanks for your article, it is really good.

      Now I need to show you something.

      I wrote a “real time” application on nginx + php-fpm on an Amazon EC2 cloud. I use AJAX to request some information every 0.3 to 0.8 seconds. At high-load moments there are 50 to 75 concurrent users, everyone making 1 or 2 requests every 0.3 to 0.8 seconds, so it is more or less 500 to 750 requests per second at the moments with the highest loads.

      The EC2 instance is a large type instance with an Intel Xeon 1.8 GHz, 2 cores and 8 GB of RAM, and it is fully dedicated to this web application.

      In my nginx config file worker_processes is set to auto and worker_connections to 2048, and the pm parameters are as follows:
      pm = dynamic
      pm.max_children = 250
      pm.start_servers = 50
      pm.min_spare_servers = 20
      pm.max_spare_servers = 50
      pm.max_requests = 5000

      At some moments the server answers the AJAX requests with a 503 error due to back end capacity.

      From your point of view, is this configuration enough to support our environment?

      Additionally, the system is behind a load balancer that redirects the requests to 3 instances like the one I mentioned before.

      Please, I hope you can give me some advice about it.

      Kind regards!

      • Hi Chemi,

        First of all, you are very welcome! Then let’s check some useful background information.

        Do you have the php-fpm status page enabled? If yes, could you post the current status? If not, could you enable it, run your system for some time and then post the current status?

        You can use a subdomain or another domain for the status page if you don’t want to touch your site’s production config files.

        You have to enable the status page in the php-fpm pool config (uncomment pm.status_path = /status) and use something like the following in your nginx config:

        location = /status {
            access_log off;
            # Replace following with your ip (this keeps your status page private)
            allow 127.0.0.1;
            deny all;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /status;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        }

        Then you need to reload/restart your php-fpm and nginx.

        Then could you post the output of the following command when everything is running and you have users using your service:

        ps -ylC php-fpm --sort=rss

        And could you also post a few 503 error lines from the log?
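To turn that ps output into an average per-process memory figure, a small pipeline like this can help (a sketch; the process name may differ on your distro, and the ps RSS column is in KB):

```shell
# Average RSS (in MB) across all php-fpm workers
ps -C php-fpm -o rss= \
  | awk '{sum += $1; n++} END { if (n) printf "avg %.1f MB over %d processes\n", sum/n/1024, n }'
```

This average is exactly the per-process number needed to size pm.max_children sensibly against available RAM.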
