NGINX and PHP-FPM 502 Bad Gateway

Found the problem last year; just want to share what I found out. It was caused by S3Fuse combined with a PHP script that scans the folder where S3Fuse is mounted. That puts a lot of load on the server: S3Fuse is very slow at scanning through files, especially when the bucket contains a large number of them.

I think S3Fuse is only recommended when you use it purely for reading files or for backups. The AWS alternative for storage mounted across multiple instances is EFS.
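
To see why, here is a minimal sketch of the kind of scan involved (the mount point /mnt/s3/uploads is hypothetical). s3fs/S3Fuse translates every directory and file operation into S3 API calls, so a loop like this fires request after request and ties up a PHP-FPM worker for the entire walk:

    <?php
    // Hypothetical S3Fuse mount point; adjust to your setup.
    $dir = '/mnt/s3/uploads';

    // Each entry read here is backed by S3 list requests, and every
    // filesize() call adds another metadata request, so a bucket with
    // many objects keeps this worker busy for a very long time.
    $handle = opendir($dir);
    while (($entry = readdir($handle)) !== false) {
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        echo $entry, ' ', filesize($dir . '/' . $entry), PHP_EOL;
    }
    closedir($handle);

While workers sit in scans like this, none are free to accept new FastCGI connections, which is exactly how the 502s in the question below start.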

Comments

  • Rei
    Rei over 1 year

    Ok so here is the situation: we currently have a server and we are migrating to AWS. The configuration is nearly identical, and we already ran Apache Bench, so the PHP-FPM pool is reasonably optimized as far as I know. But about an hour after we pointed the domain at the AWS DNS, we started getting 502 Bad Gateway and this error:

    connect() to unix:/var/run/nginx/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: domain.com, request: "GET / HTTP/1.0", upstream: "fastcgi://unix:/var/run/nginx/php-fpm.sock:", host: "domain.com"
    

    Do you have any idea what is wrong here? Or is there a way to trace what is causing the 502 Bad Gateway?


    Resources

    - EC2: m4.large
      - CPU: 2 vCPUs
      - RAM: 8 GB
    - Cloudfront
    - ELB
      - min: 2 instances
    - Memcached (AWS Elasticache) for PHP session handling
    

    Setup

    Running in AWS using: CloudFront - ELB - NGINX - PHP

    1. NGINX 1.8
    2. PHP 7.1.11

    Configuration

    NGINX

    # main context
    worker_processes auto;

    # events context
    worker_connections 4096;
    multi_accept on;
    use epoll;

    # http context
    send_timeout 3600;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 3600;
    gzip on;
    

    PHP-FPM

    user = nginx
    group = nginx
    listen = /var/run/nginx/php-fpm.sock
    pm = dynamic
    pm.max_children = 46
    pm.start_servers = 5
    pm.min_spare_servers = 3
    pm.max_spare_servers = 5
    request_terminate_timeout = 3600
    pm.max_requests = 400
    process.priority = -19
    catch_workers_output = yes
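
    The "(11: Resource temporarily unavailable)" in the error above is EAGAIN: it appears when every one of the pool's 46 children is busy and the socket's listen backlog overflows, so NGINX's connect() fails outright. As a sketch of how to trace it (assuming you can edit this same pool file), PHP-FPM's built-in status page and slow log would show whether the pool is saturated:

    ; hypothetical additions to the pool config above
    pm.status_path = /fpm-status
    ; dump a backtrace for any request running longer than 5s
    slowlog = /var/log/php-fpm/slow.log
    request_slowlog_timeout = 5s

    NGINX then needs a matching fastcgi location for /fpm-status; its "listen queue" and "active processes" counters reveal whether the pool is exhausted when the 502s hit, and the slow log names the scripts that are stuck.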
    
    • Tim
      Tim over 6 years
      That URL "fastcgi://unix:(etc)" looks fishy to me. Double check the URL, and if you can't get Unix sockets working, try TCP sockets.
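
      For reference, a minimal sketch of that fallback (the port and location block are only examples): have the pool listen on loopback TCP and point NGINX at it instead of the socket file.

      ; php-fpm pool: swap the unix socket for a TCP listener
      listen = 127.0.0.1:9000

      # nginx: pass PHP requests to the TCP listener
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;
      }
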
    • Rei
      Rei over 6 years
      I already checked the Unix socket, and the same configuration works on a different website with the exact same setup.
    • Tim
      Tim over 6 years
      Are both servers using exactly the same version of the same operating system? Things can change, or be set up differently. I'm 70% sure the problem is connecting Nginx to PHP, just try different approaches until one works.
    • Rei
      Rei over 6 years
      I'm using SaltStack to set up the configuration, so I am 100% sure that they have the same configuration and setup.
    • Tim
      Tim over 6 years
      Read the first sentence of my last comment again please.
    • Rei
      Rei over 6 years
      Is it possible that the problem is the performance of S3Fuse, since we are using a PHP script to scan the directory?
  • Rei
    Rei over 6 years
    Hey, thanks for the answer, but I already tried that and it didn't work. Interestingly, though, after I fire a script that calls session_destroy(), I get taken to a CloudFront error page, and now I'm getting 502 Bad Gateway or the page won't load at all.