Server response gets cut off halfway through


Solution 1

I had the same problem:

Nginx cut off some responses from the FastCGI backend. For example, I couldn't generate a proper SQL backup from PhpMyAdmin. I checked the logs and found this:

2012/10/15 02:28:14 [crit] 16443#0: *14534527 open() "/usr/local/nginx/fastcgi_temp/4/81/0000004814" failed (13: Permission denied) while reading upstream, client: *, server: , request: "POST / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "", referrer: "http://*/server_export.php?token=**"

All I had to do to fix it was to give proper permissions to the /usr/local/nginx/fastcgi_temp folder, as well as client_body_temp.
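
In case it helps, this is roughly what the fix looks like on a setup where the worker processes run as www-data (check the user directive in your nginx.conf, and adjust the client_body_temp path to wherever yours actually lives):

# give the nginx worker user ownership of the temp directories
# (www-data and the client_body_temp location are assumptions; adjust to your setup)
chown -R www-data:www-data /usr/local/nginx/fastcgi_temp /usr/local/nginx/client_body_temp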

Fixed!

Thanks a lot samvermette, your Question & Answer put me on the right track.

Solution 2

Looked up my nginx error.log file and found the following:

13870 open() "/var/lib/nginx/tmp/proxy/9/00/0000000009" failed (13: Permission denied) while reading upstream...

Looks like nginx's proxy was trying to save the response content (passed in by thin) to a file. It only does so when the response size exceeds proxy_buffers (64kb by default on 64-bit platforms). So in the end the bug was related to my response size.

I ended up fixing my issue by setting proxy_buffering to off in my nginx config file, instead of upping proxy_buffers or fixing the file permission issue.
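
For anyone who wants the exact snippet, this is a minimal sketch of that change inside the proxied location (the upstream address is just a placeholder for my thin server):

location / {
    proxy_pass http://127.0.0.1:3000;   # placeholder upstream address; yours will differ
    proxy_buffering off;                 # stream responses instead of buffering them to disk
}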

Still not sure about the purpose of nginx's buffering. I'd appreciate it if anyone could elaborate on that. Is disabling the buffering completely a bad idea?

Solution 3

I had a similar problem with the response from the server being cut off.

It happened only when I added a JSON header before returning the response: header('Content-type: application/json');

In my case gzip caused the issue.

I solved it by specifying gzip_types in nginx.conf and adding application/json to the list before turning on gzip:

gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
gzip on;
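
If it helps, a quick way to check that the JSON endpoint is now compressed and served in full is to request it with an Accept-Encoding header and look at the response headers (the URL is just a placeholder):

curl -sH 'Accept-Encoding: gzip' -D- -o /dev/null http://example.com/api/endpoint
# expect to see "Content-Encoding: gzip" among the response headers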

Solution 4

It's possible you ran out of inodes, which prevents nginx from using the fastcgi_temp directory properly.

Try df -i and if you have 0% inodes free, that's a problem.

Try find /tmp -mtime +10 (older than 10 days) to see what might be filling up your disk.

Or maybe it's another directory with too many files. For example, go to /home/www-data/example.com and count the files:

find . -print | wc -l
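
If it is not obvious which directory holds the bulk of the files, a rough sketch like the following counts entries under a few candidate directories (the paths are only examples, and it can be slow on large filesystems):

# count filesystem entries under each candidate directory, smallest first
for d in /tmp /var /home; do
    printf '%s\t%s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -n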

Solution 5

Thanks for the question and the great answers, it saved me a lot of time. In the end, the answers from clement and sam helped me solve my issue, so the credit goes to them.

Just wanted to point out that, after reading a bit about the topic, it seems it is not recommended to disable proxy_buffering, since it could make your server stall if the clients (the users of your system) have a slow internet connection, for example.

I found this discussion very useful for understanding more. The example from Francis Daly made it very clear for me:

Perhaps it is easier to think of the full process as a chain of processes.

web browser talks to nginx, over a 1 MB/s link. nginx talks to upstream server, over a 100 MB/s link. upstream server returns 100 MB of content to nginx. nginx returns 100 MB of content to web browser.

With proxy_buffering on, nginx can hold the whole 100 MB, so the nginx-upstream connection can be closed after 1 s, and then nginx can spend 100 s sending the content to the web browser.

With proxy_buffering off, nginx can only take the content from upstream at the same rate that nginx can send it to the web browser.

The web browser doesn't care about the difference -- it still takes 100 s for it to get the whole content.

nginx doesn't care much about the difference -- it still takes 100 s to feed the content to the browser, but it does have to hold the connection to upstream open for an extra 99 s.

Upstream does care about the difference -- what could have taken it 1 s actually takes 100 s; and for the extra 99 s, that upstream server is not serving any other requests.

Usually: the nginx-upstream link is faster than the browser-nginx link; and upstream is more "heavyweight" than nginx; so it is prudent to let upstream finish processing as quickly as possible.
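
So instead of switching buffering off, the usual advice is to keep it on and, if large responses keep spilling to temp files, raise the in-memory buffer sizes instead; a hedged sketch, with purely illustrative sizes:

# keep buffering on so the upstream is released quickly,
# but let larger responses stay in memory (sizes are illustrative, not recommendations)
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 16 16k;
proxy_busy_buffers_size 32k;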


Comments

  • samvermette, almost 4 years ago:

    I have a REST API that returns json responses. Sometimes (seemingly at completely random), the json response gets cut off halfway through. So the returned json string looks like:

    ...route_short_name":"135","route_long_name":"Secte // end of response
    

    I'm pretty sure it's not an encoding issue because the cut-off point keeps changing position, depending on the json string that's returned. I haven't found a particular response size for which the cut-off happens either (I've seen 65kb not get cut off, whereas 40kb would).

    Looking at the response headers when the cut-off does happen:

    {
        "Cache-Control" = "must-revalidate, private, max-age=0";
        Connection = "keep-alive";
        "Content-Type" = "application/json; charset=utf-8";
        Date = "Fri, 11 May 2012 19:58:36 GMT";
        Etag = "\"f36e55529c131f9c043b01e965e5f291\"";
        Server = "nginx/1.0.14";
        "Transfer-Encoding" = Identity;
        "X-Rack-Cache" = miss;
        "X-Runtime" = "0.739158";
        "X-UA-Compatible" = "IE=Edge,chrome=1";
    }
    

    Doesn't ring a bell either. Anyone?