Securing Nginx proxy

Solution 1

Nginx accepts connections and routes each request to a server block by matching server_name against the request's Host header. Nginx ships with a default server block that matches any Host, which allows any request reaching the server to be processed.

I like to set up a server block that checks for an empty Host header, and also to configure the default server to return a 403 error (e.g. if you try to access my server via its IP address). Each virtual host then gets its own configuration (i.e. any valid host matches a configuration; everything else hits either the default server block or the empty-host block).

Server for checking empty host:

server {
    listen       80;
    server_name  "";
    return       444;
}

Server for throwing a 403 on all unconfigured hosts:

server {
    listen       80;
    server_name  _;
    root         /path/to/error/files;
    error_page   403 /403.html;

    location /403.html {
        allow all;
    }

    deny all;
}

It should be noted that the listen directive isn't strictly necessary above (nginx listens on port 80 by default) - but my nginx runs behind Varnish, so it doesn't actually listen on port 80.

In your case, you will add a 3rd server that will handle your reverse proxy requests:

server {
    server_name mydomain.com;
    ...your other blocks...
}
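
For example, pairing that named server with the proxy settings from the question, the block might look something like the sketch below (the location, backend address, and timeouts are taken from the question; adjust them for your setup):

server {
    listen       80;
    server_name  mydomain.com;

    # Only requests whose Host header matches mydomain.com reach this block;
    # everything else falls through to the default/empty-host servers above.
    location /webservice {
        proxy_read_timeout     240;
        proxy_connect_timeout  240;
        proxy_pass             http://127.0.0.1:8080/;
    }
}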

You can test your config in a variety of ways (I am sure there are more, but these come to mind at the moment):

(I am using google.com as my test domain below; change it to your site of choice):

Specify the entire request in one go:

telnet mydomain.com 80
GET http://google.com

Specify the request and host header separately:

telnet mydomain.com 80
GET / HTTP/1.1
Host: google.com

Setup an entry in your hosts file (on your server):

127.0.0.1 google.com

Use curl to try and fetch a page:

curl google.com

(In this case, the hosts file tells your server that google.com can be found on your machine - which gets the request to nginx. Remove the entry when you are done testing.)
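
Alternatively, you can skip the hosts-file entry and have curl set the Host header itself (this assumes nginx is reachable at 127.0.0.1 from wherever you run the command):

curl -H "Host: google.com" http://127.0.0.1/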

Edit: It appears that an unintended consequence of the above is that invalid requests result in a 400 error. If interested, you can determine the root cause by adding the 'info' parameter to your error_log directive (an example directive is shown after the list below). In my case, the following causes were associated with the 400 errors that I saw:

Using telnet with the one-line GET request (and no host header):

client sent invalid request while reading client request line

Random requests (non-standard) made:

client sent invalid method while reading client request line

Using telnet, waited too long:

client timed out (110: Connection timed out) while reading client request headers

Other common causes were:

client sent invalid host header while reading client request headers
recv() failed (104: Connection reset by peer) while reading client request line
client closed prematurely connection while reading client request line
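
(The error_log change itself is a one-liner; the path here is just an assumption - keep whatever path your error_log directive already uses:)

error_log  /var/log/nginx/error.log  info;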

Using curl produced the expected 444 error. I imagine a valid request requires some additional syntax that my hand-typed telnet requests were missing. At any rate, the 400 errors are, as I understand it, processed before the 444s, so it is likely that they will not go away for truly invalid requests.

I was able to successfully get a 444 error using telnet though, it required modifying my config a bit:

server {
    listen       80 default;
    server_name  _ "";
    return       444;
}

Note that in the above, the 'unspecified server name' (underscore) and blank host (double quotes) do not explicitly define the default server, so you must add 'default' to the listen line.

Telnet output:

telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: google.com

Access log output:

127.0.0.1 - - [19/Oct/2011:00:51:16 -0400] "GET / HTTP/1.1" 444 0 "-" "-" "-"

Solution 2

I would recommend limiting the size of the headers (and the URL length) that clients are allowed to submit.

Please take a look at client_header_buffer_size and large_client_header_buffers.

Limiting the size of the client URI is a common way to keep a scanner or a broken client from sending large requests that may cause a buffer overflow.

So if you set large_client_header_buffers 1 1k, your nginx service will not accept a request URI or header line (cookies included) larger than (1 x 1k = 1k) 1 kilobyte of data.

Additionally, you can set ignore_invalid_headers off if you do not expect to receive any custom-made headers.
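
Put together, a minimal sketch of those settings (using the example values above - tune them for your own application) might look like:

http {
    # default buffer for reading a client request header (1k is nginx's default)
    client_header_buffer_size   1k;

    # a single 1k buffer for long request lines / header fields;
    # anything larger is rejected (414 for the request line, 400 for a header)
    large_client_header_buffers 1 1k;

    # reject requests with invalid header names instead of silently ignoring those fields
    ignore_invalid_headers      off;

    ...the rest of your http block...
}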

Comments

  • Nick Lothian over 1 year

    I'm using Nginx as a proxy for a Java web service.

    My config looks like this:

    location /webservice {
        proxy_read_timeout 240;
        proxy_connect_timeout 240;
        proxy_pass      http://127.0.0.1:8080/;
    }
    

    In my logs I'm seeing a lot of entries like this:

    xx.xx.xx.xx - - [18/Oct/2011:02:44:23 +0000] "GET http://l04.member.in2.yahoo.com/config/[email protected]&passwd=password HTTP/1.0" 200 9 "-" "Mozilla/4.0 (compatible; MSIE 5.0; Series60/2.8 Nokia6630/4.06.0 Profile/MIDP-2.0 Configuration/CLDC-1.1)"
    

    I've done some testing, and as far as I can see my proxy isn't forwarding requests to external sites, but I'd like to block these requests altogether and/or return a status code other than 200.

    I've done this:

    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }
    

    which blocks CONNECT attempts. Any ideas (beyond IP blocking) would be appreciated.

    • cyberx86 over 12 years
      A long shot - but, did you specify a server_name in your server block (i.e. you are not using the default server block)?
    • Nick Lothian over 12 years
      no, the default server block is being used
    • Shish over 12 years
      nginx is set up as a reverse proxy, not regular (I'm not even sure it /can/ work as a regular proxy); so since unregulated regular proxying can't be a problem, and you've tested to make sure that it isn't, I'm really not sure what else you hope to achieve...
    • cyberx86 over 12 years
      Unless I am misunderstanding the problem, try setting up a server block that specifies the server_name(s) that you want - that should restrict access to only the domains that you are serving (i.e. you shouldn't see a request for domains other than yours make it through) - if needed, in your default server block, throw a 403 (implying, that everything should be handled by the other server block with the server_name specified, and anything making it through to the default block gets dropped)
    • Nick Lothian over 12 years
      Thanks @cyberx86. I now have server { server_name mydomain.com; .... But I can still telnet to port 80 on mydomain.com, run 'GET l04.member.in2.yahoo.com/config/… HTTP/1.0' and get a 200 response. This seems wrong to me..
    • cyberx86 over 12 years
      I tried to duplicate your error on my server - both using telnet and curl (with the domain added to the hosts file) to port 80 - and was always returned a 403 (I specifically setup my default server to do that). On the other hand, I have another port open (8080) - and, while unable to get an 200 status, did get 302s. Implementing the same fix I used for my port 80 connections, changed that to 403s, and I soon saw other 403 entries start showing up in my logs. I'll post excerpts of my config as an answer.
  • cyberx86 over 12 years
    It appears to come down to the type of request - a valid request, with a Host header that doesn't match the one you have specified should return 403, a missing host header should return 444, and all improperly constructed requests will return a 400. I would suggest that 400s will be pretty common. I have updated my answer to reflect a few experiments. After testing, you may want to stop logging some of those requests, or your logs are likely to fill up fairly fast.