HAProxy seems to be load balancing instead of following ACL rules
Solution 1
It looks like it was reusing connections, which you shouldn't do when you're doing content switching. I added the following:
option httpclose
per this post: "Why am I getting errors in my HAProxy content switching config?"
All URLs and domains work correctly now.
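For reference, here is roughly where that directive sits (a sketch only; I'm assuming it goes in the defaults section, though it can also be set per frontend or backend):

defaults
    mode http
    # Close each HTTP connection after the response, so every new request
    # is re-evaluated against the frontend ACLs instead of being sent down
    # an already-established connection to whichever backend handled the
    # previous request.
    option httpclose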
Solution 2
If you read the HAProxy documentation, you can find this paragraph:
hdr(<name>)  The HTTP header <name> will be looked up in each HTTP request.
             Just as with the equivalent ACL 'hdr()' function, the header
             name in parenthesis is not case sensitive. If the header is
             absent or if it does not contain any value, the roundrobin
             algorithm is applied instead.
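For context, that paragraph describes the hdr() balance algorithm, which would be configured in a backend roughly like this (a sketch; the backend and server names are invented):

backend per_host
    # Choose a server from the value of the Host header; if the header
    # is missing or empty, HAProxy falls back to roundrobin as described
    # in the quoted paragraph.
    balance hdr(Host)
    server app1 app1.example.com:80
    server app2 app2.example.com:80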
This fallback behavior can explain why HAProxy is load balancing between servers instead of following the ACLs. To make sure, you need to check that the requests you receive really carry the Host header.
I think you can match on the destination IP, name, or URL instead of checking the Host header.
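One way to check is to log the header on the frontend (a sketch using HAProxy's capture directive; the length of 64 is an arbitrary choice):

frontend http-in
    bind *:80
    # Record the Host header in each HTTP log line so you can confirm
    # that requests really arrive with the value the ACLs expect.
    capture request header Host len 64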
Comments
-
djsumdog over 1 year
I'm seeing some really odd behavior with HAProxy. I have the setup below, designed to send example.com/wiki to one server and example.com/ to another. The trouble is that /wiki only works half the time, and the / webserver only works half the time as well. Upon closer examination, it seems like HAProxy is switching between the two backends, possibly load balancing them, instead of going to a specific backend based on the ACL rules!
Another oddity is that both services-staging.example.com/greenoven and staging.example.com go to kumquat, even though the rules specifically say that only the services-staging host should go to that backend.
Is something wrong with my HAProxy configuration? Am I using ACLs or backends incorrectly?
global
    log 127.0.0.1 local1 debug
    maxconn 200000
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 200000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats uri /monitor
    stats auth admin:GS01
    stats refresh 5s
    stats enable

frontend http-in
    bind *:80
    option forwardfor

    #Staging Hosts
    acl host_staging hdr(host) -i staging.example.com
    acl host_staging_services hdr(host) -i staging-services.example.com

    #Production Hosts
    acl host_prod hdr(host) -i example.com www.example.com
    acl host_prod_services hdr(host) -i services.urbanatla.com

    #URL Paths
    acl url_wiki url_beg /wiki
    acl url_go url_beg /greenoven
    acl url_arcgis url_beg /ArcGIS

    #Staging Backends
    use_backend pluto if host_staging_services url_arcgis
    use_backend kumquat if host_staging_services url_go
    use_backend kumquat if host_staging url_wiki
    use_backend cumberland if host_staging

    #Production Backends
    use_backend saturn if host_prod_services url_arcgis
    use_backend willow if host_prod_services url_go
    use_backend willow if host_prod url_wiki
    use_backend ganges if host_prod

backend kumquat
    server kumquat kumquat.example.com:8080 maxconn 5000

backend cumberland
    server cumberland cumberland.example.com:80 maxconn 5000

backend ganges
    server ganges ganges.example.com:80 maxconn 5000

backend articdata
    server articdata articdata.example.com:80 maxconn 5000

backend saturn
    server saturn saturn.example.com:80 maxconn 5000

backend willow
    server willow willow.example.com:8080 maxconn 5000

backend pluto
    server pluto pluto.example.com:80 maxconn 5000
-
djsumdog over 12 years
My hdr() functions seem to be working correctly for production. I'm only calling the services by their hostnames, never IP, so the host header is always populated. It's the URLs that seem to be screwy. I tried changing the url_beg to path_beg and am having the same issues.
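For reference, the path_beg form of those ACLs would look like this (a sketch; path_beg matches only the path component of the request, whereas url_beg matches the full request URL):

    acl url_wiki   path_beg /wiki
    acl url_go     path_beg /greenoven
    acl url_arcgis path_beg /ArcGIS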
-
ThatGraemeGuy over 12 years
I'd suggest using "option http-server-close" instead, so that you still have keep-alive enabled on the client side of the connection. Since the client side is the one with the higher latency, it gives you a big advantage.
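In configuration terms, that suggestion would look something like this (a sketch, assuming it replaces option httpclose in the defaults section):

defaults
    mode http
    # Close the server-side connection after each response, but keep
    # client-side keep-alive, so requests are still routed individually
    # without giving up connection reuse on the (higher-latency) client side.
    option http-server-close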