PHP-MySQLi connection randomly fails with "Cannot assign requested address"


Solution 1

MySQL: Using giant number of connections

What are the dangers of frequent connects?
It works well, with the exception of some extreme cases. If you get hundreds of connects per second from the same box, you may run out of local port numbers. The ways to fix it are to decrease "/proc/sys/net/ipv4/tcp_fin_timeout" on Linux (this bends the TCP/IP standard, but you may not care on your local network) or to increase "/proc/sys/net/ipv4/ip_local_port_range" on the client. Other OSes have similar settings. You can also use more web boxes, or multiple IPs for the same database host, to work around the problem. I've really seen this in production.

Some background about this problem:
A TCP/IP connection is identified by the tuple localip:localport remoteip:remoteport. Here the MySQL IP and port, as well as the client IP, are fixed, so the only thing that can vary is the local port, which has a finite range. Note that even after you close a connection, the TCP/IP stack has to keep the port reserved for some time (the TIME_WAIT state); this is where tcp_fin_timeout comes in.
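To see how tight that budget is, here is a back-of-the-envelope sketch. The values below are the common Linux defaults, not measurements from your system; read your actual ones from /proc/sys/net/ipv4/ if they differ.

```shell
# Rough ceiling on sustained new connections/sec from one client IP to one
# server ip:port, using the usual Linux defaults.
low=32768; high=60999   # ip_local_port_range (default on most distros)
fin_timeout=60          # tcp_fin_timeout: seconds a port sits in TIME_WAIT
ports=$((high - low + 1))
echo "usable local ports: $ports"
echo "max sustained connects/sec: $((ports / fin_timeout))"
```

With these defaults the ceiling works out to roughly 470 connects per second, which is exactly the "hundreds of connects per second" danger zone described above.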

Solution 2

I had this problem and solved it by using persistent connections, which mysqli activates when you prefix the database hostname with 'p:':

$link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');

From: http://php.net/manual/en/mysqli.persistconns.php :

The idea behind persistent connections is that a connection between a client process and a database can be reused by a client process, rather than being created and destroyed multiple times. This reduces the overhead of creating fresh connections every time one is required, as unused connections are cached and ready to be reused. ...

To open a persistent connection you must prepend p: to the hostname when connecting.

Solution 3

With Vicidial I have run into the same problem frequently. Because of the kind of programming used, new MySQL connections have to be established (very) frequently from a number of Vicidial components; we have systems hammering the DB server with over 10,000 connections per second, most of which are serviced within a few milliseconds and closed within a second or less. From experience I can tell you that on a local network, with close to no lost packets, tcp_fin_timeout can be reduced all the way down to 3 with no problems showing up.

Typical Linux commands to diagnose whether connections waiting to be closed are your problem:

netstat -anlp | grep :3306 | grep -wc TIME_WAIT

which will show you the number of connections that are waiting to be closed completely.

netstat -nat | awk '{print $5}' | cut -d ":" -f1 | sort | uniq -c | sort -n

which will show the number of connections per connected host, letting you identify which host is flooding your system when there are multiple candidates.
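On newer distributions netstat may no longer be installed; an equivalent check with ss (its iproute2 replacement) looks like this. The filter syntax is ss's own, and :3306 assumes MySQL's default port:

```shell
# Count sockets to/from MySQL's port stuck in TIME_WAIT, using ss.
ss -tan state time-wait '( dport = :3306 or sport = :3306 )' | wc -l
```

Note that ss prints a column-header line, so subtract one from the count (or use `ss -H` on recent versions to suppress the header).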

To test the fix, you can run:

cat /proc/sys/net/ipv4/tcp_fin_timeout
echo "3" > /proc/sys/net/ipv4/tcp_fin_timeout

which will temporarily set tcp_fin_timeout to 3 seconds; the cat tells you how many seconds it was before, so you can revert to the old value after testing.
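A slightly safer variant of that test saves the old value first so the restore step cannot be forgotten. This is a sketch (it needs root, and the path is the standard Linux one):

```shell
# Save, lower, then restore tcp_fin_timeout around a test run.
old=$(cat /proc/sys/net/ipv4/tcp_fin_timeout)
echo 3 > /proc/sys/net/ipv4/tcp_fin_timeout
# ... run your load test here ...
echo "$old" > /proc/sys/net/ipv4/tcp_fin_timeout
```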

As a permanent fix, I would suggest adding the following line to /etc/sysctl.conf:

net.ipv4.tcp_fin_timeout=3
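After editing /etc/sysctl.conf you can apply the setting without a reboot (standard sysctl usage; needs root):

```shell
sysctl -p                        # re-read /etc/sysctl.conf and apply it
sysctl net.ipv4.tcp_fin_timeout  # confirm the active value
```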

Within a good local network this should not cause any trouble. If you do run into problems, e.g. because of packet loss, you can try:

net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_fin_timeout=10

which allows more time for the connection to close and tries to reuse the same ip:port combination for new connections to the same host:service combination.

OR

net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=10

which will try even more aggressively to reuse connections, but can create new problems with other applications, for example with your web server. (Be aware that tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12.) So you should try the simple solution first; in most cases it will already fix your problem without any bad side effects!

Good Luck!

Solution 4

Vicidial servers regularly require increasing the connection limit in MySQL. Many installations (and we've seen and worked on a lot of them) have had to raise this limit (max_connections) in the MySQL configuration.
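For reference, the limit can be raised at runtime and then persisted in the config file. The value 1024 below is purely illustrative; size it for your hardware:

```shell
# Raise the limit on a running server (lost on restart):
mysql -u root -p -e "SET GLOBAL max_connections = 1024;"

# To persist it, add to my.cnf under the [mysqld] section:
#   max_connections = 1024
```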

Additionally, there have been reports of nf_conntrack_max requiring an increase, e.g.:

/sbin/sysctl -w net.netfilter.nf_conntrack_max=196608

when the problem turns out to be networking related.
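You can check whether you are actually near the conntrack ceiling before raising it. The paths below are the standard netfilter ones, and they only exist while the nf_conntrack module is loaded:

```shell
cat /proc/sys/net/netfilter/nf_conntrack_count  # entries currently tracked
cat /proc/sys/net/netfilter/nf_conntrack_max    # current ceiling
```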

Also note that Vicidial has some specific suggested settings, and even some enterprise settings, for MySQL configuration. Have a look at my-bigvici.cnf in /usr/src/astguiclient/conf for some configuration ideas that may open your MySQL server up a bit.

So far, no problems have resulted from increasing the connection limit, just additional resources used. Since the purpose of the server is to run this application, dedicating resources to it does not seem like a problem. LOL

Solution 5

We had the same problem. Although the "tcp_fin_timeout" and "ip_local_port_range" solutions worked, the real problem was a poorly written PHP script, which opened a new connection for almost every second query it made to the database. Rewriting the script to connect just once solved all the trouble. Please be aware that lowering the "tcp_fin_timeout" value may be dangerous, as some code may depend on the DB connection still being there some time after connecting. It's more of a duct-tape-and-bubble-gum patch than a real solution.

Updated on July 03, 2020

Comments

  • Admin
    Admin almost 4 years

For about two weeks I've been dealing with one of the weirdest problems in a LAMP stack. Long story short: the connection to the MySQL server randomly fails with this error message:

    Warning:  mysqli::real_connect(): (HY000/2002): Cannot assign requested address in ..
    

The MySQL server is on a different box, hosted at Rackspace Cloud. Today we downgraded its version to:

    Ver 14.14 Distrib 5.1.42, for debian-linux-gnu (x86_64).
    

The DB server is pretty busy, dealing with "Queries per second avg: 5327.957" according to its status variables.

MySQL runs with log-warnings=9, but no warnings for refused connections are logged. Both the site and the Gearman worker scripts fail with that error at, let's say, 1% probability. Server load does NOT seem to be a factor according to our monitoring (CPU load, IO load, or MySQL load). The maximum number of DB connections (max_connections) is set to 200, but we have never dealt with more than 100 simultaneous connections to the database.

    It happens with and without the firewall software.

I suspect a TCP networking problem rather than a PHP/MySQL configuration problem.

Can anyone give me a clue how to find it?

    UPDATE:

    The connection code is:

    $this->_mysqli = mysqli_init(); 
    $this->_mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 120); 
    $this->_mysqli->real_connect($dbHost,$dbUserName, $dbPassword, $dbName); 
    
    if (!is_null($this->_mysqli->connect_error)) {
        $ping = $this->_mysqli->ping(); 
    
        if(!$ping){
            $error = 'HOST: {'.$dbHost.'};MESSAGE: '. $this->_mysqli->connect_error ."\n"; 
            DataStoreException::raiseHostUnreachable($error);
        }
    } 
    
  • Kanan Farzali
    Kanan Farzali over 7 years
I am not sure about the other answers, but this is the only solution that worked for me.
  • fgwaller
    fgwaller about 7 years
This is a great solution where it works! Our experience was that not all code works equally well with persistent connections. If your code does work well with persistent connections, this will VERY substantially reduce the number of new connections made.
  • AAgg
    AAgg over 3 years
    I am already having persistent connections but this problem still persists. Same error but we are also crossing max connection limits. I am not sure whether the fix suggested by @fgwaller would help.
  • Vladimir Hidalgo
    Vladimir Hidalgo over 3 years
    Lifesaver! - this worked wonders in an Ubuntu 20.04 EC2 and Aurora MySQL. Thank you!
  • fgwaller
    fgwaller over 3 years
I fully agree with the first part: the proper solution is to use persistent connections. If that is not possible, tcp_fin_timeout can be lowered with no risk; the long timeout is not warranted now that even slow connections usually have multiple kbit of bandwidth. tcp_fin_timeout only affects already-closed connections, so nothing should expect them to still be around. The timeout originally solved problems with late-arriving packets (60 seconds late!); these days I have not seen any that late. Reducing the time before the socket can be used again to 3 seconds (in a LAN) will not cause any harm.