cURL/PHP Request Executes 50% of the Time


Solution 1

The remote host may not be a single, unique host. Maybe it's some sort of load-balancing setup with several servers taking the incoming requests. What makes me think so is the 'mac error' in the error message. This could mean the remote host's MAC address changed while the SSL negotiation was still running, and that would explain why you sometimes have no problem at all.

But maybe not :-) SSL problems are quite hard to track down.

I do not understand your remark about prefork MPM vs. worker MPM; if you run PHP in CLI mode, the Apache MPM is not used at all, since you're not even using Apache.

Solution 2

You may need this option:

CURLOPT_FORBID_REUSE

Pass a long. Set to 1 to make the next transfer explicitly close the connection when done. Normally, libcurl keeps all connections alive when done with one transfer in case a succeeding one follows that can re-use them. This option should be used with caution and only if you understand what it does. Set to 0 to have libcurl keep the connection open for possible later re-use (default behavior).
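
For reference, here is a minimal PHP sketch of how this might be set; the URL is only a placeholder, CURLOPT_FRESH_CONNECT is the companion option mentioned in the comments below, and the 4-second timeouts mirror the ones described in the question:

    <?php
    // Minimal sketch: force libcurl to open a fresh connection and close it when done.
    // The URL is a placeholder for the real HTTPS endpoint.
    $ch = curl_init('https://www.example.com/');

    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 4,     // timeout values taken from the question
        CURLOPT_TIMEOUT        => 4,
        CURLOPT_FORBID_REUSE   => true,  // close the connection after this transfer
        CURLOPT_FRESH_CONNECT  => true,  // do not re-use a cached connection
    ));

    $response = curl_exec($ch);
    if ($response === false) {
        echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
    }
    curl_close($ch);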


Comments

  • mquinn, almost 2 years ago

    After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host are successful only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all of them issued to an HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but for some reason, most of the times I run it I get the following error from cURL:

    * About to connect() to www.virginia.edu port 443 (#0)
    *   Trying 128.143.22.36... * connected
    * Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
    * successfully set certificate verify locations:
    *   CAfile: none
      CApath: /etc/ssl/certs
    * error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
    * Closing connection #0
    

    If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script after that again results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally.

    I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating this problem is the fact that the same exact situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.

  • mquinn, over 13 years ago
    Thanks for the suggestion but it was to no avail. I had previously tried CURLOPT_FRESH_CONNECT but that didn't work, and FORBID_REUSE resulted in the same old behavior.
  • mquinn, over 13 years ago
    Good point, a load-balanced remote host would make sense in my situation. To address that, I decided to just block until I can get a valid, non-false cURL response and go from there (see the retry sketch after these comments). Best I've got for now.
  • Charles Oliver Nutter, over 10 years ago
    I don't think MAC has anything to do with a network MAC address. It stands for Message Authentication Code: en.wikipedia.org/wiki/Message_authentication_code ... which leads me to conclude that a changing "remote host MAC address" has nothing to do with this problem.
  • regilero, over 10 years ago
    @Charles Oliver Nutter Nice catch, but this may still be a problem with something that should be shared between the hosts (SSL session cache?) and isn't.
  • Giovanni Patruno, over 2 years ago
    @regilero What is the proposed solution then?
  • regilero, over 2 years ago
    If it's a shared-cache problem, then the solution is to use shared storage (NFS?) for the SSL session cache. But it depends on what the real problem is.
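
For illustration, here is a rough sketch of the retry workaround mquinn mentions above (blocking until curl_exec() returns a non-false response). The function name and retry limit are invented for the example, and the options mirror the ones described in the question:

    <?php
    // Illustrative sketch of the retry workaround mentioned in the comments above.
    // The function name and retry limit are made up; adjust to taste.
    function curl_get_with_retry($url, $maxAttempts = 5)
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $ch = curl_init($url);
            curl_setopt_array($ch, array(
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_CONNECTTIMEOUT => 4,      // timeouts as described in the question
                CURLOPT_TIMEOUT        => 4,
                CURLOPT_SSL_VERIFYPEER => false,  // as in the question; not recommended in general
                CURLOPT_FORBID_REUSE   => true,   // force a new connection on each attempt
                CURLOPT_FRESH_CONNECT  => true,
            ));

            $response = curl_exec($ch);
            curl_close($ch);

            if ($response !== false) {
                return $response;  // got a valid, non-false response
            }
            // Otherwise fall through and retry with a fresh connection.
        }
        return false;  // every attempt failed
    }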