Continue PHP execution after sending HTTP response


Solution 1

Have the script that handles the initial request create an entry in a processing queue, and then immediately return. Then, create a separate process (via cron maybe) that regularly runs whatever jobs are pending in the queue.
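For illustration, a minimal sketch of that setup might look like the following (the jobs table, its columns, and the PDO credentials are assumptions for the example, not part of the original answer):

<?php
// handler.php - called by the initial request: enqueue the work and return
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare("INSERT INTO jobs (payload, status) VALUES (?, 'pending')")
    ->execute([json_encode($_POST)]);
echo 'OK'; // respond immediately; the heavy work happens later
?>

<?php
// worker.php - run from cron (e.g. every minute): process pending jobs
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
foreach ($pdo->query("SELECT id, payload FROM jobs WHERE status = 'pending'") as $job) {
    // ... long-running work on $job['payload'] goes here ...
    $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")
        ->execute([$job['id']]);
}
?>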

Solution 2

I had this snippet in my "special scripts" toolbox, but it got lost (cloud storage was not common back then). While searching for it I came across this question and was surprised to see it missing here, so after digging it up elsewhere I came back to post it:

<?php
 // discard any existing output buffer first (guarding against the
 // "failed to delete buffer" notice mentioned in the comments below)
 if (ob_get_level() > 0) {
     ob_end_clean();
 }
 header("Connection: close");
 ignore_user_abort(true); // keep running after the client disconnects
 ob_start();
 echo 'Text the user will see';
 $size = ob_get_length();
 header("Content-Length: $size");
 ob_end_flush(); // both ob_end_flush() and flush() are required,
 flush();        // otherwise the response is not actually sent
 session_write_close(); // release the session lock (suggested in the comments)
 // Do processing here
 sleep(30);
 echo 'Text user will never see';
?>

I actually use it in a few places, and it makes complete sense there: a banklink calls back to report a successful payment, and I then have to call a lot of services and process a lot of data. That sometimes takes more than 10 seconds, yet the banklink has a fixed timeout period. So I acknowledge the banklink, show it the way out, and do my work after it is already gone.

Solution 3

What you need is this kind of setup:

[Diagram: the request handler writes a job to a queue and responds immediately; a separate cron-driven worker processes the queue, and the client can poll a status endpoint later.]

Solution 4

One can use an "HTTP fork" to oneself or to any other script. I mean something like this:

// parent script, called by a user request from the browser

// create a socket for calling the child script
$socketToChild = fsockopen("localhost", 80);

// build the HTTP request; headers first (HTTP requires CRLF line endings)
$postData = "Any data for the child as a POST body";
$msgToChild  = "POST /script.php?param=value&<more params> HTTP/1.0\r\n";
$msgToChild .= "Host: localhost\r\n";
$msgToChild .= "Content-Type: application/x-www-form-urlencoded\r\n";
$msgToChild .= "Content-Length: " . strlen($postData) . "\r\n\r\n";

// headers done, append the body
$msgToChild .= $postData;

// send the request to our own web server - a new process will be created to handle our query
fwrite($socketToChild, $msgToChild);

// wait for and read the child's short acknowledgement
// (8192 bytes is an arbitrary upper bound; the reply is expected to be small)
$data = fread($socketToChild, 8192);

// close the connection to the child
fclose($socketToChild);
...

Now the child script:

// parse the HTTP query somewhere before this point

// "disable partial output", i.e. enable buffering,
// so everything can be sent out at once later
ob_start();

// keep running even if the client (the parent script in this case)
// disconnects before the child is finished
ignore_user_abort(true);

// no execution time limit - we may work for a long time
set_time_limit(0);

// we need to say something to the parent so it stops waiting;
// it could be something useful like a client ID, or just "OK"
...
echo $reply;

// push the buffer out to the parent
ob_flush();
flush();

// the parent gets our answer and disconnects,
// but we can keep working "in the background" :)
...

The main idea is:

  • the parent script is called by the user's request;
  • the parent calls the child script (the same script or another one) on the same server (or any other server) and passes the request data to it;
  • the parent says OK to the user and ends;
  • the child keeps working.

If you need to interact with the child, you can use the database as a "communication medium": the parent can read the child's status and write commands, and the child can read commands and write its status. If you need this for several child scripts, keep each child's ID on the user side to tell them apart, and send that ID to the parent whenever you want to check the status of the corresponding child.

I found this approach here: http://linuxportal.ru/forums/index.php/t/22951/
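As a rough illustration of the "DB as communication medium" idea (the child_jobs table, its columns, and $childId are assumptions for the example, not part of the original code):

<?php
// in the child script: record progress while the long work runs
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare("UPDATE child_jobs SET status = 'working' WHERE id = ?")
    ->execute([$childId]);
// ... long-running work ...
$pdo->prepare("UPDATE child_jobs SET status = 'done' WHERE id = ?")
    ->execute([$childId]);
?>

<?php
// in the parent (or any later status-check request): read the child's status
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare("SELECT status FROM child_jobs WHERE id = ?");
$stmt->execute([$childId]);
echo $stmt->fetchColumn();
?>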

Solution 5

You can use the PHP function register_shutdown_function, which will execute something after the script has completed its dialog with the browser.

See also ignore_user_abort - but you shouldn't need that function if you use register_shutdown_function. On the same page, set_time_limit(0) will prevent your script from timing out.
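A minimal sketch of that combination might look like this (process_in_background() is a placeholder for the real work; note that, as a comment below points out, since PHP 4.1.0 the shutdown function may run before the connection is closed, so the flush tricks above can still be necessary):

<?php
ignore_user_abort(true); // keep running even if the client disconnects
set_time_limit(0);       // no execution time limit

function process_in_background() {
    // placeholder for the long-running database and e-mail work
}
register_shutdown_function('process_in_background');

echo 'OK'; // the response intended for the client
?>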


Comments

  • Victor Nicollet
    Victor Nicollet about 4 years

    How can I have PHP 5.2 (running as apache mod_php) send a complete HTTP response to the client, and then keep executing operations for one more minute?

    The long story:

    I have a PHP script that has to execute a few long database requests and send e-mail, which takes 45 to 60 seconds to run. This script is called by an application that I have no control over. I need the application to report any error messages received from the PHP script (mostly invalid parameter errors).

    The application has a timeout delay shorter than 45 seconds (I do not know the exact value) and therefore registers every execution of the PHP script as an error. Therefore, I need PHP to send the complete HTTP response to the client as fast as possible (ideally, as soon as the input parameters have been validated), and then run the database and e-mail processing.

I'm running mod_php, so pcntl_fork is not available. I could work my way around this by saving the data to be processed to the database and running the actual process from cron, but I'm looking for a shorter solution.

  • Wrikken
    Wrikken over 13 years
+1, something like Gearman is already set up for this (but other, or one's own, solutions are of course equally valid).
  • Victor Nicollet
    Victor Nicollet over 13 years
This is the solution I originally had in mind. On the other hand, setting up a processing queue for the sole purpose of working around a timeout in a third-party application makes me feel a bit uneasy.
  • Victor Nicollet
    Victor Nicollet over 13 years
Apparently, according to the docs, since 4.1.0 register_shutdown_function is called before the script has completed its dialog with the client. Your other link, however, contains a promising comment (php.net/manual/en/features.connection-handling.php#89177); I'll try to delve deeper into this and report back here.
  • Janis Veinbergs
    Janis Veinbergs about 13 years
Umm, but based on this diagram, the status message gets sent back to the client only when cron executes - 5-10 minutes at most. Anyway, nice diagram!
  • Yanick Rochon
    Yanick Rochon about 13 years
Status messages could be requested at any time :) The point was that there are two separate and independent processes going on here. But otherwise, thanks!
  • Graham Christensen
    Graham Christensen about 12 years
    You may need to disable additional buffering which occurs in Apache: <?php apache_setenv('no-gzip', 1); ini_set('zlib.output_compression', 0); ini_set('implicit_flush', 1);?>
  • Oriol
    Oriol about 11 years
    +1 Wow, great diagram! But instead of the user requesting the status continuously, I think that websockets are better.
  • Pavel Kostenko
    Pavel Kostenko over 10 years
I advise adding session_write_close(); after flush(); if you are using sessions; otherwise you will not be able to use your site (in the same browser tab) until your (background) processing finishes.
  • Michael Wolf
    Michael Wolf over 10 years
    Interesting approach, but unfortunately it doesn't work if you're running behind varnish or, presumably, other proxies.
  • Steel Brain
    Steel Brain about 10 years
It doesn't work with PHP 5 and the Chrome browser on Linux; Chrome waits 30 seconds before terminating the connection.
  • ficuscr
    ficuscr almost 10 years
The ignore_user_abort(); // optional line would have no effect: without a boolean argument, that function just returns the current setting.
  • mas.morozov
    mas.morozov over 9 years
This approach (slightly modified) is the only working solution I found for creating a background task from Apache's mod_php without the overhead of starting a separate OS process; it occupies one of the already existing httpd workers instead.
  • mas.morozov
    mas.morozov over 9 years
This solution suffers from a lack of parallelism... or one will need to start a pool of worker processes to serve the queue. I ended up posting and then disconnecting HTTP requests to localhost (in the manner described by SomeGuy here) to utilize the pool of existing httpd workers as background processors.
  • ErnestV
    ErnestV about 9 years
    exec() is often a problem in shared/hosted spaces. Plus a huge security risk.
  • NLemay
    NLemay almost 9 years
    I tested this solution on my shared hosting, and "Text user will never see" was shown after 30 seconds waiting.
  • Jarek Jakubowski
    Jarek Jakubowski almost 9 years
    There should be ignore_user_abort(true); instead of ignore_user_abort();
  • BeetleJuice
    BeetleJuice almost 8 years
    In the parent script's fread($socketToChild, $dataSize), where does $dataSize come from? Do you need to know exactly how much data to expect out of the socket (including size of headers)? I must be missing something.
  • Erik Kalkoken
    Erik Kalkoken about 7 years
I have been looking for a solution to this problem for quite a while now and this one works! Thanks a lot. The other solutions might work in specific scenarios, but not if you only have limited control over your web server and can't fork new processes, a configuration I commonly find on commercial web servers. This solution still works! One important addition: for UNIX systems you need to add curl_setopt($ch, CURLOPT_NOSIGNAL, 1); for timeouts < 1 sec to work (see the cURL sketch after these comments). Check here for the explanation.
  • Reloecc
    Reloecc almost 4 years
    finally, genuine!
  • Jette
    Jette about 2 years
    My processor script didn't execute, so I increased CURLOPT_TIMEOUT_MS to 100. It works like a charm, and enables me to quickly respond when receiving events from our payment provider.
  • Aldo
    Aldo about 2 years
    if (ob_get_length() > 0 ) { ob_end_clean(); } fixes this error: ob_end_clean(): failed to delete buffer. No buffer to delete
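Picking up the cURL options mentioned in the last few comments, a hedged sketch of that "fire and forget" self-request could look like this (processor.php and the parameters are placeholders, not taken from the original answers):

<?php
// fire a request at our own server and abandon it almost immediately;
// processor.php keeps running as long as it calls ignore_user_abort(true)
// and set_time_limit(0) itself
$ch = curl_init('http://localhost/processor.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array('param' => 'value')));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 100); // give up waiting after 100 ms
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);     // needed on UNIX for sub-second timeouts
curl_exec($ch);
curl_close($ch);
?>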