
Continue processing after closing connection [duplicate]


Questions:


Answers:

If you are running under FastCGI you can use the very nifty:

fastcgi_finish_request();

http://php.net/manual/en/function.fastcgi-finish-request.php

More detailed information is available in a duplicate answer.
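A minimal sketch of how that call is typically used under PHP-FPM / FastCGI (the response text and log file path are just placeholders):

<?php
// Send whatever the client should see first.
echo 'Text the user will see';

// Flushes the response to the client and closes the connection;
// the script keeps running in the background.
fastcgi_finish_request();

// Slow work continues here without holding up the browser.
sleep(30);
file_put_contents('/tmp/after-response.log', date('c') . PHP_EOL, FILE_APPEND);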

Answers:

I finally found a solution (thanks to Google, I just had to keep trying different combinations of search terms). Thanks to the comment from arr1 on this page (it’s about two thirds of the way down the page).

<?php
 ob_end_clean();
 header("Connection: close");
 ignore_user_abort(true); // optional: keep running if the client disconnects
 ob_start();
 echo 'Text the user will see';
 $size = ob_get_length();
 header("Content-Length: $size");
 ob_end_flush(); // both ob_end_flush() and flush() are needed,
 flush();        // otherwise the output is not pushed to the client
 // Do the real processing here
 sleep(30);
 echo 'Text the user will never see';
?>

In short: you send two headers, one that tells the browser exactly how much data to expect (Content-Length) and one that tells it to close the connection (which it will only do after receiving that amount of content). I haven't actually tested this yet, and I don't know whether the sleep(30) is necessary.

Answers:

You can do that by setting the time limit to unlimited and ignoring the user abort:

<?php
ignore_user_abort(true); // keep running even if the client disconnects
set_time_limit(0);       // remove the execution time limit

see also: http://www.php.net/manual/en/features.connection-handling.php
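A rough sketch of the pattern described on that manual page; the placeholder sleep(30) stands in for the real work, and connection_aborted() is only used here to report what happened:

<?php
ignore_user_abort(true); // keep running if the client disconnects
set_time_limit(0);       // remove the execution time limit

echo 'Working...';
flush();

sleep(30); // placeholder for the slow work

// Thanks to ignore_user_abort(true) we get here regardless of whether
// the client is still connected; connection_aborted() just reports it.
error_log('Done; client aborted: ' . (connection_aborted() ? 'yes' : 'no'));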

Answers:

PHP doesn’t have such persistence (by default). The only way I can think of is to run cron jobs to pre-fill the cache.
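For example, something along these lines; the script name, cache path, and the build_expensive_report() helper are made up for illustration:

<?php
// warm_cache.php - run from cron, e.g.:
//   */5 * * * * php /var/www/scripts/warm_cache.php
// Rebuilds the expensive data and writes it where web requests can read it.

$data = build_expensive_report(); // hypothetical slow function
file_put_contents('/var/cache/app/report.json', json_encode($data), LOCK_EX);

function build_expensive_report()
{
    sleep(30); // stands in for the real heavy lifting
    return array('generated_at' => date('c'));
}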

Answers:

As far as I know, unless you’re running FastCGI, you can’t drop the connection and continue execution (unless you got Endophage’s answer to work, which I couldn’t). So you can:

  1. Use cron or something like it to schedule these kinds of tasks
  2. Use a child process to finish the job

But it gets worse. Even if you spawn a child process with proc_open(), PHP will wait for it to finish before closing the connection, even after calling exit(), die(), or some_undefined_function_causing_fatal_error(). The only workaround I found is to spawn a child process that itself spawns a child process, like this:

function doInBackground ($_variables, $_code)
{
    // Spawn a "php -r" process; inside it, pcntl_fork() detaches a grandchild,
    // so the web request's PHP process is not kept waiting for the real work.
    proc_open (
        'php -r ' .
            escapeshellarg ("if (pcntl_fork() === 0) { extract (unserialize (\$argv [1])); $_code }") .
            ' ' . escapeshellarg (serialize ($_variables)),
        array(), $pipes
    );
}

$message = 'Hello world!';
$filename = tempnam (sys_get_temp_dir(), 'php_test_workaround');
$delay = 10;

doInBackground (compact ('message', 'filename', 'delay'), <<< 'THE_NOWDOC_STRING'
    // Your actual code goes here:
    sleep ($delay);
    file_put_contents ($filename, $message);
THE_NOWDOC_STRING
);

Answers:

If you can compile and run programs from the PHP CLI (not on shared hosting, i.e. a VPS or better)

Caching

For caching I would not do it that way. I would use Redis as my LRU cache. It is going to be very fast (see the benchmarks), especially when you use a client library written in C.

Offline processing

When you install the beanstalkd message queue you can also do delayed puts. But I would use Redis BRPOP/RPUSH for the message queuing part, because Redis is going to be faster, especially if you use a PHP client library written in C.
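A small sketch of that queue pattern with the phpredis extension; the queue name, payload, and worker setup are assumptions for illustration:

<?php
// Producer: runs in the web request, just enqueues the job and returns.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->rPush('jobs', json_encode(array('task' => 'resize', 'id' => 42)));

// Consumer: a long-running CLI worker started separately (e.g. via cron or supervisor).
while (true) {
    // brPop blocks until a job arrives; timeout 0 means wait forever.
    list($queue, $payload) = $redis->brPop(array('jobs'), 0);
    $job = json_decode($payload, true);
    // ... do the slow work here ...
}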

If you can NOT compile or run programs from the PHP CLI (i.e. on shared hosting)

set_time_limit

Most of the time set_time_limit is not available (because of safe mode or the max_execution_time directive), at least not to set it to 0, when you are on shared hosting. Shared hosting providers also really don’t like users holding up PHP processes for a long time. Most of the time the default limit is set to 30 seconds.
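A small defensive sketch of trying to lift the limit where the host allows it (whether this works at all depends entirely on the hosting configuration):

<?php
// Try to lift the limit, but don't rely on it on shared hosting.
if (function_exists('set_time_limit') && !ini_get('safe_mode')) {
    @set_time_limit(0);
}
echo 'max_execution_time is now: ' . ini_get('max_execution_time');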

Cron

Use cron to write data to disk using Cache_Lite. Some Stack Overflow topics already explain this:

This is also rather easy, but still hacky. I think you should upgrade (to a VPS or better) when you have to do this kind of hacking.
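A rough sketch with PEAR's Cache_Lite, assuming a cron script writes the cache and the web request only reads it; the cache directory, id, and build_expensive_report_html() helper are made up:

<?php
require_once 'Cache/Lite.php';

$cache = new Cache_Lite(array(
    'cacheDir' => '/tmp/app_cache/', // must be writable by both the cron and web user
    'lifeTime' => 3600,
));

// In the cron script: rebuild and store.
$cache->save(build_expensive_report_html(), 'report_page'); // hypothetical slow function

// In the web request: serve whatever the cron job last wrote.
if ($html = $cache->get('report_page')) {
    echo $html;
}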

Asynchronous request

As a last resort you could do an asynchronous request, caching the data with Cache_Lite for example. Be aware that shared hosts do not like you holding up a lot of long-running PHP processes. I would use only one background process, which calls another one when it gets near the max_execution_time directive: note the time when the script starts, measure the time spent between a couple of cache calls, and when it gets near the limit fire another asynchronous request. I would use locking to make sure only one process is running. This way you will not annoy the provider and it can be done. On the other hand, I don’t think I would write any of this, because it is kind of hacky if you ask me. At that scale I would upgrade to a VPS.
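A very rough sketch of that last idea (fire-and-forget request, a lock file, and re-spawning before the limit is hit); every path, URL, threshold, and the two refresh helpers are made up for illustration:

<?php
// worker.php - kicked off by a fire-and-forget HTTP request to itself.
ignore_user_abort(true); // the caller closes the socket immediately

$lock = fopen('/tmp/worker.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit; // another worker is already running
}

$started = time();
$limit   = (int) ini_get('max_execution_time') ?: 30;

while ($job = next_cache_item_to_refresh()) { // hypothetical helper
    refresh_cache_item($job);                 // hypothetical helper

    // Getting close to max_execution_time: hand over to a fresh worker.
    if (time() - $started > $limit - 5) {
        flock($lock, LOCK_UN);
        fire_and_forget('http://example.com/worker.php');
        exit;
    }
}

// Fire-and-forget: open a socket, send the request, don't wait for the body.
function fire_and_forget($url)
{
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'], 80, $errno, $errstr, 1);
    if ($fp) {
        fwrite($fp, "GET {$parts['path']} HTTP/1.1\r\nHost: {$parts['host']}\r\nConnection: Close\r\n\r\n");
        fclose($fp);
    }
}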

Answers:

If you are doing this to cache content, you may instead want to consider using an existing caching solution such as memcached.
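For instance, with the Memcached extension (the server address, key, and build_expensive_report_html() helper are placeholders):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$report = $mc->get('report_page');
if ($report === false) {
    $report = build_expensive_report_html(); // hypothetical slow function
    $mc->set('report_page', $report, 3600);  // cache for an hour
}
echo $report;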

Answers:

No. As far as the web server is concerned, the request from the browser is handled by the PHP engine, and that’s that. The request lasts as long as the PHP execution does.

You might be able to fork() though.
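A minimal pcntl_fork() sketch; note the pcntl extension is normally only available in CLI scripts, not web SAPIs, and the log path is just a placeholder:

<?php
$pid = pcntl_fork();
if ($pid === -1) {
    die('Could not fork');
} elseif ($pid === 0) {
    // Child: does the slow work, detached from the parent.
    sleep(30);
    file_put_contents('/tmp/child-done.log', date('c') . PHP_EOL, FILE_APPEND);
    exit(0);
}
// Parent: returns immediately.
echo 'Work handed off to child process ' . $pid;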
