Laravel queues getting "killed"


Solution 1

You can try giving the worker a longer timeout, for example: php artisan queue:work --timeout=120

By default the timeout is 60 seconds, so the option above explicitly overrides it.
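
Equivalently, you can set the timeout on the job class itself. A minimal sketch ($timeout is the standard Laravel job property; the class name and body are illustrative only):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Queue\InteractsWithQueue;

    class ProcessExcelData implements ShouldQueue
    {
        use InteractsWithQueue, Queueable;

        // Allow up to 120 seconds per job instead of the 60-second default.
        public $timeout = 120;

        public function handle()
        {
            // resource-intensive work here
        }
    }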

Solution 2

I know this is not what you are looking for, but I had the same problem and I think it happens because of the OS (I will update this answer if I find the exact reason). But let's try

queue:listen

instead of

queue:work

The main difference between the two is that queue:listen boots the framework and runs your Job class code fresh for every job (so you don't need to restart your workers or Supervisor after code changes), while queue:work keeps the application cached in memory and runs much faster than queue:listen; in my case the OS could not handle that speed while preparing the queue connection (Redis).

The queue:listen command actually runs queue:work internally (you can verify this by looking at your running processes in htop or a similar tool).

The reason I suggest trying queue:listen is exactly that slower speed: the OS copes with it easily and has no problem preparing your queue connection (in my case, the silent kills stopped).
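
For example (queue:listen accepts the same common options as queue:work; "redis" here is the connection name from my setup):

    php artisan queue:listen redis --tries=1 --timeout=120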

To find out whether you have the same problem, change your queue driver to "sync" in .env and see whether the worker gets killed again. If it does not, you know the problem lies in preparing the queue connection.
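
A minimal sketch of that .env change (on Laravel 5.7+ the variable is QUEUE_CONNECTION; older versions, like the one in the question, use QUEUE_DRIVER):

    # run jobs synchronously in the dispatching process,
    # bypassing the queue connection entirely
    QUEUE_CONNECTION=sync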

  • To find out whether you have a memory problem, run your queue with queue:listen or the sync driver; PHP will then report a fatal memory error, and you can increase the memory limit and test again.

  • You can use this snippet to allocate more memory for testing:

    ini_set('memory_limit', '1G'); // raise PHP's memory limit to 1 GB for testing
    

Solution 3

Sometimes you work with resource-intensive processes like image conversion or creating/parsing big Excel files, and the timeout option alone is not enough. You can set public $timeout = 0; in your job, but it still gets killed, this time because of memory(!). By default the worker's memory limit is 128 MB. To avoid this, add the --memory=256 option (or higher) to the worker command.
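
For example, a worker invocation along those lines (--memory is a standard queue:work option; the value is illustrative):

    # raise the worker's memory threshold from the default 128 MB to 256 MB
    php artisan queue:work --memory=256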

BTW:

The time limit is set to 1 hour, and the memory limit to 2GBs

This applies only to php-fpm in your case, not to the queue worker process, which runs under the separate PHP CLI configuration.
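
To confirm which limits the worker actually sees, you can query the CLI configuration directly (plain PHP/ini introspection, nothing Laravel-specific):

    # php-fpm and the CLI read separate php.ini files; queue workers use the CLI one
    php --ini
    php -r "echo ini_get('memory_limit'), PHP_EOL;"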



Author: PeterInvincible

Updated on September 14, 2022

Comments

  • PeterInvincible over 1 year

    Sometimes when I'm sending over a large dataset to a Job, my queue worker exits abruptly.

    // $taskmetas is an array of arrays; each inner array has 90 properties.
    $this->dispatch(new ProcessExcelData($excel_data, $taskmetas, $iteration, $storage_path));
    

    The ProcessExcelData job class creates an Excel file using the box/spout package.

    • in the 1st example $taskmetas has 880 rows - works fine
    • in the 2nd example $taskmetas has 10,000 rows - exits abruptly

    1st example - queue output with a small dataset:

    forge@user:~/myapp.com$ php artisan queue:work --tries=1
    [2017-08-07 02:44:48] Processing: App\Jobs\ProcessExcelData
    [2017-08-07 02:44:48] Processed:  App\Jobs\ProcessExcelData
    

    2nd example - queue output with a large dataset:

    forge@user:~/myapp.com$ php artisan queue:work --tries=1
    [2017-08-07 03:18:47] Processing: App\Jobs\ProcessExcelData
    Killed
    

    I don't get any error messages, logs are empty, and the job doesn't appear in the failed_jobs table as with other errors. The time limit is set to 1 hour, and the memory limit to 2GBs.

    Why are my queues abruptly quitting?

  • PeterInvincible over 6 years
    The time limit is set to 1 hour - the job dies within 5 minutes.
  • Smruti Ranjan over 6 years
    Then in your case there might be some issue in the code, or else a memory issue.
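
A closing note on the bare "Killed" output shown in the question: on Linux that message usually means the kernel's OOM killer sent SIGKILL to the process, which bypasses PHP's error handling entirely and would explain the empty logs and the missing failed_jobs row. Assuming shell access to the server, the kernel log can confirm it:

    # look for OOM-killer entries around the time the worker died
    dmesg -T | grep -i -E 'out of memory|killed process'
    # or, on systemd hosts
    journalctl -k | grep -i 'killed process'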