

How to Send Bulk Mail using Queue with Laravel 10

In this article, we will see how to send bulk mail using a queue in Laravel 10. A Laravel queue lets you send bulk mail with a background process.

As we know, sending a single mail from a Laravel application works fine and does not take much time, but sending many emails at once takes too long, and you cannot perform any other operation during that time.

So, let's see how to send bulk mail using a queue in Laravel 10.

Step 1: Install Laravel 10

In this step, we will install Laravel 10 using the following command.

composer create-project --prefer-dist laravel/laravel laravel_10_send_mail

Step 2: Update .env File

Now, we will set up the mail configuration in the .env file as below. Here we use mailtrap.io; adjust these values for your own mail provider.

MAIL_MAILER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=your_username
MAIL_PASSWORD=your_password
MAIL_ENCRYPTION=tls

QUEUE_CONNECTION=database

Step 3: Create Route

In this step, we will create routes for sending bulk mail using the queue.

<?php
  
use Illuminate\Support\Facades\Route;
  
use App\Http\Controllers\SendMailController;
  
/*
|--------------------------------------------------------------------------
| Web Routes
|--------------------------------------------------------------------------
|
| Here is where you can register web routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| contains the "web" middleware group. Now create something great!
|
*/
  
Route::get('send/mail', [SendMailController::class, 'sendMail'])->name('send_mail');

Step 4: Create Queue Table

Now, we will create a jobs table in the database. So, copy the commands below and run them in your terminal.

php artisan queue:table

php artisan migrate

Step 5: Create Controller 

In this step, we will create SendMailController using the following command.

php artisan make:controller SendMailController

app/Http/Controllers/SendMailController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class SendMailController extends Controller
{
    public function sendMail(Request $request)
    {
        $details = [
            'subject' => 'Test Notification'
        ];

        $job = (new \App\Jobs\SendQueueEmail($details))
            ->delay(now()->addSeconds(2));

        dispatch($job);

        return "Mail queued successfully!";
    }
}

Step 6: Create Job

Now, we will create the SendQueueEmail.php file using the following command.

php artisan make:job SendQueueEmail

app/Jobs/SendQueueEmail.php

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use App\Models\User;
use Illuminate\Support\Facades\Mail;

class SendQueueEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    protected $details;
    public $timeout = 7200; // 2 hours

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($details)
    {
        $this->details = $details;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        $users = User::all();
        $subject = $this->details['subject'];

        foreach ($users as $user) {
            Mail::send('mail.mailExample', [], function ($message) use ($user, $subject) {
                $message->to($user->email, $user->name)
                    ->subject($subject);
            });
        }
    }
}
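One caveat worth noting: the handle() method above loads every user into memory at once. For a genuinely large user table, a common refinement is to read users in chunks with Eloquent's chunk() method. The sketch below is my own illustration of that idea (the chunk size of 500 is arbitrary), not part of the original tutorial.

```php
public function handle()
{
    $subject = $this->details['subject'];

    // Pull users in batches of 500 so memory use stays bounded.
    User::query()->chunk(500, function ($users) use ($subject) {
        foreach ($users as $user) {
            \Mail::send('mail.mailExample', [], function ($message) use ($user, $subject) {
                $message->to($user->email, $user->name)
                    ->subject($subject);
            });
        }
    });
}
```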

Step 7: Create Mail Blade

In this step, we will create a mailExample.blade.php file. So, add the following code to that file.

resources/views/mail/mailExample.blade.php

Hi <br/>
This is Test Mail.<br />
Thank you !!

And run the command below in your terminal to start a queue worker and process the queued mail jobs.

php artisan queue:listen
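A side note on worker commands (my own addition, not from the original article): queue:listen boots the framework fresh for every job, which is handy in development because code changes are picked up automatically, while queue:work keeps the application in memory and is the usual choice on a server.

```shell
# Development: slower, but reloads your code on every job
php artisan queue:listen

# Production-style worker: faster; restart it after deploys
php artisan queue:work --tries=3 --timeout=90
```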

Original article source at:  https://websolutionstuff.com/

#laravel #send #mail #queue 


Lawson Wehner

1678519680

Send Mail using Queue in Laravel 10

Hi Guys,

In this tutorial, we will discuss sending mail using a queue in Laravel 10. If you want an example of sending email through a queue in Laravel 10, you are in the right place: this post gives a simple example of how to do it.

Here, I will give you a simple and easy example of sending email using a queue in Laravel 10; just follow all of my steps.

Step 1: Download Laravel

Let us begin the tutorial by installing a new Laravel application. If you have already created a project, skip this step.

composer create-project laravel/laravel example-app

Step 2: Create Mail Class with Configuration

In this second step, we will create a mail class, TestQueueMail, using the command below.

php artisan make:mail TestQueueMail

After the command runs successfully, go to the "Mail" folder in your Laravel project directory and put the following code in your mail class.

app/Mail/TestQueueMail.php

<?php
  
namespace App\Mail;
  
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;
  
class TestQueueMail extends Mailable
{
    use Queueable, SerializesModels;
  
    /**
     * Create a new message instance.
     *
     * @return void
     */
    public function __construct()
    {
          
    }
    /**
     * Build the message.
     *
     * @return $this
     */
    public function build()
    {
        return $this->view('emails.test');
    }
}

So, now we need to create the email view using a Blade file. We will create a simple view file and copy the code below into the following path.

/resources/views/emails/test.blade.php

<!DOCTYPE html>
<html>
<head>
    <title>How to Send Mail using Queue in Laravel 10? - Nicesnippets.com</title>
</head>
<body>
   
<center>
<h2 style="padding: 23px;background: #b3deb8a1;border-bottom: 6px green solid;">
    <a href="https://www.nicesnippets.com">Visit Our Website : Nicesnippets.com</a>
</h2>
</center>
  
<p>Hi, Sir</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
  
<strong>Thank you Sir. :)</strong>
  
</body>
</html>

After configuring the view file, we have to set up email sending, so let's set the configuration in the .env file:

.env

MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=mygoogle@gmail.com
MAIL_PASSWORD=rrnnucvnqlbsl
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=mygoogle@gmail.com
MAIL_FROM_NAME="${APP_NAME}"

Step 3: Queue Configuration

In this third step, we will configure the queue driver. First of all, we will set the queue driver to "database"; you can also use another driver, such as redis, if you prefer. So, define the database driver in the .env file:

.env

QUEUE_CONNECTION=database

After that, we need to generate the migration and create the tables for the queue. So let's run the commands below to create the queue database tables:

Generate Migration:

php artisan queue:table

Run Migration:

php artisan migrate

Step 4: Create Queue Job

So, in this step, we will create a queue job using the following command; it generates a queueable job class. Let's run it:

php artisan make:job SendEmailJob

Now you have a SendEmailJob.php file in the "Jobs" directory. Open that file and put the code below in it.

app/Jobs/SendEmailJob.php

<?php
  
namespace App\Jobs;
  
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Mail;
use App\Mail\TestQueueMail;
  
class SendEmailJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
  
    protected $details;
  
    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($details)
    {
        $this->details = $details;
    }
  
    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        $email = new TestQueueMail();
        Mail::to($this->details['email'])->send($email);
    }
}

Step 5: Test Queue Job

Now it is time to use and test the queue job we created, so let's create a simple route with the following code to test it.

routes/web.php

Route::get('email-test', function(){
  
    $details['email'] = 'your_email@gmail.com';
  
    dispatch(new App\Jobs\SendEmailJob($details));
  
    dd('done');
});

Next, run the following command to process the queue; keep it running while you test:

php artisan queue:work

If the queue worker is running, you will see the processed jobs in its terminal output.

Run Laravel App:

All the steps are done. Now type the given command and hit enter to run the Laravel app:

php artisan serve

Now, open a web browser, enter the given URL, and view the app output:

http://localhost:8000/email-test


Keep Laravel Queue System Running on Server:

As we know, the "php artisan queue:work" command must keep running in the terminal, because the queue is only processed while it runs. So on a server, you should keep it running using Supervisor. Supervisor is a process monitor for the Linux operating system that will automatically restart your queue:work processes if they fail.

So let's install Supervisor using the command below:

Install Supervisor:

sudo apt-get install supervisor

Next, we need to create a configuration file for Supervisor at the following path; you can set the project path, user, and output file location as well:

/etc/supervisor/conf.d/laravel-worker.conf

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
stopwaitsecs=3600

Next, we will start Supervisor with the commands below:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*

Now you can check it from your end.

It will help you...

Original article source at:  https://www.nicesnippets.com/

#laravel #queue #send 

Hermann Frami

1673497464

Quirrel: The Task Queueing Solution for Serverless

Quirrel

The Task Queueing Solution for Serverless.

Quirrel makes job queueing simple as cake. It supports delayed jobs, fanout jobs, recurring jobs and CRON jobs.

Quirrel values ...

Getting Started

If you want to learn about Quirrel, check out the tutorial!

If you want to integrate Quirrel into an existing application, check out the Getting Started Guide.


🎉 Quirrel joins Netlify. Learn more 🎉



Download Details:

Author: Quirrel-dev
Source Code: https://github.com/quirrel-dev/quirrel 
License: MIT license

#serverless #typescript #queue 

Monty Boehm

1669448040

How to Implement Queue Data Structure using JavaScript

JavaScript queue data structure implementation tutorial

The Queue data structure is a linear data structure that allows you to store a collection of data inside it in a similar fashion to a real-world queue.

Whenever you’re going to a bank or buying a Starbucks coffee, you’re most likely going to form a line with other people and wait until it’s your turn to be served. A queue always implements the FIFO (First-In-First-Out) system. Those who came in earliest will be served first.

Waiting in line illustration. Source: freepik.com

The process of adding a new customer to the line is called enqueue, while removing a customer from the line after being served is called dequeue.

The Queue data structure mimics this real-world practice of waiting in line and serving the earliest customer first by creating an abstract rule that you need to implement to the data structure. The rules are as follows:

  • The data structure has 2 pointers: the head and the tail
  • The earliest element will be located on the head
  • The latest element will be located on the tail
  • A new element can be inserted into the structure using enqueue() method
  • The dequeue() method will remove the element on the head, moving the head to the next element in line

In addition, you may also count the length of the queue and peek to see the next element located in the head pointer to help with data manipulation. I will show you how to implement both additions later.

Here’s an illustration explaining how Queue data structure works:

How Queue data structure works

The Queue data structure is commonly used because of its fast insertion and deletion. The following table describes the Queue data structure's time complexity in Big O notation (from Wikipedia):

Algorithm   Average   Worst case
Space       O(n)      O(n)
Search      O(n)      O(n)
Insert      O(1)      O(1)
Delete      O(1)      O(1)

Now that you understand what the queue data structure looks like, let’s learn how to implement one using JavaScript.

JavaScript Queue implementation

You can implement a Queue data structure using JavaScript in three steps:

  • Create a class named Queue with three properties: elements, head, and tail
  • Write the length function and bind it as a property using get syntax
  • Write the enqueue(), dequeue(), and peek() method code

You will use the class as a blueprint to create a new object instance of Queue. Alright, let’s start with creating a Queue class that has the required properties:

class Queue {
  constructor() {
    this.elements = {};
    this.head = 0;
    this.tail = 0;
  }
}

Any element enqueued to the object instance will be stored in elements property. The head and tail properties will be the pointer, indicating the first and last element index inside the structure.

Next, write the length() method code as follows:

get length() {
  return this.tail - this.head;
}

The get syntax before the length() method declaration will bind the method as a property, so when you create a new instance later, you can call the method like a property:

const q = new Queue();
q.length; // calls the length() method

Finally, you need to write the three methods for enqueue(), dequeue(), and peek() process.

The enqueue() method takes an element as its parameter and puts it in the tail index. Once the new element is stored, you need to increment the tail pointer by one:

enqueue(element) {
  this.elements[this.tail] = element;
  this.tail++;
}

The dequeue() method first checks if the length property of the instance is larger than zero. When there's no element, the method simply returns undefined. When there is one or more elements in the instance, the method grabs the element at the head index and deletes it from the storage (the elements property in this case).

Then, the head pointer will be incremented by one and the deleted element is returned to the caller.

Here’s the code for dequeue() method:

dequeue() {
  if (this.length) {
    const element = this.elements[this.head];
    delete this.elements[this.head];
    this.head++;
    return element;
  }
  return undefined;
}

Finally, the peek() method will return the next element waiting in line. This will be the same element returned by dequeue() method, but the difference is this element won’t be removed from the queue.

Here’s the code for the peek() method:

peek() {
  if (this.length) {
    return this.elements[this.head];
  }
  return undefined;
}

And with that, your Queue class is finished. Here’s the full code for your reference:

class Queue {
  constructor() {
    this.elements = {};
    this.head = 0;
    this.tail = 0;
  }

  get length() {
    return this.tail - this.head;
  }

  enqueue(element) {
    this.elements[this.tail] = element;
    this.tail++;
  }

  dequeue() {
    if (this.length) {
      const element = this.elements[this.head];
      delete this.elements[this.head];
      this.head++;
      return element;
    }
    return undefined;
  }

  peek() {
    if (this.length) {
      return this.elements[this.head];
    }
    return undefined;
  }
}

You can test the code by creating a new instance of the Queue class and calling its methods:

const q = new Queue();
q.enqueue("James");
q.enqueue("Dean");
q.enqueue("Ben");
console.log(q.length); // 3
console.log(q.peek()); // "James"
console.log(q.dequeue()); // "James"
console.log(q.peek()); // "Dean"
console.log(q.length); // 2

Feel free to copy and use the code as you need 😉

Original article source at: https://sebhastian.com/

#javascript #queue 

Nigel Uys

1668754098

Developing an Asynchronous Task Queue in Python

This tutorial looks at how to implement several asynchronous task queues using Python's multiprocessing library and Redis.

Queue Data Structures

A queue is a First-In-First-Out (FIFO) data structure.

  1. an item is added at the tail (enqueue)
  2. an item is removed at the head (dequeue)


You'll see this in practice as you code out the examples in this tutorial.

Task

Let's start by creating a basic task:

# tasks.py

import collections
import json
import os
import sys
import uuid
from pathlib import Path

from nltk.corpus import stopwords

COMMON_WORDS = set(stopwords.words("english"))
BASE_DIR = Path(__file__).resolve(strict=True).parent
DATA_DIR = Path(BASE_DIR).joinpath("data")
OUTPUT_DIR = Path(BASE_DIR).joinpath("output")


def save_file(filename, data):
    random_str = uuid.uuid4().hex
    outfile = f"{filename}_{random_str}.txt"
    with open(Path(OUTPUT_DIR).joinpath(outfile), "w") as outfile:
        outfile.write(data)


def get_word_counts(filename):
    wordcount = collections.Counter()
    # get counts
    with open(Path(DATA_DIR).joinpath(filename), "r") as f:
        for line in f:
            wordcount.update(line.split())
    for word in set(COMMON_WORDS):
        del wordcount[word]

    # save file
    save_file(filename, json.dumps(dict(wordcount.most_common(20))))

    proc = os.getpid()

    print(f"Processed {filename} with process id: {proc}")


if __name__ == "__main__":
    get_word_counts(sys.argv[1])

So, get_word_counts finds the twenty most frequent words from a given text file and saves them to an output file. It also prints the current process identifier (or pid) using Python's os library.

Following along?

Create a project directory along with a virtual environment. Then, use pip to install NLTK:

(env)$ pip install nltk==3.6.5

Once installed, invoke the Python shell and download the stopwords corpus:

>>> import nltk
>>> nltk.download("stopwords")

[nltk_data] Downloading package stopwords to
[nltk_data]     /Users/michael/nltk_data...
[nltk_data]   Unzipping corpora/stopwords.zip.
True

If you experience an SSL error refer to this article.

Example fix:

>>> import nltk
>>> nltk.download('stopwords')
[nltk_data] Error loading stopwords: <urlopen error [SSL:
[nltk_data]     CERTIFICATE_VERIFY_FAILED] certificate verify failed:
[nltk_data]     unable to get local issuer certificate (_ssl.c:1056)>
False
>>> import ssl
>>> try:
...     _create_unverified_https_context = ssl._create_unverified_context
... except AttributeError:
...     pass
... else:
...     ssl._create_default_https_context = _create_unverified_https_context
...
>>> nltk.download('stopwords')
[nltk_data] Downloading package stopwords to
[nltk_data]     /Users/michael.herman/nltk_data...
[nltk_data]   Unzipping corpora/stopwords.zip.
True

Add the above tasks.py file to your project directory but don't run it quite yet.

Multiprocessing Pool

We can run this task in parallel using the multiprocessing library:

# simple_pool.py

import multiprocessing
import time

from tasks import get_word_counts

PROCESSES = multiprocessing.cpu_count() - 1


def run():
    print(f"Running with {PROCESSES} processes!")

    start = time.time()
    with multiprocessing.Pool(PROCESSES) as p:
        p.map_async(
            get_word_counts,
            [
                "pride-and-prejudice.txt",
                "heart-of-darkness.txt",
                "frankenstein.txt",
                "dracula.txt",
            ],
        )
        # clean up
        p.close()
        p.join()

    print(f"Time taken = {time.time() - start:.10f}")


if __name__ == "__main__":
    run()

Here, using the Pool class, we processed four tasks across multiple worker processes (one fewer than the machine's CPU count).

Did you notice the map_async method? There are essentially four different methods available for mapping tasks to processes. When choosing one, you have to take multi-args, concurrency, blocking, and ordering into account:

Method       Multi-args   Concurrency   Blocking   Ordered-results
map          No           Yes           Yes        Yes
map_async    No           No            No         Yes
apply        Yes          No            Yes        No
apply_async  Yes          Yes           No         No

Without both close and join, garbage collection may not occur, which could lead to a memory leak.

  1. close tells the pool not to accept any new tasks
  2. join tells the pool to exit after all tasks have completed

Following along? Grab the Project Gutenberg sample text files from the "data" directory in the simple-task-queue repo, and then add an "output" directory.

Your project directory should look like this:

├── data
│   ├── dracula.txt
│   ├── frankenstein.txt
│   ├── heart-of-darkness.txt
│   └── pride-and-prejudice.txt
├── output
├── simple_pool.py
└── tasks.py

It should take less than a second to run:

(env)$ python simple_pool.py

Running with 15 processes!
Processed heart-of-darkness.txt with process id: 50510
Processed frankenstein.txt with process id: 50515
Processed pride-and-prejudice.txt with process id: 50511
Processed dracula.txt with process id: 50512

Time taken = 0.6383581161

This script ran on an i9 Macbook Pro with 16 cores.

So, the multiprocessing Pool class handles the queuing logic for us. It's perfect for running CPU-bound tasks or really any job that can be broken up and distributed independently. If you need more control over the queue or need to share data between multiple processes, you may want to look at the Queue class.

For more on this along with the difference between parallelism (multiprocessing) and concurrency (multithreading), review the Speeding Up Python with Concurrency, Parallelism, and asyncio article.

Multiprocessing Queue

Let's look at a simple example:

# simple_queue.py

import multiprocessing


def run():
    books = [
        "pride-and-prejudice.txt",
        "heart-of-darkness.txt",
        "frankenstein.txt",
        "dracula.txt",
    ]
    queue = multiprocessing.Queue()

    print("Enqueuing...")
    for book in books:
        print(book)
        queue.put(book)

    print("\nDequeuing...")
    while not queue.empty():
        print(queue.get())


if __name__ == "__main__":
    run()

The Queue class, also from the multiprocessing library, is a basic FIFO (first in, first out) data structure. It's similar to the queue.Queue class, but designed for interprocess communication. We used put to enqueue an item to the queue and get to dequeue an item.

Check out the Queue source code for a better understanding of the mechanics of this class.

Now, let's look at a more advanced example:

# simple_task_queue.py

import multiprocessing
import time

from tasks import get_word_counts

PROCESSES = multiprocessing.cpu_count() - 1
NUMBER_OF_TASKS = 10


def process_tasks(task_queue):
    while not task_queue.empty():
        book = task_queue.get()
        get_word_counts(book)
    return True


def add_tasks(task_queue, number_of_tasks):
    for num in range(number_of_tasks):
        task_queue.put("pride-and-prejudice.txt")
        task_queue.put("heart-of-darkness.txt")
        task_queue.put("frankenstein.txt")
        task_queue.put("dracula.txt")
    return task_queue


def run():
    empty_task_queue = multiprocessing.Queue()
    full_task_queue = add_tasks(empty_task_queue, NUMBER_OF_TASKS)
    processes = []
    print(f"Running with {PROCESSES} processes!")
    start = time.time()
    for n in range(PROCESSES):
        p = multiprocessing.Process(target=process_tasks, args=(full_task_queue,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print(f"Time taken = {time.time() - start:.10f}")


if __name__ == "__main__":
    run()

Here, we enqueued 40 tasks (ten for each text file) to the queue, created separate processes via the Process class, used start to start running the processes, and, finally, used join to complete the processes.

It should still take less than a second to run.

Challenge: Check your understanding by adding another queue to hold completed tasks. You can enqueue them within the process_tasks function.

Logging

The multiprocessing library provides support for logging as well:

# simple_task_queue_logging.py

import logging
import multiprocessing
import os
import time

from tasks import get_word_counts

PROCESSES = multiprocessing.cpu_count() - 1
NUMBER_OF_TASKS = 10


def process_tasks(task_queue):
    logger = multiprocessing.get_logger()
    proc = os.getpid()
    while not task_queue.empty():
        try:
            book = task_queue.get()
            get_word_counts(book)
        except Exception as e:
            logger.error(e)
        logger.info(f"Process {proc} completed successfully")
    return True


def add_tasks(task_queue, number_of_tasks):
    for num in range(number_of_tasks):
        task_queue.put("pride-and-prejudice.txt")
        task_queue.put("heart-of-darkness.txt")
        task_queue.put("frankenstein.txt")
        task_queue.put("dracula.txt")
    return task_queue


def run():
    empty_task_queue = multiprocessing.Queue()
    full_task_queue = add_tasks(empty_task_queue, NUMBER_OF_TASKS)
    processes = []
    print(f"Running with {PROCESSES} processes!")
    start = time.time()
    for w in range(PROCESSES):
        p = multiprocessing.Process(target=process_tasks, args=(full_task_queue,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print(f"Time taken = {time.time() - start:.10f}")


if __name__ == "__main__":
    multiprocessing.log_to_stderr(logging.ERROR)
    run()

To test, change task_queue.put("dracula.txt") to task_queue.put("drakula.txt"). You should see the following error outputted ten times in the terminal:

[ERROR/Process-4] [Errno 2] No such file or directory:
'simple-task-queue/data/drakula.txt'

Want to log to disk?

# simple_task_queue_logging.py

import logging
import multiprocessing
import os
import time

from tasks import get_word_counts

PROCESSES = multiprocessing.cpu_count() - 1
NUMBER_OF_TASKS = 10


def create_logger():
    logger = multiprocessing.get_logger()
    logger.setLevel(logging.INFO)
    fh = logging.FileHandler("process.log")
    fmt = "%(asctime)s - %(levelname)s - %(message)s"
    formatter = logging.Formatter(fmt)
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger


def process_tasks(task_queue):
    logger = create_logger()
    proc = os.getpid()
    while not task_queue.empty():
        try:
            book = task_queue.get()
            get_word_counts(book)
        except Exception as e:
            logger.error(e)
        logger.info(f"Process {proc} completed successfully")
    return True


def add_tasks(task_queue, number_of_tasks):
    for num in range(number_of_tasks):
        task_queue.put("pride-and-prejudice.txt")
        task_queue.put("heart-of-darkness.txt")
        task_queue.put("frankenstein.txt")
        task_queue.put("dracula.txt")
    return task_queue


def run():
    empty_task_queue = multiprocessing.Queue()
    full_task_queue = add_tasks(empty_task_queue, NUMBER_OF_TASKS)
    processes = []
    print(f"Running with {PROCESSES} processes!")
    start = time.time()
    for w in range(PROCESSES):
        p = multiprocessing.Process(target=process_tasks, args=(full_task_queue,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print(f"Time taken = {time.time() - start:.10f}")


if __name__ == "__main__":
    run()

Again, cause an error by altering one of the file names, and then run it. Take a look at process.log. It's not quite as organized as it should be since the Python logging library does not use shared locks between processes. To get around this, let's have each process write to its own file. To keep things organized, add a logs directory to your project folder:

#  simple_task_queue_logging_separate_files.py

import logging
import multiprocessing
import os
import time

from tasks import get_word_counts

PROCESSES = multiprocessing.cpu_count() - 1
NUMBER_OF_TASKS = 10


def create_logger(pid):
    logger = multiprocessing.get_logger()
    logger.setLevel(logging.INFO)
    fh = logging.FileHandler(f"logs/process_{pid}.log")
    fmt = "%(asctime)s - %(levelname)s - %(message)s"
    formatter = logging.Formatter(fmt)
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger


def process_tasks(task_queue):
    proc = os.getpid()
    logger = create_logger(proc)
    while not task_queue.empty():
        try:
            book = task_queue.get()
            get_word_counts(book)
        except Exception as e:
            logger.error(e)
        logger.info(f"Process {proc} completed successfully")
    return True


def add_tasks(task_queue, number_of_tasks):
    for num in range(number_of_tasks):
        task_queue.put("pride-and-prejudice.txt")
        task_queue.put("heart-of-darkness.txt")
        task_queue.put("frankenstein.txt")
        task_queue.put("dracula.txt")
    return task_queue


def run():
    empty_task_queue = multiprocessing.Queue()
    full_task_queue = add_tasks(empty_task_queue, NUMBER_OF_TASKS)
    processes = []
    print(f"Running with {PROCESSES} processes!")
    start = time.time()
    for w in range(PROCESSES):
        p = multiprocessing.Process(target=process_tasks, args=(full_task_queue,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print(f"Time taken = {time.time() - start:.10f}")


if __name__ == "__main__":
    run()

Redis

Moving right along, instead of using an in-memory queue, let's add Redis into the mix.

Following along? Download and install Redis if you do not already have it installed. Then, install the Python interface:

(env)$ pip install redis==4.0.2

We'll break the logic up into four files:

  1. redis_queue.py creates new queues and tasks via the SimpleQueue and SimpleTask classes, respectively.
  2. redis_queue_client enqueues new tasks.
  3. redis_queue_worker dequeues and processes tasks.
  4. redis_queue_server spawns worker processes.

# redis_queue.py

import pickle
import uuid


class SimpleQueue(object):
    def __init__(self, conn, name):
        self.conn = conn
        self.name = name

    def enqueue(self, func, *args):
        task = SimpleTask(func, *args)
        serialized_task = pickle.dumps(task, protocol=pickle.HIGHEST_PROTOCOL)
        self.conn.lpush(self.name, serialized_task)
        return task.id

    def dequeue(self):
        _, serialized_task = self.conn.brpop(self.name)
        task = pickle.loads(serialized_task)
        task.process_task()
        return task

    def get_length(self):
        return self.conn.llen(self.name)


class SimpleTask(object):
    def __init__(self, func, *args):
        self.id = str(uuid.uuid4())
        self.func = func
        self.args = args

    def process_task(self):
        self.func(*self.args)

Here, we defined two classes, SimpleQueue and SimpleTask:

  1. SimpleQueue creates a new queue and enqueues, dequeues, and gets the length of the queue.
  2. SimpleTask creates new tasks, which are used by the instance of the SimpleQueue class to enqueue new tasks, and processes new tasks.

Curious about lpush(), brpop(), and llen()? Refer to the Command reference page. (The brpop() function is particularly cool because it blocks the connection until a value exists to be popped!)
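To see why lpush plus brpop yields FIFO behavior without needing a running Redis server, here is a small in-memory stand-in built on collections.deque (FakeRedisList is an illustrative name, not part of the redis package):

```python
from collections import deque


class FakeRedisList:
    """In-memory stand-in for the three Redis list commands used above."""

    def __init__(self):
        self.items = deque()

    def lpush(self, name, value):
        self.items.appendleft(value)  # push onto the head (left) of the list

    def brpop(self, name):
        # the real brpop blocks until a value exists; this sketch assumes one does
        return name, self.items.pop()  # pop from the tail (right)

    def llen(self, name):
        return len(self.items)


conn = FakeRedisList()
conn.lpush("sample", "task-1")
conn.lpush("sample", "task-2")

length = conn.llen("sample")     # 2
_, first = conn.brpop("sample")  # "task-1": tasks come out in insertion order
```

Pushing on the left and popping on the right is what turns a Redis list into a queue rather than a stack.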

# redis_queue_client.py

import redis

from redis_queue import SimpleQueue
from tasks import get_word_counts

NUMBER_OF_TASKS = 10


if __name__ == "__main__":
    r = redis.Redis()
    queue = SimpleQueue(r, "sample")
    count = 0
    for num in range(NUMBER_OF_TASKS):
        queue.enqueue(get_word_counts, "pride-and-prejudice.txt")
        queue.enqueue(get_word_counts, "heart-of-darkness.txt")
        queue.enqueue(get_word_counts, "frankenstein.txt")
        queue.enqueue(get_word_counts, "dracula.txt")
        count += 4
    print(f"Enqueued {count} tasks!")

This module will create a new instance of Redis and the SimpleQueue class. It will then enqueue 40 tasks.

# redis_queue_worker.py

import redis

from redis_queue import SimpleQueue


def worker():
    r = redis.Redis()
    queue = SimpleQueue(r, "sample")
    if queue.get_length() > 0:
        queue.dequeue()
    else:
        print("No tasks in the queue")


if __name__ == "__main__":
    worker()

If a task is available, the dequeue method is called, which then de-serializes the task and calls the process_task method (in redis_queue.py).

# redis_queue_server.py

import multiprocessing

from redis_queue_worker import worker

PROCESSES = 4


def run():
    processes = []
    print(f"Running with {PROCESSES} processes!")
    while True:
        for w in range(PROCESSES):
            p = multiprocessing.Process(target=worker)
            processes.append(p)
            p.start()
        for p in processes:
            p.join()


if __name__ == "__main__":
    run()

The run method spawns four new worker processes.

You probably don’t want four processes running at once all the time, but there may be times that you will need four or more processes. Think about how you could programmatically spin up and down additional workers based on demand.
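As a sketch of such a policy (the thresholds here are made up for illustration), the desired worker count could be derived from the current queue length:

```python
def workers_needed(queue_length, tasks_per_worker=10, max_workers=8):
    """Hypothetical scaling policy: one worker per 10 queued tasks, capped at 8."""
    if queue_length <= 0:
        return 0
    # ceiling division: 1-10 tasks -> 1 worker, 11-20 -> 2 workers, etc.
    return min(max_workers, -(-queue_length // tasks_per_worker))


print(workers_needed(0))    # 0
print(workers_needed(25))   # 3
print(workers_needed(500))  # 8
```

The server loop could call something like this (with the queue length from llen) before each batch of spawns instead of always using a fixed PROCESSES value.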

To test, run redis_queue_server.py and redis_queue_client.py in separate terminal windows.

Check your understanding again by adding logging to the above application.

Conclusion

In this tutorial, we looked at a number of asynchronous task queue implementations in Python. If the requirements are simple enough, it may be easier to develop a queue in this manner. That said, if you're looking for more advanced features -- like task scheduling, batch processing, job prioritization, and retrying of failed tasks -- you should look into a full-blown solution. Check out Celery, RQ, or Huey.

Grab the final code from the simple-task-queue repo.

Original article source at: https://testdriven.io/

#python #queue 


Asynchronous Tasks with Flask and Redis Queue

If a long-running task is part of your application's workflow, you should handle it in the background, outside the normal flow.

Perhaps your web application requires users to submit a thumbnail (which will probably need to be re-sized) and confirm their email when they register. If your application processed the image and sent a confirmation email directly in the request handler, then the end user would have to wait for them both to finish. Instead, you'll want to pass these tasks off to a task queue and let a separate worker process deal with it, so you can immediately send a response back to the client. The end user can do other things on the client-side and your application is free to respond to requests from other users.

This tutorial looks at how to configure Redis Queue (RQ) to handle long-running tasks in a Flask app.

Celery is a viable solution as well. Check out Asynchronous Tasks with Flask and Celery for more.

Objectives

By the end of this tutorial, you will be able to:

  1. Integrate Redis Queue into a Flask app and create tasks.
  2. Containerize Flask and Redis with Docker.
  3. Run long-running tasks in the background with a separate worker process.
  4. Set up RQ Dashboard to monitor queues, jobs, and workers.
  5. Scale the worker count with Docker.

Workflow

Our goal is to develop a Flask application that works in conjunction with Redis Queue to handle long-running processes outside the normal request/response cycle.

  1. The end user kicks off a new task via a POST request to the server-side
  2. Within the view, a task is added to the queue and the task id is sent back to the client-side
  3. Using AJAX, the client continues to poll the server to check the status of the task while the task itself is running in the background

flask and redis queue user flow

In the end, the app will look like this:

final app

Project Setup

Want to follow along? Clone down the base project, and then review the code and project structure:

$ git clone https://github.com/mjhea0/flask-redis-queue --branch base --single-branch
$ cd flask-redis-queue

Since we'll need to manage three processes in total (Flask, Redis, worker), we'll use Docker to simplify our workflow so they can be managed from a single terminal window.

To test, run:

$ docker-compose up -d --build

Open your browser to http://localhost:5004. You should see:

flask, redis queue, docker

Trigger a Task

An event handler in project/client/static/main.js is set up that listens for a button click and sends an AJAX POST request to the server with the appropriate task type: 1, 2, or 3.

$('.btn').on('click', function() {
  $.ajax({
    url: '/tasks',
    data: { type: $(this).data('type') },
    method: 'POST'
  })
  .done((res) => {
    getStatus(res.data.task_id);
  })
  .fail((err) => {
    console.log(err);
  });
});

On the server-side, a view is already configured to handle the request in project/server/main/views.py:

@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]
    return jsonify(task_type), 202

We just need to wire up Redis Queue.

Redis Queue

So, we need to spin up two new processes: Redis and a worker. Add them to the docker-compose.yml file:

version: '3.8'

services:

  web:
    build: .
    image: web
    container_name: web
    ports:
      - 5004:5000
    command: python manage.py run -h 0.0.0.0
    volumes:
      - .:/usr/src/app
    environment:
      - FLASK_DEBUG=1
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  worker:
    image: web
    command: python manage.py run_worker
    volumes:
      - .:/usr/src/app
    environment:
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  redis:
    image: redis:6.2-alpine

Add the task to a new file called tasks.py in "project/server/main":

# project/server/main/tasks.py


import time


def create_task(task_type):
    time.sleep(int(task_type) * 10)
    return True

Update the view to connect to Redis, enqueue the task, and respond with the id:

@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.enqueue(create_task, task_type)
    response_object = {
        "status": "success",
        "data": {
            "task_id": task.get_id()
        }
    }
    return jsonify(response_object), 202

Don't forget the imports:

import redis
from rq import Queue, Connection
from flask import render_template, Blueprint, jsonify, request, current_app

from project.server.main.tasks import create_task

Update BaseConfig:

class BaseConfig(object):
    """Base configuration."""

    WTF_CSRF_ENABLED = True
    REDIS_URL = "redis://redis:6379/0"
    QUEUES = ["default"]

Did you notice that we referenced the redis service (from docker-compose.yml) in the REDIS_URL rather than localhost or some other IP? Review the Docker Compose docs for more info on connecting to other services via the hostname.

Finally, we can use a Redis Queue worker to process tasks at the top of the queue.

manage.py:

@cli.command("run_worker")
def run_worker():
    redis_url = app.config["REDIS_URL"]
    redis_connection = redis.from_url(redis_url)
    with Connection(redis_connection):
        worker = Worker(app.config["QUEUES"])
        worker.work()

Here, we set up a custom CLI command to fire the worker.

It's important to note that the @cli.command() decorator will provide access to the application context along with the associated config variables from project/server/config.py when the command is executed.

Add the imports as well:

import redis
from rq import Connection, Worker

Add the dependencies to the requirements file:

redis==4.1.1
rq==1.10.1

Build and spin up the new containers:

$ docker-compose up -d --build

To trigger a new task, run:

$ curl -F type=0 http://localhost:5004/tasks

You should see something like:

{
  "data": {
    "task_id": "bdad64d0-3865-430e-9cc3-ec1410ddb0fd"
  },
  "status": "success"
}

Task Status

Turn back to the event handler on the client-side:

$('.btn').on('click', function() {
  $.ajax({
    url: '/tasks',
    data: { type: $(this).data('type') },
    method: 'POST'
  })
  .done((res) => {
    getStatus(res.data.task_id);
  })
  .fail((err) => {
    console.log(err);
  });
});

Once the response comes back from the original AJAX request, we then continue to call getStatus() with the task id every second. If the response is successful, a new row is added to the table on the DOM.

function getStatus(taskID) {
  $.ajax({
    url: `/tasks/${taskID}`,
    method: 'GET',
  })
  .done((res) => {
    const html = `
    <tr>
      <td>${res.data.task_id}</td>
      <td>${res.data.task_status}</td>
      <td>${res.data.task_result}</td>
    </tr>`;
    $('#tasks').prepend(html);
    const taskStatus = res.data.task_status;
    if (taskStatus === 'finished' || taskStatus === 'failed') return false;
    setTimeout(function () {
      getStatus(res.data.task_id);
    }, 1000);
  })
  .fail((err) => {
    console.log(err);
  });
}

Update the view:

@main_blueprint.route("/tasks/<task_id>", methods=["GET"])
def get_status(task_id):
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.fetch_job(task_id)
    if task:
        response_object = {
            "status": "success",
            "data": {
                "task_id": task.get_id(),
                "task_status": task.get_status(),
                "task_result": task.result,
            },
        }
    else:
        response_object = {"status": "error"}
    return jsonify(response_object)

Add a new task to the queue:

$ curl -F type=1 http://localhost:5004/tasks

Then, grab the task_id from the response and call the updated endpoint to view the status:

$ curl http://localhost:5004/tasks/5819789f-ebd7-4e67-afc3-5621c28acf02

{
  "data": {
    "task_id": "5819789f-ebd7-4e67-afc3-5621c28acf02",
    "task_result": true,
    "task_status": "finished"
  },
  "status": "success"
}

Test it out in the browser as well.

Dashboard

RQ Dashboard is a lightweight, web-based monitoring system for Redis Queue.

To set up, first add a new directory to the "project" directory called "dashboard". Then, add a new Dockerfile to that newly created directory:

FROM python:3.10-alpine

RUN pip install rq-dashboard

# https://github.com/rq/rq/issues/1469
RUN pip uninstall -y click
RUN pip install click==7.1.2

EXPOSE 9181

CMD ["rq-dashboard"]

Simply add the service to the docker-compose.yml file like so:

version: '3.8'

services:

  web:
    build: .
    image: web
    container_name: web
    ports:
      - 5004:5000
    command: python manage.py run -h 0.0.0.0
    volumes:
      - .:/usr/src/app
    environment:
      - FLASK_DEBUG=1
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  worker:
    image: web
    command: python manage.py run_worker
    volumes:
      - .:/usr/src/app
    environment:
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  redis:
    image: redis:6.2-alpine

  dashboard:
    build: ./project/dashboard
    image: dashboard
    container_name: dashboard
    ports:
      - 9181:9181
    command: rq-dashboard -H redis
    depends_on:
      - redis

Build the image and spin up the container:

$ docker-compose up -d --build

Navigate to http://localhost:9181 to view the dashboard.

Kick off a few jobs to fully test the dashboard.

Try adding a few more workers to see how that affects things:

$ docker-compose up -d --build --scale worker=3

Conclusion

This has been a basic guide on how to configure Redis Queue to run long-running tasks in a Flask app. You should let the queue handle any processes that could block or slow down the user-facing code.

Looking for some challenges?

  1. Deploy this application across a number of DigitalOcean droplets using Kubernetes or Docker Swarm.
  2. Write unit tests for the new endpoints, mocking out the Redis instance with fakeredis.
  3. Instead of polling the server, try using Flask-SocketIO to open up a websocket connection.

Grab the code from the repo.

Original article source at: https://testdriven.io/

#flask #redis #queue 

How to Run Asynchronous Tasks with Flask and Redis Queue

Jocko: Kafka Implemented in Golang with Built-in Coordination

Jocko

Kafka/distributed commit log service in Go.

Goals of this project:

  • Implement Kafka in Go
  • Protocol compatible with Kafka so Kafka clients and services work with Jocko
  • Make operating simpler
  • Distribute a single binary
  • Use Serf for discovery, Raft for consensus (and remove the need to run ZooKeeper)
  • Smarter configuration settings
    • Able to use percentages of disk space for retention policies rather than only bytes and time kept
    • Handling size configs when you change the number of partitions or add topics
  • Learn a lot and have fun

TODO

  •  Producing
  •  Fetching
  •  Partition consensus and distribution
  •  Protocol
    •  Produce
    •  Fetch
    •  Metadata
    •  Create Topics
    •  Delete Topics
    •  Consumer group [current task]
  •  Discovery
  •  API versioning [more API versions to implement]
  •  Replication [first draft done - testing heavily now]

Hiatus Writing Book

I’m writing a book for PragProg called Building Distributed Services with Go. You can sign up on this mailing list and get updated when the book’s available. It walks you through building a distributed commit log from scratch. I hope it will help Jocko contributors and people who want to work on distributed services.

Reading

Project Layout

├── broker        broker subsystem
├── cmd           commands
│   └── jocko     command to run a Jocko broker and manage topics
├── commitlog     low-level commit log implementation
├── examples      examples running/using Jocko
│   ├── cluster   example booting up a 3-broker Jocko cluster
│   └── sarama    example producing/consuming with Sarama
├── protocol      golang implementation of Kafka's protocol
├── prometheus    wrapper around Prometheus' client lib to handle metrics
├── server        API subsystem
└── testutil      test utils
    └── mock      mocks of the various subsystems

Building

Local

Clone Jocko

$ go get github.com/travisjeffery/jocko

Build Jocko

$ cd $GOPATH/src/github.com/travisjeffery/jocko
$ make build

(If you see an error about dep not being found, ensure that $GOPATH/bin is in your PATH)

Docker

docker build -t travisjeffery/jocko:latest .

Contributing

See CONTRIBUTING for details on submitting patches and the contribution workflow.


travisjeffery.com

GitHub @travisjeffery

Twitter @travisjeffery

Medium @travisjeffery

Download Details:

Author: travisjeffery
Source Code: https://github.com/travisjeffery/jocko 
License: MIT license

#go #golang #streaming #kafka #queue 


Tiny-queue: A Simple FIFO Queue Implementation As A Linked List

tiny-queue

A simple FIFO queue implementation as a linked list. The main benefit is to avoid doing shift() on an array, which may be slow. It's implemented in the straightforward root -> node1 -> node2 -> etc. architecture that you may have learned in CS 101.

This can typically be used as a drop-in replacement for an array, and it's only 38 lines of code.

See this Wikipedia page for a good explanation of the tradeoffs of a linked list versus other data structures.
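The linked-list design described above can be sketched in a few lines of plain JavaScript. This is a hypothetical miniature for illustration, not the module's actual source:

```javascript
// Minimal FIFO queue as a singly linked list: push appends at the tail,
// shift removes from the head, so both are O(1) -- unlike
// Array.prototype.shift(), which re-indexes every remaining element.
class TinyQueue {
  constructor() {
    this.head = null;  // next item to be shifted out
    this.tail = null;  // last item pushed in
    this.length = 0;
  }
  push(value) {
    const node = { value, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;  // queue was empty
    this.tail = node;
    this.length++;
  }
  shift() {
    if (!this.head) return undefined;  // empty queue, like the real module
    const { value } = this.head;
    this.head = this.head.next;
    if (!this.head) this.tail = null;  // queue became empty
    this.length--;
    return value;
  }
}

const q = new TinyQueue();
q.push('foo');
q.push('bar');
console.log(q.shift()); // 'foo'
console.log(q.shift()); // 'bar'
console.log(q.length);  // 0
```

Because head and tail are updated in constant time, a long queue never pays the linear cost that a plain array would on every shift().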

Usage

npm install tiny-queue

Then:

var Queue = require('tiny-queue');
var queue = new Queue();

queue.push('foo');
queue.push('bar');
queue.shift(); // 'foo'
queue.shift(); //'bar'
queue.length; // 0
queue.shift(); // undefined

API

The returned Queue object, once instantiated, only supports four operations:

queue.push()
queue.shift()
queue.slice() // returns a regular Array
queue.length

So it's basically a drop-in replacement for most naïve usages of an array as a queue.

Download Details:

Author: Nolanlawson
Source Code: https://github.com/nolanlawson/tiny-queue 
License: Apache-2.0 license

#javascript #tiny #queue 

Reid Rohan

Bull: Premium Queue Package for Handling Distributed Jobs & Messages

Bull

The fastest, most reliable, Redis-based queue for Node. Carefully written for rock solid stability and atomicity.


Features

  •  Minimal CPU usage due to a polling-free design.
  •  Robust design based on Redis.
  •  Delayed jobs.
  •  Schedule and repeat jobs according to a cron specification.
  •  Rate limiter for jobs.
  •  Retries.
  •  Priority.
  •  Concurrency.
  •  Pause/resume—globally or locally.
  •  Multiple job types per queue.
  •  Threaded (sandboxed) processing functions.
  •  Automatic recovery from process crashes.

And coming up on the roadmap...

  •  Job completion acknowledgement.
  •  Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

Feature          | Bull            | Kue   | Bee      | Agenda
-----------------|-----------------|-------|----------|-------
Backend          | redis           | redis | redis    | mongo
Priorities       |                 |       |          |
Concurrency      |                 |       |          |
Delayed jobs     |                 |       |          |
Global events    |                 |       |          |
Rate Limiter     |                 |       |          |
Pause/Resume     |                 |       |          |
Sandboxed worker |                 |       |          |
Repeatable jobs  |                 |       |          |
Atomic ops       |                 |       |          |
Persistence      |                 |       |          |
UI               |                 |       |          |
Optimized for    | Jobs / Messages | Jobs  | Messages | Jobs

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, whether code fixes, new features, or doc improvements. Code formatting is enforced by Prettier. For commits, please follow the Conventional Commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

var Queue = require('bull');

var videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
var audioQueue = new Queue('audio transcoding', {redis: {port: 6379, host: '127.0.0.1', password: 'foobared'}}); // Specify Redis connection using object
var imageQueue = new Queue('image transcoding');
var pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function(job, done){

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or pass an error if something went wrong
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function(job, done){
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or pass an error if something went wrong
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function(job, done){
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or pass an error if something went wrong
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function(job){
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({video: 'http://example.com/video1.mov'});
audioQueue.add({audio: 'http://example.com/audio1.mp3'});
imageQueue.add({image: 'http://example.com/image1.tiff'});

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function(job){ // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Less connections to redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function(job){
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function(job){
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, {repeat: {cron: '15 3 * * *'}});

As a tip, check your expressions here to verify they are correct: cron expression descriptor

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function(){
  // queue is paused now
});

queue.resume().then(function(){
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function(job, result){
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

var userJohn = new Queue('john');
var userLisa = new Queue('lisa');
.
.
.

However, every queue instance will require new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
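One way to reuse connections is Bull's documented createClient option. The sketch below shares one client and one subscriber connection across queues; the connection URL is a placeholder, and it needs a running Redis, so verify the details against Bull's reference before relying on it:

```javascript
const Queue = require('bull');
const Redis = require('ioredis');

// Share a single client (and a single subscriber) across many queues
// instead of letting each Queue instance open its own connections.
const client = new Redis('redis://127.0.0.1:6379');
const subscriber = new Redis('redis://127.0.0.1:6379');

const opts = {
  createClient(type) {
    switch (type) {
      case 'client':
        return client;
      case 'subscriber':
        return subscriber;
      default:
        // 'bclient' connections issue blocking commands,
        // so each queue gets a fresh one.
        return new Redis('redis://127.0.0.1:6379');
    }
  },
};

const userJohn = new Queue('john', opts);
const userLisa = new Queue('lisa', opts);
```

With this shape, creating many queues costs only one extra blocking connection each, rather than three.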

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

var
  Queue = require('bull'),
  cluster = require('cluster');

var numWorkers = 8;
var queue = new Queue("test concurrent queue");

if(cluster.isMaster){
  for (var i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function(worker) {
    // Lets create a few jobs for the queue workers
    for(var i=0; i<500; i++){
      queue.add({foo: 'bar'});
    };
  });

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
}else{
  queue.process(function(job, jobDone){
    console.log("Job done by worker", cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
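Following the advice above, a minimal stalled-event listener might look like this sketch (the queue name, URL, and logging are placeholders, and a running Redis is assumed):

```javascript
const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');

// A stalled job lost its lock and will be reprocessed, meaning the same
// work may run twice; surface that loudly in your error monitoring.
videoQueue.on('stalled', (job) => {
  console.error(
    `Job ${job.id} stalled and will be retried; ` +
    'check for a blocked event loop or a crashed worker.'
  );
});
```

In production you would replace the console.error call with a report to whatever error-monitoring system you use.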


📻 News and updates

Follow me on Twitter for important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

AtlassianAutodeskMozillaNestSalesforce

BullMQ

If you want to start using the next major version of Bull, written entirely in TypeScript, you are welcome to the new repo here. Otherwise you are very welcome to keep using Bull, which is a safe, battle-tested codebase.


🚀 Sponsors 🚀

RedisGreen

If you need high quality production Redis instances for your Bull projects, please consider subscribing to RedisGreen, leaders in Redis hosting that works perfectly with Bull. Use the promo code "BULLMQ" when signing up to help us sponsor the development of Bull!


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Check the new Guide!


Download Details:

Author: Optimalbits
Source Code: https://github.com/optimalbits/bull 
License: View license

#javascript #node #queue #job #schedule 

Reid Rohan

Arena: An interactive UI Dashboard for Bee Queue

Arena   

An intuitive Web GUI for Bee Queue, Bull and BullMQ. Built on Express so you can run Arena standalone, or mounted in another app as middleware.

For a quick introduction to the motivations for creating Arena, read Interactively monitoring Bull, a Redis-backed job queue for Node.

Screenshots

Features

  • Check the health of a queue and its jobs at a glance
  • Paginate and filter jobs by their state
  • View details and stacktraces of jobs with permalinks
  • Restart and retry jobs with one click

Usage

Arena accepts the following options:

const Arena = require('bull-arena');

// Mandatory import of queue library.
const Bee = require('bee-queue');

Arena({
  // All queue libraries used must be explicitly imported and included.
  Bee,

  // Provide a `Bull` option when using bull, similar to the `Bee` option above.

  queues: [
    {
      // Required for each queue definition.
      name: 'name_of_my_queue',

      // User-readable display name for the host. Required.
      hostId: 'Queue Server 1',

      // Queue type (Bull or Bee - default Bull).
      type: 'bee',

      // Queue key prefix. Defaults to "bq" for Bee and "bull" for Bull.
      prefix: 'foo',
    },
  ],

  // Optionally include your own stylesheet
  customCssPath: 'https://example.com/custom-arena-styles.css',

  // Optionally include your own script
  customJsPath: 'https://example.com/custom-arena-js.js',
});

The name and hostId keys are required in each queue object. Additional keys can be present to configure the Redis client itself.

The three ways in which you can configure the client are:

1. port/host

// In a queue object.
{
  // Hostname or IP. Required.
  "host": "127.0.0.1",

  // Bound port. Optional, default: 6379.
  "port": 6379,

  // Optional, to issue a redis AUTH command.
  "password": "hello",

  // Optional; default 0. Most of the time, you'll leave this absent.
  "db": 1
}

2. URL

You can also provide a url field instead of host, port, db and password.

{
  "url": "[redis:]//[[user][:password@]][host][:port][/db-number][?db=db-number[&password=bar[&option=value]]]"
}

3. Redis client options

Arena is compatible with both Bee and Bull. If you need to pass some specific configuration options directly to the redis client library your queue uses, you can also do so.

Bee uses the node-redis client; Bull uses the ioredis client. These clients expect different configuration options.

{
  "redis": {}
}

For Bee, the redis key will be directly passed to redis.createClient, as explained here.

For Bull, the redis key will be directly passed to ioredis, as explained here. To use this to connect to a Sentinel cluster, see here.

Custom configuration file

To specify a custom configuration file location, see Running Arena as a node module.

Note that if you happen to use Amazon Web Services' ElastiCache as your Redis host, check out http://mixmax.com/blog/bull-queue-aws-autodiscovery

Running Arena as a node module

See the Docker image section or the docker-arena repository for information about running this standalone.

Note that because Arena is implemented using async/await, Arena only currently supports Node >=7.6.

Using Arena as a node module has potential benefits:

  • Arena can be configured to use any method of server/queue configuration desired
    • for example, fetching available redis queues from an AWS instance on server start
    • or even just plain old reading from environment variables
  • Arena can be mounted in other express apps as middleware

Usage:

In project folder:

$ npm install bull-arena

In router.js:

const Arena = require('bull-arena');

const express = require('express');
const router = express.Router();

const arena = Arena({
  // Include a reference to the bee-queue or bull libraries, depending on the library being used.

  queues: [
    {
      // First queue configuration
    },
    {
      // Second queue configuration
    },
    {
      // And so on...
    },
  ],
});

router.use('/', arena);

Arena takes two arguments. The first, config, is a plain object containing the queue configuration, flow configuration (just for bullmq for now) and other optional parameters. The second, listenOpts, is an object that can contain the following optional parameters:

  • port - specify custom port to listen on (default: 4567)
  • host - specify custom ip to listen on (default: '0.0.0.0')
  • basePath - specify custom path to mount server on (default: '/')
  • disableListen - don't let the server listen (useful when mounting Arena as a sub-app of another Express app) (default: false)
  • useCdn - set false to use the bundled js and css files (default: true)
  • customCssPath - a URL to an external stylesheet (default: null)

Example config (for bull)

import Arena from 'bull-arena';
import Bull from 'bull';

const arenaConfig = Arena({
  Bull,
  queues: [
    {
      type: 'bull',

      // Name of the bull queue, this name must match up exactly with what you've defined in bull.
      name: "Notification_Emailer",

      // Hostname or queue prefix, you can put whatever you want.
      hostId: "MyAwesomeQueues",

      // Redis auth.
      redis: {
        port: /* Your redis port */,
        host: /* Your redis host domain*/,
        password: /* Your redis password */,
      },
    },
  ],

  // Optionally include your own stylesheet
  customCssPath: 'https://example.com/custom-arena-styles.css',

  // Optionally include your own script
  customJsPath: 'https://example.com/custom-arena-js.js',
},
{
  // Make the arena dashboard become available at {my-site.com}/arena.
  basePath: '/arena',

  // Let express handle the listening.
  disableListen: true,
});

// Make arena's resources (js/css deps) available at the base app route
app.use('/', arenaConfig);

(Credit to tim-soft for the example config.)

Example config (for bullmq)

import Arena from 'bull-arena';
import { Queue, FlowProducer } from "bullmq";

const arenaConfig = Arena({
  BullMQ: Queue,
  FlowBullMQ: FlowProducer,
  queues: [
    {
      type: 'bullmq',

      // Name of the bullmq queue, this name must match up exactly with what you've defined in bullmq.
      name: "testQueue",

      // Hostname or queue prefix, you can put whatever you want.
      hostId: "worker",

      // Redis auth.
      redis: {
        port: /* Your redis port */,
        host: /* Your redis host domain*/,
        password: /* Your redis password */,
      },
    },
  ],

  flows: [
    {
      type: 'bullmq',

      // Name of the bullmq flow connection, this name helps to identify different connections.
      name: "testConnection",

      // Hostname, you can put whatever you want.
      hostId: "Flow",

      // Redis auth.
      redis: {
        port: /* Your redis port */,
        host: /* Your redis host domain*/,
        password: /* Your redis password */,
      },
    },
  ],

  // Optionally include your own stylesheet
  customCssPath: 'https://example.com/custom-arena-styles.css',

  // Optionally include your own script
  customJsPath: 'https://example.com/custom-arena-js.js',
},
{
  // Make the arena dashboard become available at {my-site.com}/arena.
  basePath: '/arena',

  // Let express handle the listening.
  disableListen: true,
});

// Make arena's resources (js/css deps) available at the base app route
app.use('/', arenaConfig);

Bee Queue support

Arena is dual-compatible with Bull 3.x and Bee-Queue 1.x. To add a Bee queue to the Arena dashboard, include the type: 'bee' property with an individual queue's configuration object.

BullMQ Queue support

Arena has added preliminary support for BullMQ post 3.4.x version. To add a BullMQ queue to the Arena dashboard, include the type: 'bullmq' property with an individual queue's configuration object.

Docker image

You can docker pull Arena from Docker Hub.

Please see the docker-arena repository for details.

Contributing

See contributing guidelines and an example.

Download Details:

Author: Bee-queue
Source Code: https://github.com/bee-queue/arena 
License: MIT license

#javascript #dashboard #queue 

Lawrence Lesch

Fastq: Fast, in Memory Work Queue

fastq  

Fast, in memory work queue.

Benchmarks (1 million tasks):

  • setImmediate: 812ms
  • fastq: 854ms
  • async.queue: 1298ms
  • neoAsync.queue: 1249ms

Obtained on node 12.16.1, on a dedicated server.

If you need zero-overhead series function calls, check out fastseries. For zero-overhead parallel function calls, check out fastparallel.

Install

npm i fastq --save

Usage (callback API)

'use strict'

const queue = require('fastq')(worker, 1)

queue.push(42, function (err, result) {
  if (err) { throw err }
  console.log('the result is', result)
})

function worker (arg, cb) {
  cb(null, arg * 2)
}

Usage (promise API)

const queue = require('fastq').promise(worker, 1)

async function worker (arg) {
  return arg * 2
}

async function run () {
  const result = await queue.push(42)
  console.log('the result is', result)
}

run()

Setting "this"

'use strict'

const that = { hello: 'world' }
const queue = require('fastq')(that, worker, 1)

queue.push(42, function (err, result) {
  if (err) { throw err }
  console.log(this)
  console.log('the result is', result)
})

function worker (arg, cb) {
  console.log(this)
  cb(null, arg * 2)
}

Using with TypeScript (callback API)

'use strict'

import * as fastq from "fastq";
import type { queue, done } from "fastq";

type Task = {
  id: number
}

const q: queue<Task> = fastq(worker, 1)

q.push({ id: 42})

function worker (arg: Task, cb: done) {
  console.log(arg.id)
  cb(null)
}

Using with TypeScript (promise API)

'use strict'

import * as fastq from "fastq";
import type { queueAsPromised } from "fastq";

type Task = {
  id: number
}

const q: queueAsPromised<Task> = fastq.promise(asyncWorker, 1)

q.push({ id: 42}).catch((err) => console.error(err))

async function asyncWorker (arg: Task): Promise<void> {
  // No need for a try-catch block, fastq handles errors automatically
  console.log(arg.id)
}

API


fastqueue([that], worker, concurrency)

Creates a new queue.

Arguments:

  • that, optional context of the worker function.
  • worker, the worker function; it will be called with that as this, if that is specified.
  • concurrency, number of concurrent tasks that could be executed in parallel.

queue.push(task, done)

Add a task at the end of the queue. done(err, result) will be called when the task has been processed.


queue.unshift(task, done)

Add a task at the beginning of the queue. done(err, result) will be called when the task has been processed.


queue.pause()

Pause the processing of tasks. Tasks that are currently being processed are not stopped.


queue.resume()

Resume the processing of tasks.


queue.idle()

Returns false if there are tasks being processed or waiting to be processed, and true otherwise.


queue.length()

Returns the number of tasks waiting to be processed (in the queue).


queue.getQueue()

Returns all the tasks waiting to be processed (in the queue). Returns an empty array when there are no tasks.


queue.kill()

Removes all tasks waiting to be processed, and resets drain to an empty function.


queue.killAndDrain()

Same as kill, but the drain function will be called before it is reset to empty.


queue.error(handler)

Set a global error handler. handler(err, task) will be called each time a task is completed; err will be non-null if the task has thrown an error.


queue.concurrency

Property that returns the number of concurrent tasks that could be executed in parallel. It can be altered at runtime.


queue.drain

Function that will be called when the last item from the queue has been processed by a worker. It can be altered at runtime.


queue.empty

Function that will be called when the last item from the queue has been assigned to a worker. It can be altered at runtime.


queue.saturated

Function that will be called when the queue hits the concurrency limit. It can be altered at runtime.


fastqueue.promise([that], worker(arg), concurrency)

Creates a new queue with a Promise API. It also offers all the methods and properties of the object returned by fastqueue, with modified push and unshift methods.

Node v10+ is required to use the promisified version.

Arguments:

  • that, optional context of the worker function.
  • worker, the worker function; it will be called with that as this, if that is specified. It MUST return a Promise.
  • concurrency, number of concurrent tasks that could be executed in parallel.

queue.push(task) => Promise

Add a task at the end of the queue. The returned Promise will be fulfilled (rejected) when the task is completed successfully (unsuccessfully).

This promise could be ignored, as it will not lead to an 'unhandledRejection'.

queue.unshift(task) => Promise

Add a task at the beginning of the queue. The returned Promise will be fulfilled (rejected) when the task is completed successfully (unsuccessfully).

This promise could be ignored, as it will not lead to an 'unhandledRejection'.

queue.drained() => Promise

Wait for the queue to be drained. The returned Promise will be resolved when all tasks in the queue have been processed by a worker.

This promise could be ignored, as it will not lead to an 'unhandledRejection'.
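Putting the promise API together, a producer can await drained() to know when every pushed task has finished. A sketch, assuming fastq is installed:

```javascript
const fastq = require('fastq');

// Worker MUST return a Promise when used with the promise API.
async function worker(arg) {
  return arg * 2;
}

const q = fastq.promise(worker, 2); // process up to 2 tasks in parallel

async function run() {
  for (let i = 0; i < 10; i++) {
    // Fire-and-forget pushes: rejections here will not become an
    // 'unhandledRejection', but individual results are dropped too.
    q.push(i);
  }
  await q.drained(); // resolves once every queued task has been processed
  console.log('all tasks processed');
}

run();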

Download Details:

Author: Mcollina
Source Code: https://github.com/mcollina/fastq 
License: ISC license

#javascript #queue 
