If a long-running process is part of your application's workflow, rather than blocking the response, you should handle it in the background, outside the normal request/response flow.
Perhaps your web application requires users to submit a thumbnail (which will probably need to be re-sized) and confirm their email when they register. If your application processed the image and sent a confirmation email directly in the request handler, then the end user would have to wait unnecessarily for them both to finish processing before the page loads or updates. Instead, you'll want to pass these processes off to a task queue and let a separate worker process deal with it, so you can immediately send a response back to the client. The end user can then do other things on the client-side while the processing takes place. Your application is also free to respond to requests from other users and clients.
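The hand-off pattern itself can be sketched with nothing but the standard library. The snippet below is an illustration of the idea (a thread pool standing in for the task queue and worker), not of Celery itself; `resize_thumbnail` and `handle_register` are hypothetical names:

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
results = {}  # task_id -> Future, a stand-in for a result backend

def resize_thumbnail(image_name):
    time.sleep(0.1)  # stand-in for slow image processing
    return f"{image_name} resized"

def handle_register(image_name):
    # instead of blocking the response, hand the slow work off
    # to a background worker and respond immediately with an ID
    task_id = str(uuid.uuid4())
    results[task_id] = executor.submit(resize_thumbnail, image_name)
    return {"status": "accepted", "task_id": task_id}

response = handle_register("avatar.png")      # returns right away
print(response["status"])                     # → accepted
print(results[response["task_id"]].result())  # → avatar.png resized
```

The client gets the task ID back instantly and can check on the result later, which is exactly the flow Celery formalizes with a broker and worker processes.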
To achieve this, we'll walk you through the process of setting up and configuring Celery and Redis for handling long-running processes in a FastAPI app. We'll also use Docker and Docker Compose to tie everything together. Finally, we'll look at how to test the Celery tasks with unit and integration tests.
By the end of this tutorial, you will be able to:
Again, to improve user experience, long-running processes should be run outside the normal HTTP request/response flow, in a background process.
Examples:
As you're building out an app, try to distinguish tasks that should run during the request/response lifecycle, like CRUD operations, from those that should run in the background.
It's worth noting that you can leverage FastAPI's BackgroundTasks class, which comes directly from Starlette, to run tasks in the background.
For example:
from fastapi import BackgroundTasks
def send_email(email, message):
pass
@app.get("/")
async def ping(background_tasks: BackgroundTasks):
background_tasks.add_task(send_email, "email@address.com", "Hi!")
return {"message": "pong!"}
So, when should you use Celery instead of BackgroundTasks?

BackgroundTasks runs in the same event loop that serves your app's requests, so heavy background computation there can still degrade responsiveness; Celery, by contrast, runs tasks in separate worker processes.

Our goal is to develop a FastAPI application that works in conjunction with Celery to handle long-running processes outside the normal request/response cycle.
Clone down the base project from the fastapi-celery repo, and then check out the v1 tag to the master branch:
$ git clone https://github.com/testdrivenio/fastapi-celery --branch v1 --single-branch
$ cd fastapi-celery
$ git checkout v1 -b master
Since we'll need to manage three processes in total (FastAPI, Redis, Celery worker), we'll use Docker to simplify our workflow by wiring them up so that they can all be run from one terminal window with a single command.
From the project root, create the images and spin up the Docker containers:
$ docker-compose up -d --build
Once the build is complete, navigate to http://localhost:8004:
Make sure the tests pass as well:
$ docker-compose exec web python -m pytest
================================== test session starts ===================================
platform linux -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 1 item
tests/test_tasks.py . [100%]
=================================== 1 passed in 0.06s ====================================
Take a quick look at the project structure before moving on:
├── .gitignore
├── LICENSE
├── README.md
├── docker-compose.yml
└── project
├── Dockerfile
├── main.py
├── requirements.txt
├── static
│ ├── main.css
│ └── main.js
├── templates
│ ├── _base.html
│ ├── footer.html
│ └── home.html
└── tests
├── __init__.py
├── conftest.py
└── test_tasks.py
An onclick event handler in project/templates/home.html is set up that listens for a button click:
<div class="btn-group" role="group" aria-label="Basic example">
<button type="button" class="btn btn-primary" onclick="handleClick(1)">Short</button>
<button type="button" class="btn btn-primary" onclick="handleClick(2)">Medium</button>
<button type="button" class="btn btn-primary" onclick="handleClick(3)">Long</button>
</div>
onclick calls handleClick, found in project/static/main.js, which sends an AJAX POST request to the server with the appropriate task type: 1, 2, or 3.
function handleClick(type) {
fetch('/tasks', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ type: type }),
})
.then(response => response.json())
.then(res => getStatus(res.task_id));
}
On the server-side, a route is already configured to handle the request in project/main.py:
@app.post("/tasks", status_code=201)
def run_task(payload = Body(...)):
task_type = payload["type"]
return JSONResponse(task_type)
Now comes the fun part -- wiring up Celery!
Start by adding both Celery and Redis to the requirements.txt file:
aiofiles==0.6.0
celery==4.4.7
fastapi==0.64.0
Jinja2==2.11.3
pytest==6.2.4
redis==3.5.3
requests==2.25.1
uvicorn==0.13.4
This tutorial uses Celery v4.4.7 since Flower does not support Celery 5.
Celery uses a message broker -- RabbitMQ, Redis, or AWS Simple Queue Service (SQS) -- to facilitate communication between the Celery worker and the web application. Messages are added to the broker, which are then processed by the worker(s). Once done, the results are added to the backend.
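As a mental model, the broker/worker/backend flow can be sketched with plain Python data structures (a queue.Queue standing in for the broker and a dict for the result backend; this is purely illustrative, not how Celery is implemented):

```python
import queue
import threading

broker = queue.Queue()   # stands in for Redis as the message broker
backend = {}             # stands in for Redis as the result backend

def worker():
    # like a Celery worker: pull messages off the broker,
    # run them, and store the outcome in the result backend
    while True:
        task_id, func, args = broker.get()
        backend[task_id] = func(*args)
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

# the web app enqueues a message instead of doing the work itself
broker.put(("task-1", lambda x: x * 10, (3,)))
broker.join()             # wait here only for demonstration purposes
print(backend["task-1"])  # → 30
```

In the real setup, the queue and the results dict both live in Redis, so the web and worker containers can share them across process boundaries.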
Redis will be used as both the broker and backend. Add both Redis and a Celery worker to the docker-compose.yml file like so:
version: '3.8'
services:
web:
build: ./project
ports:
- 8004:8000
command: uvicorn main:app --host 0.0.0.0 --reload
volumes:
- ./project:/usr/src/app
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
- redis
worker:
build: ./project
command: celery worker --app=worker.celery --loglevel=info
volumes:
- ./project:/usr/src/app
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
- web
- redis
redis:
image: redis:6-alpine
Take note of celery worker --app=worker.celery --loglevel=info:

- celery worker is used to start a Celery worker
- --app=worker.celery runs the Celery Application (which we'll define shortly)
- --loglevel=info sets the logging level to info

Next, create a new file called worker.py in "project":
import os
import time
from celery import Celery
celery = Celery(__name__)
celery.conf.broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379")
celery.conf.result_backend = os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379")
@celery.task(name="create_task")
def create_task(task_type):
time.sleep(int(task_type) * 10)
return True
Here, we created a new Celery instance, and using the task decorator, we defined a new Celery task function called create_task.
Keep in mind that the task itself will be executed by the Celery worker.
Update the route handler to kick off the task and respond with the task ID:
@app.post("/tasks", status_code=201)
def run_task(payload = Body(...)):
task_type = payload["type"]
task = create_task.delay(int(task_type))
return JSONResponse({"task_id": task.id})
Don't forget to import the task:
from worker import create_task
Build the images and spin up the new containers:
$ docker-compose up -d --build
To trigger a new task, run:
$ curl http://localhost:8004/tasks -H "Content-Type: application/json" --data '{"type": 0}'
You should see something like:
{
"task_id": "14049663-6257-4a1f-81e5-563c714e90af"
}
Turn back to the handleClick function on the client-side:
function handleClick(type) {
fetch('/tasks', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ type: type }),
})
.then(response => response.json())
.then(res => getStatus(res.task_id));
}
When the response comes back from the original AJAX request, we then continue to call getStatus() with the task ID every second:
function getStatus(taskID) {
fetch(`/tasks/${taskID}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json'
},
})
.then(response => response.json())
.then(res => {
const html = `
<tr>
<td>${taskID}</td>
<td>${res.task_status}</td>
<td>${res.task_result}</td>
</tr>`;
const newRow = document.getElementById('tasks').insertRow(0);
newRow.innerHTML = html;
const taskStatus = res.task_status;
if (taskStatus === 'SUCCESS' || taskStatus === 'FAILURE') return false;
setTimeout(function() {
getStatus(res.task_id);
}, 1000);
})
.catch(err => console.log(err));
}
If the response is successful, a new row is added to the table on the DOM.
Update the get_status route handler to return the status:
@app.get("/tasks/{task_id}")
def get_status(task_id):
task_result = AsyncResult(task_id)
result = {
"task_id": task_id,
"task_status": task_result.status,
"task_result": task_result.result
}
return JSONResponse(result)
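The client-side polling loop above has a direct server-side analogue. A generic poll-until-done helper might look like the sketch below, where `fetch_status` is a hypothetical stub standing in for the GET /tasks/{task_id} call:

```python
import time

def poll(fetch_status, interval=0.01, max_attempts=50):
    # call fetch_status() until it reports a terminal Celery state
    for _ in range(max_attempts):
        status = fetch_status()
        if status in ("SUCCESS", "FAILURE"):
            return status
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")

# stub: pretends the task reaches SUCCESS on the third check
states = iter(["PENDING", "STARTED", "SUCCESS"])
print(poll(lambda: next(states)))  # → SUCCESS
```

In a real script, `fetch_status` would issue an HTTP GET to the endpoint above and read `task_status` from the JSON response.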
Import AsyncResult:
from celery.result import AsyncResult
Update the containers:
$ docker-compose up -d --build
Trigger a new task:
$ curl http://localhost:8004/tasks -H "Content-Type: application/json" --data '{"type": 1}'
Then, grab the task_id from the response and call the updated endpoint to view the status:
$ curl http://localhost:8004/tasks/f3ae36f1-58b8-4c2b-bf5b-739c80e9d7ff
{
"task_id": "455234e0-f0ea-4a39-bbe9-e3947e248503",
"task_result": true,
"task_status": "SUCCESS"
}
Test it out in the browser as well:
Update the worker service, in docker-compose.yml, so that Celery logs are dumped to a log file:
worker:
build: ./project
command: celery worker --app=worker.celery --loglevel=info --logfile=logs/celery.log
volumes:
- ./project:/usr/src/app
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
- web
- redis
Add a new directory to "project" called "logs". Then, add a new file called celery.log to that newly created directory.
Update:
$ docker-compose up -d --build
You should see the log file fill up locally since we set up a volume:
[2021-05-08 15:32:24,407: INFO/MainProcess] Connected to redis://redis:6379/0
[2021-05-08 15:32:24,415: INFO/MainProcess] mingle: searching for neighbors
[2021-05-08 15:32:25,434: INFO/MainProcess] mingle: all alone
[2021-05-08 15:32:25,448: INFO/MainProcess] celery@365a3b836a91 ready.
[2021-05-08 15:32:29,834: INFO/MainProcess] Received task: create_task[013df48c-4548-4a2b-9b22-7267da215361]
[2021-05-08 15:32:39,825: INFO/ForkPoolWorker-7] Task create_task[013df48c-4548-4a2b-9b22-7267da215361] succeeded in 10.02114040000015s: True
Flower is a lightweight, real-time, web-based monitoring tool for Celery. You can monitor currently running tasks, increase or decrease the worker pool, and view graphs and a number of statistics, to name a few features.
Add it to requirements.txt:
aiofiles==0.6.0
celery==4.4.7
fastapi==0.64.0
flower==0.9.7
Jinja2==2.11.3
pytest==6.2.4
redis==3.5.3
requests==2.25.1
uvicorn==0.13.4
Then, add a new service to docker-compose.yml:
dashboard:
build: ./project
command: flower --app=worker.celery --port=5555 --broker=redis://redis:6379/0
ports:
- 5556:5555
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
- web
- redis
- worker
Test it out:
$ docker-compose up -d --build
Navigate to http://localhost:5556 to view the dashboard. You should see one worker ready to go:
Kick off a few more tasks to fully test the dashboard:
Try adding a few more workers to see how that affects things:
$ docker-compose up -d --build --scale worker=3
Let's start with the most basic test:
def test_task():
assert create_task.run(1)
assert create_task.run(2)
assert create_task.run(3)
Add the above test case to project/tests/test_tasks.py, and then add the following import:
from worker import create_task
Run that test individually:
$ docker-compose exec web python -m pytest -k "test_task and not test_home"
It should take about one minute to run:
================================== test session starts ===================================
platform linux -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
plugins: celery-4.4.7
collected 2 items / 1 deselected / 1 selected
tests/test_tasks.py . [100%]
====================== 1 passed, 1 deselected in 60.05s (0:01:00) ========================
It's worth noting that in the above asserts, we used the .run method (rather than .delay) to run the task directly without a Celery worker.
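The distinction can be illustrated with a toy stand-in that mimics the shape of Celery's task API (this is not Celery; `ToyTask` is a hypothetical class for illustration only):

```python
class ToyTask:
    """Mimics the two Celery invocation styles used above."""
    def __init__(self, fn):
        self.fn = fn
        self.queue = []

    def run(self, *args):
        # executes the task body directly, in the current process
        return self.fn(*args)

    def delay(self, *args):
        # would hand a message to the broker; here we just record it
        self.queue.append(args)
        return {"enqueued": args}

create = ToyTask(lambda n: n * 2)
print(create.run(3))    # → 6 (ran inline, no worker needed)
print(create.delay(3))  # nothing executed yet, only enqueued
```

That is why .run is handy in fast unit tests, while .delay exercises the full broker/worker round trip.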
Want to mock the .run method to speed things up?
@patch("worker.create_task.run")
def test_mock_task(mock_run):
assert create_task.run(1)
create_task.run.assert_called_once_with(1)
assert create_task.run(2)
assert create_task.run.call_count == 2
assert create_task.run(3)
assert create_task.run.call_count == 3
Import:
from unittest.mock import patch
Test:
$ docker-compose exec web python -m pytest -k "test_mock_task"
================================== test session starts ===================================
platform linux -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
plugins: celery-4.4.7
collected 3 items / 2 deselected / 1 selected
tests/test_tasks.py . [100%]
============================ 1 passed, 2 deselected in 0.13s =============================
Much quicker!
How about a full integration test?
def test_task_status(test_app):
response = test_app.post(
"/tasks",
data=json.dumps({"type": 1})
)
content = response.json()
task_id = content["task_id"]
assert task_id
response = test_app.get(f"tasks/{task_id}")
content = response.json()
assert content == {"task_id": task_id, "task_status": "PENDING", "task_result": None}
assert response.status_code == 200
while content["task_status"] == "PENDING":
response = test_app.get(f"tasks/{task_id}")
content = response.json()
assert content == {"task_id": task_id, "task_status": "SUCCESS", "task_result": True}
Keep in mind that this test uses the same broker and backend used in development. You may want to instantiate a new Celery app for testing.
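One way to do that is a separate Celery app configured to run tasks eagerly in-process during tests. This is a config-fragment sketch relying on Celery's documented task_always_eager and task_eager_propagates settings:

```python
# test-only Celery app sketch: tasks execute inline, no broker round trip
from celery import Celery

test_celery = Celery(__name__)
test_celery.conf.update(
    task_always_eager=True,      # .delay() runs the task locally, synchronously
    task_eager_propagates=True,  # re-raise task exceptions in the test process
)
```

With eager mode enabled, integration tests no longer depend on the development Redis instance being up.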
Add the import:
import json
Ensure the test passes.
This has been a basic guide on how to configure Celery to run long-running tasks in a FastAPI app. You should let the queue handle any processes that could block or slow down the user-facing code.
Celery can also be used to execute repeatable tasks and break up complex, resource-intensive tasks so that the computational workload can be distributed across a number of machines to reduce (1) the time to completion and (2) the load on the machine handling client requests.
Grab the code from the repo.
Original article source at: https://testdriven.io/
Traditional web applications are synchronous by nature. The user interacts with the web interface presented in the browser, the browser makes requests back to the server based on that interaction, and the server responds to those requests with a new display for the user.

Today, things have changed: modern websites must handle requests from hundreds of thousands of visitors. When those requests involve interaction with a database or a web service, response times grow, and with thousands of visitors accessing the same resources, a website's performance can degrade significantly. This is where the asynchronous web comes to the rescue.
Here are some of the advantages you can gain by choosing asynchronicity:
In this tutorial, we'll explain how to overcome one of the common pitfalls encountered when building a web application: handling long-running tasks that limit the web server's ability to respond to new requests.

The simple solution is to run these long-running tasks in the background, asynchronously, within a separate thread or process, thereby freeing up the web server.

We'll leverage several components, including Redis, Flask, Celery, and SocketIO, to offload the execution of long-running tasks and, once they're done, send push notifications to the client indicating their status.
This tutorial does not cover asyncio, Python's built-in library for running code concurrently using coroutines.

Once the requirements are in place, the following components come into play:
Redis: an open source, advanced key-value store and an apt solution for building high-performance, scalable web applications. Three main characteristics set it apart:

- Redis holds its database entirely in memory, using the disk only for persistence.
- Compared to many key-value data stores, Redis offers a relatively rich set of data types.
Installing Redis is outside the scope of this tutorial; however, to install it on a Windows machine, we recommend following this quick guide.

If you're on Linux or macOS, running one of the commands below will set up Redis.
Ubuntu / Debian:
$ sudo apt-get install redis-server
macOS:
$ brew install redis
$ brew services start redis
Note: this tutorial uses Redis version 3.0.504.

Celery: one of the most popular background job managers in the Python world. It focuses on real-time operation but also supports scheduling. It's compatible with several message brokers, such as Redis and RabbitMQ, and can act as both producer and consumer.
Celery will be installed via the requirements.txt file.

Note: this tutorial uses Celery version 4.4.7.
In this tutorial, we'll take a scaffolding approach and walk through a series of different scenarios to understand the difference between synchronous and asynchronous communication, as well as the variations of asynchronicity.

All scenarios are presented within the Flask framework; however, most of them can easily be ported to other Python frameworks (Django, Pyramid).

If this tutorial has intrigued you and you want to dive into the code right away, visit this GitHub repository for the code used in this article.

Our application will consist of:
- app_sync.py: a program demonstrating synchronous communication.
- app_async1.py: a program demonstrating asynchronous service calls where the client may request feedback on the server-side process using a polling mechanism.
- app_async2.py: a program demonstrating asynchronous service calls with automatic feedback to the client.
- app_async3.py: a program demonstrating post-scheduled asynchronous service calls with automatic feedback to the client.

Let's dive into the setup. Of course, you'll need Python 3 installed on your system. I'll use a virtual environment to install the required libraries (and you definitely should too):
$ python -m venv async-venv
$ source async-venv/bin/activate
Create a file named requirements.txt and add the following lines to it:
Flask==1.1.2
Flask-SocketIO==5.0.1
Celery==4.4.7
redis==3.5.3
gevent==21.1.2
gevent-websocket==0.10.1
flower==0.9.7
Now install them:
$ pip install -r requirements.txt
By the time you finish this tutorial, your folder structure will look like this:

With that out of the way, let's start writing the actual code.

First, define the application's configuration parameters in config.py:
#config.py
#Application configuration File
################################
#Secret key that will be used by Flask for securely signing the session cookie
# and can be used for other security related needs
SECRET_KEY = 'SECRET_KEY'
#Map to REDIS Server Port
BROKER_URL = 'redis://localhost:6379'
#Minimum interval of wait time for our task
MIN_WAIT_TIME = 1
#Maximum interval of wait time for our task
MAX_WAIT_TIME = 20
Note: for brevity, we hardcoded these configuration parameters in config.py, but it's recommended to store such parameters in a separate file (for example, .env).

Next, create the project's initialization file, init.py:
#init.py
from flask import Flask
#Create a flask instance
app = Flask(__name__)
#Loads flask configurations from config.py
app.config.from_object("config")
app.secret_key = app.config['SECRET_KEY']
#Setup the Flask SocketIO integration (Required only for asynchronous scenarios)
from flask_socketio import SocketIO
socketio = SocketIO(app,logger=True,engineio_logger=True,message_queue=app.config['BROKER_URL'])
Before we get into coding, let's briefly talk about synchronous communication.

In synchronous communication, the caller requests a service and waits for it to complete. Only when it receives the result of that service does it continue its work. A timeout can be defined: if the service does not finish within the defined period, the call is considered failed and the caller moves on.

To understand how synchronous communication works, imagine you've been assigned a dedicated waiter. He takes your order, delivers it to the kitchen, and waits there while the chef prepares your dish. During this time, the waiter is doing nothing else.

The following figure illustrates a synchronous service call:

Synchronous communication is a good fit for sequential tasks, but when there are many concurrent tasks, the program may run out of threads, forcing new tasks to wait until a thread becomes available.
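As a tiny illustration of the blocking behaviour (plain Python, not Flask; `slow_service` is a hypothetical stand-in for any blocking call):

```python
import time

def slow_service(n):
    time.sleep(n)  # the caller is stuck here until the service finishes
    return f"done in {n}s"

start = time.perf_counter()
results = [slow_service(0.05) for _ in range(3)]  # three synchronous calls
elapsed = time.perf_counter() - start
print(results)
print(elapsed >= 0.15)  # → True: total time is the sum of all the calls
```

Each call must fully complete before the next one starts, which is exactly why many concurrent synchronous requests exhaust a server's thread pool.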
Now, let's get down to coding. Create a template that Flask can render (index.html) and include the following HTML code in it:
templates/index.html
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body class="container">
<div class="row">
<h5>Click to start a post scheduled asynchronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runPSATask">
<div>
<input id="duration" name="duration" placeholder="Enter duration in seconds. for example: 30" type="text">
<label for="duration">Duration</label>
</div>
<button style="height:50px;width:600px" type="submit" id="runTask">Run A Post Scheduled Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function(){
var namespace='/runPSATask';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
socket.emit('join_room');
});
socket.on('msg' , function(data) {
$("#Messages").prepend('<li>'+data.msg+'</li>');
});
socket.on('status', function(data) {
////alert('socket on status ='+ data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled",false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled",true);
$("#Messages").empty();
$.ajax({ type: "Post"
, url: '/runPSATask'
, data: $("#runTaskForm").serialize()
, success: function(data) {
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted and will execute in ' + data.duration + ' seconds. </li>');
}
});
e.preventDefault();
console.log('runPSATask complete');
});
</script>
</body>
</html>
This template includes:

- a runTask button that submits a task to the server via the /runSyncTask route.
- a div with id Messages, where messages returned from the server are placed.

Next, create a program called app_sync.py containing the Flask application, and define two routes within it:

- "/" renders the web page (index.html).
- "/runSyncTask" simulates a long-running task: it generates a random number of seconds between 1 and 20, then runs a loop that sleeps for one second per iteration.

#app_sync.py
from flask import render_template, jsonify
from random import randint
from init import app
import tasks
#Render the predefined template index.html
@app.route("/",methods=['GET'])
def index():
return render_template('index.html')
#Defining the route for running A Synchronous Task
@app.route("/runSyncTask",methods=['POST'])
def long_sync_task():
print("Running","/runSyncTask")
#Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'],app.config['MAX_WAIT_TIME'])
#Call the function long_sync_task included within tasks.py
task = tasks.long_sync_task(n=n)
#Return the random wait time generated
return jsonify({ 'waittime': n })
if __name__ == "__main__":
app.run(debug=True)
The core logic of all the tasks defined in this tutorial lives in the tasks.py program:
#tasks.py
import time
from celery import Celery
from celery.utils.log import get_task_logger
from flask_socketio import SocketIO
import config
# Setup the logger (compatible with celery version 4)
logger = get_task_logger(__name__)
# Setup the celery client
celery = Celery(__name__)
# Load celery configurations from celeryconfig.py
celery.config_from_object("celeryconfig")
# Setup and connect the socket instance to Redis Server
socketio = SocketIO(message_queue=config.BROKER_URL)
###############################################################################
def long_sync_task(n):
print(f"This task will take {n} seconds.")
for i in range(n):
print(f"i = {i}")
time.sleep(1)
###############################################################################
@celery.task(name = 'tasks.long_async_task')
def long_async_task(n,session):
print(f"The task of session {session} will take {n} seconds.")
for i in range(n):
print(f"i = {i}")
time.sleep(1)
###############################################################################
def send_message(event, namespace, room, message):
print("Message = ", message)
socketio.emit(event, {'msg': message}, namespace=namespace, room=room)
@celery.task(name = 'tasks.long_async_taskf')
def long_async_taskf(data):
room = data['sessionid']
namespace = data['namespase']
n = data['waittime']
#Send messages signaling the lifecycle of the task
send_message('status', namespace, room, 'Begin')
send_message('msg', namespace, room, 'Begin Task {}'.format(long_async_taskf.request.id))
send_message('msg', namespace, room, 'This task will take {} seconds'.format(n))
print(f"This task will take {n} seconds.")
for i in range(n):
msg = f"{i}"
send_message('msg', namespace, room, msg )
time.sleep(1)
send_message('msg', namespace, room, 'End Task {}'.format(long_async_taskf.request.id))
send_message('status', namespace, room, 'End')
###############################################################################
@celery.task(name = 'tasks.long_async_sch_task')
def long_async_sch_task(data):
room = data['sessionid']
namespace = data['namespase']
n = data['waittime']
send_message('status', namespace, room, 'Begin')
send_message('msg' , namespace, room, 'Begin Task {}'.format(long_async_sch_task.request.id))
send_message('msg' , namespace, room, 'This task will take {} seconds'.format(n))
print(f"This task will take {n} seconds.")
for i in range(n):
msg = f"{i}"
send_message('msg', namespace, room, msg )
time.sleep(1)
send_message('msg' , namespace, room, 'End Task {}'.format(long_async_sch_task.request.id))
send_message('status', namespace, room, 'End')
###############################################################################
In this section, we'll use only the long_sync_task() function, as a synchronous task.

Let's test the synchronous scenario by running the app_sync.py program:
$ python app_sync.py
Visit http://localhost:5000, where the Flask instance is running, and you'll see the following output:

Press the "Run A Synchronous Task" button and wait for the process to complete.

Once it's done, you'll see a message indicating the random duration assigned to the triggered task.

At the same time, as the server executes the task, the console displays a number incremented every second.
In this section, we'll demonstrate asynchronous service calls where the client may request feedback on the server-side process using a polling mechanism.

Simply put, asynchronous means that the program doesn't wait for a particular process to complete but carries on regardless.

The caller initiates the service call but does not wait for the result; it continues its work immediately without worrying about the outcome. If the caller is interested in the result, there are mechanisms for that, which we'll discuss later.

The simplest asynchronous message-exchange pattern is called fire-and-forget: a message is sent and no feedback is required. When feedback is required, the client may repeatedly request the result via a polling mechanism.

Polling is not recommended, as it can cause high network load. Still, it has the advantage that the service provider (the server) doesn't need to know anything about the client.
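One common way to soften polling's network cost is exponential backoff between attempts. A generic sketch, not tied to this app:

```python
def backoff_delays(base=1.0, factor=2.0, cap=30.0, attempts=6):
    # yields growing wait times between polls: 1, 2, 4, 8, 16, 30 (capped)
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays()))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

A polling client would sleep for each yielded delay between status requests, sharply reducing the number of calls for long-running tasks.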
The following figure illustrates the scenario:

Asynchronous communication is a good fit for code that must respond to events, for example time-consuming I/O-bound operations that involve waiting.

Choosing asynchronicity allows the system to handle more requests concurrently, increasing throughput.
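The throughput gain is easy to demonstrate for I/O-bound waits. In this standard-library sketch, threads stand in for asynchronous handling of simultaneous requests:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound_request(i):
    time.sleep(0.05)  # simulated I/O wait (database, web service, ...)
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(io_bound_request, range(5)))
elapsed = time.perf_counter() - start

print(results)         # → [0, 1, 2, 3, 4]
print(elapsed < 0.25)  # → True: the waits overlap instead of summing to 5 × 0.05
```

Five requests finish in roughly the time of one, because the threads spend their waits concurrently rather than one after another.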
Now, let's move on to coding. Define Celery's initialization parameters in a configuration file, celeryconfig.py:
#celeryconfig.py
#Celery Configuration parameters
#Map to Redis server
broker_url = 'redis://localhost:6379/0'
#Backend used to store the tasks results
result_backend = 'redis://localhost:6379/0'
#A string identifying the default serialization to use Default json
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
#When set to false the local system timezone is used.
enable_utc = False
#To track the started state of a task, we should explicitly enable it
task_track_started = True
#Configure Celery to use a specific time zone.
#The timezone value can be any time zone supported by the pytz library
#timezone = 'Asia/Beirut'
#enable_utc = True
Create a template that Flask can render (index1.html):
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body class="container">
<div class="row">
<h4>Click to start an asynchronous task</h4>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runAsyncTask">
<button style="height:50px;width:400px" type="submit" id="runTask">Run An Asynchronous Task</button>
</form>
<form method='post' id="getTaskResultForm" action="/getAsyncTaskResult">
<button style="height:50px;width:400px" type="submit" id="getTaskResult">Get Asynchronous Task Result</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled",true);
$("*").css("cursor","wait");
$("#Messages").empty();
$.ajax({ type: "Post"
, url: '/runAsyncTask'
, data: $("#runTaskForm").serialize()
, success: function(data) {
$("#runTask").attr("disabled",false);
$("*").css("cursor","");
$("#Messages").append('The task ' + data.taskid + ' will be executed in asynchronous manner for ' + data.waittime + ' seconds...');
}
});
e.preventDefault();
console.log('runAsyncTask complete');
});
$("#getTaskResult").click(function(e) {
var msg = $("#Messages").text();
var taskid = msg.match("task(.*)will");
//Get The Task ID from The Messages div and create a Target URL
var vurl = '/getAsyncTaskResult?taskid=' + jQuery.trim(taskid[1]);
$.ajax({ type: "Post"
, url: vurl
, data: $("#getTaskResultForm").serialize()
, success: function(data) {
$("*").css("cursor","");
$("#Messages").append('<p> The Status of the task = ' + data.taskid + ' is ' + data.taskstatus + '</p>');
}
});
e.preventDefault();
console.log('getAsyncTaskResult complete');
});
</script>
</body>
</html>
Next, create the app_async1.py program containing the Flask app:
#app_async1.py
from flask import render_template, jsonify, session,request
from random import randint
import uuid
import tasks
from init import app
from celery.result import AsyncResult
@app.route("/",methods=['GET'])
def index():
# create a unique ID to assign for the asynchronous task
if 'uid' not in session:
sid = str(uuid.uuid4())
session['uid'] = sid
print("Session ID stored =", sid)
return render_template('index1.html')
#Run an Asynchronous Task
@app.route("/runAsyncTask",methods=['POST'])
def long_async_task():
print("Running", "/runAsyncTask")
#Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'],app.config['MAX_WAIT_TIME'])
sid = str(session['uid'])
task = tasks.long_async_task.delay(n=n,session=sid)
#print('taskid',task.id,'sessionid',sid,'waittime',n )
return jsonify({'taskid':task.id,'sessionid':sid,'waittime':n })
#Get The Result of The Asynchronous Task
@app.route('/getAsyncTaskResult', methods=['GET', 'POST'])
def result():
task_id = request.args.get('taskid')
# grab the AsyncResult
result = AsyncResult(task_id)
# print the task id
print("Task ID = ", result.task_id)
# print the Asynchronous result status
print("Task Status = ", result.status)
return jsonify({'taskid': result.task_id, 'taskstatus': result.status})
if __name__ == "__main__":
app.run(debug=True)
This program has three main routes:

- "/": renders the web page (index1.html).
- "/runAsyncTask": invokes an asynchronous task that generates a random number of seconds between 1 and 20, then runs a loop that sleeps for one second per iteration.
- "/getAsyncTaskResult": collects the state of the task based on the received task ID.

Note: this scenario does not include the SocketIO component.
Let's test this scenario by following the steps below:

- Make sure a Redis instance is running on TCP port 6379 (on Windows, run redis-server.exe; on Linux/macOS, the default installation already listens on 6379).
- Start a Celery worker. On Windows:

$ async-venv\Scripts\celery.exe worker -A tasks --loglevel=DEBUG --concurrency=1 -P solo -f celery.logs

On Linux/macOS, it's very similar:

$ async-venv/bin/celery worker -A tasks --loglevel=DEBUG --concurrency=1 -P solo -f celery.logs

Note that async-venv is the name of our virtual environment; if you named yours differently, be sure to substitute your own name. Once Celery starts, you'll see the following output:
Make sure the tasks defined in tasks.py are reflected in Celery.

Then run the Flask program:

$ python app_async1.py
Next, open a browser and go to http://localhost:5000.

Pressing the "Run An Asynchronous Task" button queues a new task, which is executed right away. In the "Messages" section, you'll see a message showing the task's ID and its execution time.

Pressing the "Get Asynchronous Task Result" button (repeatedly) collects the task's state at that particular moment:
- PENDING: waiting for execution.
- STARTED: the task has been started.
- SUCCESS: the task executed successfully.
- FAILURE: the task execution raised an exception.
- RETRY: the task is being retried.
- REVOKED: the task has been revoked.

If you check the Celery worker's logs in the log file celery.logs, you'll notice the task's lifecycle.
Building on the previous scenario, and to reduce the annoyance of issuing multiple requests just to collect the task's state, we'll try incorporating socket technology that lets the server continuously update the client on the task's state.

Indeed, the Socket.IO engine enables real-time, bidirectional, event-based communication.

The main benefit this brings is a lighter network load and greater efficiency in propagating information to a huge number of clients.

The following figure illustrates the scenario:

Before digging further, let's briefly describe the steps we'll perform.

To be able to send messages back from Celery to the web browser, we'll leverage:

To manage data connections effectively, we'll adopt the following compartmentalization strategy:
- We'll assign the namespace "/runAsyncTaskF" to this scenario. (Namespaces are used to separate server logic over a single shared connection.)

Now, let's move on to coding. Create a template that Flask can render (index2.html):

<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body class="container">
<div class="row">
<h5>Click to start an asynchronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runAsyncTask">
<button style="height:50px;width:400px" type="submit" id="runTask">Run An Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function() {
var namespace = '/runAsyncTaskF';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
////alert('socket on connect');
socket.emit('join_room');
});
socket.on('msg', function(data) {
////alert('socket on msg ='+ data.msg);
$("#Messages").prepend('<li>' + data.msg + '</li>');
});
socket.on('status', function(data) {
////alert('socket on status ='+ data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled", false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled", true);
$("*").css("cursor", "wait");
$("#Messages").empty();
$.ajax({
type: "Post",
url: '/runAsyncTaskF',
data: $("#runTaskForm").serialize(),
success: function(data) {
$("*").css("cursor", "");
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted. </li>');
}
});
e.preventDefault();
console.log('runAsyncTaskF complete');
});
</script>
</body>
</html>
Next, create a program called app_async2.py that contains the Flask application:
#Gevent is a coroutine based concurrency library for Python
from gevent import monkey
#For dynamic modifications of a class or module
monkey.patch_all()
from flask import render_template, jsonify, session, request
from random import randint
import uuid
import tasks
from init import app, socketio
from flask_socketio import join_room
@app.route("/",methods=['GET'])
def index():
# create a unique session ID and store it within the Flask session
if 'uid' not in session:
sid = str(uuid.uuid4())
session['uid'] = sid
print("Session ID stored =", sid)
return render_template('index2.html')
#Run an Asynchronous Task With Automatic Feedback
@app.route("/runAsyncTaskF",methods=['POST'])
def long_async_taskf():
print("Running", "/runAsyncTaskF")
# Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'], app.config['MAX_WAIT_TIME'])
data = {}
data['sessionid'] = str(session['uid'])
data['waittime'] = n
data['namespase'] = '/runAsyncTaskF'
task = tasks.long_async_taskf.delay(data)
return jsonify({ 'taskid':task.id
,'sessionid':data['sessionid']
,'waittime':data['waittime']
,'namespace':data['namespase']
})
@socketio.on('connect', namespace='/runAsyncTaskF')
def socket_connect():
#Display message upon connecting to the namespace
print('Client Connected To NameSpace /runAsyncTaskF - ',request.sid)
@socketio.on('disconnect', namespace='/runAsyncTaskF')
def socket_disconnect():
# Display message upon disconnecting from the namespace
print('Client disconnected From NameSpace /runAsyncTaskF - ',request.sid)
@socketio.on('join_room', namespace='/runAsyncTaskF')
def on_room():
room = str(session['uid'])
# Display message upon joining a room specific to the session previously stored.
print(f"Socket joining room {room}")
join_room(room)
@socketio.on_error_default
def error_handler(e):
# Display message on error.
print(f"socket error: {e}, {str(request.event)}")
if __name__ == "__main__":
# Run the application with socketio integration.
socketio.run(app,debug=True)
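The session-to-room bookkeeping in app_async2.py (a uid stored in the Flask session and later passed to join_room) can be sketched independently of Flask. Here, the `session_store` dict stands in for Flask's `session` object; the function name is illustrative:

```python
import uuid

def get_or_create_room(session_store):
    """Return a stable per-session room id, creating one on first use.

    Mirrors the index() logic above: the first request stores a fresh
    uuid under 'uid'; later requests (and the join_room handler) reuse
    it, so server-side emits can target exactly one browser session.
    """
    if "uid" not in session_store:
        session_store["uid"] = str(uuid.uuid4())
    return session_store["uid"]
```

Because the room name is derived from the session, an emit targeted at that room reaches only the client that launched the task.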
This program has two main routes:

- "/": renders the web page (index2.html).
- "/runAsyncTaskF": invokes the asynchronous task long_async_taskf(), which in turn calls the corresponding task in tasks.py.

To run this scenario, start app_async2.py, open a browser at the link, and press the button; you'll gradually see output like the following.
At the same time, the console shows the following output.
You can also check the celery.logs file at any time to follow the task's lifecycle.
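To make the hand-off concrete, here is a sketch of the payload the /runAsyncTaskF route assembles before calling tasks.long_async_taskf.delay(data). The field names are taken from app_async2.py (including its 'namespase' spelling); the function itself is illustrative:

```python
from random import randint

def build_task_payload(session_uid, min_wait, max_wait, namespace):
    """Assemble the dict handed to the Celery task, as the route does."""
    return {
        "sessionid": str(session_uid),
        "waittime": randint(min_wait, max_wait),  # random simulated work time
        "namespase": namespace,  # spelling preserved from the source code
    }
```

The worker receives this dict as its only argument, so it knows which Socket.IO namespace and session room to report back to.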
This scenario is similar to Scenario 3. The only difference is that instead of running the asynchronous task immediately, the task is scheduled to run after a specific delay supplied by the client.
Let's move on to the coding. Create a new template, index3.html, with a "Duration" field representing the time, in seconds, to wait before running the asynchronous task:
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body class="container">
<div class="row">
<h5>Click to start a post-scheduled asynchronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runPSATask">
<div>
<input id="duration" name="duration" placeholder="Enter duration in seconds. for example: 30" type="text">
<label for="duration">Duration</label>
</div>
<button style="height:50px;width:600px" type="submit" id="runTask">Run A Post Scheduled Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function() {
var namespace = '/runPSATask';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
socket.emit('join_room');
});
socket.on('msg', function(data) {
$("#Messages").prepend('<li>' + data.msg + '</li>');
});
socket.on('status', function(data) {
////alert('socket on status ='+ data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled", false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled", true);
$("#Messages").empty();
$.ajax({
type: "Post",
url: '/runPSATask',
data: $("#runTaskForm").serialize(),
success: function(data) {
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted and will execute in ' + data.duration + ' seconds. </li>');
}
});
e.preventDefault();
console.log('runPSATask complete');
});
</script>
</body>
</html>
Next, here is app_async3.py, the Flask app for this scenario:
#app_async3.py
from gevent import monkey
monkey.patch_all()
from flask import render_template, jsonify, session, request
from random import randint
import uuid
import tasks
from init import app, socketio
from flask_socketio import join_room
@app.route("/",methods=['GET'])
def index():
# create a unique session ID
if 'uid' not in session:
sid = str(uuid.uuid4())
session['uid'] = sid
print("Session ID stored =", sid)
return render_template('index3.html')
#Run a Post Scheduled Asynchronous Task With Automatic Feedback
@app.route("/runPSATask",methods=['POST'])
def long_async_sch_task():
print("Running", "/runPSATask")
# Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'], app.config['MAX_WAIT_TIME'])
data = {}
data['sessionid'] = str(session['uid'])
data['waittime'] = n
data['namespase'] = '/runPSATask'
data['duration'] = int(request.form['duration'])
#Countdown represents the duration to wait in seconds before running the task
task = tasks.long_async_sch_task.apply_async(args=[data],countdown=data['duration'])
return jsonify({ 'taskid':task.id
,'sessionid':data['sessionid']
,'waittime': data['waittime']
,'namespace':data['namespase']
,'duration':data['duration']
})
@socketio.on('connect', namespace='/runPSATask')
def socket_connect():
print('Client Connected To NameSpace /runPSATask - ',request.sid)
@socketio.on('disconnect', namespace='/runPSATask')
def socket_disconnect():
print('Client disconnected From NameSpace /runPSATask - ',request.sid)
@socketio.on('join_room', namespace='/runPSATask')
def on_room():
room = str(session['uid'])
print(f"Socket joining room {room}")
join_room(room)
@socketio.on_error_default
def error_handler(e):
print(f"socket error: {e}, {str(request.event)}")
if __name__ == "__main__":
socketio.run(app,debug=True)
Note that this time we're using the long_async_sch_task() task method from tasks.py.
Run app_async3.py as before and open a browser.
Enter a duration (e.g., 10) and press the button to create a post-scheduled asynchronous task. Once it's created, a message showing the task's details appears in the Messages box.
After waiting the amount of time specified in the Duration field, you'll see the task execute.
Again, if you review the Celery worker logs contained in the celery.logs log file, you'll notice the task's lifecycle.
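The countdown argument passed to apply_async tells Celery how many seconds to wait before executing the task; per the Celery docs, it is a shorthand for computing an absolute ETA. A simplified sketch (real Celery also handles time zones and message serialization):

```python
from datetime import datetime, timedelta, timezone

def eta_from_countdown(countdown_seconds):
    """Translate a countdown (seconds from now) into an absolute ETA."""
    return datetime.now(timezone.utc) + timedelta(seconds=countdown_seconds)

# apply_async(args=[data], countdown=10) behaves roughly like
# apply_async(args=[data], eta=eta_from_countdown(10))
```

Either form tells the worker not to start the task before that moment; countdown is simply the more convenient spelling when the client supplies a relative delay, as the Duration field does here.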
To better monitor Celery tasks, you can install Flower, a web-based tool for monitoring and managing Celery clusters.
Note: the Flower library was part of requirements.txt.
To view the Celery tasks using Flower, follow these steps.
For Windows:
$ async-venv\Scripts\flower.exe worker -A tasks --port=5555
For Linux/macOS:
$ async-venv/bin/flower worker -A tasks --port=5555
You'll see the following information in the console.
Go back to the app and run a task, then open a browser at
http://localhost:5555
and navigate to the Tasks tab.
Once the task completes, the Flower dashboard displays it as follows.
I hope this article helped you build a conceptual foundation for synchronous and asynchronous requests with the help of Celery. Synchronous requests can be slow while asynchronous requests execute quickly, but recognizing the right method for each scenario is what matters. Sometimes, they even work together.
Link: https://www.thepythoncode.com/article/async-tasks-with-celery-redis-and-flask-in-python
With more of us using smartphones, the popularity of mobile applications has exploded. In the digital era, the number of people looking for products and services online is growing rapidly. Smartphone owners look for mobile applications that give them quick access to companies’ products and services. As a result, mobile apps provide customers with a lot of benefits in just one device.
Likewise, companies use mobile apps to increase customer loyalty and improve their services. Mobile Developers are in high demand as companies use apps not only to create brand awareness but also to gather information. For that reason, mobile apps are used as tools to collect valuable data from customers to help companies improve their offer.
There are many types of mobile applications, each with its own advantages. For example, native apps perform better, while web apps don’t need to be customized for the platform or operating system (OS). Likewise, hybrid apps provide users with comfortable user experience. However, you may be wondering how long it takes to develop an app.
To give you an idea of how long the app development process takes, here’s a short guide.
_Average time spent: two to five weeks_
This is the initial stage and a crucial step in setting the project in the right direction. In this stage, you brainstorm ideas and select the best one. Apart from that, you’ll need to do some research to see if your idea is viable. Remember that coming up with an idea is easy; the hard part is to make it a reality.
All your ideas may seem viable, but you still have to run some tests to keep it as real as possible. For that reason, when Web Developers are building a web app, they analyze the available ideas to see which one is the best match for the targeted audience.
Targeting the right audience is crucial when you are developing an app. It saves time when shaping the app in the right direction as you have a clear set of objectives. Likewise, analyzing how the app affects the market is essential. During the research process, App Developers must gather information about potential competitors and threats. This helps the app owners develop strategies to tackle difficulties that come up after the launch.
The research process can take several weeks, but it determines how successful your app can be. For that reason, you must take your time to know all the weaknesses and strengths of the competitors, possible app strategies, and targeted audience.
The outcomes of this stage are app prototypes and the minimum viable product (MVP).
The electric scooter revolution has caught on super-fast taking many cities across the globe by storm. eScooters, a renovated version of old-school scooters now turned into electric vehicles are an environmentally friendly solution to current on-demand commute problems. They work on engines, like cars, enabling short traveling distances without hassle. The result is that these groundbreaking electric machines can now provide faster transport for less — cheaper than Uber and faster than Metro.
Since they are durable, fast, easy to operate and maintain, and are more convenient to park compared to four-wheelers, the eScooters trend has and continues to spike interest as a promising growth area. Several companies and universities are increasingly setting up shop to provide eScooter services realizing a would-be profitable business model and a ready customer base that is university students or residents in need of faster and cheap travel going about their business in school, town, and other surrounding areas.
In many countries including the U.S., Canada, Mexico, U.K., Germany, France, China, Japan, India, Brazil and Mexico and more, a growing number of eScooter users both locals and tourists can now be seen effortlessly passing lines of drivers stuck in the endless and unmoving traffic.
A recent report by McKinsey revealed that the e-scooter industry will be worth $200 billion to $300 billion in the United States, $100 billion to $150 billion in Europe, and $30 billion to $50 billion in China in 2030. The e-scooter revenue model will also spike and is projected to rise by more than 20%, amounting to approximately $5 billion.
And, with a necessity to move people away from high carbon prints, traffic and congestion issues brought about by car-centric transport systems in cities, more and more city planners are developing more bike/scooter lanes and adopting zero-emission plans. This is the force behind the booming electric scooter market and the numbers will only go higher and higher.
Companies that have taken advantage of the growing eScooter trend develop an app that allows them to provide efficient eScooter services. Such an app enables them to locate bike pick-up and drop-off points through fully integrated Google Maps.
It’s clear that e-scooters will increasingly become more common, and the e-scooter business model will continue to grab the attention of manufacturers, investors, and entrepreneurs. All this should go ahead with a quest to know some of the best electric bikes on the market, especially for anyone who wants to get started in the electric bike/scooter rental business.
We have done a comprehensive list of the best electric bikes! Each bike has been reviewed in depth and includes a full list of specs and a photo.
https://www.kickstarter.com/projects/enkicycles/billy-were-redefining-joyrides
To start us off is the Billy eBike, a powerful go-anywhere urban electric bike that’s specially designed to offer an exciting ride like no other, whether you want to ride to the grocery store, cafe, work or school. The Billy eBike comes in 4 color options – Billy Blue, Polished Aluminium, Arctic White, and Stealth Black.
Price: $2490
Available countries
Available in the USA, Europe, Asia, South Africa, and Australia. This item ships from the USA. Buyers are therefore responsible for any taxes and/or customs duties incurred once it arrives in their country.
Features
Specifications
Why Should You Buy This?
**Who Should Ride Billy?**
Both new and experienced riders
**Where to Buy?** Local distributors, or ships from the USA.
Featuring a sleek and lightweight aluminum frame design, the 200-Series ebike takes your riding experience to greater heights. Available in both black and white this ebike comes with a connected app, which allows you to plan activities, map distances and routes while also allowing connections with fellow riders.
Price: $2099.00
Available countries
The Genze 200 series e-Bike is available at GenZe retail locations across the U.S or online via GenZe.com website. Customers from outside the US can ship the product while incurring the relevant charges.
Features
Specifications
https://ebikestore.com/shop/norco-vlt-s2/
The Norco VLT S2 is a front suspension e-Bike with solid components alongside the reliable Bosch Performance Line Power systems that offer precise pedal assistance during any riding situation.
Price: $2,699.00
Available countries
This item is available via the various Norco bikes international distributors.
Features
Specifications
http://www.bodoevs.com/bodoev/products_show.asp?product_id=13
Manufactured by Bodo Vehicle Group Limited, the Bodo EV is specially designed for strong power and extraordinarily long service life to facilitate amazing rides. Bodo Vehicle Company is a leading electric-vehicle brand in China and across the globe. Their Bodo EV will no doubt provide your riders with a high level of riding satisfaction owing to its high-quality design, strength, braking stability, and speed.
Price: $799
Available countries
This item ships from China with buyers bearing the shipping costs and other variables prior to delivery.
Features
Specifications
Are you leading an organization that has a large campus, e.g., a large university? You are probably thinking of introducing an electric scooter/bicycle fleet on the campus, and why wouldn’t you?
Introducing micro-mobility in your campus with the help of such a fleet would help the people on the campus significantly. People would save money since they don’t need to use a car for a short distance. Your campus will see a drastic reduction in congestion, moreover, its carbon footprint will reduce.
Micro-mobility is relatively new though and you would need help. You would need to select an appropriate fleet of vehicles. The people on your campus would need to find electric scooters or electric bikes for commuting, and you need to provide a solution for this.
To be more specific, you need a short-term electric bike rental app. With such an app, you will be able to easily offer micro-mobility to the people on the campus. We at Devathon have built Autorent exactly for this.
What does Autorent do and how can it help you? How does it enable you to introduce micro-mobility on your campus? We explain these in this article, however, we will touch upon a few basics first.
You are probably thinking about micro-mobility relatively recently, aren’t you? A few relevant insights about it could help you to better appreciate its importance.
Micro-mobility is a new trend in transportation, and it uses vehicles that are considerably smaller than cars. Electric scooters (e-scooters) and electric bikes (e-bikes) are the most popular forms of micro-mobility, however, there are also e-unicycles and e-skateboards.
You might have already seen e-scooters, which are kick scooters that come with a motor. Thanks to its motor, an e-scooter can achieve a speed of up to 20 km/h. On the other hand, e-bikes are popular in China and Japan, and they come with a motor, and you can reach a speed of 40 km/h.
You obviously can’t use these vehicles for very long commutes, however, what if you need to travel a short distance? Even if you have a reasonable public transport facility in the city, it might not cover the route you need to take. Take the example of a large university campus. Such a campus is often at a considerable distance from the central business district of the city where it’s located. While public transport facilities may serve the central business district, they wouldn’t serve this large campus. Currently, many people drive their cars even for short distances.
As you know, that brings its own set of challenges. Vehicular traffic adds significantly to pollution, moreover, finding a parking spot can be hard in crowded urban districts.
Well, you can reduce your carbon footprint if you use an electric car. However, electric cars are still new, and many countries are still building the necessary infrastructure for them. Your large campus might not have the necessary infrastructure for them either. Presently, electric cars don’t represent a viable option in most geographies.
As a result, you need to buy and maintain a car even if your commute is short. In addition to dealing with parking problems, you need to spend significantly on your car.
All of these factors have combined to make people sit up and think seriously about cars. Many people are now seriously considering whether a car is really the best option even if they have to commute only a short distance.
This is where micro-mobility enters the picture. When you commute a short distance regularly, e-scooters or e-bikes are viable options. You limit your carbon footprint and you cut costs!
Businesses have seen this shift in thinking, and e-scooter companies like Lime and Bird have entered this field in a big way. They let you rent e-scooters by the minute. On the other hand, start-ups like Jump and Lyft have entered the e-bike market.
Think of your campus now! The people there might need to travel short distances within the campus, and e-scooters can really help them.
What advantages can you get from micro-mobility? Let’s take a deeper look into this question.
Micro-mobility can offer several advantages to the people on your campus, e.g.:
Background Fetch is a very simple plugin which attempts to awaken an app in the background about every 15 minutes, providing a short period of background running-time. This plugin will execute your provided callbackFn
whenever a background-fetch event occurs.
There is no way to increase the rate at which a fetch-event occurs, and this plugin sets the rate to the most frequent possible — you will never receive an event faster than every 15 minutes. The operating system will automatically throttle the rate at which background-fetch events occur based upon usage patterns. E.g., if the user hasn't turned on their phone for a long period of time, fetch events will occur less frequently; if an iOS user disables background refresh, they may not happen at all.
:new: Background Fetch now provides a scheduleTask
method for scheduling arbitrary "one-shot" or periodic tasks.

iOS notes:
- scheduleTask seems only to fire when the device is plugged into power.
- There is no such thing as stopOnTerminate: false for iOS.

Android provides a Headless mechanism (@config enableHeadless) for continuing to handle events after app termination.

⚠️ If you have a previous version of react-native-background-fetch < 2.7.0 installed into react-native >= 0.60, you should first unlink your previous version, as react-native link is no longer required:
$ react-native unlink react-native-background-fetch
With yarn:
$ yarn add react-native-background-fetch
With npm:
$ npm install --save react-native-background-fetch
Then follow the platform setup instructions for react-native >= 0.60 (iOS and Android).
ℹ️ This repo contains its own Example App. See /example
import React from 'react';
import {
SafeAreaView,
StyleSheet,
ScrollView,
View,
Text,
FlatList,
StatusBar,
} from 'react-native';
import {
Header,
Colors
} from 'react-native/Libraries/NewAppScreen';
import BackgroundFetch from "react-native-background-fetch";
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
events: []
};
}
componentDidMount() {
// Initialize BackgroundFetch ONLY ONCE when component mounts.
this.initBackgroundFetch();
}
async initBackgroundFetch() {
// BackgroundFetch event handler.
const onEvent = async (taskId) => {
console.log('[BackgroundFetch] task: ', taskId);
// Do your background work...
await this.addEvent(taskId);
// IMPORTANT: You must signal to the OS that your task is complete.
BackgroundFetch.finish(taskId);
}
// Timeout callback is executed when your Task has exceeded its allowed running-time.
// You must stop what you're doing immediately BackgroundFetch.finish(taskId)
const onTimeout = async (taskId) => {
console.warn('[BackgroundFetch] TIMEOUT task: ', taskId);
BackgroundFetch.finish(taskId);
}
// Initialize BackgroundFetch only once when component mounts.
let status = await BackgroundFetch.configure({minimumFetchInterval: 15}, onEvent, onTimeout);
console.log('[BackgroundFetch] configure status: ', status);
}
// Add a BackgroundFetch event to <FlatList>
addEvent(taskId) {
// Simulate a possibly long-running asynchronous task with a Promise.
return new Promise((resolve, reject) => {
this.setState(state => ({
events: [...state.events, {
taskId: taskId,
timestamp: (new Date()).toString()
}]
}));
resolve();
});
}
render() {
return (
<>
<StatusBar barStyle="dark-content" />
<SafeAreaView>
<ScrollView
contentInsetAdjustmentBehavior="automatic"
style={styles.scrollView}>
<Header />
<View style={styles.body}>
<View style={styles.sectionContainer}>
<Text style={styles.sectionTitle}>BackgroundFetch Demo</Text>
</View>
</View>
</ScrollView>
<View style={styles.sectionContainer}>
<FlatList
data={this.state.events}
renderItem={({item}) => (<Text>[{item.taskId}]: {item.timestamp}</Text>)}
keyExtractor={item => item.timestamp}
/>
</View>
</SafeAreaView>
</>
);
}
}
const styles = StyleSheet.create({
scrollView: {
backgroundColor: Colors.lighter,
},
body: {
backgroundColor: Colors.white,
},
sectionContainer: {
marginTop: 32,
paddingHorizontal: 24,
},
sectionTitle: {
fontSize: 24,
fontWeight: '600',
color: Colors.black,
},
sectionDescription: {
marginTop: 8,
fontSize: 18,
fontWeight: '400',
color: Colors.dark,
},
});
export default App;
In addition to the default background-fetch task defined by BackgroundFetch.configure
, you may also execute your own arbitrary "oneshot" or periodic tasks (iOS requires additional Setup Instructions). However, all events will be fired into the Callback provided to BackgroundFetch#configure
:
ℹ️ iOS:
- scheduleTask on iOS seems only to run when the device is plugged into power.
- Tasks registered via scheduleTask on iOS are designed for low-priority work, such as purging cache files — they tend to be unreliable for mission-critical tasks. scheduleTask will never run as frequently as you want.
- The default fetch event is much more reliable and fires far more often.
- Tasks registered via scheduleTask on iOS stop when the user terminates the app. There is no such thing as stopOnTerminate: false for iOS.

// Step 1: Configure BackgroundFetch as usual.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15
}, async (taskId) => { // <-- Event callback
// This is the fetch-event callback.
console.log("[BackgroundFetch] taskId: ", taskId);
// Use a switch statement to route task-handling.
switch (taskId) {
case 'com.foo.customtask':
print("Received custom task");
break;
default:
print("Default fetch task");
}
// Finish, providing received taskId.
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
// Step 2: Schedule a custom "oneshot" task "com.foo.customtask" to execute 5000ms from now.
BackgroundFetch.scheduleTask({
taskId: "com.foo.customtask",
forceAlarmManager: true,
delay: 5000 // <-- milliseconds
});
API Documentation
@param {Integer} minimumFetchInterval [15]
The minimum interval in minutes to execute background fetch events. Defaults to 15
minutes. Note: Background-fetch events will never occur at a frequency higher than every 15 minutes. Apple uses a secret algorithm to adjust the frequency of fetch events, presumably based upon usage patterns of the app. Fetch events can occur less often than your configured minimumFetchInterval
.
@param {Integer} delay (milliseconds)
ℹ️ Valid only for BackgroundFetch.scheduleTask
. The minimum number of milliseconds in future that task should execute.
@param {Boolean} periodic [false]
ℹ️ Valid only for BackgroundFetch.scheduleTask
. Defaults to false
. Set true to execute the task repeatedly. When false
, the task will execute just once.
@config {Boolean} stopOnTerminate [true]
Set false
to continue background-fetch events after user terminates the app. Default to true
.
@config {Boolean} startOnBoot [false]
Set true
to initiate background-fetch events when the device is rebooted. Defaults to false
.
❗ NOTE: startOnBoot
requires stopOnTerminate: false
.
@config {Boolean} forceAlarmManager [false]
By default, the plugin will use Android's JobScheduler
when possible. The JobScheduler
API prioritizes for battery-life, throttling task-execution based upon device usage and battery level.
Configuring forceAlarmManager: true
will bypass JobScheduler
to use Android's older AlarmManager
API, resulting in more accurate task-execution at the cost of higher battery usage.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15,
forceAlarmManager: true
}, async (taskId) => { // <-- Event callback
console.log("[BackgroundFetch] taskId: ", taskId);
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
.
.
.
// And with #scheduleTask
BackgroundFetch.scheduleTask({
taskId: 'com.foo.customtask',
delay: 5000, // milliseconds
forceAlarmManager: true,
periodic: false
});
@config {Boolean} enableHeadless [false]
Set true
to enable React Native's Headless JS mechanism, for handling fetch events after app termination.
Register your headless-task in index.js (it MUST BE IN index.js):
import BackgroundFetch from "react-native-background-fetch";
let MyHeadlessTask = async (event) => {
// Get task id from event {}:
let taskId = event.taskId;
let isTimeout = event.timeout; // <-- true when your background-time has expired.
if (isTimeout) {
// This task has exceeded its allowed running-time.
// You must stop what you're doing immediately finish(taskId)
console.log('[BackgroundFetch] Headless TIMEOUT:', taskId);
BackgroundFetch.finish(taskId);
return;
}
console.log('[BackgroundFetch HeadlessTask] start: ', taskId);
// Perform an example HTTP request.
// Important: await asychronous tasks when using HeadlessJS.
let response = await fetch('https://reactnative.dev/movies.json');
let responseJson = await response.json();
console.log('[BackgroundFetch HeadlessTask] response: ', responseJson);
// Required: Signal to native code that your task is complete.
// If you don't do this, your app could be terminated and/or assigned
// battery-blame for consuming too much time in background.
BackgroundFetch.finish(taskId);
}
// Register your BackgroundFetch HeadlessTask
BackgroundFetch.registerHeadlessTask(MyHeadlessTask);
@config {integer} requiredNetworkType [BackgroundFetch.NETWORK_TYPE_NONE]
Set basic description of the kind of network your job requires.
If your job doesn't need a network connection, you don't need to use this option as the default value is BackgroundFetch.NETWORK_TYPE_NONE
.
NetworkType | Description |
---|---|
BackgroundFetch.NETWORK_TYPE_NONE | This job doesn't care about network constraints, either any or none. |
BackgroundFetch.NETWORK_TYPE_ANY | This job requires network connectivity. |
BackgroundFetch.NETWORK_TYPE_CELLULAR | This job requires network connectivity that is a cellular network. |
BackgroundFetch.NETWORK_TYPE_UNMETERED | This job requires network connectivity that is unmetered. Most WiFi networks are unmetered, as in "you can upload as much as you like". |
BackgroundFetch.NETWORK_TYPE_NOT_ROAMING | This job requires network connectivity that is not roaming (being outside the country of origin) |
@config {Boolean} requiresBatteryNotLow [false]
Specify that to run this job, the device's battery level must not be low.
This defaults to false. If true, the job will only run when the battery level is not low, which is generally the point where the user is given a "low battery" warning.
@config {Boolean} requiresStorageNotLow [false]
Specify that to run this job, the device's available storage must not be low.
This defaults to false. If true, the job will only run when the device is not in a low storage state, which is generally the point where the user is given a "low storage" warning.
@config {Boolean} requiresCharging [false]
Specify that to run this job, the device must be charging (or be a non-battery-powered device connected to permanent power, such as Android TV devices). This defaults to false.
@config {Boolean} requiresDeviceIdle [false]
When set true, ensure that this job will not run if the device is in active use.
The default state is false: that is, the job is runnable even while someone is interacting with the device.
This state is a loose definition provided by the system. In general, it means that the device is not currently being used interactively, and has not been in use for some time. As such, it is a good time to perform resource heavy jobs. Bear in mind that battery usage will still be attributed to your application, and shown to the user in battery stats.
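To illustrate how the Boolean constraint options above combine, here is a sketch of a config object. The field names follow the @config entries; the values are example choices, not recommendations:

```javascript
// Illustrative config combining the constraint options described above.
const constraintConfig = {
  minimumFetchInterval: 15,     // minutes (illustrative choice)
  requiresCharging: true,       // only run while connected to power
  requiresBatteryNotLow: true,  // skip when the OS reports "low battery"
  requiresStorageNotLow: false, // default: storage level doesn't matter
  requiresDeviceIdle: false,    // default: may run during active use
};
```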
Method Name | Arguments | Returns | Notes |
---|---|---|---|
configure | {FetchConfig}, callbackFn, timeoutFn | Promise<BackgroundFetchStatus> | Configures the plugin's callbackFn and timeoutFn. This callback will fire each time a background-fetch event occurs, in addition to events from #scheduleTask. The timeoutFn will be called when the OS reports your task is nearing the end of its allowed background-time. |
scheduleTask | {TaskConfig} | Promise<boolean> | Executes a custom task. The task will be executed in the same callback function provided to #configure. |
status | callbackFn | Promise<BackgroundFetchStatus> | Your callback will be executed with the current status (Integer): 0: Restricted, 1: Denied, 2: Available. These constants are defined as BackgroundFetch.STATUS_RESTRICTED, BackgroundFetch.STATUS_DENIED, BackgroundFetch.STATUS_AVAILABLE. (NOTE: Android will always return STATUS_AVAILABLE.) |
finish | String taskId | Void | You MUST call this method in your callbackFn provided to #configure in order to signal to the OS that your task is complete. iOS provides only 30s of background-time for a fetch-event -- if you exceed this 30s, iOS will kill your app. |
start | none | Promise<BackgroundFetchStatus> | Start the background-fetch API. Your callbackFn provided to #configure will be executed each time a background-fetch event occurs. NOTE: the #configure method automatically calls #start, so you do not have to call this method after you #configure the plugin. |
stop | [taskId:String] | Promise<boolean> | Stop the background-fetch API and all #scheduleTask tasks from firing events. Your callbackFn provided to #configure will no longer be executed. If you provide an optional taskId, only that #scheduleTask will be stopped. |
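The configure -> scheduleTask -> finish contract in the table can be sketched outside React Native with a minimal stub standing in for the plugin. Everything here (the stub, the `com.example.customtask` id) is illustrative; the real module's methods return Promises, which this synchronous sketch deliberately does not:

```javascript
// Minimal stub standing in for the react-native-background-fetch module.
const finished = [];

const BackgroundFetch = {
  configure(config, callbackFn, timeoutFn) {
    this._callbackFn = callbackFn; // fires for fetch events AND #scheduleTask events
    this._timeoutFn = timeoutFn;   // fires when the OS ends your background-time
    return 2;                      // 2 == STATUS_AVAILABLE (see the status row above)
  },
  scheduleTask(taskConfig) {
    // Per the API table, custom tasks are delivered to the same callback
    // provided to #configure.
    this._callbackFn(taskConfig.taskId);
    return true;
  },
  finish(taskId) {
    finished.push(taskId);         // signals "task complete" to native code
  },
};

// Usage mirroring the documented pattern:
BackgroundFetch.configure(
  { minimumFetchInterval: 15 },
  (taskId) => {
    // ... perform work ...
    BackgroundFetch.finish(taskId); // REQUIRED in every event callback
  },
  (taskId) => {
    BackgroundFetch.finish(taskId); // timeout: stop immediately and finish
  }
);

BackgroundFetch.scheduleTask({ taskId: 'com.example.customtask', delay: 5000 });
```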
Debugging

iOS: BGTaskScheduler API (iOS 13+)

- Simulate background-fetch events: run your app in the Xcode debugger and click the [||] button to initiate a breakpoint. At the (lldb) prompt, paste the following command (Note: use cursor up/down keys to cycle through previously run commands):

```
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateLaunchForTaskWithIdentifier:@"com.transistorsoft.fetch"]
```

- Click the [ > ] button to continue. The task will execute, and the callback function provided to BackgroundFetch.configure will receive the event.

- The BGTaskScheduler API also supports simulated task-timeout events. To simulate a task-timeout, your fetchCallback must not call BackgroundFetch.finish(taskId):

```javascript
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15
}, async (taskId) => {  // <-- Event callback.
  // This is the task callback.
  console.log("[BackgroundFetch] taskId", taskId);
  //BackgroundFetch.finish(taskId);  // <-- Disable .finish(taskId) when simulating an iOS task timeout.
}, async (taskId) => {  // <-- Event timeout callback.
  // This task has exceeded its allowed running-time.
  // You must stop what you're doing and immediately .finish(taskId).
  console.log("[BackgroundFetch] TIMEOUT taskId:", taskId);
  BackgroundFetch.finish(taskId);
});
```

- Then simulate the task-timeout event:

```
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateExpirationForTaskWithIdentifier:@"com.transistorsoft.fetch"]
```
iOS: BackgroundFetch API

- Simulate fetch events using Xcode's menu item: Debug -> Simulate Background Fetch.
Android

- Observe plugin logs with $ adb logcat:

```
$ adb logcat *:S ReactNative:V ReactNativeJS:V TSBackgroundFetch:V
```

- On Android 21+, simulate a background-fetch event with:

```
$ adb shell cmd jobscheduler run -f <your.application.id> 999
```

- On Android <21, simulate a "Headless JS" event with (insert <your.application.id>):

```
$ adb shell am broadcast -a <your.application.id>.event.BACKGROUND_FETCH
```
Download Details:
Author: transistorsoft
Source Code: https://github.com/transistorsoft/react-native-background-fetch
License: MIT license