Let's look at how to spin up a Docker Swarm cluster on DigitalOcean and then configure a microservice, powered by Flask and Postgres, to run on it.
This is an intermediate-level tutorial. It assumes that you have basic working knowledge of Flask, Docker, and container orchestration. Review the following courses for more info on each of these tools and topics:
Docker dependencies:
By the end of this tutorial, you will be able to...
As you move from deploying containers on a single machine to deploying them across a number of machines, you'll need an orchestration tool to manage (and automate) the arrangement, coordination, and availability of the containers across the entire system.
This is where Docker Swarm (or "Swarm mode") fits in along with a number of other orchestration tools -- like Kubernetes, ECS, Mesos, and Nomad.
Which one should you use?
| Tool | Pros | Cons |
|---|---|---|
| Kubernetes | large community, flexible, most features, hip | complex setup, high learning curve, hip |
| Docker Swarm | easy to set up, perfect for smaller clusters | limited by the Docker API |
| ECS | fully-managed service, integrated with AWS | vendor lock-in |
There's also a number of managed Kubernetes-based services on the market:
For more, review the Choosing the Right Containerization and Cluster Management Tool blog post.
Clone down the base branch from the flask-docker-swarm repo:
$ git clone https://github.com/testdrivenio/flask-docker-swarm --branch base --single-branch
$ cd flask-docker-swarm
Build the images and spin up the containers locally:
$ docker-compose up -d --build
Create and seed the database `users` table:
$ docker-compose run web python manage.py recreate_db
$ docker-compose run web python manage.py seed_db
Test out the /ping and /users endpoints in your browser of choice. The /ping route should return:
{
"container_id": "3c9dc22aa37a",
"message": "pong!",
"status": "success"
}
The `container_id` is the ID of the Docker container the app is running in:

$ docker ps --filter name=flask-docker-swarm_web --format "{{.ID}}"
3c9dc22aa37a

Meanwhile, the /users endpoint should return something like:
{
"container_id": "3c9dc22aa37a",
"status": "success",
"users": [{
"active": true,
"admin": false,
"email": "michael@notreal.com",
"id": 1,
"username": "michael"
}]
}
Take a quick look at the code before moving on:
├── README.md
├── docker-compose.yml
└── services
├── db
│ ├── Dockerfile
│ └── create.sql
├── nginx
│ ├── Dockerfile
│ └── prod.conf
└── web
├── Dockerfile
├── manage.py
├── project
│ ├── __init__.py
│ ├── api
│ │ ├── main.py
│ │ ├── models.py
│ │ └── users.py
│ └── config.py
└── requirements.txt
Since Docker Swarm uses multiple Docker engines, we'll need to use a Docker image registry to distribute our three images to each of the engines. This tutorial uses the Docker Hub image registry but feel free to use a different registry service or run your own private registry within Swarm.
Create an account on Docker Hub, if you don't already have one, and then log in:
$ docker login
Build, tag, and push the images to Docker Hub:
$ docker build -t mjhea0/flask-docker-swarm_web:latest -f ./services/web/Dockerfile ./services/web
$ docker push mjhea0/flask-docker-swarm_web:latest
$ docker build -t mjhea0/flask-docker-swarm_db:latest -f ./services/db/Dockerfile ./services/db
$ docker push mjhea0/flask-docker-swarm_db:latest
$ docker build -t mjhea0/flask-docker-swarm_nginx:latest -f ./services/nginx/Dockerfile ./services/nginx
$ docker push mjhea0/flask-docker-swarm_nginx:latest
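Since the three images differ only by service name, you can optionally collapse the build/push pairs into a single loop (assuming the same mjhea0 namespace):

$ for name in web db nginx; do
    docker build -t mjhea0/flask-docker-swarm_${name}:latest \
      -f ./services/${name}/Dockerfile ./services/${name}
    docker push mjhea0/flask-docker-swarm_${name}:latest
  done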
Be sure to replace `mjhea0` with your namespace on Docker Hub.
Moving on, let's set up a new Docker Compose file for use with Docker Swarm:
version: '3.8'
services:
web:
image: mjhea0/flask-docker-swarm_web:latest
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == worker]
expose:
- 5000
environment:
- FLASK_ENV=production
- APP_SETTINGS=project.config.ProductionConfig
- DB_USER=postgres
- DB_PASSWORD=postgres
- SECRET_CODE=myprecious
depends_on:
- db
networks:
- app
db:
image: mjhea0/flask-docker-swarm_db:latest
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager]
volumes:
- data-volume:/var/lib/postgresql/data
expose:
- 5432
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
networks:
- app
nginx:
image: mjhea0/flask-docker-swarm_nginx:latest
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == worker]
ports:
- 80:80
depends_on:
- web
networks:
- app
networks:
app:
driver: overlay
volumes:
data-volume:
driver: local
Save this file as docker-compose-swarm.yml in the project root. Take note of the differences between the two compose files: the Swarm version pulls the pre-built images from Docker Hub rather than building them locally, configures each service under a `deploy` key (replicas, restart policy, and placement constraints), and uses the `overlay` network driver so services can communicate across nodes.
Sign up for a DigitalOcean account (if you don’t already have one), and then generate an access token so you can access the DigitalOcean API.
Add the token to your environment:
$ export DIGITAL_OCEAN_ACCESS_TOKEN=[your_digital_ocean_token]
Spin up four DigitalOcean droplets:
$ for i in 1 2 3 4; do
docker-machine create \
--driver digitalocean \
--digitalocean-access-token $DIGITAL_OCEAN_ACCESS_TOKEN \
--engine-install-url "https://releases.rancher.com/install-docker/19.03.9.sh" \
node-$i;
done
The `--engine-install-url` flag is required since, as of writing, Docker v20.10.0 doesn't work with Docker Machine.

This will take a few minutes. Once complete, initialize Swarm mode on node-1:
$ docker-machine ssh node-1 -- docker swarm init --advertise-addr $(docker-machine ip node-1)
Grab the join token from the output of the previous command, and then add the remaining nodes to the Swarm as workers:
$ for i in 2 3 4; do
docker-machine ssh node-$i \
-- docker swarm join --token YOUR_JOIN_TOKEN;
done
Point the Docker daemon at node-1 and deploy the stack:
$ eval $(docker-machine env node-1)
$ docker stack deploy --compose-file=docker-compose-swarm.yml flask
List out the services in the stack:
$ docker stack ps -f "desired-state=running" flask
You should see something similar to:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
uz84le3651f8 flask_nginx.1 mjhea0/flask-docker-swarm_nginx:latest node-3 Running Running 23 seconds ago
nv365bhsoek1 flask_web.1 mjhea0/flask-docker-swarm_web:latest node-2 Running Running 32 seconds ago
uyl11jk2h71d flask_db.1 mjhea0/flask-docker-swarm_db:latest node-1 Running Running 38 seconds ago
Now, to update the database based on the schema provided in the `web` service, we first need to point the Docker daemon at the node that `flask_web` is running on:
$ NODE=$(docker service ps -f "desired-state=running" --format "{{.Node}}" flask_web)
$ eval $(docker-machine env $NODE)
Assign the container ID for `flask_web` to a variable:
$ CONTAINER_ID=$(docker ps --filter name=flask_web --format "{{.ID}}")
Create the database table and apply the seed:
$ docker container exec -it $CONTAINER_ID python manage.py recreate_db
$ docker container exec -it $CONTAINER_ID python manage.py seed_db
Finally, point the Docker daemon back at node-1 and retrieve the IP associated with the machine that `flask_nginx` is running on:
$ eval $(docker-machine env node-1)
$ docker-machine ip $(docker service ps -f "desired-state=running" --format "{{.Node}}" flask_nginx)
Test out the endpoints:
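For example, with curl against the IP returned by the previous command (the responses should match what you saw locally):

$ curl http://YOUR_MACHINE_IP/ping
$ curl http://YOUR_MACHINE_IP/users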
Let's add another web app to the cluster:
$ docker service scale flask_web=2
flask_web scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
Confirm that the service did in fact scale:
$ docker stack ps -f "desired-state=running" flask
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
uz84le3651f8 flask_nginx.1 mjhea0/flask-docker-swarm_nginx:latest node-3 Running Running 7 minutes ago
nv365bhsoek1 flask_web.1 mjhea0/flask-docker-swarm_web:latest node-2 Running Running 7 minutes ago
uyl11jk2h71d flask_db.1 mjhea0/flask-docker-swarm_db:latest node-1 Running Running 7 minutes ago
n8ld0xkm3pd0 flask_web.2 mjhea0/flask-docker-swarm_web:latest node-4 Running Running 7 seconds ago
Make a few requests to the service:
$ for ((i=1;i<=10;i++)); do curl http://YOUR_MACHINE_IP/ping; done
You should see different `container_id`s being returned, indicating that requests are being routed appropriately via a round robin algorithm between the two replicas:
{"container_id":"3e984eb707ea","message":"pong!","status":"success"}
{"container_id":"e47de2a13a2e","message":"pong!","status":"success"}
{"container_id":"3e984eb707ea","message":"pong!","status":"success"}
{"container_id":"e47de2a13a2e","message":"pong!","status":"success"}
{"container_id":"3e984eb707ea","message":"pong!","status":"success"}
{"container_id":"e47de2a13a2e","message":"pong!","status":"success"}
{"container_id":"3e984eb707ea","message":"pong!","status":"success"}
{"container_id":"e47de2a13a2e","message":"pong!","status":"success"}
{"container_id":"3e984eb707ea","message":"pong!","status":"success"}
{"container_id":"e47de2a13a2e","message":"pong!","status":"success"}
What happens if we scale in as traffic is hitting the cluster?
Traffic is re-routed appropriately. Try this again, but this time scale out.
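One quick way to watch the re-routing happen, sketched here with the same /ping endpoint, is to keep a request loop running in one terminal while scaling the service from another:

# Terminal 1: send a request every second
$ while true; do curl http://YOUR_MACHINE_IP/ping; sleep 1; done

# Terminal 2: scale in, then back out
$ docker service scale flask_web=1
$ docker service scale flask_web=2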
Docker Swarm Visualizer is an open source tool designed to monitor a Docker Swarm cluster.
Add the service to docker-compose-swarm.yml:
visualizer:
image: dockersamples/visualizer:latest
ports:
- 8080:8080
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- app
Point the Docker daemon at node-1 and update the stack:
$ eval $(docker-machine env node-1)
$ docker stack deploy --compose-file=docker-compose-swarm.yml flask
It could take a minute or two for the visualizer to spin up. Navigate to http://YOUR_MACHINE_IP:8080 to view the dashboard:
Add two more replicas of `flask_web`:
$ docker service scale flask_web=3
Docker Secrets is a secrets management tool specifically designed for Docker Swarm. With it you can easily distribute sensitive info (like usernames and passwords, SSH keys, SSL certificates, API tokens, etc.) across the cluster.
Docker can read secrets from either its own database (external mode) or from a local file (file mode). We'll look at the former.
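For reference, a file-based secret is declared by pointing the top-level secrets key at a local file rather than marking it external; a minimal sketch (illustrative only -- we'll use the external flavor below):

secrets:
  secret_code:
    file: ./secret_code.txt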
In the services/web/project/api/main.py file, take note of the `/secret` route. If the `secret` in the request payload is the same as the `SECRET_CODE` variable, the message in the response payload will be equal to `yay!`. Otherwise, it will equal `nay!`.
# yay
{
"container_id": "6f91a81a6357",
"message": "yay!",
"status": "success"
}
# nay
{
"container_id": "6f91a81a6357",
"message": "nay!",
"status": "success"
}
Test out the `/secret` endpoint in the terminal:
$ curl -X POST http://YOUR_MACHINE_IP/secret \
-d '{"secret": "myprecious"}' \
-H 'Content-Type: application/json'
You should see:
{
"container_id": "6f91a81a6357",
"message": "yay!",
"status": "success"
}
Let's update the `SECRET_CODE` so that it's set by a Docker Secret rather than an environment variable. Start by creating a new secret from the manager node:
$ eval $(docker-machine env node-1)
$ echo "foobar" | docker secret create secret_code -
Confirm that it was created:
$ docker secret ls
You should see something like:
ID NAME DRIVER CREATED UPDATED
za3pg2cbbf92gi9u1v0af16e3 secret_code 15 seconds ago 15 seconds ago
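You can also inspect the secret's metadata (Docker never displays the secret value itself):

$ docker secret inspect secret_code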
Next, remove the `SECRET_CODE` environment variable and add the `secrets` config to the `web` service in docker-compose-swarm.yml:
web:
image: mjhea0/flask-docker-swarm_web:latest
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == worker]
expose:
- 5000
environment:
- FLASK_ENV=production
- APP_SETTINGS=project.config.ProductionConfig
- DB_USER=postgres
- DB_PASSWORD=postgres
secrets:
- secret_code
depends_on:
- db
networks:
- app
At the bottom of the file, define the source of the secret as `external`, just below the `volumes` declaration:
secrets:
secret_code:
external: true
That's it. We can now gain access to this secret within the Flask app.
Review the secrets configuration reference guide as well as this Stack Overflow answer for more info on both external and file-based secrets.
Turn back to services/web/project/api/main.py.
Change:
SECRET_CODE = os.environ.get("SECRET_CODE")
To:
SECRET_CODE = open("/run/secrets/secret_code", "r").read().strip()
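With this change, the app requires the secret file to exist at runtime. If you also want the image to keep working without Swarm (for example, under plain Docker Compose, where no secret is mounted), one optional variation -- not part of the original setup -- is to fall back to the environment variable:

import os

try:
    # Under Swarm, Docker mounts the secret at /run/secrets/secret_code
    SECRET_CODE = open("/run/secrets/secret_code", "r").read().strip()
except FileNotFoundError:
    # Local fallback when no secret file is mounted
    SECRET_CODE = os.environ.get("SECRET_CODE")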
Reset the Docker environment back to localhost:
$ eval $(docker-machine env -u)
Re-build the image and push the new version to Docker Hub:
$ docker build -t mjhea0/flask-docker-swarm_web:latest -f ./services/web/Dockerfile ./services/web
$ docker push mjhea0/flask-docker-swarm_web:latest
Point the daemon back at the manager, and then update the service:
$ eval $(docker-machine env node-1)
$ docker stack deploy --compose-file=docker-compose-swarm.yml flask
For more on defining secrets in a compose file, refer to the Use Secrets in Compose section of the docs.
Test it out again:
$ curl -X POST http://YOUR_MACHINE_IP/secret \
-d '{"secret": "foobar"}' \
-H 'Content-Type: application/json'
{
"container_id": "6f91a81a6357",
"message": "yay!",
"status": "success"
}
Looking for a challenge? Try using Docker Secrets to manage the database credentials rather than defining them directly in the compose file.
In a production environment you should use health checks to test whether a specific container is working as expected before routing traffic to it. In our case, we can use a health check to ensure that the Flask app (and the API) is up and running; otherwise, we could run into a situation where a new container is spun up and added to the cluster that appears to be healthy when in fact the app is actually down and not able to handle traffic.
You can add health checks to either a Dockerfile or to a compose file. We'll look at the latter.
Curious about how to add health checks to a Dockerfile? Review the health check instruction from the official docs.
It's worth noting that the health check settings defined in a compose file will override the settings from a Dockerfile.
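For reference, the equivalent instruction in a Dockerfile would look something like this (mirroring the compose settings we'll add below):

HEALTHCHECK --interval=10s --timeout=2s --retries=5 \
  CMD curl --fail http://localhost:5000/ping || exit 1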
Update the `web` service in docker-compose-swarm.yml like so:
web:
image: mjhea0/flask-docker-swarm_web:latest
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == worker]
expose:
- 5000
environment:
- FLASK_ENV=production
- APP_SETTINGS=project.config.ProductionConfig
- DB_USER=postgres
- DB_PASSWORD=postgres
secrets:
- secret_code
depends_on:
- db
networks:
- app
healthcheck:
test: curl --fail http://localhost:5000/ping || exit 1
interval: 10s
timeout: 2s
retries: 5
Options:

- `test` is the actual command that will be run to check the health status. It should return `0` if healthy or `1` if unhealthy. For this to work, the curl command must be available in the container.
- `interval` controls when the first health check runs and how often it runs from there on out.
- `retries` sets how many times the health check will retry a failed check before the container is considered unhealthy.
- `timeout`: if a single run of the check takes longer than the defined timeout, that run will be considered a failure.

Before we can test the health check, we need to add curl to the container. Remember: The command you use for the health check needs to be available inside the container.
Update the Dockerfile like so:
###########
# BUILDER #
###########
# Base Image
FROM python:3.9 as builder
# Lint
RUN pip install flake8 black
WORKDIR /home/app
COPY project ./project
COPY manage.py .
RUN flake8 --ignore=E501 .
RUN black --check .
# Install Requirements
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /home/app/wheels -r requirements.txt
#########
# FINAL #
#########
# Base Image
FROM python:3.9-slim
# ----- NEW ----
# Install curl
RUN apt-get update && apt-get install -y curl
# Create directory for the app user
RUN mkdir -p /home/app
# Create the app user
RUN groupadd app && useradd -g app app
# Create the home directory
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# Install Requirements
COPY --from=builder /home/app/wheels /wheels
COPY --from=builder /home/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# Copy in the Flask code
COPY . $APP_HOME
# Chown all the files to the app user
RUN chown -R app:app $APP_HOME
# Change to the app user
USER app
# run server
CMD gunicorn --log-level=debug -b 0.0.0.0:5000 manage:app
Again, reset the Docker environment:
$ eval $(docker-machine env -u)
Build and push the new image:
$ docker build -t mjhea0/flask-docker-swarm_web:latest -f ./services/web/Dockerfile ./services/web
$ docker push mjhea0/flask-docker-swarm_web:latest
Update the service:
$ eval $(docker-machine env node-1)
$ docker stack deploy --compose-file=docker-compose-swarm.yml flask
Then, find the node that the `flask_web` service is on:
$ docker service ps flask_web
Point the daemon at that node:
$ eval $(docker-machine env <NODE>)
Make sure to replace `<NODE>` with the actual node -- e.g., `node-2`, `node-3`, or `node-4`.
Grab the container ID:
$ docker ps
Then run:
$ docker inspect --format='{{json .State.Health}}' <CONTAINER_ID>
You should see something like:
{
"Status": "healthy",
"FailingStreak": 0,
"Log": [
{
"Start": "2021-02-23T03:31:44.886509504Z",
"End": "2021-02-23T03:31:45.104507568Z",
"ExitCode": 0,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 69 100 69 0 0 11629 0 --:--:-- --:--:-- --:--:-- 13800\n{\"container_id\":\"a6127b1f469d\",\"message\":\"pong!\",\"status\":\"success\"}\n"
}
]
}
Want to see a failing health check? Update the `test` command in docker-compose-swarm.yml to ping port 5001 instead of 5000:
healthcheck:
test: curl --fail http://localhost:5001/ping || exit 1
interval: 10s
timeout: 2s
retries: 5
Just like before, update the service and then find the node and container ID that the `flask_web` service is on. Then, run:
$ docker inspect --format='{{json .State.Health}}' <CONTAINER_ID>
You should see something like:
{
"Status": "starting",
"FailingStreak": 1,
"Log": [
{
"Start": "2021-02-23T03:34:39.644618421Z",
"End": "2021-02-23T03:34:39.784855122Z",
"ExitCode": 1,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 5001: Connection refused\n"
}
]
}
The service should be down in the Docker Swarm Visualizer dashboard as well.
Update the health check and the service. Make sure all is well before moving on.
When working with a distributed system it's important to set up proper logging and monitoring so you can gain insight into what's happening when things go wrong. We've already set up the Docker Swarm Visualizer tool to help with monitoring, but much more can be done.
In terms of logging, you can run the following command (from the node manager) to access the logs of a service running on multiple nodes:
$ docker service logs -f SERVICE_NAME
Review the docs to learn more about the logs command as well as how to configure the default logging driver.
Try it out:
$ eval $(docker-machine env node-1)
$ docker service logs -f flask_web
You'll probably want to aggregate log events from each service to help make analysis and visualization easier. One popular approach is to set up an ELK (Elasticsearch, Logstash, and Kibana) stack in the Swarm cluster. This is beyond the scope of this blog post, but take a look at the following resources for help on this:
Finally, Prometheus (along with its de-facto GUI Grafana) is a powerful monitoring solution. Check out Docker Swarm instrumentation with Prometheus for more info.
All done?
Bring down the stack and remove the nodes:
$ docker stack rm flask
$ docker-machine rm node-1 node-2 node-3 node-4 -y
Ready to put everything together? Let's write a script that will spin up the droplets, set up Swarm mode, create the secret, deploy the stack, and create and seed the database.
Add a new file called deploy.sh to the project root:
#!/bin/bash
echo "Spinning up four droplets..."
for i in 1 2 3 4; do
docker-machine create \
--driver digitalocean \
--digitalocean-access-token $DIGITAL_OCEAN_ACCESS_TOKEN \
--engine-install-url "https://releases.rancher.com/install-docker/19.03.9.sh" \
node-$i;
done
echo "Initializing Swarm mode..."
docker-machine ssh node-1 -- docker swarm init --advertise-addr $(docker-machine ip node-1)
echo "Adding the nodes to the Swarm..."
TOKEN=`docker-machine ssh node-1 docker swarm join-token worker | grep token | awk '{ print $5 }'`
for i in 2 3 4; do
docker-machine ssh node-$i \
-- docker swarm join --token ${TOKEN} $(docker-machine ip node-1):2377;
done
echo "Creating secret..."
eval $(docker-machine env node-1)
echo "foobar" | docker secret create secret_code -
echo "Deploying the Flask microservice..."
docker stack deploy --compose-file=docker-compose-swarm.yml flask
echo "Create the DB table and apply the seed..."
sleep 15
NODE=$(docker service ps -f "desired-state=running" --format "{{.Node}}" flask_web)
eval $(docker-machine env $NODE)
CONTAINER_ID=$(docker ps --filter name=flask_web --format "{{.ID}}")
docker container exec -it $CONTAINER_ID python manage.py recreate_db
docker container exec -it $CONTAINER_ID python manage.py seed_db
echo "Get the IP address..."
eval $(docker-machine env node-1)
docker-machine ip $(docker service ps -f "desired-state=running" --format "{{.Node}}" flask_nginx)
Try it out!
$ sh deploy.sh
Bring down the droplets once done:
$ docker-machine rm node-1 node-2 node-3 node-4 -y
In this post we looked at how to run a Flask app on DigitalOcean via Docker Swarm.
At this point, you should understand how Docker Swarm works and be able to deploy a cluster with an app running on it. Make sure you dive into some of the more advanced topics like logging, monitoring, and using rolling updates to enable zero-downtime deployments before using Docker Swarm in production.
You can find the code in the flask-docker-swarm repo on GitHub.
Original article source at: https://testdriven.io/
The electric scooter revolution has caught on fast, taking many cities across the globe by storm. eScooters, a modernized version of old-school kick scooters turned into electric vehicles, are an environmentally friendly solution to current on-demand commute problems. Powered by electric motors, they enable hassle-free travel over short distances. The result is that these groundbreaking electric machines can now provide faster transport for less: cheaper than Uber and faster than the Metro.
Since they are durable, fast, easy to operate and maintain, and more convenient to park than four-wheelers, the eScooter trend continues to spike interest as a promising growth area. Companies and universities are increasingly setting up shop to provide eScooter services, recognizing a profitable business model and a ready customer base of university students and residents who need fast, cheap travel around school, town, and surrounding areas.
In many countries, including the U.S., Canada, Mexico, the U.K., Germany, France, China, Japan, India, Brazil, and more, a growing number of eScooter users, both locals and tourists, can now be seen effortlessly passing lines of drivers stuck in endless, unmoving traffic.
A recent report by McKinsey revealed that the eScooter industry will be worth $200 billion to $300 billion in the United States, $100 billion to $150 billion in Europe, and $30 billion to $50 billion in China by 2030. eScooter revenue is also projected to rise by more than 20%, amounting to approximately $5 billion.
And, with the need to move people away from the high carbon footprints, traffic, and congestion issues brought about by car-centric transport systems, more and more city planners are developing bike and scooter lanes and adopting zero-emission plans. This is the force behind the booming electric scooter market, and the numbers will only go higher.
Companies that have taken advantage of the growing eScooter trend develop an app that allows them to provide efficient eScooter services. Such an app enables riders to locate pick-up and drop-off points through fully integrated Google Maps.
It's clear that e-scooters will become increasingly common, and the e-scooter business model will continue to grab the attention of manufacturers, investors, and entrepreneurs. Anyone looking to get started in the electric bike/scooter rental business should therefore know the best electric bikes on the market.

We have put together a comprehensive list of the best electric bikes! Each bike has been reviewed in depth and includes a full list of specs and a photo.
https://www.kickstarter.com/projects/enkicycles/billy-were-redefining-joyrides
To start us off is the Billy eBike, a powerful go-anywhere urban electric bike that's specially designed to offer an exciting ride like no other, whether you want to ride to the grocery store, cafe, work, or school. The Billy eBike comes in 4 color options: Billy Blue, Polished Aluminium, Arctic White, and Stealth Black.
Price: $2490
Available countries
Available in the USA, Europe, Asia, South Africa, and Australia. This item ships from the USA; buyers are therefore responsible for any taxes and/or customs duties incurred once it arrives in their country.
Features
Specifications
Why Should You Buy This?
**Who Should Ride Billy?**
Both new and experienced riders
**Where to Buy?** Local distributors, or ships from the USA.
Featuring a sleek and lightweight aluminum frame design, the 200-Series eBike takes your riding experience to greater heights. Available in both black and white, this eBike comes with a connected app that allows you to plan activities and map distances and routes, while also connecting you with fellow riders.
Price: $2099.00
Available countries
The GenZe 200-series eBike is available at GenZe retail locations across the U.S. or online via the GenZe.com website. Customers outside the US can have the product shipped, incurring the relevant charges.
Features
Specifications
https://ebikestore.com/shop/norco-vlt-s2/
The Norco VLT S2 is a front suspension e-Bike with solid components alongside the reliable Bosch Performance Line Power systems that offer precise pedal assistance during any riding situation.
Price: $2,699.00
Available countries
This item is available via Norco's various international distributors.
Features
Specifications
http://www.bodoevs.com/bodoev/products_show.asp?product_id=13
Manufactured by Bodo Vehicle Group Limited, the Bodo EV is specially designed for strong power and extraordinarily long service life to facilitate amazing rides. Bodo is a leading electric vehicle brand in China and across the globe. The Bodo EV will no doubt provide your riders with a high level of riding satisfaction owing to its high-quality design, strength, braking stability, and speed.
Price: $799
Available countries
This item ships from China, with buyers bearing the shipping costs and other applicable charges prior to delivery.
Features
Specifications
Are you leading an organization that has a large campus, e.g., a large university? You are probably thinking of introducing an electric scooter/bicycle fleet on the campus, and why wouldn’t you?
Introducing micro-mobility on your campus with such a fleet would help the people there significantly. They would save money, since they wouldn't need a car for short distances. Your campus would see a drastic reduction in congestion; moreover, its carbon footprint would shrink.
Micro-mobility is relatively new, though, and you will need help. You will need to select an appropriate fleet of vehicles. The people on your campus will need to find electric scooters or electric bikes for commuting, and you need to provide a solution for this.
To be more specific, you need a short-term electric bike rental app. With such an app, you will be able to easily offer micro-mobility to the people on the campus. We at Devathon have built Autorent exactly for this.
What does Autorent do and how can it help you? How does it enable you to introduce micro-mobility on your campus? We explain these in this article, however, we will touch upon a few basics first.
You have probably only started thinking about micro-mobility recently, haven't you? A few relevant insights could help you better appreciate its importance.
Micro-mobility is a new trend in transportation, and it uses vehicles that are considerably smaller than cars. Electric scooters (e-scooters) and electric bikes (e-bikes) are the most popular forms of micro-mobility, however, there are also e-unicycles and e-skateboards.
You might have already seen e-scooters, which are kick scooters that come with a motor. Thanks to that motor, an e-scooter can reach speeds of up to 20 km/h. E-bikes, meanwhile, are popular in China and Japan; they too come with a motor and can reach speeds of 40 km/h.
You obviously can’t use these vehicles for very long commutes, however, what if you need to travel a short distance? Even if you have a reasonable public transport facility in the city, it might not cover the route you need to take. Take the example of a large university campus. Such a campus is often at a considerable distance from the central business district of the city where it’s located. While public transport facilities may serve the central business district, they wouldn’t serve this large campus. Currently, many people drive their cars even for short distances.
As you know, that brings its own set of challenges. Vehicular traffic adds significantly to pollution; moreover, finding a parking spot can be hard in crowded urban districts.
Well, you can reduce your carbon footprint if you use an electric car. However, electric cars are still new, and many countries are still building the necessary infrastructure for them. Your large campus might not have the necessary infrastructure for them either. Presently, electric cars don’t represent a viable option in most geographies.
As a result, you need to buy and maintain a car even if your commute is short. In addition to dealing with parking problems, you need to spend significantly on your car.
All of these factors have combined to make people sit up and think seriously about cars. Many people are now seriously considering whether a car is really the best option even if they have to commute only a short distance.
This is where micro-mobility enters the picture. When you commute a short distance regularly, e-scooters or e-bikes are viable options. You limit your carbon footprint, and you cut costs!
Businesses have seen this shift in thinking, and e-scooter companies like Lime and Bird have entered this field in a big way. They let you rent e-scooters by the minute. On the other hand, start-ups like Jump and Lyft have entered the e-bike market.
Think of your campus now! The people there might need to travel short distances within the campus, and e-scooters can really help them.
What advantages can you get from micro-mobility? Let’s take a deeper look into this question.
Micro-mobility can offer several advantages to the people on your campus, e.g.:
With more of us using smartphones, the popularity of mobile applications has exploded. In the digital era, the number of people looking for products and services online is growing rapidly. Smartphone owners look for mobile applications that give them quick access to companies’ products and services. As a result, mobile apps provide customers with a lot of benefits in just one device.
Likewise, companies use mobile apps to increase customer loyalty and improve their services. Mobile Developers are in high demand as companies use apps not only to create brand awareness but also to gather information. For that reason, mobile apps are used as tools to collect valuable data from customers to help companies improve their offer.
There are many types of mobile applications, each with its own advantages. For example, native apps perform better, while web apps don’t need to be customized for the platform or operating system (OS). Likewise, hybrid apps provide users with comfortable user experience. However, you may be wondering how long it takes to develop an app.
To give you an idea of how long the app development process takes, here’s a short guide.
_Average time spent: two to five weeks_
This is the initial stage and a crucial step in setting the project in the right direction. In this stage, you brainstorm ideas and select the best one. Apart from that, you’ll need to do some research to see if your idea is viable. Remember that coming up with an idea is easy; the hard part is to make it a reality.
All your ideas may seem viable, but you still have to run some tests to keep it as real as possible. For that reason, when Web Developers are building a web app, they analyze the available ideas to see which one is the best match for the targeted audience.
Targeting the right audience is crucial when you are developing an app. It saves time when shaping the app in the right direction as you have a clear set of objectives. Likewise, analyzing how the app affects the market is essential. During the research process, App Developers must gather information about potential competitors and threats. This helps the app owners develop strategies to tackle difficulties that come up after the launch.
The research process can take several weeks, but it determines how successful your app can be. For that reason, you must take your time to know all the weaknesses and strengths of the competitors, possible app strategies, and targeted audience.
The outcomes of this stage are app prototypes and the minimum viable product (MVP).
Traditional web applications are synchronous by nature. The user interacts with the web interface presented in the browser, the browser makes requests back to the server based on that interaction, and the server responds to those requests with a new display for the user.

Today, things have changed: modern websites have to handle requests from hundreds of thousands of visitors. When these requests involve interacting with a database or a web service, response times grow, and when thousands of visitors access the same resources, website performance can degrade significantly. This is where the asynchronous web comes to the rescue.

Here are some of the benefits you can reap when you opt for asynchronicity:

In this tutorial, we'll explain how to overcome one of the common pitfalls encountered when building a web application that handles long-running tasks: such tasks limit the web server's ability to respond to new requests.

The simple solution is to execute these long-running tasks asynchronously in the background, in a separate thread or process, freeing up the web server.

We'll leverage several components, such as Redis, Flask, Celery, and SocketIO, to offload the execution of long-running tasks and, once they complete, send the client a push notification indicating their status.

This tutorial does not cover asyncio, Python's built-in library that lets you run code concurrently using coroutines.

Once the requirements are met, the following components come into play:
Redis: an open source, advanced key-value store and an apt solution for building high-performance, scalable web applications. Three main characteristics set it apart:

- Redis holds its database entirely in memory, using disk only for persistence.
- Compared to many key-value data stores, Redis has a relatively rich set of data types.
Installing Redis is outside the scope of this tutorial; however, to install it on a Windows machine, we recommend following this quick guide.

If you're using Linux or macOS, running one of the commands below will set Redis up:
Ubuntu / Debian:
$ sudo apt-get install redis-server
macOS:
$ brew install redis
$ brew services start redis
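To verify that Redis is up, you can ping it with the bundled CLI:

$ redis-cli ping
PONG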
Note: This tutorial uses Redis version 3.0.504.
Celery: one of the most popular background job managers in the Python world. It focuses on real-time operation but supports scheduling as well. It's compatible with several message brokers, such as Redis and RabbitMQ, and can act as both producer and consumer.

We'll install Celery via the requirements.txt file.

Note: This tutorial uses Celery version 4.4.7.
This tutorial takes a scaffolding approach and walks through a series of different scenarios to help you understand the difference between synchronous and asynchronous communication, as well as variations of asynchronicity.

All scenarios are presented within the Flask framework; however, most of them can easily be ported to other Python frameworks (Django, Pyramid).

If this tutorial has intrigued you and you want to dive straight into the code, visit this GitHub repository for the code used in this article.
Our application will consist of:

- app_sync.py: a program showcasing synchronous communication.
- app_async1.py: a program demonstrating an asynchronous service call where the client may request feedback on the server-side process via a polling mechanism.
- app_async2.py: a program demonstrating an asynchronous service call with automatic feedback to the client.
- app_async3.py: a program demonstrating a post-scheduled asynchronous service call with automatic feedback to the client.

Let's dive into the setup. Of course, you'll need Python 3 installed on your system. I'll use a virtual environment to install the required libraries (and you definitely should too):
$ python -m venv async-venv
$ source async-venv/bin/activate
Create a file named requirements.txt and add the following lines to it:
Flask==1.1.2
Flask-SocketIO==5.0.1
Celery==4.4.7
redis==3.5.3
gevent==21.1.2
gevent-websocket==0.10.1
flower==0.9.7
Now install them:
$ pip install -r requirements.txt
Once you finish the tutorial, your folder structure will look roughly like the sketch below (reconstructed from the files created throughout this tutorial, so minor details may differ):
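├── app_async1.py
├── app_async2.py
├── app_async3.py
├── app_sync.py
├── celeryconfig.py
├── config.py
├── init.py
├── requirements.txt
├── tasks.py
├── static
│   ├── css
│   │   └── materialize.min.css
│   └── js
│       ├── jquery.min.js
│       └── socket.io.js
└── templates
    ├── index.html
    ├── index1.html
    ├── index2.html
    └── index3.html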
With that cleared up, let's start writing the actual code.
First, we define the application's configuration parameters in config.py:
#config.py
#Application configuration File
################################
#Secret key that will be used by Flask for securely signing the session cookie
# and can be used for other security related needs
SECRET_KEY = 'SECRET_KEY'
#Map to REDIS Server Port
BROKER_URL = 'redis://localhost:6379'
#Minimum interval of wait time for our task
MIN_WAIT_TIME = 1
#Maximum interval of wait time for our task
MAX_WAIT_TIME = 20
Note: For brevity, we've hardcoded these configuration parameters into config.py, but it's recommended to store them in a separate file (for example, .env).
Next, create the project's initialization file, init.py:
#init.py
from flask import Flask
#Create a flask instance
app = Flask(__name__)
#Loads flask configurations from config.py
app.secret_key = app.config['SECRET_KEY']
app.config.from_object("config")
#Setup the Flask SocketIO integration (Required only for asynchronous scenarios)
from flask_socketio import SocketIO
socketio = SocketIO(app,logger=True,engineio_logger=True,message_queue=app.config['BROKER_URL'])
Before we get into the coding, let's briefly discuss synchronous communication.

In synchronous communication, the caller requests a service and waits for that service to complete. Only upon receiving the service's result does it continue its work. A timeout can be defined: if the service doesn't finish within the defined period, the call is considered failed and the caller moves on.

To understand how synchronous communication works, imagine you've been assigned a dedicated waiter. He takes your order, delivers it to the kitchen, and waits there while the chef prepares your meal. During this time, the waiter does nothing else.

The following figure illustrates a synchronous service call:

Synchronous communication is a good fit for sequential tasks, but when there are many concurrent tasks, the program may run out of threads, forcing new tasks to wait for available threads.
Now, let's get down to coding. Create a template that Flask can render (index.html) and include the following HTML code in it:

templates/index.html
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body class="container">
<div class="row">
<h5>Click to start a post scheduled ansycnhronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runPSATask">
<div>
<input id="duration" name="duration" placeholder="Enter duration in seconds. for example: 30" type="text">
<label for="duration">Duration</label>
</div>
<button style="height:50px;width:600px" type="submit" id="runTask">Run A Post Scheduled Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function(){
var namespace='/runPSATask';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
socket.emit('join_room');
});
socket.on('msg' , function(data) {
$("#Messages").prepend('<li>'+data.msg+'</li>');
});
socket.on('status', function(data) {
////alert('socket on status ='+ data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled",false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled",true);
$("#Messages").empty();
$.ajax({ type: "Post"
, url: '/runPSATask'
, data: $("#runTaskForm").serialize()
, success: function(data) {
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted and will execute in ' + data.duration + ' seconds. </li>');
}
});
e.preventDefault();
console.log('runPSATask complete');
});
</script>
</body>
</html>
This template includes:

- A button with the id runTask, which submits a task to the server via the /runSyncTask route.
- A div with the id Messages, where messages returned by the server are displayed.

Next, create a program called app_sync.py containing the Flask application, and define two routes within it:

- "/" renders the web page (index.html).
- "/runSyncTask" simulates a long-running task that generates a random number between 1 and 20 seconds, then runs a loop that sleeps for one second per iteration.

#app_sync.py
from flask import render_template, jsonify
from random import randint
from init import app
import tasks
#Render the predefined template index.html
@app.route("/",methods=['GET'])
def index():
return render_template('index.html')
#Defining the route for running A Synchronous Task
@app.route("/runSyncTask",methods=['POST'])
def long_sync_task():
print("Running","/runSyncTask")
#Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'],app.config['MAX_WAIT_TIME'])
#Call the function long_sync_task included within tasks.py
task = tasks.long_sync_task(n=n)
#Return the random wait time generated
return jsonify({ 'waittime': n })
if __name__ == "__main__":
app.run(debug=True)
The core logic of all the tasks defined in this tutorial lives in the tasks.py program:
#tasks.py
import time
from celery import Celery
from celery.utils.log import get_task_logger
from flask_socketio import SocketIO
import config
# Setup the logger (compatible with celery version 4)
logger = get_task_logger(__name__)
# Setup the celery client
celery = Celery(__name__)
# Load celery configurations from celeryconfig.py
celery.config_from_object("celeryconfig")
# Setup and connect the socket instance to Redis Server
socketio = SocketIO(message_queue=config.BROKER_URL)
###############################################################################
def long_sync_task(n):
print(f"This task will take {n} seconds.")
for i in range(n):
print(f"i = {i}")
time.sleep(1)
###############################################################################
@celery.task(name = 'tasks.long_async_task')
def long_async_task(n,session):
print(f"The task of session {session} will take {n} seconds.")
for i in range(n):
print(f"i = {i}")
time.sleep(1)
###############################################################################
def send_message(event, namespace, room, message):
print("Message = ", message)
socketio.emit(event, {'msg': message}, namespace=namespace, room=room)
@celery.task(name = 'tasks.long_async_taskf')
def long_async_taskf(data):
room = data['sessionid']
namespace = data['namespase']
n = data['waittime']
#Send messages signaling the lifecycle of the task
send_message('status', namespace, room, 'Begin')
send_message('msg', namespace, room, 'Begin Task {}'.format(long_async_taskf.request.id))
send_message('msg', namespace, room, 'This task will take {} seconds'.format(n))
print(f"This task will take {n} seconds.")
for i in range(n):
msg = f"{i}"
send_message('msg', namespace, room, msg )
time.sleep(1)
send_message('msg', namespace, room, 'End Task {}'.format(long_async_taskf.request.id))
send_message('status', namespace, room, 'End')
###############################################################################
@celery.task(name = 'tasks.long_async_sch_task')
def long_async_sch_task(data):
room = data['sessionid']
namespace = data['namespase']
n = data['waittime']
send_message('status', namespace, room, 'Begin')
send_message('msg' , namespace, room, 'Begin Task {}'.format(long_async_sch_task.request.id))
send_message('msg' , namespace, room, 'This task will take {} seconds'.format(n))
print(f"This task will take {n} seconds.")
for i in range(n):
msg = f"{i}"
send_message('msg', namespace, room, msg )
time.sleep(1)
send_message('msg' , namespace, room, 'End Task {}'.format(long_async_sch_task.request.id))
send_message('status', namespace, room, 'End')
###############################################################################
In this section, we'll use the long_sync_task() function only as a synchronous task. Let's run the app_sync.py program to test the synchronous scenario:
$ python app_sync.py
Visit http://localhost:5000, where the Flask instance is running, and you'll see the following output:
Press the "Run A Synchronous Task" button and wait for the process to complete.

Once done, a message appears telling you the random duration assigned to the triggered task.

At the same time, as the server executes the task, the console displays a number incremented every second.
In this section, we'll demonstrate an asynchronous service call where the client may request feedback about the server-side process via a polling mechanism.

Put simply, asynchronous means that the program doesn't wait for a particular process to complete; it carries on regardless.

The caller initiates the service call but does not wait for the result. It immediately continues its work without caring about the outcome. If the caller is interested in the result, there are mechanisms for that, which we'll discuss later.

The simplest asynchronous message exchange pattern is called fire-and-forget: a message is sent, but no feedback is required. If feedback is required, the client may repeatedly request the result via a polling mechanism.

Polling isn't recommended, as it can create a potentially high network load. Still, it has the advantage that the service provider (the server) doesn't need to know anything about the client.

The following figure illustrates the scenario:

Asynchronous communication suits code that needs to respond to events, for example time-consuming I/O-bound operations that involve waiting.

Opting for asynchronicity allows the system to handle more requests at the same time, increasing throughput.
Now, let's move on to the coding. We'll define Celery's initialization parameters in the configuration file celeryconfig.py:
#celeryconfig.py
#Celery Configuration parameters
#Map to Redis server
broker_url = 'redis://localhost:6379/0'
#Backend used to store the tasks results
result_backend = 'redis://localhost:6379/0'
#A string identifying the default serialization to use Default json
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
#When set to false the local system timezone is used.
enable_utc = False
#To track the started state of a task, we should explicitly enable it
task_track_started = True
#Configure Celery to use a specific time zone.
#The timezone value can be any time zone supported by the pytz library
#timezone = 'Asia/Beirut'
#enable_utc = True
Create a template that Flask can render (index1.html):
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body class="container">
<div class="row">
<h4>Click to start an ansycnhronous task</h4>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runAsyncTask">
<button style="height:50px;width:400px" type="submit" id="runTask">Run An Asynchronous Task</button>
</form>
<form method='post' id="getTaskResultForm" action="/getAsyncTaskResult">
<button style="height:50px;width:400px" type="submit" id="getTaskResult">Get Asynchronous Task Result</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled",true);
$("*").css("cursor","wait");
$("#Messages").empty();
$.ajax({ type: "Post"
, url: '/runAsyncTask'
, data: $("#runTaskForm").serialize()
, success: function(data) {
$("#runTask").attr("disabled",false);
$("*").css("cursor","");
$("#Messages").append('The task ' + data.taskid + ' will be executed in asynchronous manner for ' + data.waittime + ' seconds...');
}
});
e.preventDefault();
console.log('runAsyncTask complete');
});
$("#getTaskResult").click(function(e) {
var msg = $("#Messages").text();
var taskid = msg.match("task(.*)will");
//Get The Task ID from The Messages div and create a Target URL
var vurl = '/getAsyncTaskResult?taskid=' + jQuery.trim(taskid[1]);
$.ajax({ type: "Post"
, url: vurl
, data: $("#getTaskResultForm").serialize()
, success: function(data) {
$("*").css("cursor","");
$("#Messages").append('<p> The Status of the task = ' + data.taskid + ' is ' + data.taskstatus + '</p>');
}
});
e.preventDefault();
console.log('getAsyncTaskResult complete');
});
</script>
</body>
</html>
Next, create the program app_async1.py containing the Flask app:
#app_async1.py
from flask import render_template, jsonify, session,request
from random import randint
import uuid
import tasks
from init import app
from celery.result import AsyncResult
@app.route("/",methods=['GET'])
def index():
# create a unique ID to assign for the asynchronous task
if 'uid' not in session:
sid = str(uuid.uuid4())
session['uid'] = sid
print("Session ID stored =", sid)
return render_template('index1.html')
#Run an Asynchronous Task
@app.route("/runAsyncTask",methods=['POST'])
def long_async_task():
print("Running", "/runAsyncTask")
#Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'],app.config['MAX_WAIT_TIME'])
sid = str(session['uid'])
task = tasks.long_async_task.delay(n=n,session=sid)
#print('taskid',task.id,'sessionid',sid,'waittime',n )
return jsonify({'taskid':task.id,'sessionid':sid,'waittime':n })
#Get The Result of The Asynchronous Task
@app.route('/getAsyncTaskResult', methods=['GET', 'POST'])
def result():
task_id = request.args.get('taskid')
# grab the AsyncResult
result = AsyncResult(task_id)
# print the task id
print("Task ID = ", result.task_id)
# print the Asynchronous result status
print("Task Status = ", result.status)
return jsonify({'taskid': result.task_id, 'taskstatus': result.status})
if __name__ == "__main__":
app.run(debug=True)
This program has three main routes:

- "/": renders the web page (index1.html).
- "/runAsyncTask": invokes an asynchronous task that generates a random number between 1 and 20 seconds, then runs a loop that sleeps for one second per iteration.
- "/getAsyncTaskResult": collects the state of the task based on the received task ID.

Note: This scenario does not involve the SocketIO component.

Let's test this scenario by following these steps:
1. Start the Redis server: on Windows, launch redis-server.exe; with a default installation, or on Linux/macOS, make sure a Redis instance is running on TCP port 6379.
2. Start the Celery worker. On Windows:

$ async-venv\Scripts\celery.exe worker -A tasks --loglevel=DEBUG --concurrency=1 -P solo -f celery.logs
On Linux/macOS, it's very similar:
$ async-venv/bin/celery worker -A tasks --loglevel=DEBUG --concurrency=1 -P solo -f celery.logs
Note that async-venv is the name of our virtual environment; if you named yours differently, make sure to substitute your own name. Once Celery starts, you'll see the following output:

Confirm that the tasks defined in the tasks.py program are reflected in Celery. Then start the Flask application:
$ python app_async1.py
Next, open a browser and visit http://localhost:5000.

Press the "Run An Asynchronous Task" button; a new task is queued and executed right away. In the "Messages" section, you'll see a message with the task's ID and its execution time.

Pressing the "Get Asynchronous Task Result" button (repeatedly) collects the state of the task at that particular point in time. The possible Celery task states are:
- PENDING: waiting for execution.
- STARTED: the task has started.
- SUCCESS: the task executed successfully.
- FAILURE: the task's execution raised an exception.
- RETRY: the task is being retried.
- REVOKED: the task has been revoked.
を確認すると、タスクのライフサイクルに気付くでしょう。
Building on the previous scenario, and to reduce the annoyance of firing multiple requests just to collect a task's state, we'll try incorporating socket technology, which lets the server continuously update the client on the task's state.

Indeed, the Socket.IO engine enables real-time, bidirectional, event-based communication.

The main advantage this brings is a lighter network load, along with greater efficiency in conveying information to a huge number of clients.

The following figure illustrates the scenario:

Before digging deeper, let's quickly go over the steps we'll perform:

To be able to send messages from Celery back to the web browser, we'll leverage:

To manage data connections effectively, we'll adopt the following compartmentalization strategy:
"/runAsyncTaskF"
このシナリオに名前空間を割り当てます。(名前空間は、単一の共有接続を介してサーバーロジックを分離するために使用されます)。それでは、コーディングに移りましょう。
index2.html
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body class="container">
<div class="row">
<h5>Click to start an ansycnhronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runAsyncTask">
<button style="height:50px;width:400px" type="submit" id="runTask">Run An Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function() {
var namespace = '/runAsyncTaskF';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
////alert('socket on connect');
socket.emit('join_room');
});
socket.on('msg', function(data) {
////alert('socket on msg ='+ data.msg);
$("#Messages").prepend('<li>' + data.msg + '</li>');
});
socket.on('status', function(data) {
////alert('socket on status ='+ data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled", false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled", true);
$("*").css("cursor", "wait");
$("#Messages").empty();
$.ajax({
type: "Post",
url: '/runAsyncTaskF',
data: $("#runTaskForm").serialize(),
success: function(data) {
$("*").css("cursor", "");
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted. </li>');
}
});
e.preventDefault();
console.log('runAsyncTaskF complete');
});
</script>
</body>
</html>
Create a program called app_async2.py containing the Flask application:
#Gevent is a coroutine based concurrency library for Python
from gevent import monkey
#For dynamic modifications of a class or module
monkey.patch_all()
from flask import render_template, jsonify, session, request
from random import randint
import uuid
import tasks
from init import app, socketio
from flask_socketio import join_room
@app.route("/",methods=['GET'])
def index():
# create a unique session ID and store it within the Flask session
if 'uid' not in session:
sid = str(uuid.uuid4())
session['uid'] = sid
print("Session ID stored =", sid)
return render_template('index2.html')
#Run an Asynchronous Task With Automatic Feedback
@app.route("/runAsyncTaskF",methods=['POST'])
def long_async_taskf():
print("Running", "/runAsyncTaskF")
# Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
n = randint(app.config['MIN_WAIT_TIME'], app.config['MAX_WAIT_TIME'])
data = {}
data['sessionid'] = str(session['uid'])
data['waittime'] = n
data['namespase'] = '/runAsyncTaskF'
task = tasks.long_async_taskf.delay(data)
return jsonify({ 'taskid':task.id
,'sessionid':data['sessionid']
,'waittime':data['waittime']
,'namespace':data['namespase']
})
@socketio.on('connect', namespace='/runAsyncTaskF')
def socket_connect():
#Display message upon connecting to the namespace
print('Client Connected To NameSpace /runAsyncTaskF - ',request.sid)
@socketio.on('disconnect', namespace='/runAsyncTaskF')
def socket_connect():
# Display message upon disconnecting from the namespace
print('Client disconnected From NameSpace /runAsyncTaskF - ',request.sid)
@socketio.on('join_room', namespace='/runAsyncTaskF')
def on_room():
room = str(session['uid'])
# Display message upon joining a room specific to the session previously stored.
print(f"Socket joining room {room}")
join_room(room)
@socketio.on_error_default
def error_handler(e):
# Display message on error.
print(f"socket error: {e}, {str(request.event)}")
if __name__ == "__main__":
# Run the application with socketio integration.
socketio.run(app,debug=True)
This program has two main routes:

- "/": renders the web page (index2.html).
- "/runAsyncTaskF": generates a random number between 1 and 20 seconds, then invokes the corresponding asynchronous task, long_async_taskf(), defined within the tasks.py program.
app_async2.py
ブラウザを開き、次のリンクにアクセスしてボタンを押すと、次のような出力が徐々に表示されます。
同時に、コンソールに次の出力が表示されます。
また
celery.logs
、タスクのライフサイクルについてファイルをいつでも確認できます。
This scenario is similar to scenario 3. The only difference is that, instead of running the asynchronous task directly, the task is scheduled to run after a specific duration specified by the client.
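The key code difference (visible in app_async3.py below) is that the task is queued with apply_async() and a countdown instead of delay(); a minimal sketch:

# Run the task as soon as a worker picks it up
task = tasks.long_async_sch_task.delay(data)

# Schedule it to start roughly 30 seconds from now instead
task = tasks.long_async_sch_task.apply_async(args=[data], countdown=30)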
Let's proceed to the coding. Create a template, index3.html, with a new field, "Duration", representing the time in seconds to wait before running the asynchronous task:
<!DOCTYPE html>
<html>
<head>
<title>Synchronicity versus Asynchronicity</title>
<link rel="stylesheet" href="{{url_for('static',filename='css/materialize.min.css')}}">
<script src="{{ url_for('static',filename='js/jquery.min.js') }}"></script>
<script src="{{ url_for('static',filename='js/socket.io.js') }}"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body class="container">
<div class="row">
<h5>Click to start a post-scheduled asynchronous task with automatic feedback.</h5>
</div>
<div class="card-panel">
<form method='post' id="runTaskForm" action="/runPSATask">
<div>
<input id="duration" name="duration" placeholder="Enter duration in seconds, for example: 30" type="text">
<label for="duration">Duration</label>
</div>
<button style="height:50px;width:600px" type="submit" id="runTask">Run A Post Scheduled Asynchronous Task With Automatic Feedback</button>
</form>
</div>
<div class="row">
<div id="Messages" class="red-text" style="width:800px; height:400px; overflow-y:scroll;"></div>
</div>
<script>
$(document).ready(function() {
var namespace = '/runPSATask';
var url = 'http://' + document.domain + ':' + location.port + namespace;
var socket = io.connect(url);
socket.on('connect', function() {
socket.emit('join_room');
});
socket.on('msg', function(data) {
$("#Messages").prepend('<li>' + data.msg + '</li>');
});
socket.on('status', function(data) {
// alert('socket on status = ' + data.msg);
if (data.msg == 'End') {
$("#runTask").attr("disabled", false);
};
});
});
</script>
<script>
$("#runTask").click(function(e) {
$("#runTask").attr("disabled", true);
$("#Messages").empty();
$.ajax({
type: "Post",
url: '/runPSATask',
data: $("#runTaskForm").serialize(),
success: function(data) {
$("#Messages").empty();
$("#Messages").prepend('<li>The Task ' + data.taskid + ' has been submitted and will execute in ' + data.duration + ' seconds. </li>');
}
});
e.preventDefault();
console.log('runPSATask complete');
});
</script>
</body>
</html>
Next, here is app_async3.py, the Flask app for this scenario:
#app_async3.py
from gevent import monkey
monkey.patch_all()

from flask import render_template, jsonify, session, request
from random import randint
import uuid
import tasks
from init import app, socketio
from flask_socketio import join_room

@app.route("/", methods=['GET'])
def index():
    # create a unique session ID
    if 'uid' not in session:
        sid = str(uuid.uuid4())
        session['uid'] = sid
        print("Session ID stored =", sid)
    return render_template('index3.html')

#Run a Post-Scheduled Asynchronous Task With Automatic Feedback
@app.route("/runPSATask", methods=['POST'])
def long_async_sch_task():
    print("Running", "/runPSATask")
    # Generate a random number between MIN_WAIT_TIME and MAX_WAIT_TIME
    n = randint(app.config['MIN_WAIT_TIME'], app.config['MAX_WAIT_TIME'])
    data = {}
    data['sessionid'] = str(session['uid'])
    data['waittime'] = n
    data['namespase'] = '/runPSATask'
    data['duration'] = int(request.form['duration'])
    # Countdown represents the duration to wait in seconds before running the task
    task = tasks.long_async_sch_task.apply_async(args=[data], countdown=data['duration'])
    return jsonify({'taskid': task.id,
                    'sessionid': data['sessionid'],
                    'waittime': data['waittime'],
                    'namespace': data['namespase'],
                    'duration': data['duration']})

@socketio.on('connect', namespace='/runPSATask')
def socket_connect():
    print('Client connected to namespace /runPSATask -', request.sid)

@socketio.on('disconnect', namespace='/runPSATask')
def socket_disconnect():
    print('Client disconnected from namespace /runPSATask -', request.sid)

@socketio.on('join_room', namespace='/runPSATask')
def on_room():
    room = str(session['uid'])
    print(f"Socket joining room {room}")
    join_room(room)

@socketio.on_error_default
def error_handler(e):
    print(f"socket error: {e}, {str(request.event)}")

if __name__ == "__main__":
    socketio.run(app, debug=True)
Note that this time we are using the long_async_sch_task() task method from tasks.py.
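Like the rest of tasks.py, long_async_sch_task is not listed in the article; a minimal sketch, reusing the same hypothetical celery and socketio objects from the earlier tasks.py sketch, might be:

# tasks.py (sketch, continued) -- celery, socketio, and time as set up above
@celery.task
def long_async_sch_task(data):
    room = data['sessionid']
    namespace = data['namespase']
    # By the time this body runs, the countdown passed to apply_async()
    # has already elapsed; the task body itself works like the previous one.
    socketio.emit('msg', {'msg': 'Scheduled task started'},
                  namespace=namespace, room=room)
    time.sleep(data['waittime'])
    socketio.emit('msg', {'msg': 'Scheduled task finished'},
                  namespace=namespace, room=room)
    socketio.emit('status', {'msg': 'End'},
                  namespace=namespace, room=room)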
Run app_async3.py as before and open your browser.
Enter a duration (for example, 10) and press the button to create a post-scheduled asynchronous task. Once it is created, a message showing the task's details appears in the Messages box.
After waiting the amount of time specified in the duration field, you will see the task execute.
Also, if you go back over the Celery worker logs in the celery.logs file, you will notice the task's lifecycle.
To better monitor your Celery tasks, you can install Flower, a web-based tool for monitoring and administering Celery clusters.
Note: the Flower library was part of requirements.txt.
To view your Celery tasks using Flower, follow these steps:
On Windows:
$ async-venv\Scripts\flower.exe worker -A tasks --port=5555
On Linux/macOS:
$ async-venv/bin/flower worker -A tasks --port=5555
You will see the following information in the console.
Go back to the app and run a task, then open your browser at
http://localhost:5555
and go to the Tasks tab.
Once the task finishes, it will appear in the Flower dashboard as follows.
I hope this article helped you build a conceptual foundation for synchronous and asynchronous requests with the help of Celery. Synchronous requests can be slow while asynchronous requests execute quickly, but it is important to recognize which method is right for a given scenario. Sometimes, they even work together.
Link: https://www.thepythoncode.com/article/async-tasks-with-celery-redis-and-flask-in-python
1651634880
Yucca/PrerenderBundle
Do you work with Backbone, EmberJS, Angular, and the like on a daily basis? For an admin area that's fine, but on your front office you might encounter some SEO problems.
Thanks to Prerender.io, you can now dynamically render your JavaScript pages on your server using PhantomJS.
This bundle is largely inspired by bakura10's work on zfr-prerender.
Install the module by typing (or add it to your composer.json file):
$ php composer.phar require "yucca/prerender-bundle" "0.1.*@dev"
Register the bundle in app/AppKernel.php:
// app/AppKernel.php
public function registerBundles()
{
    return array(
        // ...
        new Yucca\PrerenderBundle\YuccaPrerenderBundle(),
    );
}
Enable the bundle's configuration in app/config/config.yml:
# app/config/config.yml
yucca_prerender: ~
When a crawler is detected, the bundle makes a GET request to the prerender service (a PhantomJS server) for the page's prerendered HTML and serves that back in place of the JavaScript page.
This bundle comes with a sane default, extracted from the prerender-node middleware, but you can easily customize it:
#app/config/config.yml
yucca_prerender:
    ....
By default, YuccaPrerenderBundle uses the Prerender.io service deployed at http://prerender.herokuapp.com. However, you may want to deploy it on your own server. To that end, you can configure YuccaPrerenderBundle to use your own server with the following configuration:
#app/config/config.yml
yucca_prerender:
    backend_url: http://localhost:3000
With this config, here is how YuccaPrerender will proxy a request for "https://google.com":
GET http://localhost:3000/https://google.com
YuccaPrerender decides whether to pre-render based on the User-Agent string, which it checks to see if a request comes from a bot. By default, these user agents are registered: 'baiduspider', 'facebookexternalhit', 'twitterbot'. Googlebot, Yahoo, and Bingbot are deliberately not in this list because we support _escaped_fragment_ for those crawlers instead of checking their user agents. Your site must understand the '#!' AJAX URL notation.
You can add other User-Agent strings to evaluate using this sample configuration:
#app/config/config.yml
yucca_prerender:
    crawler_user_agents: ['yandex', 'msnbot']
YuccaPrerender is configured by default to ignore all requests for resources with these extensions: .js, .css, .less, .png, .jpg, .jpeg, .gif, .pdf, .doc, .txt, .zip, .mp3, .rar, .exe, .wmv, .avi, .ppt, .mpg, .mpeg, .tif, .wav, .mov, .psd, .ai, .xls, .mp4, .m4a, .swf, .dat, .dmg, .iso, .flv, .m4v, .torrent. These are never pre-rendered.
You can add your own extensions using this sample configuration:
#app/config/config.yml
yucca_prerender:
    ignored_extensions: ['.less', '.pdf']
Whitelist a single URL path or multiple URL paths. Comparison uses regex, so be specific when possible. If a whitelist is supplied, only URLs containing a whitelisted path will be pre-rendered.
Here is a sample configuration that only pre-renders URLs that contain "/users/":
#app/config/config.yml
yucca_prerender:
    whitelist_urls: ['/users/*']
Note: remember to specify URLs here, not Symfony2 route names.
Blacklist a single URL path or multiple URL paths. Comparison uses regex, so be specific when possible. If a blacklist is supplied, all URLs will be pre-rendered except the ones containing a blacklisted part. Note that if the referer is part of the blacklist, the request won't be pre-rendered either.
Here is a sample configuration that pre-renders all URLs except the ones that contain "/users/":
#app/config/config.yml
yucca_prerender:
    blacklist_urls: ['/users/*']
Note: remember to specify URLs here, not Symfony2 route names.
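Putting it together, a combined configuration exercising the options shown above could look like the following. All values here are illustrative, and whether mixing a whitelist and a blacklist makes sense depends on your site:
#app/config/config.yml
yucca_prerender:
    backend_url: http://localhost:3000
    crawler_user_agents: ['yandex', 'msnbot']
    ignored_extensions: ['.less', '.pdf']
    whitelist_urls: ['/users/*']
    blacklist_urls: ['/admin/*']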
If you want to make sure your pages are rendering correctly, test them the way a crawler would request them.
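For instance, you can impersonate one of the registered crawler user agents with curl (the URL below is a placeholder) and verify that the response body contains the fully rendered HTML rather than an empty JavaScript shell:
$ curl -A "facebookexternalhit" http://your-site.example.com/users/1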
Thanks
Author: rjanot
Source Code: https://github.com/rjanot/YuccaPrerenderBundle
License: MIT License