Note: This is the latest version (version 2) of the project. If you are using version 1, check out the branch version-1.
Project Highlights
About The Project
This project is designed for a production-ready environment. It can handle the scale and complexity of a very demanding application. It is used by companies like MindOrks, AfterAcademy, and CuriousJr, whose apps/websites serve a 10+ million user base.
It is suitable for Web Apps, Mobile Apps, and other API services.
About The Author
I, Janishar Ali, created this project using my 10 years of experience in the tech industry, working for top companies. I enjoy sharing what I learn with the community. You can connect with me here:
Project Instructions
We will learn and build the backend application for a blogging platform. The main focus will be to create a maintainable and highly testable architecture.
Following are the features of this project:
I have also open-sourced a complete blogging website that runs on this backend project: Go to Repository. The repository [React.js Isomorphic Web Application Architecture] contains a complete React.js web application for a blogging platform that uses this project as its API server.
Install using Docker Compose [Recommended Method]
Run docker-compose up -d in a terminal from the repo directory.
Run The Tests
npm install
npm test
Install Without Docker [2nd Method]
Change DB_HOST to localhost in the .env and tests/.env.test files.
Run npm start and you will be able to access the API from http://localhost:3000.
Run the tests with npm test.
Postman APIs here: addons/postman
├── .vscode
│ ├── settings.json
│ ├── tasks.json
│ └── launch.json
├── .templates
├── src
│ ├── server.ts
│ ├── app.ts
│ ├── config.ts
│ ├── auth
│ │ ├── apikey.ts
│ │ ├── authUtils.ts
│ │ ├── authentication.ts
│ │ ├── authorization.ts
│ │ └── schema.ts
│ ├── core
│ │ ├── ApiError.ts
│ │ ├── ApiResponse.ts
│ │ ├── JWT.ts
│ │ ├── Logger.ts
│ │ └── utils.ts
│ ├── cache
│ │ ├── index.ts
│ │ ├── keys.ts
│ │ ├── query.ts
│ │ └── repository
│ │ ├── BlogCache.ts
│ │ └── BlogsCache.ts
│ ├── database
│ │ ├── index.ts
│ │ ├── model
│ │ │ ├── ApiKey.ts
│ │ │ ├── Blog.ts
│ │ │ ├── Keystore.ts
│ │ │ ├── Role.ts
│ │ │ └── User.ts
│ │ └── repository
│ │ ├── ApiKeyRepo.ts
│ │ ├── BlogRepo.ts
│ │ ├── KeystoreRepo.ts
│ │ ├── RoleRepo.ts
│ │ └── UserRepo.ts
│ ├── helpers
│ │ ├── asyncHandler.ts
│ │ ├── permission.ts
│ │ ├── role.ts
│ │ ├── security.ts
│ │ ├── utils.ts
│ │ └── validator.ts
│ ├── routes
│ │ ├── access
│ │ │ ├── credential.ts
│ │ │ ├── login.ts
│ │ │ ├── logout.ts
│ │ │ ├── schema.ts
│ │ │ ├── signup.ts
│ │ │ ├── token.ts
│ │ │ └── utils.ts
│ │ ├── blog
│ │ │ ├── editor.ts
│ │ │ ├── index.ts
│ │ │ ├── schema.ts
│ │ │ └── writer.ts
│ │ ├── blogs
│ │ │ ├── index.ts
│ │ │ └── schema.ts
│ │ ├── index.ts
│ │ └── profile
│ │ ├── schema.ts
│ │ └── user.ts
│ └── types
│ └── app-request.d.ts
├── tests
│ ├── auth
│ │ ├── apikey
│ │ │ ├── mock.ts
│ │ │ └── unit.test.ts
│ │ ├── authUtils
│ │ │ ├── mock.ts
│ │ │ └── unit.test.ts
│ │ ├── authentication
│ │ │ ├── mock.ts
│ │ │ └── unit.test.ts
│ │ └── authorization
│ │ ├── mock.ts
│ │ └── unit.test.ts
│ ├── core
│ │ └── jwt
│ │ ├── mock.ts
│ │ └── unit.test.ts
│ ├── cache
│ │ └── mock.ts
│ ├── database
│ │ └── mock.ts
│ ├── routes
│ │ ├── access
│ │ │ ├── login
│ │ │ │ ├── integration.test.ts
│ │ │ │ ├── mock.ts
│ │ │ │ └── unit.test.ts
│ │ │ └── signup
│ │ │ ├── mock.ts
│ │ │ └── unit.test.ts
│ │ └── blog
│ │ ├── index
│ │ │ ├── mock.ts
│ │ │ └── unit.test.ts
│ │ └── writer
│ │ ├── mock.ts
│ │ └── unit.test.ts
│ ├── .env.test
│ └── setup.ts
├── addons
│ └── init-mongo.js
├── keys
│ ├── private.pem
│ └── public.pem
├── .env
├── .gitignore
├── .dockerignore
├── .eslintrc
├── .eslintignore
├── .prettierrc
├── .prettierignore
├── .travis.yml
├── Dockerfile
├── docker-compose.yml
├── package-lock.json
├── package.json
├── jest.config.js
└── tsconfig.json
/src → server.ts → app.ts → /routes/index.ts → /auth/apikey.ts → schema.ts → /helpers/validator.ts → asyncHandler.ts → /routes/access/signup.ts → schema.ts → /helpers/validator.ts → asyncHandler.ts → /database/repository/UserRepo.ts → /database/model/User.ts → /core/ApiResponse.ts
Method and Headers
POST /signup/basic HTTP/1.1
Host: localhost:3000
x-api-key: GCMUDiuY5a7WvyUNt9n3QztToSHzK7Uj
Content-Type: application/json
Request Body
{
  "name" : "Janishar Ali",
  "email": "ali@github.com",
  "password": "changeit",
  "profilePicUrl": "https://avatars1.githubusercontent.com/u/11065002?s=460&u=1e8e42bda7e6f579a2b216767b2ed986619bbf78&v=4"
}
Response Body: 200
{
  "statusCode": "10000",
  "message": "Signup Successful",
  "data": {
    "user": {
      "_id": "63a19e5ba2730d1599d46c0b",
      "name": "Janishar Ali",
      "roles": [
        {
          "_id": "63a197b39e07f859826e6626",
          "code": "LEARNER",
          "status": true
        }
      ],
      "profilePicUrl": "https://avatars1.githubusercontent.com/u/11065002?s=460&u=1e8e42bda7e6f579a2b216767b2ed986619bbf78&v=4"
    },
    "tokens": {
      "accessToken": "some_token",
      "refreshToken": "some_token"
    }
  }
}
Response Body: 400
{
  "statusCode": "10001",
  "message": "Bad Parameters"
}
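For a quick manual check, the signup request documented above can be reproduced from a terminal with curl (this assumes the server is running locally and the sample x-api-key shown above is configured):
curl -X POST http://localhost:3000/signup/basic \
  -H 'x-api-key: GCMUDiuY5a7WvyUNt9n3QztToSHzK7Uj' \
  -H 'Content-Type: application/json' \
  -d '{"name": "Janishar Ali", "email": "ali@github.com", "password": "changeit", "profilePicUrl": "https://avatars1.githubusercontent.com/u/11065002?s=460&u=1e8e42bda7e6f579a2b216767b2ed986619bbf78&v=4"}'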
GET /profile/my HTTP/1.1
Host: localhost:3000
x-api-key: GCMUDiuY5a7WvyUNt9n3QztToSHzK7Uj
Content-Type: application/json
Authorization: Bearer <your_token_received_from_signup_or_login>
{
  "statusCode": "10000",
  "message": "success",
  "data": {
    "name": "Janishar Ali Anwar",
    "profilePicUrl": "https://avatars1.githubusercontent.com/u/11065002?s=460&u=1e8e42bda7e6f579a2b216767b2ed986619bbf78&v=4",
    "roles": [
      {
        "_id": "5e7b8acad7aded2407e078d7",
        "code": "LEARNER"
      },
      {
        "_id": "5e7b8c22d347fc2407c564a6",
        "code": "WRITER"
      },
      {
        "_id": "5e7b8c2ad347fc2407c564a7",
        "code": "EDITOR"
      }
    ]
  }
}
Author: janishar
Source Code: https://github.com/janishar/nodejs-backend-architecture-typescript
License: Apache-2.0 license
#typescript #javascript #nodejs #docker #redis #jwt #express
KeyDB is a high performance fork of Redis with a focus on multithreading, memory efficiency, and high throughput. In addition to performance improvements, KeyDB offers features such as Active Replication, FLASH Storage and Subkey Expires. KeyDB has an MVCC architecture that allows you to execute queries such as KEYS and SCAN without blocking the database and degrading performance.
KeyDB maintains full compatibility with the Redis protocol, modules, and scripts. This includes the atomicity guarantees for scripts and transactions. Because KeyDB keeps in sync with Redis development, KeyDB is a superset of Redis functionality, making KeyDB a drop-in replacement for existing Redis deployments.
On the same hardware KeyDB can achieve significantly higher throughput than Redis. Active Replication simplifies hot-spare failover, allowing you to easily distribute writes over replicas and use simple TCP-based load balancing/failover. KeyDB's higher performance allows you to do more with less hardware, which reduces operational costs and complexity.
The chart below compares several KeyDB and Redis setups, including the latest Redis6 io-threads option, and TLS benchmarks.
See the full benchmark results and setup information here: https://docs.keydb.dev/blog/2020/09/29/blog-post/
KeyDB has a different philosophy on how the codebase should evolve. We feel that ease of use, high performance, and a "batteries included" approach is the best way to create a good user experience. While we have great respect for the Redis maintainers it is our opinion that the Redis approach focuses too much on simplicity of the code base at the expense of complexity for the user. This results in the need for external components and workarounds to solve common problems - resulting in more complexity overall.
Because of this difference of opinion features which are right for KeyDB may not be appropriate for Redis. A fork allows us to explore this new development path and implement features which may never be a part of Redis. KeyDB keeps in sync with upstream Redis changes, and where applicable we upstream bug fixes and changes. It is our hope that the two projects can continue to grow and learn from each other.
The KeyDB team maintains this project as part of Snap Inc. KeyDB is used by Snap as part of its caching infrastructure and is fully open sourced. There is no separate commercial product and no paid support options available. We really value collaborating with the open source community and welcome PRs, bug reports, and open discussion. For community support or to get involved further with the project, check out our community support options here (Slack, forum, meetup, GitHub issues). Our team monitors these channels regularly.
Try the KeyDB Docker Image
Join us on Slack
Learn more using KeyDB's extensive documentation
Post to our Community Forum
See the KeyDB Roadmap to see what's in store
Please note keydb-benchmark and redis-benchmark are currently single threaded and too slow to properly benchmark KeyDB. We recommend using a redis cluster benchmark tool such as memtier. Please ensure your machine has enough cores for both KeyDB and memtier if testing locally. KeyDB expects exclusive use of any cores assigned to it.
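For example, a typical invocation of memtier against a local KeyDB instance might look like the following (the flags are standard memtier_benchmark options; the thread and client counts are illustrative, not tuned recommendations):
% memtier_benchmark -s 127.0.0.1 -p 6379 --threads 4 --clients 50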
With new features comes new options. All other configuration options behave as you'd expect. Your existing configuration files should continue to work unchanged.
server-threads N
server-thread-affinity [true/false]
The number of threads used to serve requests. This should be related to the number of queues available in your network hardware, not the number of cores on your machine. Because KeyDB uses spinlocks to reduce latency, setting this too high will reduce performance. We recommend using 4 here. By default this is set to two.
min-clients-per-thread 50
The minimum number of clients on a thread before KeyDB assigns new connections to a different thread. Tuning this parameter is a tradeoff between locking overhead and distributing the workload over multiple cores.
replica-weighting-factor 2
KeyDB will attempt to balance clients across threads evenly; however, replica clients are usually much more expensive than normal clients, so KeyDB will try to assign fewer clients to threads with a replica. This weighting factor is intended to help tune that behavior: a replica weighting factor of 2 means we treat a replica as the equivalent of two normal clients. Adjusting this value may improve performance when replication is used. The best weighting is workload specific - e.g. read-heavy workloads should set this to 1, while very write-heavy workloads may benefit from higher numbers.
active-client-balancing yes
Should KeyDB make active attempts at balancing clients across threads? This can impact performance when accepting new clients. By default this is enabled. If disabled, there is still a best effort from the kernel to distribute connections across threads with SO_REUSEPORT, but it will not be as fair.
active-replica yes
If you are using active-active replication, set the active-replica option to "yes". This will enable both instances to accept reads and writes while remaining synced. Click here to see more on active replication in our docs section. There are also Docker examples in the docs.
multi-master-no-forward no
Avoid forwarding RREPLAY messages to other masters? WARNING: This setting is dangerous! You must be certain all masters are connected to each other in a true mesh topology or data loss will occur! This option can be used to reduce multi-master bus traffic.
db-s3-object /path/to/bucket
If you would like KeyDB to dump and load directly to AWS S3, this option specifies the bucket. Using this option with the traditional RDB options will result in KeyDB backing up twice, to both locations. If both are specified, KeyDB will first attempt to load from the local dump file and, if that fails, load from S3. This requires the AWS CLI tools to be installed and configured; they are used under the hood to transfer the data.
storage-provider flash /path/to/flash
If you would like to use KeyDB FLASH storage, specify the storage medium followed by the directory path on your local SSD volume. Note that this feature is still considered experimental and should be used with discretion. See FLASH Documentation for more details on configuration and setting up your FLASH volume.
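Putting these together, a minimal sketch of how the threading and replication options above might sit in a keydb.conf (the values are the ones discussed above, not tuned recommendations):
server-threads 4
server-thread-affinity true
min-clients-per-thread 50
replica-weighting-factor 2
active-client-balancing yes
active-replica yes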
KeyDB can be compiled and is tested for use on Linux. KeyDB currently relies on SO_REUSEPORT's load balancing behavior which is available only in Linux. When we support marshalling connections across threads we plan to support other operating systems such as FreeBSD.
More on CentOS/Archlinux/Alpine/Debian/Ubuntu dependencies and builds can be found here: https://docs.keydb.dev/docs/build/
Init and clone submodule dependencies:
% git submodule init && git submodule update
Install dependencies:
% sudo apt install build-essential nasm autotools-dev autoconf libjemalloc-dev tcl tcl-dev uuid-dev libcurl4-openssl-dev libbz2-dev libzstd-dev liblz4-dev libsnappy-dev libssl-dev
Compiling is as simple as:
% make
To build with systemd support, you'll need systemd development libraries (such as libsystemd-dev on Debian/Ubuntu or systemd-devel on CentOS) and run:
% make USE_SYSTEMD=yes
To append a suffix to KeyDB program names, use:
% make PROG_SUFFIX="-alt"
***Note that the following dependencies may be needed:
% sudo apt-get install autoconf autotools-dev libnuma-dev libtool
To build with TLS support, use:
% make BUILD_TLS=yes
Running the tests with TLS enabled (you will need tcl-tls installed):
% ./utils/gen-test-certs.sh
% ./runtest --tls
To build with KeyDB FLASH support, use:
% make ENABLE_FLASH=yes
***Note that the KeyDB FLASH feature is considered experimental (beta) and should be used with discretion.
KeyDB has some dependencies, which are included in the deps directory. make does not automatically rebuild dependencies even if something in their source code changes.
When you update the source code with git pull, or when code inside the dependencies tree is modified in any other way, make sure to use the following command in order to really clean everything and rebuild from scratch:
make distclean
This will clean: jemalloc, lua, hiredis, linenoise.
Also, if you force certain build options like a 32 bit target, no C compiler optimizations (for debugging purposes), or other similar build time options, those options are cached indefinitely until you issue a make distclean command.
If after building KeyDB with a 32 bit target you need to rebuild it with a 64 bit target, or the other way around, you need to perform a make distclean in the root directory of the KeyDB distribution.
In case of build errors when trying to build a 32 bit binary of KeyDB, try the following command line instead of make 32bit:
% make CFLAGS="-m32 -march=native" LDFLAGS="-m32"
Selecting a non-default memory allocator when building KeyDB is done by setting the MALLOC environment variable. KeyDB is compiled and linked against libc malloc by default, with the exception of jemalloc being the default on Linux systems. This default was picked because jemalloc has proven to have fewer fragmentation problems than libc malloc.
To force compiling against libc malloc, use:
% make MALLOC=libc
To compile against jemalloc on Mac OS X systems, use:
% make MALLOC=jemalloc
By default, KeyDB will build using the POSIX clock_gettime function as the monotonic clock source. On most modern systems, the internal processor clock can be used to improve performance. Cautions can be found here: http://oliveryang.net/2015/09/pitfalls-of-TSC-usage/
To build with support for the processor's internal instruction clock, use:
% make CFLAGS="-DUSE_PROCESSOR_CLOCK"
KeyDB will build with a user friendly colorized output by default. If you want to see a more verbose output, use the following:
% make V=1
To run KeyDB with the default configuration, just type:
% cd src
% ./keydb-server
If you want to provide your keydb.conf, you have to run it using an additional parameter (the path of the configuration file):
% cd src
% ./keydb-server /path/to/keydb.conf
It is possible to alter the KeyDB configuration by passing parameters directly as options using the command line. Examples:
% ./keydb-server --port 9999 --replicaof 127.0.0.1 6379
% ./keydb-server /etc/keydb/6379.conf --loglevel debug
All the options in keydb.conf are also supported as options using the command line, with exactly the same name.
Please consult the TLS.md file for more information on how to use KeyDB with TLS.
You can use keydb-cli to play with KeyDB. Start a keydb-server instance, then in another terminal try the following:
% cd src
% ./keydb-cli
keydb> ping
PONG
keydb> set foo bar
OK
keydb> get foo
"bar"
keydb> incr mycounter
(integer) 1
keydb> incr mycounter
(integer) 2
keydb>
You can find the list of all the available commands at https://docs.keydb.dev/docs/commands/
In order to install KeyDB binaries into /usr/local/bin, just use:
% make install
You can use make PREFIX=/some/other/directory install
if you wish to use a different destination.
Make install will just install binaries in your system, but will not configure init scripts and configuration files in the appropriate place. This is not needed if you just want to play a bit with KeyDB, but if you are installing it the proper way for a production system, we have a script that does this for Ubuntu and Debian systems:
% cd utils
% ./install_server.sh
Note: install_server.sh
will not work on Mac OSX; it is built for Linux only.
The script will ask you a few questions and will setup everything you need to run KeyDB properly as a background daemon that will start again on system reboots.
You'll be able to stop and start KeyDB using the script named /etc/init.d/keydb_<portnumber>
, for instance /etc/init.d/keydb_6379
.
KeyDB works by running the normal Redis event loop on multiple threads. Network I/O and query parsing are done concurrently. Each connection is assigned a thread on accept(). Access to the core hash table is guarded by a spinlock. Because hash table access is extremely fast, this lock has low contention. Transactions hold the lock for the duration of the EXEC command. Modules work in concert with the GIL, which is only acquired when all server threads are paused. This maintains the atomicity guarantees modules expect.
Unlike most databases, the core data structure is the fastest part of the system. Most of the query time comes from parsing the RESP protocol and copying data to/from the network.
Note: by contributing code to the KeyDB project in any form, including sending a pull request via Github, a code fragment or patch via private email or public discussion groups, you agree to release your code under the terms of the BSD license that you can find in the COPYING file included in the KeyDB source distribution.
Please see the CONTRIBUTING file in this source distribution for more information.
KeyDB is now a part of Snap Inc! Check out the announcement here
Release v6.3.0 is here with major improvements as we consolidate our Open Source and Enterprise offerings into a single BSD-3 licensed project. See our roadmap for details.
Want to extend KeyDB with Javascript? Try ModJS
Need Help? Check out our extensive documentation.
KeyDB is on Slack. Click here to learn more and join the KeyDB Community Slack workspace.
Author: Snapchat
Source Code: https://github.com/Snapchat/KeyDB
License: BSD-3-Clause license
Rails 7 App with Preinstalled Tools is Ready in Minutes!
Setting up a typical Rails environment from scratch is usually difficult and time-consuming.
Now, if you have Ruby and Docker, you can have a working Rails environment in about 5 minutes with no manual effort.
Logotype | Description | Why it was added |
---|---|---|
![]() | Docker | Helps to keep all required services in containers. To have fast and predictable installation process in minutes |
![]() | PostgreSQL | The most popular relational database |
![]() | Ruby 3.2 | Most recent version of Ruby |
![]() | Rails 7 | Most recent version of Rails |
![]() | gem "config" | Configuration management tool |
![]() | Elasticsearch | The world’s leading Search engine |
![]() | Chewy | Ruby Connector to Elasticsearch |
![]() | Redis | In-memory data store. For caching and as a dependency of Sidekiq |
![]() | Sidekiq | Job Scheduler and Async Tasks Executor. Can be used as a stand alone tool or as ActiveJob backend |
![]() | Import Maps | Rails' recommended way to process JavaScript |
![]() | Puma | Application Web Server. To launch Rails app |
What I'm going to add...
Logotype | Description | Why it was added |
---|---|---|
![]() | Kaminari | Pagination solution |
![]() | Devise | Authentication solution for Rails |
![]() | Devise | Login with Facebook and Google |
![]() | Devise and Action Mailer | Sending emails for account confirmations |
![]() | Letter Opener | Email previewer for development |
![]() | whenever | Linux Cron based periodical tasks |
![]() | RSpec | Testing Framework for Rails |
![]() | Rubocop | Ruby static code analyzer (a.k.a. linter) and formatter. |
All trademarks, logos and brand names are the property of their respective owners.
On your host you have:
ONE!
git clone https://github.com/the-teacher/rails7-startkit.git
TWO!
cd rails7-startkit
THREE!
bin/setup
You will see something like this:
1. Launching PgSQL container
2. Launching ElasticSearch Container
3. Launching Rails container
4. Installing Gems. Please Wait
5. Create DB. Migrate DB. Create Seeds
6. Launching Redis Container
7. Indexing Article Model
8. Launching Rails App with Puma
9. Launching Sidekiq
10. Visit: http://localhost:3000
Index Page of the Project
bin/ commands
From the root of the project:
Command | Description |
---|---|
bin/setup | Download images, run containers, initialize data, launch all processes. |
bin/open | Get in Rails Container (`rails` by default) |
bin/open rails | Get in Rails Container |
bin/open psql | Get in PgSQL Container |
bin/open redis | Get in Redis Container |
bin/open elastic | Get in ElasticSearch Container |
bin/status | To see running containers and launched services |
bin/start | Start everything if it is stopped |
bin/stop | Stop processes in Rails container |
bin/stop_all | Stop everything if it is running |
bin/index | Run Search engines indexation |
bin/reset | Reset data of services in the ./db folder |
For demonstration, education and maintenance purposes I use the following approach:
Data: kept in the ./db folder, in UPPERCASED directories
./db
├── ELASTIC
├── PGSQL
└── REDIS
Configuration Files: kept in ./config, named _UNDERSCORED and UPPERCASED
./config
├── _CONFIG.yml
├── _PUMA.rb
└── _SIDEKIQ.yml
Initializers: kept in ./config/initializers, named _UNDERSCORED and UPPERCASED
./config/initializers/
├── _CHEWY.rb
├── _CONFIG.rb
├── _REDIS.rb
└── _SIDEKIQ.rb
To own files and run Rails inside the container, I use the user:group lucky:lucky (7777:7777).
If you would like to run the project in a Linux environment, create the group lucky (7777) and the user lucky (7777), or pass the RUN_AS=7777:7777 option.
From the root of the project:
bin/open rails
Now you are in the Rails container and you can run everything as usual:
RAILS_ENV=test rake db:create
rake test
What is the idea of this project?
For many years, Rails has given you the freedom to choose your development tools: different databases, different paginators, different search engines, different delayed-job solutions.
That is great, but every time you have to choose something and install it from scratch.
I have made my choices about many of these solutions and tools.
I want to install my minimal pack of tools now and reuse my StartKit every time I start a new project.
With Docker I can roll out my minimal application with all required preinstalled tools in minutes, not in hours or in days.
Why did you create this project?
I hadn't worked with Rails for the last 4 or 5 years. I wanted to learn new approaches and techniques, and I found that there is still no simple way to set up a blank app with the most popular tools.
So, why not make my own playground?
How do you choose technologies for the StartKit?
I use tools that I like or want to learn.
I use tools that I think are the most popular ones.
It looks good for development. What about production?
I'm not a DevOps engineer, but I have a vision of how to deploy this code to production.
Right now it is not documented; that is in my plans.
Author: the-teacher
Source Code: https://github.com/the-teacher/rails7-startkit
License: MIT
PostgreSQL in Great STYle
A battery-included, open-source RDS alternative.
Latest Beta: v2.0.0-b5 | Stable Version: v1.5.1 | Demo | GitHub Pages | Website
The current master branch is in beta (v2.0.0-b5). Check v1.5.1 for the stable release.
Pigsty is a Me-Better Open Source RDS Alternative with:
Check Feature for details.
It takes four steps to install Pigsty: Download, Bootstrap, Configure and Install.
Prepare a new node with Linux x86_64 EL compatible OS, then run as a sudo-able user:
bash -c "$(curl -fsSL http://download.pigsty.cc/getb)" && cd ~/pigsty
./bootstrap && ./configure && ./install.yml # install latest pigsty
Then you will have a Pigsty singleton node ready, with web services on port 80 and Postgres on port 5432.
getb will get the latest beta, v2.0.0-b5, while get will use the last stable release, v1.5.1.
Download Directly
You can also download the Pigsty source and packages with git or curl directly:
curl -L https://github.com/Vonng/pigsty/releases/download/v2.0.0-b5/pigsty-v2.0.0-b5.tgz -o ~/pigsty.tgz
curl -L https://github.com/Vonng/pigsty/releases/download/v2.0.0-b5/pigsty-pkg-v2.0.0-b5.el7.x86_64.tgz -o /tmp/pkg.tgz
# or using git if curl not available
git clone https://github.com/Vonng/pigsty; cd pigsty; git checkout v2.0.0-b5
Check Installation for details.
Pigsty uses a modular design. There are six default modules available:
INFRA: Local yum repo, Nginx, DNS, and the entire Prometheus & Grafana observability stack.
NODE: Init node name, repo, pkg, NTP, ssh, admin, tune, expose services, collect logs & metrics.
ETCD: Init etcd cluster for HA Postgres DCS or Kubernetes, used as a distributed config store.
PGSQL: Autonomous self-healing PostgreSQL cluster powered by Patroni, Pgbouncer, PgBackrest & HAProxy.
REDIS: Deploy Redis servers in standalone master-replica, sentinel, and native cluster mode, optional.
MINIO: S3-compatible object storage service used as an optional central backup server for PGSQL.
You can compose them freely in a declarative manner. If you want host monitoring, INFRA & NODE will suffice. ETCD and PGSQL are used for HA PG clusters; installing them on multiple nodes will automatically form an HA cluster. You can also reuse the Pigsty infra and develop your own modules; KAFKA, MYSQL, GPSQL, and more will come.
The default install.yml playbook in Get Started will install INFRA, NODE, ETCD & PGSQL on the current node, which gives you a battery-included PostgreSQL singleton instance (admin_ip:5432) with everything ready. This node can then be used as an admin center & infra provider to manage, deploy & monitor more nodes & clusters.
Check Architecture for details.
To deploy a 3-node HA Postgres cluster with streaming replication, define a new cluster on all.children.pg-test of pigsty.yml:
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline }
  vars: { pg_cluster: pg-test }
Then create it with built-in playbooks:
bin/pgsql-add pg-test # init pg-test cluster
You can deploy different kinds of instance roles such as primary, replica, offline, delayed, sync standby, and different kinds of clusters, such as standby clusters, Citus clusters, and even Redis/MinIO/Etcd clusters.
Example: Complex Postgres Customize
pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
  vars:
    pg_cluster: pg-meta
    pg_databases:                   # define business databases on this cluster, array of database definition
      - name: meta                  # REQUIRED, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql          # optional, database sql baseline path, (relative path among ansible search path, e.g files/)
        pgbouncer: true             # optional, add this database to pgbouncer database list? true by default
        schemas: [pigsty]           # optional, additional schemas to be created, array of schema names
        extensions:                 # optional, additional extensions to be installed: array of `{name[,schema]}`
          - { name: postgis , schema: public }
          - { name: timescaledb }
        comment: pigsty meta database # optional, comment string for this database
        owner: postgres             # optional, database owner, postgres by default
        template: template1         # optional, which template to use, template1 by default
        encoding: UTF8              # optional, database encoding, UTF8 by default. (MUST same as template database)
        locale: C                   # optional, database locale, C by default. (MUST same as template database)
        lc_collate: C               # optional, database collate, C by default. (MUST same as template database)
        lc_ctype: C                 # optional, database ctype, C by default. (MUST same as template database)
        tablespace: pg_default      # optional, default tablespace, 'pg_default' by default.
        allowconn: true             # optional, allow connection, true by default. false will disable connect at all
        revokeconn: false           # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
        register_datasource: true   # optional, register this database to grafana datasources? true by default
        connlimit: -1               # optional, database connection limit, default -1 disable limit
        pool_auth_user: dbuser_meta # optional, all connection to this pgbouncer database will be authenticated by this user
        pool_mode: transaction      # optional, pgbouncer pool mode at database level, default transaction
        pool_size: 64               # optional, pgbouncer pool size at database level, default 64
        pool_size_reserve: 32       # optional, pgbouncer pool size reserve at database level, default 32
        pool_size_min: 0            # optional, pgbouncer pool size min at database level, default 0
        pool_max_db_conn: 100       # optional, max database connections at database level, default 100
      - { name: grafana ,owner: dbuser_grafana ,revokeconn: true ,comment: grafana primary database }
      - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
      - { name: kong ,owner: dbuser_kong ,revokeconn: true ,comment: kong the api gateway database }
      - { name: gitea ,owner: dbuser_gitea ,revokeconn: true ,comment: gitea meta database }
      - { name: wiki ,owner: dbuser_wiki ,revokeconn: true ,comment: wiki meta database }
    pg_users:                       # define business users/roles on this cluster, array of user definition
      - name: dbuser_meta           # REQUIRED, `name` is the only mandatory field of a user definition
        password: DBUser.Meta       # optional, password, can be a scram-sha-256 hash string or plain text
        login: true                 # optional, can log in, true by default (new biz ROLE should be false)
        superuser: false            # optional, is superuser? false by default
        createdb: false             # optional, can create database? false by default
        createrole: false           # optional, can create role? false by default
        inherit: true               # optional, can this role use inherited privileges? true by default
        replication: false          # optional, can this role do replication? false by default
        bypassrls: false            # optional, can this role bypass row level security? false by default
        pgbouncer: true             # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
        connlimit: -1               # optional, user connection limit, default -1 disable limit
        expire_in: 3650             # optional, now + n days when this role is expired (OVERWRITE expire_at)
        expire_at: '2030-12-31'     # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
        comment: pigsty admin user  # optional, comment string for this user/role
        roles: [dbrole_admin]       # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
        parameters: {}              # optional, role level parameters with `ALTER ROLE SET`
        pool_mode: transaction      # optional, pgbouncer pool mode at user level, transaction by default
        pool_connlimit: -1          # optional, max database connections at user level, default -1 disable limit
      - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
      - {name: dbuser_grafana ,password: DBUser.Grafana ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for grafana database }
      - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database }
      - {name: dbuser_kong ,password: DBUser.Kong ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for kong api gateway }
      - {name: dbuser_gitea ,password: DBUser.Gitea ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for gitea service }
      - {name: dbuser_wiki ,password: DBUser.Wiki ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for wiki.js service }
    pg_services:                    # extra services in addition to pg_default_services, array of service definition
      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
      - name: standby               # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
        port: 5435                  # required, service exposed port (work as kubernetes service node port mode)
        ip: "*"                     # optional, service bind ip address, `*` for all ip by default
        selector: "[]"              # required, service member selector, use JMESPath to filter inventory
        dest: default               # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
        check: /sync                # optional, health check url path, / by default
        backup: "[? pg_role == `primary`]"  # backup server selector
        maxconn: 3000               # optional, max allowed front-end connection
        balance: roundrobin         # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    pg_hba_rules:
      - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1
    node_crontab:                   # make a full backup 1 am everyday
      - '00 01 * * * postgres /pg/bin/pg-backup full'
Example: Security Enhanced PG Cluster with Delayed Replica
pg-meta:                            # 3 instance postgres cluster `pg-meta`
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
    pg_conf: crit.yml
    pg_users:
      - { name: dbuser_meta , password: DBUser.Meta , pgbouncer: true , roles: [ dbrole_admin ] , comment: pigsty admin user }
      - { name: dbuser_view , password: DBUser.Viewer , pgbouncer: true , roles: [ dbrole_readonly ] , comment: read-only viewer for meta database }
    pg_databases:
      - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public}, {name: timescaledb}]}
    pg_services:
      - { name: standby ,src_ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1

# OPTIONAL delayed cluster for pg-meta
pg-meta-delay:                      # delayed instance for pg-meta (1 hour ago)
  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
  vars: { pg_cluster: pg-meta-delay }
Example: Citus Cluster: 1 Coordinator x 3 Data Nodes
# citus coordinator node
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
    pg_users: [{ name: citus ,password: citus ,pgbouncer: true ,roles: [dbrole_admin]}]
    pg_databases:
      - { name: meta ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public},{ name: citus}] ,baseline: cmdb.sql ,comment: pigsty meta database}

# citus data node 1,2,3
pg-node1:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-node1
    vip_address: 10.10.10.3
    pg_users: [{ name: citus ,password: citus ,pgbouncer: true ,roles: [dbrole_admin]}]
    pg_databases: [{ name: meta ,owner: citus , extensions: [{name: citus},{name: postgis, schema: public}]}]
pg-node2:
  hosts:
    10.10.10.12: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-node2
    vip_address: 10.10.10.4
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus }, { name: postgis, schema: public } ] } ]
pg-node3:
  hosts:
    10.10.10.13: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-node3
    vip_address: 10.10.10.5
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus }, { name: postgis, schema: public } ] } ]
Example: Redis Cluster/Sentinel/Standalone
redis-ms:                           # redis classic primary & replica
  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6501: { }, 6502: { replica_of: '10.10.10.13 6501' } } } }
  vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }
redis-meta:                         # redis sentinel x 3
  hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 6001: { } ,6002: { } , 6003: { } } } }
  vars: { redis_cluster: redis-meta, redis_mode: sentinel ,redis_max_memory: 16MB }
redis-test:                         # redis native cluster: 3m x 3s
  hosts:
    10.10.10.12: { redis_node: 1 ,redis_instances: { 6501: { } ,6502: { } ,6503: { } } }
    10.10.10.13: { redis_node: 2 ,redis_instances: { 6501: { } ,6502: { } ,6503: { } } }
  vars: { redis_cluster: redis-test ,redis_mode: cluster, redis_max_memory: 32MB }
Example: ETCD 3 Node Cluster
etcd:                               # dcs service for postgres/patroni ha consensus
  hosts:                            # 1 node for testing, 3 or 5 for production
    10.10.10.10: { etcd_seq: 1 }    # etcd_seq required
    10.10.10.11: { etcd_seq: 2 }    # assign from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }    # odd number please
  vars:                             # cluster level parameter override roles/etcd
    etcd_cluster: etcd              # mark etcd cluster name etcd
    etcd_safeguard: false           # safeguard against purging
    etcd_clean: true                # purge etcd during init process
Example: Minio 3 Node Deployment
minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...2}'      # use two disk per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    haproxy_services:
      - name: minio                 # [REQUIRED] service name, unique
        port: 9002                  # [REQUIRED] service port, unique
        options:
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
Check Configuration for details.
Author: Vonng
Source Code: https://github.com/Vonng/pigsty
License: AGPL-3.0 license
amphp/redis provides non-blocking access to Redis instances. All I/O operations are handled by the Amp concurrency framework, so you should be familiar with its basics.
This package can be installed as a Composer dependency.
composer require amphp/redis
<?php
require __DIR__ . '/vendor/autoload.php';
use Amp\Redis\Config;
use Amp\Redis\Redis;
use Amp\Redis\RemoteExecutor;
Amp\Loop::run(static function () {
    $redis = new Redis(new RemoteExecutor(Config::fromUri('redis://')));
    yield $redis->set('foo', '21');
    $result = yield $redis->increment('foo', 21);
    \var_dump($result); // int(42)
});
If you discover any security related issues, please email me@kelunik.com instead of using the issue tracker.
Author: amphp
Source Code: https://github.com/amphp/redis
License: MIT license
A Redis client for the Crystal programming language.
Add it to your shard.yml:
dependencies:
  redis:
    github: stefanwille/crystal-redis
and then install the library into your project:
$ shards install
On MacOS X you may get this error:
ld: library not found for -lssl (this usually means you need to install the development package for libssl)
clang: error: linker command failed with exit code 1 (use -v to see invocation)
...
Or this warning:
Package libssl was not found in the pkg-config search path.
Perhaps you should add the directory containing `libssl.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libssl' found
Package libcrypto was not found in the pkg-config search path.
Perhaps you should add the directory containing `libcrypto.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libcrypto' found
The problem is that Crystal can't find OpenSSL, because it is not installed by default on macOS.
The fix: install OpenSSL with Homebrew and point PKG_CONFIG_PATH at it:
$ brew install openssl
$ export PKG_CONFIG_PATH=/usr/local/opt/openssl/lib/pkgconfig
Note: Please write me if you know a better way!
This library needs Crystal version >= 0.34.0. I haven't tested older Crystal versions.
Require the package:
require "redis"
then create a client:
redis = Redis.new
Then you can call Redis commands on the redis object:
redis.set("foo", "bar")
redis.get("foo")
Since version 2.0.0, a connection pool is built in. It is used implicitly through Redis::PooledClient:
redis = Redis::PooledClient.new
10.times do |i|
  spawn do
    redis.set("foo#{i}", "bar")
    redis.get("foo#{i}") # => "bar"
  end
end
This redis instance can be shared across fibers, and it accepts the same Redis commands as the Redis class. It automatically allocates and frees connections from/to the pool, per command.
:warning: If you are using Redis in a web context (e.g. with a framework like Kemal), you need to use connection pooling.
To get started, see the examples for the Redis class.
I have benchmarked Crystal-Redis against several other client libraries in various programming languages in this blog article.
Here are some results:
Crystal: With this library I get > 680,000 commands per second using pipeline on a MacBook Air with a single client thread.
C: The equivalent program written in C with Hiredis gets me 340,000 commands per second.
Ruby: Ruby 2.2.1 with the redis-rb and Hiredis driver handles 150,000 commands per second.
Read more results for Go, Java, Node.js.
I have exercised every API method in the spec and built some example programs. Some people report production usage.
I took great care to make this library very usable with respect to API, reliability and documentation.
This project requires a locally running Redis server on port 6379, with a Unix socket located at /tmp/redis.sock. In Homebrew's default redis.conf the Unix domain socket option is disabled. To enable it, edit /usr/local/etc/redis.conf (or wherever your redis.conf is) and uncomment this line:
# unixsocket /tmp/redis.sock
so that it reads
unixsocket /tmp/redis.sock
Then you can run the specs via
$ crystal spec
Running the spec will delete database number 0!
If you have questions or need help, please open a ticket in the GitHub issue tracker. This way others can benefit from the discussion.
Author: Stefanwille
Source Code: https://github.com/stefanwille/crystal-redis
License: MIT license
Redis is a data store that keeps key-value data structures in memory and on disk, speeding up application development through the availability of very versatile data structures. You can also use it as a NoSQL database or even as a message broker with the Pub-Sub pattern. Redis is written in the C programming language. The Redis project is developed and maintained by a project core team and, as of 2015, is sponsored by Redis Labs. This tutorial will help you install the Redis server and PHP extensions on an Ubuntu 22.04 LTS system.
Redis packages are available under the default apt repository for the installation of Redis on an Ubuntu VPS.
Start by updating the packages to the latest version. Run the following command:
sudo apt update
Install Redis using the following command.
sudo apt install redis-server
Once the installation is completed, you can check the version of Redis using the following command.
redis-server -v
Redis can start without a configuration file using a built-in default configuration. To change any parameters, however, use its configuration file: /etc/redis/redis.conf. Edit the Redis configuration file in a text editor to make changes:
sudo nano /etc/redis/redis.conf
Configure Memory
Update the following values in the Redis configuration file /etc/redis/redis.conf:
maxmemory 256mb
maxmemory-policy allkeys-lru
Configure supervisord
For Ubuntu, we can safely select systemd as the supervision method so that Redis can interact with your supervision tree. This is configured in /etc/redis/redis.conf.
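The relevant redis.conf directive is supervised; for Ubuntu with systemd it reads:
supervised systemd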
Binding to localhost
By default, the Redis server doesn't accept remote connections. You can connect to Redis only from 127.0.0.1 (localhost) - the machine where Redis is running.
If you are using a single-server setup, where the client connecting to the database also runs on the same host, you should not enable remote access. The relevant directive in /etc/redis/redis.conf is:
bind 127.0.0.1 ::1
Verify that Redis is listening on port 6379. Run the following command:
ss -an | grep 6379
Configure Password
Configuring a Redis password enables one of its two built-in security features: the AUTH command, which requires clients to authenticate before accessing the database. The password is set in /etc/redis/redis.conf.
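The directive is requirepass; the value below is only a placeholder, so substitute a strong password of your own:
requirepass <your_strong_password>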
Restart Redis for the changes to take effect.
sudo systemctl restart redis-server
Next, if you need to use Redis with a PHP application, you need to install Redis PHP extension on your Ubuntu system. To install the Redis PHP extension, type:
sudo apt install php-redis
The installer will automatically enable the redis extension for all pre-installed PHP versions. If you install a new PHP version afterwards, you can use the command below to enable the redis module for it. Run the following command:
sudo phpenmod -v <Any PHP Version> -s ALL redis
Redis provides redis-cli utility to connect to the Redis server. Run the following command:
redis-cli
A few more examples of the redis-cli command-line tool:
redis-cli info
redis-cli info stats
redis-cli info server
You can find more details about redis-cli here.
Now that you have your service up and running, let's go over basic management commands.
To stop your service, run this command:
sudo systemctl stop redis-server
To start your service, run this command:
sudo systemctl start redis-server
To disable your service, run this command:
sudo systemctl disable redis-server
To enable your service, run this command:
sudo systemctl enable redis-server
To check the status of your service, run this command:
sudo systemctl status redis-server
Thanks for reading !!!
Originally published at https://techvblogs.com/
This article looks at how server-side sessions can be utilized in Flask with Flask-Session and Redis.
This article is part of a two-part series on how sessions can be used in Flask:
This article assumes that you have prior experience with Flask. If you're interested in learning more about Flask, check out my course on how to build, test, and deploy a Flask application:
Since HTTP is a stateless protocol, each request has no knowledge of any requests previously executed:
While this greatly simplifies client/server communication, web apps typically need a way to store data between each request as a user interacts with the app itself.
For example, on an e-commerce website, you'd typically store items that a user has added to their shopping cart to a database so that once they're done shopping they can view their cart to purchase the items. This workflow, of storing items in the database, only works for authenticated users, though. So, you need a way to store user-specific data for non-authenticated users between requests.
That's where sessions come into play.
A session is used to store information related to a user, across different requests, as they interact with a web app. So, in the above example, the shopping cart items would be added to a user's session.
The data stored for a session should be considered temporary, as the session will eventually expire. In order to permanently store data, you need to utilize a database.
Computer storage is a nice analogy here: Temporary items on a computer are stored in RAM (Random Access Memory), much like sessions, while permanent items are stored on the hard drive, much like databases.
Examples of data to store in a session:
Examples of data to store in a database:
In Flask, you can store information specific to a user for the duration of a session. Saving data for use throughout a session allows the web app to keep data persistent over multiple requests -- i.e., as a user accesses different pages within a web app.
There are two types of sessions commonly used in web development:
Flask uses the client-side approach as the built-in solution to sessions.
Curious about client-side sessions? Review the Sessions in Flask article.
Server-side sessions store the data associated with the session on the server in a particular data storage solution. A cryptographically-signed cookie is included in each response from Flask for specifying a session identifier. This cookie is returned in the next request to the Flask app, which is then used to load the session data from the server-side storage.
Pros:
Cons:
Flask-Session is an extension for Flask that enables server-side sessions. It supports a variety of solutions for storing the session data on the server-side:
In this article, we'll be using Redis, an in-memory data structure store, due to its fast read/write speed and ease of setup.
Refer to the Configuration section of the Flask-Session documentation for how to configure other data storage solutions.
Flask-Session uses Flask's Session Interface, which provides a simple way to replace Flask's built-in session implementation, so you can continue to use the session object as you normally would with the built-in client-side session implementation.
The following app.py file illustrates how to use server-side sessions in Flask with Flask-Session:
from datetime import timedelta
import redis
from flask import Flask, render_template_string, request, session, redirect, url_for
from flask_session import Session
# Create the Flask application
app = Flask(__name__)
# Details on the Secret Key: https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY
# NOTE: The secret key is used to cryptographically-sign the cookies used for storing
# the session identifier.
app.secret_key = 'BAD_SECRET_KEY'
# Configure Redis for storing the session data on the server-side
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_PERMANENT'] = False
app.config['SESSION_USE_SIGNER'] = True
app.config['SESSION_REDIS'] = redis.from_url('redis://localhost:6379')
# Create and initialize the Flask-Session object AFTER `app` has been configured
server_session = Session(app)
@app.route('/set_email', methods=['GET', 'POST'])
def set_email():
    if request.method == 'POST':
        # Save the form data to the session object
        session['email'] = request.form['email_address']
        return redirect(url_for('get_email'))
    return """
        <form method="post">
            <label for="email">Enter your email address:</label>
            <input type="email" id="email" name="email_address" required />
            <button type="submit">Submit</button>
        </form>
        """
@app.route('/get_email')
def get_email():
    return render_template_string("""
        {% if session['email'] %}
            <h1>Welcome {{ session['email'] }}!</h1>
        {% else %}
            <h1>Welcome! Please enter your email <a href="{{ url_for('set_email') }}">here.</a></h1>
        {% endif %}
    """)
@app.route('/delete_email')
def delete_email():
    # Clear the email stored in the session object
    session.pop('email', default=None)
    return '<h1>Session deleted!</h1>'
if __name__ == '__main__':
    app.run()
To run this example, start by creating and activating a new virtual environment:
$ mkdir flask-server-side-sessions
$ cd flask-server-side-sessions
$ python3 -m venv venv
$ source venv/bin/activate
Install and run Redis.
The quickest way to get Redis up and running is with Docker:
$ docker run --name some-redis -d -p 6379:6379 redis
If you're not a Docker user, check out these resources:
Install Flask, Flask-Session, and redis-py:
(venv)$ pip install Flask Flask-Session redis
Since we're using Redis as the session data store, redis-py is required.
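As a quick sanity check that the Redis container is reachable (a small aside, not part of the tutorial's files), you can ping it from a Python shell with redis-py:
import redis

r = redis.from_url('redis://localhost:6379')
print(r.ping())  # prints True when the Redis server is reachable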
Save the above code to an app.py file. Then, start the Flask development server:
(venv)$ export FLASK_APP=app.py
(venv)$ export FLASK_ENV=development
(venv)$ python -m flask run
Now navigate to http://localhost:5000/get_email using your favorite web browser:
After the Flask application (app) is created, the secret key needs to be specified:
# Details on the Secret Key: https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY
# NOTE: The secret key is used to cryptographically-sign the cookies used for storing
# the session identifier.
app.secret_key = 'BAD_SECRET_KEY'
The secret key is used to cryptographically-sign the cookies that store the session identifier.
Next, the configuration of Redis as the storage solution for the server-side session data needs to be defined:
# Configure Redis for storing the session data on the server-side
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_PERMANENT'] = False
app.config['SESSION_USE_SIGNER'] = True
app.config['SESSION_REDIS'] = redis.from_url('redis://localhost:6379')
# Create and initialize the Flask-Session object AFTER `app` has been configured
server_session = Session(app)
Configuration variables:
SESSION_TYPE - specifies which type of session interface to use
SESSION_PERMANENT - indicates whether to use permanent sessions (defaults to True)
SESSION_USE_SIGNER - indicates whether to sign the session cookie identifier (defaults to False)
SESSION_REDIS - specifies the Redis instance (default connection is to 127.0.0.1:6379)
Refer to the Configuration section of the Flask-Session documentation for details on all available configuration variables.
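As a small illustration (an aside, not part of the app above): if you flip SESSION_PERMANENT to True, Flask's standard PERMANENT_SESSION_LIFETIME setting bounds how long the server-side session lives, and the timedelta imported at the top of app.py is the natural way to express it:
app.config['SESSION_PERMANENT'] = True
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(minutes=30)  # session data expires after 30 minutes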
In this example, the set_email view function processes the email when the form is submitted:
@app.route('/set_email', methods=['GET', 'POST'])
def set_email():
    if request.method == 'POST':
        # Save the form data to the session object
        session['email'] = request.form['email_address']
        return redirect(url_for('get_email'))
    return """
        <form method="post">
            <label for="email">Enter your email address:</label>
            <input type="email" id="email" name="email_address" required />
            <button type="submit">Submit</button>
        </form>
        """
This view function supports the GET and POST HTTP methods. When the GET method is used, an HTML form is returned for you to enter your email address:
When you submit the form with your email address (via the POST method), the email is saved in the session object:
# Save the form data to the session object
session['email'] = request.form['email_address']
Go ahead and enter your email at http://localhost:5000/set_email, and submit the form.
The get_email view function utilizes the Jinja templating engine to display either the email address stored in the session object, or a link to the set_email() view function when an email is not stored in the session:
@app.route('/get_email')
def get_email():
    return render_template_string("""
        {% if session['email'] %}
            <h1>Welcome {{ session['email'] }}!</h1>
        {% else %}
            <h1>Welcome! Please enter your email <a href="{{ url_for('set_email') }}">here.</a></h1>
        {% endif %}
    """)
The session object is available for use within the template files!
When you navigate to the http://localhost:5000/get_email URL after entering your email address, your email will be displayed:
The email address stored in the session object can be deleted via the delete_email view function:
@app.route('/delete_email')
def delete_email():
    # Clear the email stored in the session object
    session.pop('email', default=None)
    return '<h1>Session deleted!</h1>'
This view function pops the email element from the session object. The pop method will return the value popped, so it's good practice to provide a default value in case the element is not defined in the session object.
When you navigate to the http://localhost:5000/delete_email URL, you will see:
With the email address no longer stored in the session object, you'll once again be asked to enter your email address when you navigate to the http://localhost:5000/get_email URL:
To demonstrate how session data is unique to each user, enter your email address again at http://localhost:5000/set_email. Then, within a different browser (or a private/incognito window in your current browser) navigate to http://localhost:5000/set_email and enter a different email address. What do you expect to see after you're redirected to http://localhost:5000/get_email?
Since a different web browser is being used, this is considered a different user by the Flask app. Therefore, a unique session will be utilized for that user.
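If you'd rather verify this without juggling two browsers, a quick sketch with Flask's built-in test client shows the same thing, since every test client instance keeps its own cookie jar (the from app import app line is an assumption about the example's module name):

# check_sessions.py -- hypothetical check; assumes the example app lives in app.py
from app import app

client_a = app.test_client()  # each test client gets its own cookie jar...
client_b = app.test_client()  # ...and therefore its own server-side session

client_a.post('/set_email', data={'email_address': 'alice@example.com'})
client_b.post('/set_email', data={'email_address': 'bob@example.com'})

print(client_a.get('/get_email').data)  # contains alice@example.com
print(client_b.get('/get_email').data)  # contains bob@example.com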
To see this in greater detail, you can examine what's stored in the Redis database after accessing the Flask app from two different web browsers on your computer:
$ redis-cli
127.0.0.1:6379> KEYS *
1) "session:8a77d85b-7ed9-4961-958a-510240bcbac4"
2) "session:5ce4b8e2-a2b5-43e4-a0f9-7fa465b7bb0c"
127.0.0.1:6379> exit
$
There are two different sessions stored in Redis, which correspond to the two different web browsers used to access the Flask app:
- session:8a77d85b-7ed9-4961-958a-510240bcbac4 is from Firefox
- session:5ce4b8e2-a2b5-43e4-a0f9-7fa465b7bb0c is from Chrome

This article showed how server-side sessions can be implemented in Flask with Flask-Session and Redis.
If you'd like to learn more about sessions in Flask, be sure to check out my course -- Developing Web Applications with Python and Flask.
Original article source at: https://testdriven.io/
1668680340
If a long-running task is part of your application's workflow, you should handle it in the background, outside the normal flow.
Perhaps your web application requires users to submit a thumbnail (which will probably need to be re-sized) and confirm their email when they register. If your application processed the image and sent a confirmation email directly in the request handler, then the end user would have to wait for them both to finish. Instead, you'll want to pass these tasks off to a task queue and let a separate worker process deal with them, so you can immediately send a response back to the client. The end user can do other things on the client-side and your application is free to respond to requests from other users.
This tutorial looks at how to configure Redis Queue (RQ) to handle long-running tasks in a Flask app.
Celery is a viable solution as well. Check out Asynchronous Tasks with Flask and Celery for more.
By the end of this tutorial, you will be able to:
Our goal is to develop a Flask application that works in conjunction with Redis Queue to handle long-running processes outside the normal request/response cycle.
In the end, the app will look like this:
Want to follow along? Clone down the base project, and then review the code and project structure:
$ git clone https://github.com/mjhea0/flask-redis-queue --branch base --single-branch
$ cd flask-redis-queue
Since we'll need to manage three processes in total (Flask, Redis, worker), we'll use Docker to simplify our workflow so they can be managed from a single terminal window.
To test, run:
$ docker-compose up -d --build
Open your browser to http://localhost:5004. You should see:
An event handler in project/client/static/main.js is set up that listens for a button click and sends an AJAX POST request to the server with the appropriate task type: 1, 2, or 3.
$('.btn').on('click', function() {
$.ajax({
url: '/tasks',
data: { type: $(this).data('type') },
method: 'POST'
})
.done((res) => {
getStatus(res.data.task_id);
})
.fail((err) => {
console.log(err);
});
});
On the server-side, a view is already configured to handle the request in project/server/main/views.py:
@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]
    return jsonify(task_type), 202
We just need to wire up Redis Queue.
So, we need to spin up two new processes: Redis and a worker. Add them to the docker-compose.yml file:
version: '3.8'
services:

  web:
    build: .
    image: web
    container_name: web
    ports:
      - 5004:5000
    command: python manage.py run -h 0.0.0.0
    volumes:
      - .:/usr/src/app
    environment:
      - FLASK_DEBUG=1
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  worker:
    image: web
    command: python manage.py run_worker
    volumes:
      - .:/usr/src/app
    environment:
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  redis:
    image: redis:6.2-alpine
Add the task to a new file called tasks.py in "project/server/main":
# project/server/main/tasks.py

import time


def create_task(task_type):
    time.sleep(int(task_type) * 10)
    return True
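Before wiring it into the view, you can smoke-test create_task by enqueueing it with RQ directly, e.g., from a Python shell inside the web container (a sketch; it assumes the redis service from docker-compose.yml is running and uses the same Redis URL that BaseConfig will use below):

# Hypothetical smoke test, run from a Python shell inside the web container
from redis import Redis
from rq import Queue

from project.server.main.tasks import create_task

q = Queue(connection=Redis.from_url("redis://redis:6379/0"))
job = q.enqueue(create_task, 1)         # the worker sleeps ~10 seconds for type 1
print(job.get_id(), job.get_status())   # e.g., "<uuid> queued"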
Update the view to connect to Redis, enqueue the task, and respond with the id:
@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.enqueue(create_task, task_type)
    response_object = {
        "status": "success",
        "data": {
            "task_id": task.get_id()
        }
    }
    return jsonify(response_object), 202
Don't forget the imports:
import redis
from rq import Queue, Connection
from flask import render_template, Blueprint, jsonify, request, current_app
from project.server.main.tasks import create_task
Update BaseConfig:
class BaseConfig(object):
    """Base configuration."""

    WTF_CSRF_ENABLED = True
    REDIS_URL = "redis://redis:6379/0"
    QUEUES = ["default"]
Did you notice that we referenced the redis service (from docker-compose.yml) in the REDIS_URL rather than localhost or some other IP? Review the Docker Compose docs for more info on connecting to other services via the hostname.
Finally, we can use a Redis Queue worker to process tasks at the top of the queue.
manage.py:
@cli.command("run_worker")
def run_worker():
    redis_url = app.config["REDIS_URL"]
    redis_connection = redis.from_url(redis_url)
    with Connection(redis_connection):
        worker = Worker(app.config["QUEUES"])
        worker.work()
Here, we set up a custom CLI command to fire the worker.
It's important to note that the @cli.command() decorator will provide access to the application context along with the associated config variables from project/server/config.py when the command is executed.
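The rest of manage.py isn't shown in this excerpt; a plausible shape for it, assuming Flask's FlaskGroup wires up the application context (the create_app factory and its import path are assumptions, not confirmed by the article), looks like:

# manage.py -- sketch only; create_app and its location are assumptions
import redis
from flask.cli import FlaskGroup
from rq import Connection, Worker

from project.server import create_app  # hypothetical app factory import

app = create_app()
cli = FlaskGroup(create_app=create_app)


@cli.command("run_worker")
def run_worker():
    redis_url = app.config["REDIS_URL"]
    redis_connection = redis.from_url(redis_url)
    with Connection(redis_connection):
        worker = Worker(app.config["QUEUES"])
        worker.work()


if __name__ == "__main__":
    cli()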
Add the imports as well:
import redis
from rq import Connection, Worker
Add the dependencies to the requirements file:
redis==4.1.1
rq==1.10.1
Build and spin up the new containers:
$ docker-compose up -d --build
To trigger a new task, run:
$ curl -F type=0 http://localhost:5004/tasks
You should see something like:
{
"data": {
"task_id": "bdad64d0-3865-430e-9cc3-ec1410ddb0fd"
},
"status": "success"
}
Turn back to the event handler on the client-side:
$('.btn').on('click', function() {
$.ajax({
url: '/tasks',
data: { type: $(this).data('type') },
method: 'POST'
})
.done((res) => {
getStatus(res.data.task_id);
})
.fail((err) => {
console.log(err);
});
});
Once the response comes back from the original AJAX request, we then continue to call getStatus() with the task id every second. If the response is successful, a new row is added to the table on the DOM.
function getStatus(taskID) {
$.ajax({
url: `/tasks/${taskID}`,
method: 'GET',
})
.done((res) => {
const html = `
<tr>
<td>${res.data.task_id}</td>
<td>${res.data.task_status}</td>
<td>${res.data.task_result}</td>
</tr>`;
$('#tasks').prepend(html);
const taskStatus = res.data.task_status;
if (taskStatus === 'finished' || taskStatus === 'failed') return false;
setTimeout(function () {
getStatus(res.data.task_id);
}, 1000);
})
.fail((err) => {
console.log(err);
});
}
Update the view:
@main_blueprint.route("/tasks/<task_id>", methods=["GET"])
def get_status(task_id):
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.fetch_job(task_id)
    if task:
        response_object = {
            "status": "success",
            "data": {
                "task_id": task.get_id(),
                "task_status": task.get_status(),
                "task_result": task.result,
            },
        }
    else:
        response_object = {"status": "error"}
    return jsonify(response_object)
Add a new task to the queue:
$ curl -F type=1 http://localhost:5004/tasks
Then, grab the task_id from the response and call the updated endpoint to view the status:
$ curl http://localhost:5004/tasks/5819789f-ebd7-4e67-afc3-5621c28acf02
{
"data": {
"task_id": "5819789f-ebd7-4e67-afc3-5621c28acf02",
"task_result": true,
"task_status": "finished"
},
"status": "success"
}
Test it out in the browser as well:
RQ Dashboard is a lightweight, web-based monitoring system for Redis Queue.
To set up, first add a new directory to the "project" directory called "dashboard". Then, add a new Dockerfile to that newly created directory:
FROM python:3.10-alpine
RUN pip install rq-dashboard
# https://github.com/rq/rq/issues/1469
RUN pip uninstall -y click
RUN pip install click==7.1.2
EXPOSE 9181
CMD ["rq-dashboard"]
Simply add the service to the docker-compose.yml file like so:
version: '3.8'
services:

  web:
    build: .
    image: web
    container_name: web
    ports:
      - 5004:5000
    command: python manage.py run -h 0.0.0.0
    volumes:
      - .:/usr/src/app
    environment:
      - FLASK_DEBUG=1
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  worker:
    image: web
    command: python manage.py run_worker
    volumes:
      - .:/usr/src/app
    environment:
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    depends_on:
      - redis

  redis:
    image: redis:6.2-alpine

  dashboard:
    build: ./project/dashboard
    image: dashboard
    container_name: dashboard
    ports:
      - 9181:9181
    command: rq-dashboard -H redis
    depends_on:
      - redis
Build the image and spin up the container:
$ docker-compose up -d --build
Navigate to http://localhost:9181 to view the dashboard:
Kick off a few jobs to fully test the dashboard:
Try adding a few more workers to see how that affects things:
$ docker-compose up -d --build --scale worker=3
This has been a basic guide on how to configure Redis Queue to run long-running tasks in a Flask app. You should let the queue handle any processes that could block or slow down the user-facing code.
Looking for some challenges?
Grab the code from the repo.
Original article source at: https://testdriven.io/
1667582100
NAME
Redis - Perl binding for Redis database
VERSION
version 1.999
SYNOPSIS
## Defaults to $ENV{REDIS_SERVER} or 127.0.0.1:6379
my $redis = Redis->new;
my $redis = Redis->new(server => 'redis.example.com:8080');
## Set the connection name (requires Redis 2.6.9)
my $redis = Redis->new(
server => 'redis.example.com:8080',
name => 'my_connection_name',
);
my $generation = 0;
my $redis = Redis->new(
server => 'redis.example.com:8080',
name => sub { "cache-$$-".++$generation },
);
## Use UNIX domain socket
my $redis = Redis->new(sock => '/path/to/socket');
## Connect to Redis over a secure SSL/TLS channel. See
## IO::Socket::SSL documentation for more information
## about SSL_verify_mode parameter.
my $redis = Redis->new(
server => 'redis.tls.example.com:8080',
ssl => 1,
SSL_verify_mode => SSL_VERIFY_PEER,
);
## Enable auto-reconnect
## Try to reconnect every 1s up to 60 seconds until success
## Die if you can't after that
my $redis = Redis->new(reconnect => 60, every => 1_000_000);
## Try each 100ms up to 2 seconds (every is in microseconds)
my $redis = Redis->new(reconnect => 2, every => 100_000);
## Enable connection timeout (in seconds)
my $redis = Redis->new(cnx_timeout => 60);
## Enable read timeout (in seconds)
my $redis = Redis->new(read_timeout => 0.5);
## Enable write timeout (in seconds)
my $redis = Redis->new(write_timeout => 1.2);
## Connect via a list of Sentinels to a given service
my $redis = Redis->new(sentinels => [ '127.0.0.1:12345' ], service => 'mymaster');
## Same, but with connection, read and write timeout on the sentinel hosts
my $redis = Redis->new( sentinels => [ '127.0.0.1:12345' ], service => 'mymaster',
sentinels_cnx_timeout => 0.1,
sentinels_read_timeout => 1,
sentinels_write_timeout => 1,
);
## Use all the regular Redis commands, they all accept a list of
## arguments
## See https://redis.io/commands for full list
$redis->get('key');
$redis->set('key' => 'value');
$redis->sort('list', 'DESC');
$redis->sort(qw{list LIMIT 0 5 ALPHA DESC});
## Add a coderef argument to run a command in the background
$redis->sort(qw{list LIMIT 0 5 ALPHA DESC}, sub {
my ($reply, $error) = @_;
die "Oops, got an error: $error\n" if defined $error;
print "$_\n" for @$reply;
});
long_computation();
$redis->wait_all_responses;
## or
$redis->wait_one_response();
## Or run a large batch of commands in a pipeline
my %hash = _get_large_batch_of_commands();
$redis->hset('h', $_, $hash{$_}, sub {}) for keys %hash;
$redis->wait_all_responses;
## Publish/Subscribe
$redis->subscribe(
'topic_1',
'topic_2',
sub {
my ($message, $topic, $subscribed_topic) = @_
## $subscribed_topic can be different from topic if
## you use psubscribe() with wildcards
}
);
$redis->psubscribe('nasdaq.*', sub {...});
## Blocks and waits for messages, calls subscribe() callbacks
## ... forever
my $timeout = 10;
$redis->wait_for_messages($timeout) while 1;
## ... until some condition
my $keep_going = 1; ## other code will set to false to quit
$redis->wait_for_messages($timeout) while $keep_going;
$redis->publish('topic_1', 'message');
DESCRIPTION
Pure Perl bindings for https://redis.io/
This version supports protocol 2.x (multi-bulk) or later of Redis available at https://github.com/antirez/redis/.
This documentation lists the commands that are exercised in the test suite, but additional commands will work correctly, since the protocol specifies enough information to support almost all commands with the same piece of code, with a little help from AUTOLOAD.
PIPELINING
Usually, running a command will wait for a response. However, if you're doing large numbers of requests, it can be more efficient to use what Redis calls pipelining: send multiple commands to Redis without waiting for a response, then wait for the responses that come in.
To use pipelining, add a coderef argument as the last argument to a command method call:
$r->set('foo', 'bar', sub {});
Pending responses to pipelined commands are processed in a single batch, as soon as at least one of the following conditions holds:
- A non-pipelined (synchronous) command is called on the same connection.
- A pub/sub subscription command (one of subscribe, unsubscribe, psubscribe, or punsubscribe) is about to be called on the same connection.
- One of the "wait_all_responses" or "wait_one_response" methods is called explicitly.
The coderef you supply to a pipelined command method is invoked once the response is available. It takes two arguments, $reply and $error. If $error is defined, it contains the text of an error reply sent by the Redis server. Otherwise, $reply is the non-error reply. For almost all commands, that means it's undef, or a defined but non-reference scalar, or an array ref of any of those; but see "keys", "info", and "exec".
Note the contrast with synchronous commands, which throw an exception on receipt of an error reply, or return a non-error reply directly.
The fact that pipelined commands never throw an exception can be particularly useful for Redis transactions; see "exec".
ENCODING
There is no encoding feature anymore; it has been deprecated and finally removed. This module considers any data sent to the Redis server to be binary data, and it does not alter data received from the Redis server.
So, if you are working with character strings, you should pre-encode or post-decode them as needed!
CONSTRUCTOR
my $r = Redis->new; # $ENV{REDIS_SERVER} or 127.0.0.1:6379
my $r = Redis->new( server => '192.168.0.1:6379', debug => 0 );
my $r = Redis->new( server => '192.168.0.1:6379', encoding => undef );
my $r = Redis->new( server => '192.168.0.1:6379', ssl => 1, SSL_verify_mode => SSL_VERIFY_PEER );
my $r = Redis->new( sock => '/path/to/sock' );
my $r = Redis->new( reconnect => 60, every => 5000 );
my $r = Redis->new( password => 'boo' );
my $r = Redis->new( on_connect => sub { my ($redis) = @_; ... } );
my $r = Redis->new( name => 'my_connection_name' );
my $r = Redis->new( name => sub { "cache-for-$$" });
my $redis = Redis->new(sentinels => [ '127.0.0.1:12345', '127.0.0.1:23456' ],
service => 'mymaster');
## Connect via a list of Sentinels to a given service
my $redis = Redis->new(sentinels => [ '127.0.0.1:12345' ], service => 'mymaster');
## Same, but with connection, read and write timeout on the sentinel hosts
my $redis = Redis->new( sentinels => [ '127.0.0.1:12345' ], service => 'mymaster',
sentinels_cnx_timeout => 0.1,
sentinels_read_timeout => 1,
sentinels_write_timeout => 1,
);
server
The server parameter specifies the Redis server we should connect to, via TCP. Use the 'IP:PORT' format. If no server option is present, we will attempt to use the REDIS_SERVER environment variable. If neither of those options is present, it defaults to '127.0.0.1:6379'.
Alternatively you can use the sock parameter to specify the path of the UNIX domain socket where the Redis server is listening.
Alternatively you can use the sentinels parameter and the service parameter to specify a list of sentinels to contact in order to obtain the address of the given service name. sentinels must be an ArrayRef and service a Str.
The REDIS_SERVER environment variable can be used for UNIX domain sockets too. The following formats are supported:
/path/to/sock
unix:/path/to/sock
127.0.0.1:11011
tcp:127.0.0.1:11011
reconnect, every
The reconnect option enables auto-reconnection mode. If we cannot connect to the Redis server, or if a network write fails, we enter retry mode. We will try a new connection every every microseconds (1 ms by default), for up to reconnect seconds.
Be aware that read errors will always throw an exception, and will not trigger a retry until a new command is sent.
If we cannot re-establish a connection after reconnect seconds, an exception will be thrown.
conservative_reconnect
The conservative_reconnect option makes sure that reconnection is only attempted when no pending command is ongoing. For instance, suppose you are doing $redis->incr('key'): if the server properly understood and processed the command, but the network connection dropped just before the server replied, the command has been processed but the client doesn't know it. In this situation, if reconnect is enabled, the Redis client will reconnect and send the incr command *again*. If it succeeds, in the end the key has been incremented *two* times. To avoid this issue, you can set the conservative_reconnect option to a true value. In this case, the client will reconnect only if no request is pending. Otherwise it will die with the message: reconnect disabled while responses are pending and safe reconnect mode enabled.
cnx_timeout
The cnx_timeout option enables connection timeout. The Redis client will wait at most that number of seconds (can be fractional) before giving up connecting to a server.
sentinels_cnx_timeout
The sentinels_cnx_timeout option enables sentinel connection timeout. When using the sentinels feature, the Redis client will wait at most that number of seconds (can be fractional) before giving up connecting to a sentinel. Default: 0.1
read_timeout
The read_timeout option enables read timeout. The Redis client will wait at most that number of seconds (can be fractional) before giving up when reading from the server.
sentinels_read_timeout
The sentinels_read_timeout option enables sentinel read timeout. When using the sentinels feature, the Redis client will wait at most that number of seconds (can be fractional) before giving up when reading from a sentinel server. Default: 1
write_timeout
The write_timeout option enables write timeout. The Redis client will wait at most that number of seconds (can be fractional) before giving up when writing to the server.
sentinels_write_timeout
The sentinels_write_timeout option enables sentinel write timeout. When using the sentinels feature, the Redis client will wait at most that number of seconds (can be fractional) before giving up when writing to a sentinel server. Default: 1
password
If your Redis server requires authentication, you can use the password attribute. After each established connection (at the start or when reconnecting), the Redis AUTH command will be sent to the server. If the password is wrong, an exception will be thrown and reconnect will be disabled.
on_connect
You can also provide a code reference that will be called immediately after each successful connection. The on_connect attribute is used to provide the code reference, and it will be called with the first parameter being the Redis object.
no_auto_connect_on_new
You can also provide no_auto_connect_on_new, in which case new won't call $obj->connect for you implicitly; you'll have to do that yourself. This is useful for figuring out how long connection setup takes, so you can configure the cnx_timeout appropriately.
no_sentinels_list_update
You can also provide no_sentinels_list_update. By default (that is, without this option), when successfully contacting a sentinel server, the Redis client will ask it for the list of sentinels known for the given service, and merge it with its list of sentinels (in the sentinels attribute). You can disable this behavior by setting no_sentinels_list_update to a true value.
name
You can also set a name for each connection. This can be very useful for debugging purposes, using the CLIENT LIST command. To set a connection name, use the name parameter. You can use either a scalar value or a CodeRef. If the latter, it will be called after each connection, with the Redis object, and it should return the connection name to use. If it returns an undefined value, Redis will not set the connection name.
Please note that there are restrictions on the name you can set, the most important of which is: no spaces. See the CLIENT SETNAME documentation for all the juicy details. This feature is safe to use with all versions of Redis servers: if CLIENT SETNAME support is not available (it requires Redis server 2.6.9 or above), the name parameter is simply ignored.
ssl
You can connect to Redis over SSL/TLS by setting this flag if the target Redis server or cluster has been set up to support SSL/TLS. This requires IO::Socket::SSL to be installed on the client. It's off by default.
SSL_verify_mode
This parameter will be applied when the ssl flag is set. It sets the verification mode for the peer certificate. It's compatible with the parameter of the same name in IO::Socket::SSL.
debug
The debug parameter enables debug information to STDERR, including all interactions with the server. You can also enable debug with the REDIS_DEBUG environment variable.
CONNECTION HANDLING
$r->connect;
Connects to the Redis server. This is done by default when the object is constructed using new(), unless no_auto_connect_on_new has been set. See this option in the new() constructor.
$r->quit;
Closes the connection to the server. The quit method does not support pipelined operation.
$r->ping || die "no server?";
The ping method does not support pipelined operation.
PIPELINE MANAGEMENT
$r->wait_all_responses;

Waits until all pending pipelined responses have been received, and invokes the pipeline callback for each one. See "PIPELINING".

$r->wait_one_response;

Waits until the first pending pipelined response has been received, and invokes its callback. See "PIPELINING".
PUBLISH/SUBSCRIBE COMMANDS
When one of "subscribe" or "psubscribe" is used, the Redis object will enter PubSub mode. When in PubSub mode only commands in this section, plus "quit", will be accepted.
If you plan on using PubSub and other Redis functions, you should use two Redis objects, one dedicated to PubSub and the other for regular commands.
All Pub/Sub commands receive a callback as the last parameter. This callback receives three arguments:
The published message.
The topic over which the message was sent.
The subscribed topic that matched the topic for the message. With "subscribe" these last two are the same, always. But with "psubscribe", this parameter tells you the pattern that matched.
See the Pub-Sub notes for more information about the messages you will receive on your callbacks after each "subscribe", "unsubscribe", "psubscribe" and "punsubscribe".
$r->publish($topic, $message);
Publishes the $message to the $topic.
$r->subscribe(
@topics_to_subscribe_to,
my $savecallback = sub {
my ($message, $topic, $subscribed_topic) = @_;
...
},
);
Subscribes to one or more topics. Messages published into one of them will be received by Redis, and the specified callback will be executed.
$r->unsubscribe(@topic_list, $savecallback);
Stops receiving messages via $savecallback for all the topics in @topic_list. WARNING: it is important that you give the same callback that you used for subscription. The value of the CodeRef must be the same, as this is how the code identifies it internally.
my @topic_matches = ('prefix1.*', 'prefix2.*');
$r->psubscribe(@topic_matches, my $savecallback = sub { my ($m, $t, $s) = @_; ... });
Subscribes to a pattern of topics. All messages sent to topics that match the pattern will be delivered to the callback.
my @topic_matches = ('prefix1.*', 'prefix2.*');
$r->punsubscribe(@topic_matches, $savecallback);
Stops receiving messages via $savecallback for all the topic patterns in @topic_matches. WARNING: it is important that you give the same callback that you used for subscription. The value of the CodeRef must be the same, as this is how the code identifies it internally.
if ($r->is_subscriber) { say "We are in Pub/Sub mode!" }
Returns true if we are in Pub/Sub mode.
my $keep_going = 1; ## Set to false somewhere to leave the loop
my $timeout = 5;
$r->wait_for_messages($timeout) while $keep_going;
Blocks, waits for incoming messages and delivers them to the appropriate callbacks.
Requires a single parameter, the number of seconds to wait for messages. Use 0 to wait forever. If a positive non-zero value is used, it will return after that number of seconds passes without a single notification.
Please note that the timeout is not a commitment to return control to the caller at most every timeout seconds, but more an idle timeout, where control will return to the caller if Redis is idle (as in, no messages were received during the timeout period) for more than timeout seconds.
The "wait_for_messages" call returns the number of messages processed during the run.
IMPORTANT NOTES ON METHODS
When a method returns more than one value, it checks the context and returns either a list of values or an ArrayRef.
Warning: the behaviour of the TRANSACTIONS commands when combined with pipelining is still under discussion, and you should NOT use them at the same time just now.
You can follow the discussion to see the open issues with this.
my @individual_replies = $r->exec;
exec has special behaviour when run in a pipeline: the $reply argument to the pipeline callback is an array ref whose elements are themselves [$reply, $error] pairs. This means that you can accurately detect errors yielded by any command in the transaction, and without any exceptions being thrown.
my @keys = $r->keys( '*glob_pattern*' );
my $keys = $r->keys( '*glob_pattern*' ); # count of matching keys
Note that synchronous keys calls in a scalar context return the number of matching keys (not an array ref of matching keys as you might expect). This does not apply in pipelined mode: assuming the server returns a list of keys, as expected, it is always passed to the pipeline callback as an array ref.
Hashes in Redis cannot be nested as in Perl; if you want to store a nested hash, you need to serialize it first. If you want to have a named hash, you can use Redis hashes. You will find an example in the tests of this module, t/01-basic.t.
Note that this command sends the Lua script every time you call it. See "evalsha" and "script_load" for an alternative.
my $info_hash = $r->info;
The info method is unique in that it decodes the server's response into a hashref, if possible. This decoding happens in both synchronous and pipelined modes.
KEYS
$r->del(key [key ...])
Delete a key (see https://redis.io/commands/del)
$r->dump(key)
Return a serialized version of the value stored at the specified key. (see https://redis.io/commands/dump)
$r->exists(key)
Determine if a key exists (see https://redis.io/commands/exists)
$r->expire(key, seconds)
Set a key's time to live in seconds (see https://redis.io/commands/expire)
$r->expireat(key, timestamp)
Set the expiration for a key as a UNIX timestamp (see https://redis.io/commands/expireat)
$r->keys(pattern)
Find all keys matching the given pattern (see https://redis.io/commands/keys)
$r->migrate(host, port, key, destination-db, timeout, [COPY], [REPLACE])
Atomically transfer a key from a Redis instance to another one. (see https://redis.io/commands/migrate)
$r->move(key, db)
Move a key to another database (see https://redis.io/commands/move)
$r->object(subcommand, [arguments [arguments ...]])
Inspect the internals of Redis objects (see https://redis.io/commands/object)
$r->persist(key)
Remove the expiration from a key (see https://redis.io/commands/persist)
$r->pexpire(key, milliseconds)
Set a key's time to live in milliseconds (see https://redis.io/commands/pexpire)
$r->pexpireat(key, milliseconds-timestamp)
Set the expiration for a key as a UNIX timestamp specified in milliseconds (see https://redis.io/commands/pexpireat)
$r->pttl(key)
Get the time to live for a key in milliseconds (see https://redis.io/commands/pttl)
$r->randomkey()
Return a random key from the keyspace (see https://redis.io/commands/randomkey)
$r->rename(key, newkey)
Rename a key (see https://redis.io/commands/rename)
$r->renamenx(key, newkey)
Rename a key, only if the new key does not exist (see https://redis.io/commands/renamenx)
$r->restore(key, ttl, serialized-value)
Create a key using the provided serialized value, previously obtained using DUMP. (see https://redis.io/commands/restore)
$r->scan(cursor, [MATCH pattern], [COUNT count])
Incrementally iterate the keys space (see https://redis.io/commands/scan)
$r->sort(key, [BY pattern], [LIMIT offset count], [GET pattern [GET pattern ...]], [ASC|DESC], [ALPHA], [STORE destination])
Sort the elements in a list, set or sorted set (see https://redis.io/commands/sort)
$r->ttl(key)
Get the time to live for a key (see https://redis.io/commands/ttl)
$r->type(key)
Determine the type stored at key (see https://redis.io/commands/type)
STRINGS
$r->append(key, value)
Append a value to a key (see https://redis.io/commands/append)
$r->bitcount(key, [start end])
Count set bits in a string (see https://redis.io/commands/bitcount)
$r->bitop(operation, destkey, key [key ...])
Perform bitwise operations between strings (see https://redis.io/commands/bitop)
$r->bitpos(key, bit, [start], [end])
Find first bit set or clear in a string (see https://redis.io/commands/bitpos)
$r->blpop(key [key ...], timeout)
Remove and get the first element in a list, or block until one is available (see https://redis.io/commands/blpop)
$r->brpop(key [key ...], timeout)
Remove and get the last element in a list, or block until one is available (see https://redis.io/commands/brpop)
$r->brpoplpush(source, destination, timeout)
Pop a value from a list, push it to another list and return it; or block until one is available (see https://redis.io/commands/brpoplpush)
$r->decr(key)
Decrement the integer value of a key by one (see https://redis.io/commands/decr)
$r->decrby(key, decrement)
Decrement the integer value of a key by the given number (see https://redis.io/commands/decrby)
$r->get(key)
Get the value of a key (see https://redis.io/commands/get)
$r->getbit(key, offset)
Returns the bit value at offset in the string value stored at key (see https://redis.io/commands/getbit)
$r->getrange(key, start, end)
Get a substring of the string stored at a key (see https://redis.io/commands/getrange)
$r->getset(key, value)
Set the string value of a key and return its old value (see https://redis.io/commands/getset)
$r->incr(key)
Increment the integer value of a key by one (see https://redis.io/commands/incr)
$r->incrby(key, increment)
Increment the integer value of a key by the given amount (see https://redis.io/commands/incrby)
$r->incrbyfloat(key, increment)
Increment the float value of a key by the given amount (see https://redis.io/commands/incrbyfloat)
$r->mget(key [key ...])
Get the values of all the given keys (see https://redis.io/commands/mget)
$r->mset(key value [key value ...])
Set multiple keys to multiple values (see https://redis.io/commands/mset)
$r->msetnx(key value [key value ...])
Set multiple keys to multiple values, only if none of the keys exist (see https://redis.io/commands/msetnx)
$r->psetex(key, milliseconds, value)
Set the value and expiration in milliseconds of a key (see https://redis.io/commands/psetex)
$r->set(key, value, ['EX', seconds], ['PX', milliseconds], ['NX'|'XX'])
Set the string value of a key (see https://redis.io/commands/set). Example:
$r->set('key', 'test', 'EX', 60, 'NX')
$r->setbit(key, offset, value)
Sets or clears the bit at offset in the string value stored at key (see https://redis.io/commands/setbit)
$r->setex(key, seconds, value)
Set the value and expiration of a key (see https://redis.io/commands/setex)
$r->setnx(key, value)
Set the value of a key, only if the key does not exist (see https://redis.io/commands/setnx)
$r->setrange(key, offset, value)
Overwrite part of a string at key starting at the specified offset (see https://redis.io/commands/setrange)
$r->strlen(key)
Get the length of the value stored in a key (see https://redis.io/commands/strlen)
HASHES
$r->hdel(key, field [field ...])
Delete one or more hash fields (see https://redis.io/commands/hdel)
$r->hexists(key, field)
Determine if a hash field exists (see https://redis.io/commands/hexists)
$r->hget(key, field)
Get the value of a hash field (see https://redis.io/commands/hget)
$r->hgetall(key)
Get all the fields and values in a hash (see https://redis.io/commands/hgetall)
$r->hincrby(key, field, increment)
Increment the integer value of a hash field by the given number (see https://redis.io/commands/hincrby)
$r->hincrbyfloat(key, field, increment)
Increment the float value of a hash field by the given amount (see https://redis.io/commands/hincrbyfloat)
$r->hkeys(key)
Get all the fields in a hash (see https://redis.io/commands/hkeys)
$r->hlen(key)
Get the number of fields in a hash (see https://redis.io/commands/hlen)
$r->hmget(key, field [field ...])
Get the values of all the given hash fields (see https://redis.io/commands/hmget)
$r->hmset(key, field value [field value ...])
Set multiple hash fields to multiple values (see https://redis.io/commands/hmset)
$r->hscan(key, cursor, [MATCH pattern], [COUNT count])
Incrementally iterate hash fields and associated values (see https://redis.io/commands/hscan)
$r->hset(key, field, value)
Set the string value of a hash field (see https://redis.io/commands/hset)
$r->hsetnx(key, field, value)
Set the value of a hash field, only if the field does not exist (see https://redis.io/commands/hsetnx)
$r->hvals(key)
Get all the values in a hash (see https://redis.io/commands/hvals)
SETS
$r->sadd(key, member [member ...])
Add one or more members to a set (see https://redis.io/commands/sadd)
$r->scard(key)
Get the number of members in a set (see https://redis.io/commands/scard)
$r->sdiff(key [key ...])
Subtract multiple sets (see https://redis.io/commands/sdiff)
$r->sdiffstore(destination, key [key ...])
Subtract multiple sets and store the resulting set in a key (see https://redis.io/commands/sdiffstore)
$r->sinter(key [key ...])
Intersect multiple sets (see https://redis.io/commands/sinter)
$r->sinterstore(destination, key [key ...])
Intersect multiple sets and store the resulting set in a key (see https://redis.io/commands/sinterstore)
$r->sismember(key, member)
Determine if a given value is a member of a set (see https://redis.io/commands/sismember)
$r->smembers(key)
Get all the members in a set (see https://redis.io/commands/smembers)
$r->smove(source, destination, member)
Move a member from one set to another (see https://redis.io/commands/smove)
$r->spop(key)
Remove and return a random member from a set (see https://redis.io/commands/spop)
$r->srandmember(key, [count])
Get one or multiple random members from a set (see https://redis.io/commands/srandmember)
$r->srem(key, member [member ...])
Remove one or more members from a set (see https://redis.io/commands/srem)
$r->sscan(key, cursor, [MATCH pattern], [COUNT count])
Incrementally iterate Set elements (see https://redis.io/commands/sscan)
$r->sunion(key [key ...])
Add multiple sets (see https://redis.io/commands/sunion)
$r->sunionstore(destination, key [key ...])
Add multiple sets and store the resulting set in a key (see https://redis.io/commands/sunionstore)
SORTED SETS
$r->zadd(key, score member [score member ...])
Add one or more members to a sorted set, or update its score if it already exists (see https://redis.io/commands/zadd)
$r->zcard(key)
Get the number of members in a sorted set (see https://redis.io/commands/zcard)
$r->zcount(key, min, max)
Count the members in a sorted set with scores within the given values (see https://redis.io/commands/zcount)
$r->zincrby(key, increment, member)
Increment the score of a member in a sorted set (see https://redis.io/commands/zincrby)
$r->zinterstore(destination, numkeys, key [key ...], [WEIGHTS weight [weight ...]], [AGGREGATE SUM|MIN|MAX])
Intersect multiple sorted sets and store the resulting sorted set in a new key (see https://redis.io/commands/zinterstore)
$r->zlexcount(key, min, max)
Count the number of members in a sorted set between a given lexicographical range (see https://redis.io/commands/zlexcount)
$r->zrange(key, start, stop, [WITHSCORES])
Return a range of members in a sorted set, by index (see https://redis.io/commands/zrange)
$r->zrangebylex(key, min, max, [LIMIT offset count])
Return a range of members in a sorted set, by lexicographical range (see https://redis.io/commands/zrangebylex)
$r->zrangebyscore(key, min, max, [WITHSCORES], [LIMIT offset count])
Return a range of members in a sorted set, by score (see https://redis.io/commands/zrangebyscore)
$r->zrank(key, member)
Determine the index of a member in a sorted set (see https://redis.io/commands/zrank)
$r->zrem(key, member [member ...])
Remove one or more members from a sorted set (see https://redis.io/commands/zrem)
$r->zremrangebylex(key, min, max)
Remove all members in a sorted set between the given lexicographical range (see https://redis.io/commands/zremrangebylex)
$r->zremrangebyrank(key, start, stop)
Remove all members in a sorted set within the given indexes (see https://redis.io/commands/zremrangebyrank)
$r->zremrangebyscore(key, min, max)
Remove all members in a sorted set within the given scores (see https://redis.io/commands/zremrangebyscore)
$r->zrevrange(key, start, stop, [WITHSCORES])
Return a range of members in a sorted set, by index, with scores ordered from high to low (see https://redis.io/commands/zrevrange)
$r->zrevrangebylex(key, max, min, [LIMIT offset count])
Return a range of members in a sorted set, by lexicographical range, ordered from higher to lower strings. (see https://redis.io/commands/zrevrangebylex)
$r->zrevrangebyscore(key, max, min, [WITHSCORES], [LIMIT offset count])
Return a range of members in a sorted set, by score, with scores ordered from high to low (see https://redis.io/commands/zrevrangebyscore)
$r->zrevrank(key, member)
Determine the index of a member in a sorted set, with scores ordered from high to low (see https://redis.io/commands/zrevrank)
$r->zscan(key, cursor, [MATCH pattern], [COUNT count])
Incrementally iterate sorted sets elements and associated scores (see https://redis.io/commands/zscan)
$r->zscore(key, member)
Get the score associated with the given member in a sorted set (see https://redis.io/commands/zscore)
$r->zunionstore(destination, numkeys, key [key ...], [WEIGHTS weight [weight ...]], [AGGREGATE SUM|MIN|MAX])
Add multiple sorted sets and store the resulting sorted set in a new key (see https://redis.io/commands/zunionstore)
HYPERLOGLOG
$r->pfadd(key, element [element ...])
Adds the specified elements to the specified HyperLogLog. (see https://redis.io/commands/pfadd)
$r->pfcount(key [key ...])
Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s). (see https://redis.io/commands/pfcount)
$r->pfmerge(destkey, sourcekey [sourcekey ...])
Merge N different HyperLogLogs into a single one. (see https://redis.io/commands/pfmerge)
PUB/SUB
$r->pubsub(subcommand, [argument [argument ...]])
Inspect the state of the Pub/Sub subsystem (see https://redis.io/commands/pubsub)
TRANSACTIONS
$r->discard()
Discard all commands issued after MULTI (see https://redis.io/commands/discard)
$r->exec()
Execute all commands issued after MULTI (see https://redis.io/commands/exec)
$r->multi()
Mark the start of a transaction block (see https://redis.io/commands/multi)
$r->unwatch()
Forget about all watched keys (see https://redis.io/commands/unwatch)
$r->watch(key [key ...])
Watch the given keys to determine execution of the MULTI/EXEC block (see https://redis.io/commands/watch)
SCRIPTING
$r->eval(script, numkeys, key [key ...], arg [arg ...])
Execute a Lua script server side (see https://redis.io/commands/eval)
$r->evalsha(sha1, numkeys, key [key ...], arg [arg ...])
Execute a Lua script server side (see https://redis.io/commands/evalsha)
$r->script_exists(script [script ...])
Check existence of scripts in the script cache. (see https://redis.io/commands/script-exists)
$r->script_flush()
Remove all the scripts from the script cache. (see https://redis.io/commands/script-flush)
$r->script_kill()
Kill the script currently in execution. (see https://redis.io/commands/script-kill)
$r->script_load(script)
Load the specified Lua script into the script cache. (see https://redis.io/commands/script-load)
CONNECTION
$r->auth(password)
Authenticate to the server (see https://redis.io/commands/auth)
$r->echo(message)
Echo the given string (see https://redis.io/commands/echo)
$r->ping()
Ping the server (see https://redis.io/commands/ping)
$r->quit()
Close the connection (see https://redis.io/commands/quit)
$r->select(index)
Change the selected database for the current connection (see https://redis.io/commands/select)
SERVER
$r->bgrewriteaof()
Asynchronously rewrite the append-only file (see https://redis.io/commands/bgrewriteaof)
$r->bgsave()
Asynchronously save the dataset to disk (see https://redis.io/commands/bgsave)
$r->client_getname()
Get the current connection name (see https://redis.io/commands/client-getname)
$r->client_kill([ip:port], [ID client-id], [TYPE normal|slave|pubsub], [ADDR ip:port], [SKIPME yes/no])
Kill the connection of a client (see https://redis.io/commands/client-kill)
$r->client_list()
Get the list of client connections (see https://redis.io/commands/client-list)
$r->client_pause(timeout)
Stop processing commands from clients for some time (see https://redis.io/commands/client-pause)
$r->client_setname(connection-name)
Set the current connection name (see https://redis.io/commands/client-setname)
$r->cluster_slots()
Get array of Cluster slot to node mappings (see https://redis.io/commands/cluster-slots)
$r->command()
Get array of Redis command details (see https://redis.io/commands/command)
$r->command_count()
Get total number of Redis commands (see https://redis.io/commands/command-count)
$r->command_getkeys()
Extract keys given a full Redis command (see https://redis.io/commands/command-getkeys)
$r->command_info(command-name [command-name ...])
Get array of specific Redis command details (see https://redis.io/commands/command-info)
$r->config_get(parameter)
Get the value of a configuration parameter (see https://redis.io/commands/config-get)
$r->config_resetstat()
Reset the stats returned by INFO (see https://redis.io/commands/config-resetstat)
$r->config_rewrite()
Rewrite the configuration file with the in memory configuration (see https://redis.io/commands/config-rewrite)
$r->config_set(parameter, value)
Set a configuration parameter to the given value (see https://redis.io/commands/config-set)
$r->dbsize()
Return the number of keys in the selected database (see https://redis.io/commands/dbsize)
$r->debug_object(key)
Get debugging information about a key (see https://redis.io/commands/debug-object)
$r->debug_segfault()
Make the server crash (see https://redis.io/commands/debug-segfault)
$r->flushall()
Remove all keys from all databases (see https://redis.io/commands/flushall)
$r->flushdb()
Remove all keys from the current database (see https://redis.io/commands/flushdb)
$r->info([section])
Get information and statistics about the server (see https://redis.io/commands/info)
$r->lastsave()
Get the UNIX time stamp of the last successful save to disk (see https://redis.io/commands/lastsave)
$r->lindex(key, index)
Get an element from a list by its index (see https://redis.io/commands/lindex)
$r->linsert(key, BEFORE|AFTER, pivot, value)
Insert an element before or after another element in a list (see https://redis.io/commands/linsert)
$r->llen(key)
Get the length of a list (see https://redis.io/commands/llen)
$r->lpop(key)
Remove and get the first element in a list (see https://redis.io/commands/lpop)
$r->lpush(key, value [value ...])
Prepend one or multiple values to a list (see https://redis.io/commands/lpush)
$r->lpushx(key, value)
Prepend a value to a list, only if the list exists (see https://redis.io/commands/lpushx)
$r->lrange(key, start, stop)
Get a range of elements from a list (see https://redis.io/commands/lrange)
$r->lrem(key, count, value)
Remove elements from a list (see https://redis.io/commands/lrem)
$r->lset(key, index, value)
Set the value of an element in a list by its index (see https://redis.io/commands/lset)
$r->ltrim(key, start, stop)
Trim a list to the specified range (see https://redis.io/commands/ltrim)
$r->monitor()
Listen for all requests received by the server in real time (see https://redis.io/commands/monitor)
$r->role()
Return the role of the instance in the context of replication (see https://redis.io/commands/role)
$r->rpop(key)
Remove and get the last element in a list (see https://redis.io/commands/rpop)
$r->rpoplpush(source, destination)
Remove the last element in a list, append it to another list and return it (see https://redis.io/commands/rpoplpush)
$r->rpush(key, value [value ...])
Append one or multiple values to a list (see https://redis.io/commands/rpush)
$r->rpushx(key, value)
Append a value to a list, only if the list exists (see https://redis.io/commands/rpushx)
$r->save()
Synchronously save the dataset to disk (see https://redis.io/commands/save)
$r->shutdown([NOSAVE], [SAVE])
Synchronously save the dataset to disk and then shut down the server (see https://redis.io/commands/shutdown)
$r->slaveof(host, port)
Make the server a slave of another instance, or promote it as master (see https://redis.io/commands/slaveof)
$r->slowlog(subcommand, [argument])
Manages the Redis slow queries log (see https://redis.io/commands/slowlog)
$r->sync()
Internal command used for replication (see https://redis.io/commands/sync)
$r->time()
Return the current server time (see https://redis.io/commands/time)
ACKNOWLEDGEMENTS
The following persons contributed to this project (random order):
Aaron Crane (pipelining and AUTOLOAD caching support)
Dirk Vleugels
Flavio Poletti
Jeremy Zawodny
sunnavy at bestpractical.com
Thiago Berlitz Rondon
Ulrich Habel
Ivan Kruglov
Steffen Mueller <smueller@cpan.org>
AUTHORS
Pedro Melo <melo@cpan.org>
Damien Krotkine <dams@cpan.org>
COPYRIGHT AND LICENSE
This software is Copyright (c) 2015 by Pedro Melo, Damien Krotkine.
This is free software, licensed under:
The Artistic License 2.0 (GPL Compatible)
Author: PerlRedis
Source Code: https://github.com/PerlRedis/perl-redis
1667398500
This script is written in shell, in order to quickly deploy LEMP/LAMP/LNMP/LNMPA/LTMP (Linux, Nginx/Tengine/OpenResty, MySQL/MariaDB/Percona, PHP, JAVA) in a production environment. It is applicable to 64-bit CentOS 7 ~ 8 (including RedHat, AlmaLinux, Rocky), Debian 9 ~ 11, Ubuntu 16 ~ 21, and Fedora 27+.
Script properties:
Install the dependencies for your distro, download the source and run the installation script.
yum -y install wget screen
apt-get -y install wget screen
wget http://mirrors.linuxeye.com/oneinstack-full.tar.gz
tar xzf oneinstack-full.tar.gz
cd oneinstack
If you disconnect during installation, you can execute the command screen -r oneinstack to reconnect to the install window. Start the named session first:
screen -S oneinstack
If you need to modify the directory (installation, data storage, Nginx logs), modify the options.conf file before running install.sh:
./install.sh
~/oneinstack/install.sh --mphp_ver 54
~/oneinstack/addons.sh
~/oneinstack/vhost.sh
~/oneinstack/vhost.sh --del
~/oneinstack/pureftpd_vhost.sh
~/oneinstack/backup_setup.sh // Backup parameters
~/oneinstack/backup.sh // Perform the backup immediately
crontab -l // Can be added as a scheduled task, such as an automatic backup every day at 1:00
0 1 * * * cd ~/oneinstack && ./backup.sh > /dev/null 2>&1 &
Nginx/Tengine/OpenResty:
systemctl {start|stop|status|restart|reload} nginx
MySQL/MariaDB/Percona:
systemctl {start|stop|restart|reload|status} mysqld
PostgreSQL:
systemctl {start|stop|restart|status} postgresql
MongoDB:
systemctl {start|stop|status|restart|reload} mongod
PHP:
systemctl {start|stop|restart|reload|status} php-fpm
Apache:
systemctl {start|restart|stop} httpd
Tomcat:
systemctl {start|stop|status|restart} tomcat
Pure-FTPd:
systemctl {start|stop|restart|status} pureftpd
Redis:
systemctl {start|stop|status|restart|reload} redis-server
Memcached:
systemctl {start|stop|status|restart|reload} memcached
~/oneinstack/upgrade.sh
~/oneinstack/uninstall.sh
For feedback, questions, and to follow the progress of the project:
Telegram Group
OneinStack
Author: Oneinstack
Source Code: https://github.com/oneinstack/oneinstack
License: Apache-2.0 license
1667352668
Redis::Fast - Perl binding for Redis database
## Defaults to $ENV{REDIS_SERVER} or 127.0.0.1:6379
my $redis = Redis::Fast->new;
my $redis = Redis::Fast->new(server => 'redis.example.com:8080');
## Set the connection name (requires Redis 2.6.9)
my $redis = Redis::Fast->new(
server => 'redis.example.com:8080',
name => 'my_connection_name',
);
my $generation = 0;
my $redis = Redis::Fast->new(
server => 'redis.example.com:8080',
name => sub { "cache-$$-".++$generation },
);
## Use UNIX domain socket
my $redis = Redis::Fast->new(sock => '/path/to/socket');
## Enable auto-reconnect
## Try to reconnect every 500ms up to 60 seconds until success
## Die if you can't after that
my $redis = Redis::Fast->new(reconnect => 60, every => 500_000);
## Try each 100ms up to 2 seconds (every is in microseconds)
my $redis = Redis::Fast->new(reconnect => 2, every => 100_000);
## Disable the automatic utf8 encoding => much more performance
## !!!! This will be the default after 2.000, see ENCODING below
my $redis = Redis::Fast->new(encoding => undef);
## Use all the regular Redis commands, they all accept a list of
## arguments
## See http://redis.io/commands for full list
$redis->get('key');
$redis->set('key' => 'value');
$redis->sort('list', 'DESC');
$redis->sort(qw{list LIMIT 0 5 ALPHA DESC});
## Add a coderef argument to run a command in the background
$redis->sort(qw{list LIMIT 0 5 ALPHA DESC}, sub {
my ($reply, $error) = @_;
die "Oops, got an error: $error\n" if defined $error;
print "$_\n" for @$reply;
});
long_computation();
$redis->wait_all_responses;
## or
$redis->wait_one_response();
## Or run a large batch of commands in a pipeline
my %hash = _get_large_batch_of_commands();
$redis->hset('h', $_, $hash{$_}, sub {}) for keys %hash;
$redis->wait_all_responses;
## Publish/Subscribe
$redis->subscribe(
'topic_1',
'topic_2',
sub {
my ($message, $topic, $subscribed_topic) = @_
## $subscribed_topic can be different from topic if
## you use psubscribe() with wildcards
}
);
$redis->psubscribe('nasdaq.*', sub {...});
## Blocks and waits for messages, calls subscribe() callbacks
## ... forever
my $timeout = 10;
$redis->wait_for_messages($timeout) while 1;
## ... until some condition
my $keep_going = 1; ## other code will set to false to quit
$redis->wait_for_messages($timeout) while $keep_going;
$redis->publish('topic_1', 'message');
DESCRIPTION
Redis::Fast is a wrapper around Salvatore Sanfilippo's hiredis C client. It is compatible with Redis.pm.
This version supports protocol 2.x (multi-bulk) or later of Redis available at https://github.com/antirez/redis/.
Besides auto-reconnecting when the connection is closed, Redis::Fast supports reconnecting on specified errors via the reconnect_on_error option. Here's an example that will reconnect when receiving the READONLY error:
my $r = Redis::Fast->new(
reconnect => 1, # The value greater than 0 is required
reconnect_on_error => sub {
my ($error, $ret, $command) = @_;
if ($error =~ /READONLY You can't write against a read only slave/) {
# force reconnect
return 1;
}
# do nothing
return -1;
},
);
This feature is useful when using Amazon ElastiCache. Once failover happens, Amazon ElastiCache will switch the master we are currently connected to into a slave, causing subsequent writes to fail with the error READONLY. Using reconnect_on_error, we can force the connection to reconnect on this error in order to connect to the new master. If your ElastiCache Redis has the close-on-slave-write option enabled, this feature might be unnecessary.
The return value of reconnect_on_error should be greater than -2. -1 means that Redis::Fast behaves the same as without this option. 0 and greater than 0 mean that Redis::Fast forces a reconnect, and then waits that many seconds before it will force the next reconnect. The unit is seconds, and the type is double; for example, 0.01 means 10 milliseconds.
Note: This feature is not supported for the subscribed mode.
PERFORMANCE IN SYNCHRONOUS MODE
Benchmark: running 00_ping, 10_set, 11_set_r, 20_get, 21_get_r, 30_incr, 30_incr_r, 40_lpush, 50_lpop, 90_h_get, 90_h_set for at least 5 CPU seconds...
00_ping: 8 wallclock secs ( 0.69 usr + 4.77 sys = 5.46 CPU) @ 5538.64/s (n=30241)
10_set: 8 wallclock secs ( 1.07 usr + 4.01 sys = 5.08 CPU) @ 5794.09/s (n=29434)
11_set_r: 7 wallclock secs ( 0.42 usr + 4.84 sys = 5.26 CPU) @ 5051.33/s (n=26570)
20_get: 8 wallclock secs ( 0.69 usr + 4.82 sys = 5.51 CPU) @ 5080.40/s (n=27993)
21_get_r: 7 wallclock secs ( 2.21 usr + 3.09 sys = 5.30 CPU) @ 5389.06/s (n=28562)
30_incr: 7 wallclock secs ( 0.69 usr + 4.73 sys = 5.42 CPU) @ 5671.77/s (n=30741)
30_incr_r: 7 wallclock secs ( 0.85 usr + 4.31 sys = 5.16 CPU) @ 5824.42/s (n=30054)
40_lpush: 8 wallclock secs ( 0.60 usr + 4.77 sys = 5.37 CPU) @ 5832.59/s (n=31321)
50_lpop: 7 wallclock secs ( 1.24 usr + 4.17 sys = 5.41 CPU) @ 5112.75/s (n=27660)
90_h_get: 7 wallclock secs ( 0.63 usr + 4.65 sys = 5.28 CPU) @ 5716.29/s (n=30182)
90_h_set: 7 wallclock secs ( 0.65 usr + 4.74 sys = 5.39 CPU) @ 5593.14/s (n=30147)
Redis::Fast is 50% faster than Redis.pm.
Benchmark: running 00_ping, 10_set, 11_set_r, 20_get, 21_get_r, 30_incr, 30_incr_r, 40_lpush, 50_lpop, 90_h_get, 90_h_set for at least 5 CPU seconds...
00_ping: 9 wallclock secs ( 0.18 usr + 4.84 sys = 5.02 CPU) @ 7939.24/s (n=39855)
10_set: 10 wallclock secs ( 0.31 usr + 5.40 sys = 5.71 CPU) @ 7454.64/s (n=42566)
11_set_r: 9 wallclock secs ( 0.31 usr + 4.87 sys = 5.18 CPU) @ 7993.05/s (n=41404)
20_get: 10 wallclock secs ( 0.27 usr + 4.84 sys = 5.11 CPU) @ 8350.68/s (n=42672)
21_get_r: 10 wallclock secs ( 0.32 usr + 5.17 sys = 5.49 CPU) @ 8238.62/s (n=45230)
30_incr: 9 wallclock secs ( 0.23 usr + 5.27 sys = 5.50 CPU) @ 8221.82/s (n=45220)
30_incr_r: 8 wallclock secs ( 0.28 usr + 4.91 sys = 5.19 CPU) @ 8092.29/s (n=41999)
40_lpush: 9 wallclock secs ( 0.18 usr + 5.06 sys = 5.24 CPU) @ 8312.02/s (n=43555)
50_lpop: 9 wallclock secs ( 0.20 usr + 4.84 sys = 5.04 CPU) @ 8010.12/s (n=40371)
90_h_get: 9 wallclock secs ( 0.19 usr + 5.51 sys = 5.70 CPU) @ 7467.72/s (n=42566)
90_h_set: 8 wallclock secs ( 0.28 usr + 4.83 sys = 5.11 CPU) @ 7724.07/s (n=39470)
PERFORMANCE IN PIPELINE MODE
#!/usr/bin/perl
use warnings;
use strict;
use Time::HiRes qw/time/;
use Redis;
use Redis::Fast;

my $count = 100000;

{
    my $r = Redis->new;
    my $start = time;
    for (1..$count) {
        $r->set('hoge', 'fuga', sub {});
    }
    $r->wait_all_responses;
    printf "Redis.pm:\n%.2f/s\n", $count / (time - $start);
}

{
    my $r = Redis::Fast->new;
    my $start = time;
    for (1..$count) {
        $r->set('hoge', 'fuga', sub {});
    }
    $r->wait_all_responses;
    printf "Redis::Fast:\n%.2f/s\n", $count / (time - $start);
}
Redis::Fast is 4x faster than Redis.pm in pipeline mode.
Redis.pm:
22588.95/s
Redis::Fast:
81098.01/s
AUTHOR
Ichinose Shogo shogo82148@gmail.com
SEE ALSO
LICENSE
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Author: shogo82148
Source Code: https://github.com/shogo82148/Redis-Fast
License: View license
1667331120
As V9 is largely incompatible with previous versions, please read the migration guide carefully to ensure the smoothest migration possible. One of the biggest changes is the configuration system, which is now an object replacing the primitive array we used to implement back then. Also, please note that V9 requires at least PHP 8 to work properly.
More information in the Wiki. The simplicity of abstraction: one class for many cache backends. You don't need to rewrite your code over and over again.
💡 Feel free to propose a driver by making a new Pull Request, they are welcome !
Regular drivers | High performances drivers | Development drivers | Cluster-Aggregated drivers |
---|---|---|---|
Apcu (APC support removed) | Arangodb | Devnull | FullReplicationCluster |
Dynamodb (AWS) | Cassandra | Devrandom | SemiReplicationCluster |
Files | CouchBasev3 (Couchbase for SDK 2 support removed) | Memstatic | MasterSlaveReplicationCluster |
Firestore (GCP) | Couchdb | | RandomReplicationCluster |
Leveldb | Mongodb | | |
Memcache(d) | Predis | | |
Solr (Via Solarium 6.x) | Redis | | |
Sqlite | Ssdb | | |
Wincache | Zend Memory Cache | | |
Zend Disk Cache | | | |
* Driver descriptions available in DOCS/DRIVERS.md
Phpfastcache has been developed over the years with 3 main goals: performance, security, and portability.
Phpfastcache provides you with a lot of useful APIs:
Method | Return | Description |
---|---|---|
addTag($tagName) | ExtendedCacheItemInterface | Adds a tag |
addTags(array $tagNames) | ExtendedCacheItemInterface | Adds multiple tags |
append($data) | ExtendedCacheItemInterface | Appends data to a string or an array (push) |
decrement($step = 1) | ExtendedCacheItemInterface | Decrements an integer item by $step (see increment()) |
expiresAfter($ttl) | ExtendedCacheItemInterface | Allows you to extend the lifetime of an entry without altering its value (formerly known as touch()) |
expiresAt($expiration) | ExtendedCacheItemInterface | Sets the expiration time for this cache item (as a DateTimeInterface object) |
get() | mixed | The getter, obviously, returns your cache object |
getCreationDate() | \DatetimeInterface | Gets the creation date for this cache item (as a DateTimeInterface object) * |
getDataAsJsonString() | string | Returns the data as a well-formatted JSON string |
getEncodedKey() | string | Returns the final and internal item identifier (key), generally used for debug purposes |
getExpirationDate() | \DatetimeInterface | Gets the expiration date as a Datetime object |
getKey() | string | Returns the item identifier (key) |
getLength() | int | Gets the data length if the data is a string or an array, or an object implementing the \Countable interface. |
getModificationDate() | \DatetimeInterface | Gets the modification date for this cache item (as a DateTimeInterface object) * |
getTags() | string[] | Gets the tags |
getTagsAsString($separator = ', ') | string | Gets the tags as a string separated by $separator |
getTtl() | int | Gets the remaining Time To Live as an integer |
increment($step = 1) | ExtendedCacheItemInterface | Increments an integer item by $step |
isEmpty() | bool | Checks whether the data is empty, regardless of the hit/miss status. |
isExpired() | bool | Checks if your cache entry is expired |
isHit() | bool | Checks if your cache entry exists and is still valid, it's the equivalent of isset() |
isNull() | bool | Checks whether the data is null, regardless of the hit/miss status. |
prepend($data) | ExtendedCacheItemInterface | Prepends data to a string or an array (unshift) |
removeTag($tagName) | ExtendedCacheItemInterface | Removes a tag |
removeTags(array $tagNames) | ExtendedCacheItemInterface | Removes multiple tags |
set($value) | ExtendedCacheItemInterface | The setter, for those who missed it; the value can be anything except resources or non-serializable objects (e.g. PDO objects, file pointers, etc.) |
setCreationDate($expiration) | ExtendedCacheItemInterface | Sets the creation date for this cache item (as a DateTimeInterface object) * |
setEventManager($evtMngr) | ExtendedCacheItemInterface | Sets the event manager |
setExpirationDate($expiration) | ExtendedCacheItemInterface | Alias of expiresAt() (for more code logic) |
setModificationDate($expiration) | ExtendedCacheItemInterface | Sets the modification date for this cache item (as a DateTimeInterface object) * |
setTags(array $tags) | ExtendedCacheItemInterface | Sets multiple tags |
* Requires the configuration directive "itemDetailedDate" to be enabled, otherwise a \LogicException will be thrown
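To make these signatures concrete, here is a minimal sketch chaining a few of the item methods above (assuming the Files driver used in the examples further down; key, value, and tag names are made up for illustration):

<?php
use Phpfastcache\CacheManager;

// Sketch only: exercise a few ExtendedCacheItemInterface methods from the table above.
$pool = CacheManager::getInstance('files');

$item = $pool->getItem('demo-key');   // getKey() will return 'demo-key'
$item->set(['alpha', 'beta'])         // the setter
    ->expiresAfter(300)               // lifetime in seconds
    ->addTag('demo');                 // tag it for later retrieval
$pool->save($item);

echo $item->getTtl();                 // remaining Time To Live, as an integer
echo $item->getDataAsJsonString();    // '["alpha","beta"]'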
Methods (in alphabetical order) | Return | Description |
---|---|---|
appendItemsByTag($tagName, $data) | bool | Appends items by a tag |
appendItemsByTags(array $tagNames, $data) | bool | Appends items by one of multiple tag names |
attachItem($item) | void | (Re-)attaches an item to the pool |
clear() | bool | Allows you to completely empty the cache and restart from the beginning |
commit() | bool | Persists any deferred cache items |
decrementItemsByTag($tagName, $step = 1) | bool | Decrements items by a tag |
decrementItemsByTags(array $tagNames, $step = 1) | bool | Decrements items by one of multiple tag names |
deleteItem($key) | bool | Deletes an item |
deleteItems(array $keys) | bool | Deletes one or more items |
deleteItemsByTag($tagName) | bool | Deletes items by a tag |
deleteItemsByTags(array $tagNames, int $strategy) | bool | Deletes items by one of multiple tag names |
detachItem($item) | void | Detaches an item from the pool |
getConfig() | ConfigurationOption | Returns the configuration object |
getConfigOption($optionName) | mixed | Returns a configuration value by its key $optionName |
getDefaultConfig() | ConfigurationOption | Returns the default configuration object (not altered by the object instance) |
getDriverName() | string | Returns the current driver name (without the namespace) |
getEventManager() | EventManagerInterface | Gets the event manager |
getHelp() | string | Provides a very basic help for a specific driver |
getInstanceId() | string | Returns the instance ID |
getItem($key) | ExtendedCacheItemInterface | Retrieves an item and returns an empty item if not found |
getItems(array $keys) | ExtendedCacheItemInterface[] | Retrieves one or more items and returns an array of items |
getItemsAsJsonString(array $keys) | string | Returns a JSON string that represents an array of items |
getItemsByTag($tagName, $strategy) | ExtendedCacheItemInterface[] | Returns items by a tag |
getItemsByTags(array $tagNames, $strategy) | ExtendedCacheItemInterface[] | Returns items by one of multiple tag names |
getItemsByTagsAsJsonString(array $tagNames, $strategy) | string | Returns a JSON string that represents an array of the items matching the tags |
getStats() | DriverStatistic | Returns the cache statistics as an object, useful for checking disk space used by the cache etc. |
hasEventManager() | bool | Checks whether an event manager is set |
hasItem($key) | bool | Tests if an item exists |
incrementItemsByTag($tagName, $step = 1, $strategy) | bool | Increments items by a tag |
incrementItemsByTags(array $tagNames, $step = 1, $strategy) | bool | Increments items by one of multiple tag names |
isAttached($item) | bool | Verify if an item is (still) attached |
prependItemsByTag($tagName, $data, $strategy) | bool | Prepends items by a tag |
prependItemsByTags(array $tagNames, $data, $strategy) | bool | Prepends items by one of multiple tag names |
save(CacheItemInterface $item) | bool | Persists a cache item immediately |
saveDeferred(CacheItemInterface $item) | bool | Sets a cache item to be persisted later |
saveMultiple(...$items) | bool | Persists multiple cache items immediately |
setEventManager(EventManagerInterface $evtMngr) | ExtendedCacheItemPoolInterface | Sets the event manager |
🆕 In V8, multiple strategies ($strategy) are now supported for tagging (a short sketch follows this list):

- TaggableCacheItemPoolInterface::TAG_STRATEGY_ONE allows you to get cache item(s) matching at least ONE of the specified tag(s). Default behavior.
- TaggableCacheItemPoolInterface::TAG_STRATEGY_ALL allows you to get cache item(s) matching ALL of the specified tag(s) (the cache item can have additional tags).
- TaggableCacheItemPoolInterface::TAG_STRATEGY_ONLY allows you to get cache item(s) matching ONLY the specified tag(s) (the cache item cannot have additional tags).

It also supports multiple calls, tagging, and setting up a folder for caching. Look at our examples folder for more information.
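A minimal sketch of the three strategies; keys and tags are illustrative, and the import path of TaggableCacheItemPoolInterface is assumed from the V8 codebase:

<?php
use Phpfastcache\CacheManager;
use Phpfastcache\Core\Pool\TaggableCacheItemPoolInterface;

$pool = CacheManager::getInstance('files');

// Fetch items carrying at least ONE of the tags (default behavior):
$some = $pool->getItemsByTags(['news', 'sports']);

// Fetch items carrying ALL of the tags (extra tags allowed):
$all = $pool->getItemsByTags(
    ['news', 'sports'],
    TaggableCacheItemPoolInterface::TAG_STRATEGY_ALL
);

// Delete items carrying ONLY these exact tags:
$pool->deleteItemsByTags(
    ['news', 'sports'],
    TaggableCacheItemPoolInterface::TAG_STRATEGY_ONLY
);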
Phpfastcache provides a class that gives you basic information about your Phpfastcache installation:
Phpfastcache\Api::getVersion();
Phpfastcache\Api::getChangelog();
Phpfastcache\Api::getPhpfastcacheVersion();
Phpfastcache\Api::getPhpfastcacheChangelog();
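For instance, a trivial usage sketch echoing the reported versions:

<?php
use Phpfastcache\Api;

echo Api::getVersion();             // version of the Phpfastcache API
echo Api::getPhpfastcacheVersion(); // version of the Phpfastcache release itself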
😅 Good news: as of the V6, a Psr16 adapter is provided to keep caching simple, using very basic getters and setters:
get($key, $default = null);
set($key, $value, $ttl = null);
delete($key);
clear();
getMultiple($keys, $default = null);
setMultiple($values, $ttl = null);
deleteMultiple($keys);
has($key);
Basic usage:
<?php
use Phpfastcache\Helper\Psr16Adapter;

$defaultDriver = 'Files';
$Psr16Adapter = new Psr16Adapter($defaultDriver);

if (!$Psr16Adapter->has('test-key')) {
    // Setter action
    $data = 'lorem ipsum';
    $Psr16Adapter->set('test-key', $data, 300); // 5 minutes
} else {
    // Getter action
    $data = $Psr16Adapter->get('test-key');
}

/**
 * Do your stuff with $data
 */
Internally, the Psr16 adapter calls the Phpfastcache Api via the cache manager.
📣 As of the V6, Phpfastcache provides an event mechanism. You can subscribe to an event by passing a Closure to an active event:
<?php
use Phpfastcache\EventManager;
use Phpfastcache\Core\Item\ExtendedCacheItemInterface;
use Phpfastcache\Core\Pool\ExtendedCacheItemPoolInterface;

/**
 * Bind the event callback
 */
EventManager::getInstance()->onCacheGetItem(function (ExtendedCacheItemPoolInterface $itemPool, ExtendedCacheItemInterface $item) {
    $item->set('[HACKED BY EVENT] ' . $item->get());
});
An event callback can be unbound, but you MUST have given the callback a name beforehand:
<?php
use Phpfastcache\EventManager;
use Phpfastcache\Core\Item\ExtendedCacheItemInterface;
use Phpfastcache\Core\Pool\ExtendedCacheItemPoolInterface;

/**
 * Bind the event callback with a name
 */
EventManager::getInstance()->onCacheGetItem(function (ExtendedCacheItemPoolInterface $itemPool, ExtendedCacheItemInterface $item) {
    $item->set('[HACKED BY EVENT] ' . $item->get());
}, 'myCallbackName');

/**
 * Unbind the event callback
 */
EventManager::getInstance()->unbindEventCallback('onCacheGetItem', 'myCallbackName');
🆕 As of the V8 you can simply subscribe to every event of Phpfastcache.
More information about the implementation and the events are available on the Wiki
📚 As of the V6, Phpfastcache provides some helpers to make your code easier.
More may come in the future; feel free to contribute!
Check out the WIKI to learn how to implement the aggregated cache clustering feature.
composer require phpfastcache/phpfastcache
<?php
use Phpfastcache\CacheManager;
use Phpfastcache\Config\ConfigurationOption;

// Set up the file path in your config.
// Please note that as of the V6.1 the "path" config
// can also be used for Unix sockets (Redis, Memcache, etc.)
CacheManager::setDefaultConfig(new ConfigurationOption([
    'path' => '/var/www/phpfastcache.com/dev/tmp', // or on Windows "C:/tmp/"
]));

// In your class or function, you can call the cache:
$InstanceCache = CacheManager::getInstance('files');

/**
 * Try to get $products from the cache first.
 * "product_page" is the identity keyword.
 */
$key = 'product_page';
$CachedString = $InstanceCache->getItem($key);

$your_product_data = [
    'First product',
    'Second product',
    'Third product',
    /* ... */
];

if (!$CachedString->isHit()) {
    $CachedString->set($your_product_data)->expiresAfter(5); // in seconds, also accepts Datetime
    $InstanceCache->save($CachedString); // Save the cache item just like you do with Doctrine and entities

    echo 'FIRST LOAD // WROTE OBJECT TO CACHE // RELOAD THE PAGE AND SEE // ';
    echo implode(', ', $CachedString->get()); // get() returns the array, so echo it joined
} else {
    echo 'READ FROM CACHE // ';
    echo $CachedString->get()[0]; // Will print 'First product'
}

/**
 * Use your products here or return them.
 */
echo implode('<br />', $CachedString->get()); // Will echo your product list
💾 Legacy support (Without Composer)
* See the file examples/withoutComposer.php for more information.
⚠️ The legacy autoload will be removed in the next major release ⚠️
Please include Phpfastcache through Composer by running composer require phpfastcache/phpfastcache.
For curious developers, there are a lot of other examples available here.
Found an issue or have an idea? Come here and let us know!
Author: PHPSocialNetwork
Source Code: https://github.com/PHPSocialNetwork/phpfastcache
License: MIT license
1667239260
This script is written in shell, in order to quickly deploy LEMP/LAMP/LNMP/LNMPA/LTMP (Linux, Nginx/Tengine/OpenResty, MySQL/MariaDB/Percona, PHP, JAVA) in a production environment. It is applicable to 64-bit versions of CentOS 7~8 (including RedHat, AlmaLinux, Rocky), Debian 8~11, Ubuntu 16~21, and Fedora 27+.
Script properties:
Install the dependencies for your distro, download the source and run the installation script.
yum -y install wget screen // CentOS/RedHat-based distros
apt-get -y install wget screen // Debian/Ubuntu-based distros
wget http://mirrors.linuxeye.com/oneinstack-full.tar.gz
tar xzf oneinstack-full.tar.gz
cd oneinstack
screen -S oneinstack
If you disconnect during installation, you can execute screen -r oneinstack to reconnect to the install window.
If you need to modify the directories (installation, data storage, Nginx logs), modify the options.conf file before running install.sh:
./install.sh
~/oneinstack/install.sh --mphp_ver 54 // Install an extra PHP version (here 5.4) alongside the default one
~/oneinstack/addons.sh // Install or uninstall optional addons
~/oneinstack/vhost.sh // Add a virtual host
~/oneinstack/vhost.sh --del // Delete a virtual host
~/oneinstack/pureftpd_vhost.sh // Manage Pure-FTPd virtual users
~/oneinstack/backup_setup.sh // Set backup parameters
~/oneinstack/backup.sh // Perform the backup immediately
crontab -e // Add a scheduled task, e.g. an automatic backup every day at 1:00:
0 1 * * * cd ~/oneinstack && ./backup.sh > /dev/null 2>&1
Nginx/Tengine/OpenResty:
systemctl {start|stop|status|restart|reload} nginx
MySQL/MariaDB/Percona:
systemctl {start|stop|restart|reload|status} mysqld
PostgreSQL:
systemctl {start|stop|restart|status} postgresql
MongoDB:
systemctl {start|stop|status|restart|reload} mongod
PHP:
systemctl {start|stop|restart|reload|status} php-fpm
Apache:
systemctl {start|restart|stop} httpd
Tomcat:
systemctl {start|stop|status|restart} tomcat
Pure-FTPd:
systemctl {start|stop|restart|status} pureftpd
Redis:
systemctl {start|stop|status|restart|reload} redis-server
Memcached:
systemctl {start|stop|status|restart|reload} memcached
~/oneinstack/upgrade.sh // Upgrade OneinStack and installed components
~/oneinstack/uninstall.sh // Uninstall OneinStack
For feedback, questions, and to follow the progress of the project: Telegram Group: OneinStack
Author: oneinstack
Source Code: https://github.com/oneinstack/lnmp
License: Apache-2.0 license
1667207068
Easy Tips
A knowledge base for PHP developers
flag | meaning |
---|---|
not-start | not started |
doing | in progress |
α | for reference only |
done | complete |
fixing | being fixed |
PHP (doing)
PHP code standards with PSR (including personal suggestions)
Base knowledge [RTFM]
Features
MySQL (doing)
Table sharding
SQL optimization
Master-slave replication
Redis (doing)
Docker
Design Patterns (done/fixing)
Creational patterns
Structural patterns
Behavioral patterns
Algorithms (doing)
Analysis
Examples
Sort algorithms (α)
Network basics (doing)
Computer basics (doing)
High concurrency (not-start)
run: php patterns/[folder-name]/test.php
for example, chain of responsibility: run php patterns/chainOfResponsibility/test.php
result:
request 5850c8354b298: token pass~
request 5850c8354b298: request frequent pass~
request 5850c8354b298: params pass~
request 5850c8354b298: sign pass~
request 5850c8354b298: auth pass~
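For context, here is a minimal chain-of-responsibility sketch in the spirit of that demo; the Handler, TokenHandler, and SignHandler names are hypothetical, not the repo's actual classes:

<?php
// Minimal chain-of-responsibility sketch: each handler runs its own check,
// then forwards the request to the next handler in the chain (PHP 8+).
abstract class Handler
{
    private ?Handler $next = null;

    public function setNext(Handler $next): Handler
    {
        $this->next = $next;
        return $next; // allow fluent chaining of handlers
    }

    public function handle(array $request): void
    {
        $this->check($request);
        $this->next?->handle($request);
    }

    abstract protected function check(array $request): void;
}

class TokenHandler extends Handler
{
    protected function check(array $request): void
    {
        echo "request {$request['id']}: token pass~\n";
    }
}

class SignHandler extends Handler
{
    protected function check(array $request): void
    {
        echo "request {$request['id']}: sign pass~\n";
    }
}

$chain = new TokenHandler();
$chain->setNext(new SignHandler());
$chain->handle(['id' => uniqid()]);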
run: php algorithm/test.php [algorithm name|help]
for example, bubble sort: run php algorithm/test.php bubble
result:
==========================bubble sort=========================
Array
(
[0] => 11
[1] => 67
[2] => 3
[3] => 121
[4] => 71
[5] => 6
[6] => 100
[7] => 45
[8] => 2
)
=========above is the original data==================below is the sorted result=============
Array
(
[0] => 2
[1] => 3
[2] => 6
[3] => 11
[4] => 45
[5] => 67
[6] => 71
[7] => 100
[8] => 121
)
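For reference, a bubble sort producing this kind of result can be sketched in a few lines of PHP; this is an illustration, not necessarily the repo's exact implementation:

<?php
// Bubble sort sketch: repeatedly swap adjacent out-of-order pairs;
// stop early once a full pass makes no swaps.
function bubbleSort(array $data): array
{
    $n = count($data);
    for ($i = 0; $i < $n - 1; $i++) {
        $swapped = false;
        for ($j = 0; $j < $n - 1 - $i; $j++) {
            if ($data[$j] > $data[$j + 1]) {
                [$data[$j], $data[$j + 1]] = [$data[$j + 1], $data[$j]];
                $swapped = true;
            }
        }
        if (!$swapped) {
            break; // already sorted
        }
    }
    return $data;
}

print_r(bubbleSort([11, 67, 3, 121, 71, 6, 100, 45, 2]));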
run: php redis/test.php [name|help]
for example, pessimistic-lock: run php redis/test.php p-lock
result:
exexute count increment 1~
count value: 1
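For context, a pessimistic lock like the one in this demo can be sketched with the phpredis extension; the key names and timings below are illustrative, not the repo's actual code:

<?php
// Pessimistic lock sketch with phpredis: SET with NX+EX acquires the lock
// atomically; only the holder may increment the counter.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$token = uniqid('', true);
// NX: only set if the key does not exist; EX: auto-expire after 5 s to avoid deadlock.
if ($redis->set('lock:count', $token, ['nx', 'ex' => 5])) {
    $count = $redis->incr('count');
    echo "execute count increment 1~\n";
    echo "count value: {$count}\n";
    // Release the lock only if we still own it
    // (a Lua script would make this check-and-delete atomic).
    if ($redis->get('lock:count') === $token) {
        $redis->del('lock:count');
    }
} else {
    echo "lock busy, try again later~\n";
}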
If you find something that is not right, you can open an issue or a pull request and I will fix it. THX~
Author: TIGERB
Source Code: https://github.com/TIGERB/easy-tips