This blog post demonstrates how to auto-scale your Redis based applications on Kubernetes. Redis is a widely used (and loved!) database that supports a rich set of data structures (String, Hash, Streams, Geospatial), as well as other features such as pub/sub messaging and clustering (HA). One such data structure is the List, which supports operations such as inserts (LINSERT etc.), reads (LRANGE), and deletes (LPOP etc.). But that's not all!
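To make that concrete, here is what those List operations look like in a `redis-cli` session (the key name `jobs` is just an example):

```
127.0.0.1:6379> LPUSH jobs "job1"
(integer) 1
127.0.0.1:6379> LPUSH jobs "job2"
(integer) 2
127.0.0.1:6379> LRANGE jobs 0 -1
1) "job2"
2) "job1"
127.0.0.1:6379> RPOP jobs
"job1"
```

Since LPUSH prepends to the head of the List, popping from the other end with RPOP gives you first-in, first-out behavior — exactly what a work queue needs.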
Redis Lists are quite versatile and are used as the backbone for implementing scalable architectural patterns such as producer-consumer (based on queues), where producer applications push items into a List, and consumers (also called workers) process those items. Popular projects such as resque and celery use Redis behind the scenes to implement background jobs.
In this blog, you will learn how to automatically scale your Celery workers that use Redis as the broker. There are multiple ways to achieve this — this blog uses the Kubernetes Event-driven Autoscaler (KEDA) to do the heavy lifting, including scaling up the worker fleet based on workload and also scaling it back down to zero if there are no tasks in the queue!
Autoscaling Celery worker processes using KEDA
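To give you an early feel for what this looks like, here is a sketch of a KEDA `ScaledObject` using its Redis Lists scaler. The Deployment name, Redis address, and replica counts below are placeholders for illustration — by default, Celery stores its queue in a Redis List named `celery`:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: celery-worker-scaler
spec:
  scaleTargetRef:
    name: celery-worker        # Deployment running the workers (placeholder name)
  minReplicaCount: 0           # scale all the way down to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        address: redis.default.svc.cluster.local:6379  # placeholder Redis address
        listName: celery       # Celery's default queue key in Redis
        listLength: "10"       # target number of pending tasks per worker replica
```

KEDA polls the length of the List and adjusts the replica count of the target Deployment accordingly — we will walk through the real configuration later in the post.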
Please note that this blog post uses a [Golang](https://golang.org/) application (thanks to [gocelery](https://github.com/gocelery/gocelery/)!) as an example, but the same applies to Python or any other application that uses the Celery protocol.
It covers the following topics:
_The sample code is available in [this GitHub repository](https://github.com/abhirockzz/redis-celery-kubernetes-keda)_
To start off, here is a quick round of introductions!
In a nutshell, Celery is a distributed message processing system. It uses brokers to orchestrate communication between clients and workers. Client applications add messages (tasks) to the broker, which are then delivered to one or more workers — this setup is horizontally scalable (and highly available), since you can have multiple workers share the processing load.
Although Celery is written in Python, the good thing is that the protocol can be implemented in any language. This means that client and worker applications can be written in completely different programming languages (say, a Node.js based client and a Python based worker), yet they will still interoperate, as long as they speak the Celery protocol!