After the events of 2020, more and more organizations are transforming their applications to become cloud native and pushing containerization further. Gartner predicts that by 2023, 70 percent of global organizations will be running more than two containerized applications in production. As of 2019, more than 50 percent of cloud native projects in production were born in the cloud (purpose-built rather than reworked), and adoption of the approach is only increasing. For more on becoming cloud native, don’t miss our most recent blog on the subject.

The typical requirement for becoming cloud native is to run an application at scale and speed without risk of downtime. With Kubernetes, you can containerize such an app and deploy it on a cluster to achieve this: Kubernetes spins up containers inside pods to run the app and ensures that the resources it needs are distributed cleanly across the available infrastructure.

But scaling containers in and out on demand, according to an application’s usage, is a tough job. This is where Azure Functions can help support your business-critical workloads. By running Azure Functions on Kubernetes with KEDA, you can scale your application resources in or out dynamically as demand requires.

Azure Functions Running in Kubernetes

Azure Functions is a fully managed serverless service: you write code against the Azure Functions runtime using the Azure Functions programming model, publish it to the cloud, and the service runs, scales, and manages that code for you.

Scaling in Azure Functions is fully event-driven. The code you write is invoked by an assigned trigger, for example when somebody clicks the checkout button on your website. Once you publish that code to Azure Functions, the service starts listening for checkout events. When one arrives, it spins up enough instances to run the code you wrote, then scales back down to zero. With the service, you only pay while your code is running. This is the behavior when you use Azure Functions outside a Kubernetes cluster.
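As a rough illustration of that event-driven flow, here is a plain-Python sketch (not the actual Azure Functions SDK; the queue and the `process_checkout` handler are hypothetical stand-ins for a real trigger source and your function code):

```python
from collections import deque

# Hypothetical stand-in for an event source; in Azure Functions this
# would be a real trigger such as an Azure Storage queue.
checkout_events = deque([
    {"order_id": 1, "total": 29.99},
    {"order_id": 2, "total": 9.50},
])

def process_checkout(event):
    # Your function code: invoked once per checkout event.
    return f"charged order {event['order_id']}: ${event['total']:.2f}"

# The platform drains the queue, invoking the function per event,
# then scales the workers back down to zero once the queue is empty.
results = [process_checkout(e) for e in list(checkout_events)]
checkout_events.clear()
print(len(checkout_events))  # 0 -> no pending events, scale to zero
```

The point of the sketch is the lifecycle: nothing runs (and nothing is billed) until an event arrives, and everything winds back down to zero afterwards.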

With KEDA, your functions do not consume cluster resources when they are idle, but you are still paying for the cluster itself, so the cost does not vary with usage.

Leveraging KEDA

Kubernetes is not well suited to event-driven scaling out of the box, because by default it scales on resource metrics such as CPU and memory. KEDA is a Cloud Native Computing Foundation (CNCF) sandbox project that provides an event-driven scale controller which can run inside any Kubernetes cluster alongside the Horizontal Pod Autoscaler (HPA). KEDA monitors the rate of events and proactively scales a container before there is any impact on CPU. It lets containers scale to and from zero, the same way an Azure Functions or AWS Lambda service in the cloud can. KEDA is completely open source and can be installed on any cluster, which makes it very non-intrusive: you can add KEDA to a cluster that already has deployments, and map it only to the workloads you want to scale.
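The scale decision KEDA drives can be sketched with the standard HPA formula, desiredReplicas = ceil(currentMetric / targetValue), plus KEDA’s activation to and from zero. A minimal sketch, assuming a queue-length metric and an example target of 5 messages per replica (the numbers here are illustrative, not KEDA defaults for every scaler):

```python
import math

def desired_replicas(queue_length: int, target_per_replica: int = 5,
                     max_replicas: int = 100) -> int:
    # KEDA activates the workload from zero as soon as events exist;
    # the HPA then computes ceil(metric / target), capped at a maximum.
    if queue_length == 0:
        return 0  # scale to zero: no events, no pods
    return min(max_replicas, math.ceil(queue_length / target_per_replica))

print(desired_replicas(0))   # 0
print(desired_replicas(3))   # 1
print(desired_replicas(42))  # 9
```

The scale-to-zero branch is what KEDA adds on top of the HPA, which on its own cannot take a workload below one replica.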

As well as acting as an agent that activates and deactivates deployments in the Kubernetes cluster to scale, KEDA acts as a Kubernetes metrics server, exposing event data such as queue length to the HPA to drive scaling. There are several ways to deploy KEDA on a Kubernetes cluster: Helm charts, OperatorHub, or plain YAML declarations. In this article, we use the YAML declaration of the latest KEDA version, currently 2.2.0, to deploy it on the cluster.
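Once KEDA is installed, you describe what to scale with a ScaledObject custom resource. A minimal sketch for KEDA 2.x might look like the following; the deployment name, queue name, and thresholds are placeholders for illustration, not values from this article:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-scaler
spec:
  scaleTargetRef:
    name: checkout-processor    # an existing Deployment to scale
  minReplicaCount: 0            # allow scale to zero
  maxReplicaCount: 20
  triggers:
    - type: azure-queue
      metadata:
        queueName: checkout-events
        queueLength: "5"        # target messages per replica
        connectionFromEnv: AzureWebJobsStorage
```

KEDA’s releases ship a single YAML declaration per version (e.g. `keda-2.2.0.yaml` on the GitHub releases page) that you can install with `kubectl apply -f`, after which applying a ScaledObject like the one above wires your deployment to the event source.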


Optimizing Azure Functions on Kubernetes with KEDA