The predominant use cases involve speed to market, implementing a DevOps methodology, cost savings, and modernization.
Originally published by Tom Smith at https://dzone.com
To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What are a couple of use cases you’d like to highlight where K8s helped solve a specific business problem?" Here’s what we learned!
- If you want to run your application in the cloud without having to worry about servers, that’s where we come in. Building on top of K8s has helped us move faster with a smaller team. You can run Elasticsearch, WordPress, MySQL, or Mongo, and we manage each one for you. That reduces the time you have to spend on DevOps and frees DevOps to work with engineering to make apps more efficient.
- Most customers are using microservices, migrating away from monolithic apps and gluing services together for seamless integration. They are able to develop much faster with a smaller team.
- Reduce time to market. Rapid coverage creation shortens the time to get coverage in place and reduces the headcount and skills required on the UX side. Integrity is maintained when scaling. Customers adopting K8s in the cloud are undergoing a digital transformation, yet many testing infrastructures are not set up to deal with K8s. Orchestration and declarative resource requirements let you scale seamlessly, which is a paradigm shift from on-prem.
- Our move to K8s was really about cleaning up technical debt. We had developed a set of home-grown (and fragile) orchestration scripts for managing service deployments. When we moved to K8s, there were still a few custom parts, but they were built on a much more solid foundation, and overall, we had a much more maintainable system.
- K8s also helps our service teams develop and deploy more quickly: with K8s, we’ve been able to develop services that are less coupled to each other. The interaction between service teams is mediated through stronger and more standardized abstractions, which means they can work more independently.
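The "stronger and more standardized abstractions" mentioned above are typically Kubernetes Services: each team publishes a stable name and port, and consumers never need to know how the pods behind it are deployed. A minimal sketch of such a manifest, built here as a plain Python dict for illustration (the service name and ports are hypothetical):

```python
# Sketch of a Service manifest: the standardized abstraction that decouples
# teams. A hypothetical "billing" team owns this contract.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "billing"},
    "spec": {
        "selector": {"app": "billing"},               # which pods back it
        "ports": [{"port": 80, "targetPort": 8080}],  # the stable contract
    },
}

# Other teams call http://billing via cluster DNS, regardless of how the
# billing team deploys, scales, or replaces its pods.
print(service["metadata"]["name"], service["spec"]["ports"][0]["port"])
```

Because the selector and port are the only coupling points, the billing team can change images, replica counts, or even languages without coordinating with its consumers.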
- 1) K8s speeds time to market: One of the core principles of DevOps is that the team responsible for writing code owns that code throughout its lifecycle — across development, QA, deployment, and operations. The team writes, tests, deploys, monitors, supports, triages and debugs that code.
- We have seen, in many cases, the challenge with this new model is that each and every team needs to develop and implement expertise across the entire application lifecycle, which requires lots of extra time and effort — especially as app development scales. K8s has helped many of our users and customers dramatically here, as it automates DevOps tasks, and builds best-practices directly into the environment. This not only lowers the manual overhead required from DevOps teams, but it also accelerates every phase of the application life cycle, and in so doing speeds time to market.
- 2) K8s can actually ease compliance: Proving compliance in an “everything as code” environment has initially proven challenging for many of our customers. The fact is, dev-centric teams are able to “see the compliance in the code” but security, audit and compliance teams who are less dev-centric need to see the results of policy rules in a different way.
- When compliance can be built into the infrastructure, and that infrastructure can output the results of the policy in a human-consumable way, suddenly not only can guardrails be made tighter and more pervasive, but they can be implemented in more holistic and seamless ways. It’s a win-win.
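The idea of infrastructure that "outputs the results of the policy in a human-consumable way" can be sketched with a toy compliance rule. This is illustrative only, not any specific product's policy engine; the rule (no privileged containers) and the pod spec are hypothetical:

```python
# A minimal compliance-as-code sketch: scan a pod spec for privileged
# containers and emit a verdict an audit team can read without reading code.
def check_no_privileged(pod_spec):
    """Return (container name, verdict) pairs for audit review."""
    results = []
    for c in pod_spec.get("containers", []):
        privileged = c.get("securityContext", {}).get("privileged", False)
        verdict = "FAIL: runs privileged" if privileged else "PASS"
        results.append((c["name"], verdict))
    return results

pod_spec = {
    "containers": [
        {"name": "web", "securityContext": {"privileged": False}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ]
}

for name, verdict in check_no_privileged(pod_spec):
    print(f"{name}: {verdict}")
# web: PASS
# sidecar: FAIL: runs privileged
```

In practice this role is filled by admission controllers or policy engines evaluating rules against manifests at deploy time, with the same pass/fail output feeding audit reports.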
- As a managed service provider that began with a database-as-a-service solution, everything we built was centered around it, and the design choices were driven by faster time to market. As we grew the business, offering more value-adds to our customers with the likes of Apache Kafka, Spark, Zeppelin and, soon, Elasticsearch, our infrastructure grew complex, making it harder to maintain and extend.
- A move towards an infrastructure built on microservices-based containerized applications was inevitable. K8s was our unanimous choice for deploying those microservices because it is open source, has a growing developer ecosystem, and is backed by industry trends and standardization efforts. We are on the verge of deploying one of our key business-impacting components as a container application on K8s. Other components will follow, and we are also excited about the opportunities to optimize infrastructure costs with this move.
- CI/CD and DevOps automation is the most popular use case.
- K8s makes it possible to move into a continuous development and continuous deployment structure to model and improve. K8s makes sure deliveries go to the right place at the right time. You can easily control the delivery aspects, less so the development aspects. In these environments, we’re able to see if there is a failure and what caused it; the toolset lets you look at the big picture and see where the failure is.
- People building apps on K8s are building large apps and offloading pieces like email delivery and Kafka messaging. We make managed services a first-class citizen in K8s. You can build tools that understand K8s and train models around how you want applications to perform: the system learns from container performance and tunes it against your stated performance objectives. Better performance means more cost savings; the better you can optimize infrastructure, the less a high-performing infrastructure costs.
- Many companies are deploying (or planning to deploy) a wide variety of applications in the cloud and on-prem using K8s-orchestrated container management environments: banking and FSI, HPC, big data, animation, and ML, to name several.
- The list also includes a broad range of ISV applications, including stateful applications such as SQL relational database products. The business benefits realized are:
- Reduced infrastructure costs.
- Easier and rapid application deployment methods.
- Automatic scale-in and -out to adjust quickly to application workload demands.
- Built-in continuous availability, as many online applications require "always-on" availability.
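The automatic scale-in and scale-out benefit above is usually realized with a HorizontalPodAutoscaler. A minimal sketch of such a manifest, built here as a plain Python dict for illustration (the target Deployment name "web" and the thresholds are hypothetical):

```python
import json

# Sketch of a HorizontalPodAutoscaler (autoscaling/v2) that resizes a
# hypothetical "web" Deployment between 2 and 10 replicas on CPU usage.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-hpa"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1",
                           "kind": "Deployment", "name": "web"},
        "minReplicas": 2,    # scale in to 2 pods when idle
        "maxReplicas": 10,   # scale out to 10 pods under load
        "metrics": [{
            "type": "Resource",
            "resource": {"name": "cpu",
                         "target": {"type": "Utilization",
                                    "averageUtilization": 70}},
        }],
    },
}

# Serialize to the form you would apply to a cluster
print(json.dumps(hpa, indent=2))
```

The controller watches observed CPU utilization against the 70% target and adjusts the replica count within the stated bounds, which is exactly the "adjust quickly to workload demands" behavior described above.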
- The second most popular use case is legacy containerization/modernization: take existing apps and start to containerize parts of them to take advantage of the benefits of containers, then use K8s for management to reduce overhead and get cost benefits.
- We support clients in orchestrating microservices that talk to Kafka, Cassandra, and Spark. 75% of cloud-native projects have data at their core: IoT, connected cars, fraud detection, customer analytics. We provide full lifecycle automation, and customers can run all their workloads on a single platform.
- Food and agriculture product distributor Cargill’s legacy IT systems were slowing its ability to create new digital products. Our native support for K8s enabled the Cargill team to scale up and down with demand. With the number of new services Cargill was adding and creating from decoupling legacy applications into microservices, it would have been nearly impossible to manually manage the infrastructure. Decentralization became a reality for Cargill by using us with K8s to define how they wanted their services to behave and then automatically scale depending on the situation.
- The best example is in scale. Retail customers need to scale application deployments very rapidly for events like Black Friday. We see the same use case for public cloud. Containers and microservices enable the business to scale in a public cloud or in scalable private clouds. You can repurpose on-prem infrastructure by deploying additional pods or clusters in your existing on-prem infrastructure.
- K8s eases the large-scale management of containers, making it possible to harness the full potential of the technology. Container environments orchestrated by K8s put enterprises in a position to optimize their use of resources and control infrastructure expenses. Enterprises also benefit from more agile development, greater preparation for future scalability, and put themselves in a position to increase availability and avoid lock-in through multi-cloud deployments. That said, as DevOps teams increase their efficiency and the speed of releases, production K8s security must be able to keep pace.
- The third use case is application operations: patching, upgrading, adding features, fixing bugs, and canary deployments. These operations are hard to do in the traditional world, where they have been very manual. K8s helps automate much of that work for standardization, agility, and cost reduction.
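One common way to run the canary deployments mentioned above is label-based traffic splitting: a stable and a canary Deployment share a common label, so a single Service spreads traffic across both in proportion to their replica counts. A sketch under that assumption (names, images, and the 90/10 split are hypothetical):

```python
# Label-based canary sketch: both Deployments carry "app: web", so a Service
# selecting only {"app": "web"} routes ~10% of requests to the canary.
def deployment(name, image, replicas, track):
    labels = {"app": "web", "track": track}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "web", "image": image}]},
            },
        },
    }

stable = deployment("web-stable", "example/web:1.4", 9, "stable")
canary = deployment("web-canary", "example/web:1.5", 1, "canary")
# 9 stable pods + 1 canary pod behind one Service ≈ a 90/10 traffic split.
```

If the canary misbehaves, rolling back is just deleting one small Deployment; if it is healthy, you grow its replica count and shrink the stable one.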
- The gaming industry has real money involved. We help customers protect the application stack in a public cloud and help with off-cloud back-ups and restore testing. We help solve specific compliance requirements. We provide an application-centric way of doing things by hooking into the databases. We help extract data from local storage and push into object storage. Extract data in a consistent manner, able to recover to a particular point in time in a single cloud or on-prem.
- We see K8s used in the cloud, less on-prem; AWS, GCP, and Azure offer a combination of services being used as well. Spark on K8s is newly supported, and the analytics side uses K8s more for seamless scaling and flexibility in ETL and transformation jobs. Presto is coming up as well, with deployment architectures depending on the data locality you need; Presto is co-located with Alluxio for ETL and SQL.
- 1) From the visibility point of view, we are able to leverage granular data to go beyond green-light/red-light status and troubleshoot exactly what happened with a specific server. We had a customer whose application servers were losing connectivity and restarting; by capturing the data, they were able to investigate and resolve the issue.
- 2) From a security point of view, the same data enriches threat hunting and incident response. Often you get an alert with only bits of information; being able to automatically enrich your knowledge helps with hunting down the problem. In modern environments where things move quickly, this can really help you.
- Kubeflow is a free and open-source software platform developed by Google, designed for building ML applications and deploying them to K8s. We built an ML toolkit on top of K8s that allows others to quickly compose a portable ML stack; users can add this stack with a single command.
- In the database world, the end goal every organization would like to reach is a single unified interface providing a database-as-a-service-like experience to their internal development teams, spanning on-prem and multiple cloud providers. K8s provides the necessary “operating system for the cloud,” and Operators give us a way to create repeatable, standardized deployments for stateful applications like databases. This is huge for organizations that have a small infrastructure team servicing large numbers of disparate application development teams internally.
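The Operator pattern works by teams declaring a custom resource and a controller reconciling it into lower-level objects. A sketch of the idea under stated assumptions: the API group `dbaas.example.com`, the `DatabaseCluster` kind, and all fields are hypothetical, since real database operators each define their own schemas.

```python
# Hypothetical custom resource a database operator might consume.
db_cluster = {
    "apiVersion": "dbaas.example.com/v1",
    "kind": "DatabaseCluster",
    "metadata": {"name": "orders-db"},
    "spec": {"engine": "postgres", "version": "15",
             "replicas": 3, "storage": "50Gi"},
}

def desired_statefulset(cr):
    """What a reconcile loop might derive: a StatefulSet sized from the CR."""
    spec = cr["spec"]
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": cr["metadata"]["name"]},
        "spec": {
            "replicas": spec["replicas"],
            "serviceName": cr["metadata"]["name"],
            # container image derived from the CR's engine/version fields
            "template": {"spec": {"containers": [
                {"name": spec["engine"],
                 "image": f'{spec["engine"]}:{spec["version"]}'}]}},
        },
    }

sts = desired_statefulset(db_cluster)
```

The payoff is the repeatability described above: every internal team requests a database with the same short declarative spec, and the operator handles the StatefulSet, storage, and upgrade mechanics uniformly.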
- Mineteria, a Minecraft social network, uses our K8s service to handle much of the company’s low-level networking administration and maintenance of K8s master servers. As Mineteria grew, Google’s bandwidth pricing became unsustainable, so it switched to us. Mineteria created its own K8s management applications to deploy and maintain workloads.
Further reading about Kubernetes
☞ Docker and Kubernetes: The Complete Guide
☞ Learn DevOps: The Complete Kubernetes Course
☞ Kubernetes Certification Course with Practice Tests
☞ An illustrated guide to Kubernetes Networking
☞ An Introduction to Kubernetes: Pods, Nodes, Containers, and Clusters
☞ An Introduction to the Kubernetes DNS Service
☞ Kubernetes Deployment Tutorial For Beginners
☞ Kubernetes Tutorial - Step by Step Introduction to Basic Concepts