In this blog, we will take a look at some of the tasks required to deploy and use Kubernetes Engine resources. But first, let's briefly review containers and Kubernetes in simple terms.

If you aren't familiar with containers, read on for a basic overview.

What are containers?

Containers give you the independent scalability of workloads that you get with Platform as a Service (PaaS) and an abstraction layer over the OS and hardware that you get with Infrastructure as a Service (IaaS). A container is a lightweight box around your code and its dependencies, with limited access to its own partition of the file system and hardware. It takes only a few system calls to create and starts as quickly as a process. All you need on each host is an OS kernel that supports containers and a container runtime.

In effect, you're virtualizing the OS. It scales like PaaS but gives you nearly the same flexibility as IaaS. Your code becomes ultra-portable, and you can treat the OS and hardware as a black box.
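As a concrete sketch of "code plus dependencies in a box," here is what a container image definition might look like for a hypothetical Python web app. The file names, base image, and port are assumptions for illustration, not from the original post:

```dockerfile
# Illustrative Dockerfile — app name, port, and versions are placeholders.
FROM python:3.11-slim

WORKDIR /app

# Install only the app's declared dependencies inside the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY app.py .

# The container shares the host's kernel; everything else the app needs
# (interpreter, libraries) travels inside the image.
EXPOSE 8000
CMD ["python", "app.py"]
```

You would build this with `docker build -t my-app .` and start it with `docker run -p 8000:8000 my-app` — it starts about as fast as launching the process itself.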

How are containers used to implement microservices?

For example, if you want to scale a web server, you can do so in seconds, deploying dozens of containers on a single host depending on the size of your workload. That is the simple case: scaling one container that runs the whole application on a single host. More likely, you'll want to build your app from many containers, each performing its own function, as microservices.

If you build applications this way and connect the pieces over the network, you can make them modular, deploy them quickly, and scale each one independently across a group of hosts. The hosts can then scale up or down, starting and stopping containers as demand on your app changes or as hosts fail. The tool that makes this work well is Kubernetes.
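A minimal sketch of this idea, assuming two hypothetical services (a `web` front end and an `api` backend), might look like the following Docker Compose file; the image names and port mapping are placeholders:

```yaml
# docker-compose.yml — illustrative only; image names are placeholders.
services:
  web:
    image: example/web:latest
    ports:
      - "80:8080"          # only the front end is exposed to the outside
    depends_on:
      - api
  api:
    image: example/api:latest
    # No published port: 'web' reaches it over the internal network
    # by service name (e.g. http://api:9000), keeping it modular.
```

Each service can be rebuilt, redeployed, and scaled on its own, which is the core appeal of the microservices approach.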

What is Kubernetes?

Kubernetes makes it simple to orchestrate many containers on many hosts, scale them as microservices, and perform controlled rollouts and rollbacks.
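As a sketch of how that looks in practice, here is a minimal Kubernetes Deployment for the web-server example above; the image name, label, and replica count are assumptions for illustration:

```yaml
# deployment.yaml — illustrative Deployment; image name is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies running,
  selector:                    # rescheduling them if a host goes down
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:latest
          ports:
            - containerPort: 8080
```

With this manifest, `kubectl apply -f deployment.yaml` deploys the app, `kubectl scale deployment web --replicas=10` scales it out, and `kubectl rollout undo deployment web` rolls back a bad release — the orchestration, scaling, and rollout tasks described above.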

#kubernetes #devops
