The advent of cloud and container technology ushered in a new era of distributed computing at “planet scale” that was unheard of and unimaginable just a decade ago. Around the same time, another interesting movement was brewing which bolstered delivering these complex solutions at high speed and accuracy: DevOps.

These two paradigm shifts have gone hand in hand, complementing each other to shape distributed computing as we know it (or are still learning about it) today.

We all know how integral Continuous Integration and Continuous Deployment are to the DevOps automation paradigm, and how organizations have designed elaborate pipelines to bring a factory-floor model to shipping software.

If you’ve ever been part of implementing any of these DevOps practices for a cloud-native distributed system, you probably know how quickly these CI/CD pipelines become a cacophony of complex tools and integrations, requiring their own sub-organization of specialists to build and maintain them, thereby adding to the very silos the practices set out to break.

A large system composed of multiple distributed subsystems is usually deployed as Docker containers clustered over an orchestration runtime such as Kubernetes, and for such systems these pipelines come anything but easy. The problem is that, at any given time, you’re dealing with at least five tools covering everything from triggers, to running builds and packaging, to creating test environments and running tests, and finally the holy grail of one-click deploy (if it is holy at all!).
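Tekton, which the rest of this article covers, tackles this sprawl by modelling pipeline stages as declarative Kubernetes resources. As a rough, illustrative sketch only (the Task names below are hypothetical placeholders, not tasks defined in this article), a build-test-deploy flow can be expressed in a single Pipeline resource:

```yaml
# Minimal sketch of a Tekton Pipeline chaining build, test, and deploy.
# Task names (build-image, run-tests, deploy-to-cluster) are assumed placeholders.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-test-deploy
spec:
  params:
    - name: repo-url
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-image          # hypothetical Task that builds and pushes the container image
    - name: test
      runAfter: ["build"]          # ordering: run only after the build task succeeds
      taskRef:
        name: run-tests
    - name: deploy
      runAfter: ["test"]
      taskRef:
        name: deploy-to-cluster    # hypothetical Task that rolls out to the target cluster
```

Because each Task is a self-contained, reusable resource, the same definitions can be shared across pipelines instead of re-wiring a handful of separate tools for every project.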


Everything You Need to Know About Tekton and Reusable Pipelines