The distributed cloud era has arrived

The term “cloud” has been evolving ever since it was first used in the early 1990s. One could argue that Cloud 1.0 was really nothing more than a euphemism for hosted services. Hosted services gave companies the ability to run critical apps off premises in a highly secure, predictable environment. That value proposition continued with the subsequent rise of services like AWS and Microsoft Azure, where businesses would “lift and shift” legacy apps and drop them into the cloud.

Cloud 2.0 gave rise to web-optimized apps. In Cloud 2.0, apps were truly built for the cloud, which spawned companies that made the cloud their primary compute platform. However, this cloud strategy still revolved around a single cloud provider and traditional monolithic app architectures. Even companies that used multiple clouds built app A on one cloud, app B on another, and so on. In that model, multicloud actually meant multiple clouds used as discrete, independent infrastructure entities.


We have now entered the Cloud 3.0 era, which can be thought of as multicloud on steroids. The rise of microservices and containers allows app developers to build apps by composing services from multiple cloud providers, and most modern, cloud-native apps are being built this way. Edge computing is on the horizon, which will add even more locations from which developers can deliver data and app services. This is the concept of the distributed cloud: the cloud is no longer a single location, but a set of distributed resources.
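To make the idea concrete, here is a minimal sketch of what that composition looks like in practice: a single request handler that stitches together services running on different cloud providers. The endpoint URLs and response shapes below are hypothetical placeholders, not real services; the point is simply that the caller never sees which cloud served which piece of data.

```typescript
// Hypothetical services, each hosted on a different cloud provider.
// These URLs are illustrative placeholders, not real endpoints.
const INVENTORY_API = "https://inventory.example-aws.example.com"; // e.g., running on AWS
const PRICING_API = "https://pricing.example-azure.example.com";   // e.g., running on Azure

interface Quote {
  sku: string;
  inStock: boolean;
  price: number;
}

// Call both providers in parallel and merge the results into one
// response. To the app, the "cloud" is the set of services, not
// any single provider's infrastructure.
async function getQuote(sku: string): Promise<Quote> {
  const [inventoryRes, pricingRes] = await Promise.all([
    fetch(`${INVENTORY_API}/items/${sku}`),
    fetch(`${PRICING_API}/prices/${sku}`),
  ]);
  const inventory = (await inventoryRes.json()) as { inStock: boolean };
  const pricing = (await pricingRes.json()) as { price: number };
  return { sku, inStock: inventory.inStock, price: pricing.price };
}

getQuote("widget-42").then((quote) => console.log(quote));
```

Edge computing extends this same pattern: the pricing or inventory service could just as easily run at an edge location close to the user, without the composing app changing at all.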
