“I’m pretty sure that you won’t hear anybody saying, ‘Oh, yeah, we implemented a service mesh, and it was easy to do.’ They were just extremely complicated systems,” said Kong Chief Technology Officer Marco Palladino. The first generation of service meshes, released around 2017, “came with lots of moving parts, lots of dependencies, and lots of assumptions that we did not necessarily agree with.”
Those meshes were hyperfocused on Kubernetes, he said, while customers, though perhaps running K8s, were often still running virtual machines as well. Those first-generation meshes also did not scale well, requiring a new cluster for each mesh.
Kuma, the service mesh Kong released in September 2019, enables users to run meshes on any platform, natively on either Kubernetes or VMs, and to manage multiple meshes through a single control plane.
Its latest iteration supports complex applications running across heterogeneous environments, including VMs, multiple Kubernetes clusters and multiple data centers.
Enterprises these days run multiple meshes for different lines of business, teams and applications. Palladino said Kong's customers want to introduce separate meshes for isolation, and to manage them in a way that requires less coordination.
With version 0.6, “we have also introduced this new mode called global and remote control planes, which make these entire systems not only portable across multiple environments, but also more scalable, because instead of having all the data planes from all of these environments talking to one control plane, we can allocate a remote control plane for a specific zone — be it either a cloud, a Kubernetes cluster or a platform,” he said.
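The split Palladino describes can be sketched in deployment terms. The commands below are an illustrative sketch based on Kuma's multizone model from that era; the exact flags should be treated as assumptions rather than a verified recipe.

```shell
# Start one global control plane, which holds mesh-wide policy
# (illustrative flag; consult the Kuma docs for your version):
kuma-cp run --mode=global

# Start a remote control plane per zone, e.g. one per Kubernetes
# cluster or cloud region; data planes in that zone talk only to it:
kuma-cp run --mode=remote

# Each remote control plane syncs policy with the global one, so no
# single control plane has to serve every data plane in every zone.
```

The design point is that the global plane becomes a policy source of truth while the per-zone planes absorb the data-plane connection load.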
Monitoring, supporting and maintaining these services is becoming a challenge for teams, so most enterprises have invested in some kind of microservices management tool.
Microsoft has released Open Service Mesh (OSM), an alpha service mesh implementation compliant with the SMI specification. OSM covers standard features of a service mesh like canary releases, secure communication, and application insights, similar to other service mesh implementations like Istio, Linkerd, or Consul. Additionally, the OSM team is in the process of donating the project to the CNCF.
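As one concrete example of the SMI surface OSM implements, a canary release is expressed with the SMI TrafficSplit resource. The service names and weights below are illustrative, not taken from the article:

```yaml
# SMI TrafficSplit (v1alpha2): send 90% of traffic to the stable
# backend and 10% to the canary. Names here are hypothetical.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
spec:
  service: bookstore        # the root service clients address
  backends:
  - service: bookstore-v1   # stable version
    weight: 90
  - service: bookstore-v2   # canary version
    weight: 10
```

The mesh watches TrafficSplit objects and programs its sidecar proxies accordingly, so advancing the canary is just an edit to the weights.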