Daniel Bryant: There have been three main changes with API gateways and related edge technologies over the past 25 years: the move from hardware to software; the shift of focus from layer 4 of the OSI networking stack to layer 7; and the need to support decentralized management.

I’m old enough to remember manually installing edge appliances, such as “racking and stacking” load balancers in data centers, but even the younger reader will have seen the increased adoption of virtualization technology over the past 10 years. Load balancers, WAFs, and other edge components have moved from hardware to software mainly for two reasons: (1) to save costs, e.g. running software on commodity VMs in the cloud is considerably cheaper than using specialized edge hardware; and (2) to increase both configurability and flexibility, e.g. “cloud native” developers can now program the edge using “infrastructure as code”, as seen with Kubernetes YAML and custom resources.
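As a rough illustration of that “infrastructure as code” approach, the sketch below declares edge routing as a standard Kubernetes Ingress resource rather than configuring an appliance. The hostname and backend service names are hypothetical, and real setups will vary by ingress controller.

```yaml
# Minimal sketch: edge routing declared as code instead of configured on a hardware appliance.
# The hostname and backend service name here are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-edge-routes
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service   # hypothetical backend service
                port:
                  number: 80
```

Because a file like this lives in version control alongside the rest of the deployment manifests, changing the edge becomes a code review rather than a hardware change.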


Regarding the move to a layer 7-aware edge, we are now seeing many API access and routing decisions being made based on a richer set of user and application data than before the “cloud native” era. For example, we can route a user’s request based on HTTP metadata, such as the User-Agent header or a cookie, or we can rate limit a MongoDB connection based on the metadata contained within the request. This has primarily been driven by the need to innovate and experiment more rapidly.
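One way to express this kind of layer 7, metadata-driven routing declaratively is with the Kubernetes Gateway API. The sketch below routes requests whose User-Agent header looks like a mobile client to a separate backend; the gateway name, hostnames, backend services, and regex are illustrative assumptions rather than any specific product’s configuration.

```yaml
# Sketch of header-based (layer 7) routing using the Kubernetes Gateway API.
# Gateway name, hostname, and backend services are hypothetical.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: user-agent-routing
spec:
  parentRefs:
    - name: edge-gateway            # hypothetical shared gateway
  hostnames:
    - api.example.com
  rules:
    - matches:
        - headers:
            - name: User-Agent
              type: RegularExpression
              value: ".*Mobile.*"   # send mobile clients to a dedicated backend
      backendRefs:
        - name: mobile-backend
          port: 80
    - backendRefs:                  # default rule for all other traffic
        - name: default-backend
          port: 80
```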

The third change is the move from centralized to decentralized management of the edge. When my Java developer career was in full swing in the early 2010s, we typically had centralized teams that managed the edge and API gateway ecosystem. If we wanted to make a change in these systems, e.g. opening up a new port or registering a new API, we typically had to raise a ticket to get this change actioned. With modern development teams wanting to move increasingly fast, raising tickets to make changes in an API gateway is a potential blocker. Instead, development teams now want access to self-service mechanisms to change edge config or release a new API.


JAXenter: What impact has adopting new architecture styles, such as microservices, had on API gateways?

**Daniel Bryant:** The biggest impact is that with microservices-based architectures there are typically more services and APIs exposed at the edge. No longer is it a single monolith offering all of the APIs; now potentially every service offers an API. Scaling the management of the edge and API gateways is therefore a big challenge.

For developers to release new services and functionality rapidly, and to understand the state of the distributed system and get the feedback they need, everything must be self-service, independently configurable, and stored in a “single source of truth”.
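On a Kubernetes-based platform, one common way to make that self-service model concrete is for a platform team to own the shared gateway while each development team owns route resources in its own namespace and repository. The sketch below shows such a team-owned route in the same Gateway API style as above; all names and namespaces are hypothetical, and the shared gateway would need to allow routes attached from other namespaces.

```yaml
# Sketch: a route owned by a hypothetical "payments" team in its own namespace,
# attached to a shared gateway owned by a platform team.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payments-api
  namespace: payments              # the team's own namespace
spec:
  parentRefs:
    - name: edge-gateway           # shared gateway
      namespace: edge              # platform team's namespace
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /payments
      backendRefs:
        - name: payments-service
          port: 80
```

Because this file lives in the team’s own repository and namespace, releasing a new API becomes a pull request rather than a ticket to a central edge team, and the repository remains the single source of truth for that service’s edge configuration.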
