Some time ago, I was looking for a good rate-limiting service for a project. For the scope of that project, the service would run alongside a front proxy and rate-limit requests to third-party applications.
Nginx Plus and Kong certainly have rate-limiting features, but they are not OSS, and I am a big fan of OSS. Using the Istio service mesh would have been overkill. Therefore, I decided to use Envoy Proxy + Lyft's Ratelimit service.
The aim of this blog is to help you get started with the rate-limiting service and configure various combinations of rate-limiting scenarios.
Let’s dive in…
Ratelimit configuration consists of a domain (a unique container for a set of rate limits) and a list of descriptors: key/value pairs, each carrying a rate_limit expressed as a unit (second, minute, hour, or day) and a requests_per_unit count.
Descriptors can also be nested to achieve more complex rate-limiting scenarios.
We will be performing rate limiting based on various HTTP headers. Let’s have a look at the configuration file.
```yaml
domain: apis
descriptors:
  - key: generic_key
    value: global
    rate_limit:
      unit: second
      requests_per_unit: 60
  - key: generic_key
    value: local
    rate_limit:
      unit: second
      requests_per_unit: 50
  - key: header_match
    value: "123"
    rate_limit:
      unit: second
      requests_per_unit: 40
  - key: header_match
    value: "456"
    rate_limit:
      unit: second
      requests_per_unit: 30
  - key: header_match
    value: post
    rate_limit:
      unit: second
      requests_per_unit: 20
  - key: header_match
    value: get
    rate_limit:
      unit: second
      requests_per_unit: 10
  - key: header_match
    value: path
    rate_limit:
      unit: second
      requests_per_unit: 5
  # Using nested descriptors
  - key: custom_header
    descriptors:
      - key: plan
        value: BASIC
        rate_limit:
          unit: second
          requests_per_unit: 2
      - key: plan
        value: PLUS
        rate_limit:
          unit: second
          requests_per_unit: 3
```
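The Ratelimit service only counts what Envoy sends it, so each descriptor above needs a matching `rate_limits` action on the Envoy route that emits it. Below is a sketch of what that route configuration could look like; the cluster name (`service_backend`) and the header names (`x-custom-header`, `x-plan`) are my own placeholders, and the field names follow Envoy's v3 API, so adjust them to your Envoy version and setup.

```yaml
routes:
  - match:
      prefix: "/"
    route:
      cluster: service_backend       # placeholder cluster name
      rate_limits:
        # Emits ("generic_key", "global") for every request
        - actions:
            - generic_key:
                descriptor_value: global
        # Emits ("header_match", "get") only when the request is a GET
        - actions:
            - header_value_match:
                descriptor_value: get
                headers:
                  - name: ":method"
                    exact_match: GET
        # Emits ("custom_header", <value>), ("plan", <value>) — the nested case
        - actions:
            - request_headers:
                header_name: x-custom-header   # placeholder header
                descriptor_key: custom_header
            - request_headers:
                header_name: x-plan            # placeholder header
                descriptor_key: plan
```

Each `actions` list produces one descriptor; listing two `request_headers` actions in a single entry is what produces the nested `custom_header`/`plan` descriptor that the configuration above expects.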
In the configuration above, it can be clearly seen how different limits are attached to different descriptors: the flat generic_key and header_match descriptors each carry their own rate_limit, while the nested custom_header descriptor applies a per-plan limit of 2 requests/second for BASIC and 3 requests/second for PLUS.
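The service itself keeps its counters in Redis, but the semantics of requests_per_unit are easy to see with a minimal in-memory sketch of the fixed-window counting it performs. The LIMITS table and function names below are mine for illustration, not the service's actual API:

```python
import time

# (domain, descriptor tuple) -> (window length in seconds, allowed requests)
# Mirrors a few entries of the YAML config above; "unit: second" means a
# one-second fixed window.
LIMITS = {
    ("apis", (("generic_key", "global"),)): (1, 60),
    ("apis", (("custom_header", ""), ("plan", "BASIC"))): (1, 2),
    ("apis", (("custom_header", ""), ("plan", "PLUS"))): (1, 3),
}

_counters = {}  # (domain, descriptor, window index) -> hit count

def should_rate_limit(domain, descriptor, now=None):
    """Return True if this request exceeds the configured limit."""
    limit = LIMITS.get((domain, descriptor))
    if limit is None:
        return False  # no rule configured for this descriptor: allow
    unit, per_unit = limit
    now = time.time() if now is None else now
    window = int(now // unit)  # fixed window: counter resets each unit
    key = (domain, descriptor, window)
    _counters[key] = _counters.get(key, 0) + 1
    return _counters[key] > per_unit

# Three requests in the same second for the BASIC plan (limit: 2/second)
desc = (("custom_header", ""), ("plan", "BASIC"))
results = [should_rate_limit("apis", desc, now=100.0) for _ in range(3)]
print(results)  # -> [False, False, True]: third request is throttled
```

The real service shares these counters across all Envoy instances via Redis, which is what makes this a global rate limit rather than a per-proxy one.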
#lyft #api-rate-limiting #tutorial #lyft-global-rate-limiting #http-throttling #software-development #programming #coding