Arno Bradtke

1598094420

Nginx-ingress controller for cross-namespace support

We were using the ALB ingress controller in AWS EKS. Its main limitation is that it does not support cross-namespace routing: to serve multiple namespaces you must deploy an Ingress in each namespace, and each one creates another load balancer. That gets very expensive if you have many namespaces. See this post for the details.

An alternative solution is the NGINX ingress controller: https://github.com/kubernetes/ingress-nginx

I deployed it from https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/aws/deploy.yaml

Then I updated the ingress-nginx-controller Service with the following annotations to terminate TLS on the NLB:

service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: 'arn:aws:acm:eu-west-1:1234567819:certificate/ef5011e2-c830-4194-b1f1-fbttf'
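Applied to the Service, the annotations look roughly like this (a minimal sketch: the certificate ARN is the placeholder from above, and the Service name and port names follow the upstream deploy.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Provision a Network Load Balancer instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Terminate TLS on the NLB for the Service port named "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # ACM certificate used for TLS termination (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:1234567819:certificate/ef5011e2-c830-4194-b1f1-fbttf"
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```

You can apply the change with `kubectl apply -f`, or edit the live Service with `kubectl -n ingress-nginx edit svc ingress-nginx-controller`.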

Then I deployed a pod and a service to test the NLB TLS termination.

I used the sample apple service from https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/

But I ran into an error:

  1. The browser showed "400 The plain HTTP request was sent to HTTPS port".
  2. Cause: by default, the ingress-nginx-controller Service forwards HTTPS traffic to the controller's HTTPS port (targetPort: https). But when TLS is terminated on the NLB, the traffic arriving at the controller is plain HTTP, so sending it to the HTTPS port produces the 400 error.
  3. Solution: in the Service's https port definition, change targetPort from https to http.

Before: (Service screenshot — the https port uses targetPort: https)

After: (Service screenshot — the https port uses targetPort: http)
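In YAML terms, the fix is a one-line change in the Service's ports section (a sketch; port names follow the upstream manifest):

```yaml
# Before: HTTPS traffic is forwarded to the controller's TLS port
ports:
  - name: https
    port: 443
    targetPort: https
---
# After: the NLB has already terminated TLS, so forward the
# decrypted traffic to the controller's plain HTTP port
ports:
  - name: https
    port: 443
    targetPort: http
```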

#aws-eks #kubernetes #tls #nginx-ingress-controller #aws-nlb

Autumn Blick

1603600800

NGINX Announces Eight Solutions that Let Developers Run Safely with Scissors

Technology is hard. As technologists, I think we like it that way. It’s built‑in job security, right? Well, unfortunately, the modern application world has become unproductively hard. We need to make it easier.

That’s why I like describing the current developer paradox as the need to run safely with scissors.

NGINX Balances Developer Choice with Infrastructure Guardrails

Running with scissors is a simple metaphor for what is the admittedly difficult ask we make of software engineers. Developers need to run. Time to market and feature velocity are critical to the success of digital businesses. As a result, we don’t want to encumber developers with processes or technology choices that slow them down. Instead we empower them to pick tools and stacks that let them deliver code to customers as quickly as possible.

But there’s a catch. In the world of fast releases, multiple daily (or hourly or minutely!) changes, and fail‑fast development, we risk introducing application downtime into digital experiences – that risk is the metaphorical scissors that make it dangerous to run fast. On some level we know it’s wrong to make developers run with scissors. But the speed upside trumps the downtime downside.

That frames the dilemma of our era: we need our developers to run with scissors, but we don’t want anybody to get hurt. Is there a solution?

At NGINX, the answer is “yes”. I’m excited to announce eight new or significantly enhanced solutions built to unleash developer speed without sacrificing the governance, visibility, and control infrastructure teams require.

Load Balancing and Security DNS Solutions Empower Self‑Service

As my colleague, Gus Robertson, eloquently points out in his recent blog The Essence of Sprint Is Speed, self‑service is an important part of developer empowerment. He talks about developers as the engines of digital transformation. And if they’re not presented with easy-to-use, capable tools, they take matters into their own hands. The result is shadow IT and significant infrastructure risk.

Self‑service turns this on its head. It provides infrastructure teams with a way to release the application delivery and security technologies that developers need for A/B, canary, blue‑green, and circuit‑breaker patterns. But it does so within the guardrails that ensure the consistency, reliability, and security that ensure your apps remain running once in production.

#blog #news #opinion #red hat #nginx controller #nginx app protect #nginx sprint 2020 #nginx ingress controller #nginx service mesh #f5 dns cloud services #nginx analytics cloud service


Mikel Okuneva

1600894800

Performance Testing NGINX Ingress Controllers in a Dynamic Kubernetes Cloud Environment

As more and more enterprises run containerized apps in production, Kubernetes continues to solidify its position as the standard tool for container orchestration. At the same time, demand for cloud computing has been pulled forward by a couple of years because work-at-home initiatives prompted by the COVID‑19 pandemic have accelerated the growth of Internet traffic. Companies are working rapidly to upgrade their infrastructure because their customers are experiencing major network outages and overloads.

To achieve the required level of performance in cloud‑based microservices environments, you need rapid, fully dynamic software that harnesses the scalability and performance of the next‑generation hyperscale data centers. Many organizations that use Kubernetes to manage containers depend on an NGINX‑based Ingress controller to deliver their apps to users.

#blog #tech #ingress controller #nginx ingress controller

Waylon Bruen

1616571660

Easy and Robust Single Sign-on with OpenID Connect and NGINX Ingress Controller

With the release of NGINX Ingress Controller 1.10.0, we are happy to announce a major enhancement: a technology preview of OpenID Connect (OIDC) authentication. OIDC is the identity layer built on top of the OAuth 2.0 framework which provides an authentication and single sign‑on (SSO) solution for modern apps. Our OIDC policy is a full‑fledged SSO solution enabling users to securely authenticate with multiple applications and Kubernetes services. Significantly, it enables apps to use an external identity provider (IdP) to authenticate users and frees the apps from having to handle usernames or passwords.

This new capability complements other NGINX Ingress Controller authorization and authentication features, such as JSON Web Token (JWT) authentication, to provide a robust SSO option that is easy to configure with NGINX Ingress resources. This means you can secure apps with a battle‑tested solution for authenticating and authorizing users, and that developers don’t need to implement these functions in the app. Enforcing security and traffic control at the Ingress controller blocks unauthorized and unauthenticated users at early stages of the connection, reducing unnecessary strain on resources in the Kubernetes environment.

Defining an OIDC Policy

When you define and apply an OIDC policy, NGINX Plus Ingress Controller operates as the OIDC relying party, initiating and validating authenticated sessions to the Kubernetes services for which it provides ingress. We support the OIDC Authorization Code Flow with a preconfigured IdP.
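As a sketch of what such a policy might look like using the controller's Policy custom resource (the IdP endpoints, client ID, Secret name, host, and upstream below are all placeholders, and exact field names should be checked against the controller's documentation):

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: oidc-policy
spec:
  oidc:
    clientID: my-client-id            # client registered with the IdP (placeholder)
    clientSecret: oidc-secret         # name of a Kubernetes Secret holding the client secret
    authEndpoint: https://idp.example.com/oauth2/authorize
    tokenEndpoint: https://idp.example.com/oauth2/token
    jwksURI: https://idp.example.com/oauth2/keys
    scope: openid
---
# Reference the policy from a VirtualServer to protect its routes
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
    - name: oidc-policy
  upstreams:
    - name: webapp
      service: webapp-svc
      port: 80
  routes:
    - path: /
      action:
        pass: webapp
```

With the policy applied, unauthenticated requests to webapp.example.com are redirected to the IdP, and the controller validates the returned tokens before passing traffic to the upstream.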

#microservices #kubernetes #releases #nginx ingress controller #nginx service mesh

Hudson Kunde

1595648280

Announcing NGINX Ingress Controller for Kubernetes Release 1.8.0

We are happy to announce release 1.8.0 of the NGINX Ingress Controller for Kubernetes. This release builds upon the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Red Hat OpenShift, Amazon Elastic Container Service for Kubernetes (EKS), the Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), IBM Cloud Private, Diamanti, and others.

With release 1.8.0, we continue our commitment to providing a flexible, powerful, and easy-to-use Ingress Controller, which you can configure with both Kubernetes Ingress resources and NGINX Ingress resources.

Release 1.8.0 brings the following major enhancements and improvements:

  • Integration with NGINX App Protect – NGINX App Protect is the leading NGINX‑based application security solution, providing deep signature and structural protection for your web applications.
  • Extensibility for NGINX Ingress resources – For users who want to use NGINX Ingress resources but need to customize NGINX features that the VirtualServer and VirtualServerRoute resources don’t currently expose, two complementary mechanisms are now supported: configuration snippets and custom templates.
  • URI rewrites and request and response header modification – These features give you granular control (adding, removing, and ignoring) over the request and response headers that are passed to upstreams and then the ones that are passed back to the clients.
  • Policies and IP address access control lists – With policies, traffic management functionality is abstracted within a separate Kubernetes object that can be defined and applied in multiple places by different teams. Access control lists (ACLs) are used to filter incoming and outgoing network traffic flowing through the NGINX Ingress Controller.
  • Other new features:
      • A readiness probe
      • Support for multiple Ingress Controllers in VirtualServer and VirtualServerRoute resources and Helm charts
      • Status information about VirtualServer and VirtualServerRoute resources
      • Updates to the NGINX Ingress Operator for Red Hat OpenShift
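To illustrate the configuration-snippets mechanism mentioned above, a minimal sketch of a VirtualServer that injects a raw NGINX directive the resource doesn't otherwise expose (the host, upstream, and the exact `server-snippets` field name are assumptions to verify against the release documentation):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  # Configuration snippet: raw NGINX directives inserted into the
  # generated server block for this VirtualServer
  server-snippets: |
    add_header X-Frame-Options DENY;
  upstreams:
    - name: tea
      service: tea-svc
      port: 80
  routes:
    - path: /
      action:
        pass: tea
```

Snippets trade safety for flexibility: the controller passes the directives through to the generated NGINX config, so a typo here can break reloads for the affected server block.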

#blog #news #tech #nginx kubernetes ingress controller #nginx app protect