Waylon Bruen

Easy and Robust Single Sign-On with OpenID Connect and NGINX Ingress Controller

With the release of NGINX Ingress Controller 1.10.0, we are happy to announce a major enhancement: a technology preview of OpenID Connect (OIDC) authentication. OIDC is an identity layer built on top of the OAuth 2.0 framework that provides an authentication and single sign‑on (SSO) solution for modern apps. Our OIDC policy is a full‑fledged SSO solution enabling users to securely authenticate with multiple applications and Kubernetes services. Significantly, it enables apps to use an external identity provider (IdP) to authenticate users and frees the apps from having to handle usernames or passwords.

This new capability complements other NGINX Ingress Controller authorization and authentication features, such as JSON Web Token (JWT) authentication, to provide a robust SSO option that is easy to configure with NGINX Ingress resources. This means you can secure apps with a battle‑tested solution for authenticating and authorizing users, and that developers don’t need to implement these functions in the app. Enforcing security and traffic control at the Ingress controller blocks unauthorized and unauthenticated users at early stages of the connection, reducing unnecessary strain on resources in the Kubernetes environment.

Defining an OIDC Policy

When you define and apply an OIDC policy, NGINX Plus Ingress Controller operates as the OIDC relying party, initiating and validating authenticated sessions to the Kubernetes services for which it provides ingress. We support the OIDC Authorization Code Flow with a preconfigured IdP.
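For illustration, here is a minimal sketch of what an OIDC policy and the VirtualServer that references it can look like. The field names follow the NGINX Ingress Controller Policy and VirtualServer custom resources; the IdP endpoints, client ID, Secret name, hostname, and service are hypothetical placeholders for your own environment:

    apiVersion: k8s.nginx.org/v1
    kind: Policy
    metadata:
      name: oidc-policy
    spec:
      oidc:
        clientID: webapp-client                 # client registered with your IdP
        clientSecret: oidc-secret               # Kubernetes Secret holding the client secret
        authEndpoint: https://idp.example.com/oauth2/authorize
        tokenEndpoint: https://idp.example.com/oauth2/token
        jwksURI: https://idp.example.com/oauth2/certs
        scope: openid
    ---
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: webapp
    spec:
      host: webapp.example.com
      policies:
        - name: oidc-policy                     # apply the OIDC policy to this host
      upstreams:
        - name: webapp
          service: webapp-svc
          port: 80
      routes:
        - path: /
          action:
            pass: webapp

With this in place, an unauthenticated request to webapp.example.com is redirected to the IdP to log in before it ever reaches the backing service.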

#microservices #kubernetes #releases #nginx ingress controller #nginx service mesh

Autumn Blick

NGINX Announces Eight Solutions that Let Developers Run Safely with Scissors

Technology is hard. As technologists, I think we like it that way. It’s built‑in job security, right? Well, unfortunately, the modern application world has become unproductively hard. We need to make it easier.

That’s why I like describing the current developer paradox as the need to run safely with scissors.

NGINX Balances Developer Choice with Infrastructure Guardrails

Running with scissors is a simple metaphor for the admittedly difficult ask we make of software engineers. Developers need to run. Time to market and feature velocity are critical to the success of digital businesses. As a result, we don’t want to encumber developers with processes or technology choices that slow them down. Instead, we empower them to pick tools and stacks that let them deliver code to customers as quickly as possible.

But there’s a catch. In the world of fast releases, multiple daily (or hourly or minutely!) changes, and fail‑fast development, we risk introducing application downtime into digital experiences – that risk is the metaphorical scissors that make it dangerous to run fast. On some level we know it’s wrong to make developers run with scissors. But the speed upside trumps the downtime downside.

That frames the dilemma of our era: we need our developers to run with scissors, but we don’t want anybody to get hurt. Is there a solution?

At NGINX, the answer is “yes”. I’m excited to announce eight new or significantly enhanced solutions built to unleash developer speed without sacrificing the governance, visibility, and control infrastructure teams require.

Load Balancing, Security, and DNS Solutions Empower Self‑Service

As my colleague, Gus Robertson, eloquently points out in his recent blog The Essence of Sprint Is Speed, self‑service is an important part of developer empowerment. He talks about developers as the engines of digital transformation. And if they’re not presented with easy-to-use, capable tools, they take matters into their own hands. The result is shadow IT and significant infrastructure risk.

Self‑service turns this on its head. It provides infrastructure teams with a way to release the application delivery and security technologies that developers need for A/B, canary, blue‑green, and circuit‑breaker patterns. But it does so within guardrails that provide the consistency, reliability, and security that keep your apps running once in production.

#blog #news #opinion #red hat #nginx controller #nginx app protect #nginx sprint 2020 #nginx ingress controller #nginx service mesh #f5 dns cloud services #nginx analytics cloud service

PostgreSQL Connection Pooling: Part 4 – PgBouncer vs. Pgpool-II

In our previous posts in this series, we spoke at length about using PgBouncer and Pgpool-II, the connection pool architecture, and the pros and cons of leveraging one for your PostgreSQL deployment. In our final post, we will put them head-to-head in a detailed feature comparison and compare the results of PgBouncer vs. Pgpool-II performance for your PostgreSQL hosting!

The bottom line – Pgpool-II is a great tool if you need load balancing and high availability. Connection pooling is almost a bonus you get alongside. PgBouncer does only one thing, but does it really well. If the objective is to limit the number of connections and reduce resource consumption, PgBouncer wins hands down.

It is also perfectly fine to use both PgBouncer and Pgpool-II in a chain – you can have a PgBouncer to provide connection pooling, which talks to a Pgpool-II instance that provides high availability and load balancing. This gives you the best of both worlds!
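One way to sketch such a chain, assuming a Pgpool-II instance reachable at pgpool.internal on its default port 9999 (a hypothetical host), is to point PgBouncer’s database definitions at Pgpool-II instead of directly at PostgreSQL:

    ; pgbouncer.ini -- PgBouncer pooling in front of Pgpool-II
    [databases]
    ; Route every client connection through the Pgpool-II instance,
    ; which handles load balancing and failover behind the scenes
    mydb = host=pgpool.internal port=9999 dbname=mydb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; Transaction pooling keeps the number of server connections low
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 20

Applications connect to PgBouncer on port 6432 as if it were PostgreSQL; PgBouncer multiplexes those clients onto a small pool of connections that Pgpool-II then load-balances across the actual servers.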

[Diagram: Using PgBouncer with Pgpool-II for connection pooling]

Performance Testing

While PgBouncer may seem to be the better option in theory, theory can often be misleading. So, we pitted the two connection poolers head-to-head in a benchmark test, using the standard pgbench tool, to see which one provides better transactions-per-second throughput. For good measure, we ran the same tests without a connection pooler too.

Testing Conditions

All of the PostgreSQL benchmark tests were run under the following conditions:

  1. Initialized pgbench using a scale factor of 100.
  2. Disabled autovacuum on the PostgreSQL instance to prevent interference.
  3. No other workload was running on the machine at the time.
  4. Used the default pgbench script to run the tests.
  5. Used default settings for both PgBouncer and Pgpool-II, except max_children*. All PostgreSQL limits were also set to their defaults.
  6. All tests ran as a single thread, on a single-CPU, 2-core machine, for a duration of 5 minutes.
  7. Forced pgbench to create a new connection for each transaction using the -C option. This emulates modern web application workloads and is the whole reason to use a pooler! (A sketch of the invocation follows this list.)
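Under these conditions, the benchmark invocation would look roughly like the following sketch (the database name, host, and client count are hypothetical; the port selects the target: 5432 for PostgreSQL directly, 6432 for PgBouncer, 9999 for Pgpool-II):

    # Initialize pgbench tables at scale factor 100 (condition 1)
    pgbench -i -s 100 benchdb

    # 5-minute run (-T 300) on a single thread (-j 1), opening a new
    # connection for every transaction (-C, condition 7); -c sets the
    # client count, which was varied across runs
    pgbench -C -T 300 -j 1 -c 16 -h 127.0.0.1 -p 6432 benchdb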

We ran each iteration for 5 minutes to ensure any noise averaged out. Here is how the middleware was installed:

  • For PgBouncer, we installed it on the same box as the PostgreSQL server(s). This is the configuration we use in our managed PostgreSQL clusters. Since PgBouncer is a very lightweight process, installing it on the box has no impact on overall performance.
  • For Pgpool-II, we tested both when the Pgpool-II instance was installed on the same machine as PostgreSQL (on box column), and when it was installed on a different machine (off box column). As expected, the performance is much better when Pgpool-II is off the box as it doesn’t have to compete with the PostgreSQL server for resources.

Throughput Benchmark

Here are the transactions per second (TPS) results for each scenario across a range of client counts:

#database #developer #performance #postgresql #connection control #connection pooler #connection pooler performance #connection queue #high availability #load balancing #number of connections #performance testing #pgbench #pgbouncer #pgbouncer and pgpool-ii #pgbouncer vs pgpool #pgpool-ii #pooling modes #postgresql connection pooling #postgresql limits #resource consumption #throughput benchmark #transactions per second #without pooling

Mikel Okuneva

Performance Testing NGINX Ingress Controllers in a Dynamic Kubernetes Cloud Environment

As more and more enterprises run containerized apps in production, Kubernetes continues to solidify its position as the standard tool for container orchestration. At the same time, demand for cloud computing has been pulled forward by a couple of years because work-at-home initiatives prompted by the COVID‑19 pandemic have accelerated the growth of Internet traffic. Companies are working rapidly to upgrade their infrastructure because their customers are experiencing major network outages and overloads.

To achieve the required level of performance in cloud‑based microservices environments, you need rapid, fully dynamic software that harnesses the scalability and performance of the next‑generation hyperscale data centers. Many organizations that use Kubernetes to manage containers depend on an NGINX‑based Ingress controller to deliver their apps to users.

#blog #tech #ingress controller #nginx ingress controller

Hudson Kunde

Announcing NGINX Ingress Controller for Kubernetes Release 1.8.0

We are happy to announce release 1.8.0 of the NGINX Ingress Controller for Kubernetes. This release builds upon the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Red Hat OpenShift, Amazon Elastic Container Service for Kubernetes (EKS), the Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), IBM Cloud Private, Diamanti, and others.

With release 1.8.0, we continue our commitment to providing a flexible, powerful, and easy-to-use Ingress Controller, which you can configure with both Kubernetes Ingress resources and NGINX Ingress resources.

Release 1.8.0 brings the following major enhancements and improvements:

  • Integration with NGINX App Protect – NGINX App Protect is the leading NGINX‑based application security solution, providing deep signature and structural protection for your web applications.
  • Extensibility for NGINX Ingress resources – For users who want to use NGINX Ingress resources but need to customize NGINX features that the VirtualServer and VirtualServerRoute resources don’t currently expose, two complementary mechanisms are now supported: configuration snippets and custom templates.
  • URI rewrites and request and response header modification – These features give you granular control (adding, removing, and ignoring) over the request headers passed to upstreams and the response headers passed back to clients.
  • Policies and IP address access control lists – With policies, traffic management functionality is abstracted into a separate Kubernetes object that can be defined and applied in multiple places by different teams. Access control lists (ACLs) are used to filter incoming and outgoing network traffic flowing through the NGINX Ingress Controller (see the sketch after this list).
  • Other new features:
      • A readiness probe
      • Support for multiple Ingress Controllers in VirtualServer and VirtualServerRoute resources and Helm charts
      • Status information about VirtualServer and VirtualServerRoute resources
      • Updates to the NGINX Ingress Operator for Red Hat OpenShift
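As a rough sketch of the new policies feature: in this release the Policy resource debuted under the k8s.nginx.org/v1alpha1 API group, and the resource names, address range, hostname, and service below are hypothetical placeholders:

    apiVersion: k8s.nginx.org/v1alpha1
    kind: Policy
    metadata:
      name: allow-internal
    spec:
      accessControl:
        allow:
          - 10.0.0.0/8          # only requests from this range are admitted
    ---
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: webapp
    spec:
      host: webapp.example.com
      policies:
        - name: allow-internal  # one policy, reusable across VirtualServers
      upstreams:
        - name: webapp
          service: webapp-svc
          port: 80
      routes:
        - path: /
          action:
            pass: webapp

Because the policy is a separate Kubernetes object, a platform team can own the ACL while application teams simply reference it from their own VirtualServer resources.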

#blog #news #tech #nginx kubernetes ingress controller #nginx app protect