Deno Developer

1636963349

Connecting to Postgres from the Edge with Deno Deploy

Many serverless edge products cannot connect to Postgres because they don't support TCP. Deno Deploy can.

Postgres is one of the most popular databases. It is fast, familiar, and featureful, which makes it the first choice of database for many companies. One drawback of Postgres: applications can only connect to it via TCP - a protocol that is not supported by many serverless edge runtimes (e.g. Cloudflare Workers, Vercel Edge Functions, or Netlify Edge Handlers).

This is a common problem with serverless edge products: they don't have the same capabilities as an application running inside a VM or a container on Kubernetes. That makes them a non-starter for many teams that need to integrate with an existing system.

With Deno Deploy, we are building a serverless edge system with more capabilities. Developers should be able to build locally as they normally do: connect to Postgres, read static files from disk, and use environment variables for configuration. They should then be able to deploy globally to our 28 regions across the world without additional boilerplate, configuration, or concerns about missing capabilities.

So with that: how do we connect to Postgres from the edge? You import your driver, connect to the database as usual, and then run queries. There is really nothing special to it.

import * as postgres from "https://deno.land/x/postgres@v0.14.2/mod.ts";
import { serve } from "https://deno.land/std@0.114.0/http/server.ts";

// Get the connection string from the environment variable "DATABASE_URL"
const databaseUrl = Deno.env.get("DATABASE_URL")!;

// Create a database pool with three connections that are lazily established
const pool = new postgres.Pool(databaseUrl, 3, true);

serve(async (_req) => {
  try {
    // Grab a connection from the pool
    const connection = await pool.connect();

    try {
      // Run a query
      const result = await connection.queryObject`SELECT * FROM animals`;
      const animals = result.rows; // [{ id: 1, name: "Lion" }, ...]

      // Encode the result as pretty-printed JSON
      const body = JSON.stringify(animals, null, 2);

      // Return the response with the correct content type header
      return new Response(body, {
        status: 200,
        headers: {
          "Content-Type": "application/json; charset=utf-8",
        },
      });
    } finally {
      // Release the connection back into the pool
      connection.release();
    }
  } catch (err) {
    console.error(err);
    return new Response(String(err?.message ?? err), { status: 500 });
  }
});

On Deno Deploy you can connect to your Postgres databases (even with TLS and custom CA certificates) from the edge. You can also connect to other databases with non-HTTP protocols like Redis, MySQL, or MongoDB. To take full advantage of the global nature of Deno Deploy, you could use the Postgres interface to connect to a globally distributed CockroachDB cluster, or to a global Google Spanner instance using its new Postgres interface.
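For example, connecting over TLS with a custom CA certificate is just a matter of passing TLS options to the driver. The sketch below assumes a recent deno-postgres release that supports the `tls.caCertificates` client option; the hostname, credentials, and `ca.crt` path are placeholders for your own setup, not values from this article.

```typescript
import * as postgres from "https://deno.land/x/postgres@v0.17.0/mod.ts";

// A pool configured with explicit options instead of a connection string.
// Every value below is a placeholder - substitute your own.
const pool = new postgres.Pool({
  user: "app",
  password: Deno.env.get("DATABASE_PASSWORD"),
  database: "app",
  hostname: "db.example.com",
  port: 5432,
  tls: {
    // Refuse to fall back to an unencrypted connection...
    enforce: true,
    // ...and trust this custom CA in addition to the system roots.
    caCertificates: [await Deno.readTextFile("./ca.crt")],
  },
}, 3, true);
```

From here, `pool.connect()` works exactly as in the example above; only the configuration changes.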

Heck, you could even connect to more obscure systems like MQTT brokers, or manage your Minecraft game server using Minecraft RCON.

If you want a more detailed rundown of using Postgres on Deno Deploy, check out our Postgres tutorial in the Deno Deploy documentation. You can also check out the Deno documentation on Deno.connect, Deno.connectTls, and Deno.startTls - the APIs used to create outbound TCP and TLS connections from Deno and Deno Deploy.

Original article source at https://deno.com

#deno #postgres #postgresql

