A battle-tested Node.js PostgreSQL client with strict types, detailed logging and assertions.
Slonik produces query logs using Roarr. Logs include the stack trace of the actual query invocation location and the values used to execute the query.
Read: Stop using Knex.js
Note: Using this project does not require TypeScript. It is a regular ES6 module. Ignore the type definitions used in the documentation if you do not use a type system.
Slonik began as a collection of utilities designed for working with `node-postgres`. It continues to use the `node-postgres` driver, as it provides a robust foundation for interacting with PostgreSQL. However, what once was a collection of utilities has since grown into a framework that abstracts repeating code patterns, protects against unsafe connection handling and value interpolation, and provides a rich debugging experience.
Slonik has been battle-tested with large data volumes and queries ranging from simple CRUD operations to data-warehousing needs.
The name of the elephant depicted in the official PostgreSQL logo is Slonik. The name itself is derived from the Russian word for "little elephant".
Read: The History of Slonik, the PostgreSQL Elephant Logo
Among the primary reasons for developing Slonik was the motivation to reduce repeating code patterns and add a level of type safety. This is primarily achieved through methods such as `one`, `many`, etc. But what is the issue? It is best illustrated with an example.
Suppose the requirement is to write a method that retrieves a resource ID given values defining (what we assume to be) a unique constraint. If we did not have the aforementioned helper methods available, then it would need to be written as:
```ts
import {
  sql,
  type DatabaseConnection
} from 'slonik';

type DatabaseRecordIdType = number;

const getFooIdByBar = async (connection: DatabaseConnection, bar: string): Promise<DatabaseRecordIdType> => {
  const fooResult = await connection.query(sql.typeAlias('id')`
    SELECT id
    FROM foo
    WHERE bar = ${bar}
  `);

  if (fooResult.rowCount === 0) {
    throw new Error('Resource not found.');
  }

  if (fooResult.rowCount > 1) {
    throw new Error('Data integrity constraint violation.');
  }

  return fooResult.rows[0].id;
};
```
The `oneFirst` method abstracts all of the above logic into:
```ts
const getFooIdByBar = (connection: DatabaseConnection, bar: string): Promise<DatabaseRecordIdType> => {
  return connection.oneFirst(sql.typeAlias('id')`
    SELECT id
    FROM foo
    WHERE bar = ${bar}
  `);
};
```
`oneFirst` throws:

- `NotFoundError` if query returns no rows
- `DataIntegrityError` if query returns multiple rows
- `DataIntegrityError` if query returns multiple columns

In the absence of helper methods, the overhead of repeating code becomes particularly visible when writing routines where multiple queries depend on the preceding query results. Using methods with inbuilt assertions ensures that in case of an error, the error points to the source of the problem. In contrast, unless assertions for all possible outcomes are typed out as in the previous example, the unexpected result of the query will be fed to the next operation. If you are lucky, the next operation will simply break; if you are unlucky, you are risking data corruption and hard-to-locate bugs.
Furthermore, using methods that guarantee the shape of the results allows us to leverage static type checking and catch some of the errors even before executing the code, e.g.
```ts
const fooId = await connection.many(sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE bar = ${bar}
`);

await connection.query(sql.typeAlias('void')`
  DELETE FROM baz
  WHERE foo_id = ${fooId}
`);
```
A static type check of the above example will produce a warning, as `fooId` is guaranteed to be an array while the binding of the last query is expecting a primitive value.
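A corrected version of the above (a sketch reusing the same tables) would use `oneFirst` so that the bound value is a primitive:

```ts
const fooId = await connection.oneFirst(sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE bar = ${bar}
`);

// `fooId` is now a single value, so the binding below type-checks.
await connection.query(sql.typeAlias('void')`
  DELETE FROM baz
  WHERE foo_id = ${fooId}
`);
```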
Slonik only allows checking out a connection for the duration of the promise routine supplied to the `pool#connect()` method.
The primary reason for implementing only this connection pooling method is that the alternative is inherently unsafe, e.g.
```ts
// This is not valid Slonik API
const main = async () => {
  const connection = await pool.connect();

  await connection.query(sql.typeAlias('foo')`SELECT foo()`);

  await connection.release();
};
```
In this example, if `SELECT foo()` produces an error, then the connection is never released, i.e. the connection hangs indefinitely.
A fix to the above is to ensure that `connection#release()` is always called, i.e.
```ts
// This is not valid Slonik API
const main = async () => {
  const connection = await pool.connect();

  let lastExecutionResult;

  try {
    lastExecutionResult = await connection.query(sql.typeAlias('foo')`SELECT foo()`);
  } finally {
    await connection.release();
  }

  return lastExecutionResult;
};
```
Slonik abstracts the latter pattern into the `pool#connect()` method.
```ts
const main = () => {
  return pool.connect((connection) => {
    return connection.query(sql.typeAlias('foo')`SELECT foo()`);
  });
};
```
Using this pattern, we guarantee that the connection is always released as soon as the `connect()` routine resolves or is rejected.
Just like in the unsafe connection handling example, Slonik only allows creating a transaction for the duration of the promise routine supplied to the `connection#transaction()` method.
```ts
connection.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql.typeAlias('void')`INSERT INTO foo (bar) VALUES ('baz')`);
  await transactionConnection.query(sql.typeAlias('void')`INSERT INTO qux (quux) VALUES ('quuz')`);
});
```
This pattern ensures that the transaction is either committed or aborted the moment the promise is either resolved or rejected.
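For example, if the handler throws, nothing is committed (a sketch reusing the tables from the example above):

```ts
try {
  await connection.transaction(async (transactionConnection) => {
    await transactionConnection.query(sql.typeAlias('void')`INSERT INTO foo (bar) VALUES ('baz')`);

    // Throwing (or returning a rejected promise) aborts the transaction:
    // the INSERT above is rolled back.
    throw new Error('Changed my mind.');
  });
} catch (error) {
  // The transaction has been rolled back.
}
```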
SQL injections are one of the best-known attack vectors. Some of the biggest data leaks were the consequence of improper user-input handling. In general, SQL injections are easily preventable by using parameterization and by restricting database permissions, e.g.
```ts
// This is not valid Slonik API
connection.query('SELECT $1', [
  userInput
]);
```
In this example, the query text (`SELECT $1`) and parameters (`userInput`) are passed separately to the PostgreSQL server, where the parameters are safely substituted into the query. This is a safe way to execute a query using user input.
The vulnerabilities appear when developers cut corners or when they do not know about parameterization, i.e. there is a risk that someone will instead write:
```ts
// This is not valid Slonik API
connection.query('SELECT \'' + userInput + '\'');
```
As evidenced by the history of data leaks, this happens more often than anyone would like to admit. This security vulnerability is an especially significant risk in the Node.js community, where a predominant number of developers come from the frontend and have not had training in working with RDBMSes. Therefore, one of the key selling points of Slonik is that it adds multiple layers of protection to prevent unsafe handling of user input.
To begin with, Slonik does not allow running plain-text queries.
```ts
// This is not valid Slonik API
connection.query('SELECT 1');
```
The above invocation would produce an error:
```
TypeError: Query must be constructed using `sql` tagged template literal.
```
This means that the only way to run a query is by constructing it using the `sql` tagged template literal, e.g.
```ts
connection.query(sql.unsafe`SELECT 1`);
```
To add a parameter to the query, the user must use template literal placeholders, e.g.
```ts
connection.query(sql.unsafe`SELECT ${userInput}`);
```
Slonik takes over from here and constructs a query with value bindings, and sends the resulting query text and parameters to PostgreSQL. There is no other way of passing parameters to the query – this adds a strong layer of protection against accidental unsafe user input handling due to limited knowledge of the SQL client API.
As Slonik restricts the user's ability to generate and execute dynamic SQL, it provides helper functions used to generate fragments of the query and the corresponding value bindings, e.g. `sql.identifier`, `sql.join` and `sql.unnest`. These methods generate tokens that the query executor interprets to construct a safe query, e.g.
```ts
connection.query(sql.unsafe`
  SELECT ${sql.identifier(['foo', 'a'])}
  FROM (
    VALUES (
      ${sql.join(
        [
          sql.join(['a1', 'b1', 'c1'], sql.fragment`, `),
          sql.join(['a2', 'b2', 'c2'], sql.fragment`, `)
        ],
        sql.fragment`), (`
      )}
    )
  ) foo(a, b, c)
  WHERE foo.b IN (${sql.join(['c1', 'a2'], sql.fragment`, `)})
`);
```
This (contrived) example generates a query equivalent to:
SELECT "foo"."a"
FROM (
VALUES
($1, $2, $3),
($4, $5, $6)
) foo(a, b, c)
WHERE foo.b IN ($7, $8)
This query is executed with the parameters provided by the user.
To sum up, Slonik is designed to prevent accidental creation of queries vulnerable to SQL injections.
Slonik client is configured using a custom connection URI (DSN).
```
postgresql://[user[:password]@][host[:port]][/database name][?name=value[&...]]
```
Supported parameters:
| Name | Meaning | Default |
|---|---|---|
| `application_name` | `application_name` | |
| `sslmode` | `sslmode` (supported values: `disable`, `no-verify`, `require`) | `disable` |
Note that unless listed above, other libpq parameters are not supported.
Examples of valid DSNs:
postgresql://
postgresql://localhost
postgresql://localhost:5432
postgresql://localhost/foo
postgresql://foo@localhost
postgresql://foo:bar@localhost
postgresql://foo@localhost/bar?application_name=baz
Other configurations are available through the `clientConfiguration` parameter.
Use `createPool` to create a connection pool, e.g.
```ts
import {
  createPool,
} from 'slonik';

const pool = await createPool('postgres://');
```
An instance of a Slonik connection pool can then be used to create a new connection, e.g.
```ts
pool.connect(async (connection) => {
  await connection.query(sql.typeAlias('id')`SELECT 1 AS id`);
});
```
The connection will be kept alive until the promise resolves (the result of the method supplied to `connect()`).
Refer to query method documentation to learn about the connection methods.
If you do not require a persistent connection to the same backend, then you can directly use `pool` to run queries, e.g.
```ts
pool.query(sql.typeAlias('id')`SELECT 1 AS id`);
```
Beware that in the latter example, the connection picked to execute the query is a random connection from the connection pool, i.e. using the latter method (without explicit `connect()`) does not guarantee that multiple queries will refer to the same backend.
Use `pool.end()` to end idle connections and prevent creation of new connections.

The result of `pool.end()` is a promise that is resolved when all connections are ended.
```ts
import {
  createPool,
  sql,
} from 'slonik';

const pool = await createPool('postgres://');

const main = async () => {
  await pool.query(sql.typeAlias('id')`
    SELECT 1 AS id
  `);

  await pool.end();
};

main();
```
Note: `pool.end()` does not terminate active connections/transactions.
Use `pool.getPoolState()` to find out if the pool is alive, how many connections are active and idle, and how many clients are waiting for a connection.
```ts
import {
  createPool,
  sql,
} from 'slonik';

const pool = await createPool('postgres://');

const main = async () => {
  pool.getPoolState();
  // {
  //   activeConnectionCount: 0,
  //   ended: false,
  //   idleConnectionCount: 0,
  //   waitingClientCount: 0,
  // }

  await pool.connect(async () => {
    pool.getPoolState();
    // {
    //   activeConnectionCount: 1,
    //   ended: false,
    //   idleConnectionCount: 0,
    //   waitingClientCount: 0,
    // }
  });

  pool.getPoolState();
  // {
  //   activeConnectionCount: 0,
  //   ended: false,
  //   idleConnectionCount: 1,
  //   waitingClientCount: 0,
  // }

  await pool.end();

  pool.getPoolState();
  // {
  //   activeConnectionCount: 0,
  //   ended: true,
  //   idleConnectionCount: 0,
  //   waitingClientCount: 0,
  // }
};

main();
```
```ts
/**
 * @param connectionUri PostgreSQL [Connection URI](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING).
 */
createPool(
  connectionUri: string,
  clientConfiguration: ClientConfiguration
): DatabasePool;
```
```ts
/**
 * @property captureStackTrace Dictates whether to capture stack trace before executing query. Middlewares access stack trace through query execution context. (Default: false)
 * @property connectionRetryLimit Number of times to retry establishing a new connection. (Default: 3)
 * @property connectionTimeout Timeout (in milliseconds) after which an error is raised if connection cannot be established. (Default: 5000)
 * @property idleInTransactionSessionTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000)
 * @property idleTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 5000)
 * @property interceptors An array of [Slonik interceptors](https://github.com/gajus/slonik#slonik-interceptors).
 * @property maximumPoolSize Do not allow more than this many connections. (Default: 10)
 * @property PgPool Override the underlying PostgreSQL Pool constructor.
 * @property queryRetryLimit Number of times a query failing with Transaction Rollback class error, that doesn't belong to a transaction, is retried. (Default: 5)
 * @property ssl [tls.connect options](https://nodejs.org/api/tls.html#tlsconnectoptions-callback)
 * @property statementTimeout Timeout (in milliseconds) after which database is instructed to abort the query. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000)
 * @property transactionRetryLimit Number of times a transaction failing with Transaction Rollback class error is retried. (Default: 5)
 * @property typeParsers An array of [Slonik type parsers](https://github.com/gajus/slonik#slonik-type-parsers).
 */
type ClientConfiguration = {
  captureStackTrace?: boolean,
  connectionRetryLimit?: number,
  connectionTimeout?: number | 'DISABLE_TIMEOUT',
  idleInTransactionSessionTimeout?: number | 'DISABLE_TIMEOUT',
  idleTimeout?: number | 'DISABLE_TIMEOUT',
  interceptors?: Interceptor[],
  maximumPoolSize?: number,
  PgPool?: new (poolConfig: PoolConfig) => PgPool,
  queryRetryLimit?: number,
  ssl?: Parameters<tls.connect>[0],
  statementTimeout?: number | 'DISABLE_TIMEOUT',
  transactionRetryLimit?: number,
  typeParsers?: TypeParser[],
};
```
Example:

```ts
import {
  createPool,
  sql
} from 'slonik';

const pool = await createPool('postgres://');

await pool.query(sql.typeAlias('id')`SELECT 1 AS id`);
```
There are no default interceptors.
Check out `slonik-interceptor-preset` for an opinionated collection of interceptors.
These type parsers are enabled by default:

| Type name | Implementation |
|---|---|
| `date` | Produces a literal date as a string (format: YYYY-MM-DD). |
| `int8` | Produces an integer. |
| `interval` | Produces interval in seconds (integer). |
| `numeric` | Produces a float. |
| `timestamp` | Produces a unix timestamp (in milliseconds). |
| `timestamptz` | Produces a unix timestamp (in milliseconds). |
To disable the default type parsers, pass an empty array, e.g.

```ts
createPool('postgres://', {
  typeParsers: []
});
```
You can create the default type parser collection using `createTypeParserPreset`, e.g.

```ts
import {
  createTypeParserPreset
} from 'slonik';

createPool('postgres://', {
  typeParsers: [
    ...createTypeParserPreset()
  ]
});
```
There are 4 types of configurable timeouts:

| Configuration | Description | Default |
|---|---|---|
| `connectionTimeout` | Timeout (in milliseconds) after which an error is raised if connection cannot be established. | `5000` |
| `idleInTransactionSessionTimeout` | Timeout (in milliseconds) after which idle clients are closed. Use `'DISABLE_TIMEOUT'` constant to disable the timeout. | `60000` |
| `idleTimeout` | Timeout (in milliseconds) after which idle clients are closed. Use `'DISABLE_TIMEOUT'` constant to disable the timeout. | `5000` |
| `statementTimeout` | Timeout (in milliseconds) after which database is instructed to abort the query. Use `'DISABLE_TIMEOUT'` constant to disable the timeout. | `60000` |
Slonik sets aggressive timeouts by default. These timeouts are designed to provide a safe interface to the database. They might not work for all programs. If your program has long-running statements, consider adjusting the timeouts just for those statements instead of changing the defaults.
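One way to do that (a sketch; `expensive_report` is a hypothetical table) is to scope a higher `statement_timeout` to a single transaction using `SET LOCAL`:

```ts
await pool.transaction(async (transactionConnection) => {
  // SET LOCAL only affects the current transaction;
  // the pool-wide statementTimeout remains unchanged.
  await transactionConnection.query(sql.unsafe`SET LOCAL statement_timeout = '5min'`);

  await transactionConnection.query(sql.unsafe`
    SELECT *
    FROM expensive_report
  `);
});
```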
Notices are only available through the `notices` query result property (the `notice` event is never fired on the connection instance).

Slonik only allows checking out a connection for the duration of the promise routine supplied to the `pool#connect()` method.
```ts
import {
  createPool,
  sql,
} from 'slonik';

const pool = await createPool('postgres://localhost');

const result = await pool.connect(async (connection) => {
  await connection.query(sql.typeAlias('id')`SELECT 1 AS id`);
  await connection.query(sql.typeAlias('id')`SELECT 2 AS id`);

  return 'foo';
});

result;
// 'foo'
```
The connection is released back to the pool after the promise produced by the function supplied to the `connect()` method is either resolved or rejected.
Read: Protecting against unsafe connection handling
Slonik provides a way to mock queries against the database.
Use:

- `createMockPool` to create a mock connection.
- `createMockQueryResult` to create a mock query result.

```ts
import {
  createMockPool,
  createMockQueryResult,
} from 'slonik';

type OverridesType = {
  query: (sql: string, values: PrimitiveValueExpression[],) => Promise<QueryResult<QueryResultRow>>,
};

createMockPool(overrides: OverridesType): DatabasePool;

createMockQueryResult(rows: QueryResultRow[]): QueryResult<QueryResultRow>;
```
Example:

```ts
import {
  createMockPool,
  createMockQueryResult,
  sql,
} from 'slonik';

const pool = createMockPool({
  query: async () => {
    return createMockQueryResult([
      {
        foo: 'bar',
      },
    ]);
  },
});

await pool.connect(async (connection) => {
  const results = await connection.query(sql.typeAlias('foo')`
    SELECT ${'foo'} AS foo
  `);
});
```
`pg` vs `slonik`

`pg` is intentionally built to provide an unopinionated, minimal abstraction and encourages the use of other modules to implement convenience methods.

Slonik is built on top of `pg` and provides convenience methods for building queries and querying data.
Work on `pg` began on Tue Sep 28 22:09:21 2010. It is authored by Brian Carlson.
`pg-promise` vs `slonik`

As the name suggests, `pg-promise` was originally built to enable use of the `pg` module with promises (at the time, `pg` only supported Continuation Passing Style (CPS), i.e. callbacks). Since then `pg-promise` added features for connection/transaction handling, a powerful query-formatting engine and a declarative approach to handling query results.
The primary difference between Slonik and `pg-promise`:

- Slonik does not allow executing raw text queries. Slonik queries can only be constructed using `sql` tagged template literals. This design protects against unsafe value interpolation.
- Slonik implements an interceptor API (middleware) that can be used to extend the client functionality using plugins.

Note: The author of `pg-promise` has objected to the above claims. I have removed a difference that was clearly wrong. I maintain that the above two differences remain valid differences: even though `pg-promise` might have substitute functionality for variable interpolation and interceptors, it implements them in a way that does not provide the same benefits that Slonik provides, namely: guaranteed security and support for extending library functionality using multiple plugins.
Other differences are primarily in how the equivalent features are implemented, e.g.
| pg-promise | Slonik |
|---|---|
| Custom type formatting. | Not available in Slonik. The current proposal is to create an interceptor that would have access to the query fragment constructor. |
| formatting filters | Slonik tagged template value expressions to construct query fragments and bind parameter values. |
| Query files. | Use `slonik-sql-tag-raw`. |
| Tasks. | Use `pool.connect`. |
| Configurable transactions. | Not available in Slonik. Track this issue. |
| Events. | Use interceptors. |
When weighing which abstraction to use, it would be unfair not to consider that `pg-promise` is a mature project with dozens of contributors. Meanwhile, Slonik is a young project (started in March 2017) that until recently was developed without active community input. However, if you do support the unique features that Slonik adds, the opinionated API design, and are not afraid of adopting a technology in its young days, then I warmly invite you to adopt Slonik and become a contributor to what I intend to make the standard PostgreSQL client in the Node.js community.
Work on `pg-promise` began on Wed Mar 4 02:00:34 2015. It is authored by Vitaly Tomilov.
`postgres` vs `slonik`

`postgres` recently gained popularity due to its performance benefits when compared to `pg`. Its API is pretty bare-bones, heavily relying on ES6 tagged templates, and it abstracts away many concepts of connection pool handling. While the `postgres` API might be preferred by some, projects that already use `pg` may have difficulty migrating.
However, by using postgres-bridge (a `postgres`/`pg` compatibility layer), you can benefit from the `postgres` performance improvements while still using the Slonik API:
```ts
import postgres from 'postgres';
import { createPostgresBridge } from 'postgres-bridge';
import { createPool } from 'slonik';

const PostgresBridge = createPostgresBridge(postgres);

const pool = await createPool('postgres://', {
  PgPool: PostgresBridge,
});
```
Type parsers describe how to parse PostgreSQL types.
```ts
type TypeParser = {
  name: string,
  parse: (value: string) => *
};
```
Example:

```ts
{
  name: 'int8',
  parse: (value) => {
    return parseInt(value, 10);
  }
}
```
Note: Unlike `pg-types`, which uses OIDs to identify types, Slonik identifies types using their names.
Use this query to find type names:

```sql
SELECT typname
FROM pg_type
ORDER BY typname ASC
```
Type parsers are configured using the `typeParsers` client configuration.
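For example (a sketch): registering just the `int8` parser from the example above. Note that supplying an explicit `typeParsers` array replaces the defaults, so combine it with `createTypeParserPreset()` if you want to keep them.

```ts
import { createPool } from 'slonik';

const pool = await createPool('postgres://', {
  typeParsers: [
    {
      // Parses PostgreSQL bigint (int8) values into JavaScript numbers.
      name: 'int8',
      parse: (value) => {
        return parseInt(value, 10);
      },
    },
  ],
});
```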
Read: Default type parsers.
| Type name | Implementation | Factory function name |
|---|---|---|
| `date` | Produces a literal date as a string (format: YYYY-MM-DD). | `createDateTypeParser` |
| `int8` | Produces an integer. | `createBigintTypeParser` |
| `interval` | Produces interval in seconds (integer). | `createIntervalTypeParser` |
| `numeric` | Produces a float. | `createNumericTypeParser` |
| `timestamp` | Produces a unix timestamp (in milliseconds). | `createTimestampTypeParser` |
| `timestamptz` | Produces a unix timestamp (in milliseconds). | `createTimestampWithTimeZoneTypeParser` |
Built-in type parsers can be created using the exported factory functions, e.g.
```ts
import {
  createTimestampTypeParser
} from 'slonik';

createTimestampTypeParser();

// {
//   name: 'timestamp',
//   parse: (value) => {
//     return value === null ? value : Date.parse(value + ' UTC');
//   }
// }
```
Functionality can be added to the Slonik client by adding interceptors (middleware).
Interceptors are configured using client configuration, e.g.
```ts
import {
  createPool
} from 'slonik';

const interceptors = [];

const connection = await createPool('postgres://', {
  interceptors
});
```
Interceptors are executed in the order they are added.
Read: Default interceptors.
An interceptor is an object that implements methods that can change the behaviour of the database client at different stages of the connection life-cycle:
```ts
type Interceptor = {
  afterPoolConnection?: (
    connectionContext: ConnectionContext,
    connection: DatabasePoolConnection
  ) => MaybePromise<null>,
  afterQueryExecution?: (
    queryContext: QueryContext,
    query: Query,
    result: QueryResult<QueryResultRow>
  ) => MaybePromise<QueryResult<QueryResultRow>>,
  beforePoolConnection?: (
    connectionContext: ConnectionContext
  ) => MaybePromise<?DatabasePool>,
  beforePoolConnectionRelease?: (
    connectionContext: ConnectionContext,
    connection: DatabasePoolConnection
  ) => MaybePromise<null>,
  beforeQueryExecution?: (
    queryContext: QueryContext,
    query: Query
  ) => MaybePromise<QueryResult<QueryResultRow>> | MaybePromise<null>,
  beforeQueryResult?: (
    queryContext: QueryContext,
    query: Query,
    result: QueryResult<QueryResultRow>
  ) => MaybePromise<null>,
  beforeTransformQuery?: (
    queryContext: QueryContext,
    query: Query
  ) => Promise<null>,
  queryExecutionError?: (
    queryContext: QueryContext,
    query: Query,
    error: SlonikError
  ) => MaybePromise<null>,
  transformQuery?: (
    queryContext: QueryContext,
    query: Query
  ) => Query,
  transformRow?: (
    queryContext: QueryContext,
    query: Query,
    row: QueryResultRow,
    fields: Field[],
  ) => QueryResultRow
};
```
`afterPoolConnection`

Executed after a connection is acquired from the connection pool (or a new connection is created), e.g.
```ts
const pool = await createPool('postgres://');

// Interceptor is executed here. ↓
pool.connect();
```
`afterQueryExecution`

Executed after query has been executed and before rows were transformed using `transformRow`.

Note: When a query is executed using `stream`, then `afterQuery` is called with an empty result set.
`beforeQueryExecution`

This function can optionally return a direct result of the query, which will cause the actual query never to be executed.
`beforeQueryResult`

Executed just before the result is returned to the client.

Use this method to capture the result that will be returned to the client.
`beforeTransformQuery`

Executed before `transformQuery`. Use this interceptor to capture the original query (e.g. for logging purposes).
`beforePoolConnection`

Executed before a connection is created.

This function can optionally return a pool to another database, causing a connection to be made to the new pool.
`beforePoolConnectionRelease`

Executed before a connection is released back to the connection pool, e.g.
```ts
const pool = await createPool('postgres://');

pool.connect(async () => {
  await 1;

  // Interceptor is executed here. ↓
});
```
`queryExecutionError`

Executed if query execution produces an error.

Use `queryExecutionError` to log and/or re-throw another error.
`transformQuery`

Executed before `beforeQueryExecution`.

Transforms query.
`transformRow`

Executed for each row. Transforms row.

Use `transformRow` to modify the query result.
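As an illustration, a minimal (hypothetical) interceptor that uses `transformRow` to trim whitespace from every string value might look like this:

```ts
import { type Interceptor } from 'slonik';

const createStringTrimInterceptor = (): Interceptor => {
  return {
    transformRow: (queryContext, query, row) => {
      // Copy the row so the original result object is not mutated.
      const transformedRow = { ...row };

      for (const key of Object.keys(transformedRow)) {
        const value = transformedRow[key];

        if (typeof value === 'string') {
          transformedRow[key] = value.trim();
        }
      }

      return transformedRow;
    },
  };
};
```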
| Name | Description |
|---|---|
| `slonik-interceptor-field-name-transformation` | Transforms Slonik query result field names. |
| `slonik-interceptor-query-benchmarking` | Benchmarks Slonik queries. |
| `slonik-interceptor-query-cache` | Caches Slonik queries. |
| `slonik-interceptor-query-logging` | Logs Slonik queries. |
| `slonik-interceptor-query-normalisation` | Normalises Slonik queries. |
Check out `slonik-interceptor-preset` for an opinionated collection of interceptors.
Use `sql.unnest` to create a set of rows using `unnest`. Using the `unnest` approach requires only 1 variable per every column; values for each column are passed as an array, e.g.
```ts
await connection.query(sql.unsafe`
  INSERT INTO foo (bar, baz, qux)
  SELECT *
  FROM ${sql.unnest(
    [
      [1, 2, 3],
      [4, 5, 6]
    ],
    [
      'int4',
      'int4',
      'int4'
    ]
  )}
`);
```
Produces:

```ts
{
  sql: 'INSERT INTO foo (bar, baz, qux) SELECT * FROM unnest($1::int4[], $2::int4[], $3::int4[])',
  values: [
    [
      1,
      4
    ],
    [
      2,
      5
    ],
    [
      3,
      6
    ]
  ]
}
```
Inserting data this way ensures that the query is stable and reduces the amount of time it takes to parse the query.
A typical load balancing requirement is to route all "logical" read-only queries to a read-only instance. This requirement can be implemented in 2 ways:

- Create two instances of Slonik (one read-write, one read-only) and pass them around the application as needed.
- Use the `beforePoolConnection` middleware to assign a query to a connection pool based on the query itself.

The first option is preferable as it is the most explicit. However, it also has the most overhead to implement.
On the other hand, `beforePoolConnection` makes it easy to route based on conventions, but carries a greater risk of accidentally routing queries with side-effects to a read-only instance.
The first option is self-explanatory to implement, but this recipe demonstrates my convention for using `beforePoolConnection` to route queries.
Note: How you determine which queries are safe to route to a read-only instance is outside the scope of this documentation.
Note: `beforePoolConnection` only works for connections initiated by a query, i.e. `pool#query` and not `pool#connect()`.

Note: `pool#transaction` triggers `beforePoolConnection`, but has no `query`.

Note: This particular implementation does not handle `SELECT INTO`.
```ts
const readOnlyPool = await createPool('postgres://read-only');
const pool = await createPool('postgres://main', {
  interceptors: [
    {
      beforePoolConnection: (connectionContext) => {
        if (!connectionContext.query?.sql.trim().startsWith('SELECT ')) {
          // Returning null falls back to using the DatabasePool from which the query originates.
          return null;
        }

        // This is a convention for the edge-cases where a SELECT query includes a volatile function.
        // Adding a @volatile comment anywhere into the query bypasses the read-only route, e.g.
        // sql.unsafe`
        //   # @volatile
        //   SELECT write_log()
        // `
        if (connectionContext.query?.sql.includes('@volatile')) {
          return null;
        }

        // Returning an instance of DatabasePool will attempt to run the query using the other connection pool.
        // Note that all other interceptors of the pool that the query originated from are short-circuited.
        return readOnlyPool;
      }
    }
  ]
});

// This query will use `postgres://read-only` connection.
pool.query(sql.typeAlias('id')`SELECT 1 AS id`);

// This query will use `postgres://main` connection.
pool.query(sql.typeAlias('id')`UPDATE 1 AS id`);
```
Parameter symbols only work in optimizable SQL commands (SELECT, INSERT, UPDATE, DELETE, and certain commands containing one of these). In other statement types (generically called utility statements, e.g. ALTER, CREATE, DROP and SET), you must insert values textually even if they are just data values.
In the context of Slonik, if you are building utility statements you must use query building methods that interpolate values directly into queries:
- `sql.identifier` – for identifiers.
- `sql.literalValue` – for values.

Example:
```ts
await connection.query(sql.typeAlias('void')`
  CREATE USER ${sql.identifier(['foo'])}
  WITH PASSWORD ${sql.literalValue('bar')}
`);
```
Slonik integrates zod to provide runtime query result validation and static type inference.
Validating queries requires to:

1. Define a Zod object and pass it to the `sql.type` tagged template (see below).
2. Add a result parser interceptor (see below).

Build-time type safety guarantees that your application will work as expected at the time of the build (assuming that the types are correct in the first place).
The problem is that once you deploy the application, the database schema might change independently of the codebase. This drift may result in your application behaving in unpredictable and potentially dangerous ways, e.g., imagine if table `product` changed `price` from `numeric` to `text`. Without runtime validation, this would cause a cascade of problems and potential database corruption. Even worse, without runtime checks, this could go unnoticed for a long time.
In contrast, by using runtime checks, you can ensure that the contract between your codebase and the database is always respected. If there is a breaking change, the application fails with a loud error that is easy to debug.
By using `zod`, we get the best of both worlds: type safety and runtime checks.
`sql.type`

Let's assume that you have a PostgreSQL table `person`:
```sql
CREATE TABLE "public"."person" (
  "id" integer GENERATED ALWAYS AS IDENTITY,
  "name" text NOT NULL,
  PRIMARY KEY ("id")
);
```
and you want to retrieve all persons in the database, along with their `id` and `name`:
```ts
connection.any(sql.unsafe`
  SELECT id, name
  FROM person
`);
```
With your knowledge of the database schema, define a zod object:
```ts
const personObject = z.object({
  id: z.number(),
  name: z.string(),
});
```
Update your query to use the `sql.type` tag and pass `personObject`:
```ts
const personQuery = sql.type(personObject)`
  SELECT id, name
  FROM person
`;
```
Finally, query the database using the typed `sql` tagged template:
```ts
const persons = await connection.any(personQuery);
```
With this information, Slonik guarantees that every member of `persons` is an object that has properties `id` and `name`, which are a non-null `number` and a non-null `string`, respectively.
In the context of the network overhead, validation accounts for a tiny amount of the total execution time.
Just to give an idea, in our sample of data, it takes sub 0.1ms to validate 1 row, ~3ms to validate 1,000 and ~25ms to validate 100,000 rows.
Slonik disallows unknown keys, i.e. a query that returns `{foo: 'bar', baz: 'qux'}` with a `z.object({foo: z.string()})` schema will produce a `SchemaValidationError` error.
If a query produces a row that does not satisfy the Zod object, then a `SchemaValidationError` error is thrown.

`SchemaValidationError` includes properties that describe the query and validation errors:
- `sql` – SQL of the query that produced unexpected row.
- `row` – row data that did not satisfy the schema.
- `issues` – array of unmet expectations.

Whenever this error occurs, the same information is also included in the logs.
In most cases, you shouldn't attempt to handle these errors at the individual query level – allow them to propagate to the top of the application and fix the issue when you become aware of it.
However, in cases such as dealing with unstructured data, it might be useful to handle these errors at a query level, e.g.
```ts
import {
  SchemaValidationError
} from 'slonik';

try {
  // Execute a query that is expected to match the schema.
} catch (error) {
  if (error instanceof SchemaValidationError) {
    // Handle schema validation error
  }
}
```
You can infer the TypeScript type of the query result. There are a couple of ways of doing it:
```ts
// Infer using z.infer<typeof yourSchema>
// https://github.com/colinhacks/zod#type-inference
type Person = z.infer<typeof personObject>;

// from sql tagged template `parser` property
type Person = z.infer<
  typeof personQuery.parser
>;
```
Using Zod transform, you can refine the result shape and its type, e.g.
```ts
const coordinatesType = z.string().transform((subject) => {
  const [
    x,
    y,
  ] = subject.split(',');

  return {
    x: Number(x),
    y: Number(y),
  };
});

const zodObject = z.object({
  foo: coordinatesType,
});

const query = sql.type(zodObject)`SELECT '1,2' as foo`;

const result = await pool.one(query);

expectTypeOf(result).toMatchTypeOf<{foo: {x: number, y: number, }, }>();

t.deepEqual(result, {
  foo: {
    x: 1,
    y: 2,
  },
});
```
Slonik works without the interceptor, but it doesn't validate the query results. To validate results, you must implement an interceptor that parses the results.
For context, when Zod parsing was first introduced to Slonik, it was enabled for all queries by default. However, I eventually realized that the baked-in implementation is not going to suit everyone's needs. For this reason, I decided to take out the built-in interceptor in favor of providing examples for common use cases. What follows is the original default implementation.
```ts
import {
  type Interceptor,
  type QueryResultRow,
  SchemaValidationError,
} from 'slonik';

const createResultParserInterceptor = (): Interceptor => {
  return {
    // If you are not going to transform results using Zod, then you should use `afterQueryExecution` instead.
    // Future versions of Zod will provide a more efficient parser when parsing without transformations.
    // You can even combine the two – use `afterQueryExecution` to validate results, and (conditionally)
    // transform results as needed in `transformRow`.
    transformRow: (executionContext, actualQuery, row) => {
      const {
        log,
        resultParser,
      } = executionContext;

      if (!resultParser) {
        return row;
      }

      const validationResult = resultParser.safeParse(row);

      if (!validationResult.success) {
        throw new SchemaValidationError(
          actualQuery,
          row,
          validationResult.error.issues,
        );
      }

      return validationResult.data as QueryResultRow;
    },
  };
};
```
To use it, simply add it as a middleware:

```ts
import {
  createPool,
} from 'slonik';

createPool('postgresql://', {
  interceptors: [
    createResultParserInterceptor(),
  ]
});
```
`sql` tag

The `sql` tag serves two purposes:

- it is used to construct queries with bound parameter values
- it is used to generate dynamic query fragments

The `sql` tag can be imported from the Slonik package:

```ts
import {
  sql
} from 'slonik';
```
Sometimes it may be desirable to construct a custom instance of the `sql` tag. In those cases, you can use the `createSqlTag` factory, e.g.
```ts
import {
  createSqlTag
} from 'slonik';

const sql = createSqlTag();
```
You can create a `sql` tag with a predefined set of Zod type aliases that can be later referenced when creating a query with runtime validation.

Slonik documentation assumes that these type aliases are defined:
```ts
const sql = createSqlTag({
  typeAliases: {
    // `foo` is a documentation specific example
    foo: z.object({
      foo: z.string(),
    }),
    id: z.object({
      id: z.number(),
    }),
    void: z.object({}).strict(),
  }
});
```
These are documentation-specific examples that you are not expected to blindly copy. However, `id` and `void` are recommended aliases as they reflect common patterns, e.g.
```ts
const personId = await pool.oneFirst(
  sql.typeAlias('id')`
    SELECT id
    FROM person
  `
);

await pool.query(sql.typeAlias('void')`
  INSERT INTO person_view (person_id)
  VALUES (${personId})
`);
```
Typing the `sql` tag: see runtime validation.
Slonik query methods can only be executed using the `sql` tagged template literal, e.g.
```ts
import {
  sql
} from 'slonik'

connection.query(sql.typeAlias('id')`
  SELECT 1 AS id
  FROM foo
  WHERE bar = ${'baz'}
`);
```
The above is equivalent to evaluating:

```sql
SELECT 1 AS id
FROM foo
WHERE bar = $1
```

query with the 'baz' value binding.
Manually constructing queries is not allowed.
There is an internal mechanism that checks to see if the query was created using the `sql` tagged template literal, i.e.
```ts
const query = {
  sql: 'SELECT 1 AS id FROM foo WHERE bar = $1',
  type: 'SQL',
  values: [
    'baz'
  ]
};

connection.query(query);
```
Will result in an error:

```
Query must be constructed using `sql` tagged template literal.
```
This is a security measure designed to prevent unsafe query execution.
Furthermore, a query object constructed using the `sql` tagged template literal is frozen to prevent further manipulation.
Nesting `sql`

`sql` tagged template literals can be nested, e.g.
```ts
const query0 = sql.unsafe`SELECT ${'foo'} FROM bar`;
const query1 = sql.unsafe`SELECT ${'baz'} FROM (${query0})`;
```
Produces:

```ts
{
  sql: 'SELECT $1 FROM (SELECT $2 FROM bar)',
  values: [
    'baz',
    'foo'
  ]
}
```
Queries are built using methods of the `sql` tagged template literal.
If this is your first time using Slonik, read Dynamically generating SQL queries using Node.js
`sql.array`

```ts
(
  values: readonly PrimitiveValueExpression[],
  memberType: SqlFragment | TypeNameIdentifier,
) => ArraySqlToken,
```
Creates an array value binding, e.g.
```ts
await connection.query(sql.typeAlias('id')`
  SELECT (${sql.array([1, 2, 3], 'int4')}) AS id
`);
```
Produces:

```ts
{
  sql: 'SELECT $1::"int4"[]',
  values: [
    [
      1,
      2,
      3
    ]
  ]
}
```
`sql.array` `memberType`

If `memberType` is a string (`TypeNameIdentifier`), then it is treated as a type name identifier and will be quoted using double quotes, i.e. `sql.array([1, 2, 3], 'int4')` is equivalent to `$1::"int4"[]`. The implication is that keywords that are often used interchangeably with type names are not going to work, e.g. `int4` is a type name identifier and will work. However, `int` is a keyword and will not work. You can either use type name identifiers or you can construct a custom member using the `sql.fragment` tag, e.g.
```ts
await connection.query(sql.typeAlias('id')`
  SELECT (${sql.array([1, 2, 3], sql.fragment`int[]`)}) AS id
`);
```
Produces:

```ts
{
  sql: 'SELECT $1::int[]',
  values: [
    [
      1,
      2,
      3
    ]
  ]
}
```
`sql.array` vs `sql.join`

Unlike `sql.join`, `sql.array` generates a stable query of a predictable length, i.e. regardless of the number of values in the array, the generated query remains the same:

- Having a stable query enables `pg_stat_statements` to aggregate all query execution statistics.
- Keeping the query length short reduces query parsing time.

Example:
```ts
sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE id IN (${sql.join([1, 2, 3], sql.fragment`, `)})
`;

sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE id NOT IN (${sql.join([1, 2, 3], sql.fragment`, `)})
`;
```
Is equivalent to:

```ts
sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE id = ANY(${sql.array([1, 2, 3], 'int4')})
`;

sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE id != ALL(${sql.array([1, 2, 3], 'int4')})
`;
```
Furthermore, unlike `sql.join`, `sql.array` can be used with an empty array of values. In short, `sql.array` should be preferred over `sql.join` when possible.
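For example (a sketch), the empty-array case is where the `ANY` form degrades gracefully, whereas `IN ()` produced from an empty `sql.join` would not be valid SQL:

```ts
const ids: number[] = [];

// `id = ANY($1)` with an empty array simply matches no rows.
await connection.any(sql.typeAlias('id')`
  SELECT id
  FROM foo
  WHERE id = ANY(${sql.array(ids, 'int4')})
`);
```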
`sql.binary`

```ts
(
  data: Buffer
) => BinarySqlToken;
```
Binds binary (`bytea`) data, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT ${sql.binary(Buffer.from('foo'))}
`);
```
Produces:

```ts
{
  sql: 'SELECT $1',
  values: [
    Buffer.from('foo')
  ]
}
```
`sql.date`

```ts
(
  date: Date
) => DateSqlToken;
```
Inserts a date, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT ${sql.date(new Date('2022-08-19T03:27:24.951Z'))}
`);
```
Produces:

```ts
{
  sql: 'SELECT $1::date',
  values: [
    '2022-08-19'
  ]
}
```
`sql.fragment`

```ts
(
  template: TemplateStringsArray,
  ...values: ValueExpression[]
) => SqlFragment;
```
A SQL fragment, e.g.
```ts
sql.fragment`FOO`
```
Produces:

```ts
{
  sql: 'FOO',
  values: []
}
```
SQL fragments can be used to build more complex queries, e.g.
```ts
const whereFragment = sql.fragment`
  WHERE bar = 'baz'
`;

sql.typeAlias('id')`
  SELECT id
  FROM foo
  ${whereFragment}
`
```
The only difference between queries and fragments is that fragments are untyped and they cannot be used as inputs to query methods (use `sql.type` instead).
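For example (a sketch), a fragment can be embedded in a typed query, but it cannot be passed to a query method directly:

```ts
const whereFragment = sql.fragment`
  WHERE bar = 'baz'
`;

// OK: the fragment is embedded into a typed query.
await connection.any(sql.typeAlias('id')`
  SELECT id
  FROM foo
  ${whereFragment}
`);

// Not OK: a fragment is not a query.
// await connection.any(whereFragment);
```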
`sql.identifier`

```ts
(
  names: string[],
) => IdentifierSqlToken;
```
Delimited identifiers are created by enclosing an arbitrary sequence of characters in double-quotes ("). To create a delimited identifier, create a `sql` tag function placeholder value using `sql.identifier`, e.g.
```ts
sql.typeAlias('id')`
  SELECT 1 AS id
  FROM ${sql.identifier(['bar', 'baz'])}
`;
```
Produces:

```ts
{
  sql: 'SELECT 1 FROM "bar"."baz"',
  values: []
}
```
`sql.interval`

```ts
(
  interval: {
    years?: number,
    months?: number,
    weeks?: number,
    days?: number,
    hours?: number,
    minutes?: number,
    seconds?: number,
  }
) => IntervalSqlToken;
```
Inserts an interval, e.g.
```ts
sql.typeAlias('id')`
  SELECT 1 AS id
  FROM ${sql.interval({days: 3})}
`;
```
Produces:

```ts
{
  sql: 'SELECT make_interval("days" => $1)',
  values: [
    3
  ]
}
```
You can use `sql.interval` exactly how you would use the PostgreSQL `make_interval` function. However, notice that Slonik does not use abbreviations, i.e. "secs" is seconds and "mins" is minutes.
| make_interval | sql.interval | Interval output |
|---|---|---|
| `make_interval("days" => 1, "hours" => 2)` | `sql.interval({days: 1, hours: 2})` | `1 day 02:00:00` |
| `make_interval("mins" => 1)` | `sql.interval({minutes: 1})` | `00:01:00` |
| `make_interval("secs" => 120)` | `sql.interval({seconds: 120})` | `00:02:00` |
| `make_interval("secs" => 0.001)` | `sql.interval({seconds: 0.001})` | `00:00:00.001` |
Dynamic intervals without `sql.interval`

If you need a dynamic interval (e.g. X days), you can achieve this using multiplication, e.g.
```ts
sql.unsafe`
  SELECT ${2} * interval '1 day'
`
```
The above is equivalent to `interval '2 days'`.
You could also use `make_interval()` directly, e.g.
```ts
sql.unsafe`
  SELECT make_interval("days" => ${2})
`
```
`sql.interval` was added mostly as a type-safe alternative.
`sql.join`

```ts
(
  members: SqlSqlToken[],
  glue: SqlSqlToken
) => ListSqlToken;
```
Concatenates SQL expressions using the `glue` separator, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT ${sql.join([1, 2, 3], sql.fragment`, `)}
`);
```
Produces:

```ts
{
  sql: 'SELECT $1, $2, $3',
  values: [
    1,
    2,
    3
  ]
}
```
`sql.join` is the primary building block for most of the SQL, e.g.
Boolean expressions:

```ts
sql.unsafe`
  SELECT ${sql.join([1, 2], sql.fragment` AND `)}
`
// SELECT $1 AND $2
```
Tuple:

```ts
sql.unsafe`
  SELECT (${sql.join([1, 2], sql.fragment`, `)})
`
// SELECT ($1, $2)
```
Tuple list:

```ts
sql.unsafe`
  SELECT ${sql.join(
    [
      sql.fragment`(${sql.join([1, 2], sql.fragment`, `)})`,
      sql.fragment`(${sql.join([3, 4], sql.fragment`, `)})`,
    ],
    sql.fragment`, `
  )}
`
// SELECT ($1, $2), ($3, $4)
```
`sql.json`

```ts
(
  value: SerializableValue
) => JsonSqlToken;
```
Serializes value and binds it as a JSON string literal, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT (${sql.json([1, 2, 3])})
`);
```
Produces:

```ts
{
  sql: 'SELECT $1::json',
  values: [
    '[1,2,3]'
  ]
}
```
`sql.jsonb`

```ts
(
  value: SerializableValue
) => JsonBinarySqlToken;
```
Serializes value and binds it as a JSON binary, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT (${sql.jsonb([1, 2, 3])})
`);
```
Produces:

```ts
{
  sql: 'SELECT $1::jsonb',
  values: [
    '[1,2,3]'
  ]
}
```
`sql.literalValue`

⚠️ Do not use. This method interpolates values as literals and it must be used only for building utility statements. You are most likely looking for value placeholders.

```ts
(
  value: string,
) => SqlSqlToken;
```
Escapes and interpolates a literal value into a query.
```ts
await connection.query(sql.unsafe`
  CREATE USER "foo" WITH PASSWORD ${sql.literalValue('bar')}
`);
```
Produces:

```ts
{
  sql: 'CREATE USER "foo" WITH PASSWORD \'bar\''
}
```
`sql.timestamp`

```ts
(
  date: Date
) => TimestampSqlToken;
```
Inserts a timestamp, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT ${sql.timestamp(new Date('2022-08-19T03:27:24.951Z'))}
`);
```
Produces:

```ts
{
  sql: 'SELECT to_timestamp($1)',
  values: [
    '1660879644.951'
  ]
}
```
`sql.unnest`

```ts
(
  tuples: ReadonlyArray<readonly any[]>,
  columnTypes: Array<[...string[], TypeNameIdentifier]> | Array<SqlSqlToken | TypeNameIdentifier>
): UnnestSqlToken;
```
Creates an `unnest` expression, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT bar, baz
  FROM ${sql.unnest(
    [
      [1, 'foo'],
      [2, 'bar']
    ],
    [
      'int4',
      'text'
    ]
  )} AS foo(bar, baz)
`);
```
Produces:

```ts
{
  sql: 'SELECT bar, baz FROM unnest($1::"int4"[], $2::"text"[]) AS foo(bar, baz)',
  values: [
    [
      1,
      2
    ],
    [
      'foo',
      'bar'
    ]
  ]
}
```
If `columnType` array member type is `string`, it will treat it as a type name identifier (and quote it with double quotes; illustrated in the example above).
If `columnType` array member type is `SqlToken`, it will inline the type name without quotes, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT bar, baz
  FROM ${sql.unnest(
    [
      [1, 'foo'],
      [2, 'bar']
    ],
    [
      sql.fragment`integer`,
      sql.fragment`text`
    ]
  )} AS foo(bar, baz)
`);
```
Produces:

```ts
{
  sql: 'SELECT bar, baz FROM unnest($1::integer[], $2::text[]) AS foo(bar, baz)',
  values: [
    [
      1,
      2
    ],
    [
      'foo',
      'bar'
    ]
  ]
}
```
If `columnType` array member type is `[...string[], TypeNameIdentifier]`, it will act as `sql.identifier`, e.g.
```ts
await connection.query(sql.unsafe`
  SELECT bar, baz
  FROM ${sql.unnest(
    [
      [1, 3],
      [2, 4]
    ],
    [
      ['foo', 'int4'],
      ['foo', 'int4']
    ]
  )} AS foo(bar, baz)
`);
```
Produces:

```ts
{
  sql: 'SELECT bar, baz FROM unnest($1::"foo"."int4"[], $2::"foo"."int4"[]) AS foo(bar, baz)',
  values: [
    [
      1,
      2
    ],
    [
      3,
      4
    ]
  ]
}
```
`sql.unsafe`

```ts
(
  template: TemplateStringsArray,
  ...values: ValueExpression[]
) => QuerySqlToken;
```
Creates a query with Zod `any` type. The result of such a query has TypeScript type `any`.
```ts
const result = await connection.one(sql.unsafe`
  SELECT foo
  FROM bar
`);

// `result` type is `any`
```
`sql.unsafe` is effectively a shortcut to `sql.type(z.any())`.

`sql.unsafe` is a convenience method for development. Your production code must not use `sql.unsafe`. Instead, use:

- `sql.type` to type the query result
- `sql.typeAlias` to alias an existing type
- `sql.fragment` if you are writing a fragment of a query

`any`

Returns result rows.
Example:

```ts
const rows = await connection.any(sql.typeAlias('foo')`SELECT foo`);
```
`#any` is similar to `#query` except that it returns rows without fields information.
`anyFirst`

Returns value of the first column of every row in the result set.

- Throws `DataIntegrityError` if query returns multiple columns.

Example:

```ts
const fooValues = await connection.anyFirst(sql.typeAlias('foo')`SELECT foo`);
```
`exists`

Returns a boolean value indicating whether query produces results.

The query that is passed to this function is wrapped in `SELECT exists()` prior to it getting executed, i.e.

```ts
pool.exists(sql.typeAlias('void')`
  SELECT
  LIMIT 1
`)
```
is equivalent to:

```ts
pool.oneFirst(sql.unsafe`
  SELECT exists(
    SELECT
    LIMIT 1
  )
`)
```
`copyFromBinary`

```ts
(
  streamQuery: QuerySqlToken,
  tupleList: any[][],
  columnTypes: TypeNameIdentifier[],
) => Promise<null>;
```
Copies from a binary stream.

The binary stream is constructed using user-supplied `tupleList` and `columnTypes` values.
Example:

```ts
const tupleList = [
  [
    1,
    'baz'
  ],
  [
    2,
    'baz'
  ]
];

const columnTypes = [
  'int4',
  'text'
];

await connection.copyFromBinary(
  sql.unsafe`
    COPY foo
    (
      id,
      baz
    )
    FROM STDIN BINARY
  `,
  tupleList,
  columnTypes
);
```
Limitations: it does not handle `NULL` values.

The `copyFromBinary` implementation is designed to minimize the query execution time at the cost of increased script memory usage and execution time. This is achieved by separating data encoding from feeding data to PostgreSQL, i.e. all data passed to `copyFromBinary` is first encoded and then fed to PostgreSQL (contrast this to using a stream with encoding transformation to feed data to PostgreSQL).
`many`

Returns result rows.

- Throws `NotFoundError` if query returns no rows.

Example:

```ts
const rows = await connection.many(sql.typeAlias('foo')`SELECT foo`);
```
`manyFirst`

Returns value of the first column of every row in the result set.

- Throws `NotFoundError` if query returns no rows.
- Throws `DataIntegrityError` if query returns multiple columns.

Example:

```ts
const fooValues = await connection.manyFirst(sql.typeAlias('foo')`SELECT foo`);
```
`maybeOne`

Selects the first row from the result.

- Returns `null` if row is not found.
- Throws `DataIntegrityError` if query returns multiple rows.

Example:

```ts
const row = await connection.maybeOne(sql.typeAlias('foo')`SELECT foo`);

// row.foo is the result of the `foo` column value of the first row.
```
`maybeOneFirst`

Returns value of the first column from the first row.

- Returns `null` if row is not found.
- Throws `DataIntegrityError` if query returns multiple rows.
- Throws `DataIntegrityError` if query returns multiple columns.

Example:

```ts
const foo = await connection.maybeOneFirst(sql.typeAlias('foo')`SELECT foo`);

// foo is the result of the `foo` column value of the first row.
```
`one`

Selects the first row from the result.

- Throws `NotFoundError` if query returns no rows.
- Throws `DataIntegrityError` if query returns multiple rows.

Example:

```ts
const row = await connection.one(sql.typeAlias('foo')`SELECT foo`);

// row.foo is the result of the `foo` column value of the first row.
```
Note: I've been asked, "What makes this different from `knex.js` `knex('foo').limit(1)`?". `knex('foo').limit(1)` simply generates the "SELECT * FROM foo LIMIT 1" query. `knex` is a query builder; it does not assert the value of the result. Slonik `#one` adds assertions about the result of the query.
`oneFirst`

Returns value of the first column from the first row.

- Throws `NotFoundError` if query returns no rows.
- Throws `DataIntegrityError` if query returns multiple rows.
- Throws `DataIntegrityError` if query returns multiple columns.

Example:

```ts
const foo = await connection.oneFirst(sql.typeAlias('foo')`SELECT foo`);

// foo is the result of the `foo` column value of the first row.
```
`query`

API and the result shape are equivalent to `pg#query`.

Example:

```ts
await connection.query(sql.typeAlias('foo')`SELECT foo`);

// {
//   command: 'SELECT',
//   fields: [],
//   notices: [],
//   rowCount: 1,
//   rows: [
//     {
//       foo: 'bar'
//     }
//   ]
// }
```
`stream`

Streams query results.

Example:

```ts
await connection.stream(sql.typeAlias('foo')`SELECT foo`, (stream) => {
  stream.on('data', (datum) => {
    datum;
    // {
    //   fields: [
    //     {
    //       name: 'foo',
    //       dataTypeId: 23,
    //     }
    //   ],
    //   row: {
    //     foo: 'bar'
    //   }
    // }
  });
});
```
Note: Implemented using `pg-query-stream`.
`transaction`

The `transaction` method is used to wrap execution of queries in `START TRANSACTION` and `COMMIT` or `ROLLBACK`. `COMMIT` is called if the transaction handler returns a promise that resolves; `ROLLBACK` is called otherwise.
The `transaction` method can be used together with the `createPool` method. When used to create a transaction from an instance of a pool, a new connection is allocated for the duration of the transaction.
```ts
const result = await connection.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql.unsafe`INSERT INTO foo (bar) VALUES ('baz')`);
  await transactionConnection.query(sql.unsafe`INSERT INTO qux (quux) VALUES ('corge')`);

  return 'FOO';
});

result === 'FOO';
```
Slonik uses `SAVEPOINT` to automatically nest transactions, e.g.
```ts
await connection.transaction(async (t1) => {
  await t1.query(sql.unsafe`INSERT INTO foo (bar) VALUES ('baz')`);

  return t1.transaction((t2) => {
    return t2.query(sql.unsafe`INSERT INTO qux (quux) VALUES ('corge')`);
  });
});
```
is equivalent to:

```sql
START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
COMMIT;
```
Slonik automatically rolls back to the last savepoint if a query belonging to a transaction results in an error, e.g.
```ts
await connection.transaction(async (t1) => {
  await t1.query(sql.unsafe`INSERT INTO foo (bar) VALUES ('baz')`);

  try {
    await t1.transaction(async (t2) => {
      await t2.query(sql.unsafe`INSERT INTO qux (quux) VALUES ('corge')`);

      return Promise.reject(new Error('foo'));
    });
  } catch (error) {
    // The error is handled; only the nested transaction is rolled back.
  }
});
```
is equivalent to:

```sql
START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
ROLLBACK TO SAVEPOINT slonik_savepoint_1;
COMMIT;
```
If the error is unhandled, then the entire transaction is rolled back, e.g.
```ts
await connection.transaction(async (t1) => {
  await t1.query(sql.typeAlias('void')`INSERT INTO foo (bar) VALUES ('baz')`);

  await t1.transaction(async (t2) => {
    await t2.query(sql.typeAlias('void')`INSERT INTO qux (quux) VALUES ('corge')`);

    await t1.transaction(async (t3) => {
      await t3.query(sql.typeAlias('void')`INSERT INTO uier (grault) VALUES ('garply')`);

      return Promise.reject(new Error('foo'));
    });
  });
});
```
is equivalent to:

```sql
START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
SAVEPOINT slonik_savepoint_2;
INSERT INTO uier (grault) VALUES ('garply');
ROLLBACK TO SAVEPOINT slonik_savepoint_2;
ROLLBACK TO SAVEPOINT slonik_savepoint_1;
ROLLBACK;
```
Transactions that are failing with Transaction Rollback class errors are automatically retried.
A failing transaction will be rolled back and the callback function passed to the transaction method call will be executed again. Nested transactions are also retried until the retry limit is reached. If the nested transaction keeps failing with a Transaction Rollback error, then the parent transaction will be retried until the retry limit is reached.
How many times a transaction is retried is controlled using the `transactionRetryLimit` configuration (default: 5) and the `transactionRetryLimit` parameter of the `transaction` method (default: undefined). If a `transactionRetryLimit` is given to the method call, then it is used; otherwise, the `transactionRetryLimit` configuration is used.
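For example, a sketch of passing a per-call limit (assuming, as described above, that the `transactionRetryLimit` parameter is the second argument of `transaction`):

```ts
// Retry this particular transaction up to 3 times on
// Transaction Rollback class errors, overriding the configured default.
await pool.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql.typeAlias('void')`
    UPDATE foo
    SET bar = 'baz'
  `);
}, 3);
```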
A single query (not part of a transaction) failing with a Transaction Rollback class error is automatically retried.

How many times it is retried is controlled by the `queryRetryLimit` configuration (default: 5).
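For example (a sketch), lowering the automatic retry limits via the client configuration:

```ts
import { createPool } from 'slonik';

const pool = await createPool('postgres://', {
  // Retry a failing single query at most once.
  queryRetryLimit: 1,
  // Retry a failing transaction at most twice.
  transactionRetryLimit: 2,
});
```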
`parseDsn`

```ts
(
  dsn: string,
) => ConnectionOptions;
```

Parses DSN to `ConnectionOptions` type.
Example:

```ts
import {
  parseDsn,
} from 'slonik';

parseDsn('postgresql://foo@localhost/bar?application_name=baz');
```
`stringifyDsn`

```ts
(
  connectionOptions: ConnectionOptions,
) => string;
```

Stringifies `ConnectionOptions` to a DSN.
Example:

```ts
import {
  stringifyDsn,
} from 'slonik';

stringifyDsn({
  host: 'localhost',
  username: 'foo',
  databaseName: 'bar',
  applicationName: 'baz',
});
```
All Slonik errors extend from `SlonikError`, i.e. you can catch Slonik-specific errors using the following logic.
```ts
import {
  SlonikError
} from 'slonik';

try {
  await query();
} catch (error) {
  if (error instanceof SlonikError) {
    // This error is thrown by Slonik.
  }
}
```
`node-postgres` error

When an error originates from `node-postgres`, the original error is available under the `originalError` property.
This property is exposed for debugging purposes only. Do not use it for conditional checks – it can change.

If you need to extract meta-data about a specific type of error (e.g. constraint violation name), raise a GitHub issue describing your use case.
`BackendTerminatedError`

`BackendTerminatedError` is thrown when the backend is terminated by the user, i.e. `pg_terminate_backend`.
`BackendTerminatedError` must be handled at the connection level, i.e.
```ts
await pool.connect(async (connection0) => {
  try {
    await pool.connect(async (connection1) => {
      const backendProcessId = await connection1.oneFirst(sql.typeAlias('id')`SELECT pg_backend_pid() AS id`);

      setTimeout(() => {
        connection0.query(sql.typeAlias('void')`SELECT pg_cancel_backend(${backendProcessId})`)
      }, 2000);

      try {
        await connection1.query(sql.typeAlias('void')`SELECT pg_sleep(30)`);
      } catch (error) {
        // This code will not be executed.
      }
    });
  } catch (error) {
    if (error instanceof BackendTerminatedError) {
      // Handle backend termination.
    } else {
      throw error;
    }
  }
});
```
`CheckIntegrityConstraintViolationError`

`CheckIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a `check_violation` (`23514`) error.
`ConnectionError`

`ConnectionError` is thrown when a connection cannot be established to the PostgreSQL server.
`DataIntegrityError`

To handle the case where the data result does not match the expectations, catch the `DataIntegrityError` error.
```ts
import {
  DataIntegrityError
} from 'slonik';

let row;

try {
  row = await connection.one(sql.typeAlias('foo')`SELECT foo`);
} catch (error) {
  if (error instanceof DataIntegrityError) {
    console.error('There is more than one row matching the select criteria.');
  } else {
    throw error;
  }
}
```
`ForeignKeyIntegrityConstraintViolationError`

`ForeignKeyIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a `foreign_key_violation` (`23503`) error.
`NotFoundError`

To handle the case where the query returns less than one row, catch the `NotFoundError` error.
```ts
import {
  NotFoundError
} from 'slonik';

let row;

try {
  row = await connection.one(sql.typeAlias('foo')`SELECT foo`);
} catch (error) {
  if (!(error instanceof NotFoundError)) {
    throw error;
  }
}

if (row) {
  // row.foo is the result of the `foo` column value of the first row.
}
```
`NotNullIntegrityConstraintViolationError`

`NotNullIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a `not_null_violation` (`23502`) error.
`StatementCancelledError`

`StatementCancelledError` is thrown when a query is cancelled by the user (i.e. `pg_cancel_backend`) or in case of a timeout.
It should be safe to use the same connection if `StatementCancelledError` is handled, e.g.
```ts
await pool.connect(async (connection0) => {
  await pool.connect(async (connection1) => {
    const backendProcessId = await connection1.oneFirst(sql.typeAlias('id')`SELECT pg_backend_pid() AS id`);

    setTimeout(() => {
      connection0.query(sql.typeAlias('void')`SELECT pg_cancel_backend(${backendProcessId})`)
    }, 2000);

    try {
      await connection1.query(sql.typeAlias('void')`SELECT pg_sleep(30)`);
    } catch (error) {
      if (error instanceof StatementCancelledError) {
        // Safe to continue using the same connection.
      } else {
        throw error;
      }
    }
  });
});
```
StatementTimeoutError
StatementTimeoutError inherits from StatementCancelledError and is thrown only in the case of a timeout.
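As a minimal sketch of triggering and handling it, assuming the timeout comes from the server-side statement_timeout setting (the 100ms value is arbitrary):
await pool.connect(async (connection) => {
  // Force a short server-side timeout for this session.
  await connection.query(sql.typeAlias('void')`SET statement_timeout TO 100`);
  try {
    await connection.query(sql.typeAlias('void')`SELECT pg_sleep(1)`);
  } catch (error) {
    if (error instanceof StatementTimeoutError) {
      // The statement exceeded statement_timeout and was cancelled by the server.
    } else {
      throw error;
    }
  }
});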
UniqueIntegrityConstraintViolationError
UniqueIntegrityConstraintViolationError is thrown when PostgreSQL responds with a unique_violation (23505) error.
TupleMovedToAnotherPartitionError
TupleMovedToAnotherPartitionError is thrown when the affected tuple has been moved into a different partition.
This library intentionally doesn't handle migrations, because a database client and migrations are conceptually distinct problems.
My personal preference is to use Flyway – it is a robust solution that many DBAs are already familiar with.
The Slonik community has also shared their successes with these Node.js frameworks:
This package is using TypeScript types.
Refer to ./src/types.ts.
The public interface exports the following types:
CommonQueryMethods (most generic)
DatabaseConnection (DatabasePool | DatabasePoolConnection)
DatabasePool
DatabasePoolConnection
DatabaseTransactionConnection
Use these types to annotate the connection instance in your code base, e.g.
import {
  sql,
  type DatabaseConnection
} from 'slonik';
export default async (
connection: DatabaseConnection,
code: string
): Promise<number> => {
return await connection.oneFirst(sql.typeAlias('id')`
SELECT id
FROM country
WHERE code = ${code}
`);
};
See runtime validation.
Slonik uses roarr to log queries.
To enable logging, define ROARR_LOG=true
environment variable.
By default, Slonik logs only connection events, e.g. when a connection is created, when a connection is acquired, and notices.
Query-level logging can be added using slonik-interceptor-query-logging
interceptor.
Note: Requires slonik-interceptor-query-logging.
Enabling the captureStackTrace configuration will create a stack trace before invoking the query and include the stack trace in the logs, e.g.
{
"context": {
"package": "slonik",
"namespace": "slonik",
"logLevel": 20,
"executionTime": "357 ms",
"queryId": "01CV2V5S4H57KCYFFBS0BJ8K7E",
"rowCount": 1,
"sql": "SELECT schedule_cinema_data_task();",
"stackTrace": [
"/node_modules/slonik/dist:162:28",
"/node_modules/slonik/dist:314:12",
"/node_modules/slonik/dist:361:20",
"/node_modules/slonik/dist/utilities:17:13",
"/src/bin/commands/do-cinema-data-tasks.js:59:21",
"/src/bin/commands/do-cinema-data-tasks.js:590:45",
"internal/process/next_tick.js:68:7"
],
"values": []
},
"message": "query",
"sequence": 4,
"time": 1540915127833,
"version": "1.0.0"
}
{
"context": {
"package": "slonik",
"namespace": "slonik",
"logLevel": 20,
"executionTime": "66 ms",
"queryId": "01CV2V5SGS0WHJX4GJN09Z3MTB",
"rowCount": 1,
"sql": "SELECT cinema_id \"cinemaId\", target_data \"targetData\" FROM cinema_data_task WHERE id = ?",
"stackTrace": [
"/node_modules/slonik/dist:162:28",
"/node_modules/slonik/dist:285:12",
"/node_modules/slonik/dist/utilities:17:13",
"/src/bin/commands/do-cinema-data-tasks.js:603:26",
"internal/process/next_tick.js:68:7"
],
"values": [
17953947
]
},
"message": "query",
"sequence": 5,
"time": 1540915127902,
"version": "1.0.0"
}
Use @roarr/cli to pretty-print the output.
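For example, assuming @roarr/cli is installed globally (it provides the roarr binary) and your entry point is a hypothetical app.js:
ROARR_LOG=true node ./app.js | roarr pretty-print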
Using the Atom IDE, you can leverage the language-babel package in combination with language-sql to enable highlighting of the SQL strings in the codebase.
To enable highlighting, you need to:
Install the language-babel and language-sql packages.
Configure the language-babel "JavaScript Tagged Template Literal Grammar Extensions" setting to use language-sql to highlight template literals with the sql tag (configuration value: sql:source.sql).
Use the sql helper to construct the queries.
For more information, refer to the JavaScript Tagged Template Literal Grammar Extensions documentation of the language-babel package.
The vscode-sql-lit extension provides syntax highlighting for VS Code.
Running Slonik tests requires having a local PostgreSQL instance.
The easiest way to set up a temporary instance for testing is using Docker, e.g.
docker run --rm -it -e POSTGRES_HOST_AUTH_METHOD=trust -p 5432:5432 postgres
Author: gajus
Source Code: https://github.com/gajus/slonik
License: View license
1632537859
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are:
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb
from NPM:
$ npm install nbb -g
Omit -g
for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs term-size zx
Create a script which uses the NPM libraries:
(ns script
(:require ["csv-parse/lib/sync$default" :as csv-parse]
["fs" :as fs]
["path" :as path]
["shelljs$default" :as sh]
["term-size$default" :as term-size]
["zx$default" :as zx]
["zx$fs" :as zxfs]
[nbb.core :refer [*file*]]))
(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first class support for macros: you can define them right inside your .cljs
file, like you are used to from JVM Clojure. Consider the plet
macro to make working with promises more palatable:
(defmacro plet
[bindings & body]
(let [binding-pairs (reverse (partition 2 bindings))
body (cons 'do body)]
(reduce (fn [body [sym expr]]
(let [expr (list '.resolve 'js/Promise expr)]
(list '.then expr (list 'clojure.core/fn (vector sym)
body))))
body
binding-pairs)))
Using this macro we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
(.then (fn [browser]
(-> (.newPage browser)
(.then (fn [page]
(-> (.goto page "https://clojure.org")
(.then #(.screenshot page #js{:path "screenshot.png"}))
(.catch #(js/console.log %))
(.then #(.close browser)))))))))
Using plet
this becomes:
(plet [browser (.launch puppeteer)
page (.newPage browser)
_ (.goto page "https://clojure.org")
_ (-> (.screenshot page #js{:path "screenshot.png"})
(.catch #(js/console.log %)))]
(.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa which is a library to deal with promises. The above plet
macro is similar to promesa.core/let
.
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs
files from local paths or dependencies, you can use the --classpath
argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs
relative to your current dir, then you can load it via (:require [foo.bar :as fb])
. Note that nbb
uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar
in the namespace name becomes foo_bar
in the directory name.
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath
argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb
only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar
files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file*
or on the metadata of vars:
(ns foo
(:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core
which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs
:
(ns ink-demo
(:require ["ink" :refer [render Text]]
[reagent.core :as r]))
(defonce state (r/atom 0))
(doseq [n (range 1 11)]
(js/setTimeout #(swap! state inc) (* n 500)))
(defn hello []
[:> Text {:color "green"} "Hello, world! " @state])
(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core
namespace is included with the let
and do!
macros. An example:
(ns prom
(:require [promesa.core :as p]))
(defn sleep [ms]
(js/Promise.
(fn [resolve _]
(js/setTimeout resolve ms))))
(defn do-stuff
[]
(p/do!
(println "Doing stuff which takes a while")
(sleep 1000)
1))
(p/let [a (do-stuff)
b (inc a)
c (do-stuff)
d (+ b c)]
(prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
(:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
destructuring using :syms
property access using .-x notation. In nbb, you must use keywords.
See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks
for more project-related tasks.
Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript
1616671994
If you look at the backend technology used by today's most popular apps, there is one thing you will find common among them, and that is the use of a Node.js framework. Yes, the Node.js framework is that effective and successful.
If you wish to have a strong backend for efficient app performance, then use Node.js on the backend.
WebClues Infotech offers professionals at different levels of experience and expertise for your app development needs. So hire a dedicated Node.js developer from WebClues Infotech who matches your experience and expertise requirements.
So what are you waiting for? Get your app developed with strong performance parameters by WebClues Infotech.
For inquiry click here: https://www.webcluesinfotech.com/hire-nodejs-developer/
Book Free Interview: https://bit.ly/3dDShFg
#hire dedicated node.js developers #hire node.js developers #hire top dedicated node.js developers #hire node.js developers in usa & india #hire node js development company #hire the best node.js developers & programmers
1622719015
Front-end web development has been dominated by JavaScript for many years. Google, Facebook, Wikipedia, and most other online pages use JS for client-side functionality. More recently, JavaScript also made the shift to cross-platform mobile development as the main technology behind React Native, NativeScript, Apache Cordova, and other hybrid tools.
Over the last few years, Node.js has moved into backend development as well. Developers want to use the same tech stack for the whole web project without learning another language for server-side development. Node.js is a tool that adapts JS functionality and syntax to the backend.
Node.js isn't a language, a library, or a framework. It's a runtime environment: ordinarily, JavaScript needs a browser to run, but Node.js provides the conditions for JS to run outside of the browser. It's built on the V8 JavaScript engine, which can run in Chrome, other browsers, or standalone.
The purpose of V8 is to compile browser-oriented JS code into machine code, so JS becomes a general-purpose language that servers can understand. This is one of the advantages of using Node.js in web application development: it extends the usefulness of JavaScript, allowing developers to integrate the language with APIs, other languages, and external libraries.
Lately, organizations have been actively switching their backend tech stacks to Node.js. LinkedIn picked Node.js over Ruby on Rails because it handled a growing workload better and reduced the number of servers several times over. PayPal and Netflix did something similar, except their goal was to move their architecture to microservices. Let's look into the reasons to choose Node.js for web application development, relevant both when building in-house and when planning to hire Node.js developers.
The first thing that makes Node.js a go-to environment for web development is its JavaScript legacy. JS is the most popular language right now, with a huge number of free tools and an active community. Node.js, thanks to its connection with JS, quickly rose in popularity: it now has more than 368 million downloads and a wealth of free tools in the package registry.
Along with popularity, Node.js also inherited the main JS benefits:
In addition, it's part of the popular MEAN tech stack (the combination of MongoDB, Express.js, Angular, and Node.js: four tools that handle all the vital parts of web application development).
This is perhaps the clearest benefit of Node.js web application development. JavaScript is a must-have for web development. Whether you build a multi-page or a single-page application, you need to know JS well. If you are already comfortable with JavaScript, learning Node.js won't be a problem: the syntax, basic functionality, and main principles are all similar.
If you have JS developers on your team, it will be easier for them to learn JS-based Node than a completely new language. Moreover, the front-end and back-end codebases will be very similar, easy to read, and easy to maintain, because they are both JS-based.
There's another reason why Node.js became popular so quickly: the environment is well suited to microservice development (splitting monolithic functionality into dozens or hundreds of smaller services).
Microservices need to communicate with each other quickly, and Node.js is among the fastest tools for data processing. One of the fundamental Node.js benefits for software development is its non-blocking I/O model.
Node.js processes several requests at once without waiting for the first one to finish. Many microservices can send messages to each other, and they will be received and answered simultaneously, as the sketch below illustrates.
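To make the non-blocking model concrete, here is a minimal TypeScript sketch with simulated I/O (the handlers and delays are hypothetical, and an ES module context is assumed for top-level await):
import { setTimeout as sleep } from 'node:timers/promises';

const fetchOrders = async () => {
  await sleep(1000); // simulated slow database call
  return ['order-1', 'order-2'];
};

const fetchUsers = async () => {
  await sleep(1000); // simulated slow API call
  return ['user-1'];
};

// Both operations overlap, so the total wait is roughly 1 second, not 2.
const [orders, users] = await Promise.all([fetchOrders(), fetchUsers()]);
console.log(orders, users);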
Node.js was built with scalability in mind; its name actually says it. The environment allows multiple nodes to run simultaneously and communicate with each other. Here's why Node.js scalability is better than that of other web backend solutions.
Node.js has a module that is responsible for load balancing across every running CPU core: the cluster module. This is one of many Node.js module benefits: you can run multiple node processes at once, and the environment will automatically balance the workload, as in the sketch below.
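A minimal sketch of this pattern with Node's built-in cluster module (the port number is arbitrary):
import cluster from 'node:cluster';
import { createServer } from 'node:http';
import { cpus } from 'node:os';

if (cluster.isPrimary) {
  // Fork one worker per CPU core; incoming connections are balanced among them.
  for (let i = 0; i < cpus().length; i += 1) {
    cluster.fork();
  }
} else {
  // Each worker runs its own server on the same shared port.
  createServer((request, response) => {
    response.end(`handled by worker ${process.pid}\n`);
  }).listen(8080);
}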
Node.js allows partitioning: you can split your application into several instances, showing different versions of the application to different clients based on their age, interests, location, language, and so on. This increases personalization and reduces the workload. Node achieves this with child processes, operations that quickly communicate with each other and share the same origin.
What's more, Node's non-blocking request processing contributes to this speed, letting applications handle very large numbers of requests.
Many developers consider asynchrony to be both one of the drawbacks and one of the benefits of Node.js web application development. In Node, whenever a function is executed, the code automatically sends a callback. As the number of functions grows, so does the number of callbacks, and you end up in a situation known as callback hell.
However, Node.js offers a way out. You can use frameworks that schedule functions and sort out callbacks. Frameworks associate similar functions automatically, so you can find the element you need via search or in a folder, and there is no need to dig through callbacks.
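One common escape from callback hell in modern Node.js is to wrap callback APIs in promises and use async/await. A generic sketch (the config.json file and its dataPath field are hypothetical, not from the article):
import { readFile } from 'node:fs';
import { promisify } from 'node:util';

const readFileAsync = promisify(readFile);

const main = async () => {
  // Steps read top-to-bottom instead of nesting one callback per call.
  const config = JSON.parse(await readFileAsync('config.json', 'utf8'));
  const data = await readFileAsync(config.dataPath, 'utf8');
  console.log(data.length);
};

void main();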
So, these are some of the top benefits of Node.js in web application development, and how Node.js contributes to the field.
I hope you are now aware of why Node.js is important for your web project. If you are looking to hire a Node.js development company in India, then I would suggest you also ask for a short consultation when you call.
Good Luck!
#node.js development company in india #node js development company #hire node js developers #hire node.js developers in india #node.js development services #node.js development
1616839211
Top organizations and start-ups hire Node.js developers from SISGAIN for their strategic software development projects in Illinois, USA. If you are searching for a first-rate technology to build a real-time Node.js web application or a module, Node.js applications are the most appropriate option to pick. As a leading Node.js development company, we leverage our profound knowledge of its components and deliver solutions that bring noteworthy business results. For more information, email us at hello@sisgain.com
#node.js development services #hire node.js developers #node.js web application development #node.js development company #node js application
1625114985
Node.js is a prominent tech trend in the space of web and mobile application development. It has been proven very efficient and useful for a variety of application development. Thus, all business owners are eager to leverage this technology for creating their applications.
Are you striving to develop an application using Node.js, but can't decide which company to hire for NodeJS app development? Well, don't stress over it: the following list of NodeJS app development companies is going to help you find the best partner.
Let’s take a glance at top NodeJS application development companies to hire developers in 2021 for developing a mind-blowing application solution.
Before enlisting companies, I would like to say that every company has a foundation on which it thrives. Its end goals, qualities, and excellence define its competence. Thus, I prepared this list by considering the following aspects:
I believe this list will help you out in choosing the best NodeJS service provider company. So, now let’s explore the top NodeJS developer companies to choose from in 2021.
#1. JSGuru
JSGuru is a top-rated NodeJS app development company with an innovative team of dedicated NodeJS developers engaged in catering best-class UI/UX design, software products, and AWS professional services.
It is a team of some of the most talented developers to hire for all types of innovative solution development, including social media, dating, enterprise, and business-oriented solutions. The company has worked for years with a number of startups and launched a variety of products by collaborating with big-name corporations like T-Systems.
If you want to hire NodeJS developers to secure an outstanding application, I would definitely suggest them. They serve in the area of eLearning, FinTech, eCommerce, Telecommunications, Mobile Device Management, and more.
Ratings: 4.9/5.0
Founded: 2006
Headquarters: Banja Luka, Bosnia, and Herzegovina
Price: Starting from $50/hour
Visit Website - https://www.valuecoders.com/blog/technology-and-apps/top-node-js-app-development-companies
#node js developer #hire node js developer #hiring node js developers #node js development company #node.js development company #node js development services