1550583256
Angular has announced its latest version, Angular 7. Many companies and developers still work comfortably with Angular 4 or Angular 6. Though each version brings its own fixes and improvements, each also has its own advantages. We will discuss the advantages of both Angular versions one by one.
#angular
1648803600
I started this project because I wanted to publish the code I wrote over the last two years while trying to get enhanced checking into PostgreSQL upstream. That effort was not fully successful - integration into upstream requires some larger plpgsql refactoring, which probably will not happen in the next few years (as of Dec 2013). But the code is fully functional and can be used in production (and it is used in production). So I created this extension to make it available to all plpgsql developers.
If you like it and would like to join the development of this extension, register at the postgresql extension hacking google group.
Features
Ideas, patches, and bug reports are welcome.
plpgsql_check is the next generation of plpgsql_lint. It allows you to check source code by explicitly calling plpgsql_check_function.
PostgreSQL 10, 11, 12, 13 and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by a validator for semantic errors. These errors can be found by plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is a bug - table t1 is missing column "c"
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6         RAISE NOTICE '%', r.c; -- there is a bug - table t1 is missing column "c"
7 END LOOP;
8 END;
9 $function$
The function plpgsql_check_function() supports three output formats: text, json, or xml.
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
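The json format can be requested the same way through the format parameter; a minimal sketch (output omitted):
select * from plpgsql_check_function('f1()', format := 'json');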
You can set the level of warnings via the function's parameters:
- the checked function itself can be specified as a regprocedure value - 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is using just a name, when the function's name is unique - like 'fx'. When the name is not unique or the function doesn't exist, an error is raised.
- relid DEFAULT 0 - oid of the relation assigned to a trigger function. It is necessary for checking any trigger function.
- fatal_errors boolean DEFAULT true - stop on the first error
- other_warnings boolean DEFAULT true - show warnings like a different number of attributes on the left and right side of an assignment, a variable that overlaps a function parameter, unused variables, unwanted casting, ..
- extra_warnings boolean DEFAULT true - show warnings like a missing RETURN, shadowed variables, dead code, never read (unused) function parameters, unmodified variables, modified auto variables, ..
- performance_warnings boolean DEFAULT false - performance-related warnings like a declared type with a type modifier, casting, implicit casts in a WHERE clause (which can be the reason an index is not used), ..
- security_warnings boolean DEFAULT false - security-related checks like SQL injection vulnerability detection
- anyelementtype regtype DEFAULT 'int' - the real type used instead of the anyelement type
- anyenumtype regtype DEFAULT '-' - the real type used instead of the anyenum type
- anyrangetype regtype DEFAULT 'int4range' - the real type used instead of the anyrange type
- anycompatibletype DEFAULT 'int' - the real type used instead of the anycompatible type
- anycompatiblerangetype DEFAULT 'int4range' - the real type used instead of the anycompatible range type
- without_warnings DEFAULT false - disable all warnings
- all_warnings DEFAULT false - enable all warnings
- newtable DEFAULT NULL, oldtable DEFAULT NULL - the names of the NEW or OLD transition tables. These parameters are required when transition tables are used.
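For example, a call that combines several of these parameters might look like this (a sketch; the function name fx is hypothetical):
select * from plpgsql_check_function('fx()',
                                     fatal_errors := false,
                                     performance_warnings := true,
                                     security_warnings := true);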
When you want to check a trigger, you have to provide the relation that will be used together with the trigger function:
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transition tables you can set the oldtable or newtable parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use plpgsql_check_function to mass-check functions and triggers. Try the following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
In passive mode, functions are checked when they are started - the plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
The default mode is by_function, which means that the enhanced check is done only in active mode - by plpgsql_check_function. fresh_start means a cold start.
You can enable passive mode by
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
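Alternatively, the same behaviour can be configured in postgresql.conf so that every new session runs with passive checking enabled - a sketch, assuming the library is preloaded via the standard session_preload_libraries setting:
# postgresql.conf (sketch)
session_preload_libraries = 'plpgsql_check'
plpgsql_check.mode = 'every_start'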
Limits
plpgsql_check should find almost all errors in truly static code. When a developer uses PL/pgSQL's dynamic features, like dynamic SQL or the record data type, false positives are possible. These should be rare in well-written code - in that case the affected function should be redesigned, or plpgsql_check should be disabled for that function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
Using plpgsql_check adds a small overhead (when passive mode is enabled), so you should use it only in development or pre-production environments.
This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries, so plpgsql_check cannot set the correct type for record variables and cannot check dependent SQL statements and expressions.
When the type of a record variable is not known, you can assign it explicitly with the pragma type:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. This check can also raise false alarms - typically when a variable is sanitized by another command or when the value is of some composite type.
plpgsql_check cannot detect the structure of referenced cursors. A cursor reference in PL/pgSQL is implemented as the name of a global cursor. At check time the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis. PL/pgSQL cannot set the correct type for record variables and cannot check dependent SQL statements and expressions. The solution is the same as for dynamic SQL: don't use a record variable as a target when you use the refcursor type, or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case a record type should not be used (use a known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created at PL/pgSQL function runtime. For this use case it is necessary to create a fake temp table or disable plpgsql_check for the function.
In practice, temp tables are stored in their own (per-session) schema with higher priority than persistent tables, so you can safely use the following trick:
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=#
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick partially emulates GLOBAL TEMP tables and allows static validation. Another possibility is using the template foreign data wrapper (https://github.com/okbob/template_fdw).
You can also use the pragma table and create an ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
The function plpgsql_show_dependency_tb can show all functions, operators and relations used inside the processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(4 rows)
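The dependency report can also be inverted - for example, to find every non-trigger plpgsql function that touches a given table. This is a sketch that assumes plpgsql_show_dependency_tb also accepts a function oid (as plpgsql_check_function does in the mass-check queries above); the table name mytable is hypothetical:
-- which non-trigger plpgsql functions reference the relation "mytable"?
SELECT DISTINCT p.oid::regprocedure AS function
  FROM pg_proc p
  JOIN pg_language l ON l.oid = p.prolang
  CROSS JOIN LATERAL plpgsql_show_dependency_tb(p.oid) d
 WHERE l.lanname = 'plpgsql'
   AND p.prorettype <> 'trigger'::regtype
   AND d.type = 'RELATION'
   AND d.name = 'mytable';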
Profiler
plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory, depending on the shared_preload_libraries config. When plpgsql_check is initialized by shared_preload_libraries, it can allocate shared memory, and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.
Due to dependencies, shared_preload_libraries should contain plpgsql first:
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
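A minimal configuration sketch for postgresql.conf (changing shared_preload_libraries requires a server restart):
# postgresql.conf (sketch) - plpgsql must come before plpgsql_check
shared_preload_libraries = 'plpgsql,plpgsql_check'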
The profiler is active when the GUC plpgsql_check.profiler is on. The profiler doesn't require shared memory, but without it the profile is limited to the active session.
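A minimal sketch of enabling it for the current session:
set plpgsql_check.profiler to on;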
When plpgsql_check is initialized by shared_preload_libraries, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks. This defines the maximum number of statement chunks that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundreds of thousands of plpgsql statements, and will consume about 24MB of memory. If your project doesn't require that many chunks, you can set this parameter to a smaller number to decrease memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.
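For example, a smaller limit could be configured like this (a sketch; the value 500 is arbitrary and takes effect only after a restart):
# postgresql.conf (sketch) - lower the profiler's shared memory footprint
plpgsql_check.profiler_max_shared_chunks = 500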
The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements (or another similar third-party extension) to be installed. There are some limitations to query identifier retrieval.
Attention: An update of shared profiles can decrease performance on servers under heavy load.
The profile can be displayed by the function plpgsql_profiler_function_tb:
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│ 1 │ │ │
│ 2 │ │ declare result int = 0; │
│ 3 │ 0.075 │ begin │
│ 4 │ 0.202 │ for i in 1..$1 loop │
│ 5 │ 0.005 │ select result + i into result; select result + i into result; │
│ 6 │ │ end loop; │
│ 7 │ 0 │ return result; │
│ 8 │ │ end; │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)
The per-statement profile (not per-line) can be displayed by the function plpgsql_profiler_function_statements_tb:
CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$
postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│ 0 │ ∅ │ ∅ │ 2 │ 0 │ statement block │
│ 1 │ 0 │ body │ 3 │ 0 │ IF │
│ 2 │ 1 │ then body │ 4 │ 0 │ RAISE │
│ 3 │ 1 │ then body │ 5 │ 0 │ RETURN │
│ 4 │ 1 │ else body │ 7 │ 0 │ RAISE │
│ 5 │ 1 │ else body │ 8 │ 0 │ RETURN │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)
All stored profiles can be displayed by calling the function plpgsql_profiler_functions_all:
postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │ 1 │ 0.01 │ 0.01 │ 0.00 │ 0.01 │ 0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)
There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all() and plpgsql_profiler_reset(regprocedure).
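Minimal usage sketches (the signature fx(int) is hypothetical):
select plpgsql_profiler_reset_all();
select plpgsql_profiler_reset('fx(int)'::regprocedure);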
For coverage metrics, plpgsql_check provides two functions:
plpgsql_coverage_statements(name)
plpgsql_coverage_branches(name)
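A usage sketch, again with a hypothetical function name fx:
select plpgsql_coverage_statements('fx');
select plpgsql_coverage_branches('fx');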
There is another very good PL/pgSQL profiler - https://bitbucket.org/openscg/plprofiler
My extension is designed to be simple to use and practical. Nothing more, nothing less.
plprofiler is more complex. It builds call graphs, and from these graphs it can create flame graphs of execution times.
Both extensions can be used together with PostgreSQL's built-in function tracking:
set track_functions to 'pl';
...
select * from pg_stat_user_functions;
Tracer
plpgsql_check also provides a tracing facility - in this mode you can see notices at the start and end of functions (terse and default verbosity) and at the start and end of statements (verbose verbosity). For default and verbose verbosity, the content of function arguments is displayed. The content of related variables is displayed when the verbosity is verbose.
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4 ->> start of function fx(integer) (Oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4 <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)
The number after # is an execution frame counter (this number is related to the depth of the error context stack). It allows pairing the start and end of a function.
Tracing is enabled by setting plpgsql_check.tracer to on. Attention - enabling this behaviour has a significant negative impact on performance (unlike the profiler). You can set the output level used by the tracer with plpgsql_check.tracer_errlevel (the default is notice). The length of the output content is limited by the plpgsql_check.tracer_variable_max_length configuration variable.
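A sketch of turning the tracer on for the current session (plpgsql_check.enable_tracer, described below, may have to be allowed by a superuser first):
set plpgsql_check.enable_tracer to on;
set plpgsql_check.tracer to on;
set plpgsql_check.tracer_verbosity to verbose;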
In terse verbosity mode the output is reduced:
postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block
In verbose mode the output is extended with statement details:
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of block inline_code_block (oid=0)
NOTICE: #0.1 1 --> start of PERFORM
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1 1 --> start of PERFORM
NOTICE: #2.1 "a" => '10'
NOTICE: #4 ->> start of function fx(integer) (oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4.1 6 --> start of assignment
NOTICE: #4.1 "a" => '10', "b" => '20'
NOTICE: #4.1 <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1 "res" => '130'
NOTICE: #4.2 7 --> start of RETURN
NOTICE: #4.2 "res" => '130'
NOTICE: #4.2 <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4 <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1 <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1 <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0 <<- end of block (elapsed time=1.286 ms)
A special feature of the tracer is tracing of the ASSERT statement when plpgsql_check.trace_assert is on. When plpgsql_check.trace_assert_verbosity is DEFAULT, all of the function's or procedure's variables are displayed when the assert expression is false. When this configuration is VERBOSE, all variables from all plpgsql frames are displayed. This behaviour is independent of the plpgsql.check_asserts value, so it can be used even when assertions are disabled in the plpgsql runtime.
postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR: assertion failed
CONTEXT: PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM
postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO
The tracer prints the content of variables and function arguments. For security definer functions, this content can hold security-sensitive data. This is the reason why the tracer is disabled by default and can be enabled only with superuser rights via plpgsql_check.enable_tracer.
Pragma
You can configure plpgsql_check behaviour inside a checked function with a "pragma" function. This is analogous to the PRAGMA feature of the PL/SQL or Ada languages. PL/pgSQL doesn't support PRAGMA, but plpgsql_check detects a function named plpgsql_check_pragma and reads options from the parameters of this function. These plpgsql_check options are valid until the end of the group of statements.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
...
-- for following statements disable check
PERFORM plpgsql_check_pragma('disable:check');
...
-- enable check again
PERFORM plpgsql_check_pragma('enable:check');
...
END;
$$ LANGUAGE plpgsql;
The function plpgsql_check_pragma is an immutable function that returns one. It is defined by the plpgsql_check extension. You can declare an alternative plpgsql_check_pragma function (for example, so that functions using pragmas can still run on servers where plpgsql_check is not installed) like:
CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args text[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;
Using the pragma function in the declaration part of the top block sets options at the function level too.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
aux int := plpgsql_check_pragma('disable:extra_warnings');
...
A shorter syntax for pragmas is supported too:
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
PERFORM 'PRAGMA:TYPE:r (a int, b int)';
PERFORM 'PRAGMA:TABLE: x (like pg_class)';
...
Supported pragmas:
- echo:str - print a string (for testing)
- status:check, status:tracer, status:other_warnings, status:performance_warnings, status:extra_warnings, status:security_warnings
- enable:check, enable:tracer, enable:other_warnings, enable:performance_warnings, enable:extra_warnings, enable:security_warnings
- disable:check, disable:tracer, disable:other_warnings, disable:performance_warnings, disable:extra_warnings, disable:security_warnings
- type:varname typename or type:varname (fieldname type, ...) - set the type of a record-type variable
- table: name (column_name type, ...) or table: name (like tablename) - create an ephemeral table
Pragmas enable:tracer and disable:tracer are active for Postgres 12 and higher.
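A sketch of the echo and status pragmas inside a checked function body:
-- inside a plpgsql function body (sketch)
PERFORM plpgsql_check_pragma('echo:some debugging text');
PERFORM plpgsql_check_pragma('status:check');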
Compilation
You need a development environment for PostgreSQL extensions:
make clean
make install
result:
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok
=====================
All 3 tests passed.
=====================
Sometimes a successful compilation requires the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):
sudo apt install libicu-dev
You can use the precompiled DLL libraries from http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html or compile them yourself. Then:
- copy plpgsql_check.dll to PostgreSQL\14\lib
- copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension
Compilation against PostgreSQL 10 requires libICU!
Licence
Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
If you like it, send a postcard to address
Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic
I welcome any questions, comments, bug reports, and patches at pavel.stehule@gmail.com
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
1648900800
I founded this project, because I wanted to publish the code I wrote in the last two years, when I tried to write enhanced checking for PostgreSQL upstream. It was not fully successful - integration into upstream requires some larger plpgsql refactoring - probably it will not be done in next years (now is Dec 2013). But written code is fully functional and can be used in production (and it is used in production). So, I created this extension to be available for all plpgsql developers.
If you like it and if you would to join to development of this extension, register yourself to postgresql extension hacking google group.
Features
I invite any ideas, patches, bugreports.
plpgsql_check is next generation of plpgsql_lint. It allows to check source code by explicit call plpgsql_check_function.
PostgreSQL PostgreSQL 10, 11, 12, 13 and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by validator for semantic errors. These errors can be found by plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6 RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7 END LOOP;
8 END;
9 $function$
Function plpgsql_check_function() has three possible formats: text, json or xml
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
You can set level of warnings via function's parameters:
'fx()'::regprocedure
or 16799::regprocedure
. Possible alternative is using a name only, when function's name is unique - like 'fx'
. When the name is not unique or the function doesn't exists it raises a error.relid DEFAULT 0
- oid of relation assigned with trigger function. It is necessary for check of any trigger function.
fatal_errors boolean DEFAULT true
- stop on first error
other_warnings boolean DEFAULT true
- show warnings like different attributes number in assignmenet on left and right side, variable overlaps function's parameter, unused variables, unwanted casting, ..
extra_warnings boolean DEFAULT true
- show warnings like missing RETURN
, shadowed variables, dead code, never read (unused) function's parameter, unmodified variables, modified auto variables, ..
performance_warnings boolean DEFAULT false
- performance related warnings like declared type with type modificator, casting, implicit casts in where clause (can be reason why index is not used), ..
security_warnings boolean DEFAULT false
- security related checks like SQL injection vulnerability detection
anyelementtype regtype DEFAULT 'int'
- a real type used instead anyelement type
anyenumtype regtype DEFAULT '-'
- a real type used instead anyenum type
anyrangetype regtype DEFAULT 'int4range'
- a real type used instead anyrange type
anycompatibletype DEFAULT 'int'
- a real type used instead anycompatible type
anycompatiblerangetype DEFAULT 'int4range'
- a real type used instead anycompatible range type
without_warnings DEFAULT false
- disable all warnings
all_warnings DEFAULT false
- enable all warnings
newtable DEFAULT NULL
, oldtable DEFAULT NULL
- the names of NEW or OLD transitive tables. These parameters are required when transitive tables are used.
When you want to check any trigger, you have to enter a relation that will be used together with trigger function
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transitive tables you can set a oldtable
or newtable
parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use the plpgsql_check_function for mass check functions and mass check triggers. Please, test following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
Functions should be checked on start - plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
Default mode is by_function, that means that the enhanced check is done only in active mode - by plpgsql_check_function. fresh_start
means cold start.
You can enable passive mode by
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
Limits
plpgsql_check should find almost all errors on really static code. When developer use some PLpgSQL's dynamic features like dynamic SQL or record data type, then false positives are possible. These should be rare - in well written code - and then the affected function should be redesigned or plpgsql_check should be disabled for this function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
A usage of plpgsql_check adds a small overhead (in enabled passive mode) and you should use it only in develop or preprod environments.
This module doesn't check queries that are assembled in runtime. It is not possible to identify results of dynamic queries - so plpgsql_check cannot to set correct type to record variables and cannot to check a dependent SQLs and expressions.
When type of record's variable is not know, you can assign it explicitly with pragma type
:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for security audit! Some issues should not be detected. This check can raise false alarms too - probably when variable is sanitized by other command or when value is of some compose type.
plpgsql_check should not to detect structure of referenced cursors. A reference on cursor in PLpgSQL is implemented as name of global cursor. In check time, the name is not known (not in all possibilities), and global cursor doesn't exist. It is significant break for any static analyse. PLpgSQL cannot to set correct type for record variables and cannot to check a dependent SQLs and expressions. A solution is same like dynamic SQL. Don't use record variable as target when you use refcursor type or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case a record type should not be used (use known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created in plpgsql's function runtime. For this use case it is necessary to create a fake temp table or disable plpgsql_check for this function.
In reality temp tables are stored in own (per user) schema with higher priority than persistent tables. So you can do (with following trick safetly):
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=#
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick emulates GLOBAL TEMP tables partially and it allows a statical validation. Other possibility is using a [template foreign data wrapper] (https://github.com/okbob/template_fdw)
You can use pragma table
and create ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
A function plpgsql_show_dependency_tb can show all functions, operators and relations used inside processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(4 rows)
Profiler
The plpgsql_check contains simple profiler of plpgsql functions and procedures. It can work with/without a access to shared memory. It depends on shared_preload_libraries
config. When plpgsql_check was initialized by shared_preload_libraries
, then it can allocate shared memory, and function's profiles are stored there. When plpgsql_check cannot to allocate shared momory, the profile is stored in session memory.
Due dependencies, shared_preload_libraries
should to contains plpgsql
first
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
The profiler is active when GUC plpgsql_check.profiler
is on. The profiler doesn't require shared memory, but if there are not shared memory, then the profile is limmitted just to active session.
When plpgsql_check is initialized by shared_preload_libraries
, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks
. This defines the maximum number of statements chunk that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundred of thousands of statements in plpgsql, and will consume about 24MB of memory. If your project doesn't require that much number of chunks, you can set this parameter to a smaller number in order to decrease the memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.
The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements, or another similar third-party extension), to be installed. There are some limitations to the query identifier retrieval:
Attention: A update of shared profiles can decrease performance on servers under higher load.
The profile can be displayed by function plpgsql_profiler_function_tb
:
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│ 1 │ │ │
│ 2 │ │ declare result int = 0; │
│ 3 │ 0.075 │ begin │
│ 4 │ 0.202 │ for i in 1..$1 loop │
│ 5 │ 0.005 │ select result + i into result; select result + i into result; │
│ 6 │ │ end loop; │
│ 7 │ 0 │ return result; │
│ 8 │ │ end; │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)
The profile per statements (not per line) can be displayed by function plpgsql_profiler_function_statements_tb:
CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$
postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│ 0 │ ∅ │ ∅ │ 2 │ 0 │ statement block │
│ 1 │ 0 │ body │ 3 │ 0 │ IF │
│ 2 │ 1 │ then body │ 4 │ 0 │ RAISE │
│ 3 │ 1 │ then body │ 5 │ 0 │ RETURN │
│ 4 │ 1 │ else body │ 7 │ 0 │ RAISE │
│ 5 │ 1 │ else body │ 8 │ 0 │ RETURN │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)
All stored profiles can be displayed by calling function plpgsql_profiler_functions_all
:
postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │ 1 │ 0.01 │ 0.01 │ 0.00 │ 0.01 │ 0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)
There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all()
and plpgsql_profiler_reset(regprocedure)
.
plpgsql_check provides two functions:
plpgsql_coverage_statements(name)
plpgsql_coverage_branches(name)
There is another very good PLpgSQL profiler - https://bitbucket.org/openscg/plprofiler
My extension is designed to be simple for use and practical. Nothing more or less.
plprofiler is more complex. It build call graphs and from this graph it can creates flame graph of execution times.
Both extensions can be used together with buildin PostgreSQL's feature - tracking functions.
set track_functions to 'pl';
...
select * from pg_stat_user_functions;
Tracer
plpgsql_check provides a tracing possibility - in this mode you can see notices on start or end functions (terse and default verbosity) and start or end statements (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables are displayed when verbosity is verbose.
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4 ->> start of function fx(integer) (Oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4 <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)
The number after #
is a execution frame counter (this number is related to deep of error context stack). It allows to pair start end and of function.
Tracing is enabled by setting plpgsql_check.tracer
to on
. Attention - enabling this behaviour has significant negative impact on performance (unlike the profiler). You can set a level for output used by tracer plpgsql_check.tracer_errlevel
(default is notice
). The output content is limited by length specified by plpgsql_check.tracer_variable_max_length
configuration variable.
In terse verbose mode the output is reduced:
postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block
In verbose mode the output is extended about statement details:
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of block inline_code_block (oid=0)
NOTICE: #0.1 1 --> start of PERFORM
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1 1 --> start of PERFORM
NOTICE: #2.1 "a" => '10'
NOTICE: #4 ->> start of function fx(integer) (oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4.1 6 --> start of assignment
NOTICE: #4.1 "a" => '10', "b" => '20'
NOTICE: #4.1 <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1 "res" => '130'
NOTICE: #4.2 7 --> start of RETURN
NOTICE: #4.2 "res" => '130'
NOTICE: #4.2 <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4 <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1 <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1 <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0 <<- end of block (elapsed time=1.286 ms)
Special feature of tracer is tracing of ASSERT
statement when plpgsql_check.trace_assert
is on
. When plpgsql_check.trace_assert_verbosity
is DEFAULT
, then all function's or procedure's variables are displayed when assert expression is false. When this configuration is VERBOSE
then all variables from all plpgsql frames are displayed. This behaviour is independent on plpgsql.check_asserts
value. It can be used, although the assertions are disabled in plpgsql runtime.
postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR: assertion failed
CONTEXT: PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM
postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO
Tracer prints content of variables or function arguments. For security definer function, this content can hold security sensitive data. This is reason why tracer is disabled by default and should be enabled only with super user rights plpgsql_check.enable_tracer
.
Pragma
You can configure plpgsql_check behave inside checked function with "pragma" function. This is a analogy of PL/SQL or ADA language of PRAGMA feature. PLpgSQL doesn't support PRAGMA, but plpgsql_check detects function named plpgsql_check_pragma
and get options from parameters of this function. These plpgsql_check options are valid to end of group of statements.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
...
-- for following statements disable check
PERFORM plpgsql_check_pragma('disable:check');
...
-- enable check again
PERFORM plpgsql_check_pragma('enable:check');
...
END;
$$ LANGUAGE plpgsql;
The function plpgsql_check_pragma
is immutable function that returns one. It is defined by plpgsql_check
extension. You can declare alternative plpgsql_check_pragma
function like:
CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;
Using pragma function in declaration part of top block sets options on function level too.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
aux int := plpgsql_check_pragma('disable:extra_warnings');
...
Shorter syntax for pragma is supported too:
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
PERFORM 'PRAGMA:TYPE:r (a int, b int)';
PERFORM 'PRAGMA:TABLE: x (like pg_class)';
...
echo:str
- print string (for testing)
status:check
,status:tracer
, status:other_warnings
, status:performance_warnings
, status:extra_warnings
,status:security_warnings
enable:check
,enable:tracer
, enable:other_warnings
, enable:performance_warnings
, enable:extra_warnings
,enable:security_warnings
disable:check
,disable:tracer
, disable:other_warnings
, disable:performance_warnings
, disable:extra_warnings
,disable:security_warnings
type:varname typename
or type:varname (fieldname type, ...)
- set type to variable of record type
table: name (column_name type, ...)
or table: name (like tablename)
- create ephereal table
Pragmas enable:tracer
and disable:tracer
are active for Postgres 12 and higher
Compilation
You need a development environment for PostgreSQL extensions:
make clean
make install
result:
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok
=====================
All 3 tests passed.
=====================
Sometimes a successful compilation requires the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):
sudo apt install libicu-dev
You can use the precompiled DLL libraries (http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html) or compile them yourself. Copy plpgsql_check.dll to PostgreSQL\14\lib, and copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension. Compilation against PostgreSQL 10 requires libICU!
Licence
Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
If you like it, send a postcard to address
Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic
I invite any questions, comments, bug reports, patches on mail address pavel.stehule@gmail.com
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: View license
1653475560
msgpack.php
A pure PHP implementation of the MessagePack serialization format.
The recommended way to install the library is through Composer:
composer require rybakit/msgpack
To pack values you can either use an instance of a Packer
:
$packer = new Packer();
$packed = $packer->pack($value);
or call a static method on the MessagePack
class:
$packed = MessagePack::pack($value);
In the examples above, the method pack
automatically packs a value depending on its type. However, not all PHP types can be uniquely translated to MessagePack types. For example, the MessagePack format defines map
and array
types, which are represented by a single array
type in PHP. By default, the packer will pack a PHP array as a MessagePack array if it has sequential numeric keys, starting from 0
and as a MessagePack map otherwise:
$mpArr1 = $packer->pack([1, 2]); // MP array [1, 2]
$mpArr2 = $packer->pack([0 => 1, 1 => 2]); // MP array [1, 2]
$mpMap1 = $packer->pack([0 => 1, 2 => 3]); // MP map {0: 1, 2: 3}
$mpMap2 = $packer->pack([1 => 2, 2 => 3]); // MP map {1: 2, 2: 3}
$mpMap3 = $packer->pack(['a' => 1, 'b' => 2]); // MP map {a: 1, b: 2}
However, sometimes you need to pack a sequential array as a MessagePack map. To do this, use the packMap
method:
$mpMap = $packer->packMap([1, 2]); // {0: 1, 1: 2}
Here is a list of type-specific packing methods:
$packer->packNil(); // MP nil
$packer->packBool(true); // MP bool
$packer->packInt(42); // MP int
$packer->packFloat(M_PI); // MP float (32 or 64)
$packer->packFloat32(M_PI); // MP float 32
$packer->packFloat64(M_PI); // MP float 64
$packer->packStr('foo'); // MP str
$packer->packBin("\x80"); // MP bin
$packer->packArray([1, 2]); // MP array
$packer->packMap(['a' => 1]); // MP map
$packer->packExt(1, "\xaa"); // MP ext
Check the "Custom types" section below on how to pack custom types.
The Packer
object supports a number of bitmask-based options for fine-tuning the packing process (defaults are in bold):
Name | Description |
---|---|
FORCE_STR | Forces PHP strings to be packed as MessagePack UTF-8 strings |
FORCE_BIN | Forces PHP strings to be packed as MessagePack binary data |
DETECT_STR_BIN | Detects MessagePack str/bin type automatically |
FORCE_ARR | Forces PHP arrays to be packed as MessagePack arrays |
FORCE_MAP | Forces PHP arrays to be packed as MessagePack maps |
DETECT_ARR_MAP | Detects MessagePack array/map type automatically |
FORCE_FLOAT32 | Forces PHP floats to be packed as 32-bits MessagePack floats |
FORCE_FLOAT64 | Forces PHP floats to be packed as 64-bits MessagePack floats |
The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead, which can be noticed when you pack large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only work with UTF-8 strings and/or associative arrays), you can eliminate this overhead by forcing the packer to use the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly specify the value type. The library provides two auxiliary classes for this, Map and Bin (see the sketch after the examples below). Check the "Custom types" section below for details.
Examples:
// detect str/bin type and pack PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::DETECT_STR_BIN | PackOptions::FORCE_FLOAT32);
// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);
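As a sketch of the explicit-type approach mentioned above, wrapping a value in one of the auxiliary classes pins its MessagePack type and skips auto-detection entirely. The MessagePack\Type\Map and MessagePack\Type\Bin namespaces and constructors are assumed here, based on the src/Type directory referenced below:
use MessagePack\Packer;
use MessagePack\Type\Bin;
use MessagePack\Type\Map;

$packer = new Packer();

// Wrapping the value fixes its MessagePack type, so no detection is run.
$packedBin = $packer->pack(new Bin("\x80"));     // always packed as MP bin
$packedMap = $packer->pack(new Map(['a' => 1])); // always packed as MP map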
To unpack data you can either use an instance of a BufferUnpacker
:
$unpacker = new BufferUnpacker();
$unpacker->reset($packed);
$value = $unpacker->unpack();
or call a static method on the MessagePack
class:
$value = MessagePack::unpack($packed);
If the packed data is received in chunks (e.g. when reading from a stream), use the tryUnpack
method, which attempts to unpack data and returns an array of unpacked messages (if any) instead of throwing an InsufficientDataException
:
while ($chunk = ...) {
$unpacker->append($chunk);
if ($messages = $unpacker->tryUnpack()) {
return $messages;
}
}
If you want to unpack from a specific position in a buffer, use seek
:
$unpacker->seek(42); // set position equal to 42 bytes
$unpacker->seek(-8); // set position to 8 bytes before the end of the buffer
To skip bytes from the current position, use skip
:
$unpacker->skip(10); // set position to 10 bytes ahead of the current position
To get the number of remaining (unread) bytes in the buffer:
$unreadBytesCount = $unpacker->getRemainingCount();
To check whether the buffer has unread data:
$hasUnreadBytes = $unpacker->hasRemaining();
If needed, you can remove already read data from the buffer by calling:
$releasedBytesCount = $unpacker->release();
With the read
method you can read raw (packed) data:
$packedData = $unpacker->read(2); // read 2 bytes
Besides the above methods BufferUnpacker
provides type-specific unpacking methods, namely:
$unpacker->unpackNil(); // PHP null
$unpacker->unpackBool(); // PHP bool
$unpacker->unpackInt(); // PHP int
$unpacker->unpackFloat(); // PHP float
$unpacker->unpackStr(); // PHP UTF-8 string
$unpacker->unpackBin(); // PHP binary string
$unpacker->unpackArray(); // PHP sequential array
$unpacker->unpackMap(); // PHP associative array
$unpacker->unpackExt(); // PHP MessagePack\Type\Ext object
The BufferUnpacker
object supports a number of bitmask-based options for fine-tuning the unpacking process (defaults are in bold):
Name | Description |
---|---|
BIGINT_AS_STR | Converts overflowed integers to strings [1] |
BIGINT_AS_GMP | Converts overflowed integers to GMP objects [2] |
BIGINT_AS_DEC | Converts overflowed integers to Decimal\Decimal objects [3] |
1. The binary MessagePack format has unsigned 64-bit as its largest integer data type, but PHP does not support such integers, which means that an overflow can occur during unpacking.
2. Make sure the GMP extension is enabled.
3. Make sure the Decimal extension is enabled.
Examples:
$packedUint64 = "\xcf"."\xff\xff\xff\xff"."\xff\xff\xff\xff";
$unpacker = new BufferUnpacker($packedUint64);
var_dump($unpacker->unpack()); // string(20) "18446744073709551615"
$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_GMP);
var_dump($unpacker->unpack()); // object(GMP) {...}
$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_DEC);
var_dump($unpacker->unpack()); // object(Decimal\Decimal) {...}
In addition to the basic types, the library provides functionality to serialize and deserialize arbitrary types. This can be done in several ways, depending on your use case. Let's take a look at them.
If you need to serialize an instance of one of your classes into one of the basic MessagePack types, the best way to do this is to implement the CanBePacked interface in the class. A good example of such a class is the Map
type class that comes with the library. This type is useful when you want to explicitly specify that a given PHP array should be packed as a MessagePack map without triggering an automatic type detection routine:
$packer = new Packer();
$packedMap = $packer->pack(new Map([1, 2, 3]));
$packedArray = $packer->pack([1, 2, 3]);
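As an illustration of implementing the interface yourself, here is a minimal sketch of a hypothetical Money value object. It assumes CanBePacked declares a single pack(Packer $packer): string method, mirroring the bundled type classes:
use MessagePack\CanBePacked;
use MessagePack\Packer;

// Hypothetical example class; not part of the library.
class Money implements CanBePacked
{
    public function __construct(private int $amount, private string $currency) {}

    public function pack(Packer $packer) : string
    {
        // Serialize as a MessagePack array: [amount, currency].
        return $packer->packArray([$this->amount, $this->currency]);
    }
}

$packedMoney = (new Packer())->pack(new Money(100, 'USD'));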
More type examples can be found in the src/Type directory.
As with type objects, type transformers are only responsible for serializing values. They should be used when you need to serialize a value that does not implement the CanBePacked interface. Examples of such values could be instances of built-in or third-party classes that you don't own, or non-objects such as resources.
A transformer class must implement the CanPack interface. To use a transformer, it must first be registered in the packer. Here is an example of how to serialize PHP streams into the MessagePack bin
format type using one of the supplied transformers, StreamTransformer
:
$packer = new Packer(null, [new StreamTransformer()]);
$packedBin = $packer->pack(fopen('/path/to/file', 'r+'));
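As a sketch of writing your own transformer, the hypothetical class below packs DateTimeInterface objects as ISO-8601 strings. It assumes CanPack declares pack(Packer $packer, $value): ?string, where returning null tells the packer the value was not handled:
use MessagePack\CanPack;
use MessagePack\Packer;

// Hypothetical transformer; not shipped with the library.
class DateTimeTransformer implements CanPack
{
    public function pack(Packer $packer, $value) : ?string
    {
        return $value instanceof \DateTimeInterface
            ? $packer->packStr($value->format(\DateTimeInterface::ATOM))
            : null; // fall through to other transformers and built-in types
    }
}

$packer = new Packer(null, [new DateTimeTransformer()]);
$packedDate = $packer->pack(new \DateTimeImmutable('2022-05-25 12:00:00'));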
More type transformer examples can be found in the src/TypeTransformer directory.
In contrast to the cases described above, extensions are intended to handle extension types and are responsible for both serialization and deserialization of values (types).
An extension class must implement the Extension interface. To use an extension, it must first be registered in the packer and the unpacker.
The MessagePack specification divides extension types into two groups: predefined and application-specific. Currently, there is only one predefined type in the specification, Timestamp.
Timestamp
The Timestamp extension type is a predefined type. Support for this type in the library is done through the TimestampExtension
class. This class is responsible for handling Timestamp
objects, which represent the number of seconds and optional adjustment in nanoseconds:
$timestampExtension = new TimestampExtension();
$packer = new Packer();
$packer = $packer->extendWith($timestampExtension);
$unpacker = new BufferUnpacker();
$unpacker = $unpacker->extendWith($timestampExtension);
$packedTimestamp = $packer->pack(Timestamp::now());
$timestamp = $unpacker->reset($packedTimestamp)->unpack();
$seconds = $timestamp->getSeconds();
$nanoseconds = $timestamp->getNanoseconds();
When using the MessagePack
class, the Timestamp extension is already registered:
$packedTimestamp = MessagePack::pack(Timestamp::now());
$timestamp = MessagePack::unpack($packedTimestamp);
Application-specific extensions
In addition, the format can be extended with your own types. For example, to make the built-in PHP DateTime objects first-class citizens in your code, you can create a corresponding extension, as shown in the example. Please note that custom extensions have to be registered with a unique extension ID (an integer from 0 to 127).
More extension examples can be found in the examples/MessagePack directory.
To learn more about how extension types can be useful, check out this article.
If an error occurs during packing/unpacking, a PackingFailedException
or an UnpackingFailedException
will be thrown, respectively. In addition, an InsufficientDataException
can be thrown during unpacking.
An InvalidOptionException
will be thrown in case an invalid option (or a combination of mutually exclusive options) is used.
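A minimal sketch of handling these exceptions when unpacking; the class names are taken from the text above and are assumed to live in the MessagePack\Exception namespace, as InvalidOptionException does in the earlier options example:
use MessagePack\BufferUnpacker;
use MessagePack\Exception\InsufficientDataException;
use MessagePack\Exception\UnpackingFailedException;

$unpacker = new BufferUnpacker($packed);

try {
    $value = $unpacker->unpack();
} catch (InsufficientDataException $e) {
    // not enough bytes in the buffer yet: append more data and retry
} catch (UnpackingFailedException $e) {
    // the buffer contains malformed MessagePack data
}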
Run tests as follows:
vendor/bin/phpunit
Also, if you already have Docker installed, you can run the tests in a docker container. First, create a container:
./dockerfile.sh | docker build -t msgpack -
The command above will build an image named msgpack
with the PHP 8.1 runtime. You may change the default runtime by defining the PHP_IMAGE
environment variable:
PHP_IMAGE='php:8.0-cli' ./dockerfile.sh | docker build -t msgpack -
See a list of various images here.
Then run the unit tests:
docker run --rm -v $PWD:/msgpack -w /msgpack msgpack
To ensure that the unpacking works correctly with malformed/semi-malformed data, you can use a testing technique called fuzzing. The library ships with a help file (target) for PHP-Fuzzer, which can be used as follows:
php-fuzzer fuzz tests/fuzz_buffer_unpacker.php
To check performance, run:
php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
=============================================
Test/Target Packer BufferUnpacker
---------------------------------------------
nil .................. 0.0030 ........ 0.0139
false ................ 0.0037 ........ 0.0144
true ................. 0.0040 ........ 0.0137
7-bit uint #1 ........ 0.0052 ........ 0.0120
7-bit uint #2 ........ 0.0059 ........ 0.0114
7-bit uint #3 ........ 0.0061 ........ 0.0119
5-bit sint #1 ........ 0.0067 ........ 0.0126
5-bit sint #2 ........ 0.0064 ........ 0.0132
5-bit sint #3 ........ 0.0066 ........ 0.0135
8-bit uint #1 ........ 0.0078 ........ 0.0200
8-bit uint #2 ........ 0.0077 ........ 0.0212
8-bit uint #3 ........ 0.0086 ........ 0.0203
16-bit uint #1 ....... 0.0111 ........ 0.0271
16-bit uint #2 ....... 0.0115 ........ 0.0260
16-bit uint #3 ....... 0.0103 ........ 0.0273
32-bit uint #1 ....... 0.0116 ........ 0.0326
32-bit uint #2 ....... 0.0118 ........ 0.0332
32-bit uint #3 ....... 0.0127 ........ 0.0325
64-bit uint #1 ....... 0.0140 ........ 0.0277
64-bit uint #2 ....... 0.0134 ........ 0.0294
64-bit uint #3 ....... 0.0134 ........ 0.0281
8-bit int #1 ......... 0.0086 ........ 0.0241
8-bit int #2 ......... 0.0089 ........ 0.0225
8-bit int #3 ......... 0.0085 ........ 0.0229
16-bit int #1 ........ 0.0118 ........ 0.0280
16-bit int #2 ........ 0.0121 ........ 0.0270
16-bit int #3 ........ 0.0109 ........ 0.0274
32-bit int #1 ........ 0.0128 ........ 0.0346
32-bit int #2 ........ 0.0118 ........ 0.0339
32-bit int #3 ........ 0.0135 ........ 0.0368
64-bit int #1 ........ 0.0138 ........ 0.0276
64-bit int #2 ........ 0.0132 ........ 0.0286
64-bit int #3 ........ 0.0137 ........ 0.0274
64-bit int #4 ........ 0.0180 ........ 0.0285
64-bit float #1 ...... 0.0134 ........ 0.0284
64-bit float #2 ...... 0.0125 ........ 0.0275
64-bit float #3 ...... 0.0126 ........ 0.0283
fix string #1 ........ 0.0035 ........ 0.0133
fix string #2 ........ 0.0094 ........ 0.0216
fix string #3 ........ 0.0094 ........ 0.0222
fix string #4 ........ 0.0091 ........ 0.0241
8-bit string #1 ...... 0.0122 ........ 0.0301
8-bit string #2 ...... 0.0118 ........ 0.0304
8-bit string #3 ...... 0.0119 ........ 0.0315
16-bit string #1 ..... 0.0150 ........ 0.0388
16-bit string #2 ..... 0.1545 ........ 0.1665
32-bit string ........ 0.1570 ........ 0.1756
wide char string #1 .. 0.0091 ........ 0.0236
wide char string #2 .. 0.0122 ........ 0.0313
8-bit binary #1 ...... 0.0100 ........ 0.0302
8-bit binary #2 ...... 0.0123 ........ 0.0324
8-bit binary #3 ...... 0.0126 ........ 0.0327
16-bit binary ........ 0.0168 ........ 0.0372
32-bit binary ........ 0.1588 ........ 0.1754
fix array #1 ......... 0.0042 ........ 0.0131
fix array #2 ......... 0.0294 ........ 0.0367
fix array #3 ......... 0.0412 ........ 0.0472
16-bit array #1 ...... 0.1378 ........ 0.1596
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.1865 ........ 0.2283
fix map #1 ........... 0.0725 ........ 0.1048
fix map #2 ........... 0.0319 ........ 0.0405
fix map #3 ........... 0.0356 ........ 0.0665
fix map #4 ........... 0.0465 ........ 0.0497
16-bit map #1 ........ 0.2540 ........ 0.3028
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.2372 ........ 0.2710
fixext 1 ............. 0.0283 ........ 0.0358
fixext 2 ............. 0.0291 ........ 0.0371
fixext 4 ............. 0.0302 ........ 0.0355
fixext 8 ............. 0.0288 ........ 0.0384
fixext 16 ............ 0.0293 ........ 0.0359
8-bit ext ............ 0.0302 ........ 0.0439
16-bit ext ........... 0.0334 ........ 0.0499
32-bit ext ........... 0.1845 ........ 0.1888
32-bit timestamp #1 .. 0.0337 ........ 0.0547
32-bit timestamp #2 .. 0.0335 ........ 0.0560
64-bit timestamp #1 .. 0.0371 ........ 0.0575
64-bit timestamp #2 .. 0.0374 ........ 0.0542
64-bit timestamp #3 .. 0.0356 ........ 0.0533
96-bit timestamp #1 .. 0.0362 ........ 0.0699
96-bit timestamp #2 .. 0.0381 ........ 0.0701
96-bit timestamp #3 .. 0.0367 ........ 0.0687
=============================================
Total 2.7618 4.0820
Skipped 4 4
Failed 0 0
Ignored 0 0
With JIT:
php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
=============================================
Test/Target Packer BufferUnpacker
---------------------------------------------
nil .................. 0.0005 ........ 0.0054
false ................ 0.0004 ........ 0.0059
true ................. 0.0004 ........ 0.0059
7-bit uint #1 ........ 0.0010 ........ 0.0047
7-bit uint #2 ........ 0.0010 ........ 0.0046
7-bit uint #3 ........ 0.0010 ........ 0.0046
5-bit sint #1 ........ 0.0025 ........ 0.0046
5-bit sint #2 ........ 0.0023 ........ 0.0046
5-bit sint #3 ........ 0.0024 ........ 0.0045
8-bit uint #1 ........ 0.0043 ........ 0.0081
8-bit uint #2 ........ 0.0043 ........ 0.0079
8-bit uint #3 ........ 0.0041 ........ 0.0080
16-bit uint #1 ....... 0.0064 ........ 0.0095
16-bit uint #2 ....... 0.0064 ........ 0.0091
16-bit uint #3 ....... 0.0064 ........ 0.0094
32-bit uint #1 ....... 0.0085 ........ 0.0114
32-bit uint #2 ....... 0.0077 ........ 0.0122
32-bit uint #3 ....... 0.0077 ........ 0.0120
64-bit uint #1 ....... 0.0085 ........ 0.0159
64-bit uint #2 ....... 0.0086 ........ 0.0157
64-bit uint #3 ....... 0.0086 ........ 0.0158
8-bit int #1 ......... 0.0042 ........ 0.0080
8-bit int #2 ......... 0.0042 ........ 0.0080
8-bit int #3 ......... 0.0042 ........ 0.0081
16-bit int #1 ........ 0.0065 ........ 0.0095
16-bit int #2 ........ 0.0065 ........ 0.0090
16-bit int #3 ........ 0.0056 ........ 0.0085
32-bit int #1 ........ 0.0067 ........ 0.0107
32-bit int #2 ........ 0.0066 ........ 0.0106
32-bit int #3 ........ 0.0063 ........ 0.0104
64-bit int #1 ........ 0.0072 ........ 0.0162
64-bit int #2 ........ 0.0073 ........ 0.0174
64-bit int #3 ........ 0.0072 ........ 0.0164
64-bit int #4 ........ 0.0077 ........ 0.0161
64-bit float #1 ...... 0.0053 ........ 0.0135
64-bit float #2 ...... 0.0053 ........ 0.0135
64-bit float #3 ...... 0.0052 ........ 0.0135
fix string #1 ....... -0.0002 ........ 0.0044
fix string #2 ........ 0.0035 ........ 0.0067
fix string #3 ........ 0.0035 ........ 0.0077
fix string #4 ........ 0.0033 ........ 0.0078
8-bit string #1 ...... 0.0059 ........ 0.0110
8-bit string #2 ...... 0.0063 ........ 0.0121
8-bit string #3 ...... 0.0064 ........ 0.0124
16-bit string #1 ..... 0.0099 ........ 0.0146
16-bit string #2 ..... 0.1522 ........ 0.1474
32-bit string ........ 0.1511 ........ 0.1483
wide char string #1 .. 0.0039 ........ 0.0084
wide char string #2 .. 0.0073 ........ 0.0123
8-bit binary #1 ...... 0.0040 ........ 0.0112
8-bit binary #2 ...... 0.0075 ........ 0.0123
8-bit binary #3 ...... 0.0077 ........ 0.0129
16-bit binary ........ 0.0096 ........ 0.0145
32-bit binary ........ 0.1535 ........ 0.1479
fix array #1 ......... 0.0008 ........ 0.0061
fix array #2 ......... 0.0121 ........ 0.0165
fix array #3 ......... 0.0193 ........ 0.0222
16-bit array #1 ...... 0.0607 ........ 0.0479
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.0749 ........ 0.0824
fix map #1 ........... 0.0329 ........ 0.0431
fix map #2 ........... 0.0161 ........ 0.0189
fix map #3 ........... 0.0205 ........ 0.0262
fix map #4 ........... 0.0252 ........ 0.0205
16-bit map #1 ........ 0.1016 ........ 0.0927
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.1096 ........ 0.1030
fixext 1 ............. 0.0157 ........ 0.0161
fixext 2 ............. 0.0175 ........ 0.0183
fixext 4 ............. 0.0156 ........ 0.0185
fixext 8 ............. 0.0163 ........ 0.0184
fixext 16 ............ 0.0164 ........ 0.0182
8-bit ext ............ 0.0158 ........ 0.0207
16-bit ext ........... 0.0203 ........ 0.0219
32-bit ext ........... 0.1614 ........ 0.1539
32-bit timestamp #1 .. 0.0195 ........ 0.0249
32-bit timestamp #2 .. 0.0188 ........ 0.0260
64-bit timestamp #1 .. 0.0207 ........ 0.0281
64-bit timestamp #2 .. 0.0212 ........ 0.0291
64-bit timestamp #3 .. 0.0207 ........ 0.0295
96-bit timestamp #1 .. 0.0222 ........ 0.0358
96-bit timestamp #2 .. 0.0228 ........ 0.0353
96-bit timestamp #3 .. 0.0210 ........ 0.0319
=============================================
Total 1.6432 1.9674
Skipped 4 4
Failed 0 0
Ignored 0 0
You may change default benchmark settings by defining the following environment variables:
Name | Default |
---|---|
MP_BENCH_TARGETS | pure_p,pure_u , see a list of available targets |
MP_BENCH_ITERATIONS | 100_000 |
MP_BENCH_DURATION | not set |
MP_BENCH_ROUNDS | 3 |
MP_BENCH_TESTS | -@slow , see a list of available tests |
For example:
export MP_BENCH_TARGETS=pure_p
export MP_BENCH_ITERATIONS=1000000
export MP_BENCH_ROUNDS=5
# a comma separated list of test names
export MP_BENCH_TESTS='complex array, complex map'
# or a group name
# export MP_BENCH_TESTS='-@slow' // @pecl_comp
# or a regexp
# export MP_BENCH_TESTS='/complex (array|map)/'
Another example, benchmarking both the library and the PECL extension:
MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
===========================================================================
Test/Target Packer BufferUnpacker msgpack_pack msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0031 ........ 0.0141 ...... 0.0055 ........ 0.0064
false ................ 0.0039 ........ 0.0154 ...... 0.0056 ........ 0.0053
true ................. 0.0038 ........ 0.0139 ...... 0.0056 ........ 0.0044
7-bit uint #1 ........ 0.0061 ........ 0.0110 ...... 0.0059 ........ 0.0046
7-bit uint #2 ........ 0.0065 ........ 0.0119 ...... 0.0042 ........ 0.0029
7-bit uint #3 ........ 0.0054 ........ 0.0117 ...... 0.0045 ........ 0.0025
5-bit sint #1 ........ 0.0047 ........ 0.0103 ...... 0.0038 ........ 0.0022
5-bit sint #2 ........ 0.0048 ........ 0.0117 ...... 0.0038 ........ 0.0022
5-bit sint #3 ........ 0.0046 ........ 0.0102 ...... 0.0038 ........ 0.0023
8-bit uint #1 ........ 0.0063 ........ 0.0174 ...... 0.0039 ........ 0.0031
8-bit uint #2 ........ 0.0063 ........ 0.0167 ...... 0.0040 ........ 0.0029
8-bit uint #3 ........ 0.0063 ........ 0.0168 ...... 0.0039 ........ 0.0030
16-bit uint #1 ....... 0.0092 ........ 0.0222 ...... 0.0049 ........ 0.0030
16-bit uint #2 ....... 0.0096 ........ 0.0227 ...... 0.0042 ........ 0.0046
16-bit uint #3 ....... 0.0123 ........ 0.0274 ...... 0.0059 ........ 0.0051
32-bit uint #1 ....... 0.0136 ........ 0.0331 ...... 0.0060 ........ 0.0048
32-bit uint #2 ....... 0.0130 ........ 0.0336 ...... 0.0070 ........ 0.0048
32-bit uint #3 ....... 0.0127 ........ 0.0329 ...... 0.0051 ........ 0.0048
64-bit uint #1 ....... 0.0126 ........ 0.0268 ...... 0.0055 ........ 0.0049
64-bit uint #2 ....... 0.0135 ........ 0.0281 ...... 0.0052 ........ 0.0046
64-bit uint #3 ....... 0.0131 ........ 0.0274 ...... 0.0069 ........ 0.0044
8-bit int #1 ......... 0.0077 ........ 0.0236 ...... 0.0058 ........ 0.0044
8-bit int #2 ......... 0.0087 ........ 0.0244 ...... 0.0058 ........ 0.0048
8-bit int #3 ......... 0.0084 ........ 0.0241 ...... 0.0055 ........ 0.0049
16-bit int #1 ........ 0.0112 ........ 0.0271 ...... 0.0048 ........ 0.0045
16-bit int #2 ........ 0.0124 ........ 0.0292 ...... 0.0057 ........ 0.0049
16-bit int #3 ........ 0.0118 ........ 0.0270 ...... 0.0058 ........ 0.0050
32-bit int #1 ........ 0.0137 ........ 0.0366 ...... 0.0058 ........ 0.0051
32-bit int #2 ........ 0.0133 ........ 0.0366 ...... 0.0056 ........ 0.0049
32-bit int #3 ........ 0.0129 ........ 0.0350 ...... 0.0052 ........ 0.0048
64-bit int #1 ........ 0.0145 ........ 0.0254 ...... 0.0034 ........ 0.0025
64-bit int #2 ........ 0.0097 ........ 0.0214 ...... 0.0034 ........ 0.0025
64-bit int #3 ........ 0.0096 ........ 0.0287 ...... 0.0059 ........ 0.0050
64-bit int #4 ........ 0.0143 ........ 0.0277 ...... 0.0059 ........ 0.0046
64-bit float #1 ...... 0.0134 ........ 0.0281 ...... 0.0057 ........ 0.0052
64-bit float #2 ...... 0.0141 ........ 0.0281 ...... 0.0057 ........ 0.0050
64-bit float #3 ...... 0.0144 ........ 0.0282 ...... 0.0057 ........ 0.0050
fix string #1 ........ 0.0036 ........ 0.0143 ...... 0.0066 ........ 0.0053
fix string #2 ........ 0.0107 ........ 0.0222 ...... 0.0065 ........ 0.0068
fix string #3 ........ 0.0116 ........ 0.0245 ...... 0.0063 ........ 0.0069
fix string #4 ........ 0.0105 ........ 0.0253 ...... 0.0083 ........ 0.0077
8-bit string #1 ...... 0.0126 ........ 0.0318 ...... 0.0075 ........ 0.0088
8-bit string #2 ...... 0.0121 ........ 0.0295 ...... 0.0076 ........ 0.0086
8-bit string #3 ...... 0.0125 ........ 0.0293 ...... 0.0130 ........ 0.0093
16-bit string #1 ..... 0.0159 ........ 0.0368 ...... 0.0117 ........ 0.0086
16-bit string #2 ..... 0.1547 ........ 0.1686 ...... 0.1516 ........ 0.1373
32-bit string ........ 0.1558 ........ 0.1729 ...... 0.1511 ........ 0.1396
wide char string #1 .. 0.0098 ........ 0.0237 ...... 0.0066 ........ 0.0065
wide char string #2 .. 0.0128 ........ 0.0291 ...... 0.0061 ........ 0.0082
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0040 ........ 0.0129 ...... 0.0120 ........ 0.0058
fix array #2 ......... 0.0279 ........ 0.0390 ...... 0.0143 ........ 0.0165
fix array #3 ......... 0.0415 ........ 0.0463 ...... 0.0162 ........ 0.0187
16-bit array #1 ...... 0.1349 ........ 0.1628 ...... 0.0334 ........ 0.0341
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0345 ........ 0.0391 ...... 0.0143 ........ 0.0168
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0459 ........ 0.0473 ...... 0.0151 ........ 0.0163
16-bit map #1 ........ 0.2518 ........ 0.2962 ...... 0.0400 ........ 0.0490
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.2380 ........ 0.2682 ...... 0.0545 ........ 0.0579
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total 1.5625 2.3866 0.7735 0.7243
Skipped 4 4 4 4
Failed 0 0 24 17
Ignored 24 24 0 7
With JIT:
MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
===========================================================================
Test/Target Packer BufferUnpacker msgpack_pack msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0001 ........ 0.0052 ...... 0.0053 ........ 0.0042
false ................ 0.0007 ........ 0.0060 ...... 0.0057 ........ 0.0043
true ................. 0.0008 ........ 0.0060 ...... 0.0056 ........ 0.0041
7-bit uint #1 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0041
7-bit uint #2 ........ 0.0021 ........ 0.0043 ...... 0.0062 ........ 0.0041
7-bit uint #3 ........ 0.0022 ........ 0.0044 ...... 0.0061 ........ 0.0040
5-bit sint #1 ........ 0.0030 ........ 0.0048 ...... 0.0062 ........ 0.0040
5-bit sint #2 ........ 0.0032 ........ 0.0046 ...... 0.0062 ........ 0.0040
5-bit sint #3 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0040
8-bit uint #1 ........ 0.0054 ........ 0.0079 ...... 0.0062 ........ 0.0050
8-bit uint #2 ........ 0.0051 ........ 0.0079 ...... 0.0064 ........ 0.0044
8-bit uint #3 ........ 0.0051 ........ 0.0082 ...... 0.0062 ........ 0.0044
16-bit uint #1 ....... 0.0077 ........ 0.0094 ...... 0.0065 ........ 0.0045
16-bit uint #2 ....... 0.0077 ........ 0.0094 ...... 0.0063 ........ 0.0045
16-bit uint #3 ....... 0.0077 ........ 0.0095 ...... 0.0064 ........ 0.0047
32-bit uint #1 ....... 0.0088 ........ 0.0119 ...... 0.0063 ........ 0.0043
32-bit uint #2 ....... 0.0089 ........ 0.0117 ...... 0.0062 ........ 0.0039
32-bit uint #3 ....... 0.0089 ........ 0.0118 ...... 0.0063 ........ 0.0044
64-bit uint #1 ....... 0.0097 ........ 0.0155 ...... 0.0063 ........ 0.0045
64-bit uint #2 ....... 0.0095 ........ 0.0153 ...... 0.0061 ........ 0.0045
64-bit uint #3 ....... 0.0096 ........ 0.0156 ...... 0.0063 ........ 0.0047
8-bit int #1 ......... 0.0053 ........ 0.0083 ...... 0.0062 ........ 0.0044
8-bit int #2 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0044
8-bit int #3 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0043
16-bit int #1 ........ 0.0089 ........ 0.0097 ...... 0.0069 ........ 0.0046
16-bit int #2 ........ 0.0075 ........ 0.0093 ...... 0.0063 ........ 0.0043
16-bit int #3 ........ 0.0075 ........ 0.0094 ...... 0.0062 ........ 0.0046
32-bit int #1 ........ 0.0086 ........ 0.0122 ...... 0.0063 ........ 0.0044
32-bit int #2 ........ 0.0087 ........ 0.0120 ...... 0.0066 ........ 0.0046
32-bit int #3 ........ 0.0086 ........ 0.0121 ...... 0.0060 ........ 0.0044
64-bit int #1 ........ 0.0096 ........ 0.0149 ...... 0.0060 ........ 0.0045
64-bit int #2 ........ 0.0096 ........ 0.0157 ...... 0.0062 ........ 0.0044
64-bit int #3 ........ 0.0096 ........ 0.0160 ...... 0.0063 ........ 0.0046
64-bit int #4 ........ 0.0097 ........ 0.0157 ...... 0.0061 ........ 0.0044
64-bit float #1 ...... 0.0079 ........ 0.0153 ...... 0.0056 ........ 0.0044
64-bit float #2 ...... 0.0079 ........ 0.0152 ...... 0.0057 ........ 0.0045
64-bit float #3 ...... 0.0079 ........ 0.0155 ...... 0.0057 ........ 0.0044
fix string #1 ........ 0.0010 ........ 0.0045 ...... 0.0071 ........ 0.0044
fix string #2 ........ 0.0048 ........ 0.0075 ...... 0.0070 ........ 0.0060
fix string #3 ........ 0.0048 ........ 0.0086 ...... 0.0068 ........ 0.0060
fix string #4 ........ 0.0050 ........ 0.0088 ...... 0.0070 ........ 0.0059
8-bit string #1 ...... 0.0081 ........ 0.0129 ...... 0.0069 ........ 0.0062
8-bit string #2 ...... 0.0086 ........ 0.0128 ...... 0.0069 ........ 0.0065
8-bit string #3 ...... 0.0086 ........ 0.0126 ...... 0.0115 ........ 0.0065
16-bit string #1 ..... 0.0105 ........ 0.0137 ...... 0.0128 ........ 0.0068
16-bit string #2 ..... 0.1510 ........ 0.1486 ...... 0.1526 ........ 0.1391
32-bit string ........ 0.1517 ........ 0.1475 ...... 0.1504 ........ 0.1370
wide char string #1 .. 0.0044 ........ 0.0085 ...... 0.0067 ........ 0.0057
wide char string #2 .. 0.0081 ........ 0.0125 ...... 0.0069 ........ 0.0063
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0014 ........ 0.0059 ...... 0.0132 ........ 0.0055
fix array #2 ......... 0.0146 ........ 0.0156 ...... 0.0155 ........ 0.0148
fix array #3 ......... 0.0211 ........ 0.0229 ...... 0.0179 ........ 0.0180
16-bit array #1 ...... 0.0673 ........ 0.0498 ...... 0.0343 ........ 0.0388
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0148 ........ 0.0180 ...... 0.0156 ........ 0.0179
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0252 ........ 0.0201 ...... 0.0214 ........ 0.0167
16-bit map #1 ........ 0.1027 ........ 0.0836 ...... 0.0388 ........ 0.0510
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.1104 ........ 0.1010 ...... 0.0556 ........ 0.0602
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total 0.9642 1.0909 0.8224 0.7213
Skipped 4 4 4 4
Failed 0 0 24 17
Ignored 24 24 0 7
Note that the msgpack extension (v2.1.2) doesn't support ext, bin and UTF-8 str types.
The library is released under the MIT License. See the bundled LICENSE file for details.
Author: rybakit
Source Code: https://github.com/rybakit/msgpack.php
License: MIT License
1677907260
Node.js client for the official ChatGPT API.
This package is a Node.js wrapper around ChatGPT by OpenAI. TS batteries included. ✨
March 1, 2023
The official OpenAI chat completions API has been released, and it is now the default for this package! 🔥
Method | Free? | Robust? | Quality? |
---|---|---|---|
ChatGPTAPI | ❌ No | ✅ Yes | ✅️ Real ChatGPT models |
ChatGPTUnofficialProxyAPI | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
Note: We strongly recommend using ChatGPTAPI
since it uses the officially supported API from OpenAI. We may remove support for ChatGPTUnofficialProxyAPI
in a future release.
ChatGPTAPI - Uses the gpt-3.5-turbo-0301 model with the official OpenAI chat completions API (official, robust approach, but it's not free)
ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
To run the CLI, you'll need an OpenAI API key:
export OPENAI_API_KEY="sk-TODO"
npx chatgpt "your prompt here"
By default, the response is streamed to stdout, the results are stored in a local config file, and every invocation starts a new conversation. You can use -c
to continue the previous conversation and --no-stream
to disable streaming.
Under the hood, the CLI uses ChatGPTAPI
with text-davinci-003
to mimic ChatGPT.
Usage:
$ chatgpt <prompt>
Commands:
<prompt> Ask ChatGPT a question
rm-cache Clears the local message cache
ls-cache Prints the local message cache path
For more info, run any command with the `--help` flag:
$ chatgpt --help
$ chatgpt rm-cache --help
$ chatgpt ls-cache --help
Options:
-c, --continue Continue last conversation (default: false)
-d, --debug Enables debug logging (default: false)
-s, --stream Streams the response (default: true)
-s, --store Enables the local message cache (default: true)
-t, --timeout Timeout in milliseconds
-k, --apiKey OpenAI API key
-n, --conversationName Unique name for the conversation
-h, --help Display this message
-v, --version Display version number
npm install chatgpt
Make sure you're using node >= 18
so fetch
is available (or node >= 14
if you install a fetch polyfill).
To use this module from Node.js, you need to pick between two methods:
Method | Free? | Robust? | Quality? |
---|---|---|---|
ChatGPTAPI | ❌ No | ✅ Yes | ✅️ Real ChatGPT models |
ChatGPTUnofficialProxyAPI | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
ChatGPTAPI
- Uses the gpt-3.5-turbo-0301
model with the official OpenAI chat completions API (official, robust approach, but it's not free). You can override the model, completion params, and system message to fully customize your assistant.
ChatGPTUnofficialProxyAPI
- Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
Both approaches have very similar APIs, so it should be simple to swap between them.
Note: We strongly recommend using ChatGPTAPI
since it uses the officially supported API from OpenAI. We may remove support for ChatGPTUnofficialProxyAPI
in a future release.
Sign up for an OpenAI API key and store it in your environment.
import { ChatGPTAPI } from 'chatgpt'
async function example() {
const api = new ChatGPTAPI({
apiKey: process.env.OPENAI_API_KEY
})
const res = await api.sendMessage('Hello World!')
console.log(res.text)
}
You can override the default model
(gpt-3.5-turbo-0301
) and any OpenAI chat completion params using completionParams
:
const api = new ChatGPTAPI({
apiKey: process.env.OPENAI_API_KEY,
completionParams: {
temperature: 0.5,
top_p: 0.8
}
})
If you want to track the conversation, you'll need to pass the parentMessageId
like this:
const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })
// send a message and wait for the response
let res = await api.sendMessage('What is OpenAI?')
console.log(res.text)
// send a follow-up
res = await api.sendMessage('Can you expand on that?', {
parentMessageId: res.id
})
console.log(res.text)
// send another follow-up
res = await api.sendMessage('What were we talking about?', {
parentMessageId: res.id
})
console.log(res.text)
You can add streaming via the onProgress
handler:
const res = await api.sendMessage('Write a 500 word essay on frogs.', {
// print the partial response as the AI is "typing"
onProgress: (partialResponse) => console.log(partialResponse.text)
})
// print the full text at the end
console.log(res.text)
You can add a timeout using the timeoutMs
option:
// timeout after 2 minutes (which will also abort the underlying HTTP request)
const response = await api.sendMessage(
'write me a really really long essay on frogs',
{
timeoutMs: 2 * 60 * 1000
}
)
If you want to see more info about what's actually being sent to OpenAI's chat completions API, set the debug: true
option in the ChatGPTAPI
constructor:
const api = new ChatGPTAPI({
apiKey: process.env.OPENAI_API_KEY,
debug: true
})
We default to a basic systemMessage
. You can override this in either the ChatGPTAPI
constructor or sendMessage
:
const res = await api.sendMessage('what is the answer to the universe?', {
systemMessage: `You are ChatGPT, a large language model trained by OpenAI. You answer as concisely as possible for each response. If you are generating a list, do not have too many items.
Current date: ${new Date().toISOString()}\n\n`
})
Note that we automatically handle appending the previous messages to the prompt and attempt to optimize for the available tokens (which defaults to 4096
).
Usage in CommonJS (Dynamic import)
async function example() {
// To use ESM in CommonJS, you can use a dynamic import
const { ChatGPTAPI } = await import('chatgpt')
const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })
const res = await api.sendMessage('Hello World!')
console.log(res.text)
}
The API for ChatGPTUnofficialProxyAPI
is almost exactly the same. You just need to provide a ChatGPT accessToken
instead of an OpenAI API key.
import { ChatGPTUnofficialProxyAPI } from 'chatgpt'
async function example() {
const api = new ChatGPTUnofficialProxyAPI({
accessToken: process.env.OPENAI_ACCESS_TOKEN
})
const res = await api.sendMessage('Hello World!')
console.log(res.text)
}
See demos/demo-reverse-proxy for a full example:
npx tsx demos/demo-reverse-proxy.ts
ChatGPTUnofficialProxyAPI
messages also contain a conversationId
in addition to parentMessageId
, since the ChatGPT webapp can't reference messages across different conversations.
You can override the reverse proxy by passing apiReverseProxyUrl
:
const api = new ChatGPTUnofficialProxyAPI({
accessToken: process.env.OPENAI_ACCESS_TOKEN,
apiReverseProxyUrl: 'https://your-example-server.com/api/conversation'
})
Known reverse proxies run by community members include:
Reverse Proxy URL | Author | Rate Limits | Last Checked |
---|---|---|---|
https://chat.duti.tech/api/conversation | @acheong08 | 120 req/min by IP | 2/19/2023 |
https://gpt.pawan.krd/backend-api/conversation | @PawanOsman | ? | 2/19/2023 |
Note: info on how the reverse proxies work is not being published at this time in order to prevent OpenAI from disabling access.
To use ChatGPTUnofficialProxyAPI
, you'll need an OpenAI access token from the ChatGPT webapp. To do this, you can use any of the following methods which take an email
and password
and return an access token:
These libraries work with email + password accounts (e.g., they do not support accounts where you auth via Microsoft / Google).
Alternatively, you can manually get an accessToken
by logging in to the ChatGPT webapp and then opening https://chat.openai.com/api/auth/session
, which will return a JSON object containing your accessToken
string.
Access tokens last for days.
Note: using a reverse proxy will expose your access token to a third-party. There shouldn't be any adverse effects possible from this, but please consider the risks before using this method.
See the auto-generated docs for more info on methods and parameters.
Most of the demos use ChatGPTAPI
. It should be pretty easy to convert them to use ChatGPTUnofficialProxyAPI
if you'd rather use that approach. The only thing that needs to change is how you initialize the api with an accessToken
instead of an apiKey
.
To run the included demos, set OPENAI_API_KEY
in .env.
A basic demo is included for testing purposes:
npx tsx demos/demo.ts
A demo showing on progress handler:
npx tsx demos/demo-on-progress.ts
The on progress demo uses the optional onProgress
parameter to sendMessage
to receive intermediary results as ChatGPT is "typing".
A conversation demo:
npx tsx demos/demo-conversation.ts
A persistence demo shows how to store messages in Redis for persistence:
npx tsx demos/demo-persistence.ts
Any keyv adaptor is supported for persistence, and there are overrides if you'd like to use a different way of storing / retrieving messages.
Note that persisting message is required for remembering the context of previous conversations beyond the scope of the current Node.js process, since by default, we only store messages in memory. Here's an external demo of using a completely custom database solution to persist messages.
Note: Persistence is handled automatically when using ChatGPTUnofficialProxyAPI
because it is connecting indirectly to ChatGPT.
All of these awesome projects are built using the chatgpt
package. 🤯
If you create a cool integration, feel free to open a PR and add it to the list.
This package requires node >= 14
, and fetch
must be installed (or polyfilled). When integrating chatgpt
into an app, we recommend using it only from your backend API.
Previous Updates
Feb 19, 2023
We now provide three ways of accessing the unofficial ChatGPT API, all of which have tradeoffs:
Method | Free? | Robust? | Quality? |
---|---|---|---|
ChatGPTAPI | ❌ No | ✅ Yes | ☑️ Mimics ChatGPT |
ChatGPTUnofficialProxyAPI | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
ChatGPTAPIBrowser (v3) | ✅ Yes | ❌ No | ✅ Real ChatGPT |
Note: I recommend that you use either ChatGPTAPI
or ChatGPTUnofficialProxyAPI
.
ChatGPTAPI - Uses text-davinci-003 to mimic ChatGPT via the official OpenAI completions API (most robust approach, but it's not free and doesn't use a model fine-tuned for chat)
ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
ChatGPTAPIBrowser - (deprecated; v3.5.1 of this package) Uses Puppeteer to access the official ChatGPT webapp (uses the real ChatGPT, but very flaky, heavyweight, and error prone)
Feb 5, 2023
OpenAI has disabled the leaked chat model we were previously using, so we're now defaulting to text-davinci-003
, which is not free.
We've found several other hidden, fine-tuned chat models, but OpenAI keeps disabling them, so we're searching for alternative workarounds.
Feb 1, 2023
This package no longer requires any browser hacks – it is now using the official OpenAI completions API with a leaked model that ChatGPT uses under the hood. 🔥
import { ChatGPTAPI } from 'chatgpt'
const api = new ChatGPTAPI({
apiKey: process.env.OPENAI_API_KEY
})
const res = await api.sendMessage('Hello World!')
console.log(res.text)
Please upgrade to chatgpt@latest
(at least v4.0.0). The updated version is significantly more lightweight and robust compared with previous versions. You also don't have to worry about IP issues or rate limiting.
Huge shoutout to @waylaidwanderer for discovering the leaked chat model!
If you run into any issues, we do have a pretty active Discord with a bunch of ChatGPT hackers from the Node.js & Python communities.
Lastly, please consider starring this repo and following me on twitter to help support the project.
Thanks && cheers, Travis
Author: Transitive-bullshit
Source Code: https://github.com/transitive-bullshit/chatgpt-api
License: MIT license
1647540000
The Substrate Knowledge Map provides information that you—as a Substrate hackathon participant—need to know to develop a non-trivial application for your hackathon submission.
The map covers 6 main sections:
Each section contains basic information on each topic, with links to additional documentation for you to dig deeper. Within each section, you'll find a mix of quizzes and labs to test your knowledge as your progress through the map. The goal of the labs and quizzes is to help you consolidate what you've learned and put it to practice with some hands-on activities.
One question we often get is why learn the Substrate framework when we can write smart contracts to build decentralized applications?
The short answer is that using the Substrate framework and writing smart contracts are two different approaches.
Traditional smart contract platforms allow users to publish additional logic on top of some core blockchain logic. Since smart contract logic can be published by anyone, including malicious actors and inexperienced developers, there are a number of intentional safeguards and restrictions built around these public smart contract platforms. For example:
Fees: Smart contract developers must ensure that contract users are charged for the computation and storage they impose on the computers running their contract. With fees, block creators are protected from abuse of the network.
Sandboxed: A contract is not able to modify core blockchain storage or storage items of other contracts directly. Its power is limited to only modifying its own state, and the ability to make outside calls to other contracts or runtime functions.
Reversion: Contracts can be prone to undesirable situations that lead to logical errors when wanting to revert or upgrade them. Developers need to learn additional patterns such as splitting their contract's logic and data to ensure seamless upgrades.
These safeguards and restrictions make running smart contracts slower and more costly. However, it's important to consider the different developer audiences for contract development versus Substrate runtime development.
Building decentralized applications with smart contracts allows your community to extend and develop on top of your runtime logic without worrying about proposals, runtime upgrades, and so on. You can also use smart contracts as a testing ground for future runtime changes, but done in an isolated way that protects your network from any errors the changes might introduce.
In summary, smart contract development:
Unlike traditional smart contract development, Substrate runtime development offers none of the network protections or safeguards. Instead, as a runtime developer, you have total control over how the blockchain behaves. However, this level of control also means that there is a higher barrier to entry.
Substrate is a framework for building blockchains, which almost makes comparing it to smart contract development like comparing apples and oranges. With the Substrate framework, developers can build smart contracts but that is only a fraction of using Substrate to its full potential.
With Substrate, you have full control over the underlying logic that your network's nodes will run. You also have full access for modifying and controlling each and every storage item across your runtime modules. As you progress through this map, you'll discover concepts and techniques that will help you to unlock the potential of the Substrate framework, giving you the freedom to build the blockchain that best suits the needs of your application.
You'll also discover how you can upgrade the Substrate runtime with a single transaction instead of having to organize a community hard-fork. Upgradeability is one of the primary design features of the Substrate framework.
In summary, runtime development:
To learn more about using smart contracts within Substrate, refer to the Smart Contract - Overview page as well as the Polkadot Builders Guide.
If you need any community support, please join the following channels based on the area where you need help:
Alternatively, also look for support on Stackoverflow where questions are tagged with "substrate" or on the Parity Subport repo.
Use the following links to explore the sites and resources available on each:
Substrate Developer Hub has the most comprehensive all-round coverage about Substrate, from a "big picture" explanation of architecture to specific technical concepts. The site also provides tutorials to guide you as your learn the Substrate framework and the API reference documentation. You should check this site first if you want to look up information about Substrate runtime development. The site consists of:
Knowledge Base: Explaining the foundational concepts of building blockchain runtimes using Substrate.
Tutorials: Hand-on tutorials for developers to follow. The first SIX tutorials show the fundamentals in Substrate and are recommended for every Substrate learner to go through.
How-to Guides: These resources are like the O'Reilly cookbook series, written in a task-oriented way for readers to get the job done. Some examples of the topics covered include:
API docs: Substrate API reference documentation.
Substrate Node Template provides a lightweight, minimal Substrate blockchain node that you can set up as a local development environment.
Substrate Front-end template provides a front-end interface built with React using Polkadot-JS API to connect to any Substrate node. Developers are encouraged to start new Substrate projects based on these templates.
If you face any technical difficulties and need support, feel free to join the Substrate Technical matrix channel and ask your questions there.
Polkadot Wiki documents the specific behavior and mechanisms of the Polkadot network. The Polkadot network allows multiple blockchains to connect and pass messages to each other. On the wiki, you can learn about how Polkadot—built using Substrate—is customized to support inter-blockchain message passing.
Polkadot JS API doc: documents how to use the Polkadot-JS API. This JavaScript-based API allows developers to build custom front-ends for their blockchains and applications. Polkadot JS API provides a way to connect to Substrate-based blockchains to query runtime metadata and send transactions.
👉 Submit your answers to Quiz #1
Here you will set up your local machine to install the Rust compiler—ensuring that you have both stable and nightly versions installed. Both stable and nightly versions are required because currently a Substrate runtime is compiled to a native binary using the stable Rust compiler, then compiled to a WebAssembly (WASM) binary, which only the nightly Rust compiler can do.
Also refer to:
👉 Complete Lab #1: Run a Substrate node
Polkadot JS Apps is the canonical front-end to interact with any Substrate-based chain.
You can configure whichever endpoint you want it to connect to, even your locally running node. Refer to the following two diagrams.
👉 Complete Quiz #2
👉 Complete Lab #2: Using Polkadot-JS Apps
Notes: If you are connecting Apps to a custom chain (or your locally-running node), you may need to specify your chain's custom data types in JSON under Settings > Developer.
Polkadot-JS Apps only receives a series of bytes from the blockchain. It is up to the developer to tell it how to decode and interpret these custom data types. To learn more about this, refer to:
You will also need to create an account. To do so, follow these steps on account generation. You'll learn that you can also use the Polkadot-JS Browser Plugin (a MetaMask-like browser extension for managing your Substrate accounts), and accounts created there will automatically be imported into Polkadot-JS Apps.
Notes: When you run a Substrate chain in development mode (with the `--dev` flag), well-known accounts (`Alice`, `Bob`, `Charlie`, etc.) are always created for you.
👉 Complete Lab #3: Create an Account
You need to know some Rust programming concepts and have a good understanding of how blockchain technology works in order to make the most of developing with Substrate. The following resources will help you brush up on these areas.
You will need to familiarize yourself with Rust to understand how Substrate is built and how to make the most of its capabilities.
If you are new to Rust, or need to brush up on your Rust knowledge, please refer to The Rust Book. You can continue learning about Substrate without knowing Rust, but we recommend coming back to this section whenever you are unsure what a piece of Rust syntax means. Here are the parts of the Rust book we recommend you familiarize yourself with:
Given that you'll be writing a blockchain runtime, you need to know what a blockchain is and how it works. The Web3 Blockchain Fundamentals MOOC YouTube video series provides a good basis for understanding key blockchain concepts and how blockchains work.
The lectures we recommend you watch are: lectures 1 - 7 and lecture 10. That's 8 lectures, or about 4 hours of video.
👉 Complete Quiz #3
To know more about the high level architecture of Substrate, please go through the Knowledge Base articles on Getting Started: Overview and Getting Started: Architecture.
In this document, we assume you will develop a Substrate runtime with FRAME (v2). This is what a Substrate node consists of.
Each node has many components that manage things like the transaction queue, communicating over a P2P network, reaching consensus on the state of the blockchain, and the chain's actual runtime logic (aka the blockchain runtime). Each aspect of the node is interesting in its own right, and the runtime is particularly interesting because it contains the business logic (aka "state transition function") that codifies the chain's functionality. The runtime contains a collection of pallets that are configured to work together.
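To make "a collection of pallets configured to work together" concrete, the following excerpt is a minimal sketch of how pallets are composed into a runtime with FRAME's `construct_runtime!` macro. It is not a complete runtime: it assumes the surrounding type definitions (`Block`, `opaque::Block`, `UncheckedExtrinsic`) and pallet configurations from the Substrate Node Template, and the `TemplateModule` / `pallet_template` names are illustrative placeholders for a pallet you would write yourself.

```rust
// Sketch only (FRAME v2, node-template era): composing pallets into a runtime.
// The Block / UncheckedExtrinsic types and each pallet's Config impl are assumed
// to be defined elsewhere in the runtime crate, as in the Substrate Node Template.
construct_runtime!(
    pub enum Runtime where
        Block = Block,
        NodeBlock = opaque::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        // Core system pallet, required by every FRAME runtime.
        System: frame_system::{Pallet, Call, Config, Storage, Event<T>},
        // Accounts and balances.
        Balances: pallet_balances::{Pallet, Call, Storage, Config<T>, Event<T>},
        // A hypothetical application-specific pallet you would write yourself.
        TemplateModule: pallet_template::{Pallet, Call, Storage, Event<T>},
    }
);
```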
On the node level, Substrate leverages libp2p for the p2p networking layer and puts the transaction pool, consensus mechanism, and underlying data storage (a key-value database) on the node level. These components all work "under the hood", and in this knowledge map we won't cover them in detail except for mentioning their existence.
👉 Complete Quiz #4
Our Developer Hub has thorough coverage of the subjects you need to know to develop with Substrate, so here we just list the key topics and link back to the Developer Hub. Please go through the following key concepts and the linked resources to learn the fundamentals of runtime development.
Key Concept: Runtime, this is where the blockchain state transition function (the blockchain's application-specific logic) is defined. It is about composing multiple pallets (which can be understood as Rust modules) in the runtime and hooking them up to work together.
Runtime Development: Execution, this article describes how a block is produced, and how transactions are selected and executed to reach the next "stage" in the blockchain.
Runtime Development: Pallets, this article describes the basic structure of a Substrate pallet.
Runtime Development: FRAME, this article gives a high level overview of the system pallets Substrate already implements to help you quickly develop as a runtime engineer. Have a quick skim so you have a basic idea of the different pallets Substrate is made of.
👉 Complete Lab #4: Adding a Pallet into a Runtime
Runtime Development: Storage, this article describes how data is stored on-chain and how you can access it.
Runtime Development: Events & Errors, this page describes how external parties learn what has happened on the blockchain, via the events and errors emitted when executing transactions.
Notes: All of the above concepts are defined in code using the `#[pallet::*]` macros. If you are interested in learning about the other pallet macros that exist, go to the FRAME macro API documentation and this doc on some frequently used Substrate macros.
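To tie the concepts above together, here is a minimal sketch of a FRAME (v2) pallet showing how storage, events, errors, and dispatchable calls are declared with the `#[pallet::*]` macros. It is modeled on the pallet template; the `Something` storage item, the `do_something` call, and the 1,000 limit are hypothetical examples, not part of any real pallet.

```rust
// Sketch only: a minimal FRAME (v2) pallet illustrating the #[pallet::*] macros.
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// Events emitted by this pallet must convert into the runtime-wide event type.
        type Event: From<Event<Self>> + IsType<<Self as frame_system::Config>::Event>;
    }

    #[pallet::pallet]
    #[pallet::generate_store(pub(super) trait Store)]
    pub struct Pallet<T>(_);

    /// A single on-chain storage value (see "Runtime Development: Storage").
    #[pallet::storage]
    pub type Something<T> = StorageValue<_, u32>;

    /// Events let external parties know what happened (see "Events & Errors").
    #[pallet::event]
    #[pallet::generate_deposit(pub(super) fn deposit_event)]
    pub enum Event<T: Config> {
        /// A value was stored by the given account.
        SomethingStored(u32, T::AccountId),
    }

    #[pallet::error]
    pub enum Error<T> {
        /// The submitted value exceeded the (hypothetical) allowed maximum.
        ValueTooLarge,
    }

    /// Dispatchable calls: the extrinsics users can submit to this pallet.
    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[pallet::weight(10_000)]
        pub fn do_something(origin: OriginFor<T>, value: u32) -> DispatchResult {
            // Only signed transactions may call this.
            let who = ensure_signed(origin)?;
            if value >= 1_000 {
                return Err(Error::<T>::ValueTooLarge.into());
            }
            // Write to storage and emit an event.
            Something::<T>::put(value);
            Self::deposit_event(Event::SomethingStored(value, who));
            Ok(())
        }
    }
}
```

In a real pallet you would also write unit tests and, eventually, benchmarks to derive accurate weights instead of the hard-coded `10_000` used here.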
👉 Complete Lab #5: Building a Proof-of-Existence dApp
👉 Complete Lab #6: Building a Substrate Kitties dApp
👉 Complete Quiz #5
Polkadot JS API is the JavaScript API for Substrate. Using it, you can build a JavaScript front-end or utility and interact with any Substrate-based blockchain.
The Substrate Front-end Template is an example of using Polkadot JS API in a React front-end.
👉 Complete Lab #7: Using Polkadot-JS API
👉 Complete Quiz #6: Using Polkadot-JS API
Learn about the differences between smart contract development and Substrate runtime development, and when to use each, here.
In Substrate, you can program smart contracts using ink!.
👉 Complete Quiz #7: Using ink!
A lot 😄
On-chain runtime upgrades. We have a tutorial on On-chain (forkless) Runtime Upgrade, which shows how to perform and schedule a runtime upgrade as an on-chain transaction.
Transaction weights and fees, and benchmarking your runtime to determine appropriate transaction costs.
There are certain limits to on-chain logic. For instance, computation cannot be so intensive that it affects block production time, and it must be deterministic, which means computation that relies on external data fetching cannot be done on-chain. In Substrate, developers can run these types of computation off-chain and have the result sent back on-chain via an extrinsic (see the off-chain worker sketch after this list).
Tightly- and Loosely-coupled pallets, calling one pallet's functions from another pallet via trait specification.
Blockchain Consensus Mechanism, and a guide on customizing it to proof-of-work here.
Parachains: one key feature of Substrate is the capability of becoming a parachain for relay chains like Polkadot. You can develop your own application-specific logic in your chain and rely on the validator community of the relay chain to secure your network, instead of building another validator community yourself. Learn more with the following resources:
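As a taste of the off-chain computation mentioned above, here is a minimal sketch of the off-chain worker hook as it would appear inside a `#[frame_support::pallet]` module like the pallet sketch earlier. It assumes the `log` crate is available as a pallet dependency; the actual data fetching and the submission of results back on-chain (via signed or unsigned extrinsics) are only indicated in comments and are covered in the linked resources.

```rust
// Sketch only: an off-chain worker hook inside a FRAME (v2) pallet module.
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
    // Runs off-chain after each block import; its results are not part of consensus.
    fn offchain_worker(block_number: BlockNumberFor<T>) {
        // The `log` crate is assumed as a dependency here.
        log::info!("off-chain worker running at block {:?}", block_number);
        // A real worker would typically fetch external data here (for example over
        // HTTP using the off-chain HTTP API) and then submit the result back
        // on-chain as a signed or unsigned extrinsic.
    }
}
```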
Author: substrate-developer-hub
Source Code: https://github.com/substrate-developer-hub/hackathon-knowledge-map
License: