CodeIgniter already has a pagination library with which you can easily add pagination to your pages.
In this tutorial, I show how you can create pagination with a search filter in a CodeIgniter 4 project.
Open the .env file, which is available at the project root. NOTE: if there is no dot (.) at the start, rename the file to .env.
Update the database configuration:
database.default.hostname = 127.0.0.1
database.default.database = testdb
database.default.username = root
database.default.password =
database.default.DBDriver = MySQLi
I founded this project because I wanted to publish the code I had written over the previous two years while trying to add enhanced checking to PostgreSQL upstream. That effort was not fully successful - integration into upstream requires some larger plpgsql refactoring, which probably will not happen in the next few years (as of Dec 2013). But the code is fully functional and can be used in production (and it is used in production). So I created this extension to make it available to all plpgsql developers.
If you like it and would like to join the development of this extension, register with the postgresql extension hacking Google group.
Features
Ideas, patches, and bug reports are welcome.
plpgsql_check is the next generation of plpgsql_lint. It allows you to check source code with an explicit call to plpgsql_check_function.
PostgreSQL 10, 11, 12, 13, and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by a validator for semantic errors. These errors can be found with plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is a bug - table t1 is missing column "c"
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6 RAISE NOTICE '%', r.c; -- there is a bug - table t1 is missing column "c"
7 END LOOP;
8 END;
9 $function$
The function plpgsql_check_function() has three possible output formats: text, json, or xml.
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</Level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
You can set the level of warnings via the function's parameters:
The checked function can be specified as 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is to use only the name, when the function's name is unique - like 'fx'. When the name is not unique or the function doesn't exist, an error is raised.
relid DEFAULT 0 - the oid of the relation assigned to a trigger function. It is necessary for checking any trigger function.
fatal_errors boolean DEFAULT true - stop on the first error
other_warnings boolean DEFAULT true - show warnings like a different number of attributes on the left and right sides of an assignment, a variable overlapping a function parameter, unused variables, unwanted casting, ...
extra_warnings boolean DEFAULT true - show warnings like a missing RETURN, shadowed variables, dead code, never-read (unused) function parameters, unmodified variables, modified auto variables, ...
performance_warnings boolean DEFAULT false - performance-related warnings like a declared type with a type modifier, casting, implicit casts in a WHERE clause (which can be the reason an index is not used), ...
security_warnings boolean DEFAULT false - security-related checks like SQL injection vulnerability detection
anyelementtype regtype DEFAULT 'int' - the real type used instead of the anyelement type
anyenumtype regtype DEFAULT '-' - the real type used instead of the anyenum type
anyrangetype regtype DEFAULT 'int4range' - the real type used instead of the anyrange type
anycompatibletype DEFAULT 'int' - the real type used instead of the anycompatible type
anycompatiblerangetype DEFAULT 'int4range' - the real type used instead of the anycompatible range type
without_warnings DEFAULT false - disable all warnings
all_warnings DEFAULT false - enable all warnings
newtable DEFAULT NULL, oldtable DEFAULT NULL - the names of the NEW and OLD transition tables. These parameters are required when transition tables are used.
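For example, a check that does not stop on the first error and also reports performance warnings might look like this (a sketch using the parameters above with the f1() function from earlier):
select * from plpgsql_check_function('f1()', fatal_errors := false, performance_warnings := true);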
When you want to check a trigger, you have to provide the relation that will be used together with the trigger function:
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transition tables, you can set the oldtable or newtable parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use plpgsql_check_function to mass-check functions and mass-check triggers. Try the following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
In passive mode, functions are checked on start - the plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
The default mode is by_function, which means the enhanced check is done only in active mode - by calling plpgsql_check_function. fresh_start means cold start.
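Passive mode can also be configured server-wide in postgresql.conf; a minimal sketch (assuming plpgsql_check is added to shared_preload_libraries, as shown in the Profiler section below):
shared_preload_libraries = 'plpgsql,plpgsql_check'
plpgsql_check.mode = 'fresh_start'
plpgsql_check.fatal_errors = yes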
Alternatively, you can enable passive mode within a session:
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
Limits
plpgsql_check should find almost all errors in truly static code. When a developer uses PL/pgSQL's dynamic features, such as dynamic SQL or the record data type, false positives are possible. They should be rare in well-written code; when they occur, the affected function should be redesigned, or plpgsql_check should be disabled for that function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
Using plpgsql_check adds a small overhead (when passive mode is enabled), so you should use it only in development or preproduction environments.
This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries, so plpgsql_check cannot set the correct type for record variables and cannot check the dependent SQL statements and expressions.
When the type of a record variable is not known, you can assign it explicitly with the type pragma:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. This check can also raise false alarms - typically when a variable is sanitized by another command or when the value is of some composite type.
plpgsql_check cannot detect the structure of referenced cursors. A cursor reference in PL/pgSQL is implemented as the name of a global cursor. At check time the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis: PL/pgSQL cannot set the correct type for record variables and cannot check dependent SQL statements and expressions. The solution is the same as for dynamic SQL - don't use a record variable as a target when you use the refcursor type, or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case, a record type should not be used (use a known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created at runtime inside a plpgsql function. For this use case, it is necessary to create a fake temp table or disable plpgsql_check for the function.
In reality, temp tables are stored in their own (per-user) schema with higher priority than persistent tables, so you can safely use the following trick:
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=#
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick partially emulates GLOBAL TEMP tables, and it allows static validation. Another possibility is to use the template foreign data wrapper (https://github.com/okbob/template_fdw).
You can also use the table pragma and create an ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
The function plpgsql_show_dependency_tb can show all functions, operators, and relations used inside the processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(5 rows)
Profiler
plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory, depending on the shared_preload_libraries configuration. When plpgsql_check is initialized via shared_preload_libraries, it can allocate shared memory, and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.
Due to dependencies, shared_preload_libraries should contain plpgsql first:
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
The profiler is active when the GUC plpgsql_check.profiler is on. The profiler doesn't require shared memory, but without it the profile is limited to the active session.
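A sketch of collecting a profile in the current session only (fx is the example function used elsewhere in this document):
load 'plpgsql_check';
set plpgsql_check.profiler to on;
select fx(10); -- run the function so its profile is collected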
When plpgsql_check is initialized via shared_preload_libraries, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks. This defines the maximum number of statement chunks that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundreds of thousands of plpgsql statements, and will consume about 24MB of memory. If your project doesn't require that many chunks, you can set this parameter to a smaller number to decrease memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.
The profiler will also retrieve the query identifier of each instruction that contains an expression or an optimizable statement. Note that this requires pg_stat_statements (or another similar third-party extension) to be installed. There are some limitations to query identifier retrieval.
Attention: An update of shared profiles can decrease performance on servers under higher load.
The profile can be displayed with the function plpgsql_profiler_function_tb:
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│ 1 │ │ │
│ 2 │ │ declare result int = 0; │
│ 3 │ 0.075 │ begin │
│ 4 │ 0.202 │ for i in 1..$1 loop │
│ 5 │ 0.005 │ select result + i into result; select result + i into result; │
│ 6 │ │ end loop; │
│ 7 │ 0 │ return result; │
│ 8 │ │ end; │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)
The per-statement profile (rather than per-line) can be displayed with the function plpgsql_profiler_function_statements_tb:
CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$
postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│ 0 │ ∅ │ ∅ │ 2 │ 0 │ statement block │
│ 1 │ 0 │ body │ 3 │ 0 │ IF │
│ 2 │ 1 │ then body │ 4 │ 0 │ RAISE │
│ 3 │ 1 │ then body │ 5 │ 0 │ RETURN │
│ 4 │ 1 │ else body │ 7 │ 0 │ RAISE │
│ 5 │ 1 │ else body │ 8 │ 0 │ RETURN │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)
All stored profiles can be displayed by calling the function plpgsql_profiler_functions_all:
postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │ 1 │ 0.01 │ 0.01 │ 0.00 │ 0.01 │ 0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)
There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all() and plpgsql_profiler_reset(regprocedure).
For coverage metrics, plpgsql_check provides two functions:
plpgsql_coverage_statements(name)
plpgsql_coverage_branches(name)
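A usage sketch, assuming profiles have already been collected for the fx(int) function from the profiler examples:
select plpgsql_coverage_statements('fx(int)');
select plpgsql_coverage_branches('fx(int)');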
There is another very good PL/pgSQL profiler - https://bitbucket.org/openscg/plprofiler
My extension is designed to be simple to use and practical. Nothing more, nothing less.
plprofiler is more complex. It builds call graphs, and from these graphs it can create flame graphs of execution times.
Both extensions can be used together with the built-in PostgreSQL feature of tracking functions:
set track_functions to 'pl';
...
select * from pg_stat_user_functions;
Tracer
plpgsql_check provides a tracing facility - in this mode you can see notices at the start and end of functions (terse and default verbosity) and at the start and end of statements (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables is displayed when the verbosity is verbose.
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4 ->> start of function fx(integer) (Oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4 <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)
The number after # is an execution frame counter (this number is related to the depth of the error context stack). It allows you to pair the start and end of a function.
Tracing is enabled by setting plpgsql_check.tracer to on. Attention: enabling this behaviour has a significant negative impact on performance (unlike the profiler). You can set the level used for tracer output with plpgsql_check.tracer_errlevel (the default is notice). The output content is limited to the length specified by the plpgsql_check.tracer_variable_max_length configuration variable.
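A sketch of a session-level tracer setup (plpgsql_check.enable_tracer requires superuser rights, as noted at the end of this section):
set plpgsql_check.enable_tracer to on; -- superuser only
set plpgsql_check.tracer to on;
set plpgsql_check.tracer_errlevel to notice;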
In terse verbosity mode the output is reduced:
postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block
In verbose mode the output is extended with statement details:
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of block inline_code_block (oid=0)
NOTICE: #0.1 1 --> start of PERFORM
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1 1 --> start of PERFORM
NOTICE: #2.1 "a" => '10'
NOTICE: #4 ->> start of function fx(integer) (oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4.1 6 --> start of assignment
NOTICE: #4.1 "a" => '10', "b" => '20'
NOTICE: #4.1 <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1 "res" => '130'
NOTICE: #4.2 7 --> start of RETURN
NOTICE: #4.2 "res" => '130'
NOTICE: #4.2 <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4 <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1 <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1 <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0 <<- end of block (elapsed time=1.286 ms)
A special feature of the tracer is tracing of the ASSERT statement when plpgsql_check.trace_assert is on. When plpgsql_check.trace_assert_verbosity is DEFAULT, all variables of the function or procedure are displayed when the assert expression is false. When this configuration is VERBOSE, all variables from all plpgsql frames are displayed. This behaviour is independent of the plpgsql.check_asserts value, so it can be used even when assertions are disabled in the plpgsql runtime.
postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR: assertion failed
CONTEXT: PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM
postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO
The tracer prints the content of variables and function arguments. For a SECURITY DEFINER function, this content can hold security-sensitive data. This is the reason why the tracer is disabled by default and can be enabled only with superuser rights via plpgsql_check.enable_tracer.
Pragma
You can configure plpgsql_check's behaviour inside a checked function with the "pragma" function. This is an analogy of the PRAGMA feature of the PL/SQL and Ada languages. PL/pgSQL doesn't support PRAGMA, but plpgsql_check detects a function named plpgsql_check_pragma and reads options from the parameters of this function. These plpgsql_check options are valid until the end of the group of statements.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
...
-- for following statements disable check
PERFORM plpgsql_check_pragma('disable:check');
...
-- enable check again
PERFORM plpgsql_check_pragma('enable:check');
...
END;
$$ LANGUAGE plpgsql;
The function plpgsql_check_pragma is an immutable function that returns one. It is defined by the plpgsql_check extension. You can declare an alternative plpgsql_check_pragma function (useful where the extension is not installed) like:
CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args text[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;
Using the pragma function in the declaration part of the top block sets options at the function level too.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
aux int := plpgsql_check_pragma('disable:extra_warnings');
...
A shorter syntax for pragmas is supported too:
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
PERFORM 'PRAGMA:TYPE:r (a int, b int)';
PERFORM 'PRAGMA:TABLE: x (like pg_class)';
...
Supported pragmas:
echo:str - print a string (for testing)
status:check, status:tracer, status:other_warnings, status:performance_warnings, status:extra_warnings, status:security_warnings
enable:check, enable:tracer, enable:other_warnings, enable:performance_warnings, enable:extra_warnings, enable:security_warnings
disable:check, disable:tracer, disable:other_warnings, disable:performance_warnings, disable:extra_warnings, disable:security_warnings
type:varname typename or type:varname (fieldname type, ...) - set the type of a record variable
table: name (column_name type, ...) or table: name (like tablename) - create an ephemeral table
Pragmas enable:tracer and disable:tracer are active for Postgres 12 and higher.
Compilation
You need a development environment for PostgreSQL extensions:
make clean
make install
result:
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok
=====================
All 3 tests passed.
=====================
Sometimes a successful compilation may require the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):
sudo apt install libicu-dev
On Windows, you can use precompiled DLL libraries (http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html) or compile them yourself. Then copy:
plpgsql_check.dll to PostgreSQL\14\lib
plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension
Compilation against PostgreSQL 10 requires libICU!
Licence
Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
If you like it, send a postcard to address
Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic
I welcome any questions, comments, bug reports, and patches at pavel.stehule@gmail.com
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: MIT (full text above)
1648803600
I founded this project, because I wanted to publish the code I wrote in the last two years, when I tried to write enhanced checking for PostgreSQL upstream. It was not fully successful - integration into upstream requires some larger plpgsql refactoring - probably it will not be done in next years (now is Dec 2013). But written code is fully functional and can be used in production (and it is used in production). So, I created this extension to be available for all plpgsql developers.
If you like it and if you would to join to development of this extension, register yourself to postgresql extension hacking google group.
Features
I invite any ideas, patches, bugreports.
plpgsql_check is next generation of plpgsql_lint. It allows to check source code by explicit call plpgsql_check_function.
PostgreSQL PostgreSQL 10, 11, 12, 13 and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by validator for semantic errors. These errors can be found by plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6 RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7 END LOOP;
8 END;
9 $function$
Function plpgsql_check_function() has three possible formats: text, json or xml
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
You can set level of warnings via function's parameters:
'fx()'::regprocedure
or 16799::regprocedure
. Possible alternative is using a name only, when function's name is unique - like 'fx'
. When the name is not unique or the function doesn't exists it raises a error.relid DEFAULT 0
- oid of relation assigned with trigger function. It is necessary for check of any trigger function.
fatal_errors boolean DEFAULT true
- stop on first error
other_warnings boolean DEFAULT true
- show warnings like different attributes number in assignmenet on left and right side, variable overlaps function's parameter, unused variables, unwanted casting, ..
extra_warnings boolean DEFAULT true
- show warnings like missing RETURN
, shadowed variables, dead code, never read (unused) function's parameter, unmodified variables, modified auto variables, ..
performance_warnings boolean DEFAULT false
- performance related warnings like declared type with type modificator, casting, implicit casts in where clause (can be reason why index is not used), ..
security_warnings boolean DEFAULT false
- security related checks like SQL injection vulnerability detection
anyelementtype regtype DEFAULT 'int'
- a real type used instead anyelement type
anyenumtype regtype DEFAULT '-'
- a real type used instead anyenum type
anyrangetype regtype DEFAULT 'int4range'
- a real type used instead anyrange type
anycompatibletype DEFAULT 'int'
- a real type used instead anycompatible type
anycompatiblerangetype DEFAULT 'int4range'
- a real type used instead anycompatible range type
without_warnings DEFAULT false
- disable all warnings
all_warnings DEFAULT false
- enable all warnings
newtable DEFAULT NULL
, oldtable DEFAULT NULL
- the names of NEW or OLD transitive tables. These parameters are required when transitive tables are used.
When you want to check any trigger, you have to enter a relation that will be used together with trigger function
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transitive tables you can set a oldtable
or newtable
parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use the plpgsql_check_function for mass check functions and mass check triggers. Please, test following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
Functions should be checked on start - plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
Default mode is by_function, that means that the enhanced check is done only in active mode - by plpgsql_check_function. fresh_start
means cold start.
You can enable passive mode by
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
Limits
plpgsql_check should find almost all errors on really static code. When developer use some PLpgSQL's dynamic features like dynamic SQL or record data type, then false positives are possible. These should be rare - in well written code - and then the affected function should be redesigned or plpgsql_check should be disabled for this function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
A usage of plpgsql_check adds a small overhead (in enabled passive mode) and you should use it only in develop or preprod environments.
This module doesn't check queries that are assembled in runtime. It is not possible to identify results of dynamic queries - so plpgsql_check cannot to set correct type to record variables and cannot to check a dependent SQLs and expressions.
When type of record's variable is not know, you can assign it explicitly with pragma type
:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for security audit! Some issues should not be detected. This check can raise false alarms too - probably when variable is sanitized by other command or when value is of some compose type.
plpgsql_check should not to detect structure of referenced cursors. A reference on cursor in PLpgSQL is implemented as name of global cursor. In check time, the name is not known (not in all possibilities), and global cursor doesn't exist. It is significant break for any static analyse. PLpgSQL cannot to set correct type for record variables and cannot to check a dependent SQLs and expressions. A solution is same like dynamic SQL. Don't use record variable as target when you use refcursor type or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case a record type should not be used (use known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created in plpgsql's function runtime. For this use case it is necessary to create a fake temp table or disable plpgsql_check for this function.
In reality temp tables are stored in own (per user) schema with higher priority than persistent tables. So you can do (with following trick safetly):
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=#
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick emulates GLOBAL TEMP tables partially and it allows a statical validation. Other possibility is using a [template foreign data wrapper] (https://github.com/okbob/template_fdw)
You can use pragma table
and create ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
A function plpgsql_show_dependency_tb can show all functions, operators and relations used inside processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(4 rows)
Profiler
The plpgsql_check contains simple profiler of plpgsql functions and procedures. It can work with/without a access to shared memory. It depends on shared_preload_libraries
config. When plpgsql_check was initialized by shared_preload_libraries
, then it can allocate shared memory, and function's profiles are stored there. When plpgsql_check cannot to allocate shared momory, the profile is stored in session memory.
Due dependencies, shared_preload_libraries
should to contains plpgsql
first
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
The profiler is active when GUC plpgsql_check.profiler
is on. The profiler doesn't require shared memory, but if there are not shared memory, then the profile is limmitted just to active session.
When plpgsql_check is initialized by shared_preload_libraries
, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks
. This defines the maximum number of statements chunk that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundred of thousands of statements in plpgsql, and will consume about 24MB of memory. If your project doesn't require that much number of chunks, you can set this parameter to a smaller number in order to decrease the memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.
The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements, or another similar third-party extension), to be installed. There are some limitations to the query identifier retrieval:
Attention: A update of shared profiles can decrease performance on servers under higher load.
The profile can be displayed by function plpgsql_profiler_function_tb
:
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│ 1 │ │ │
│ 2 │ │ declare result int = 0; │
│ 3 │ 0.075 │ begin │
│ 4 │ 0.202 │ for i in 1..$1 loop │
│ 5 │ 0.005 │ select result + i into result; select result + i into result; │
│ 6 │ │ end loop; │
│ 7 │ 0 │ return result; │
│ 8 │ │ end; │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)
The profile per statements (not per line) can be displayed by function plpgsql_profiler_function_statements_tb:
CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$
postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│ 0 │ ∅ │ ∅ │ 2 │ 0 │ statement block │
│ 1 │ 0 │ body │ 3 │ 0 │ IF │
│ 2 │ 1 │ then body │ 4 │ 0 │ RAISE │
│ 3 │ 1 │ then body │ 5 │ 0 │ RETURN │
│ 4 │ 1 │ else body │ 7 │ 0 │ RAISE │
│ 5 │ 1 │ else body │ 8 │ 0 │ RETURN │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)
All stored profiles can be displayed by calling function plpgsql_profiler_functions_all
:
postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │ 1 │ 0.01 │ 0.01 │ 0.00 │ 0.01 │ 0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)
There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all()
and plpgsql_profiler_reset(regprocedure)
.
plpgsql_check provides two functions:
plpgsql_coverage_statements(name)
plpgsql_coverage_branches(name)
There is another very good PLpgSQL profiler - https://bitbucket.org/openscg/plprofiler
My extension is designed to be simple for use and practical. Nothing more or less.
plprofiler is more complex. It build call graphs and from this graph it can creates flame graph of execution times.
Both extensions can be used together with buildin PostgreSQL's feature - tracking functions.
set track_functions to 'pl';
...
select * from pg_stat_user_functions;
Tracer
plpgsql_check provides a tracing possibility - in this mode you can see notices on start or end functions (terse and default verbosity) and start or end statements (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables are displayed when verbosity is verbose.
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4 ->> start of function fx(integer) (Oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4 <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)
The number after #
is a execution frame counter (this number is related to deep of error context stack). It allows to pair start end and of function.
Tracing is enabled by setting plpgsql_check.tracer
to on
. Attention - enabling this behaviour has significant negative impact on performance (unlike the profiler). You can set a level for output used by tracer plpgsql_check.tracer_errlevel
(default is notice
). The output content is limited by length specified by plpgsql_check.tracer_variable_max_length
configuration variable.
In terse verbose mode the output is reduced:
postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block
In verbose mode the output is extended about statement details:
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of block inline_code_block (oid=0)
NOTICE: #0.1 1 --> start of PERFORM
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1 1 --> start of PERFORM
NOTICE: #2.1 "a" => '10'
NOTICE: #4 ->> start of function fx(integer) (oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4.1 6 --> start of assignment
NOTICE: #4.1 "a" => '10', "b" => '20'
NOTICE: #4.1 <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1 "res" => '130'
NOTICE: #4.2 7 --> start of RETURN
NOTICE: #4.2 "res" => '130'
NOTICE: #4.2 <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4 <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1 <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1 <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0 <<- end of block (elapsed time=1.286 ms)
Special feature of tracer is tracing of ASSERT
statement when plpgsql_check.trace_assert
is on
. When plpgsql_check.trace_assert_verbosity
is DEFAULT
, then all function's or procedure's variables are displayed when assert expression is false. When this configuration is VERBOSE
then all variables from all plpgsql frames are displayed. This behaviour is independent on plpgsql.check_asserts
value. It can be used, although the assertions are disabled in plpgsql runtime.
postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR: assertion failed
CONTEXT: PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM
postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO
Tracer prints content of variables or function arguments. For security definer function, this content can hold security sensitive data. This is reason why tracer is disabled by default and should be enabled only with super user rights plpgsql_check.enable_tracer
.
Pragma
You can configure plpgsql_check behave inside checked function with "pragma" function. This is a analogy of PL/SQL or ADA language of PRAGMA feature. PLpgSQL doesn't support PRAGMA, but plpgsql_check detects function named plpgsql_check_pragma
and get options from parameters of this function. These plpgsql_check options are valid to end of group of statements.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
...
-- for following statements disable check
PERFORM plpgsql_check_pragma('disable:check');
...
-- enable check again
PERFORM plpgsql_check_pragma('enable:check');
...
END;
$$ LANGUAGE plpgsql;
The function plpgsql_check_pragma
is immutable function that returns one. It is defined by plpgsql_check
extension. You can declare alternative plpgsql_check_pragma
function like:
CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;
Using pragma function in declaration part of top block sets options on function level too.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
aux int := plpgsql_check_pragma('disable:extra_warnings');
...
Shorter syntax for pragma is supported too:
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
PERFORM 'PRAGMA:TYPE:r (a int, b int)';
PERFORM 'PRAGMA:TABLE: x (like pg_class)';
...
echo:str
- print string (for testing)
status:check
,status:tracer
, status:other_warnings
, status:performance_warnings
, status:extra_warnings
,status:security_warnings
enable:check
,enable:tracer
, enable:other_warnings
, enable:performance_warnings
, enable:extra_warnings
,enable:security_warnings
disable:check
,disable:tracer
, disable:other_warnings
, disable:performance_warnings
, disable:extra_warnings
,disable:security_warnings
type:varname typename
or type:varname (fieldname type, ...)
- set type to variable of record type
table: name (column_name type, ...)
or table: name (like tablename)
- create ephereal table
Pragmas enable:tracer
and disable:tracer
are active for Postgres 12 and higher
Compilation
You need a development environment for PostgreSQL extensions:
make clean
make install
result:
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok
=====================
All 3 tests passed.
=====================
Sometimes a successful compilation requires the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):
sudo apt install libicu-dev
You can use precompiled DLL libraries from http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html or compile them yourself. Copy plpgsql_check.dll to PostgreSQL\14\lib, and copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension.
Compilation against PostgreSQL 10 requires libICU!
Licence
Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
If you like it, send a postcard to address
Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic
I invite any questions, comments, bug reports, and patches at pavel.stehule@gmail.com.
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: MIT
1660147320
Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. Most real-world datasets contain thousands or even millions of entries with a wide variety of information, and it is practically impossible to make sense of them without a tool that presents the data in a short, readable format.
Most of the time we need to go through the data, manipulate it, and visualize it to get insights. There is a great library for this called pandas. The most frequent data manipulation operation is data filtering, which is very similar to the WHERE clause in SQL, or to the filters you may have used in MS Excel to select specific rows based on some conditions.
pandas is a powerful, flexible, open-source data analysis and manipulation tool. It is essentially a Python package that provides speed, flexibility, and expressive data structures crafted to work with "relational" or "labelled" data in an intuitive and easy manner. It is one of the most popular libraries for performing real-world data analysis in Python.
pandas is built on top of the NumPy library and aims to integrate well with the scientific computing environment and numerous other third-party libraries. It has two primary data structures, Series (1D) and DataFrames (2D), which cover the kind of data dealt with in many sectors of finance, scientific computing, engineering, and statistics.
Installing pandas
!pip install pandas
Importing the pandas library, reading our sample data file, and assigning it to the DataFrame df:
import pandas as pd
df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv")
Let’s check out our dataframe:
print(df.head())
Sample_data
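The examples below assume sample_data.csv is available. If it is not, a small hypothetical stand-in DataFrame with the same columns (inferred from the snippets that follow) lets you run every example:
#Hypothetical stand-in for sample_data.csv; column names are assumed from the filters below
import pandas as pd
df = pd.DataFrame({
    "Date": ["3/11/2021", "4/02/2021", "8/20/2021", "5/01/2021"],
    "Region": ["East", "West", "North", "South"],
    "Units": [15, 25, 30, 10],
    "Total_Sales": [250, 350, 400, 120],
})
print(df.head())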
Now that we have our DataFrame, we will be applying various methods to filter it.
We have a column named "Total_Sales" in our DataFrame, and we want to filter out all rows whose sales value is greater than 300.
#Filter a DataFrame for a single column value with a given condition
greater_than = df[df['Total_Sales'] > 300]
print(greater_than.head())
Sales greater than 300
Here we are filtering all rows whose "Total_Sales" value is greater than 300 and whose "Units" value is greater than 20. We have to use the Python operator &, which performs a bitwise AND, to combine the two conditions.
#Filter a DataFrame with multiple conditions
filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)]
print(filter_sales_units.head())
Filter on Sales and Units
We can also filter our DataFrame based on a certain date value; for example, here we are trying to get all the results after the date '03/10/21'.
#Filter a DataFrame based on specific date
date_filter = df[df['Date'] > '03/10/21']
print(date_filter.head())
Filter on Date
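Note that comparing date strings like this is lexicographic rather than chronological. A safer sketch, assuming the column holds month/day/year strings, parses the column with pd.to_datetime() first (without modifying df, so the later string-based examples still work):
#Parse the strings into real timestamps, then compare
dates = pd.to_datetime(df['Date'])
date_filter = df[dates > pd.Timestamp('2021-03-10')]
print(date_filter.head())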
Here we are getting all the results within a date range by combining multiple date conditions.
#Filter a DataFrame with multiple conditions on our Date value
date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] < '8/17/2021')]
print(date_filter2.head())
Filter on a date with multiple conditions
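As a more readable alternative, Series.between() expresses the same range in one call; inclusive='left' (available in pandas 1.3 and later) matches the >= / < pair above. This sketch reuses the parsed dates from the previous note:
#Half-open date range with between(): lower bound included, upper bound excluded
dates = pd.to_datetime(df['Date'])
mask = dates.between(pd.Timestamp('2021-03-25'), pd.Timestamp('2021-08-17'), inclusive='left')
print(df[mask].head())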
Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value.
#Filter a DataFrame to a specific string
east = df[df['Region'] == 'East']
print(east.head())
Filter based on a specific string
Here we are selecting a column called ‘Region’ and getting all the rows which has the letter ‘E’ as the first character i.e at index 0 in the specified column results.
#Filter a DataFrame to show rows starting with a specific letter
starting_with_e = df[df['Region'].str[0]== 'E']
print(starting_with_e.head())
Filter based on a specific letter
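An equivalent, arguably more explicit spelling uses .str.startswith():
#startswith() avoids indexing into string positions
starting_with_e = df[df['Region'].str.startswith('E')]
print(starting_with_e.head())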
Here we are filtering the rows in the column 'Region' that contain either 'West' or 'East' and displaying the combined result. Two methods can be used for this filtering: a pipe (|) operator with the desired set of values, as in the syntax below, or the .isin() function, which filters for the values of a given column, in our case 'Region', by taking the desired set of values as a list.
#Filter a DataFrame rows based on list of values
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())
Output of Method 2
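The same mask is easy to invert: prefixing it with ~ keeps every row whose value is not in the list, a sketch of the complementary filter:
#~ negates the boolean mask, keeping regions other than 'West' and 'East'
not_east_west = df[~df['Region'].isin(['West', 'East'])]
print(not_east_west.head())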
Here we want all the values in the column 'Region' whose string value ends with 'th', and to display them. In other words, we want our results to show the values 'North' and 'South' and ignore 'East' and 'West'. The method .str.contains() with the specified value and the $ regex anchor can be used to get the desired results.
For more information please check the Regex Documentation
#Filtering the DataFrame rows using regular expressions(REGEX)
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())
Filter based on REGEX
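If you would rather avoid regular expressions, .str.endswith() expresses the same filter directly:
#endswith() is the regex-free version of the 'th$' pattern
ends_with_th = df[df['Region'].str.endswith('th')]
print(ends_with_th.head())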
Here, we'll check for null and not-null values in all the columns with the help of the isnull() function.
#Filtering to check for null and not null values in all columns
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())
Filter based on NULL or NOT null values
#Filtering to check for null values if any in the 'Units' column
units_df = df[df['Units'].isnull()]
print(units_df.head())
Finding null values on specific columns
#Filtering to check for not null values in the 'Units' column
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())
Finding not-null values on specific columns
Filtering using the query() function with a condition
#Using query function in pandas
df_query = df.query('Total_Sales > 300')
print(df_query.head())
Filtering values with the query() function
Filtering using the query() function with multiple conditions
#Using query function with multiple conditions in pandas
df_query_1 = df.query('Total_Sales > 300 and Units < 18')
print(df_query_1.head())
Filtering multiple columns with the query() function
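query() can also reference local Python variables with the @ prefix, which keeps thresholds out of the query string; a small sketch with hypothetical variable names:
#@ pulls values from the surrounding Python scope into the query
threshold = 300
max_units = 18
df_query_2 = df.query('Total_Sales > @threshold and Units < @max_units')
print(df_query_2.head())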
Filtering using the loc and iloc functions
#Creating a sample DataFrame for illustrations
import numpy as np
data = pd.DataFrame({"col1": np.arange(1, 20, 2)}, index=[19, 18, 8, 6, 0, 1, 2, 3, 4, 5])
print(data)
sample_data
Explanation: iloc selects rows based on the integer position of the given index, so it takes only integers as values.
For more information please check out Pandas Documentation
#Filter with iloc
data.iloc[0 : 5]
Filter using iloc
Explanation: loc selects rows based on index labels.
#Filter with loc
data.loc[0 : 5]
Filter using loc
You might be wondering why the loc function returns 6 rows instead of 5. This is because loc does not select by index position; it considers only index labels, which can also be alphabetic, and its slices include both the starting and the ending label.
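Beyond label slicing, loc also accepts a boolean mask together with a column list, which combines row filtering and column selection in one step; a sketch using the sales df from earlier:
#Rows where Total_Sales > 300, but only the Region and Units columns
subset = df.loc[df['Total_Sales'] > 300, ['Region', 'Units']]
print(subset.head())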
So, these were some of the most common filtering methods used in pandas. Many other methods exist, but these are the ones you will reach for most often.
Link: https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe
#pandas #python #dataframe