Based on ant-design-vue, TypeScript, Vue 3.0, Vite, and TSX: a Vue 3 style admin (background) management system implementation, vue-vben-admin 2.0, main branch.
Test account: vben/123456
The documentation is continuously updated.
Node.js: the version should be greater than 12.0.0
yarn > npm > cnpm: package management tool
Git: version management tool
Visual Studio Code (VSCode): the latest version
// Make git sensitive to the case of file names
git config core.ignorecase false
// Pull the project code
git clone https://github.com/anncwb/vue-vben-admin.git vue-vben-admin-2.0
cd vue-vben-admin-2.0
// If you use another package management tool, install it yourself
// If yarn is not installed, run: npm install -g yarn
yarn install
yarn serve
yarn build ## build the package
yarn build:no-cache ## build after deleting the cache
yarn report ## generate a build report and preview it
yarn lint:stylelint ## style formatting
yarn lint:prettier ## js/ts code formatting
yarn reinstall ## remove dependencies and reinstall, compatible with Windows
yarn preview ## preview the local build
yarn log ## generate CHANGELOG
yarn clean:cache ## delete the cache
yarn clean:lib ## delete node_modules, compatible with Windows
feat: add new features
fix: fix a problem/BUG
style: code style, does not affect the running result
perf: optimization/performance improvement
refactor: refactor
revert: undo an edit
test: test related
docs: documentation/notes
chore: dependency update/scaffolding configuration modification, etc.
workflow: workflow improvements
ci: continuous integration
types: type definition file changes
wip: in development
git checkout -b feat/xxxx
git commit -am 'feat(function): add xxxxx'
git push origin feat/xxxx
Create a pull request.
More components, functions, suggestions, and bug reports are welcome - feel free to submit a PR or issue.
Author: anncwb
Demo: https://vvbin.cn/next/
Source Code: https://github.com/anncwb/vue-vben-admin
A full Python implementation of the ROUGE metric, producing the same results as the official Perl implementation.
Important remarks
Score differences with the official ROUGE-1.5.5 Perl script are below 3e-5 for ROUGE-L as well as ROUGE-W, and below 4e-5 for ROUGE-N (official script run with -b 665). In case of doubt, please see all the implemented tests to compare outputs between the official ROUGE-1.5.5 and this script.
The package is uploaded on PyPI: https://pypi.org/project/py-rouge
You can install it with pip:
pip install py-rouge
or do it manually:
git clone https://github.com/Diego999/py-rouge
cd py-rouge
python setup.py install
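After installation, a quick sanity check can be run (a minimal sketch: the two sample sentences are invented, and the constructor arguments mirror the full example below):

import rouge

# Minimal sanity check; the sentences are made-up examples
evaluator = rouge.Rouge(metrics=['rouge-n', 'rouge-l'],
                        max_n=2,
                        limit_length=True,
                        length_limit=100,
                        length_limit_type='words',
                        apply_avg=True,
                        apply_best=False,
                        alpha=0.5,
                        stemming=True)

scores = evaluator.get_scores(['the cat sat on the mat'],
                              ['the cat was sitting on the mat'])
print(scores['rouge-1'])  # a dict with 'p', 'r' and 'f' keys, as in the output below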
Issues/Pull Requests/Feedback
Don't hesitate to reach out with any feedback or to create issues/pull requests (especially if you want to rewrite the stemmer implemented in ROUGE-1.5.5 in Python ;)).
Example
import rouge

def prepare_results(m, p, r, f):
    return '\t{}:\t{}: {:5.2f}\t{}: {:5.2f}\t{}: {:5.2f}'.format(m, 'P', 100.0 * p, 'R', 100.0 * r, 'F1', 100.0 * f)

for aggregator in ['Avg', 'Best', 'Individual']:
    print('Evaluation with {}'.format(aggregator))
    apply_avg = aggregator == 'Avg'
    apply_best = aggregator == 'Best'

    evaluator = rouge.Rouge(metrics=['rouge-n', 'rouge-l', 'rouge-w'],
                            max_n=4,
                            limit_length=True,
                            length_limit=100,
                            length_limit_type='words',
                            apply_avg=apply_avg,
                            apply_best=apply_best,
                            alpha=0.5,  # Default F1_score
                            weight_factor=1.2,
                            stemming=True)

    hypothesis_1 = "King Norodom Sihanouk has declined requests to chair a summit of Cambodia 's top political leaders , saying the meeting would not bring any progress in deadlocked negotiations to form a government .\nGovernment and opposition parties have asked King Norodom Sihanouk to host a summit meeting after a series of post-election negotiations between the two opposition groups and Hun Sen 's party to form a new government failed .\nHun Sen 's ruling party narrowly won a majority in elections in July , but the opposition _ claiming widespread intimidation and fraud _ has denied Hun Sen the two-thirds vote in parliament required to approve the next government .\n"
    references_1 = ["Prospects were dim for resolution of the political crisis in Cambodia in October 1998.\nPrime Minister Hun Sen insisted that talks take place in Cambodia while opposition leaders Ranariddh and Sam Rainsy, fearing arrest at home, wanted them abroad.\nKing Sihanouk declined to chair talks in either place.\nA U.S. House resolution criticized Hun Sen's regime while the opposition tried to cut off his access to loans.\nBut in November the King announced a coalition government with Hun Sen heading the executive and Ranariddh leading the parliament.\nLeft out, Sam Rainsy sought the King's assurance of Hun Sen's promise of safety and freedom for all politicians.",
                    "Cambodian prime minister Hun Sen rejects demands of 2 opposition parties for talks in Beijing after failing to win a 2/3 majority in recent elections.\nSihanouk refuses to host talks in Beijing.\nOpposition parties ask the Asian Development Bank to stop loans to Hun Sen's government.\nCCP defends Hun Sen to the US Senate.\nFUNCINPEC refuses to share the presidency.\nHun Sen and Ranariddh eventually form a coalition at summit convened by Sihanouk.\nHun Sen remains prime minister, Ranariddh is president of the national assembly, and a new senate will be formed.\nOpposition leader Rainsy left out.\nHe seeks strong assurance of safety should he return to Cambodia.\n",
                    ]

    hypothesis_2 = "China 's government said Thursday that two prominent dissidents arrested this week are suspected of endangering national security _ the clearest sign yet Chinese leaders plan to quash a would-be opposition party .\nOne leader of a suppressed new political party will be tried on Dec. 17 on a charge of colluding with foreign enemies of China '' to incite the subversion of state power , '' according to court documents given to his wife on Monday .\nWith attorneys locked up , harassed or plain scared , two prominent dissidents will defend themselves against charges of subversion Thursday in China 's highest-profile dissident trials in two years .\n"
    references_2 = "Hurricane Mitch, category 5 hurricane, brought widespread death and destruction to Central American.\nEspecially hard hit was Honduras where an estimated 6,076 people lost their lives.\nThe hurricane, which lingered off the coast of Honduras for 3 days before moving off, flooded large areas, destroying crops and property.\nThe U.S. and European Union were joined by Pope John Paul II in a call for money and workers to help the stricken area.\nPresident Clinton sent Tipper Gore, wife of Vice President Gore to the area to deliver much needed supplies to the area, demonstrating U.S. commitment to the recovery of the region.\n"

    all_hypothesis = [hypothesis_1, hypothesis_2]
    all_references = [references_1, references_2]

    scores = evaluator.get_scores(all_hypothesis, all_references)

    for metric, results in sorted(scores.items(), key=lambda x: x[0]):
        if not apply_avg and not apply_best: # value is a type of list as we evaluate each summary vs each reference
            for hypothesis_id, results_per_ref in enumerate(results):
                nb_references = len(results_per_ref['p'])
                for reference_id in range(nb_references):
                    print('\tHypothesis #{} & Reference #{}: '.format(hypothesis_id, reference_id))
                    print('\t' + prepare_results(metric, results_per_ref['p'][reference_id], results_per_ref['r'][reference_id], results_per_ref['f'][reference_id]))
            print()
        else:
            print(prepare_results(metric, results['p'], results['r'], results['f']))
    print()
It produces the following output:
Evaluation with Avg
rouge-1: P: 28.62 R: 26.46 F1: 27.49
rouge-2: P: 4.21 R: 3.92 F1: 4.06
rouge-3: P: 0.80 R: 0.74 F1: 0.77
rouge-4: P: 0.00 R: 0.00 F1: 0.00
rouge-l: P: 30.52 R: 28.57 F1: 29.51
rouge-w: P: 15.85 R: 8.28 F1: 10.87
Evaluation with Best
rouge-1: P: 30.44 R: 28.36 F1: 29.37
rouge-2: P: 4.74 R: 4.46 F1: 4.59
rouge-3: P: 1.06 R: 0.98 F1: 1.02
rouge-4: P: 0.00 R: 0.00 F1: 0.00
rouge-l: P: 31.54 R: 29.71 F1: 30.60
rouge-w: P: 16.42 R: 8.82 F1: 11.47
Evaluation with Individual
Hypothesis #0 & Reference #0:
rouge-1: P: 38.54 R: 35.58 F1: 37.00
Hypothesis #0 & Reference #1:
rouge-1: P: 45.83 R: 43.14 F1: 44.44
Hypothesis #1 & Reference #0:
rouge-1: P: 15.05 R: 13.59 F1: 14.29
Hypothesis #0 & Reference #0:
rouge-2: P: 7.37 R: 6.80 F1: 7.07
Hypothesis #0 & Reference #1:
rouge-2: P: 9.47 R: 8.91 F1: 9.18
Hypothesis #1 & Reference #0:
rouge-2: P: 0.00 R: 0.00 F1: 0.00
Hypothesis #0 & Reference #0:
rouge-3: P: 2.13 R: 1.96 F1: 2.04
Hypothesis #0 & Reference #1:
rouge-3: P: 1.06 R: 1.00 F1: 1.03
Hypothesis #1 & Reference #0:
rouge-3: P: 0.00 R: 0.00 F1: 0.00
Hypothesis #0 & Reference #0:
rouge-4: P: 0.00 R: 0.00 F1: 0.00
Hypothesis #0 & Reference #1:
rouge-4: P: 0.00 R: 0.00 F1: 0.00
Hypothesis #1 & Reference #0:
rouge-4: P: 0.00 R: 0.00 F1: 0.00
Hypothesis #0 & Reference #0:
rouge-l: P: 42.11 R: 39.39 F1: 40.70
Hypothesis #0 & Reference #1:
rouge-l: P: 46.19 R: 43.92 F1: 45.03
Hypothesis #1 & Reference #0:
rouge-l: P: 16.88 R: 15.50 F1: 16.16
Hypothesis #0 & Reference #0:
rouge-w: P: 22.27 R: 11.49 F1: 15.16
Hypothesis #0 & Reference #1:
rouge-w: P: 24.56 R: 13.60 F1: 17.51
Hypothesis #1 & Reference #0:
rouge-w: P: 8.29 R: 4.04 F1: 5.43
Author: Diego999
Source Code: https://github.com/Diego999/py-rouge
License: Apache-2.0 license
A new Cumulus-based Substrate node, ready for hacking :cloud:
This project is a fork of the Substrate Node Template modified to include dependencies required for registering this node as a parathread or parachain to an established relay chain.
👉 Learn more about parachains here, and parathreads here.
Follow these steps to prepare a local Substrate development environment :hammer_and_wrench:
If necessary, refer to the setup instructions at the Substrate Developer Hub.
Once the development environment is set up, build the Cumulus Parachain Template. This command will build the Wasm Runtime and native code:
cargo build --release
NOTE: In the following two sections, we document how to manually start a few relay chain nodes, start a parachain node (collator), and register the parachain with the relay chain.
We also have the polkadot-launch CLI tool that automates the following steps and helps you easily launch relay chains and parachains. However, it is still good to go through the following procedures once to understand the mechanics of running and registering a parachain.
To operate a parathread or parachain, you must connect to a relay chain. Typically you would test on a local Rococo development network, then move to the testnet, and finally launch on the mainnet. Keep in mind you need to configure the specific relay chain you will connect to in your collator chain_spec.rs. In the following examples, we will use rococo-local as the relay network.
Clone and build Polkadot (beware of the version tag we used):
# Get a fresh clone, or `cd` to where you have polkadot already:
git clone -b v0.9.7 --depth 1 https://github.com/paritytech/polkadot.git
cd polkadot
cargo build --release
First, we create the chain specification file (chainspec). Note the chainspec file must be generated on a single node and then shared among all nodes!
👉 Learn more about chain specification here.
./target/release/polkadot build-spec \
--chain rococo-local \
--raw \
--disable-default-bootnode \
> rococo_local.json
We need n + 1 full validator nodes running on a relay chain to accept n parachain / parathread connections. Here we will start two relay chain nodes so we can have one parachain node connecting in later.
From the Polkadot working directory:
# Start Relay `Alice` node
./target/release/polkadot \
--chain ./rococo_local.json \
-d /tmp/relay/alice \
--validator \
--alice \
--port 50555
Open a new terminal, same directory:
# Start Relay `Bob` node
./target/release/polkadot \
--chain ./rococo_local.json \
-d /tmp/relay/bob \
--validator \
--bob \
--port 50556
Add more nodes as needed, with non-conflicting ports, DB directories, and validator keys (--charlie, --dave, etc.).
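For example, a Charlie node could be started like this (a sketch following the Alice/Bob commands above; an additional validator will only author blocks if its key is included in the chain spec):

# Start Relay `Charlie` node (example; port and base path chosen to avoid conflicts)
./target/release/polkadot \
--chain ./rococo_local.json \
-d /tmp/relay/charlie \
--validator \
--charlie \
--port 50557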
To connect to a relay chain, you must first reserve a ParaId for your parathread that will become a parachain. To do this, you will need a sufficient amount of currency in your network account to reserve the ID.
In this example, we will use the Charlie development account, where we have funds available. Once you submit this extrinsic successfully, you can start your collators.
The easiest way to reserve your ParaId is via the Polkadot Apps UI under the Parachains -> Parathreads tab, using the + ParaID button.
To operate your parachain, you need to specify the correct relay chain you will connect to in your collator chain_spec.rs. Specifically, you pass the command for the network you need in the Extensions of your ChainSpec::from_genesis() in the code.
Extensions {
relay_chain: "rococo-local".into(), // You MUST set this to the correct network!
para_id: id.into(),
},
You can choose any pre-set runtime chainspec in the Polkadot repo by referring to the cli/src/command.rs and node/service/src/chain_spec.rs files, or generate your own and use that. See the Cumulus Workshop for how.
In the following examples, we will use the rococo-local relay network we set up in the last section.
We first generate the genesis state and genesis wasm needed for the parachain registration.
# Build the parachain node (from its top-level dir)
cd substrate-parachain-template
cargo build --release
# Folder to store resource files needed for parachain registration
mkdir -p resources
# Build the chainspec
./target/release/parachain-collator build-spec \
--disable-default-bootnode > ./resources/template-local-plain.json
# Build the raw chainspec file
./target/release/parachain-collator build-spec \
--chain=./resources/template-local-plain.json \
--raw --disable-default-bootnode > ./resources/template-local-raw.json
# Export genesis state to `./resources`, using 2000 as the ParaId
./target/release/parachain-collator export-genesis-state --parachain-id 2000 > ./resources/para-2000-genesis
# Export the genesis wasm
./target/release/parachain-collator export-genesis-wasm > ./resources/para-2000-wasm
NOTE: we have set the ParaId to 2000 here. It must be unique among all parathreads/parachains on the relay chain you register with. You must reserve it first on the relay chain when targeting the testnet or mainnet.
From the parachain template working directory:
# NOTE: this command assumes the chain spec is in a directory named `polkadot`
# that is at the same level of the template working directory. Change as needed.
#
# It also assumes a ParaId of 2000. Change as needed.
./target/release/parachain-collator \
-d /tmp/parachain/alice \
--collator \
--alice \
--force-authoring \
--ws-port 9945 \
--parachain-id 2000 \
-- \
--execution wasm \
--chain ../polkadot/rococo_local.json
Output:
2021-05-30 16:57:39 Parachain Collator Template
2021-05-30 16:57:39 ✌️ version 3.0.0-acce183-x86_64-linux-gnu
2021-05-30 16:57:39 ❤️ by Anonymous, 2017-2021
2021-05-30 16:57:39 📋 Chain specification: Local Testnet
2021-05-30 16:57:39 🏷 Node name: Alice
2021-05-30 16:57:39 👤 Role: AUTHORITY
2021-05-30 16:57:39 💾 Database: RocksDb at /tmp/parachain/alice/chains/local_testnet/db
2021-05-30 16:57:39 ⛓ Native runtime: template-parachain-1 (template-parachain-0.tx1.au1)
2021-05-30 16:57:41 Parachain id: Id(2000)
2021-05-30 16:57:41 Parachain Account: 5Ec4AhPUwPeyTFyuhGuBbD224mY85LKLMSqSSo33JYWCazU4
2021-05-30 16:57:41 Parachain genesis state: 0x0000000000000000000000000000000000000000000000000000000000000000000a96f42b5cb798190e5f679bb16970905087a9a9fc612fb5ca6b982b85783c0d03170a2e7597b7b7e3d84c05391d139a62b157e78786d8c082f29dcf4c11131400
2021-05-30 16:57:41 Is collating: yes
2021-05-30 16:57:41 [Parachain] 🔨 Initializing Genesis block/state (state: 0x0a96…3c0d, header-hash: 0xd42b…f271)
2021-05-30 16:57:41 [Parachain] ⏱ Loaded block-time = 12s from block 0xd42bb78354bc21770e3f0930ed45c7377558d2d8e81ca4d457e573128aabf271
2021-05-30 16:57:43 [Relaychain] 🔨 Initializing Genesis block/state (state: 0xace1…1b62, header-hash: 0xfa68…cf58)
2021-05-30 16:57:43 [Relaychain] 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
2021-05-30 16:57:44 [Relaychain] ⏱ Loaded block-time = 6s from block 0xfa68f5abd2a80394b87c9bd07e0f4eee781b8c696d0a22c8e5ba38ae10e1cf58
2021-05-30 16:57:44 [Relaychain] 👶 Creating empty BABE epoch changes on what appears to be first startup.
2021-05-30 16:57:44 [Relaychain] 🏷 Local node identity is: 12D3KooWBjYK2W4dsBfsrFA9tZCStb5ogPb6STQqi2AK9awXfXyG
2021-05-30 16:57:44 [Relaychain] 📦 Highest known block at #0
2021-05-30 16:57:44 [Relaychain] 〽️ Prometheus server started at 127.0.0.1:9616
2021-05-30 16:57:44 [Relaychain] Listening for new connections on 127.0.0.1:9945.
2021-05-30 16:57:44 [Parachain] Using default protocol ID "sup" because none is configured in the chain specs
2021-05-30 16:57:44 [Parachain] 🏷 Local node identity is: 12D3KooWADBSC58of6ng2M29YTDkmWCGehHoUZhsy9LGkHgYscBw
2021-05-30 16:57:44 [Parachain] 📦 Highest known block at #0
2021-05-30 16:57:44 [Parachain] Unable to listen on 127.0.0.1:9945
2021-05-30 16:57:44 [Parachain] Unable to bind RPC server to 127.0.0.1:9945. Trying random port.
2021-05-30 16:57:44 [Parachain] Listening for new connections on 127.0.0.1:45141.
2021-05-30 16:57:45 [Relaychain] 🔍 Discovered new external address for our node: /ip4/192.168.42.204/tcp/30334/ws/p2p/12D3KooWBjYK2W4dsBfsrFA9tZCStb5ogPb6STQqi2AK9awXfXyG
2021-05-30 16:57:45 [Parachain] 🔍 Discovered new external address for our node: /ip4/192.168.42.204/tcp/30333/p2p/12D3KooWADBSC58of6ng2M29YTDkmWCGehHoUZhsy9LGkHgYscBw
2021-05-30 16:57:48 [Relaychain] ✨ Imported #8 (0xe60b…9b0a)
2021-05-30 16:57:49 [Relaychain] 💤 Idle (2 peers), best: #8 (0xe60b…9b0a), finalized #5 (0x1e6f…567c), ⬇ 4.5kiB/s ⬆ 2.2kiB/s
2021-05-30 16:57:49 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 2.0kiB/s ⬆ 1.7kiB/s
2021-05-30 16:57:54 [Relaychain] ✨ Imported #9 (0x1af9…c9be)
2021-05-30 16:57:54 [Relaychain] ✨ Imported #9 (0x6ed8…fdf6)
2021-05-30 16:57:54 [Relaychain] 💤 Idle (2 peers), best: #9 (0x1af9…c9be), finalized #6 (0x3319…69a2), ⬇ 1.8kiB/s ⬆ 0.5kiB/s
2021-05-30 16:57:54 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0.2kiB/s ⬆ 0.2kiB/s
2021-05-30 16:57:59 [Relaychain] 💤 Idle (2 peers), best: #9 (0x1af9…c9be), finalized #7 (0x5b50…1e5b), ⬇ 0.6kiB/s ⬆ 0.4kiB/s
2021-05-30 16:57:59 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 16:58:00 [Relaychain] ✨ Imported #10 (0xc9c9…1ca3)
You can see messages from both a relay chain node and a parachain node. This is because a relay chain light client is also run alongside the parachain collator.
Now that you have two relay chain nodes and a parachain node accompanied by a relay chain light client running, the next step is to register the parachain on the relay chain with the following steps (for details, refer to the Substrate Cumulus Workshop):
- Go to the Developer -> sudo page.
- Pick paraSudoWrapper -> sudoScheduleParaInitialize(id, genesis) as the extrinsic type, shown below.
- Set id: ParaId to 2,000 (or whatever ParaId you used above), and set the parachain: Bool option to Yes.
- For genesisHead, drag the genesis state file exported above, para-2000-genesis, in.
- For validationCode, drag the genesis wasm file exported above, para-2000-wasm, in.
Note: When registering to the public Rococo testnet, ensure you set a unique paraId larger than 1,000. Values below 1,000 are reserved exclusively for system parachains.
The collator node may need to be restarted to get it functioning as expected. After a new epoch starts on the relay chain, your parachain will come online. Once this happens, you should see the collator start reporting parachain blocks:
# Notice the relay epoch change! Only then do we start parachain collating!
#
2021-05-30 17:00:04 [Relaychain] 💤 Idle (2 peers), best: #30 (0xfc02…2a2a), finalized #28 (0x10ff…6539), ⬇ 1.0kiB/s ⬆ 0.3kiB/s
2021-05-30 17:00:04 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:06 [Relaychain] 👶 New epoch 3 launching at block 0x68bc…0605 (block slot 270402601 >= start slot 270402601).
2021-05-30 17:00:06 [Relaychain] 👶 Next epoch starts at slot 270402611
2021-05-30 17:00:06 [Relaychain] ✨ Imported #31 (0x68bc…0605)
2021-05-30 17:00:06 [Parachain] Starting collation. relay_parent=0x68bcc93d24a31a2c89800a56c7a2b275fe9ca7bd63f829b64588ae0d99280605 at=0xd42bb78354bc21770e3f0930ed45c7377558d2d8e81ca4d457e573128aabf271
2021-05-30 17:00:06 [Parachain] 🙌 Starting consensus session on top of parent 0xd42bb78354bc21770e3f0930ed45c7377558d2d8e81ca4d457e573128aabf271
2021-05-30 17:00:06 [Parachain] 🎁 Prepared block for proposing at 1 [hash: 0xf6507812bf60bf53af1311f775aac03869be870df6b0406b2969784d0935cb92; parent_hash: 0xd42b…f271; extrinsics (2): [0x1bf5…1d76, 0x7c9b…4e23]]
2021-05-30 17:00:06 [Parachain] 🔖 Pre-sealed block for proposal at 1. Hash now 0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae, previously 0xf6507812bf60bf53af1311f775aac03869be870df6b0406b2969784d0935cb92.
2021-05-30 17:00:06 [Parachain] ✨ Imported #1 (0x80fc…ccae)
2021-05-30 17:00:06 [Parachain] Produced proof-of-validity candidate. block_hash=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:09 [Relaychain] 💤 Idle (2 peers), best: #31 (0x68bc…0605), finalized #29 (0xa6fa…9e16), ⬇ 1.2kiB/s ⬆ 129.9kiB/s
2021-05-30 17:00:09 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:12 [Relaychain] ✨ Imported #32 (0x5e92…ba30)
2021-05-30 17:00:12 [Relaychain] Moving approval window from session 0..=2 to 0..=3
2021-05-30 17:00:12 [Relaychain] ✨ Imported #32 (0x8144…74eb)
2021-05-30 17:00:14 [Relaychain] 💤 Idle (2 peers), best: #32 (0x5e92…ba30), finalized #29 (0xa6fa…9e16), ⬇ 1.4kiB/s ⬆ 0.2kiB/s
2021-05-30 17:00:14 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:18 [Relaychain] ✨ Imported #33 (0x8c30…9ccd)
2021-05-30 17:00:18 [Parachain] Starting collation. relay_parent=0x8c30ce9e6e9867824eb2aff40148ac1ed64cf464f51c5f2574013b44b20f9ccd at=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:19 [Relaychain] 💤 Idle (2 peers), best: #33 (0x8c30…9ccd), finalized #30 (0xfc02…2a2a), ⬇ 0.7kiB/s ⬆ 0.4kiB/s
2021-05-30 17:00:19 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:22 [Relaychain] 👴 Applying authority set change scheduled at block #31
2021-05-30 17:00:22 [Relaychain] 👴 Applying GRANDPA set change to new set [(Public(88dc3417d5058ec4b4503e0c12ea1a0a89be200fe98922423d4334014fa6b0ee (5FA9nQDV...)), 1), (Public(d17c2d7823ebf260fd138f2d7e27d114c0145d968b5ff5006125f2414fadae69 (5GoNkf6W...)), 1)]
2021-05-30 17:00:22 [Relaychain] 👴 Imported justification for block #31 that triggers command Changing authorities, signaling voter.
2021-05-30 17:00:24 [Relaychain] ✨ Imported #34 (0x211b…febf)
2021-05-30 17:00:24 [Parachain] Starting collation. relay_parent=0x211b3c53bebeff8af05e8f283d59fe171b7f91a5bf9c4669d88943f5a42bfebf at=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:24 [Parachain] 🙌 Starting consensus session on top of parent 0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:24 [Parachain] 🎁 Prepared block for proposing at 2 [hash: 0x10fcb3180e966729c842d1b0c4d8d2c4028cfa8bef02b909af5ef787e6a6a694; parent_hash: 0x80fc…ccae; extrinsics (2): [0x4a6c…1fc6, 0x6b84…7cea]]
2021-05-30 17:00:24 [Parachain] 🔖 Pre-sealed block for proposal at 2. Hash now 0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0, previously 0x10fcb3180e966729c842d1b0c4d8d2c4028cfa8bef02b909af5ef787e6a6a694.
2021-05-30 17:00:24 [Parachain] ✨ Imported #2 (0x5087…b5a0)
2021-05-30 17:00:24 [Parachain] Produced proof-of-validity candidate. block_hash=0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0
2021-05-30 17:00:24 [Relaychain] 💤 Idle (2 peers), best: #34 (0x211b…febf), finalized #31 (0x68bc…0605), ⬇ 1.0kiB/s ⬆ 130.1kiB/s
2021-05-30 17:00:24 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:29 [Relaychain] 💤 Idle (2 peers), best: #34 (0x211b…febf), finalized #32 (0x5e92…ba30), ⬇ 0.2kiB/s ⬆ 0.1kiB/s
2021-05-30 17:00:29 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:30 [Relaychain] ✨ Imported #35 (0xee07…38a0)
2021-05-30 17:00:34 [Relaychain] 💤 Idle (2 peers), best: #35 (0xee07…38a0), finalized #33 (0x8c30…9ccd), ⬇ 0.9kiB/s ⬆ 0.3kiB/s
2021-05-30 17:00:34 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #1 (0x80fc…ccae), ⬇ 0 ⬆ 0
2021-05-30 17:00:36 [Relaychain] ✨ Imported #36 (0xe8ce…4af6)
2021-05-30 17:00:36 [Parachain] Starting collation. relay_parent=0xe8cec8015c0c7bf508bf3f2f82b1696e9cca078e814b0f6671f0b0d5dfe84af6 at=0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0
2021-05-30 17:00:39 [Relaychain] 💤 Idle (2 peers), best: #36 (0xe8ce…4af6), finalized #33 (0x8c30…9ccd), ⬇ 0.6kiB/s ⬆ 0.1kiB/s
2021-05-30 17:00:39 [Parachain] 💤 Idle (0 peers), best: #2 (0x5087…b5a0), finalized #1 (0x80fc…ccae), ⬇ 0 ⬆ 0
Note the delay here! It may take some time for your relay chain to enter a new epoch.
Is this Cumulus Parachain Template compatible with the Rococo & Westend testnets? Yes!
See the Cumulus Workshop for the latest instructions to register a parathread/parachain on a relay chain.
NOTE: When running the relay chain and parachain, you must use the same tagged version of Polkadot and Cumulus so that the collator registers successfully with the relay chain. You should test registering your parachain locally before attempting to connect to any running relay chain network!
Find chainspec files to connect to live networks here. Be sure to use the correct git release tag in these files, as they change from time to time and must match the live network!
These networks are under constant development, so if you wish to connect to them, please follow their progress and update your parachain in lockstep with the testnet changes. Do join the Parachain Technical matrix chat room to ask questions and connect with the parachain building teams.
Download Details:
Author: aresprotocols
Source Code: https://github.com/aresprotocols/substrate-parachain-template
License: Unlicense License
A symbolic natural language parsing library for Rust, inspired by HDPSG.
This is a library for parsing natural or constructed languages into syntax trees and feature structures. There's no machine learning or probabilistic models, everything is hand-crafted and deterministic.
You can find out more about the motivations of this project in this blog post.
I'm using this to parse a constructed language for my upcoming xenolinguistics game, Themengi.
Using a simple 80-line grammar, introduced in the tutorial below, we can parse a simple subset of English, checking reflexive pronoun binding, case, and number agreement.
$ cargo run --bin cli examples/reflexives.fgr
> she likes himself
Parsed 0 trees
> her likes herself
Parsed 0 trees
> she like herself
Parsed 0 trees
> she likes herself
Parsed 1 tree
(0..3: S
(0..1: N (0..1: she))
(1..2: TV (1..2: likes))
(2..3: N (2..3: herself)))
[
child-2: [
case: acc
pron: ref
needs_pron: #0 she
num: sg
child-0: [ word: herself ]
]
child-1: [
tense: nonpast
child-0: [ word: likes ]
num: #1 sg
]
child-0: [
child-0: [ word: she ]
case: nom
pron: #0
num: #1
]
]
Low resource language? Low problem! No need to train on gigabytes of text, just write a grammar using your brain. Let's hypothesize that in American Sign Language, topicalized nouns (expressed with raised eyebrows) must appear first in the sentence. We can write a small grammar (18 lines), and plug in some sentences:
$ cargo run --bin cli examples/asl-wordorder.fgr -n
> boy sit
Parsed 1 tree
(0..2: S
(0..1: NP ((0..1: N (0..1: boy))))
(1..2: IV (1..2: sit)))
> boy throw ball
Parsed 1 tree
(0..3: S
(0..1: NP ((0..1: N (0..1: boy))))
(1..2: TV (1..2: throw))
(2..3: NP ((2..3: N (2..3: ball)))))
> ball nm-raised-eyebrows boy throw
Parsed 1 tree
(0..4: S
(0..2: NP
(0..1: N (0..1: ball))
(1..2: Topic (1..2: nm-raised-eyebrows)))
(2..3: NP ((2..3: N (2..3: boy))))
(3..4: TV (3..4: throw)))
> boy throw ball nm-raised-eyebrows
Parsed 0 trees
As an example, let's say we want to build a parser for English reflexive pronouns (himself, herself, themselves, themself, itself). We'll also support number ("He likes X" v.s. "They like X") and simple embedded clauses ("He said that they like X").
Grammar files are written in a custom language, similar to BNF, called Feature GRammar (.fgr). There's a VSCode syntax highlighting extension for these files, available as fgr-syntax.
We'll start by defining our lexicon. The lexicon is the set of terminal symbols (symbols in the actual input) that the grammar will match. Terminal symbols must start with a lowercase letter, and non-terminal symbols must start with an uppercase letter.
// pronouns
N -> he
N -> him
N -> himself
N -> she
N -> her
N -> herself
N -> they
N -> them
N -> themselves
N -> themself
// names, lowercase as they are terminals
N -> mary
N -> sue
N -> takeshi
N -> robert
// complementizer
Comp -> that
// verbs -- intransitive, transitive, and clausal
IV -> falls
IV -> fall
IV -> fell
TV -> likes
TV -> like
TV -> liked
CV -> says
CV -> say
CV -> said
Next, we can add our sentence rules (they must be added at the top, as the first rule in the file is assumed to be the top-level rule):
// sentence rules
S -> N IV
S -> N TV N
S -> N CV Comp S
// ... previous lexicon ...
Assuming this file is saved as examples/no-features.fgr (which it is :wink:), we can test this file with the built-in CLI:
$ cargo run --bin cli examples/no-features.fgr
> he falls
Parsed 1 tree
(0..2: S
(0..1: N (0..1: he))
(1..2: IV (1..2: falls)))
[
child-1: [ child-0: [ word: falls ] ]
child-0: [ child-0: [ word: he ] ]
]
> he falls her
Parsed 0 trees
> he likes her
Parsed 1 tree
(0..3: S
(0..1: N (0..1: he))
(1..2: TV (1..2: likes))
(2..3: N (2..3: her)))
[
child-2: [ child-0: [ word: her ] ]
child-1: [ child-0: [ word: likes ] ]
child-0: [ child-0: [ word: he ] ]
]
> he likes
Parsed 0 trees
> he said that he likes her
Parsed 1 tree
(0..6: S
(0..1: N (0..1: he))
(1..2: CV (1..2: said))
(2..3: Comp (2..3: that))
(3..6: S
(3..4: N (3..4: he))
(4..5: TV (4..5: likes))
(5..6: N (5..6: her))))
[
child-0: [ child-0: [ word: he ] ]
child-2: [ child-0: [ word: that ] ]
child-1: [ child-0: [ word: said ] ]
child-3: [
child-2: [ child-0: [ word: her ] ]
child-1: [ child-0: [ word: likes ] ]
child-0: [ child-0: [ word: he ] ]
]
]
> he said that he
Parsed 0 trees
This grammar already parses some correct sentences, and blocks some trivially incorrect ones. However, it doesn't care about number, case, or reflexives right now:
> she likes himself // unbound reflexive pronoun
Parsed 1 tree
(0..3: S
(0..1: N (0..1: she))
(1..2: TV (1..2: likes))
(2..3: N (2..3: himself)))
[
child-0: [ child-0: [ word: she ] ]
child-2: [ child-0: [ word: himself ] ]
child-1: [ child-0: [ word: likes ] ]
]
> him like her // incorrect case on the subject pronoun, should be nominative
// (he) instead of accusative (him)
Parsed 1 tree
(0..3: S
(0..1: N (0..1: him))
(1..2: TV (1..2: like))
(2..3: N (2..3: her)))
[
child-0: [ child-0: [ word: him ] ]
child-1: [ child-0: [ word: like ] ]
child-2: [ child-0: [ word: her ] ]
]
> he like her // incorrect verb number agreement
Parsed 1 tree
(0..3: S
(0..1: N (0..1: he))
(1..2: TV (1..2: like))
(2..3: N (2..3: her)))
[
child-2: [ child-0: [ word: her ] ]
child-1: [ child-0: [ word: like ] ]
child-0: [ child-0: [ word: he ] ]
]
To fix this, we need to add features to our lexicon, and restrict the sentence rules based on features.
Features are added with square brackets, and are key: value pairs separated by commas. **top** is a special feature value, which basically means "unspecified" -- we'll come back to it later. Features that are unspecified are also assumed to have a **top** value, but sometimes explicitly stating top is more clear.
/// Pronouns
// The added features are:
// * num: sg or pl, whether this noun wants a singular verb (likes) or
// a plural verb (like). note this is grammatical number, so for example
// singular they takes plural agreement ("they like X", not *"they likes X")
// * case: nom or acc, whether this noun is nominative or accusative case.
// nominative case goes in the subject, and accusative in the object.
// e.g., "he fell" and "she likes him", not *"him fell" and *"her likes he"
// * pron: he, she, they, or ref -- what type of pronoun this is
// * needs_pron: whether this is a reflexive that needs to bind to another
// pronoun.
N[ num: sg, case: nom, pron: he ] -> he
N[ num: sg, case: acc, pron: he ] -> him
N[ num: sg, case: acc, pron: ref, needs_pron: he ] -> himself
N[ num: sg, case: nom, pron: she ] -> she
N[ num: sg, case: acc, pron: she ] -> her
N[ num: sg, case: acc, pron: ref, needs_pron: she] -> herself
N[ num: pl, case: nom, pron: they ] -> they
N[ num: pl, case: acc, pron: they ] -> them
N[ num: pl, case: acc, pron: ref, needs_pron: they ] -> themselves
N[ num: sg, case: acc, pron: ref, needs_pron: they ] -> themself
// Names
// The added features are:
// * num: sg, as people are singular ("mary likes her" / *"mary like her")
// * case: **top**, as names can be both subjects and objects
// ("mary likes her" / "she likes mary")
// * pron: whichever pronoun the person uses for reflexive agreement
// mary pron: she => mary likes herself
// sue pron: they => sue likes themself
// takeshi pron: he => takeshi likes himself
N[ num: sg, case: **top**, pron: she ] -> mary
N[ num: sg, case: **top**, pron: they ] -> sue
N[ num: sg, case: **top**, pron: he ] -> takeshi
N[ num: sg, case: **top**, pron: he ] -> robert
// Complementizer doesn't need features
Comp -> that
// Verbs -- intransitive, transitive, and clausal
// The added features are:
// * num: sg, pl, or **top** -- to match the noun numbers.
// **top** will match either sg or pl, as past-tense verbs in English
// don't agree in number: "he fell" and "they fell" are both fine
// * tense: past or nonpast -- this won't be used for agreement, but will be
// copied into the final feature structure, and the client code could do
// something with it
IV[ num: sg, tense: nonpast ] -> falls
IV[ num: pl, tense: nonpast ] -> fall
IV[ num: **top**, tense: past ] -> fell
TV[ num: sg, tense: nonpast ] -> likes
TV[ num: pl, tense: nonpast ] -> like
TV[ num: **top**, tense: past ] -> liked
CV[ num: sg, tense: nonpast ] -> says
CV[ num: pl, tense: nonpast ] -> say
CV[ num: **top**, tense: past ] -> said
Now that our lexicon is updated with features, we can update our sentence rules to constrain parsing based on those features. This uses two new concepts, tags and unification. Tags (written #1, #2, ...) allow features to be shared between nodes in a rule, and unification controls how those features are compatible: a **top** value unifies with anything, identical values unify, and two different concrete values fail to unify.
If unification fails anywhere, the parse is aborted and the tree is discarded. This allows the programmer to discard trees if features don't match.
// Sentence rules
// Intransitive verb:
// * Subject must be nominative case
// * Subject and verb must agree in number (copied through #1)
S -> N[ case: nom, num: #1 ] IV[ num: #1 ]
// Transitive verb:
// * Subject must be nominative case
// * Subject and verb must agree in number (copied through #2)
// * If there's a reflexive in the object position, make sure its `needs_pron`
// feature matches the subject's `pron` feature. If the object isn't a
// reflexive, then its `needs_pron` feature will implicitly be `**top**`, so
// will unify with anything.
S -> N[ case: nom, pron: #1, num: #2 ] TV[ num: #2 ] N[ case: acc, needs_pron: #1 ]
// Clausal verb:
// * Subject must be nominative case
// * Subject and verb must agree in number (copied through #1)
// * Reflexives can't cross clause boundaries (*"He said that she likes himself"),
// so we can ignore reflexives and delegate to inner clause rule
S -> N[ case: nom, num: #1 ] CV[ num: #1 ] Comp S
Now that we have this augmented grammar (available as examples/reflexives.fgr), we can try it out and see that it rejects illicit sentences that were previously accepted, while still accepting valid ones:
> he fell
Parsed 1 tree
(0..2: S
(0..1: N (0..1: he))
(1..2: IV (1..2: fell)))
[
child-1: [
child-0: [ word: fell ]
num: #0 sg
tense: past
]
child-0: [
pron: he
case: nom
num: #0
child-0: [ word: he ]
]
]
> he like him
Parsed 0 trees
> he likes himself
Parsed 1 tree
(0..3: S
(0..1: N (0..1: he))
(1..2: TV (1..2: likes))
(2..3: N (2..3: himself)))
[
child-1: [
num: #0 sg
child-0: [ word: likes ]
tense: nonpast
]
child-2: [
needs_pron: #1 he
num: sg
child-0: [ word: himself ]
pron: ref
case: acc
]
child-0: [
child-0: [ word: he ]
pron: #1
num: #0
case: nom
]
]
> he likes herself
Parsed 0 trees
> mary likes herself
Parsed 1 tree
(0..3: S
(0..1: N (0..1: mary))
(1..2: TV (1..2: likes))
(2..3: N (2..3: herself)))
[
child-0: [
pron: #0 she
num: #1 sg
case: nom
child-0: [ word: mary ]
]
child-1: [
tense: nonpast
child-0: [ word: likes ]
num: #1
]
child-2: [
child-0: [ word: herself ]
num: sg
pron: ref
case: acc
needs_pron: #0
]
]
> mary likes themself
Parsed 0 trees
> sue likes themself
Parsed 1 tree
(0..3: S
(0..1: N (0..1: sue))
(1..2: TV (1..2: likes))
(2..3: N (2..3: themself)))
[
child-0: [
pron: #0 they
child-0: [ word: sue ]
case: nom
num: #1 sg
]
child-1: [
tense: nonpast
num: #1
child-0: [ word: likes ]
]
child-2: [
needs_pron: #0
case: acc
pron: ref
child-0: [ word: themself ]
num: sg
]
]
> sue likes himself
Parsed 0 trees
If this is interesting to you and you want to learn more, you can check out my blog series, the excellent textbook Syntactic Theory: A Formal Introduction (2nd ed.), and the DELPH-IN project, whose work on the LKB inspired this simplified version.
I need to write this section in more detail, but if you're comfortable with Rust, I suggest looking through the codebase. It's not perfect, it started as one of my first Rust projects (after migrating through F# -> TypeScript -> C in search of the right performance/ergonomics tradeoff), and it could use more tests, but overall it's not too bad.
Basically, the processing pipeline is:
- The grammar file is read into a Grammar struct (Grammar is defined in rules.rs). Parsing a grammar file into a Grammar is done by Grammar::parse_from_file, which is mostly a hand-written recursive descent parser in parse_grammar.rs. Yes, I recognize the irony here.
- The input is parsed against the grammar (use Grammar::parse, which does everything for you, or Grammar::parse_chart, which just does the chart); the chart parser lives in earley.rs.
- The parse forest is built in forest.rs, using an algorithm I found in a very useful blog series I forget the URL for, because the algorithms in the academic literature for this are... weird.
The most interesting thing you can do via code and not via the CLI is probably getting at the raw feature DAG, as that would let you do things like pronoun coreference. The DAG code is in featurestructure.rs, and should be fairly approachable -- there's a lot of Rust ceremony around Rc<RefCell<...>> because using an arena allocation crate seemed like overkill, but that is somewhat mitigated by the NodeRef type alias. Hit me up at https://vgel.me/contact if you need help with anything here!
Download Details:
Author: vgel
Source Code: https://github.com/vgel/treebender
License: MIT License
I founded this project because I wanted to publish the code I wrote over the last two years, when I tried to write enhanced checking for PostgreSQL upstream. It was not fully successful - integration into upstream requires some larger plpgsql refactoring - and it will probably not be done in the next few years (it is now Dec 2013). But the written code is fully functional and can be used in production (and it is used in production). So I created this extension to make it available to all plpgsql developers.
If you like it and would like to join the development of this extension, register yourself to the postgresql extension hacking google group.
Features
Any ideas, patches, and bug reports are welcome.
plpgsql_check is the next generation of plpgsql_lint. It allows checking source code by an explicit call to plpgsql_check_function.
PostgreSQL 10, 11, 12, 13 and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by a validator for semantic errors. These errors can be found by plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6 RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7 END LOOP;
8 END;
9 $function$
Function plpgsql_check_function() has three possible output formats: text, json or xml.
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
You can set the level of warnings via the function's parameters:
- the checked function, specified as 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is using a name only, when the function's name is unique - like 'fx'. When the name is not unique or the function doesn't exist, an error is raised.
- relid DEFAULT 0 - oid of the relation assigned to the trigger function. It is necessary for checking any trigger function.
- fatal_errors boolean DEFAULT true - stop on the first error
- other_warnings boolean DEFAULT true - show warnings like a different number of attributes in an assignment on the left and right side, a variable overlapping a function's parameter, unused variables, unwanted casting, ..
- extra_warnings boolean DEFAULT true - show warnings like a missing RETURN, shadowed variables, dead code, never read (unused) function parameters, unmodified variables, modified auto variables, ..
- performance_warnings boolean DEFAULT false - performance related warnings like a declared type with a type modifier, casting, implicit casts in a where clause (which can be the reason why an index is not used), ..
- security_warnings boolean DEFAULT false - security related checks like SQL injection vulnerability detection
- anyelementtype regtype DEFAULT 'int' - a real type used instead of the anyelement type
- anyenumtype regtype DEFAULT '-' - a real type used instead of the anyenum type
- anyrangetype regtype DEFAULT 'int4range' - a real type used instead of the anyrange type
- anycompatibletype DEFAULT 'int' - a real type used instead of the anycompatible type
- anycompatiblerangetype DEFAULT 'int4range' - a real type used instead of the anycompatible range type
- without_warnings DEFAULT false - disable all warnings
- all_warnings DEFAULT false - enable all warnings
- newtable DEFAULT NULL, oldtable DEFAULT NULL - the names of the NEW or OLD transition tables. These parameters are required when transition tables are used.
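For example, to keep checking after the first error and also report performance-related warnings (reusing the f1() function from the example above):

select * from plpgsql_check_function('f1()', fatal_errors := false, performance_warnings := true);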
When you want to check any trigger, you have to enter the relation that will be used together with the trigger function:
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transition tables you can set the oldtable or newtable parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use plpgsql_check_function for mass checking of functions and triggers. Try the following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
Functions are checked on start - the plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
The default mode is by_function, which means that the enhanced check is done only in active mode - by plpgsql_check_function. fresh_start means a cold start.
You can enable passive mode by
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
Limits
plpgsql_check should find almost all errors in really static code. When the developer uses some of PL/pgSQL's dynamic features, like dynamic SQL or the record data type, false positives are possible. These should be rare - in well written code - and in such cases the affected function should be redesigned or plpgsql_check should be disabled for that function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
Using plpgsql_check adds a small overhead (when passive mode is enabled) and you should use it only in development or preproduction environments.
This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries, so plpgsql_check cannot set the correct type for record variables and cannot check dependent SQL statements and expressions.
When the type of a record variable is not known, you can assign it explicitly with the type pragma:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. This check can raise false alarms too - typically when a variable is sanitized by another command or when the value is of some composite type.
plpgsql_check cannot detect the structure of referenced cursors. A cursor reference in PL/pgSQL is implemented as the name of a global cursor. At check time the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis: plpgsql_check cannot set the correct type for record variables and cannot check dependent SQL statements and expressions. The solution is the same as for dynamic SQL: don't use a record variable as the target when you use the refcursor type, or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case a record type should not be used (use known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created at function runtime. For this use case it is necessary to create a fake temp table or disable plpgsql_check for the function.
In reality, temp tables are stored in their own (per user) schema with higher priority than persistent tables, so you can safely use the following trick:
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=# CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick partially emulates GLOBAL TEMP tables and allows static validation. Another possibility is using a template foreign data wrapper (https://github.com/okbob/template_fdw).
You can use the table pragma and create an ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
The function plpgsql_show_dependency_tb can show all functions, operators and relations used inside the processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(4 rows)
Profiler
plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory, depending on the shared_preload_libraries config. When plpgsql_check is initialized by shared_preload_libraries, it can allocate shared memory and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.
Due to dependencies, shared_preload_libraries should contain plpgsql first:
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
The profiler is active when the GUC plpgsql_check.profiler is on. The profiler doesn't require shared memory, but without it the profile is limited to the active session.
When plpgsql_check is initialized by shared_preload_libraries, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks. This defines the maximum number of statement chunks that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundreds of thousands of statements in plpgsql, and will consume about 24MB of memory. If your project doesn't require that many chunks, you can set this parameter to a smaller number in order to decrease the memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.
The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements (or another similar third-party extension) to be installed. There are some limitations to the query identifier retrieval.
Attention: An update of shared profiles can decrease performance on servers under higher load.
The profile can be displayed by the function plpgsql_profiler_function_tb:
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│ 1 │ │ │
│ 2 │ │ declare result int = 0; │
│ 3 │ 0.075 │ begin │
│ 4 │ 0.202 │ for i in 1..$1 loop │
│ 5 │ 0.005 │ select result + i into result; select result + i into result; │
│ 6 │ │ end loop; │
│ 7 │ 0 │ return result; │
│ 8 │ │ end; │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)
The profile per statement (not per line) can be displayed by the function plpgsql_profiler_function_statements_tb:
CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$
postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│ 0 │ ∅ │ ∅ │ 2 │ 0 │ statement block │
│ 1 │ 0 │ body │ 3 │ 0 │ IF │
│ 2 │ 1 │ then body │ 4 │ 0 │ RAISE │
│ 3 │ 1 │ then body │ 5 │ 0 │ RETURN │
│ 4 │ 1 │ else body │ 7 │ 0 │ RAISE │
│ 5 │ 1 │ else body │ 8 │ 0 │ RETURN │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)
All stored profiles can be displayed by calling the function plpgsql_profiler_functions_all:
postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │ 1 │ 0.01 │ 0.01 │ 0.00 │ 0.01 │ 0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)
There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all() and plpgsql_profiler_reset(regprocedure).
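A typical session-level workflow, sketched here under the assumption that the fx(int) function from the example above exists, looks like this:
set plpgsql_check.profiler to on;
select fx(10);                                          -- run the code to be profiled
select lineno, avg_time, source
  from plpgsql_profiler_function_tb('fx(int)');         -- inspect the collected profile
select plpgsql_profiler_reset('fx(int)'::regprocedure); -- or plpgsql_profiler_reset_all()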
For coverage metrics, plpgsql_check provides two functions:
plpgsql_coverage_statements(name)
plpgsql_coverage_branches(name)
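Both take a function name and return a coverage ratio computed from the profiler's data; a sketch, again assuming the fx(int) function used above:
set plpgsql_check.profiler to on;
select fx(10);                                  -- execute first so profile data exists
select plpgsql_coverage_statements('fx(int)');  -- share of statements that were executed
select plpgsql_coverage_branches('fx(int)');    -- share of branches that were executed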
There is another very good PLpgSQL profiler - https://bitbucket.org/openscg/plprofiler
My extension is designed to be simple to use and practical. Nothing more, nothing less.
plprofiler is more complex. It builds call graphs and from these graphs it can create flame graphs of execution times.
Both extensions can be used together with the built-in PostgreSQL feature for tracking functions:
set track_functions to 'pl';
...
select * from pg_stat_user_functions;
Tracer
plpgsql_check provides a tracing capability - in this mode you can see notices at the start and end of functions (terse and default verbosity) and at the start and end of statements (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables is displayed when the verbosity is verbose.
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4 ->> start of function fx(integer) (Oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4 <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)
The number after # is an execution frame counter (this number is related to the depth of the error context stack). It allows pairing the start and the end of a function.
Tracing is enabled by setting plpgsql_check.tracer to on. Attention - enabling this behaviour has a significant negative impact on performance (unlike the profiler). You can set the level used for the tracer's output with plpgsql_check.tracer_errlevel (the default is notice). The output content is limited to the length specified by the plpgsql_check.tracer_variable_max_length configuration variable.
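A sketch of a tracing session using the settings above; the variable length of 1024 is only an example value, and plpgsql_check.enable_tracer may have to be allowed by a superuser (see the security note below):
load 'plpgsql_check';
set plpgsql_check.enable_tracer to on;
set plpgsql_check.tracer to on;
set plpgsql_check.tracer_errlevel to notice;
set plpgsql_check.tracer_variable_max_length to 1024;
do $$ begin perform fx(10, null, 'now', e'stěhule'); end; $$;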
With terse verbosity the output is reduced:
postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block
In verbose mode the output is extended with statement details:
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of block inline_code_block (oid=0)
NOTICE: #0.1 1 --> start of PERFORM
NOTICE: #2 ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2 call by inline_code_block line 1 at PERFORM
NOTICE: #2 "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1 1 --> start of PERFORM
NOTICE: #2.1 "a" => '10'
NOTICE: #4 ->> start of function fx(integer) (oid=16404)
NOTICE: #4 call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4 "a" => '10'
NOTICE: #4.1 6 --> start of assignment
NOTICE: #4.1 "a" => '10', "b" => '20'
NOTICE: #4.1 <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1 "res" => '130'
NOTICE: #4.2 7 --> start of RETURN
NOTICE: #4.2 "res" => '130'
NOTICE: #4.2 <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4 <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1 <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2 <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1 <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0 <<- end of block (elapsed time=1.286 ms)
A special feature of the tracer is tracing of the ASSERT statement when plpgsql_check.trace_assert is on. When plpgsql_check.trace_assert_verbosity is DEFAULT, all of the function's or procedure's variables are displayed when the assert expression is false. When this configuration is VERBOSE, all variables from all plpgsql frames are displayed. This behaviour is independent of the plpgsql.check_asserts value, so it can be used even when assertions are disabled in the plpgsql runtime.
postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR: assertion failed
CONTEXT: PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM
postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE: "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO
The tracer prints the content of variables and function arguments. For a security definer function, this content can hold security-sensitive data. This is the reason why the tracer is disabled by default and can be enabled only with superuser rights via plpgsql_check.enable_tracer.
Pragma
You can configure plpgsql_check behaviour inside a checked function with the "pragma" function. This is an analogy of the PRAGMA feature of the PL/SQL or Ada languages. PLpgSQL doesn't support PRAGMA, but plpgsql_check detects a function named plpgsql_check_pragma and reads options from the parameters of this function. These plpgsql_check options are valid until the end of the enclosing group of statements.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
...
-- for following statements disable check
PERFORM plpgsql_check_pragma('disable:check');
...
-- enable check again
PERFORM plpgsql_check_pragma('enable:check');
...
END;
$$ LANGUAGE plpgsql;
The function plpgsql_check_pragma is an immutable function that returns one. It is defined by the plpgsql_check extension. You can declare an alternative plpgsql_check_pragma function like:
CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args text[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;
Using the pragma function in the declaration part of the top block sets the options at the function level too.
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
aux int := plpgsql_check_pragma('disable:extra_warnings');
...
Shorter syntax for pragma is supported too:
CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
PERFORM 'PRAGMA:TYPE:r (a int, b int)';
PERFORM 'PRAGMA:TABLE: x (like pg_class)';
...
echo:str - print a string (for testing)
status:check, status:tracer, status:other_warnings, status:performance_warnings, status:extra_warnings, status:security_warnings
enable:check, enable:tracer, enable:other_warnings, enable:performance_warnings, enable:extra_warnings, enable:security_warnings
disable:check, disable:tracer, disable:other_warnings, disable:performance_warnings, disable:extra_warnings, disable:security_warnings
type:varname typename or type:varname (fieldname type, ...) - set the type of a record variable
table: name (column_name type, ...) or table: name (like tablename) - create an ephemeral table
Pragmas enable:tracer and disable:tracer are active for Postgres 12 and higher.
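As an illustration, here is a hypothetical function (not part of the original examples) that combines several of the pragmas listed above:
CREATE OR REPLACE FUNCTION import_row(_tablename text)
RETURNS void AS $$
DECLARE r record;
BEGIN
  -- silence performance warnings for the rest of this block
  PERFORM plpgsql_check_pragma('disable:performance_warnings');
  -- tell the checker the shape of the record filled by dynamic SQL
  PERFORM plpgsql_check_pragma('type: r (id int, payload text)');
  EXECUTE format('SELECT * FROM %I LIMIT 1', _tablename) INTO r;
  RAISE NOTICE 'id=%', r.id;
END;
$$ LANGUAGE plpgsql;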
Compilation
You need a development environment for PostgreSQL extensions:
make clean
make install
result:
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok
=====================
All 3 tests passed.
=====================
Sometimes a successful compilation can require the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):
sudo apt install libicu-dev
You can use precompiled DLL libraries from http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html or compile them yourself. Then copy plpgsql_check.dll to PostgreSQL\14\lib, and copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension.
Compilation against PostgreSQL 10 requires libICU!
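Whatever the platform, after the files are installed the extension still has to be created in every database where it will be used, for example:
CREATE EXTENSION plpgsql_check;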
Licence
Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
If you like it, send a postcard to address
Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic
I welcome any questions, comments, bug reports, and patches at pavel.stehule@gmail.com
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: View license
1648900800
I founded this project because I wanted to publish the code I wrote over the last two years while trying to add enhanced checking to PostgreSQL upstream. It was not fully successful - integration into upstream requires some larger plpgsql refactoring - and it probably will not be done in the next few years (now it is Dec 2013). But the code is fully functional and can be used in production (and it is used in production). So I created this extension to make it available to all plpgsql developers.
If you like it and would like to join the development of this extension, register yourself in the postgresql extension hacking Google group.
Features
I welcome any ideas, patches, and bug reports.
plpgsql_check is the next generation of plpgsql_lint. It allows checking source code by an explicit call of plpgsql_check_function.
PostgreSQL 10, 11, 12, 13 and 14 are supported.
The SQL statements inside PL/pgSQL functions are checked by a validator for semantic errors. These errors can be found by plpgsql_check_function:
Active mode
postgres=# CREATE EXTENSION plpgsql_check;
CREATE EXTENSION
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE
postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
FOR r IN SELECT * FROM t1
LOOP
RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
END LOOP;
END;
$function$;
CREATE FUNCTION
postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
f1
────
(1 row)
postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno │ 6
statement │ RAISE
sqlstate │ 42703
message │ record "r" has no field "c"
detail │ [null]
hint │ [null]
level │ error
position │ 0
query │ [null]
postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
1 AS $function$
2 DECLARE r record;
3 BEGIN
4 FOR r IN SELECT * FROM t1
5 LOOP
6 RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7 END LOOP;
8 END;
9 $function$
The function plpgsql_check_function() has three possible output formats: text, json or xml.
select * from plpgsql_check_function('f1()', fatal_errors := false);
plpgsql_check_function
------------------------------------------------------------------------
error:42703:4:SQL statement:column "c" of relation "t1" does not exist
Query: update t1 set c = 30
-- ^
error:42P01:7:RAISE:missing FROM-clause entry for table "r"
Query: SELECT r.c
-- ^
error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)
postgres=# select * from plpgsql_check_function('fx()', format:='xml');
plpgsql_check_function
────────────────────────────────────────────────────────────────
<Function oid="16400"> ↵
<Issue> ↵
<Level>error</level> ↵
<Sqlstate>42P01</Sqlstate> ↵
<Message>relation "foo111" does not exist</Message> ↵
<Stmt lineno="3">RETURN</Stmt> ↵
<Query position="23">SELECT (select a from foo111)</Query>↵
</Issue> ↵
</Function>
(1 row)
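The json format can be requested the same way; only a sketch of the call is shown here, without its output:
select * from plpgsql_check_function('f1()', format := 'json');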
You can set the level of warnings via the function's parameters:
The checked function can be identified by a regprocedure value - like 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is using the name only, when the function's name is unique - like 'fx'. When the name is not unique or the function doesn't exist, an error is raised.
relid DEFAULT 0 - oid of the relation assigned to the trigger function. It is necessary for checking any trigger function.
fatal_errors boolean DEFAULT true - stop on the first error
other_warnings boolean DEFAULT true - show warnings like a different number of attributes on the left and right side of an assignment, a variable overlapping a function's parameter, unused variables, unwanted casting, ..
extra_warnings boolean DEFAULT true - show warnings like a missing RETURN, shadowed variables, dead code, never read (unused) function's parameters, unmodified variables, modified auto variables, ..
performance_warnings boolean DEFAULT false - performance related warnings like a declared type with a type modifier, casting, implicit casts in a where clause (can be a reason why an index is not used), ..
security_warnings boolean DEFAULT false - security related checks like SQL injection vulnerability detection
anyelementtype regtype DEFAULT 'int' - a real type used instead of the anyelement type
anyenumtype regtype DEFAULT '-' - a real type used instead of the anyenum type
anyrangetype regtype DEFAULT 'int4range' - a real type used instead of the anyrange type
anycompatibletype DEFAULT 'int' - a real type used instead of the anycompatible type
anycompatiblerangetype DEFAULT 'int4range' - a real type used instead of the anycompatible range type
without_warnings DEFAULT false - disable all warnings
all_warnings DEFAULT false - enable all warnings
newtable DEFAULT NULL, oldtable DEFAULT NULL - the names of the NEW or OLD transition tables. These parameters are required when transition tables are used.
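For example, a single check that turns on every warning class listed above could look like this (f1() is the function from the earlier example):
select * from plpgsql_check_function('f1()',
                                     fatal_errors := false,
                                     other_warnings := true,
                                     extra_warnings := true,
                                     performance_warnings := true,
                                     security_warnings := true);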
When you want to check a trigger, you have to supply the relation that will be used together with the trigger function:
CREATE TABLE bar(a int, b int);
postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
1 AS $function$
2 BEGIN
3 NEW.c := NEW.a + NEW.b;
4 RETURN NEW;
5 END;
6 $function$
Missing relation specification
postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR: missing trigger relation
HINT: Trigger relation oid must be valid
Correct trigger checking (with specified relation)
postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
plpgsql_check_function
--------------------------------------------------------
error:42703:3:assignment:record "new" has no field "c"
(1 row)
For triggers with transition tables you can set the oldtable or newtable parameters:
create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
if false then
-- should be ok;
select count(*) from newtab into x;
-- should fail;
select count(*) from newtab where d = 10 into x;
end if;
return null;
end;
$$ language plpgsql;
select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');
You can use plpgsql_check_function for mass checking of functions and triggers. Please test the following queries:
-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
JOIN pg_catalog.pg_language l ON p.prolang = l.oid
WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;
or
SELECT p.proname, tgrelid::regclass, cf.*
FROM pg_proc p
JOIN pg_trigger t ON t.tgfoid = p.oid
JOIN pg_language l ON p.prolang = l.oid
JOIN pg_namespace n ON p.pronamespace = n.oid,
LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
WHERE n.nspname = 'public' and l.lanname = 'plpgsql'
or
-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
(pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
(pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
(pcf)."position", (pcf).query, (pcf).context
FROM
(
SELECT
plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
FROM pg_proc
LEFT JOIN pg_trigger
ON (pg_trigger.tgfoid = pg_proc.oid)
WHERE
prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
-- ignore unused triggers
(pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
pg_trigger.tgfoid IS NOT NULL)
OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno
Passive mode
Functions can be checked at start - the plpgsql_check module must be loaded.
plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false
The default mode is by_function, which means that the enhanced check is done only in active mode - by plpgsql_check_function. fresh_start means a cold start.
You can enable the passive mode by:
load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';
SELECT fx(10); -- run functions - function is checked before runtime starts it
Limits
plpgsql_check should find almost all errors in really static code. When a developer uses some of PLpgSQL's dynamic features, like dynamic SQL or the record data type, false positives are possible. These should be rare - in well-written code - and then the affected function should be redesigned or plpgsql_check should be disabled for this function.
CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
FOR r IN EXECUTE 'SELECT * FROM t1'
LOOP
RAISE NOTICE '%', r.c;
END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;
Usage of plpgsql_check adds a small overhead (when passive mode is enabled), so you should use it only in development or pre-production environments.
This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries, so plpgsql_check cannot set the correct type for record variables and cannot check dependent SQL statements and expressions.
When the type of a record variable is not known, you can assign it explicitly with the pragma type:
DECLARE r record;
BEGIN
EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
IF NOT r.processed THEN
...
Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. This check can raise false alarms too - typically when a variable is sanitized by another command or when the value is of some composite type.
plpgsql_check cannot detect the structure of referenced cursors. A cursor reference in PLpgSQL is implemented as the name of a global cursor. At check time the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis. PLpgSQL cannot set the correct type for record variables and cannot check dependent SQL statements and expressions. The solution is the same as for dynamic SQL: don't use a record variable as the target when you use the refcursor type, or disable plpgsql_check for these functions.
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var record;
BEGIN
FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error
In this case a record type should not be used (use a known rowtype instead):
CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
rec_var some_rowtype;
BEGIN
FETCH refcur_var INTO rec_var;
RAISE NOTICE '%', rec_var;
plpgsql_check cannot verify queries over temporary tables that are created at runtime inside a plpgsql function. For this use case it is necessary to create a fake temp table or disable plpgsql_check for this function.
In reality, temp tables are stored in their own (per session) schema with higher priority than persistent tables, so you can safely use the following trick:
CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
RAISE EXCEPTION SQLSTATE '42P01'
USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
TG_TABLE_NAME);
RETURN NULL;
END;
$function$;
CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
BEFORE INSERT OR UPDATE OR DELETE ON foo
EXECUTE PROCEDURE disable_dml();
postgres=# INSERT INTO foo VALUES(10,20);
ERROR: this instance of foo table doesn't allow any DML operation
HINT: you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=# CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1
This trick partially emulates GLOBAL TEMP tables and allows static validation. Another possibility is using a template foreign data wrapper (https://github.com/okbob/template_fdw).
You can use the pragma table and create an ephemeral table:
BEGIN
CREATE TEMP TABLE xxx(a int);
PERFORM plpgsql_check_pragma('table: xxx(a int)');
INSERT INTO xxx VALUES(10);
Dependency list
The function plpgsql_show_dependency_tb can show all functions, operators and relations used inside the processed function:
postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type │ oid │ schema │ name │ params │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ ** │ (integer,integer) │
│ RELATION │ 36005 │ public │ myview │ │
│ RELATION │ 36002 │ public │ mytable │ │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(4 rows)
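Because the result is an ordinary table function, it can be joined against the system catalogs; a hedged sketch (the relation name mytable is only an example) that lists every plpgsql function referencing a given table:
select p.oid::regprocedure as function
  from pg_proc p
  join pg_language l on l.oid = p.prolang
 cross join lateral plpgsql_show_dependency_tb(p.oid) d
 where l.lanname = 'plpgsql'
   and p.prorettype <> 2279       -- skip trigger functions, as in the mass check above
   and d.type = 'RELATION'
   and d.name = 'mytable';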
Profiler
plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory, depending on the shared_preload_libraries configuration. When plpgsql_check is initialized by shared_preload_libraries, it can allocate shared memory, and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.
Due to dependencies, shared_preload_libraries should contain plpgsql first:
postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check │
└──────────────────────────┘
(1 row)
The profiler is active when GUC plpgsql_check.profiler
is on. The profiler doesn't require shared memory, but without shared memory the profile is limited to the active session only.
Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: View license