Generating and Consuming REST APIs with Spring Boot 2, Angular 7 and Swagger 2

Originally published by Swathi Prasad at Medium 

Imagine you want to integrate backend APIs or third-party APIs with minimal effort. How do you pull it off?

Integrating backend APIs manually can be time-consuming and error-prone. This is where Swagger comes into the picture. Swagger is the most popular framework for generating and consuming the OpenAPI specification, and it greatly relieves the burden of documenting and interacting with APIs.

In this article, we are going to generate API documentation from a Spring Boot REST API and then generate an Angular API client from that documentation using Swagger.

Tools and Technologies Used

  • Spring Boot — 2.1.4.RELEASE
  • JDK — OpenJDK 12
  • Spring Framework — 5.1.6.RELEASE
  • JPA
  • Hibernate
  • H2 in-memory database
  • Swagger — 2
  • springfox-swagger2 — 2.9.2
  • springfox-swagger-ui — 2.9.2
  • Swagger Codegen CLI — 2.4.4
  • IDE — IntelliJ and Visual Studio Code
  • Angular CLI — 7.2.0
  • Node — 8.11.3
  • NPM — 6.3.0

Generating Swagger API Documentation

Create a sample Spring Boot application. Here is my sample project structure. I have created the project manually, but you could also create it using Spring Initializr.


I will go over each package in a bit.

Spring Boot Project Structure

There are several implementations of Swagger 2 that adhere to the OpenAPI specification, and Springfox is one of them. Currently, Springfox supports only Swagger 1.2 and Swagger 2.0. Add the Springfox dependencies as follows:

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>

Here is the complete Maven POM.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.swathisprasad</groupId>
    <artifactId>springboot-swagger</artifactId>
    <version>1.0-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.4.RELEASE</version>
        <relativePath />
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-tomcat</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-undertow</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger2</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger-ui</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

Let us first enable Swagger support in the application using the @EnableSwagger2 annotation. Notice that we are specifying the base package "com.swathisprasad.springboot". This restricts the generated API documentation to our own packages.

If we did not provide a base package, Swagger would also generate documentation for the Spring Boot Actuator endpoints, which may not be of interest to us.

package com.swathisprasad.springboot.swagger;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2).select()
                .apis(RequestHandlerSelectors
                        .basePackage("com.swathisprasad.springboot"))
                .paths(PathSelectors.any())
                .build().apiInfo(apiEndPointsInfo());
    }

    private ApiInfo apiEndPointsInfo() {
        return new ApiInfoBuilder().title("Spring Boot REST API")
                .description("Language Management REST API")
                .contact(new Contact("Swathi Prasad", "www.techshard.com", "techshard08@gmail.com"))
                .license("Apache 2.0")
                .licenseUrl("http://www.apache.org/licenses/LICENSE-2.0.html")
                .version("1.0-SNAPSHOT")
                .build();
    }
}

Before we create an API endpoint, let’s create a JPA entity Language and a corresponding JPA repository ILanguageRepository. I have also created a DataInitializer class to create mock data at server startup.

package com.swathisprasad.springboot.dao.entity;

import javax.persistence.*;

@Entity
public class Language {

    @Id
    @Column(name = "ID", nullable = false)
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String name;

    public Language() {
    }

    public String getName() {
        return name;
    }

    public void setName(final String name) {
        this.name = name;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        result = prime * result + (int) (id ^ (id >>> 32));
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (!(obj instanceof Language))
            return false;
        final Language other = (Language) obj;
        if (name == null) {
            if (other.name != null)
                return false;
        } else if (!name.equals(other.name))
            return false;
        return true;
    }
}

package com.swathisprasad.springboot.dao.repository;

import com.swathisprasad.springboot.dao.entity.Language;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface ILanguageRepository extends JpaRepository<Language, Long> {
}

package com.swathisprasad.springboot;

import com.swathisprasad.springboot.dao.entity.Language;
import com.swathisprasad.springboot.dao.repository.ILanguageRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class DataInitializer {

    private final Logger logger = LoggerFactory.getLogger(DataInitializer.class);

    @Autowired
    ILanguageRepository languageRepository;

    public void initData() {
        try {
            Language language = new Language();
            language.setName("Java");
            languageRepository.save(language);

            language = new Language();
            language.setName("JavaScript");
            languageRepository.save(language);

            language = new Language();
            language.setName("C++");
            languageRepository.save(language);

            language = new Language();
            language.setName("Groovy");
            languageRepository.save(language);

            language = new Language();
            language.setName("Python");
            languageRepository.save(language);

            language = new Language();
            language.setName("Swift");
            languageRepository.save(language);
        } catch (final Exception ex) {
            logger.error("Exception while inserting mock data", ex);
        }
    }
}
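Note that initData() is not invoked anywhere in the snippets shown so far, and the article does not list the Application class it refers to later. The following sketch is only an assumption of how the initializer could be wired to run at server startup; the class name and the CommandLineRunner bean are illustrative, not taken from the original project.

package com.swathisprasad.springboot;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {

    public static void main(final String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // Assumed wiring: call the DataInitializer once the application context is ready,
    // so the mock languages end up in the H2 database before the first request.
    @Bean
    public CommandLineRunner dataLoader(final DataInitializer dataInitializer) {
        return args -> dataInitializer.initData();
    }
}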

Create a REST controller which contains an API endpoint as shown below. We can customize the API documentation using the @Api, @ApiOperation and @ApiResponse annotations provided by Springfox.

package com.swathisprasad.springboot.controller;

import com.swathisprasad.springboot.dao.entity.Language;
import com.swathisprasad.springboot.dao.repository.ILanguageRepository;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
@Api(value = "Language API", description = "Operations pertaining to Language")
@RequestMapping("/api")
public class LanguageController {

    @Autowired
    ILanguageRepository languageRepository;

    @ApiOperation(value = "View a list of available languages", response = Iterable.class)
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Successfully retrieved list"),
            @ApiResponse(code = 401, message = "You are not authorized to view the resource"),
            @ApiResponse(code = 403, message = "Accessing the resource you were trying to reach is forbidden"),
            @ApiResponse(code = 404, message = "The resource you were trying to reach is not found")
    })
    @GetMapping(value = "/languages", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<List<Language>> getLanguages() {
        final List<Language> languages = languageRepository.findAll();
        return new ResponseEntity<>(languages, HttpStatus.OK);
    }
}

Now, let us run the Application class as a Java application. Note that we will see the warning below when the Undertow server starts. Undertow complains about illegal reflective access by XNIO on Java 9+, and the issue has been reported here.

WARNING: An illegal reflective access operation has occurred

Once the server has started successfully, open http://localhost:8080/v2/api-docs in the browser. We will see the Swagger documentation; save it as JSON (for example, as api-docs.json) on your file system.

This documentation adheres to the OpenAPI 2.0 specification.

The Swagger UI can be viewed at http://localhost:8080/swagger-ui.html .

NOTE: If you wish to produce an OpenAPI 3.0 spec, the spec generated above can be imported into SwaggerHub and converted there.

Generating Angular API Clients with Swagger

The biggest advantage of generated API clients is that they save a lot of time otherwise spent writing Angular services and models by hand. Let us see how this can be achieved.


In this tutorial, we will use the Swagger Codegen library to generate the API client. The library can be downloaded directly from the Maven repository.

Swagger Codegen 2.x supports OpenAPI Spec 2.0, while 3.x supports both OpenAPI Spec 2.0 and 3.0. Since our Swagger spec is compliant with OpenAPI Spec 2.0, we will use Swagger Codegen 2.x to generate the API client.

Download the jar file from the Maven repository. Open the command prompt or terminal and run the following command. If you are on Windows, remove the backslashes and write the entire command on a single line.

java -jar swagger-codegen-cli-2.4.4.jar generate -i api-docs.json \
    -l typescript-angular \
    -o angular-client \
    --additional-properties npmName=@techshard/language-rest-api,snapshot=true,ngVersion=7.2.0
  • -i or --input-spec defines the location of the input Swagger spec, as a URL or file (required)
  • -l or --lang defines the client language to generate (required)
  • -o or --output defines the output directory where the generated files should be written (current directory by default)
  • -c or --config defines the path to an additional JSON configuration file. The supported options can differ for each language. We will look at this argument in the next paragraph.

Note that --additional-properties is added to generate the client as an Angular package. These properties can also be provided via a config file through the -c or --config parameter.

The detailed explanation can be found via the following command.

java -jar swagger-codegen-cli.jar help generate

The generated file structure would be as follows:

Here, the .ng_pkg_build directory and ng-package.json can be ignored for now. They are not generated by Swagger Codegen.

This is a complete Angular project with all the required configuration files and TypeScript files to create an Angular (ng) package. I have published the generated Angular package to npm. Let us see how to publish it to npm.

Publish Angular Package to NPM

Log in to npm, creating a new user account first if you don't have one. In the terminal or command prompt, type npm adduser and provide your npm username and password. This authorizes your machine to publish npm packages.


Run npm install in the generated package directory. Then, run the npm run build command.

We might have to install the tsickle module, which is required by the package build tooling. Install the module if necessary.

Finally, run the npm publish dist --access=public command to publish the package to the npm registry.

Here, --access=public makes the package public. By default, the access would be set to restricted.

Once it is published, you will find your package under your account as below.

Consuming API Clients with Swagger

Now that we have generated the API client and uploaded it to npm, we will create a small Angular project through the Angular CLI and include the generated API client.


Run the following command in the terminal to create a new Angular project.

ng new angular7-swagger

Once the project is created, install the following npm module within the project.

npm install @techshard/language-rest-api

Here, language-rest-api is the npm package that contains our generated Angular API client.

Edit the app.module.ts file and include this npm package. Here is the complete file.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { ApiModule, BASE_PATH } from '@techshard/language-rest-api';
import { HttpClientModule } from '@angular/common/http';
import { environment } from '../environments/environment';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { LanguageComponent } from './language/language.component';

@NgModule({
  declarations: [
    AppComponent,
    LanguageComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    ApiModule,
    HttpClientModule
  ],
  providers: [{ provide: BASE_PATH, useValue: environment.API_BASE_PATH }],
  bootstrap: [AppComponent]
})
export class AppModule { }
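The providers entry above reads environment.API_BASE_PATH, which does not exist in a freshly generated Angular environment file, so it has to be added by hand. Below is a minimal sketch of src/environments/environment.ts; the property name only has to match what app.module.ts reads, and the URL is an assumption based on the Spring Boot backend running locally on its default port.

// src/environments/environment.ts (sketch; the backend URL is an assumption)
export const environment = {
  production: false,
  // Base URL of the Spring Boot REST API started earlier
  API_BASE_PATH: 'http://localhost:8080'
};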

Let's create a new component "language" and import the generated service and data model into this component. Then, we will call the API to retrieve the list of languages.

import { Component, OnInit } from '@angular/core';
import { LanguageControllerService, Language } from '@techshard/language-rest-api';

@Component({
  selector: 'app-language',
  templateUrl: './language.component.html',
  styleUrls: ['./language.component.scss']
})
export class LanguageComponent implements OnInit {

  languages: Language[] = [];

  constructor(private languageControllerService: LanguageControllerService) { }

  ngOnInit() {
    this.languageControllerService.getLanguagesUsingGET().subscribe(res => {
      for (const i in res) {
        this.languages.push(res[i]);
      }
    });
  }
}

The language.component.html should be modified as follows to display the list of languages.

<p *ngFor="let l of languages; index as i" class="panel-body">
  <a class="text-success">{{l.name}}</a>
</p>

Now, edit the app.component.html and add the language template. The complete HTML file is here:

<div class="container blue">
  <div class="page-header">
    <nav class="navbar navbar-inverse">
      <div class="container-fluid">
        <div class="navbar-header">
          <a class="navbar-brand" href="https://en.wikipedia.org/wiki/List_of_programming_languages">
            List of programming languages</a>
        </div>
      </div>
    </nav>
  </div>
  <div class="col-md-2">
    <app-language></app-language>
  </div>
  <div class="col-md-9">
    <router-outlet></router-outlet>
  </div>
</div>

Note that I have installed Bootstrap 4 and imported it into the project.

Running the Angular App

Run the following command in the terminal.


ng serve

Once the project has compiled successfully, open http://localhost:4200 in the browser and we will see the list of languages as follows:

Conclusion

Congratulations! We have learned how to automatically generate and consume API client code. This approach reduces errors and eliminates a lot of boilerplate code. It can also be very useful for developing micro frontends.


The complete source code can be found on my GitHub repository.

I hope you enjoyed my article. Feel free to leave any comments or suggestions.

Happy learning!


Veronica  Roob

Veronica Roob

1653475560

A Pure PHP Implementation Of The MessagePack Serialization Format

msgpack.php

A pure PHP implementation of the MessagePack serialization format.

Features

Installation

The recommended way to install the library is through Composer:

composer require rybakit/msgpack

Usage

Packing

To pack values you can either use an instance of a Packer:

$packer = new Packer();
$packed = $packer->pack($value);

or call a static method on the MessagePack class:

$packed = MessagePack::pack($value);

In the examples above, the method pack automatically packs a value depending on its type. However, not all PHP types can be uniquely translated to MessagePack types. For example, the MessagePack format defines map and array types, which are represented by a single array type in PHP. By default, the packer will pack a PHP array as a MessagePack array if it has sequential numeric keys, starting from 0 and as a MessagePack map otherwise:

$mpArr1 = $packer->pack([1, 2]);               // MP array [1, 2]
$mpArr2 = $packer->pack([0 => 1, 1 => 2]);     // MP array [1, 2]
$mpMap1 = $packer->pack([0 => 1, 2 => 3]);     // MP map {0: 1, 2: 3}
$mpMap2 = $packer->pack([1 => 2, 2 => 3]);     // MP map {1: 2, 2: 3}
$mpMap3 = $packer->pack(['a' => 1, 'b' => 2]); // MP map {a: 1, b: 2}

However, sometimes you need to pack a sequential array as a MessagePack map. To do this, use the packMap method:

$mpMap = $packer->packMap([1, 2]); // {0: 1, 1: 2}

Here is a list of type-specific packing methods:

$packer->packNil();           // MP nil
$packer->packBool(true);      // MP bool
$packer->packInt(42);         // MP int
$packer->packFloat(M_PI);     // MP float (32 or 64)
$packer->packFloat32(M_PI);   // MP float 32
$packer->packFloat64(M_PI);   // MP float 64
$packer->packStr('foo');      // MP str
$packer->packBin("\x80");     // MP bin
$packer->packArray([1, 2]);   // MP array
$packer->packMap(['a' => 1]); // MP map
$packer->packExt(1, "\xaa");  // MP ext

Check the "Custom types" section below on how to pack custom types.

Packing options

The Packer object supports a number of bitmask-based options for fine-tuning the packing process (defaults are in bold):

NameDescription
FORCE_STRForces PHP strings to be packed as MessagePack UTF-8 strings
FORCE_BINForces PHP strings to be packed as MessagePack binary data
DETECT_STR_BINDetects MessagePack str/bin type automatically
  
FORCE_ARRForces PHP arrays to be packed as MessagePack arrays
FORCE_MAPForces PHP arrays to be packed as MessagePack maps
DETECT_ARR_MAPDetects MessagePack array/map type automatically
  
FORCE_FLOAT32Forces PHP floats to be packed as 32-bits MessagePack floats
FORCE_FLOAT64Forces PHP floats to be packed as 64-bits MessagePack floats

The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead which can be noticed when you pack large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only work with UTF-8 strings or/and associative arrays), you can eliminate this overhead by forcing the packer to use the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly specify the value type. The library provides 2 auxiliary classes for this, Map and Bin. Check the "Custom types" section below for details.

Examples:

// detect str/bin type and pack PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::DETECT_STR_BIN | PackOptions::FORCE_FLOAT32);

// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);

Unpacking

To unpack data you can either use an instance of a BufferUnpacker:

$unpacker = new BufferUnpacker();

$unpacker->reset($packed);
$value = $unpacker->unpack();

or call a static method on the MessagePack class:

$value = MessagePack::unpack($packed);

If the packed data is received in chunks (e.g. when reading from a stream), use the tryUnpack method, which attempts to unpack data and returns an array of unpacked messages (if any) instead of throwing an InsufficientDataException:

while ($chunk = ...) {
    $unpacker->append($chunk);
    if ($messages = $unpacker->tryUnpack()) {
        return $messages;
    }
}

If you want to unpack from a specific position in a buffer, use seek:

$unpacker->seek(42); // set position equal to 42 bytes
$unpacker->seek(-8); // set position to 8 bytes before the end of the buffer

To skip bytes from the current position, use skip:

$unpacker->skip(10); // set position to 10 bytes ahead of the current position

To get the number of remaining (unread) bytes in the buffer:

$unreadBytesCount = $unpacker->getRemainingCount();

To check whether the buffer has unread data:

$hasUnreadBytes = $unpacker->hasRemaining();

If needed, you can remove already read data from the buffer by calling:

$releasedBytesCount = $unpacker->release();

With the read method you can read raw (packed) data:

$packedData = $unpacker->read(2); // read 2 bytes

Besides the above methods BufferUnpacker provides type-specific unpacking methods, namely:

$unpacker->unpackNil();   // PHP null
$unpacker->unpackBool();  // PHP bool
$unpacker->unpackInt();   // PHP int
$unpacker->unpackFloat(); // PHP float
$unpacker->unpackStr();   // PHP UTF-8 string
$unpacker->unpackBin();   // PHP binary string
$unpacker->unpackArray(); // PHP sequential array
$unpacker->unpackMap();   // PHP associative array
$unpacker->unpackExt();   // PHP MessagePack\Type\Ext object

Unpacking options

The BufferUnpacker object supports a number of bitmask-based options for fine-tuning the unpacking process (defaults are in bold):

NameDescription
BIGINT_AS_STRConverts overflowed integers to strings [1]
BIGINT_AS_GMPConverts overflowed integers to GMP objects [2]
BIGINT_AS_DECConverts overflowed integers to Decimal\Decimal objects [3]

1. The binary MessagePack format has unsigned 64-bit as its largest integer data type, but PHP does not support such integers, which means that an overflow can occur during unpacking.

2. Make sure the GMP extension is enabled.

3. Make sure the Decimal extension is enabled.

Examples:

$packedUint64 = "\xcf"."\xff\xff\xff\xff"."\xff\xff\xff\xff";

$unpacker = new BufferUnpacker($packedUint64);
var_dump($unpacker->unpack()); // string(20) "18446744073709551615"

$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_GMP);
var_dump($unpacker->unpack()); // object(GMP) {...}

$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_DEC);
var_dump($unpacker->unpack()); // object(Decimal\Decimal) {...}

Custom types

In addition to the basic types, the library provides functionality to serialize and deserialize arbitrary types. This can be done in several ways, depending on your use case. Let's take a look at them.

Type objects

If you need to serialize an instance of one of your classes into one of the basic MessagePack types, the best way to do this is to implement the CanBePacked interface in the class. A good example of such a class is the Map type class that comes with the library. This type is useful when you want to explicitly specify that a given PHP array should be packed as a MessagePack map without triggering an automatic type detection routine:

$packer = new Packer();

$packedMap = $packer->pack(new Map([1, 2, 3]));
$packedArray = $packer->pack([1, 2, 3]);

More type examples can be found in the src/Type directory.

Type transformers

As with type objects, type transformers are only responsible for serializing values. They should be used when you need to serialize a value that does not implement the CanBePacked interface. Examples of such values could be instances of built-in or third-party classes that you don't own, or non-objects such as resources.

A transformer class must implement the CanPack interface. To use a transformer, it must first be registered in the packer. Here is an example of how to serialize PHP streams into the MessagePack bin format type using one of the supplied transformers, StreamTransformer:

$packer = new Packer(null, [new StreamTransformer()]);

$packedBin = $packer->pack(fopen('/path/to/file', 'r+'));

More type transformer examples can be found in the src/TypeTransformer directory.

Extensions

In contrast to the cases described above, extensions are intended to handle extension types and are responsible for both serialization and deserialization of values (types).

An extension class must implement the Extension interface. To use an extension, it must first be registered in the packer and the unpacker.

The MessagePack specification divides extension types into two groups: predefined and application-specific. Currently, there is only one predefined type in the specification, Timestamp.

Timestamp

The Timestamp extension type is a predefined type. Support for this type in the library is done through the TimestampExtension class. This class is responsible for handling Timestamp objects, which represent the number of seconds and optional adjustment in nanoseconds:

$timestampExtension = new TimestampExtension();

$packer = new Packer();
$packer = $packer->extendWith($timestampExtension);

$unpacker = new BufferUnpacker();
$unpacker = $unpacker->extendWith($timestampExtension);

$packedTimestamp = $packer->pack(Timestamp::now());
$timestamp = $unpacker->reset($packedTimestamp)->unpack();

$seconds = $timestamp->getSeconds();
$nanoseconds = $timestamp->getNanoseconds();

When using the MessagePack class, the Timestamp extension is already registered:

$packedTimestamp = MessagePack::pack(Timestamp::now());
$timestamp = MessagePack::unpack($packedTimestamp);

Application-specific extensions

In addition, the format can be extended with your own types. For example, to make the built-in PHP DateTime objects first-class citizens in your code, you can create a corresponding extension, as shown in the example. Please note, that custom extensions have to be registered with a unique extension ID (an integer from 0 to 127).

More extension examples can be found in the examples/MessagePack directory.

To learn more about how extension types can be useful, check out this article.

Exceptions

If an error occurs during packing/unpacking, a PackingFailedException or an UnpackingFailedException will be thrown, respectively. In addition, an InsufficientDataException can be thrown during unpacking.

An InvalidOptionException will be thrown in case an invalid option (or a combination of mutually exclusive options) is used.

Tests

Run tests as follows:

vendor/bin/phpunit

Also, if you already have Docker installed, you can run the tests in a docker container. First, create a container:

./dockerfile.sh | docker build -t msgpack -

The command above will create a container named msgpack with PHP 8.1 runtime. You may change the default runtime by defining the PHP_IMAGE environment variable:

PHP_IMAGE='php:8.0-cli' ./dockerfile.sh | docker build -t msgpack -

See a list of various images here.

Then run the unit tests:

docker run --rm -v $PWD:/msgpack -w /msgpack msgpack

Fuzzing

To ensure that the unpacking works correctly with malformed/semi-malformed data, you can use a testing technique called Fuzzing. The library ships with a help file (target) for PHP-Fuzzer and can be used as follows:

php-fuzzer fuzz tests/fuzz_buffer_unpacker.php

Performance

To check performance, run:

php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

=============================================
Test/Target            Packer  BufferUnpacker
---------------------------------------------
nil .................. 0.0030 ........ 0.0139
false ................ 0.0037 ........ 0.0144
true ................. 0.0040 ........ 0.0137
7-bit uint #1 ........ 0.0052 ........ 0.0120
7-bit uint #2 ........ 0.0059 ........ 0.0114
7-bit uint #3 ........ 0.0061 ........ 0.0119
5-bit sint #1 ........ 0.0067 ........ 0.0126
5-bit sint #2 ........ 0.0064 ........ 0.0132
5-bit sint #3 ........ 0.0066 ........ 0.0135
8-bit uint #1 ........ 0.0078 ........ 0.0200
8-bit uint #2 ........ 0.0077 ........ 0.0212
8-bit uint #3 ........ 0.0086 ........ 0.0203
16-bit uint #1 ....... 0.0111 ........ 0.0271
16-bit uint #2 ....... 0.0115 ........ 0.0260
16-bit uint #3 ....... 0.0103 ........ 0.0273
32-bit uint #1 ....... 0.0116 ........ 0.0326
32-bit uint #2 ....... 0.0118 ........ 0.0332
32-bit uint #3 ....... 0.0127 ........ 0.0325
64-bit uint #1 ....... 0.0140 ........ 0.0277
64-bit uint #2 ....... 0.0134 ........ 0.0294
64-bit uint #3 ....... 0.0134 ........ 0.0281
8-bit int #1 ......... 0.0086 ........ 0.0241
8-bit int #2 ......... 0.0089 ........ 0.0225
8-bit int #3 ......... 0.0085 ........ 0.0229
16-bit int #1 ........ 0.0118 ........ 0.0280
16-bit int #2 ........ 0.0121 ........ 0.0270
16-bit int #3 ........ 0.0109 ........ 0.0274
32-bit int #1 ........ 0.0128 ........ 0.0346
32-bit int #2 ........ 0.0118 ........ 0.0339
32-bit int #3 ........ 0.0135 ........ 0.0368
64-bit int #1 ........ 0.0138 ........ 0.0276
64-bit int #2 ........ 0.0132 ........ 0.0286
64-bit int #3 ........ 0.0137 ........ 0.0274
64-bit int #4 ........ 0.0180 ........ 0.0285
64-bit float #1 ...... 0.0134 ........ 0.0284
64-bit float #2 ...... 0.0125 ........ 0.0275
64-bit float #3 ...... 0.0126 ........ 0.0283
fix string #1 ........ 0.0035 ........ 0.0133
fix string #2 ........ 0.0094 ........ 0.0216
fix string #3 ........ 0.0094 ........ 0.0222
fix string #4 ........ 0.0091 ........ 0.0241
8-bit string #1 ...... 0.0122 ........ 0.0301
8-bit string #2 ...... 0.0118 ........ 0.0304
8-bit string #3 ...... 0.0119 ........ 0.0315
16-bit string #1 ..... 0.0150 ........ 0.0388
16-bit string #2 ..... 0.1545 ........ 0.1665
32-bit string ........ 0.1570 ........ 0.1756
wide char string #1 .. 0.0091 ........ 0.0236
wide char string #2 .. 0.0122 ........ 0.0313
8-bit binary #1 ...... 0.0100 ........ 0.0302
8-bit binary #2 ...... 0.0123 ........ 0.0324
8-bit binary #3 ...... 0.0126 ........ 0.0327
16-bit binary ........ 0.0168 ........ 0.0372
32-bit binary ........ 0.1588 ........ 0.1754
fix array #1 ......... 0.0042 ........ 0.0131
fix array #2 ......... 0.0294 ........ 0.0367
fix array #3 ......... 0.0412 ........ 0.0472
16-bit array #1 ...... 0.1378 ........ 0.1596
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.1865 ........ 0.2283
fix map #1 ........... 0.0725 ........ 0.1048
fix map #2 ........... 0.0319 ........ 0.0405
fix map #3 ........... 0.0356 ........ 0.0665
fix map #4 ........... 0.0465 ........ 0.0497
16-bit map #1 ........ 0.2540 ........ 0.3028
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.2372 ........ 0.2710
fixext 1 ............. 0.0283 ........ 0.0358
fixext 2 ............. 0.0291 ........ 0.0371
fixext 4 ............. 0.0302 ........ 0.0355
fixext 8 ............. 0.0288 ........ 0.0384
fixext 16 ............ 0.0293 ........ 0.0359
8-bit ext ............ 0.0302 ........ 0.0439
16-bit ext ........... 0.0334 ........ 0.0499
32-bit ext ........... 0.1845 ........ 0.1888
32-bit timestamp #1 .. 0.0337 ........ 0.0547
32-bit timestamp #2 .. 0.0335 ........ 0.0560
64-bit timestamp #1 .. 0.0371 ........ 0.0575
64-bit timestamp #2 .. 0.0374 ........ 0.0542
64-bit timestamp #3 .. 0.0356 ........ 0.0533
96-bit timestamp #1 .. 0.0362 ........ 0.0699
96-bit timestamp #2 .. 0.0381 ........ 0.0701
96-bit timestamp #3 .. 0.0367 ........ 0.0687
=============================================
Total                  2.7618          4.0820
Skipped                     4               4
Failed                      0               0
Ignored                     0               0

With JIT:

php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

=============================================
Test/Target            Packer  BufferUnpacker
---------------------------------------------
nil .................. 0.0005 ........ 0.0054
false ................ 0.0004 ........ 0.0059
true ................. 0.0004 ........ 0.0059
7-bit uint #1 ........ 0.0010 ........ 0.0047
7-bit uint #2 ........ 0.0010 ........ 0.0046
7-bit uint #3 ........ 0.0010 ........ 0.0046
5-bit sint #1 ........ 0.0025 ........ 0.0046
5-bit sint #2 ........ 0.0023 ........ 0.0046
5-bit sint #3 ........ 0.0024 ........ 0.0045
8-bit uint #1 ........ 0.0043 ........ 0.0081
8-bit uint #2 ........ 0.0043 ........ 0.0079
8-bit uint #3 ........ 0.0041 ........ 0.0080
16-bit uint #1 ....... 0.0064 ........ 0.0095
16-bit uint #2 ....... 0.0064 ........ 0.0091
16-bit uint #3 ....... 0.0064 ........ 0.0094
32-bit uint #1 ....... 0.0085 ........ 0.0114
32-bit uint #2 ....... 0.0077 ........ 0.0122
32-bit uint #3 ....... 0.0077 ........ 0.0120
64-bit uint #1 ....... 0.0085 ........ 0.0159
64-bit uint #2 ....... 0.0086 ........ 0.0157
64-bit uint #3 ....... 0.0086 ........ 0.0158
8-bit int #1 ......... 0.0042 ........ 0.0080
8-bit int #2 ......... 0.0042 ........ 0.0080
8-bit int #3 ......... 0.0042 ........ 0.0081
16-bit int #1 ........ 0.0065 ........ 0.0095
16-bit int #2 ........ 0.0065 ........ 0.0090
16-bit int #3 ........ 0.0056 ........ 0.0085
32-bit int #1 ........ 0.0067 ........ 0.0107
32-bit int #2 ........ 0.0066 ........ 0.0106
32-bit int #3 ........ 0.0063 ........ 0.0104
64-bit int #1 ........ 0.0072 ........ 0.0162
64-bit int #2 ........ 0.0073 ........ 0.0174
64-bit int #3 ........ 0.0072 ........ 0.0164
64-bit int #4 ........ 0.0077 ........ 0.0161
64-bit float #1 ...... 0.0053 ........ 0.0135
64-bit float #2 ...... 0.0053 ........ 0.0135
64-bit float #3 ...... 0.0052 ........ 0.0135
fix string #1 ....... -0.0002 ........ 0.0044
fix string #2 ........ 0.0035 ........ 0.0067
fix string #3 ........ 0.0035 ........ 0.0077
fix string #4 ........ 0.0033 ........ 0.0078
8-bit string #1 ...... 0.0059 ........ 0.0110
8-bit string #2 ...... 0.0063 ........ 0.0121
8-bit string #3 ...... 0.0064 ........ 0.0124
16-bit string #1 ..... 0.0099 ........ 0.0146
16-bit string #2 ..... 0.1522 ........ 0.1474
32-bit string ........ 0.1511 ........ 0.1483
wide char string #1 .. 0.0039 ........ 0.0084
wide char string #2 .. 0.0073 ........ 0.0123
8-bit binary #1 ...... 0.0040 ........ 0.0112
8-bit binary #2 ...... 0.0075 ........ 0.0123
8-bit binary #3 ...... 0.0077 ........ 0.0129
16-bit binary ........ 0.0096 ........ 0.0145
32-bit binary ........ 0.1535 ........ 0.1479
fix array #1 ......... 0.0008 ........ 0.0061
fix array #2 ......... 0.0121 ........ 0.0165
fix array #3 ......... 0.0193 ........ 0.0222
16-bit array #1 ...... 0.0607 ........ 0.0479
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.0749 ........ 0.0824
fix map #1 ........... 0.0329 ........ 0.0431
fix map #2 ........... 0.0161 ........ 0.0189
fix map #3 ........... 0.0205 ........ 0.0262
fix map #4 ........... 0.0252 ........ 0.0205
16-bit map #1 ........ 0.1016 ........ 0.0927
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.1096 ........ 0.1030
fixext 1 ............. 0.0157 ........ 0.0161
fixext 2 ............. 0.0175 ........ 0.0183
fixext 4 ............. 0.0156 ........ 0.0185
fixext 8 ............. 0.0163 ........ 0.0184
fixext 16 ............ 0.0164 ........ 0.0182
8-bit ext ............ 0.0158 ........ 0.0207
16-bit ext ........... 0.0203 ........ 0.0219
32-bit ext ........... 0.1614 ........ 0.1539
32-bit timestamp #1 .. 0.0195 ........ 0.0249
32-bit timestamp #2 .. 0.0188 ........ 0.0260
64-bit timestamp #1 .. 0.0207 ........ 0.0281
64-bit timestamp #2 .. 0.0212 ........ 0.0291
64-bit timestamp #3 .. 0.0207 ........ 0.0295
96-bit timestamp #1 .. 0.0222 ........ 0.0358
96-bit timestamp #2 .. 0.0228 ........ 0.0353
96-bit timestamp #3 .. 0.0210 ........ 0.0319
=============================================
Total                  1.6432          1.9674
Skipped                     4               4
Failed                      0               0
Ignored                     0               0

You may change default benchmark settings by defining the following environment variables:

NameDefault
MP_BENCH_TARGETSpure_p,pure_u, see a list of available targets
MP_BENCH_ITERATIONS100_000
MP_BENCH_DURATIONnot set
MP_BENCH_ROUNDS3
MP_BENCH_TESTS-@slow, see a list of available tests

For example:

export MP_BENCH_TARGETS=pure_p
export MP_BENCH_ITERATIONS=1000000
export MP_BENCH_ROUNDS=5
# a comma separated list of test names
export MP_BENCH_TESTS='complex array, complex map'
# or a group name
# export MP_BENCH_TESTS='-@slow' // @pecl_comp
# or a regexp
# export MP_BENCH_TESTS='/complex (array|map)/'

Another example, benchmarking both the library and the PECL extension:

MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

===========================================================================
Test/Target            Packer  BufferUnpacker  msgpack_pack  msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0031 ........ 0.0141 ...... 0.0055 ........ 0.0064
false ................ 0.0039 ........ 0.0154 ...... 0.0056 ........ 0.0053
true ................. 0.0038 ........ 0.0139 ...... 0.0056 ........ 0.0044
7-bit uint #1 ........ 0.0061 ........ 0.0110 ...... 0.0059 ........ 0.0046
7-bit uint #2 ........ 0.0065 ........ 0.0119 ...... 0.0042 ........ 0.0029
7-bit uint #3 ........ 0.0054 ........ 0.0117 ...... 0.0045 ........ 0.0025
5-bit sint #1 ........ 0.0047 ........ 0.0103 ...... 0.0038 ........ 0.0022
5-bit sint #2 ........ 0.0048 ........ 0.0117 ...... 0.0038 ........ 0.0022
5-bit sint #3 ........ 0.0046 ........ 0.0102 ...... 0.0038 ........ 0.0023
8-bit uint #1 ........ 0.0063 ........ 0.0174 ...... 0.0039 ........ 0.0031
8-bit uint #2 ........ 0.0063 ........ 0.0167 ...... 0.0040 ........ 0.0029
8-bit uint #3 ........ 0.0063 ........ 0.0168 ...... 0.0039 ........ 0.0030
16-bit uint #1 ....... 0.0092 ........ 0.0222 ...... 0.0049 ........ 0.0030
16-bit uint #2 ....... 0.0096 ........ 0.0227 ...... 0.0042 ........ 0.0046
16-bit uint #3 ....... 0.0123 ........ 0.0274 ...... 0.0059 ........ 0.0051
32-bit uint #1 ....... 0.0136 ........ 0.0331 ...... 0.0060 ........ 0.0048
32-bit uint #2 ....... 0.0130 ........ 0.0336 ...... 0.0070 ........ 0.0048
32-bit uint #3 ....... 0.0127 ........ 0.0329 ...... 0.0051 ........ 0.0048
64-bit uint #1 ....... 0.0126 ........ 0.0268 ...... 0.0055 ........ 0.0049
64-bit uint #2 ....... 0.0135 ........ 0.0281 ...... 0.0052 ........ 0.0046
64-bit uint #3 ....... 0.0131 ........ 0.0274 ...... 0.0069 ........ 0.0044
8-bit int #1 ......... 0.0077 ........ 0.0236 ...... 0.0058 ........ 0.0044
8-bit int #2 ......... 0.0087 ........ 0.0244 ...... 0.0058 ........ 0.0048
8-bit int #3 ......... 0.0084 ........ 0.0241 ...... 0.0055 ........ 0.0049
16-bit int #1 ........ 0.0112 ........ 0.0271 ...... 0.0048 ........ 0.0045
16-bit int #2 ........ 0.0124 ........ 0.0292 ...... 0.0057 ........ 0.0049
16-bit int #3 ........ 0.0118 ........ 0.0270 ...... 0.0058 ........ 0.0050
32-bit int #1 ........ 0.0137 ........ 0.0366 ...... 0.0058 ........ 0.0051
32-bit int #2 ........ 0.0133 ........ 0.0366 ...... 0.0056 ........ 0.0049
32-bit int #3 ........ 0.0129 ........ 0.0350 ...... 0.0052 ........ 0.0048
64-bit int #1 ........ 0.0145 ........ 0.0254 ...... 0.0034 ........ 0.0025
64-bit int #2 ........ 0.0097 ........ 0.0214 ...... 0.0034 ........ 0.0025
64-bit int #3 ........ 0.0096 ........ 0.0287 ...... 0.0059 ........ 0.0050
64-bit int #4 ........ 0.0143 ........ 0.0277 ...... 0.0059 ........ 0.0046
64-bit float #1 ...... 0.0134 ........ 0.0281 ...... 0.0057 ........ 0.0052
64-bit float #2 ...... 0.0141 ........ 0.0281 ...... 0.0057 ........ 0.0050
64-bit float #3 ...... 0.0144 ........ 0.0282 ...... 0.0057 ........ 0.0050
fix string #1 ........ 0.0036 ........ 0.0143 ...... 0.0066 ........ 0.0053
fix string #2 ........ 0.0107 ........ 0.0222 ...... 0.0065 ........ 0.0068
fix string #3 ........ 0.0116 ........ 0.0245 ...... 0.0063 ........ 0.0069
fix string #4 ........ 0.0105 ........ 0.0253 ...... 0.0083 ........ 0.0077
8-bit string #1 ...... 0.0126 ........ 0.0318 ...... 0.0075 ........ 0.0088
8-bit string #2 ...... 0.0121 ........ 0.0295 ...... 0.0076 ........ 0.0086
8-bit string #3 ...... 0.0125 ........ 0.0293 ...... 0.0130 ........ 0.0093
16-bit string #1 ..... 0.0159 ........ 0.0368 ...... 0.0117 ........ 0.0086
16-bit string #2 ..... 0.1547 ........ 0.1686 ...... 0.1516 ........ 0.1373
32-bit string ........ 0.1558 ........ 0.1729 ...... 0.1511 ........ 0.1396
wide char string #1 .. 0.0098 ........ 0.0237 ...... 0.0066 ........ 0.0065
wide char string #2 .. 0.0128 ........ 0.0291 ...... 0.0061 ........ 0.0082
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0040 ........ 0.0129 ...... 0.0120 ........ 0.0058
fix array #2 ......... 0.0279 ........ 0.0390 ...... 0.0143 ........ 0.0165
fix array #3 ......... 0.0415 ........ 0.0463 ...... 0.0162 ........ 0.0187
16-bit array #1 ...... 0.1349 ........ 0.1628 ...... 0.0334 ........ 0.0341
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0345 ........ 0.0391 ...... 0.0143 ........ 0.0168
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0459 ........ 0.0473 ...... 0.0151 ........ 0.0163
16-bit map #1 ........ 0.2518 ........ 0.2962 ...... 0.0400 ........ 0.0490
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.2380 ........ 0.2682 ...... 0.0545 ........ 0.0579
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total                  1.5625          2.3866        0.7735          0.7243
Skipped                     4               4             4               4
Failed                      0               0            24              17
Ignored                    24              24             0               7

With JIT:

MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php

Example output

Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000

===========================================================================
Test/Target            Packer  BufferUnpacker  msgpack_pack  msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0001 ........ 0.0052 ...... 0.0053 ........ 0.0042
false ................ 0.0007 ........ 0.0060 ...... 0.0057 ........ 0.0043
true ................. 0.0008 ........ 0.0060 ...... 0.0056 ........ 0.0041
7-bit uint #1 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0041
7-bit uint #2 ........ 0.0021 ........ 0.0043 ...... 0.0062 ........ 0.0041
7-bit uint #3 ........ 0.0022 ........ 0.0044 ...... 0.0061 ........ 0.0040
5-bit sint #1 ........ 0.0030 ........ 0.0048 ...... 0.0062 ........ 0.0040
5-bit sint #2 ........ 0.0032 ........ 0.0046 ...... 0.0062 ........ 0.0040
5-bit sint #3 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0040
8-bit uint #1 ........ 0.0054 ........ 0.0079 ...... 0.0062 ........ 0.0050
8-bit uint #2 ........ 0.0051 ........ 0.0079 ...... 0.0064 ........ 0.0044
8-bit uint #3 ........ 0.0051 ........ 0.0082 ...... 0.0062 ........ 0.0044
16-bit uint #1 ....... 0.0077 ........ 0.0094 ...... 0.0065 ........ 0.0045
16-bit uint #2 ....... 0.0077 ........ 0.0094 ...... 0.0063 ........ 0.0045
16-bit uint #3 ....... 0.0077 ........ 0.0095 ...... 0.0064 ........ 0.0047
32-bit uint #1 ....... 0.0088 ........ 0.0119 ...... 0.0063 ........ 0.0043
32-bit uint #2 ....... 0.0089 ........ 0.0117 ...... 0.0062 ........ 0.0039
32-bit uint #3 ....... 0.0089 ........ 0.0118 ...... 0.0063 ........ 0.0044
64-bit uint #1 ....... 0.0097 ........ 0.0155 ...... 0.0063 ........ 0.0045
64-bit uint #2 ....... 0.0095 ........ 0.0153 ...... 0.0061 ........ 0.0045
64-bit uint #3 ....... 0.0096 ........ 0.0156 ...... 0.0063 ........ 0.0047
8-bit int #1 ......... 0.0053 ........ 0.0083 ...... 0.0062 ........ 0.0044
8-bit int #2 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0044
8-bit int #3 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0043
16-bit int #1 ........ 0.0089 ........ 0.0097 ...... 0.0069 ........ 0.0046
16-bit int #2 ........ 0.0075 ........ 0.0093 ...... 0.0063 ........ 0.0043
16-bit int #3 ........ 0.0075 ........ 0.0094 ...... 0.0062 ........ 0.0046
32-bit int #1 ........ 0.0086 ........ 0.0122 ...... 0.0063 ........ 0.0044
32-bit int #2 ........ 0.0087 ........ 0.0120 ...... 0.0066 ........ 0.0046
32-bit int #3 ........ 0.0086 ........ 0.0121 ...... 0.0060 ........ 0.0044
64-bit int #1 ........ 0.0096 ........ 0.0149 ...... 0.0060 ........ 0.0045
64-bit int #2 ........ 0.0096 ........ 0.0157 ...... 0.0062 ........ 0.0044
64-bit int #3 ........ 0.0096 ........ 0.0160 ...... 0.0063 ........ 0.0046
64-bit int #4 ........ 0.0097 ........ 0.0157 ...... 0.0061 ........ 0.0044
64-bit float #1 ...... 0.0079 ........ 0.0153 ...... 0.0056 ........ 0.0044
64-bit float #2 ...... 0.0079 ........ 0.0152 ...... 0.0057 ........ 0.0045
64-bit float #3 ...... 0.0079 ........ 0.0155 ...... 0.0057 ........ 0.0044
fix string #1 ........ 0.0010 ........ 0.0045 ...... 0.0071 ........ 0.0044
fix string #2 ........ 0.0048 ........ 0.0075 ...... 0.0070 ........ 0.0060
fix string #3 ........ 0.0048 ........ 0.0086 ...... 0.0068 ........ 0.0060
fix string #4 ........ 0.0050 ........ 0.0088 ...... 0.0070 ........ 0.0059
8-bit string #1 ...... 0.0081 ........ 0.0129 ...... 0.0069 ........ 0.0062
8-bit string #2 ...... 0.0086 ........ 0.0128 ...... 0.0069 ........ 0.0065
8-bit string #3 ...... 0.0086 ........ 0.0126 ...... 0.0115 ........ 0.0065
16-bit string #1 ..... 0.0105 ........ 0.0137 ...... 0.0128 ........ 0.0068
16-bit string #2 ..... 0.1510 ........ 0.1486 ...... 0.1526 ........ 0.1391
32-bit string ........ 0.1517 ........ 0.1475 ...... 0.1504 ........ 0.1370
wide char string #1 .. 0.0044 ........ 0.0085 ...... 0.0067 ........ 0.0057
wide char string #2 .. 0.0081 ........ 0.0125 ...... 0.0069 ........ 0.0063
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0014 ........ 0.0059 ...... 0.0132 ........ 0.0055
fix array #2 ......... 0.0146 ........ 0.0156 ...... 0.0155 ........ 0.0148
fix array #3 ......... 0.0211 ........ 0.0229 ...... 0.0179 ........ 0.0180
16-bit array #1 ...... 0.0673 ........ 0.0498 ...... 0.0343 ........ 0.0388
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0148 ........ 0.0180 ...... 0.0156 ........ 0.0179
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0252 ........ 0.0201 ...... 0.0214 ........ 0.0167
16-bit map #1 ........ 0.1027 ........ 0.0836 ...... 0.0388 ........ 0.0510
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.1104 ........ 0.1010 ...... 0.0556 ........ 0.0602
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total                  0.9642          1.0909        0.8224          0.7213
Skipped                     4               4             4               4
Failed                      0               0            24              17
Ignored                    24              24             0               7

Note that the msgpack extension (v2.1.2) doesn't support ext, bin and UTF-8 str types.

License

The library is released under the MIT License. See the bundled LICENSE file for details.

Author: rybakit
Source Code: https://github.com/rybakit/msgpack.php
License: MIT License

#php 

Enhance Amazon Aurora Read/Write Capability with ShardingSphere-JDBC

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS(Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of high performance, Aurora MySQL and Aurora PostgreSQL have shown an increase in throughput of up to 5X over stock MySQL and 3X over stock PostgreSQL respectively on similar hardware. In terms of scalability, Aurora achieves enhancements and innovations in storage and computing, horizontal and vertical functions.

Aurora supports up to 128TB of storage capacity and supports dynamic scaling of storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas. Each region can have an additional 15 Aurora replicas. In addition, Aurora provides multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions in low latency.

Aurora already provides great scalability with the growth of user data volume. Can it handle more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides you with a reference in choosing between Proxy and JDBC for sharding.

1.1 Why sharding is needed

AWS Aurora offers a single relational database. Its hosting architectures, such as primary-secondary, multi-primary and Global Database, satisfy the architectural scenarios described above. However, Aurora does not provide direct support for sharding, and sharding itself comes in several forms, such as vertical and horizontal. If we want to further increase data capacity, a number of problems have to be solved, such as cross-node joins and associated queries, distributed transactions, SQL sorting, paging, function calculation, globally unique primary keys, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that query time stays optimal while a MySQL table holds fewer than roughly 10 million rows, because the height of its B-tree index then stays between 3 and 5. Data sharding reduces the amount of data in a single table and, at the same time, distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.
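
As a rough illustration of horizontal sharding (the table layout and the modulo-2 rule here are an assumed example, not taken from the test project), a product_order table split by order_id could look like this:

-- Two physical tables carrying the rows of one logical product_order table
CREATE TABLE product_order_0 (order_id BIGINT PRIMARY KEY, user_id BIGINT, status VARCHAR(50));
CREATE TABLE product_order_1 (order_id BIGINT PRIMARY KEY, user_id BIGINT, status VARCHAR(50));

-- Each row lives in product_order_<order_id % 2>, so order_id = 7 goes to product_order_1
INSERT INTO product_order_1 (order_id, user_id, status) VALUES (7, 42, 'INIT');

With sharding middleware in place, the application keeps writing to the logical product_order table and the routing to a physical table is done for it.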

1. Advantages of vertical sharding

  • Decouples business systems and makes responsibilities clearer.
  • Enables hierarchical management, maintenance, monitoring, and scaling of different business data, similar to microservice governance.
  • In high-concurrency scenarios, vertical sharding removes, to some extent, the bottleneck of I/O, database connections, and hardware resources on a single machine.

2. Disadvantages of vertical sharding

  • After splitting the database, joins can only be implemented through interface aggregation, which increases development complexity.
  • After splitting the database, handling distributed transactions becomes complex.
  • A single table can still accumulate a large amount of data, so horizontal sharding may still be required.

3. Advantages of horizontal sharding

  • There is no single-database bottleneck for large data volumes or high concurrency, which improves system stability and load capacity.
  • Business modules do not need to be split, and modifications on the application side are minor.

4. Disadvantages of horizontal sharding

  • Transactional consistency across shards is hard to guarantee.
  • The performance of cross-database join queries is poor.
  • Scaling the data multiple times is difficult, and maintenance becomes a large workload.

Based on the analysis above, and on the available studies of popular sharding middleware, we selected the open source product ShardingSphere, combined with Amazon Aurora, to introduce how the combination of these two products meets various forms of sharding and how it solves the problems brought by sharding.

ShardingSphere is an open source ecosystem of distributed database middleware solutions consisting of three independent products: Sharding-JDBC, Sharding-Proxy and Sharding-Sidecar.

2. ShardingSphere introduction:

The characteristics of Sharding-JDBC are:

  1. With the client connecting directly to the database, it provides its service as a jar and requires no extra deployment or dependencies.
  2. It can be considered an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
  3. It is applicable to any ORM framework based on JDBC, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  4. It supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid or HikariCP.
  5. It supports any database that implements the JDBC standard: MySQL, Oracle, SQL Server, PostgreSQL and any database accessible through JDBC.
  6. Sharding-JDBC adopts a decentralized architecture, suitable for high-performance, light-weight OLTP applications developed in Java.

Hybrid Structure Integrating Sharding-JDBC and Applications

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data shard, consisting of a data source name and a table name, such as ds_0.product_order_0.

Actual table: A physical table that really exists in the horizontally sharded database, such as the product order tables product_order_0, product_order_1, and product_order_2.

Logic table: The logical name for a group of horizontally sharded tables with the same schema. For instance, the logic table of the order tables product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: The main table and its related table that share the same sharding rules. For example, the product_order and product_order_item tables are both sharded by order_id, so they are binding tables of each other. Cartesian-product correlation does not appear in multi-table joins between binding tables, so query efficiency increases greatly.

Broadcast table: A table that exists in all sharded data sources, with identical schema and data in each database. It suits small tables that need to be joined with large sharded tables, such as dictionary and configuration tables.
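
To tie these concepts together, here is a hedged sketch using the names above (the item table's columns and the exact rewritten SQL are illustrative assumptions, not output from the test project):

-- Logic SQL written by the application against the logic tables:
SELECT o.order_id, o.status
  FROM product_order o
  JOIN product_order_item i ON o.order_id = i.order_id
 WHERE o.order_id = 10;

-- Because product_order and product_order_item are binding tables sharded by order_id,
-- the query is rewritten only against the actual tables with the matching suffix, e.g.:
--   ... FROM product_order_0 o JOIN product_order_item_0 i ...
-- A broadcast table (a dictionary table, for example) holds identical data on every data
-- source, so it can be joined locally on whichever shard the query is routed to.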

3. Testing ShardingSphere-JDBC

3.1 Example project

Download the example project code locally. In order to ensure the stability of the test code, we choose shardingsphere-example-4.0.0 version.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
  ├── example-core
  │   ├── config-utility
  │   ├── example-api
  │   ├── example-raw-jdbc
  │   ├── example-spring-jpa #spring+jpa integration-based entity,repository
  │   └── example-spring-mybatis
  ├── sharding-jdbc-example
  │   ├── sharding-example
  │   │   ├── sharding-raw-jdbc-example
  │   │   ├── sharding-spring-boot-jpa-example #integration-based sharding-jdbc functions
  │   │   ├── sharding-spring-boot-mybatis-example
  │   │   ├── sharding-spring-namespace-jpa-example
  │   │   └── sharding-spring-namespace-mybatis-example
  │   ├── orchestration-example
  │   │   ├── orchestration-raw-jdbc-example
  │   │   ├── orchestration-spring-boot-example #integration-based sharding-jdbc governance function
  │   │   └── orchestration-spring-namespace-example
  │   ├── transaction-example
  │   │   ├── transaction-2pc-xa-example #sharding-jdbc sample of two-phase commit for a distributed transaction
  │   │   └──transaction-base-seata-example #sharding-jdbc distributed transaction seata sample
  │   ├── other-feature-example
  │   │   ├── hint-example
  │   │   └── encrypt-example
  ├── sharding-proxy-example
  │   └── sharding-proxy-boot-mybatis-example
  └── src/resources
        └── manual_schema.sql  

Configuration file description:

application-master-slave.properties #read/write splitting profile
application-sharding-databases-tables.properties #sharding profile
application-sharding-databases.properties       #library split profile only
application-sharding-master-slave.properties    #sharding and read/write splitting profile
application-sharding-tables.properties          #table split profile
application.properties                         #spring boot profile

Code logic description:

The entry class of the Spring Boot application is in the example project; executing it runs the demo.

The execution logic of the demo is covered by the verification scenarios in the following sections.

3.2 Verifying read/write splitting

As the business grows, write and read requests can be split across different database nodes to effectively increase the processing capacity of the entire database cluster. Aurora uses a reader/writer endpoint to serve writes and strongly consistent reads, and a read-only endpoint to serve reads that do not require strong consistency. Aurora's replication lag is typically within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.

With a one-primary, multiple-secondary configuration, query requests can be evenly distributed across multiple data replicas, which further improves the processing capacity of the system. Read/write splitting improves throughput and availability, but it can also lead to data inconsistency. Aurora provides the primary/secondary architecture in a fully managed form, but applications on the upper layer still need to manage multiple data sources, routing SQL requests to different nodes based on the read/write type of each SQL statement and on certain routing policies.

ShardingSphere-JDBC provides read/write splitting and is integrated into the application, so the complex wiring between the application and the database cluster can be moved out of the application code. Developers can manage the data sources through configuration files and combine them with ORM frameworks such as Spring JPA or MyBatis, completely separating this duplicated logic from the code. This greatly improves maintainability and reduces the coupling between the code and the database.

3.2.1 Setting up the database environment

Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.

3.2.2 Configuring Sharding-JDBC

application.properties spring boot Master profile description:

You need to replace the placeholder values with your own environment configuration.

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#Activate master-slave configuration item so that sharding-jdbc can use master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username= Your slave DB username
# data source-slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password= Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url= Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username= Your slave DB username
# Routing Policy Configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# sharding-jdbc configures the information storage mode
spring.shardingsphere.mode.type=Memory
# enable the ShardingSphere SQL log; you can see the conversion from logical SQL to actual SQL in the output
spring.shardingsphere.props.sql.show=true

 

3.2.3 Test and verification process description

  • Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

  • Write data to the master instance

As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.

  • Data query operations are performed on the slave library.

As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data source in the form of polling.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, 
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0 
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id 
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave 
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, 
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1 

Note: As shown in the code below, if a transaction contains both reads and writes, Sharding-JDBC routes both the read and the write operations to the master library. If the read and write requests are not in the same transaction, the read requests are distributed to different read nodes according to the routing policy.

@Override
@Transactional // When a transaction is started, both read and write in the transaction go through the master library. When closed, read goes through the slave library and write goes through the master library
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

3.2.4 Verifying Aurora failover scenario

The Aurora database environment uses the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

1. Start the Spring Boot project.

2. Perform a failover on the Aurora console.

3. Execute the REST API request.

4. Repeatedly execute POST (http://localhost:8088/save-user) until the API call fails to write to Aurora and then eventually succeeds again.

5. The following figure shows the failover process. About 37 seconds elapse between the last successful SQL write before the failover and the next successful SQL write; that is, the application recovers from the Aurora failover automatically, and the recovery time is about 37 seconds.

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties spring boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#Activate sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

# configure the t_order table sharding rule
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
## configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates the tables used for testing. With the Sharding-JDBC routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the table sharding rules. Because t_address is a broadcast table and there is only one data source here, a single t_address table is created. For t_order, the two physical tables t_order_0 and t_order_1 are created.

2. Write operation

As shown in the figure below, the logical SQL inserts records into t_order. When Sharding-JDBC executes it, the data is distributed across t_order_0 and t_order_1 according to the table sharding rule.

Because t_order and t_order_item are binding tables, the order_item records and their related order records are placed in physical tables with the same suffix.
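
A sketch of this routing, using the t_order rule configured above (the status value and the generated SNOWFLAKE key are made up for illustration):

-- Logic SQL issued by the application:
INSERT INTO t_order (user_id, address_id, status) VALUES (1, 1, 'INSERT_TEST');
-- Actual SQL executed by Sharding-JDBC; the SNOWFLAKE-generated order_id is even here,
-- so order_id % 2 = 0 routes the row to t_order_0:
--   INSERT INTO t_order_0 (user_id, address_id, status, order_id)
--   VALUES (1, 1, 'INSERT_TEST', 123456789012345678);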

3. Read operation

As shown in the figure below, when the join query on order and order_item is performed on the binding tables, the physical shards are located precisely based on the binding relationship.

A join query on order and order_item without the binding configuration would traverse all shard combinations.
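
The difference can be sketched as follows (logical SQL plus the rewritten targets shown as comments, assuming the two-way split configured above):

-- Join on binding tables: only the matching suffixes are combined
SELECT * FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id;
--   => t_order_0 JOIN t_order_item_0  and  t_order_1 JOIN t_order_item_1

-- Without the binding configuration the same join is expanded to every combination:
--   => t_order_0 x t_order_item_0, t_order_0 x t_order_item_1,
--      t_order_1 x t_order_item_0, t_order_1 x t_order_item_1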

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item and t_address will be created on both Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates the tables used for testing. With Sharding-JDBC's database sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the rules. Because t_address is a broadcast table, it is created on both ds_0 and ds_1; the three tables t_address, t_order and t_order_item therefore exist on both instances.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.
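
A sketch of this broadcast write (the t_address column name is an assumption for illustration):

-- Logic SQL issued once by the application:
INSERT INTO t_address (address_name) VALUES ('address_test_1');
-- Sharding-JDBC executes the same statement on every data source:
--   ds_0: INSERT INTO t_address (address_name) VALUES ('address_test_1')
--   ds_1: INSERT INTO t_address (address_name) VALUES ('address_test_1')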

Writes to the sharded tables t_order and t_order_item are routed to the corresponding instance according to the database sharding column (user_id) and the routing policy.

3. Read operation

Queries on order are routed to the corresponding Aurora instance according to the database sharding rule.

Queries on address: since t_address is a broadcast table, one of the instances holding it is selected at random and queried.

As shown in the figure below, when the join query on order and order_item is performed on the binding tables, the physical shard is located precisely based on the binding relationship.

3.5 Verifying the combined database sharding and table sharding function

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0 and t_order_item_1, as well as the broadcast table t_address, will be created on both Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username= 
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url= 
spring.shardingsphere.datasource.ds_1.username= 
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates the tables used for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the rules. Because t_address is a broadcast table, it is created on both ds_0 and ds_1; the sharded physical tables t_order_0, t_order_1, t_order_item_0 and t_order_item_1 are created on both instances as well.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to the sharded tables t_order and t_order_item are routed to the corresponding instance and physical table according to the sharding columns (user_id for the database, order_id for the table) and the routing policy.

3. Read operation

The read operation is similar to the database-sharding verification described in Section 3.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical table of the created database instance.

3.6.2 Configuring Sharding-JDBC

application.properties spring boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# activate sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username and password of the database need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url= 
spring.shardingsphere.datasource.ds_master_1.username= 
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# master/slave data source and slave data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# start shardingsphere log
spring.shardingsphere.props.sql.show=true

 

3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates the tables used for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the rules. Because t_address is a broadcast table, it is created on both ds_0 and ds_1; the sharded t_order and t_order_item tables are created on both logical data sources as well.

2. Write operation

For the broadcast table t_address, each record written will also be written to the t_address tables of ds_0 and ds_1.

Writes to the sharded tables t_order and t_order_item are routed to the corresponding instance and physical table according to the sharding columns and the routing policy.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.

4. Conclusion

As an open source product focusing on database enhancement, ShardingSphere is pretty good in terms of its community activity, product maturity and documentation richness.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. There is no need to introduce an intermediate layer such as a proxy, so the complexity of operation and maintenance is reduced, and its latency is theoretically lower than a proxy's because there is no extra hop. In addition, ShardingSphere-JDBC supports a variety of SQL-standard relational databases, such as MySQL, PostgreSQL, Oracle and SQL Server.

However, because Sharding-JDBC is integrated into the application, it currently supports only the Java language and is strongly tied to the application. Nevertheless, Sharding-JDBC keeps all sharding configuration out of the application code, so switching to other middleware later requires relatively small changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.

Author

Sun Jinhua

A senior solutions architect at AWS, Sun is responsible for providing customers with cloud-related architecture design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture of e-commerce platforms for automotive companies. He also worked as a senior engineer at a globally leading communication equipment company, responsible for the development and architecture design of several subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, and more.

Franz  Becker

Franz Becker

1648803600

Plpgsql Check: Extension That Allows to Check Plpgsql Source Code.

plpgsql_check

I founded this project because I wanted to publish the code I had written over the previous two years while trying to add enhanced checking to PostgreSQL upstream. That effort was not fully successful - integration into upstream requires some larger plpgsql refactoring, which probably will not happen in the next few years (as of December 2013). But the code is fully functional and can be used in production (and it is used in production). So I created this extension to make it available to all plpgsql developers.

If you like it and would like to join the development of this extension, register yourself with the postgresql extension hacking Google group.

Features

  • checks fields of referenced database objects and types inside embedded SQL
  • verifies that function parameters are used with the correct types
  • reports unused variables and function arguments, and unmodified OUT arguments
  • partial detection of dead code (code after a RETURN command)
  • detection of a missing RETURN command in a function
  • tries to identify unwanted hidden casts, which can cause performance issues such as unused indexes
  • can collect the relations and functions used by a function
  • can check EXECUTE statements against SQL injection vulnerabilities

I invite any ideas, patches and bug reports.

plpgsql_check is the next generation of plpgsql_lint. It allows checking source code by an explicit call to plpgsql_check_function.

PostgreSQL 10, 11, 12, 13 and 14 are supported.

The SQL statements inside PL/pgSQL functions are checked by the validator for semantic errors. These errors can be found by plpgsql_check_function:

Active mode

postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE

postgres=#
CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
  FOR r IN SELECT * FROM t1
  LOOP
    RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
  END LOOP;
END;
$function$;

CREATE FUNCTION

postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
  f1 
 ────
   
 (1 row)

postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno     │ 6
statement  │ RAISE
sqlstate   │ 42703
message    │ record "r" has no field "c"
detail     │ [null]
hint       │ [null]
level      │ error
position   │ 0
query      │ [null]

postgres=# \sf+ f1
    CREATE OR REPLACE FUNCTION public.f1()
     RETURNS void
     LANGUAGE plpgsql
1       AS $function$
2       DECLARE r record;
3       BEGIN
4         FOR r IN SELECT * FROM t1
5         LOOP
6           RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7         END LOOP;
8       END;
9       $function$

The function plpgsql_check_function() has three possible output formats: text, json or xml.

select * from plpgsql_check_function('f1()', fatal_errors := false);
                         plpgsql_check_function                         
------------------------------------------------------------------------
 error:42703:4:SQL statement:column "c" of relation "t1" does not exist
 Query: update t1 set c = 30
 --                   ^
 error:42P01:7:RAISE:missing FROM-clause entry for table "r"
 Query: SELECT r.c
 --            ^
 error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)

postgres=# select * from plpgsql_check_function('fx()', format:='xml');
                 plpgsql_check_function                     
────────────────────────────────────────────────────────────────
 <Function oid="16400">                                        ↵
   <Issue>                                                     ↵
     <Level>error</level>                                      ↵
     <Sqlstate>42P01</Sqlstate>                                ↵
     <Message>relation "foo111" does not exist</Message>       ↵
     <Stmt lineno="3">RETURN</Stmt>                            ↵
     <Query position="23">SELECT (select a from foo111)</Query>↵
   </Issue>                                                    ↵
  </Function>
 (1 row)

Arguments

You can set the level of warnings via the function's parameters:

Mandatory arguments

  • function name or function signature - these functions require a function specification. Any function in PostgreSQL can be specified by oid, by name, or by signature. When you know the oid or the complete function signature, you can use a regprocedure-typed parameter like 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is using just the name, when the function's name is unique - like 'fx'. When the name is not unique, or the function does not exist, an error is raised.

Optional arguments

relid DEFAULT 0 - oid of the relation assigned to a trigger function. It is necessary for checking any trigger function.

fatal_errors boolean DEFAULT true - stop on the first error

other_warnings boolean DEFAULT true - show warnings like a different number of attributes in an assignment on the left and right side, a variable that overlaps a function parameter, unused variables, unwanted casting, ..

extra_warnings boolean DEFAULT true - show warnings like missing RETURN, shadowed variables, dead code, never-read (unused) function parameters, unmodified variables, modified auto variables, ..

performance_warnings boolean DEFAULT false - performance-related warnings like a declared type with a type modifier, casting, implicit casts in a WHERE clause (which can be a reason why an index is not used), ..

security_warnings boolean DEFAULT false - security-related checks like SQL injection vulnerability detection

anyelementtype regtype DEFAULT 'int' - the real type used instead of the anyelement type

anyenumtype regtype DEFAULT '-' - the real type used instead of the anyenum type

anyrangetype regtype DEFAULT 'int4range' - the real type used instead of the anyrange type

anycompatibletype DEFAULT 'int' - the real type used instead of the anycompatible type

anycompatiblerangetype DEFAULT 'int4range' - the real type used instead of the anycompatible range type

without_warnings DEFAULT false - disable all warnings

all_warnings DEFAULT false - enable all warnings

newtable DEFAULT NULL, oldtable DEFAULT NULL - the names of the NEW or OLD transition tables. These parameters are required when transition tables are used.
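
For example, several of these options can be combined in one call (parameter names as listed above, applied to the f1() function from the earlier example):

select * from plpgsql_check_function('f1()',
                                     fatal_errors := false,
                                     performance_warnings := true,
                                     security_warnings := true);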

Triggers

When you want to check a trigger, you have to enter the relation that will be used together with the trigger function:

CREATE TABLE bar(a int, b int);

postgres=# \sf+ foo_trg
    CREATE OR REPLACE FUNCTION public.foo_trg()
         RETURNS trigger
         LANGUAGE plpgsql
1       AS $function$
2       BEGIN
3         NEW.c := NEW.a + NEW.b;
4         RETURN NEW;
5       END;
6       $function$

Missing relation specification

postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR:  missing trigger relation
HINT:  Trigger relation oid must be valid

Correct trigger checking (with specified relation)

postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
                 plpgsql_check_function                 
--------------------------------------------------------
 error:42703:3:assignment:record "new" has no field "c"
(1 row)

For triggers with transition tables you can set the oldtable or newtable parameters:

create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
  if false then
    -- should be ok;
    select count(*) from newtab into x; 

    -- should fail;
    select count(*) from newtab where d = 10 into x;
  end if;
  return null;
end;
$$ language plpgsql;

select * from plpgsql_check_function('footab_trig_func','footab', newtable := 'newtab');

Mass check

You can use plpgsql_check_function for mass checking of functions and triggers. Please try the following queries:

-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
   FROM pg_catalog.pg_namespace n
   JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
   JOIN pg_catalog.pg_language l ON p.prolang = l.oid
  WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;

or

SELECT p.proname, tgrelid::regclass, cf.*
   FROM pg_proc p
        JOIN pg_trigger t ON t.tgfoid = p.oid 
        JOIN pg_language l ON p.prolang = l.oid
        JOIN pg_namespace n ON p.pronamespace = n.oid,
        LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
  WHERE n.nspname = 'public' and l.lanname = 'plpgsql'

or

-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
    (pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
    (pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
    (pcf)."position", (pcf).query, (pcf).context
FROM
(
    SELECT
        plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
    FROM pg_proc
    LEFT JOIN pg_trigger
        ON (pg_trigger.tgfoid = pg_proc.oid)
    WHERE
        prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
        pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
        -- ignore unused triggers
        (pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
         pg_trigger.tgfoid IS NOT NULL)
    OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno

Passive mode

In passive mode, functions are checked when they are started - the plpgsql_check module must be loaded.

Configuration

plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]

plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false

The default mode is by_function, which means that the enhanced check is done only in active mode - by calling plpgsql_check_function. fresh_start means a cold start.

You can enable passive mode by

load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';

SELECT fx(10); -- run functions - function is checked before runtime starts it

Limits

plpgsql_check should find almost all errors in truly static code. When a developer uses some of PL/pgSQL's dynamic features, like dynamic SQL or the record data type, false positives are possible. They should be rare in well-written code; if they occur, the affected function should be redesigned or plpgsql_check should be disabled for that function.

CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
  FOR r IN EXECUTE 'SELECT * FROM t1'
  LOOP
    RAISE NOTICE '%', r.c;
  END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;

Using plpgsql_check (with passive mode enabled) adds a small overhead, so you should use it only in development or pre-production environments.

Dynamic SQL

This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries, so plpgsql_check cannot set the correct type for record variables and cannot check dependent SQL statements and expressions.

When the type of a record variable is not known, you can assign it explicitly with the type pragma:

DECLARE r record;
BEGIN
  EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
  PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
  IF NOT r.processed THEN
    ...

Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. The check can also raise false alarms, typically when a variable is sanitized by another command or when the value is of some composite type.

Refcursors

plpgsql_check cannot detect the structure of referenced cursors. A cursor reference in PL/pgSQL is implemented as the name of a global cursor. At check time the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis: plpgsql_check cannot set the correct type for the record variables and cannot check dependent SQL statements and expressions. The solution is the same as for dynamic SQL: don't use a record variable as the target when you use the refcursor type, or disable plpgsql_check for these functions.

CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
  rec_var record;
BEGIN
  FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
  RAISE NOTICE '%', rec_var;     -- record rec_var is not assigned yet error

In this case a record type should not be used (use known rowtype instead):

CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
  rec_var some_rowtype;
BEGIN
  FETCH refcur_var INTO rec_var;
  RAISE NOTICE '%', rec_var;

Temporary tables

plpgsql_check cannot verify queries over temporary tables that are created at runtime inside a plpgsql function. For this use case it is necessary to create a fake temp table or to disable plpgsql_check for the function.

In reality, temp tables are stored in their own (per-user) schema with higher priority than persistent tables, so you can safely use the following trick:

CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
  RAISE EXCEPTION SQLSTATE '42P01'
     USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
           hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement',
                         TG_TABLE_NAME);
  RETURN NULL;
END;
$function$;

CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
   BEFORE INSERT OR UPDATE OR DELETE ON foo
   EXECUTE PROCEDURE disable_dml();

postgres=# INSERT INTO  foo VALUES(10,20);
ERROR:  this instance of foo table doesn't allow any DML operation
HINT:  you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=# CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);

CREATE TABLE
postgres=# INSERT INTO  foo VALUES(10,20);
INSERT 0 1

This trick partially emulates GLOBAL TEMP tables and allows static validation. Another possibility is using a template foreign data wrapper (https://github.com/okbob/template_fdw).

You can also use the table pragma and create an ephemeral table:

BEGIN
   CREATE TEMP TABLE xxx(a int);
   PERFORM plpgsql_check_pragma('table: xxx(a int)');
   INSERT INTO xxx VALUES(10);

Dependency list

The function plpgsql_show_dependency_tb shows all functions, operators and relations used inside the processed function:

postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│   type   │  oid  │ schema │  name   │           params           │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ **      │ (integer,integer)          │
│ RELATION │ 36005 │ public │ myview  │                            │
│ RELATION │ 36002 │ public │ mytable │                            │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(5 rows)

Profiler

plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory, depending on the shared_preload_libraries config. When plpgsql_check is initialized via shared_preload_libraries, it can allocate shared memory and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.

Due to dependencies, shared_preload_libraries should contain plpgsql first:

postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check    │
└──────────────────────────┘
(1 row)

The profiler is active when the GUC plpgsql_check.profiler is on. The profiler doesn't require shared memory, but without it the profile is limited to the active session.
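
For example, to switch the profiler on for the current session (using the GUC named above):

set plpgsql_check.profiler to on;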

When plpgsql_check is initialized by shared_preload_libraries, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks. This defines the maximum number of statement chunks that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundreds of thousands of statements in plpgsql, and will consume about 24MB of memory. If your project doesn't require that many chunks, you can set this parameter to a smaller number to decrease the memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.

The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements (or another similar third-party extension) to be installed. There are some limitations to the query identifier retrieval:

  • if a plpgsql expression contains underlying statements, only the top-level query identifier will be retrieved
  • the profiler doesn't compute query identifiers by itself but relies on an external extension, such as pg_stat_statements. This means that, depending on the external extension's behavior, you may not be able to see a query identifier for some statements. That's for instance the case with DDL statements, as pg_stat_statements doesn't expose the query identifier for such queries.
  • a query identifier is retrieved only for instructions containing expressions. This means that the plpgsql_profiler_function_tb() function can report fewer query identifiers than instructions on a single line.

Attention: An update of shared profiles can decrease performance on servers under higher load.

The profile can be displayed by function plpgsql_profiler_function_tb:

postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │                              source                               │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│      1 │          │                                                                   │
│      2 │          │ declare result int = 0;                                           │
│      3 │    0.075 │ begin                                                             │
│      4 │    0.202 │   for i in 1..$1 loop                                             │
│      5 │    0.005 │     select result + i into result; select result + i into result; │
│      6 │          │   end loop;                                                       │
│      7 │        0 │   return result;                                                  │
│      8 │          │ end;                                                              │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)

The profile per statement (not per line) can be displayed by the function plpgsql_profiler_function_statements_tb:

        CREATE OR REPLACE FUNCTION public.fx1(a integer)
         RETURNS integer
         LANGUAGE plpgsql
1       AS $function$
2       begin
3         if a > 10 then
4           raise notice 'ahoj';
5           return -1;
6         else
7           raise notice 'nazdar';
8           return 1;
9         end if;
10      end;
11      $function$

postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
             from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │    stmtname     │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│      0 │             ∅ │ ∅           │      2 │          0 │ statement block │
│      1 │             0 │ body        │      3 │          0 │ IF              │
│      2 │             1 │ then body   │      4 │          0 │ RAISE           │
│      3 │             1 │ then body   │      5 │          0 │ RETURN          │
│      4 │             1 │ else body   │      7 │          0 │ RAISE           │
│      5 │             1 │ else body   │      8 │          0 │ RETURN          │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)

All stored profiles can be displayed by calling function plpgsql_profiler_functions_all:

postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│        funcoid        │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │          1 │       0.01 │     0.01 │        0.00 │     0.01 │     0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)

There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all() and plpgsql_profiler_reset(regprocedure).
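
For example, the stored profile of a single function can be dropped before re-profiling it, or all stored profiles can be cleared at once. A minimal sketch using the two functions above (fx(int) is the example function profiled earlier):

select plpgsql_profiler_reset('fx(int)'::regprocedure);  -- drop the profile of one function
select plpgsql_profiler_reset_all();                      -- drop all stored profiles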

Coverage metrics

plpgsql_check provides two functions:

  • plpgsql_coverage_statements(name)
  • plpgsql_coverage_branches(name)
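
A minimal usage sketch, assuming the profiler has already collected data for the fx(int) function from the profiler section above (both functions accept a function name or signature and report coverage computed from the stored profile):

select plpgsql_coverage_statements('fx(int)');
select plpgsql_coverage_branches('fx(int)');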

Note

There is another very good PLpgSQL profiler - https://bitbucket.org/openscg/plprofiler

My extension is designed to be simple to use and practical. Nothing more, nothing less.

plprofiler is more complex. It builds call graphs, and from these graphs it can create flame graphs of execution times.

Both extensions can be used together with PostgreSQL's built-in function tracking feature:

set track_functions to 'pl';
...
select * from pg_stat_user_functions;

Tracer

plpgsql_check provides a tracing facility. In this mode you can see notices on function start and end (terse and default verbosity) and on statement start and end (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables is displayed when verbosity is verbose.

postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE:  #0 ->> start of inline_code_block (Oid=0)
NOTICE:  #2   ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE:  #2        call by inline_code_block line 1 at PERFORM
NOTICE:  #2       "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE:  #4     ->> start of function fx(integer) (Oid=16404)
NOTICE:  #4          call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:  #4         "a" => '10'
NOTICE:  #4     <<- end of function fx (elapsed time=0.098 ms)
NOTICE:  #2   <<- end of function fx (elapsed time=0.399 ms)
NOTICE:  #0 <<- end of block (elapsed time=0.754 ms)

The number after # is an execution frame counter (this number is related to the depth of the error context stack). It allows pairing the start and end of a function.

Tracing is enabled by setting plpgsql_check.tracer to on. Attention - enabling this behaviour has a significant negative impact on performance (unlike the profiler). You can set the level used for the tracer's output with plpgsql_check.tracer_errlevel (default is notice). The output content is limited to the length specified by the plpgsql_check.tracer_variable_max_length configuration variable.
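
A minimal session setup sketch based on the configuration variables named above and in the SECURITY note below (plpgsql_check.enable_tracer can be set only with superuser rights; the length limit here is just an illustrative value):

set plpgsql_check.enable_tracer to on;                 -- superuser only, see the SECURITY note
set plpgsql_check.tracer to on;
set plpgsql_check.tracer_errlevel to notice;           -- level used for tracer output
set plpgsql_check.tracer_variable_max_length to 1024;  -- illustrative limit for printed values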

In terse verbosity mode the output is reduced:

postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE:  #0 start of inline code block (oid=0)
NOTICE:  #2 start of fx (oid=16405)
NOTICE:  #4 start of fx (oid=16404)
NOTICE:  #4 end of fx
NOTICE:  #2 end of fx
NOTICE:  #0 end of inline code block

In verbose mode the output is extended with statement details:

postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE:  #0            ->> start of block inline_code_block (oid=0)
NOTICE:  #0.1       1  --> start of PERFORM
NOTICE:  #2              ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE:  #2                   call by inline_code_block line 1 at PERFORM
NOTICE:  #2                  "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE:  #2.1       1    --> start of PERFORM
NOTICE:  #2.1                "a" => '10'
NOTICE:  #4                ->> start of function fx(integer) (oid=16404)
NOTICE:  #4                     call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:  #4                    "a" => '10'
NOTICE:  #4.1       6      --> start of assignment
NOTICE:  #4.1                  "a" => '10', "b" => '20'
NOTICE:  #4.1              <-- end of assignment (elapsed time=0.076 ms)
NOTICE:  #4.1                  "res" => '130'
NOTICE:  #4.2       7      --> start of RETURN
NOTICE:  #4.2                  "res" => '130'
NOTICE:  #4.2              <-- end of RETURN (elapsed time=0.054 ms)
NOTICE:  #4                <<- end of function fx (elapsed time=0.373 ms)
NOTICE:  #2.1            <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE:  #2              <<- end of function fx (elapsed time=0.727 ms)
NOTICE:  #0.1          <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE:  #0            <<- end of block (elapsed time=1.286 ms)

A special feature of the tracer is tracing of the ASSERT statement when plpgsql_check.trace_assert is on. When plpgsql_check.trace_assert_verbosity is DEFAULT, all of a function's or procedure's variables are displayed when the assert expression is false. When this configuration is VERBOSE, all variables from all plpgsql frames are displayed. This behaviour is independent of the plpgsql.check_asserts value, so it can be used even when assertions are disabled in the plpgsql runtime.

postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;

postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE:  #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE:   "a" => '10', "res" => null, "b" => '20'
NOTICE:  #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:   "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE:  #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR:  assertion failed
CONTEXT:  PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM

postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE:  #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE:   "a" => '10', "res" => null, "b" => '20'
NOTICE:  #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:   "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE:  #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO

Attention - SECURITY

The tracer prints the content of variables and function arguments. For a security definer function, this content can hold security sensitive data. This is the reason why the tracer is disabled by default and can be enabled only with superuser rights via plpgsql_check.enable_tracer.

Pragma

You can configure plpgsql_check's behaviour inside a checked function with the "pragma" function. This is an analogy of the PRAGMA feature of the PL/SQL or Ada languages. PLpgSQL doesn't support PRAGMA, but plpgsql_check detects a function named plpgsql_check_pragma and reads options from the parameters of this function. These plpgsql_check options are valid until the end of the group of statements.

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
  ...
  -- for following statements disable check
  PERFORM plpgsql_check_pragma('disable:check');
  ...
  -- enable check again
  PERFORM plpgsql_check_pragma('enable:check');
  ...
END;
$$ LANGUAGE plpgsql;

The function plpgsql_check_pragma is an immutable function that returns one. It is defined by the plpgsql_check extension. You can declare an alternative plpgsql_check_pragma function like:

CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args text[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;

Using the pragma function in the declaration part of the top block sets options at the function level too.

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
  aux int := plpgsql_check_pragma('disable:extra_warnings');
  ...

A shorter syntax for pragmas is supported too:

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
  PERFORM 'PRAGMA:TYPE:r (a int, b int)';
  PERFORM 'PRAGMA:TABLE: x (like pg_class)';
  ...

Supported pragmas

echo:str - print string (for testing)

status:check, status:tracer, status:other_warnings, status:performance_warnings, status:extra_warnings, status:security_warnings

enable:check, enable:tracer, enable:other_warnings, enable:performance_warnings, enable:extra_warnings, enable:security_warnings

disable:check, disable:tracer, disable:other_warnings, disable:performance_warnings, disable:extra_warnings, disable:security_warnings

type:varname typename or type:varname (fieldname type, ...) - set the type of a variable of record type

table: name (column_name type, ...) or table: name (like tablename) - create an ephemeral table

The pragmas enable:tracer and disable:tracer are active for Postgres 12 and higher
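
A small sketch that combines several of the pragmas above inside a checked function (the function name test2 and the echo text are only illustrative):

CREATE OR REPLACE FUNCTION test2()
RETURNS void AS $$
BEGIN
  -- print a marker while the function is being checked
  PERFORM plpgsql_check_pragma('echo:entering test2');
  -- report the current state of one group of checks
  PERFORM plpgsql_check_pragma('status:performance_warnings');
  -- disable performance warnings for the rest of this block
  PERFORM plpgsql_check_pragma('disable:performance_warnings');
END;
$$ LANGUAGE plpgsql;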

Compilation

You need a development environment for PostgreSQL extensions:

make USE_PGXS=1 clean
make USE_PGXS=1 install

result:

[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so   libplpgsql_check.a  libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE   -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755  plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin'    --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql                     ==============
CREATE LANGUAGE
============== running regression test queries        ==============
test plpgsql_check_passive    ... ok
test plpgsql_check_active     ... ok
test plpgsql_check_active-9.5 ... ok

=====================
 All 3 tests passed. 
=====================

Compilation on Ubuntu

Sometimes a successful compilation requires the libicu-dev package (PostgreSQL 10 and higher, when PostgreSQL was compiled with ICU support):

sudo apt install libicu-dev

Compiling plpgsql_check on Windows

You can use precompiled DLL libraries from http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html

or compile it yourself:

  1. Download and install PostgreSQL for Win32 from http://www.enterprisedb.com
  2. Download and install Microsoft Visual C++ Express
  3. Follow the tutorial at http://blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows
  4. Build plpgsql_check.dll
  5. Install the plugin:
  6. Copy plpgsql_check.dll to PostgreSQL\14\lib
  7. Copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension

Checked on

  • gcc on Linux (against all supported PostgreSQL)
  • clang 3.4 on Linux (against PostgreSQL 10)
  • for successful regression tests, PostgreSQL 10 or higher is required

Compilation against PostgreSQL 10 requires libICU!

Licence

Copyright (c) Pavel Stehule (pavel.stehule@gmail.com)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Note

If you like it, send a postcard to this address:

Pavel Stehule
Skalice 12
256 01 Benesov u Prahy
Czech Republic

I welcome any questions, comments, bug reports, and patches at pavel.stehule@gmail.com


Author: okbob
Source Code: https://github.com/okbob/plpgsql_check
License: View license

#postgresql 

James Renouf

James Renouf

1655038615

TweeterBucks - What Is TweeterBucks All About?

TweeterBucks - What Is TweeterBucks All About? [ Information Source ], introducing TweeterBucks by Wesley King Wes Virgin

What Is TweeterBucks?

  • Free Traffic In 5-10 Minutes: Although it takes a tiny amount of work… You WILL get traffic. As long as you follow the simple, kindergarten level instructions, you won’t have any problems…
  • Build A Monster-Sized Email List: Use their done-for-you optin pages & traffic to take your email audience to the next level. If you want to have a long term online business, an email list is a great place to start…
  • Receive Affiliate Commissions: Send the traffic from TweeterBucks to affiliate networks like Warrior+, JVZoo & ClickBank to get paid commissions whenever you make a sale. These offers are done for you and it’s the easiest way to begin profiting.
  • Grow A Large Twitter Following: 330M users are now at your reach, with just your fingertips…With all the features included inside, you’ll have no problem growing a goliath sized Twitter following.
  • Sell Your Own Products: Do you sell any kind of product or service? With TweeterBucks, you can start sending visitors instantly to anything that you’re selling…
  • Sell The Traffic You Get: You can make money by selling the traffic you get from TweeterBucks… It’s not uncommon to sell it for $0.50-1.00, so it can be quite profitable.


TWEETERBUCKS OTO:

  • UPSELL OTO #1 : TweeterBucks Ultimate
  • UPSELL OTO #2 : TweeterBucks Done For You
  • UPSELL OTO #3 : TweeterBucks Automation
  • UPSELL OTO #4 : TweeterBucks Instant Profits
  • UPSELL OTO #5 : TweeterBucks Traffic On Tap
  • UPSELL OTO #6 : TweeterBucks Franchise
  • UPSELL OTO #7 : TweeterBucks Multiple Income Streams

TWEETERBUCKS FREQUENTLY ASKED QUESTIONS:

  • My computer isn’t the best, will TweeterBucks work on mine? As long as it connects to the internet… YES. And the same goes for any other device. The only thing is, it needs to have access to a web browser. If that’s the case, you’re all good to go.
  • How long will it take to receive profits? This really varies, but the vast majority of their users report receiving it within 12-24 hours after activating it. In short, the quicker you activate, the sooner you’ll likely profit.
  • Do I need any tech skills? None whatsoever. I hate technical crap as much as you do, so we designed TweeterBucks with the average Joes and Janes in mind… After all, not everyone has a computer science degree.
  • Do I need to buy any traffic? Nope… You don’t have to mess around with that.
  • What if I get confused along the way? Don’t worry, they have video training that shows you every step of the way, from A-Z. They’ll show you everything you need to know so you can begin getting results…

HOW DOES IT WORK?

Let’s see how to use TweeterBucks.


TWEETERBUCKS OTO – IS IT WORTH BUYING?

Believe me, TweeterBucks is the tool to get you a failproof route to success.


No more relying on expensive & time-consuming methods that bring minimal results.

Most importantly, no more waking up to zeros.

This is not just another rehashed method.

There are plenty of those on the market, which is why they decided to create something different.


1. TRAFFIC GETTING CONTENT GENERATOR

Quickly get high-quality content to post on Twitter for niches like:

  • Weight Loss
  • Make Money Online
  • Dating
  • Finance
  • And so much more

Just enter any keyword and get unique content in just 2-3 minutes.

2. ONE-CLICK MONETIZATION

Traffic is good, but all that matters is turning it into profit

So we’re monetizing it with offers from:

  • WarriorPlus
  • JVZoo
  • ClickBank

All you have to do is copy and paste your link so you can begin receiving commissions

*Note: You don’t have to monetize the traffic with these affiliate networks. You can send it anywhere you want

3. 100% TARGETED TRAFFIC IN ANY NICHE

This isn’t any ordinary run-of-the-mill traffic. You’ll be able to get free targeted traffic in any niche of your choice…

Just type in any keyword, like “weight loss” or “make money online” so you’re reaching targeted people.

4. 1-CLICK LINK SHORTENER

Protect your links from being blocked by Twitter and get higher clickthrough rates with our free link shortener.

Simply enter any link and hit “enter” to shorten the link.

5. INSTANT PRE-APPROVAL FOR AFFILIATE OFFERS

Have you ever been denied when promoting affiliate offers on networks like WarriorPlus, ClickBank, or JVZoo, due to a lack of sales?

The great news is that we’re hooking you up with pre-approval for some of the highest converting affiliate offers out there.

All you have to do is send the traffic from TweeterBucks and you’re good to go…

6. 1000+ SCROLL-STOPPING IMAGES & VIDEOS TO POST ON TWITTER

Dramatically skyrocket your traffic by posting eye-catching images & videos from our massive library

*Fact: Videos boost engagement by 10X on Twitter

7. EXPLODE YOUR EMAIL LIST WITH HIGH CONVERTING OPTIN PAGES

If you want to build a huge email list, send the traffic you get from Twitter to these proven email collecting pages.

We’ve built up a huge email list from this, and we get paid a handsome amount of earnings every time we send an email…

8. 100% FREE EMAIL MARKETING SUITE

Normally, you’d pay hundreds of dollars per month for an email marketing service.

In just one year, this would cost you thousands of dollars!

That’s why we put together a 100% free email marketing suite where you can message the emails you get from TweeterBucks and get amazing deliverability without paying any monthly fees or anything at all…

9. FREE BLAZING FAST SERVERS

We’ll host your email collection pages on our free, blazing-fast servers

Normally, you’d pay thousands of dollars a year for this, but you get it for free when you pick up TweeterBucks.

PLUS: You get a free domain, so you don’t have to pay for that either!

10. FREE TRAFFIC IN 5-10 MINUTES

Although it takes a tiny amount of work, You WILL get traffic.

As long as you follow the simple, kindergarten-level instructions, you won’t have any problems…

11. BUILD A MONSTER-SIZED EMAIL LIST

Use our done-for-you opt-in pages & traffic to take your email audience to the next level.

If you want to have a long-term online business, an email list is a great place to start.

12. RECEIVE AFFILIATE COMMISSIONS

Send the traffic from TweeterBucks to affiliate networks like Warrior+, JVZoo & ClickBank to get paid commissions whenever you make a sale.

These offers are done for you and it’s the easiest way to begin profiting.

13. GROW A LARGE TWITTER FOLLOWING

330M users are now at your reach, with just your fingertips.

With all the features included inside, you’ll have no problem growing a goliath-sized Twitter following.

14. SELL YOUR OWN PRODUCTS

Do you sell any kind of product or service?

With TweeterBucks, you can start sending visitors instantly to anything that you’re selling…

15. SELL THE TRAFFIC YOU GET

You can make money by selling the traffic you get from TweeterBucks.

It’s not uncommon to sell it for $0.50-1.00, so it can be quite profitable.


This is your chance to take a shortcut. You can skip all the trial and error, and get straight to the good stuff.

Remember, with TweeterBucks, all you need is an ordinary computer or phone and you’re getting paid THOUSANDS.

This is the exact same app that allows you to:

  • Live The Laptop Lifestyle
  • Stop Worrying About Money
  • Receive Daily Deposits
  • Make Money While You Sleep

Does the sound of that bring music to your ears?

And now, it’s your turn to take TweeterBucks out for a spin. The ball is in your court.

Click the button below to get a copy of TweeterBucks while it’s still available.

Don’t miss out!


WHAT PEOPLE SAY ABOUT TWEETERBUCKS

You can be 100% confident using TweeterBucks. Here’s why:


WHO SHOULD USE IT?

Generally, TweeterBucks is a MUST-HAVE item for:

♦ Affiliate Marketers

♦ Marketers

♦ eCom + Amazon

♦ Freelancers

♦ Website Owners

♦ Social Media Marketers

♦ Local businesses

♦ Any other kinds of online business

PROS AND CONS

PROS

✅ *ZERO* Twitter Following Required (Start From Scratch)

✅ Access To 330 Million Twitter Users At Your Fingertips

✅ FREE Traffic In Just 5-10 Minutes Flat *no joke*

✅ Dominate Platforms Like ClickBank, Warrior+Plus, And JVZoo

✅ No Hidden Fees Or Extra Expenses, Period

✅ 100% FREE Email Marketing Suite Included

✅ Zero Monthly Charges, Pay Just Once (Limited Time Only)

✅ PLUS: They’ll Pay You $100 If You Can’t Get Traffic

✅ 365 Day Money Back Guarantee

CONS

❌ No Cons Found.

PRICING

TweeterBucks has 1 Front-end & 7 OTOs:

– FRONT-END: TWEETERBUCKS ($19) (SEE DETAILS)


OR TWEETERBUCKS ($19) + 1K/DAY ADDON ($19.95) ($38.95)

  • 100% Brand New TweeterBucks System – Value: $997
  • TweeterBucks Upload Monetization Technology – Value: $497
  • TweeterBucks Mobile Edition – Value: $397
  • TweeterBucks Step-By-Step Training Video – Value: $197
  • FREE Support Team – Value: PRICELESS!
  • Bonus #1 – $500/Day Playing Games – Value: $297
  • Bonus #2 – $300/Day By Copying & Pasting – Value: $297
  • Bonus #3 – $800 In 1 Hour With Google Chrome – Value: $297
  • Bonus #4 – $400/Day With WhatsApp – Value: $297

– OTO1: TWEETERBUCKS ULTIMATE ($197)

DOWNSELL TO $97

10X USAGE OF TWEETERBUCKS ($497 VALUE)

They’ll be allowing you to have 10X more usage from your TweeterBucks account, so you have 10x usage of everything.

  • 10x more traffic
  • 10x more campaigns
  • 10x more profit potential
  • There will be ZERO limits and barriers in your way

If you are looking to take this seriously and scale your income higher, then you’ll definitely want this upgrade.

10X MORE DEVICES ($497 VALUE)

Currently, you’re limited to using only one device. But with the 10x more edition, you’ll be able to use 10x more devices.

This will allow you to profit from MULTIPLE devices at once.

You could make good use of your old phone or computer collecting dust… and collect profits online.

FASTER SERVERS ($497 VALUE)

You’ll be instantly upgraded to faster servers when you get TweeterBucks unlimited… This will allow TweeterBucks to operate at maximum efficiency.

  • Make more profits
  • Profit quicker
  • Waste less time
  • This feature alone is worth the price of the upgrade

PRO VIDEO TRAINING ($297 VALUE)

You’ll see some additional tips and tricks they have for making the most profits with TweeterBucks, and how to make the most out of your account in general.

Be sure to watch it; you will DEFINITELY find value in it.

PREMIUM SUPPORT – PRICELESS!

If you happen to experience any issues, don’t worry. You’ll unlock access to their priority support team, ready to help you around the clock, 24/7.

They’ll do everything in their power to make sure you’re seeing results.

– OTO2: TWEETERBUCKS DONE FOR YOU SETUP ($297)

DOWNSELL TO $197

They Set Up Your Account So You Can Start Making Money NOW. Their Users Are Making $500-800 Per Day

  • Make Failure Impossible, They’re Doing The Work For You
  • Enjoy The THRILL Of Making $500 Per Day Online
  • Skip The Hassles, And Start Profiting Right Away
  • Save Your Time By Getting It Done For You
  • Sit Back & Relax While They Work Hard
  • Skip The Work & Start Making Money Right Away
  • They’ll Set Up & Fine-Tune Your Account To The Best Settings Possible!
  • Your Success Is Guaranteed
  • Bonus #1: How To Make $1,400 With Google Docs
  • Bonus #2: $600 Per Day With Instagram

– OTO3: TWEETERBUCKS AUTOPILOT ($67)

DOWNSELL TO $47

Fully *AUTOMATE* Your TweeterBucks Account In Just 1 Minute

Yes, I’m talking about being paid while you’re:

  1. Hanging Out With Friends…
  2. Spending Time With Family…
  3. Sleeping At Night…

Would the sound of that interest you?

If you answered yes, then today is your lucky day!

  • Receive Profits On Complete Autopilot
  • Make Profits Nonstop, 24/7, Every Possible Second Of The Day…
  • Time Is PRICELESS… Save It With TweeterBucks Autopilot
  • 365 Day, Risk-Free, Money-Back Guarantee
  • Bonus #1: How I Make $1,000 DAILY With A Chinese Website..
  • Bonus #2: Make $500 With Google

– OTO4: TWEETERBUCKS INSTANT PROFITS ($67)

DOWNSELL TO $47

With the Instant Profits edition of TweeterBucks, you’ll be able to profit instantly instead of waiting for 1 week.

If you hate waiting, then I highly recommend you pick up this upgrade.

  • Enjoy INSTANT Profits
  • Get Paid NOW, No Waiting Around
  • 365 Day Money Back Guarantee
  • Takes Just 30 Seconds To Upgrade
  • Bonus #1: $209 Per Day With Instagram Trick
  • Bonus #2: Earn $800 Copying & Pasting Links

– OTO5: TWEETERBUCKS TRAFFIC ($197)

DOWNSELL TO $97

Make An Extra $200-300 Daily By Getting 1,000-2,000 Hits Of Their Buyer Traffic Every Day

I cannot describe in words the value you’re getting today…

Normally, you’d pay a MINIMUM of $2-3 per click for highly targeted traffic…

And with this offer, you get to receive thousands of clicks daily, just for a low, one-time investment

They could easily sell this for $10,000+ but they won’t.

They love to spoil their customers and help them succeed.

So it’s all worth it in the end

  • They’ll Send You their OWN Personal Traffic That’s Making Them Money
  • An Easy Way To Make Additional Income
  • They’re Making MILLIONS Of Dollars With This Targeted Traffic
  • Bonus #1: 3 NEW Apps That Pay $300+ On Autopilot
  • Bonus #2: How To Make $300 to $500 Per Day With Google Maps

– OTO6: TWEETERBUCKS 7 STREAMS OF INCOME ($47)

DOWNSELL TO $37

It’s always a good idea to not rely on just one stream of income. With TweeterBucks 7X, you get to add new income streams to your online business.

This will ensure that cash continues rolling in.

Because even if one stops working, you still have money coming in from the other income streams.

So you can continue getting paid day in, day out!

Multiple streams of income are the exact reason why some people are millionaires and why others aren’t.

  • Enjoy AUTOMATED Income From 7 New Income Streams…
  • Virtually Effortless To Setup
  • Make More Money Without Working Extra
  • 365 Day Money Back Guarantee

– OTO7: TWEETERBUCKS RESELLER ($197)

DOWNSELL TO $97

They Pay YOU Up To $698 For Every Click You Send Them. Sell TweeterBucks & Keep 100% Of The Profits Made

  • Earn 100% commission on their pages with a proven track record
  • Instantly create a new income stream with no effort
  • Save thousands using their record-breaking sales materials
  • Nothing to upload, configure, host, or deliver, because they do it all for you

Note:

Here are some upgrade links for your reference. You must buy the Front-End (FE) first; after that, you can buy any of the OTOs you like.

If you buy the OTOs alone, you will receive NOTHING and will only waste time requesting a refund. Remember, the FE is a must-have package; at the very least, it ensures the product works properly.

FREQUENTLY ASKED QUESTIONS

  • My computer isn’t the best, will TweeterBucks work on mine?
    As long as it connects to the internet, YES. The same goes for any other device. The only requirement is access to a web browser. If that’s the case, you’re all good to go.
  • How long will it take to receive profits?
    This really varies, but the vast majority of their users report seeing profits within 12-24 hours of activating it. In short, the quicker you activate, the sooner you’ll likely profit.
  • Do I need any tech skills?
    None whatsoever. I hate technical hassles as much as you do, and they designed TweeterBucks with the average Joes and Janes in mind. After all, not everyone has a computer science degree.
  • Do I need to buy any traffic?
    Nope. You don’t have to mess around with buying traffic at all.
  • What if I get confused along the way?
    Don’t worry, they have video training that shows you every step of the way, from A-Z. They’ll show you everything you need to know so you can begin getting results.
  • I’m sold. How do I get started?
    Click the button below to get TweeterBucks for the lowest price.

CONCLUSION

I hope my TweeterBucks OTO review helps you make up your mind more quickly and accurately. If you have any questions, leave a comment on the blog post to let me know. Thank you for reading, and good luck to you!

Information Source

Tags: Buy TweeterBucks, Download TweeterBucks, Get TweeterBucks, Purchase TweeterBucks, Review TweeterBucks, TweeterBucks, TweeterBucks Bonus, TweeterBucks by Billion Dollar Virgin, TweeterBucks by Wesley Virgin, TweeterBucks Case Study, TweeterBucks Coaching, TweeterBucks Demo, TweeterBucks Make Money Online, TweeterBucks Money Page Creator, TweeterBucks OTO, TweeterBucks OTO Review, TweeterBucks OTO UPSELL, TweeterBucks Pricing, TweeterBucks Pro, TweeterBucks Pro Oto, TweeterBucks PRO Review, TweeterBucks Pro Software, TweeterBucks PRO Upsell, TweeterBucks Reviews, TweeterBucks Social Media, TweeterBucks Software, TweeterBucks Training, TweeterBucks Training Course, TweeterBucks Twitter, TweeterBucks Twitter Traffic, TweeterBucks Upsell, UPSELL OTO #1 TweeterBucks Ultimate, UPSELL OTO #2 TweeterBucks Done For You, UPSELL OTO #3 TweeterBucks Automation, UPSELL OTO #4 TweeterBucks Instant Profits, UPSELL OTO #5 TweeterBucks Traffic On Tap, UPSELL OTO #6 TweeterBucks Franchise, UPSELL OTO #7 TweeterBucks Multiple Income Streams