Monty Boehm

Go-xmlstruct: Generate Go Structs From Multiple XML Documents

Go-xmlstruct

Generate Go structs from multiple XML documents.

What does go-xmlstruct do and why should I use it?

go-xmlstruct generates Go structs from XML documents. Put another way, go-xmlstruct infers an XML schema from one or more example XML documents. For example, given this XML document, go-xmlstruct generates this Go source code.

Compared to existing Go struct generators like zek, XMLGen, and chidley, go-xmlstruct:

  • Takes multiple XML documents as input.
  • Generates field types of bool, int, string, or time.Time as appropriate.
  • Creates named types for all elements.
  • Handles optional attributes and elements.
  • Handles repeated attributes and elements.
  • Ignores empty chardata.
  • Provides a CLI for simple use.
  • Is usable as a Go package for advanced use, including configurable field naming.

go-xmlstruct is useful for quick-and-dirty unmarshalling of arbitrary XML documents, especially when you have no schema or the schema is extremely complex and you want something that "just works" with the documents you have.

Install

Install the goxmlstruct CLI with:

$ go install github.com/twpayne/go-xmlstruct/cmd/goxmlstruct@latest

Example

Feed goxmlstruct the simple XML document:

<parent>
  <child flag="true">
    text
  </child>
</parent>

by running:

$ echo '<parent><child flag="true">text</child></parent>' | goxmlstruct

This produces the output:

// This file is automatically generated. DO NOT EDIT.

package main

type Parent struct {
        Child struct {
                Flag     bool   `xml:"flag,attr"`
                CharData string `xml:",chardata"`
        } `xml:"child"`
}

This demonstrates:

  • A Go struct is generated from the structure of the input XML document.
  • Attributes, child elements, and chardata are all considered.
  • Field names are generated automatically.
  • Field types are detected automatically.
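
To try the generated struct, you can unmarshal the same document with encoding/xml. A minimal sketch, assuming the generated code above lives in the same main package:

package main

import (
	"encoding/xml"
	"fmt"
)

func main() {
	doc := []byte(`<parent><child flag="true">text</child></parent>`)

	// Parent is the struct generated by goxmlstruct above.
	var parent Parent
	if err := xml.Unmarshal(doc, &parent); err != nil {
		panic(err)
	}
	fmt.Println(parent.Child.Flag, parent.Child.CharData) // true text
}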

For a full list of options to the goxmlstruct CLI run:

$ goxmlstruct -help

You can run a more advanced example with:

$ git clone https://github.com/twpayne/go-xmlstruct.git
$ cd go-xmlstruct
$ goxmlstruct internal/tests/gpx/testdata/*.gpx

This demonstrates generating a Go struct from multiple complex XML documents.

For an example of configurable field naming and named types by using go-xmlstruct as a package, see internal/tests/play/play_test.go.

For an example of a complex schema, see internal/tests/aixm/aixm_test.go.

How does go-xmlstruct work?

Similar to go-jsonstruct, go-xmlstruct consists of two phases:

  1. Firstly, go-xmlstruct explores all input XML documents to determine their structure. It gathers statistics on the types used for each attribute, chardata, and child element.
  2. Secondly, go-xmlstruct generates a Go struct based on the observed structure using the gathered statistics to determine the type of each field.

Download Details:

Author: twpayne
Source Code: https://github.com/twpayne/go-xmlstruct 
License: MIT license

#go #golang #schema #generator #xml 

Royce Reinger

A Light-weight, Flexible, Expressive Statistical Data Testing Library

Pandera


A Statistical Data Testing Toolkit

A data validation library for scientists, engineers, and analysts seeking correctness.

pandera provides a flexible and expressive API for performing data validation on dataframe-like objects to make data processing pipelines more readable and robust.

Dataframes contain information that pandera explicitly validates at runtime. This is useful in production-critical or reproducible research settings. With pandera, you can:

  1. Define a schema once and use it to validate different dataframe types including pandas, dask, modin, and pyspark.
  2. Check the types and properties of columns in a DataFrame or values in a Series.
  3. Perform more complex statistical validation like hypothesis testing.
  4. Seamlessly integrate with existing data analysis/processing pipelines via function decorators.
  5. Define dataframe models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
  6. Synthesize data from schema objects for property-based testing with pandas data structures.
  7. Lazily validate dataframes so that all validation checks are executed before raising an error.
  8. Integrate with a rich ecosystem of python tools like pydantic, fastapi, and mypy.

Install

Using pip:

pip install pandera

Using conda:

conda install -c conda-forge pandera

Extras

Installing additional functionality:

pip

pip install pandera[hypotheses]  # hypothesis checks
pip install pandera[io]          # yaml/script schema io utilities
pip install pandera[strategies]  # data synthesis strategies
pip install pandera[mypy]        # enable static type-linting of pandas
pip install pandera[fastapi]     # fastapi integration
pip install pandera[dask]        # validate dask dataframes
pip install pandera[pyspark]     # validate pyspark dataframes
pip install pandera[modin]       # validate modin dataframes
pip install pandera[modin-ray]   # validate modin dataframes with ray
pip install pandera[modin-dask]  # validate modin dataframes with dask
pip install pandera[geopandas]   # validate geopandas geodataframes

conda

conda install -c conda-forge pandera-hypotheses  # hypothesis checks
conda install -c conda-forge pandera-io          # yaml/script schema io utilities
conda install -c conda-forge pandera-strategies  # data synthesis strategies
conda install -c conda-forge pandera-mypy        # enable static type-linting of pandas
conda install -c conda-forge pandera-fastapi     # fastapi integration
conda install -c conda-forge pandera-dask        # validate dask dataframes
conda install -c conda-forge pandera-pyspark     # validate pyspark dataframes
conda install -c conda-forge pandera-modin       # validate modin dataframes
conda install -c conda-forge pandera-modin-ray   # validate modin dataframes with ray
conda install -c conda-forge pandera-modin-dask  # validate modin dataframes with dask
conda install -c conda-forge pandera-geopandas   # validate geopandas geodataframes

Quick Start

import pandas as pd
import pandera as pa


# data to validate
df = pd.DataFrame({
    "column1": [1, 4, 0, 10, 9],
    "column2": [-1.3, -1.4, -2.9, -10.1, -20.4],
    "column3": ["value_1", "value_2", "value_3", "value_2", "value_1"]
})

# define schema
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, checks=pa.Check.le(10)),
    "column2": pa.Column(float, checks=pa.Check.lt(-1.2)),
    "column3": pa.Column(str, checks=[
        pa.Check.str_startswith("value_"),
        # define custom checks as functions that take a series as input and
        # outputs a boolean or boolean Series
        pa.Check(lambda s: s.str.split("_", expand=True).shape[1] == 2)
    ]),
})

validated_df = schema(df)
print(validated_df)

#     column1  column2  column3
#  0        1     -1.3  value_1
#  1        4     -1.4  value_2
#  2        0     -2.9  value_3
#  3       10    -10.1  value_2
#  4        9    -20.4  value_1
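
If any check fails, calling the schema raises a SchemaError that pinpoints the failing check and the offending values. A minimal sketch of this behavior (invalid_df is an illustrative name):

import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema({"column1": pa.Column(int, checks=pa.Check.le(10))})
invalid_df = pd.DataFrame({"column1": [1, 42]})  # 42 violates le(10)

try:
    schema(invalid_df)
except pa.errors.SchemaError as exc:
    print(exc)  # reports the failed check and the failure cases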

DataFrame Model

pandera also provides an alternative API for expressing schemas inspired by dataclasses and pydantic. The equivalent DataFrameModel for the above DataFrameSchema would be:

from pandera.typing import Series

class Schema(pa.DataFrameModel):

    column1: Series[int] = pa.Field(le=10)
    column2: Series[float] = pa.Field(lt=-1.2)
    column3: Series[str] = pa.Field(str_startswith="value_")

    @pa.check("column3")
    def column_3_check(cls, series: Series[str]) -> Series[bool]:
        """Check that values have two elements after being split with '_'"""
        return series.str.split("_", expand=True).shape[1] == 2

Schema.validate(df)
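
Validation can also be run lazily so that all checks execute before an error is raised. A sketch, assuming the imports and the Schema model defined above:

try:
    Schema.validate(
        pd.DataFrame({
            "column1": [1, 42],             # 42 violates le(10)
            "column2": [-1.3, 0.5],         # 0.5 violates lt(-1.2)
            "column3": ["value_1", "bad"],  # "bad" violates str_startswith("value_")
        }),
        lazy=True,
    )
except pa.errors.SchemaErrors as exc:
    print(exc.failure_cases)  # one row per failed check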

Development Installation

git clone https://github.com/pandera-dev/pandera.git
cd pandera
pip install -r requirements-dev.txt
pip install -e .

Tests

pip install pytest
pytest tests

Contributing to pandera

All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.

A detailed overview on how to contribute can be found in the contributing guide on GitHub.

Issues

Go here to submit feature requests or bugfixes.

Need Help?

There are many ways of getting help with your questions. You can ask a question on the GitHub Discussions page or reach out to the maintainers and the pandera community on Discord.

Why pandera?

Alternative Data Validation Libraries

Here are a few other alternatives for validating Python data structures, spanning generic Python object validation, pandas-specific validation, and other data validation tools.

How to Cite

If you use pandera in the context of academic or industry research, please consider citing the paper and/or software package.

Paper

@InProceedings{ niels_bantilan-proc-scipy-2020,
  author    = { {N}iels {B}antilan },
  title     = { pandera: {S}tatistical {D}ata {V}alidation of {P}andas {D}ataframes },
  booktitle = { {P}roceedings of the 19th {P}ython in {S}cience {C}onference },
  pages     = { 116 - 124 },
  year      = { 2020 },
  editor    = { {M}eghann {A}garwal and {C}hris {C}alloway and {D}illon {N}iederhut and {D}avid {S}hupe },
  doi       = { 10.25080/Majora-342d178e-010 }
}



Documentation

The official documentation is hosted on ReadTheDocs: https://pandera.readthedocs.io


Download Details:

Author: unionai-oss
Source Code: https://github.com/unionai-oss/pandera 
License: MIT license

#machinelearning #python #testing #schema #validation 


Find Database Schema Examples

Database schema examples are easy to find. But not all of them will meet your needs. Here’s how to find helpful examples of database design.

Database schema examples are easy to find. A Google search will return millions of results. The problem is that you don’t need millions of sample database models. You need at most two or three that solve your specific needs. What you should look for is a curated list of database schema examples and learning resources.

The difference between a broad search and a curated list is that the curated list contains only the most relevant items. If you are looking for valuable information, a curated list will allow you to save time; you’re reaping the fruits of the labor someone else took to separate the worthwhile from the irrelevant. And that’s just what this article is: a curated list of places to find precisely the database schema examples you need.

To make this list even more useful, it is divided into two main groups: resources for beginners (who need to understand the basics of database schema design), and resources for expert designers looking to expand their knowledge of specific topics.

Database Schema Examples for Beginners

If you are looking for database schema examples that will help you take your first steps in data modeling, I advise you to go for simple examples. These illustrate basic design concepts: relationship types, normal forms, OLAP schema types, etc. To find these kinds of examples, you can resort to courses, conferences, basic-level books, and company websites that publicly offer resources for learning database design. There are also sites where database designers chat and exchange information, although these will be more useful only when you have more experience.

If you are learning about database design, there’s one thing it’s very important to understand. To fully grasp the concepts, it is not enough to copy database diagram examples. For those examples to really work for you, you need to know the underlying logic that led to their creation. That is why the courses, articles, and books I mention below give the explanation behind their schemas. You will also find the theoretical foundations necessary to be able to build those schemas.

Most of these learning resources take you by the hand through a gradual learning path, starting from what a database is all the way to how to optimize your database schemas for maximum performance and data integrity.

Relational Database Design (Udemy Course)

This course gives you the necessary knowledge to design relational databases, walking through database schema examples as it goes. You don't need to know SQL or have programming experience to take this course. In addition, it explains normalization, normal forms, relationships, primary/foreign keys, and other important topics. With almost three hours of video lessons, this course demonstrates how to shape a database using entity-relationship diagrams.

By following this course, you will learn other fundamental issues of software development – e.g. how to do requirements gathering to create your database. The course also explains how to identify entities and their attributes from a requirements definition.

Examples of Basic Schemas (Hevodata Guide)

Hevodata is a data source integration platform. As part of its outreach efforts, it offers a comprehensive database design guide with a large number of schema examples. In this guide, you can find simple schema examples that illustrate the basic concepts of data modeling. It also covers specific examples for real applications like online banking, hotel reservations, and financial transactions.

In addition, Hevo’s guide also offers a series of best practices for schema design, including the correct use of nulls, protecting data integrity, and applying naming standards.

The Vertabelo Blog

If you are reading this, you are probably already familiar with Vertabelo's blog and the wealth of material it offers for both novice and experienced database designers. What you may not know is that there is an easy way to find all the posts that contain database model examples. If you go to the page corresponding to the tag "Example Data Model", you'll get an extensive list of articles featuring example data models. You will not only see the data models themselves, you will also get the explanations of the design decisions that guided their creation. You will also find a detailed guide to database schema design that you could follow as a sure path to a successful database schema.

The examples on Vertabelo's blog are not just for beginners. There is everything from basic models that illustrate some theoretical aspect of design – such as 5 Examples of Conceptual Diagrams – to models created for very specific uses, such as an emergency call service.

Check out Vertabelo's other useful resources on database design.

Database Design for Mere Mortals: A Hands-On Guide to Relational Database Design (Book)

This book provides a non-academic introduction to database design. It is geared toward “mere mortals” who are looking to create database schemas without having to rely on an expert designer. The author includes a wealth of useful tips for novice designers. Both concepts and tips are illustrated by database schema examples that can be used in creating a database from scratch.

Some readers point out that the work methodology proposed by the author requires too many meetings and interviews with users. Others, however, say that all that extra work is what is needed to successfully move from unclear initial definitions to concrete and effective designs.

The Complete Database Design & Modeling Beginners Tutorial (Udemy Course)

This course offers a complete introduction to database design and modeling. It starts with the basic theory and moves to working on a real project: the creation of a MySQL database for an online store. The course covers a very complete set of basic to advanced database schema design topics.

Throughout the lessons, you’ll learn what a database is, what a relational database is, and what database design is. There’s even a list of the most frequently asked questions in job interviews for a database designer position (along with their answers).

 

If you follow the course to its end, you will have a real database schema that you can use for an e-commerce website.

Database Design (Datacamp Course)

In this course, you will get several OLAP (online analytics processing) and OLTP (online transaction processing) database model examples. The OLAP examples include schemas for different types of data warehouses, such as star and snowflake. In addition, the course teaches you how to work with different types of views. It covers database administration concepts, such as access and user management, table partitioning, and storage management.

You’ll need at least 4 hours for the 13 videos and 52 exercises in this course. You’ll learn how to organize and store information efficiently as well as how to structure your schemas through normalization. The examples include schemas for book sales, car rentals, and music reviews.

Beginning Database Design: From Novice to Professional (Book)

If you accept the fact that you are a beginner in database design – but one that’s determined to follow the path to database mastery – this is the book for you. It has examples of well-done database schemas as well as a few examples of bad schemas. The bad examples include a detailed explanation of what the problems are and how to correct them.

The author takes examples of database schemas from her real-life experience to highlight the kinds of problems that can result from poor design. Her goal is to motivate the reader to adopt good design practices, whether they intend to transform their designs into relational database schemas or just Excel spreadsheets.

Database Schema Examples for Experienced Designers

Database designers with a few years of experience also need database schema examples from time to time. These examples could save the experienced designer a lot of effort and build on the path paved by others who have previously faced the same challenges.

But the resources that may serve an experienced designer are not the same as those for a beginner. For example, an experienced designer would not need a book that gives the reader a tour of the theoretical foundations of data modeling. For that reason, the resources listed below are curated specifically with the needs of an experienced designer in mind. You could start by browsing some interesting blog articles on database design best practices and tips, which can give you a quick answer to your specific needs. What you would surely need as an experienced database schema designer are these tips for staying relevant as a data modeler.

Database System Concepts (Book)

At more than 1,000 pages, this book has become one of the fundamental texts on database design. It offers a large number of database schema examples which are used extensively in lieu of formal proofs of theoretical concepts.

Some familiarity with basic data structures, computer organization, and high-level programming languages is a prerequisite for reading this book. Although much of the material is aimed at students in the first years of database careers, there is also supplementary content that is extremely useful for expert designers looking for answers to specific questions.

SQL Antipatterns: Avoiding the Pitfalls of Database Programming (Book)

If you are looking for examples of the pitfalls to avoid when designing a database schema, you need this book. Of the four major sections into which the book is divided, two are devoted exclusively to analyzing erroneous design patterns (one for logical database design, the other for physical design).

This book is not just for database designers. It is material that should be read by all application programmers who use databases, as well as any data analyst or data scientist. Reading these pages reaffirms concepts that have been clear since the early days of the relational model but that many so-called experts still don't understand: for example, that normalizing a schema “too much” can hurt performance and that some tables don't need a primary key.

If this book were required reading in programming courses, many application development problems that end up being attributed to poor database management system performance would be avoided.

Stack Overflow Tag Search

It’s well known that virtually any software development question will have some answer on Stack Overflow. The downside is that, with such a wealth of information, it is often difficult to find just what you need. To refine the results of a search on Stack Overflow, it is a good idea to use tags. These are indicated in square brackets in the site’s search bar and act as filters to retrieve only articles that have been tagged with the text written in square brackets. For example, placing the text [database-design] in the search bar will return posts tagged with database-design.

Even so, the number of results can still be overwhelming. A more effective filter can be obtained by applying three tags at the same time: database-design, relational-database, and entity-relationship. These three tags can be specified in the search bar, like this:

[database-design] [relational-database] [entity-relationship]

Or you can simply enter a URL with all the three tags already applied:

https://stackoverflow.com/questions/tagged/database-design+relational-database+entity-relationship

Examples and answers to thousands of advanced questions about database schema design can be found in the results of this search.

To find publications about a specific text within tag results, you can add free text to the search. This will return only publications that include that text in any part of their content.

For example, suppose you want to search for recursive relationships in the results of the tags for database design, relational database, and entity-relationship. To do this, you simply have to add the text “recursive relationships” to the tags already entered in the search bar, like this:

[database-design] [relational-database] [entity-relationship] "recursive relationships"

Don’t Settle for Just Database Schema Examples

If you were to learn basic algebra, you would have a hard time doing it solely with examples of mathematical operations and their results. You can see hundreds of examples of addition, subtraction, multiplication, and division. But if you don’t study the theoretical foundation of each operation, you are unlikely to really learn basic algebra. The same goes for database design.

Examples of database schemas are very useful so that you don’t have to reinvent the wheel every time you must solve a problem that someone else has already solved before. But it is important to grasp the theoretical foundation behind each example. Understand why each design decision was made so that you can learn from the examples rather than just using them.

Original article source at: https://www.vertabelo.com/

#database #schema #examples 

Lawrence Lesch

Typia: Super-fast Runtime Validator (type Checker) with only One Line

Typia

// RUNTIME VALIDATORS
export function is<T>(input: unknown | T): input is T; // returns boolean
export function assert<T>(input: unknown | T): T; // throws TypeGuardError
export function validate<T>(input: unknown | T): IValidation<T>; // detailed

// STRICT VALIDATORS
export function equals<T>(input: unknown | T): input is T;
export function assertEquals<T>(input: unknown | T): T;
export function validateEquals<T>(input: unknown | T): IValidation<T>;

// JSON
export function application<T>(): IJsonApplication; // JSON schema
export function assertParse<T>(input: string): T; // type safe parser
export function assertStringify<T>(input: T): string; // safe and faster
    // +) isParse, validateParse 
    // +) stringify, isStringify, validateStringify

typia is a transformer library for TypeScript that supports the following features:

  • Super-fast Runtime Validators
  • Safe JSON parse and fast stringify functions
  • JSON schema generator

All functions in typia require only one line. You don't need any extra work such as JSON schema definitions or decorator function calls; just call a typia function in a single line, like typia.assert<T>(input).

Also, because typia performs AOT (ahead-of-time) compilation, it is much faster than competing libraries. For example, the validation function is() is up to 15,000x faster than class-validator.

Is Function Benchmark

Measured on Intel i5-1135g7, Surface Pro 8

Setup

Setup Wizard

npx typia setup

Just type npx typia setup, that's all.

Also, you can specify the package manager or the target tsconfig.json file like below:

npx typia setup --manager npm
npx typia setup --manager pnpm
npx typia setup --manager yarn

npx typia setup --project tsconfig.json
npx typia setup --project tsconfig.test.json

After the setup, you can compile code that uses typia with the ttsc (ttypescript) command. If you want to run your TypeScript file directly through ts-node, add the -C ttypescript argument like below:

# COMPILE THROUGH TTYPESCRIPT
npx ttsc

# RUN TS-NODE WITH TTYPESCRIPT
npx ts-node -C ttypescript src/index.ts

Manual Setup

If you want to install and setup typia manually, read Guide Documents - Setup.

The same Guide Documents - Setup section also explains how to use the pure TypeScript compiler tsc through ts-patch, instead of the ttypescript compiler and its ttsc command.

Vite

When you want to setup typia on your frontend project with vite, just configure vite.config.ts like below.

For reference, don't forget to run the Setup Wizard first.

import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import typescript from "@rollup/plugin-typescript";
import ttsc from "ttypescript";

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [
    react(),
    typescript({
      typescript: ttsc,
    })
  ]
});

Features

Guide Documents

This README provides only summarized information.

For more details, refer to the Guide Documents (wiki).

Runtime Validators

// ALLOW SUPERFLUOUS PROPERTIES
export function is<T>(input: T | unknown): input is T; // returns boolean
export function assert<T>(input: T | unknown): T; // throws `TypeGuardError`
export function validate<T>(input: T | unknown): IValidation<T>; // detailed

// DO NOT ALLOW SUPERFLUOUS PROPERTIES
export function equals<T>(input: T | unknown): input is T;
export function assertEquals<T>(input: T | unknown): T;
export function validateEquals<T>(input: T | unknown): IValidation<T>;

// REUSABLE FACTORY FUNCTIONS
export function createIs<T>(): (input: unknown) => input is T;
export function createAssert<T>(): (input: unknown) => T;
export function createValidate<T>(): (input: unknown) => IValidation<T>;
export function createEquals<T>(): (input: unknown) => input is T;
export function createAssertEquals<T>(): (input: unknown) => T;
export function createValidateEquals<T>(): (input: unknown) => IValidation<T>;

typia supports three types of validator functions: is(), assert(), and validate().

If you want stricter validators that do not allow superfluous properties not declared in the type T, use equals(), assertEquals(), and validateEquals() instead. If you want to create reusable validator functions, use factory functions such as createIs().

When you want to add special validation logic, such as limiting the range of numeric values, you can do so through comment tags; see the sketch below. To learn more about them, visit the Guide Documents (Features > Runtime Validators > Comment Tags).
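
For instance, here is a rough sketch of comment tags, assuming the tag names from the guide documents (@minimum, @maximum, and @format) and an illustrative IMember interface:

import typia from "typia";

interface IMember {
    /**
     * @format email
     */
    email: string;

    /**
     * @minimum 19
     * @maximum 100
     */
    age: number;
}

typia.is<IMember>({ email: "robot@example.com", age: 30 }); // true
typia.is<IMember>({ email: "not-an-email", age: 3 });       // false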

Enhanced JSON

// JSON SCHEMA GENERATOR
export function application<
    Types extends unknown[],
    Purpose extends "swagger" | "ajv" = "swagger",
    Prefix extends string = Purpose extends "swagger"
        ? "#/components/schemas"
        : "components#/schemas",
>(): IJsonApplication;

// SAFE PARSER FUNCTIONS
export function isParse<T>(input: string): T | null;
export function assertParse<T>(input: string): T;
export function validateParse<T>(input: string): IValidation<T>;

// FASTER STRINGIFY FUNCTIONS
export function stringify<T>(input: T): string; // unsafe
export function isStringify<T>(input: T): string | null; // safe
export function assertStringify<T>(input: T): string;
export function validateStringify<T>(input: T): IValidation<string>;

// FACTORY FUNCTIONS
export function createAssertParse<T>(): (input: string) => T;
export function createAssertStringify<T>(): (input: T) => string;
    // +) createIsParse, createValidateParse
    // +) createStringify, createIsStringify, createValidateStringify

typia supports enhanced JSON functions.

  • application(): generate JSON schema with only one line
    • you can complement JSON schema contents through comment tags
  • assertParse(): parse JSON string safely with type validation
  • isStringify(): up to 10x faster JSON stringify function that is also type safe

JSON string conversion speed

Measured on AMD R7 5800H

Appendix

Nestia


Nestia is a set of helper libraries for NestJS, supporting the following features:

  • @nestia/core: up to 15,000x faster validation decorators using typia
  • @nestia/sdk: evolved SDK and Swagger generator for @nestia/core
  • nestia: just a CLI (command line interface) tool

import { Controller } from "@nestjs/common";
import { TypedBody, TypedRoute } from "@nestia/core";

import type { IBbsArticle } from "@bbs-api/structures/IBbsArticle";

@Controller("bbs/articles")
export class BbsArticlesController {
    /** 
     * Store a new content.
     * 
     * @param input Content to store
     * @returns Newly archived article
     */
    @TypedRoute.Post() // 10x faster and safer JSON.stringify()
    public async store(
        @TypedBody() input: IBbsArticle.IStore // super-fast validator
    ): Promise<IBbsArticle>; 
        // do not need DTO class definition, 
        // just fine with interface
}

Download Details:

Author: Samchon
Source Code: https://github.com/samchon/typia 
License: MIT license

#typescript #fasting #json #schema 

Hermann Frami

Angular: JSON Powered forms For Angular

Form.io Angular JSON Form Renderer

This library serves as a Dynamic JSON Powered Form rendering library for Angular. It works by providing a JSON schema to a <formio> Angular component, which dynamically renders the form within the front-end application. This allows forms to be dynamically built using JSON schemas.

Angular Material

If you are looking for Angular Material support, it is provided in a separate library at https://github.com/formio/angular-material-formio

Running Demo

To run a demo of the Form.io Angular renderer, please follow these steps.

  1. Make sure you have the Angular CLI installed on your machine.
  2. Download the Angular Demo Application to your computer.
  3. With your terminal, type npm install
  4. Now type ng serve

This will start up an example application where you can see all the features provided by this module.

Here is the hosted demo application https://formio.github.io/angular-demo/

Using within your application

You can easily render a form within your Angular application by referencing the URL of that form as follows.

<formio src='https://examples.form.io/example'></formio>

You can also pass the JSON form directly to the renderer as follows.

<formio [form]='{
    "title": "My Test Form",
    "components": [
        {
            "type": "textfield",
            "input": true,
            "tableView": true,
            "inputType": "text",
            "inputMask": "",
            "label": "First Name",
            "key": "firstName",
            "placeholder": "Enter your first name",
            "prefix": "",
            "suffix": "",
            "multiple": false,
            "defaultValue": "",
            "protected": false,
            "unique": false,
            "persistent": true,
            "validate": {
                "required": true,
                "minLength": 2,
                "maxLength": 10,
                "pattern": "",
                "custom": "",
                "customPrivate": false
            },
            "conditional": {
                "show": "",
                "when": null,
                "eq": ""
            }
        },
        {
            "type": "textfield",
            "input": true,
            "tableView": true,
            "inputType": "text",
            "inputMask": "",
            "label": "Last Name",
            "key": "lastName",
            "placeholder": "Enter your last name",
            "prefix": "",
            "suffix": "",
            "multiple": false,
            "defaultValue": "",
            "protected": false,
            "unique": false,
            "persistent": true,
            "validate": {
                "required": true,
                "minLength": 2,
                "maxLength": 10,
                "pattern": "",
                "custom": "",
                "customPrivate": false
            },
            "conditional": {
                "show": "",
                "when": null,
                "eq": ""
            }
        },
        {
            "input": true,
            "label": "Submit",
            "tableView": false,
            "key": "submit",
            "size": "md",
            "leftIcon": "",
            "rightIcon": "",
            "block": false,
            "action": "submit",
            "disableOnInvalid": true,
            "theme": "primary",
            "type": "button"
        }
    ]
}'></formio>

This is a very simple example. This library is capable of building very complex forms which include e-signatures, columns, panels, field conditionals, validation requirements, and the list goes on and on.

Usage

To use this library within your project, you will first need to install it as a dependency.

npm install --save @formio/angular formiojs

You can now include the module in your Angular application like so.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { CommonModule } from '@angular/common';
import { FormioModule } from '@formio/angular';
import { AppComponent } from './app.component';

@NgModule({
    imports: [ BrowserModule, CommonModule, FormioModule ],
    declarations: [ AppComponent ],
    bootstrap: [ AppComponent ]
})
export class AppModule { }
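
You can then render a form in a component and react to submissions through the component's outputs. A minimal sketch (the (submit) output is part of the renderer; AppComponent and onSubmit are illustrative names):

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <formio src="https://examples.form.io/example"
            (submit)="onSubmit($event)"></formio>
  `
})
export class AppComponent {
  // receives the submission object when the user submits the form
  onSubmit(submission: any) {
    console.log(submission);
  }
}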

Included Libraries

This library is a combination of multiple libraries that enable rapid Serverless application development using Form.io. These libraries are as follows.

  1. Form Renderer - The form renderer in Angular
  2. Form Builder - The form builder in Angular
  3. Form Manager - The form management application used to manage forms.
  4. Authentication - Allows an easy way to provide Form.io authentication into your application.
  5. Resource - A way to include the Resources within your application with full CRUDI support (Create, Read, Update, Delete, Index)
  6. Data Table (Grid) - A way to render data within a Table format, which includes pagination, sorting, etc.

Click on each of those links to read more about how they work and how to utilize them to their fullest potential.

Demo Application

If you would like to run a demonstration of all the features of this module, then you can check out the Angular Demo Application, which is the code behind the following hosted application: https://formio.github.io/angular-demo

Application Starter Kit

For help in getting started using this library, we created the angular-app-starterkit repository to help you get started with best practices for using Form.io within an Angular application. You can try this application by downloading it and then doing the following.

npm install
npm start

Full Documentation

To read up on the full documentation of this library, please check out the Wiki Page

About Form.io

Form.io is a combined form and data management API platform created for developers who are building "Serverless" form-based applications. Form.io provides an easy drag-and-drop form builder workflow allowing you to build complex forms for enterprise applications quickly and easily. These forms are then embedded directly into your application with a single line of code that dynamically renders the form (using Angular or React) in your app while at the very same time generating the RESTful API to support those forms. The Form.io platform also offers numerous 3rd-party services that are fully integrated into the form building process allowing you to extend the power and capability of your apps while saving time and effort.

You can use this renderer with Form.io by simply pointing the src parameter to the URL of the form. For example, the following URL points to the JSON schema of a form built on Form.io.

https://pjmfogrfqptslvi.form.io/test

To render this form, you simply provide that URL to the <formio> directive like so.

<formio src="https://pjmfogrfqptslvi.form.io/test"></formio>

Not only will this render the form, but it will also submit that form to the provided API endpoint.

Download Details:

Author: formio
Source Code: https://github.com/formio/angular 
License: MIT license

#serverless #angular #json #schema #forms 

Lawrence Lesch

Compile JSONSchema to TypeScript Type Declarations

json-schema-to-typescript 

Compile JSON Schema to TypeScript typings

Example

Input:

{
  "title": "Example Schema",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string"
    },
    "lastName": {
      "type": "string"
    },
    "age": {
      "description": "Age in years",
      "type": "integer",
      "minimum": 0
    },
    "hairColor": {
      "enum": ["black", "brown", "blue"],
      "type": "string"
    }
  },
  "additionalProperties": false,
  "required": ["firstName", "lastName"]
}

Output:

export interface ExampleSchema {
  firstName: string;
  lastName: string;
  /**
   * Age in years
   */
  age?: number;
  hairColor?: "black" | "brown" | "blue";
}

Installation

# Using Yarn:
yarn add json-schema-to-typescript

# Or, using NPM:
npm install json-schema-to-typescript --save

Usage

import * as fs from 'fs'
import { compile, compileFromFile } from 'json-schema-to-typescript'

// compile from file
compileFromFile('foo.json')
  .then(ts => fs.writeFileSync('foo.d.ts', ts))

// or, compile a JS object
let mySchema = {
  properties: {...}
}
compile(mySchema, 'MySchema')
  .then(ts => ...)

See server demo and browser demo for full examples.

Options

compileFromFile and compile accept options as their last argument (all keys are optional):

key | type | default | description
--- | --- | --- | ---
additionalProperties | boolean | true | Default value for additionalProperties, when it is not explicitly set
bannerComment | string | "/* eslint-disable */\n/**\n* This file was automatically generated by json-schema-to-typescript.\n* DO NOT MODIFY IT BY HAND. Instead, modify the source JSONSchema file,\n* and run json-schema-to-typescript to regenerate this file.\n*/" | Disclaimer comment prepended to the top of each generated file
cwd | string | process.cwd() | Root directory for resolving $refs
declareExternallyReferenced | boolean | true | Declare external schemas referenced via $ref?
enableConstEnums | boolean | true | Prepend enums with const?
format | boolean | true | Format code? Set this to false to improve performance.
ignoreMinAndMaxItems | boolean | false | Ignore maxItems and minItems for array types, preventing tuples being generated.
maxItems | number | 20 | Maximum number of unioned tuples to emit when representing bounded-size array types, before falling back to emitting unbounded arrays. Increase this to improve precision of emitted types, decrease it to improve performance, or set it to -1 to ignore maxItems.
style | object | { bracketSpacing: false, printWidth: 120, semi: true, singleQuote: false, tabWidth: 2, trailingComma: 'none', useTabs: false } | A Prettier configuration
unknownAny | boolean | true | Use unknown instead of any where possible
unreachableDefinitions | boolean | false | Generates code for $defs that aren't referenced by the schema.
strictIndexSignatures | boolean | false | Append all index signatures with | undefined so that they are strictly typed.
$refOptions | object | {} | $RefParser Options, used when resolving $refs
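
For example, options go in the last argument of compile. A small sketch (the option values shown are illustrative):

import { compile } from 'json-schema-to-typescript'

const mySchema = {
  type: 'object',
  properties: { id: { type: 'integer' } },
  required: ['id'],
  additionalProperties: false,
}

compile(mySchema, 'MySchema', {
  bannerComment: '',            // drop the generated-file disclaimer
  style: { singleQuote: true }, // Prettier settings
}).then(ts => console.log(ts))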

CLI

A CLI utility is provided with this package.

cat foo.json | json2ts > foo.d.ts
# or
json2ts foo.json > foo.d.ts
# or
json2ts foo.json foo.d.ts
# or
json2ts --input foo.json --output foo.d.ts
# or
json2ts -i foo.json -o foo.d.ts
# or (quote globs so that your shell doesn't expand them)
json2ts -i 'schemas/**/*.json'
# or
json2ts -i schemas/ -o types/

You can pass any of the options described above (including style options) as CLI flags. Boolean values can be set to false using the no- prefix.

# generate code for definitions that aren't referenced
json2ts -i foo.json -o foo.d.ts --unreachableDefinitions
# use single quotes and disable trailing semicolons
json2ts -i foo.json -o foo.d.ts --style.singleQuote --no-style.semi

Tests

npm test

Features

  •  title => interface
  •  Primitive types:
    •  array
    •  homogeneous array
    •  boolean
    •  integer
    •  number
    •  null
    •  object
    •  string
    •  homogeneous enum
    •  heterogeneous enum
  •  Non/extensible interfaces
  •  Custom JSON-schema extensions
  •  Nested properties
  •  Schema definitions
  •  Schema references
  •  Local (filesystem) schema references
  •  External (network) schema references
  •  Add support for running in browser
  •  default interface name
  •  infer unnamed interface name from filename
  •  allOf ("intersection")
  •  anyOf ("union")
  •  oneOf (treated like anyOf)
  •  maxItems (eg)
  •  minItems (eg)
  •  additionalProperties of type
  •  patternProperties (partial support)
  •  extends
  •  required properties on objects (eg)
  •  validateRequired (eg)
  •  literal objects in enum (eg)
  •  referencing schema by id (eg)
  •  custom typescript types via tsType

Custom schema properties:

  • tsType: Overrides the type that's generated from the schema. Useful for forcing a type to any or when using non-standard JSON schema extensions (eg).
  • tsEnumNames: Overrides the names used for the elements in an enum. Can also be used to create string enums (eg).


FAQ

JSON-Schema-to-TypeScript is crashing on my giant file. What can I do?

Prettier is known to run slowly on really big files. To skip formatting and improve performance, set the format option to false.


Download Details:

Author: Bcherny
Source Code: https://github.com/bcherny/json-schema-to-typescript 

#typescript #json #schema 


Learn Default Unused Managed Properties in SharePoint Search Schema

Introduction

In this blog post, we will talk about the default unused managed properties in SharePoint Online: what types of managed properties are available, the total number of each, and, most importantly, the IDs of the managed properties.

Default Unused Managed Properties

In SharePoint, a new site collection comes with different types of managed properties. Some of them are already used by SharePoint, and the rest remain unused and available for us to reuse, so we don't need to create new managed properties. We can also rename these unused managed properties using an alias, as per our requirements.

The table below provides an overview of the managed properties that are unused and available by default:

Managed property type | Count | Features | Managed property name range
--- | --- | --- | ---
Date | 10 | Query | Date00 to Date09
Date | 20 | Multi, Query, Retrieve, Refine, Sort | RefinableDate00 to RefinableDate19
Date | 2 | Query, Retrieve, Refine, Sort | RefinableDateInvariant00 to RefinableDateInvariant01
Date | 5 | Query, Retrieve, Refine, Sort | RefinableDateSingle00 to RefinableDateSingle04
Decimal | 10 | Query | Decimal00 to Decimal09
Decimal | 10 | Multi, Query, Retrieve, Refine, Sort | RefinableDecimal00 to RefinableDecimal09
Double | 10 | Query | Double00 to Double09
Double | 10 | Multi, Query, Retrieve, Refine, Sort | RefinableDouble00 to RefinableDouble09
Integer | 50 | Query | Int00 to Int49
Integer | 50 | Multi, Query, Retrieve, Refine, Sort | RefinableInt00 to RefinableInt49
String | 200 | Multi, Query, Retrieve, Refine, Sort | RefinableString00 to RefinableString199
String | 40 | Multi, Query, Retrieve, Refine, Sort | RefinableStringFirst00 to RefinableStringFirst39
String | 10 | Multi, Query, Retrieve, Refine, Sort | RefinableStringLn00 to RefinableStringLn09
String | 50 | Query, Retrieve, Refine, Sort | RefinableStringWbOff00 to RefinableStringWbOff49
String | 50 | Multi, Query, Retrieve, Refine, Sort | RefinableStringWbOffFirst00 to RefinableStringWbOffFirst49

ID of the Managed Properties

When we need to provision managed properties, especially using the XML schema, the most important piece of information is the ID of each managed property. As a SharePoint developer, I consider this the most critical information to keep handy. See the following table for the IDs of the various managed properties:

Sr No | Managed property type | PID | Example
--- | --- | --- | ---
1 | RefinableString | 10000000 | RefinableString00 -> 1000000000
2 | Int | 10000001 | Int00 -> 1000000100
3 | Date | 10000002 | Date00 -> 1000000200
4 | Decimal | 10000003 | Decimal00 -> 1000000300
5 | Double | 10000004 | Double00 -> 1000000400
6 | RefinableInt | 10000005 | RefinableInt00 -> 1000000500
7 | RefinableDate | 10000006 | RefinableDate00 -> 1000000600
8 | RefinableDateSingle | 0100000065 | RefinableDateSingle00 -> 1000000650
9 | RefinableDateInvariant | 0100000066 | RefinableDateInvariant00 -> 1000000660
10 | RefinableDecimal | 10000007 | RefinableDecimal00 -> 1000000700
11 | RefinableDouble | 10000008 | RefinableDouble00 -> 1000000800
12 | RefinableString1 | 10000009 | RefinableString100 -> 1000000900
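
As the examples show, a property's full ID is its PID prefix followed by the property's index. Following this pattern, for instance, RefinableString25 would have the ID 1000000025 (the prefix 10000000 followed by the two-digit index 25).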

For more details about the Search Schema in SharePoint, refer to the official documentation from Microsoft at Manage the search schema in SharePoint – SharePoint in Microsoft 365 | Microsoft Learn

I hope you enjoyed reading this blog post. Please feel free to leave a comment in case you have any feedback, suggestions, or queries.

Original article source at: https://www.c-sharpcorner.com/

#sharepoint #search #schema 

Monty Boehm

Put Tables in A Vertabelo Data Model Into A Particular Schema

There is more to a database structure than just tables. Tables are logically divided into groups and stored in database schemas. Read along to find out how to include schemas in your ER diagram using Vertabelo.

A database consists of one or more database schemas, and a database schema consists of one or more tables. A good data modeling tool helps you set them all up. In this article, you’ll see how to create database schemas in Vertabelo and assign tables to them.

Need to recall some basics on data modeling? Make sure to check out the article “Data Modeling Basics in 10 Minutes” before jumping into database schemas.

Let’s get started.

How to Create a Schema in Vertabelo

First things first! Log in to your Vertabelo account and create a model.

Let’s create a physical data model by clicking the Create new document icon in the menu toolbar.


Next, we choose the physical data model.


Now, we give a name to our model and choose a database engine.


Before we start creating tables, let’s create the schemas to be used in this model.

On the right-side panel, you see the Model Properties pane. To create schemas, expand the Additional SQL scripts tab.


Now, let’s input the SQL statements for schema creation.

CREATE SCHEMA employees;

CREATE SCHEMA clients

The CREATE SCHEMA statement is very straightforward. It creates a schema with the name specified in the statement. Note that there is no semicolon after the last statement. Vertabelo adds it by default as you’ll see in the upcoming examples.

To learn more about the Additional SQL scripts tab, read the article “What Are Additional SQL Scripts and What Do You Use Them For in Vertabelo?”

Now, our database schemas are ready! The next step is to create tables and assign them to the schemas.

In this article, you will find some good tips on how to work with the Vertabelo editor features efficiently.

How to Set a Schema in Vertabelo for Tables

Let's walk through the ER diagram of the Company data model.

Starting from the left, the EmployeeDetails table stores the contact details of each employee. That’s why there is a one-to-one link between the Employee and the EmployeeDetails tables. The Employee table stores the department, the position, and the salary amount for each employee. These two tables are in the employees schema.

Next, the Client table stores a unique identification number for each client and a unique identification number of the employee assigned to the client. Each client is assigned to one employee, and each employee is assigned to zero or more clients. Similar to the EmployeeDetails table, the ClientDetails table stores the contact details of each client. These two tables are in the clients schema.

Real-world ER diagrams consist of many tables. Often, it is difficult to locate specific tables in an ER diagram. As an ERD modeler, Vertabelo offers different ways of finding them. Check them all out here!

Now, let’s assign the Employee and EmployeeDetails tables to the employees schema.

To do so, we first select the Employee table and go to the Table Properties pane on the right-side panel. In the Additional properties tab, we set the schema to employees.

Let’s follow the same process for the EmployeeDetails table.

To verify that both the Employee and EmployeeDetails tables are in the employees schema, we generate SQL scripts by selecting each table one by one and clicking the SQL preview button on the right-side panel.


The SQL scripts generated for each of the tables start by creating the employees and clients schemas (here is the missing semicolon!).

We know a table is created in a particular schema when a schema name precedes a table name with a dot in between. The syntax is <schema_name>.<table_name>.

In the generated SQL scripts, we see two CREATE TABLE statements that create the Employee and EmployeeDetails tables in the employees schema. Notice the table names used in the CREATE TABLE statements are employees.Employee and employees.EmployeeDetails. It means our tables belong to the employees schema.
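
As a rough sketch (the column list here is illustrative, based on the description of the tables above), the generated statements take this shape:

CREATE TABLE employees.Employee (
    id int NOT NULL,
    department varchar(100) NOT NULL,
    position varchar(100) NOT NULL,
    salary decimal(10,2) NOT NULL,
    CONSTRAINT Employee_pk PRIMARY KEY (id)
);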

Now, let’s assign the Client and ClientDetails tables to the clients schema. The process is the same as shown above for the Employee table.


And we get the corresponding SQL scripts.

Just like with the employees schema, the Client and the ClientDetails tables are created in the clients schema.

As a data modeling tool, Vertabelo generates scripts not only for creating objects but also for many other operations. Make sure to check out this article to learn more.

How to Set a Default Schema for New Tables in Vertabelo

You may want to have a default schema to which all the newly created tables are assigned. Let’s set it up!

We are going to add more tables to the employees schema. But we don’t want to waste time assigning a schema to each table individually. No problem! We set the employees schema as the default schema for each table created in Vertabelo from now on.

To do so, we go to the Model Properties pane, expand the Default Additional Properties tab, and set the schema value to employees.

Let’s create a table to check if it works.


We’ve created the Department table assigned to the employees schema by default.

The Department table stores the unique identifier, the name, and the location of each department. For the sake of completeness, the Department table is linked to the Employee table. Each employee is assigned to exactly one department, and each department has one or more employees.

Here is the SQL script for the Department table:

[SQL preview: CREATE TABLE employees.Department ...]

As you see, CREATE TABLE creates the Department table in the employees schema, as expected.

Divide and Conquer With Database Schemas

You may wonder why it is beneficial to use multiple schemas in a database. Database schemas let us divide the database and conquer the user access rights to each schema for each type of user.

Let me explain with our Company data model example we have used throughout this article. Imagine a requirement that an employee be able to search clients in the database without being able to access other employees’ data. This is implemented using schemas. In this case, employees have access rights to the clients schema but not to the employees schema.
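
In SQL terms, this maps to schema-level grants. A sketch in PostgreSQL syntax (the role name is illustrative):

CREATE ROLE employee_role;

-- employees may read client data ...
GRANT USAGE ON SCHEMA clients TO employee_role;
GRANT SELECT ON ALL TABLES IN SCHEMA clients TO employee_role;

-- ... but get no access to the employees schema
REVOKE ALL ON SCHEMA employees FROM employee_role;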

Now You Know How to Set Schemas in Vertabelo!

Now, you’re all set. Vertabelo is a great data modeling tool that lets you do all this. Go ahead and create your own database with multiple schemas. Good luck!

Original article source at: https://www.vertabelo.com/

#data #table #schema 

Desmond Gerber

Best 7 Database Schema Design Tools

Database schema design tools are essential in your toolkit if you are a database professional. Your database development journey begins with these tools. Let's read on to find out the top database schema design tools.

This article lists my top database schema design tools with their key features. These tools are important since we start our database development journey with data modeling using a database design tool.

Data modeling is a step-by-step process of arranging data entities in an information system, starting from concepts up to building your physical database. It goes through three phases: conceptual, logical, and physical.

At the conceptual level, you show a very high-level summary of your entire data model. In logical data modeling, you go into more detail by expanding the entities and displaying their attributes, data types, etc. Finally, in physical data modeling, you create a model ready to be converted into a physical database. Learn more about conceptual, logical, and physical data models.

The ER diagram or entity-relationship diagram (ERD) is most commonly used for representing the outcome of the data modeling stages. An ER diagram shows the entities and their relationships with other related information, such as attributes, primary keys that identify each entity uniquely, and foreign keys that create relationships between entities, among others. There are many notations available for drawing ER diagrams, such as crow's foot notation, Barker's notation, and IDEF1X notation.

Let's see the top database design tools for designing our database schema.

1. Vertabelo

Vertabelo is an online database schema design tool. As one of the best database design tools, it offers many fabulous features to software engineers and database designers.

Vertabelo supports many popular database management systems (DBMSs) such as SQL Server, MySQL, PostgreSQL, and Oracle. As an online tool, it lets you design your database schema on any platform from anywhere.

You can design your database schema effortlessly with its many modern automated features. Its stylish, user-friendly, and clean user interface makes your work easier while supporting many ERD notations like crow's foot, IDEF1X, and UML.

Collaboration is an important part of your work when you are on a project. With Vertabelo, you can collaborate and share your work with your teammates at different access levels.

With the automated features, you save a substantial amount of time and money. Two examples of these automated features are SQL generation and reverse engineering.

Using SQL generation, you can create or remove elements in your physical database from a physical data model. If you want to convert your database into a physical model in Vertabelo and make changes in its unique environment, you can take advantage of Vertabelo's reverse engineering feature.

Vertabelo consists of many other key features such as model validation, built-in version control, and version control with Git. It is fast becoming one of the top database schema design tools available in the market.

2. Visual Paradigm


Visual Paradigm is a multi-diagramming tool with an online version that lets users work collaboratively from anywhere on any platform. It supports many databases including SQL Server, MySQL, Oracle, and MariaDB. It helps you draw your ER diagrams from the conceptual to physical level by providing the most popular notations like crow's foot.

This database schema design tool lets you enter sample data and understand the nature of the data in the physical database with its "table record editor" option. Also, its "model transitor" feature automates creating logical and physical data models from their previous levels, thus allowing you to maintain traceability.

Like many top database schema design tools, Visual Paradigm comes with forward engineering. It helps you create SQL DDL files from a physical data model and generate the corresponding physical database.

Visual Paradigm can also compare your physical database with its physical data model to create patches to the database as SQL scripts. Reverse engineering helps you convert your physical database back to its physical model and perform modifications in a user-friendly environment. You can collaborate with your team remotely when you use Visual Paradigm.

3. ER Studio

IDERA, Inc. has developed ER STUDIO as a data architecture and database design tool. This offline ERD tool is available for Microsoft Windows and is one of the best database design tools.


It supports many DBMSs like MySQL, SQL Server, Sybase, Oracle, and DB2. ER Studio works with cloud services like Amazon RDS & S3, Azure SQL Database, Blob storage, Google Database Service, and Oracle MySQL Cloud Service, among others.

ER Studio provides a great environment and all required notations for modeling your data at all three levels: conceptual, logical, and physical. It checks for normalization and compliance with the target database. The tool lets you generate a physical data model from a logical model with its automated feature.

ER Studio has features for forward and reverse engineering like other top database design tools. Using the forward engineering feature, you can create a DDL script from your physical data model and generate your physical database. The reverse engineering feature helps you convert your physical database into a physical data model in ER Studio.

This tool's advanced bi-directional comparison feature, as well as its ability to merge changes between the model and the database, makes it easy for you to perform revisions.

4. Navicat

NAVICAT supports major databases such as SQL Server, MySQL, Oracle, and MariaDB, among others. It is an offline ERD tool available for Windows, Linux, and Mac OS. You can model your data from the conceptual to physical levels, with any of the three standard notations.


Navicat's reverse engineering feature lets you import an existing database and edit it visually. Also, you can create SQL scripts for each component of your physical data model with its Export SQL feature.

5. ERDPlus


ERDPLUS is an online ER diagram tool that lets you model your data from anywhere and on any platform. You have all the necessary notations like crow's foot notation to draw your conceptual, logical, and physical data models with ERDPlus.

It also generates SQL DDL files for creating your physical database from a physical data model. It supports many DBMSs like MySQL, SQL Server, Oracle, and IBM DB2, among others.

6. Astah Professional


ASTAH PROFESSIONAL is a multi-diagramming tool that supports drawing various diagrams, including ER diagrams with the required notations and features. Astah Professional comes in Windows, macOS, Ubuntu, and CentOS versions as an offline ERD tool. You can download and install the version compatible with your operating system.

Astah Professional provides a great user-friendly UI for drawing your ER diagram with IDEF1X and crow's foot notations. Also, you can create an ER diagram of an existing database with its reverse engineering feature. Its SQL Export feature exports entities of an ER diagram to SQL (SQL-92).

However, as an offline tool, it does not support collaborative work or model sharing.

7. SqlDBM


SQLDBM supports many databases like MySQL, SQL Server, and Amazon Redshift as an online database design tool. You can use it to create your conceptual to physical data models with popular notations like crow's foot and IDEF1X.

This database design tool also helps you collaborate on different platforms. You just need to provide the email addresses of your team members and check or uncheck the access level options to share your data models with them.

SqlDBM comes with a forward engineering feature for creating DDL scripts from physical models. Like other top database schema design tools, it has a reverse engineering feature for creating a data model from an existing database. Version control and revision comparison are among other great features available in this tool.

Which Is the Best Database Design Tool for You?

Data modeling comes first in your database project. Therefore, as a database professional, you need to have the best database design tools in your toolkit. Your database design tool helps you draw ER diagrams and model your data in the data modeling process. You can graphically show the entities and their relationships using ER diagrams.

There are hundreds of data modeling tools available in the market. You need to have a list of the best tools so that you choose the perfect one for your database project. However, the same modeling tool does not suit everyone.

Compare your requirements with the tool's capabilities when choosing a data modeling tool for your project. Learn more about SELECTING THE BEST DATABASE DESIGN TOOL for creating your data model.

Original article source at: https://www.vertabelo.com/

#database #schema #design 

Best 7 Database Schema Design Tools

How to Manage Database Schema with Liquibase

What is Liquibase?

Liquibase is an open-source database schema change management tool that makes it simple for you to handle database change revisions.

How Does Liquibase Work?

Changes are defined in a platform-neutral language, regardless of your database platform. In essence, you maintain a running list of modifications, and Liquibase uses its execution engine to apply those modifications for you. Because it runs on Java, it requires the appropriate JDBC driver to update the database, and you must also have a recent JRE installed. You can run Liquibase manually or from any deployment pipeline because it operates from a shell or command line. To maintain consistency and prevent corruption due to wrongly modified changelogs, Liquibase tracks changes using its own tables in your schema. To prevent you from mistakenly executing two updates at once, it “locks” your database while operating.
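
For example, once the JDBC driver is available, applying a changelog from the command line typically looks like the invocation below. The connection details are placeholders, and the exact flag spelling varies between Liquibase versions, so check liquibase --help for yours:

liquibase --changelog-file=dbchangelog.xml --url="jdbc:postgresql://localhost:5432/mydb" --username=dbuser --password=dbpass update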

Where can we use Liquibase?

  • Deploying database schema changes through continuous delivery (CD)
  • Version-controlling schema changes
  • Deploying application and database modifications together so they stay consistent

Liquibase basics – changeLog files

The changeLog files are the foundation of Liquibase usage. A changelog file is an XML document that records every change that needs to be made to update the database. When we run the Liquibase migrator, it parses the <databaseChangeLog> tag. We can add <changeSet> tags to the <databaseChangeLog> tag to organize database changes. The ‘id’ and ‘author’ attributes, together with the classpath name of the changelog file, identify each changeSet uniquely.

Example:

<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">
</databaseChangeLog>

Updating the database

Now that we have a simple changeLog file, we will examine how to edit the database to add, update, and remove tables, columns, and data. Following the principle of “one change per changeSet,” it is advisable to create a new changeSet for each distinct insert, update, and delete action. As a result, Liquibase executes a version-based database migration when updating an existing database.
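
For instance, a single data change such as inserting a row gets its own changeSet, using Liquibase's <insert> tag (the table, column, and value below are purely illustrative):

<changeSet author="asdf" id="1235">
    <insert tableName="newTable">
        <column name="newColumn" value="42"/>
    </insert>
</changeSet>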

Manipulating database schema and tables

For the most part, you require some fundamental Data Definition Language (DDL) functions to define data structures in databases.

1) Create schema: Since Liquibase is designed to manage objects within the application’s schema, there is no “create schema” tag. However, we can incorporate the creation of a schema into our migration by using a raw SQL statement inside an <sql> tag.

<changeSet author="asdf" id="1234">
    <sql dbms="h2" endDelimiter=";">
        CREATE SCHEMA schema
    </sql>
</changeSet>

2) Create Table: The following code is added inside the <databaseChangeLog> tag when a table is created:

<changeSet author="asdf" id="1234">
    <createTable tableName="newTable">
        <column type="INT" name="newColumn"/>
    </createTable>
</changeSet>

3) Drop Table: We must specify the table’s name and its schema when deleting a table. When cascadeConstraints is set to true, the drop also removes all constraints that reference the table's primary and unique keys and then deletes the corresponding records in the child table or tables. If there is a referential integrity constraint and we don’t set cascadeConstraints to true, the database will return an error and won’t drop the table.

<changeSet author="asdf" id="1234">
    <dropTable tableName="newTable" schemaName="public" cascadeConstraints="true"/>
</changeSet>

4) Change existing data structure with alter table: The table can be changed by adding, renaming, and dropping columns, as well as by changing a column's data type. To change the table’s name, use the renameTable tag.

4) a. Rename Table: We must specify the new table name, the old table name, and the schema name inside the tag “renameTable”.

<changeSet author="asdf" id="1234">
    <renameTable newTableName="newName" oldTableName="table" schemaName="schema"/>
</changeSet>

4) b. Rename Column: A column’s data type, new column name, old column name, schema name, and table name must all be provided to rename a column.

<changeSet author="asdf" id="1234">
    <renameColumn columnDataType="varchar(255)" newColumnName="newColumn" oldColumnName="column" schemaName="schema" tableName="table"/>
</changeSet>

4) c. Add Column: The schema name, table name, and the name and type of the new column must be included in the inner tag <column> of the tag <addColumn>.

<changeSet author="asdf" id="1234">
    <addColumn schemaName="schema" tableName="table">
        <column name="newColumn" type="varchar(255)"/>
    </addColumn>
</changeSet>

4) d. Drop column: Column name, table name, and schema name must all be specified to delete a column.

<changeSet author="asdf" id="1234">
    <dropColumn columnName="column" tableName="table" schemaName="schema"/>
</changeSet>

4) e. Modify the data type: We need the column name, new data type, schema name, and table name when changing a data type.

<changeSet author="asdf" id="1234">
    <modifyDataType columnName="column" newDataType="int" schemaName="schema" tableName="table"/>
</changeSet>

Liquibase with Maven

To configure and run Liquibase with Maven, we need to add the following configuration to our pom.xml file:

<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
    <version>x.x.x</version>
</dependency>
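
The dependency alone only puts the library on the classpath. To run migrations through Maven goals, the liquibase-maven-plugin is normally configured as well; here is a minimal sketch (the file paths are illustrative, and the available configuration options depend on your plugin version):

<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>x.x.x</version>
    <configuration>
        <changeLogFile>src/main/resources/dbchangelog.xml</changeLogFile>
        <propertyFile>src/main/resources/liquibase.properties</propertyFile>
    </configuration>
</plugin>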

Maven also enables us to automatically generate a changelog file from:

  • an already existing database: mvn liquibase:generateChangeLog
  • the difference between two databases: mvn liquibase:diff

Conclusion:

In conclusion, in this blog we have learned how we can manage database schemas with the help of Liquibase. I will be covering more topics on Liquibase in my future blogs, so stay connected. Happy learning 🙂

For more, you can refer to the Liquibase documentation: https://docs.liquibase.com/home.html

For a more technical blog, you can refer to the Knoldus blog: https://blog.knoldus.com/

Original article source at: https://blog.knoldus.com/

#database #schema #liquibase 

How to Manage Database Schema with Liquibase
Desmond  Gerber

Desmond Gerber

1669807080

Popular Practices for Database Schema Name Conventions

Assigning names to objects in your database is not a trivial task. This list of best practices for naming conventions in data modeling will help you do it the right way.

The task of designing a database schema is similar to that of creating the plans of a building. Both are carried out by making drawings in an abstract, theoretical framework. But they must be done with the understanding that these drawings will later become robust structures that support large constructions – in one case, constructions made of solid materials; in the other, constructions made of data.

(If you’re not familiar with the basics of data modeling and database design, read WHAT A DATABASE SCHEMA IS to clear up any doubts before you continue.)

Like the plans of a building, the design of a database schema cannot readily be changed when construction is underway. In our case, this is when the database is already up and running and its tables are populated with data.

Even the names of objects cannot be changed without the risk of breaking the applications accessing the database. This highlights the importance of establishing very clear database schema naming conventions as the first step in schema design. Then you can go on following the steps on HOW TO DRAW A DATABASE SCHEMA FROM SCRATCH.

Let’s look at some best practices for database naming conventions that could help you avoid name-related problems during the database life cycle.

Foundational Database Naming Conventions

Before you start any actual modeling – or come up with any naming conventions, for that matter – you should lay the foundations for your database naming convention.

Document the Naming Convention in Your ERD

The first best practice for naming conventions in data modeling is to write down all the criteria defining the adopted naming convention. So that it is always visible and at hand, it should be included as a text annotation together with the entity-relationship diagram (ERD). If you use a database design tool like VERTABELO, your NAMING CONVENTIONS IN DATABASE MODELING can be documented in virtual sticky notes that will always be attached to your ERDs. This and other DATABASE MODELING TIPS can save you time and effort when doing data modeling work.

Use Meaningful Names

I’ve had to work with many databases where the names of objects were intentionally obfuscated or reduced to short, fixed-length strings that were impossible to understand without resorting to a data dictionary. In some cases, this was because of a not-very-logical confidentiality requirement; in other cases, it was because whoever designed the schema thought that encoding the names might be useful for some purpose.

In my experience, adopting a database schema naming convention that forces developers and designers to encode object names, or to use names with no meaning at all, is a complication without any benefit.

So, the first best naming convention practice is to use meaningful names. Avoid abstract, cryptic, or coded combinations of words. To see other bad examples of naming conventions in data modeling, check out THE 11 WORST DATABASE NAMING CONVENTIONS I’VE SEEN IN REAL LIFE.

Also, choosing one of the TOP 7 DATABASE SCHEMA DESIGN TOOLS could be considered one of the best practices for database naming conventions. There is no point in using common drawing tools when there are intelligent tools that pave the way to create flawless diagrams that turn into robust databases.

Specify the Language

In software systems that are used and developed in a single country, there’s probably no need to specify the language. But if a system involves designers or developers of different nationalities (which is becoming increasingly common), language choice is not trivial. In such situations, the language used to name schema objects must be clearly specified in the naming convention and must be respected. This way, there will be no chance of finding names in different languages.


Database schema naming conventions must be explicit in ER diagrams.

Best Practices for Naming Tables and Columns

Database tables represent real-world entities, so it is appropriate to use nouns when choosing their names. When considering database table naming conventions, you must make a decision that seems trivial but is actually crucial: use plural or singular nouns for the names. (In the case of database column naming conventions, this problem does not arise, as column names are always singular.)

Plural or Singular Table Names

Some say that the singular should be used because tables represent a single entity, not a collection of things. For example: Client, Item, Order, Invoice, etc. Others prefer to use the plural, considering that the table is a container for a collection of things. So, with the same criteria as they would label a box for storing toys, they give the tables plural names: Customers, Items, Orders, Invoices.

Personally, I’m on Team Plural. I find it easier to imagine the table as a labeled box containing many instances of what the label says than to imagine it as an individual entity. But it's just a matter of preference. What is important – as with all aspects of database schema naming conventions – is to make a choice before you start designing and stick with it throughout the life cycle of the database.

Alphabet and Character Sets

While it is possible to use spaces and any printable character to name tables and other database objects, this practice is strongly discouraged. The use of spaces and special characters requires that object names be enclosed in delimiters so that they do not invalidate SQL statements.

For example, if you have a table in your schema called Used Cars (with a space between Used and Cars), you’d have to write the table name between delimiters in any query that uses it. For example:

SELECT * FROM `Used Cars`

or

SELECT * FROM [Used Cars]

This is not only a nuisance for anyone writing SQL code, it is also an invitation to make mistakes. An easy mistake (but one that’s difficult to detect) would be unintentionally typing two spaces instead of one.

The best practice is to avoid spaces and special characters when naming database objects, with the exception of the underscore. The latter can be used (as explained below) to separate words in compound names.

Case Sensitivity

SQL keywords are case-insensitive, and in most database engines unquoted object names are too, so case rarely matters when writing queries or issuing SQL commands to your database. However, a good practice for database schema naming is to clearly define casing criteria for object names. The criteria adopted will affect the readability of the schema, its neatness, and the interpretation of its elements.

For example, you can use names with all capital letters for tables and upper/lower case for columns. This will make it easier to identify objects in database diagrams as well as in SQL statements and other database tools. If you are ever faced with the task of reviewing an execution log with thousands of SQL commands, you will be thankful that you have adopted a consistent casing convention.

Compound Names

The ideal name for any database object should strike the optimal balance between synthesis and self-explanation. Ideally, each name should contain an explanation of what it represents in the real world and also be able to be synthesized in one word. This is easy to achieve in some cases, particularly when you create a conceptual schema with tables that contain information on tangible elements: Users, Employees, Roles, Payrolls, etc.

But as you get a bit more detailed in your diagrams, you will come across elements that cannot be self-explanatory with a single word. You will then have to define names for objects that represent, for example, roles per user, payroll items, ID numbers, joining dates, and many others.

Another good schema naming practice is to adopt clear criteria for the use of compound names. Otherwise, each designer or programmer will use their own criteria – and your schema will quickly become full of random names.

There are two popular naming options for using compound names. You can either use camel case (e.g. PayrollItems or DateOfBirth) or you can separate the words with an underscore, (e.g. PAYROLL_ITEMS or DATE_OF_BIRTH).

Using underscore as a word separator in compound names is the way to go for capitalized names; it ensures the names can be easily read at a glance.
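
As a quick illustration (the table and columns here are invented), the underscore convention looks like this in a DDL statement:

CREATE TABLE PAYROLL_ITEMS (
    PAYROLL_ITEM_ID INT PRIMARY KEY,
    ITEM_NAME       VARCHAR(100),
    UNIT_AMOUNT     DECIMAL(10, 2)
);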

Abbreviations

Using abbreviations for object names is inadvisable, but so is using names that are too long. Ideally, a middle ground should be found. For example, I recommend only abbreviating object names if the name exceeds 20 characters.

If you make heavy use of abbreviations because many objects in your schema have long names, the list of abbreviations to be used should be explicit for all users of your schema. In addition, this list should be part of the naming convention of your schema. You can add a sticky note to your diagrams where you explicitly detail all abbreviations used along with their extended meaning.


Abbreviations and prefixes should be avoided; they only cause confusion to everyone that needs to work with the database.

Prefixes and Suffixes

Some people use prefixes or suffixes to denote an element’s type so that it can be easily identified without referencing the schema description or the database metadata. For example, they add the prefix T_ to all tables and V_ to all views. Or they add a suffix that denotes the data type of each column.

Using suffixes or prefixes may result in two objects of different types with similar names. For example, you could easily have a table named T_CUSTOMERS and a view named V_CUSTOMERS. Whoever has to write a query may not know which of the two should be used and what the difference is between them.

Remember that a view name should indicate its purpose. It would be more helpful if the view name were, for example, NEW_CUSTOMERS, indicating that it is a subset of the CUSTOMERS table.

Using a suffix indicating the data type of each column does not add useful information. The exception is when you need to use a counter-intuitive data type for a column. For example, if you need a column to store a date in integer format, then you could use the int suffix and name the column something like Date_int.

Another common (but discouraged!) practice is to prefix each column name with an abbreviation of the table name. This adds unnecessary redundancy and makes queries difficult to read and write.

Prefixes for Naming Dependent Objects

A prefix denoting the type of object is considered good practice when naming table- or column-dependent objects (e.g. indexes, triggers, or constraints). Such objects are not usually represented in database diagrams, instead being mentioned in schema metadata queries, in logs or execution plans, or in error messages thrown by the database engine. Some commonly used prefixes are:

  • PK for primary key constraints.
  • FK for foreign key constraints.
  • UK for unique key constraints.
  • IX for indexes.
  • TG for triggers.

The suggested way to use these prefixes is to concatenate them with the table name and an additional element denoting the function the constraint performs. In a foreign key constraint, we might indicate the table at the other end of the constraint; in an index, we might indicate the column names that compose this index. A foreign key constraint between the Customers table and the Orders table could be named FK_Customers_Orders.
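
As a sketch (the column names are invented for illustration), that convention translates into SQL like this:

ALTER TABLE Orders
    ADD CONSTRAINT FK_Customers_Orders
    FOREIGN KEY (CustomerId) REFERENCES Customers (Id);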

This way of naming dependent objects makes it easier to relate the object to the table(s) on which it depends. This is important when it is mentioned in an execution log or error message.

Since you don’t usually have to write the names of dependent objects (like foreign keys or indexes) in SQL statements, it’s not really important if they are long or do not meet the same naming criteria as objects like tables, fields, or views.

Prefixes for Grouping Objects by Functional Areas

Another commonly accepted use for prefixes is to quickly distinguish sets of objects that belong to a functional or logical area of the schema. For example, in a data warehouse schema, prefixes let us distinguish dimension tables from fact tables. They can also distinguish tables with “cold data” from tables with “hot data”, if this kind of distinction is a top priority in your schema.

In a schema that is used by different applications, prefixes can help to easily identify the tables that are used by each application. For example, you can establish that tables starting with INV belong to an invoicing app and those starting with PAY belong to a payroll app.

As I recommended above for abbreviations, these prefixes need to be made explicit in the database diagram, either through sticky notes or some other form of documentation. This form of grouping by prefix will make it easier to manage object permissions according to the application that uses them.

Naming Conventions for Views

It is quite common to create views in a database schema to facilitate the writing of queries by solving common filtering criteria. In a schema that stores membership information, for example, you could create a view of approved memberships. This would save database developers the task of finding out what conditions a membership must meet to be approved.

Naming Views

For the above reason, it is common for view names to consist of the table name plus a qualifier designating the purpose of that view. Since views named in this way often have compound names, you should use the criteria you’ve adopted for compound names. In the example above, the view might be called ApprovedMemberships or APPROVED_MEMBERSHIPS, depending on the criteria chosen for compound names. In turn, you could create a view of memberships pending approval called PendingMemberships or PENDING_MEMBERSHIPS.
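
A hypothetical definition of such a view might look like this (the STATUS column and its value are invented for illustration):

CREATE VIEW PENDING_MEMBERSHIPS AS
    SELECT *
    FROM MEMBERSHIPS
    WHERE STATUS = 'PENDING';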

Since views are used as if they were tables, it is good practice that their names follow the same naming convention as table names – e.g. if you use all uppercase for table names, you should also use all uppercase for view names.


It is a good practice to name views after their “mother” table (when there is one), adding a qualifier that designates the purpose of that view.

Making Views Visible

It’s important to make views known. Anyone who uses the database for querying or design work should know that there are views that can simplify their work.

One way to force users to use views is to restrict access to tables. This ensures that users use the views and not the tables and that there is no confusion about how to filter the data to get subsets of the tables.

In the case of the membership schema mentioned above, you can restrict access to the Memberships table and only provide access to the ApprovedMemberships and PendingMemberships views. This ensures that no one has to define what criteria to use to determine whether a membership is approved or pending.

It is also good practice to include the views in the database diagram and explain their usefulness with sticky notes. Any user looking at the diagram will also be aware of the existence of the views.

Compliance and Practicality for Database Naming Conventions

Naming convention criteria cannot be enforced by the database engine. This means that compliance must be overseen by a designer who controls the work of anyone who has permissions to create or modify the structure of a database. If no one is charged with overseeing naming convention adherence, it is of no use. While intelligent database design tools such as Vertabelo help ensure that certain naming criteria are met, full monitoring of the criteria requires a trained human eye.

On the other hand, the best way to enforce the criteria of a naming convention is for those criteria to be useful and practical. If they are not, users will comply with them reluctantly and drop them as soon as they can. If you have been given the task of defining a database schema naming convention, it is important that you create it with the purpose of benefiting the users. And make sure all users are clear about those benefits so they’ll comply with the convention without protest.

Original article source at: https://www.vertabelo.com/

#database #schema #name 

Popular Practices for Database Schema Name Conventions
Dexter  Goodwin

Dexter Goodwin

1667495640

Superstruct: A Simple & Composable Way to Validate Data in JavaScript

Superstruct

A simple and composable way to validate data in JavaScript (and TypeScript).

Superstruct makes it easy to define interfaces and then validate JavaScript data against them. Its type annotation API was inspired by Typescript, Flow, Go, and GraphQL, giving it a familiar and easy-to-understand API.

But Superstruct is designed for validating data at runtime, so it throws (or returns) detailed runtime errors for you or your end users. This is especially useful in situations like accepting arbitrary input in a REST or GraphQL API. But it can even be used to validate internal data structures at runtime when needed.

Usage

Superstruct allows you to define the shape of data you want to validate:

import { assert, object, number, string, array } from 'superstruct'

const Article = object({
  id: number(),
  title: string(),
  tags: array(string()),
  author: object({
    id: number(),
  }),
})

const data = {
  id: 34,
  title: 'Hello World',
  tags: ['news', 'features'],
  author: {
    id: 1,
  },
}

assert(data, Article)
// This will throw an error when the data is invalid.
// If you'd rather not throw, you can use `is()` or `validate()`.

Superstruct ships with validators for all the common JavaScript data types, and you can define custom ones too:

import { is, define, object, string } from 'superstruct'
import isUuid from 'is-uuid'
import isEmail from 'is-email'

const Email = define('Email', isEmail)
const Uuid = define('Uuid', isUuid.v4)

const User = object({
  id: Uuid,
  email: Email,
  name: string(),
})

const data = {
  id: 'c8d63140-a1f7-45e0-bfc6-df72973fea86',
  email: 'jane@example.com',
  name: 'Jane',
}

if (is(data, User)) {
  // Your data is guaranteed to be valid in this block.
}

Superstruct can also handle coercion of your data before validating it, for example to mix in default values:

import { create, object, number, string, defaulted } from 'superstruct'

let i = 0

const User = object({
  id: defaulted(number(), () => i++),
  name: string(),
})

const data = {
  name: 'Jane',
}

// You can apply the defaults to your data while validating.
const user = create(data, User)
// {
//   id: 0,
//   name: 'Jane',
// }

And if you use TypeScript, Superstruct automatically ensures that your data has proper typings whenever you validate it:

import { is, object, number, string } from 'superstruct'

const User = object({
  id: number(),
  name: string()
})

const data: unknown = { ... }

if (is(data, User)) {
  // TypeScript knows the shape of `data` here, so it is safe to access
  // properties like `data.id` and `data.name`.
}

Superstruct supports more complex use cases too like defining arrays or nested objects, composing structs inside each other, returning errors instead of throwing them, and more! For more information read the full Documentation.
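
For example, validate() returns the error rather than throwing it, which is convenient when you want to branch on the result. A minimal sketch reusing a simple struct:

import { validate, object, number, string } from 'superstruct'

const Article = object({
  id: number(),
  title: string(),
})

const [error, article] = validate({ id: 34, title: 'Hello World' }, Article)

if (error) {
  // Handle the StructError without a try/catch block.
  console.error(error.message)
} else {
  // `article` is the validated value.
  console.log(article.title)
}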

Why?

There are lots of existing validation libraries—joi, express-validator, validator.js, yup, ajv, is-my-json-valid... But they exhibit many issues that lead to your codebase becoming hard to maintain...

They don't expose detailed errors. Many validators simply return string-only errors or booleans without any details as to why, making it difficult to customize the errors to be helpful for end-users.

They make custom types hard. Many validators ship with built-in types like emails, URLs, UUIDs, etc. with no way to know what they check for, and complicated APIs for defining new types.

They don't encourage single sources of truth. Many existing APIs encourage re-defining custom data types over and over, with the source of truth being spread out across your entire code base.

They don't throw errors. Many don't actually throw the errors, forcing you to wrap everywhere. Although helpful in the days of callbacks, not using throw in modern JavaScript makes code much more complex.

They're tightly coupled to other concerns. Many validators are tightly coupled to Express or other frameworks, which results in one-off, confusing code that isn't reusable across your code base.

They use JSON Schema. Don't get me wrong, JSON Schema can be useful. But it's kind of like HATEOAS—it's usually way more complexity than you need and you aren't using any of its benefits. (Sorry, I said it.)

Of course, not every validation library suffers from all of these issues, but most of them exhibit at least one. If you've run into this problem before, you might like Superstruct.

Which brings me to how Superstruct solves these issues…

Principles

Customizable types. Superstruct's power is in making it easy to define an entire set of custom data types that are specific to your application, and defined in a single place, so you have full control over your requirements.

Unopinionated defaults. Superstruct ships with native JavaScript types, and everything else is customizable, so you never have to fight to override decisions made by "core" that differ from your application's needs.

Composable interfaces. Superstruct interfaces are composable, so you can break down commonly-repeated pieces of data into components, and compose them to build up the more complex objects.

Useful errors. The errors that Superstruct throws contain all the information you need to convert them into your own application-specific errors easily, which means more helpful errors for your end users!

Familiar API. The Superstruct API was heavily inspired by Typescript, Flow, Go, and GraphQL. If you're familiar with any of those, then its schema definition API will feel very natural to use, so you can get started quickly.

Demo

Try out the live demo on JSFiddle to get an idea for how the API works, or to quickly verify your use case:

Demo screenshot.

Examples

Superstruct's API is very flexible, allowing it to be used for a variety of use cases on your servers and in the browser. Here are a few examples of common patterns...

Documentation

Read the getting started guide to familiarize yourself with how Superstruct works. After that, check out the full API reference for more detailed information about structs, types and errors...

Docs screenshot.


 

Download Details:

Author: ianstormtaylor
Source Code: https://github.com/ianstormtaylor/superstruct 
License: MIT license

#typescript #javascript #schema #validator #types 

Superstruct: A Simple & Composable Way to Validate Data in JavaScript

Better-ajv-errors: JSON Schema Validation for Human

Better-ajv-errors
 

JSON Schema validation for Human 👨‍🎤

The main goal of this library is to provide relevant, human-friendly error messages.

Installation

$ npm i better-ajv-errors
$ # Or
$ yarn add better-ajv-errors

Also make sure that you have installed the ajv package, which is used to validate data against JSON Schemas.

Usage

First, you need to validate your payload with ajv. If it's invalid, you can pass the validate.errors object into better-ajv-errors.

import Ajv from 'ajv';
import betterAjvErrors from 'better-ajv-errors';
// const Ajv = require('ajv');
// const betterAjvErrors = require('better-ajv-errors').default;
// Or
// const { default: betterAjvErrors } = require('better-ajv-errors');

// You need to pass `{ jsonPointers: true }` for older versions of ajv
const ajv = new Ajv();

// Load schema and data
const schema = ...;
const data = ...;

const validate = ajv.compile(schema);
const valid = validate(data);

if (!valid) {
  const output = betterAjvErrors(schema, data, validate.errors);
  console.log(output);
}

API

betterAjvErrors(schema, data, errors, [options])

Returns a formatted validation error to print in the console. See options.format for further details.

schema

Type: Object

The JSON Schema you used for validation with ajv

data

Type: Object

The JSON payload you validate against using ajv

errors

Type: Array

Array of ajv validation errors

options

Type: Object

format

Type: string
Default: cli
Values: cli | js

Use the default cli output format if you want to print beautiful validation errors in your console.

Or, use js if you are planning to use this with some API. Your output will look like the following:

[
  {
    start: { line: 6, column: 15, offset: 70 },
    end: { line: 6, column: 26, offset: 81 },
    error:
      '/content/0/type should be equal to one of the allowed values: panel, paragraph, ...',
    suggestion: 'Did you mean paragraph?',
  },
];

indent

Type: number | null
Default: null

Use this option if you have an unindented JSON payload and you want the error output to be indented.

This option has no effect when using the json option.

json

Type: string | null
Default: null

Raw JSON payload used when formatting the codeframe. It gives accurate line and column listings.
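
A short sketch combining these options (rawJson is assumed to hold the original, unparsed JSON string that data was parsed from):

const output = betterAjvErrors(schema, data, validate.errors, {
  format: 'js',
  json: rawJson,
});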

Download Details:

Author: Atlassian
Source Code: https://github.com/atlassian/better-ajv-errors 
License: View license

#javascript #json #schema 

Better-ajv-errors: JSON Schema Validation for Human
Elian  Harber

Elian Harber

1665009240

Schema: Package Gorilla/schema Fills A Struct with form Values

Schema

Package gorilla/schema converts structs to and from form values.

Example

Here's a quick example: we parse POST form values and then decode them into a struct:

// Set a Decoder instance as a package global, because it caches
// meta-data about structs, and an instance can be shared safely.
var decoder = schema.NewDecoder()

type Person struct {
    Name  string
    Phone string
}

func MyHandler(w http.ResponseWriter, r *http.Request) {
    err := r.ParseForm()
    if err != nil {
        // Handle error
    }

    var person Person

    // r.PostForm is a map of our POST form values
    err = decoder.Decode(&person, r.PostForm)
    if err != nil {
        // Handle error
    }

    // Do something with person.Name or person.Phone
}

Conversely, contents of a struct can be encoded into form values. Here's a variant of the previous example using the Encoder:

var encoder = schema.NewEncoder()

func MyHttpRequest() {
    person := Person{"Jane Doe", "555-5555"}
    form := url.Values{}

    err := encoder.Encode(person, form)

    if err != nil {
        // Handle error
    }

    // Use form values, for example, with an http client
    client := new(http.Client)
    res, err := client.PostForm("http://my-api.test", form)
}

To define custom names for fields, use a struct tag "schema". To not populate certain fields, use a dash for the name and it will be ignored:

type Person struct {
    Name  string `schema:"name,required"`  // custom name, must be supplied
    Phone string `schema:"phone"`          // custom name
    Admin bool   `schema:"-"`              // this field is never set
}

The supported field types in the struct are:

  • bool
  • float variants (float32, float64)
  • int variants (int, int8, int16, int32, int64)
  • string
  • uint variants (uint, uint8, uint16, uint32, uint64)
  • struct
  • a pointer to one of the above types
  • a slice or a pointer to a slice of one of the above types

Unsupported types are simply ignored; however, custom types can be registered to be converted.
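
For example, a converter for time.Time values can be registered on the decoder. A minimal sketch, assuming the incoming form values use a YYYY-MM-DD date layout and reusing the package-global decoder from above:

import (
    "reflect"
    "time"
)

func init() {
    // Teach the decoder to convert "2006-01-02"-style form values
    // into time.Time fields.
    decoder.RegisterConverter(time.Time{}, func(value string) reflect.Value {
        t, err := time.Parse("2006-01-02", value)
        if err != nil {
            // Returning an invalid reflect.Value reports a conversion error.
            return reflect.Value{}
        }
        return reflect.ValueOf(t)
    })
}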

More examples are available on the Gorilla website: https://www.gorillatoolkit.org/pkg/schema

Download Details:

Author: Gorilla
Source Code: https://github.com/gorilla/schema 
License: BSD-3-Clause license

#go #golang #http #schema #forms 

Schema: Package Gorilla/schema Fills A Struct with form Values
Nigel  Uys

Nigel Uys

1663208940

10 Best Libraries for Database Schema Migration in Go

In today's post we will learn about 10 Best Libraries for Database Schema Migration in Go. 

Managing database schema changes is an essential capability that every data-backed application needs. Different frameworks offer different tools to make this flow easy, but in this article we will take a look at database migration tools in Go, check how they work, and see how to integrate them into a Go project.

Table of contents:

  • Atlas - A Database Toolkit. A CLI designed to help companies better work with their data.
  • Avro - Discover SQL schemas and convert them to AVRO schemas. Query SQL records into AVRO bytes.
  • Bytebase - Safe database schema change and version control for DevOps teams.
  • Darwin - Database schema evolution library for Go.
  • Go-fixtures - Django style fixtures for Golang's excellent built-in database/sql library.
  • Go-pg-migrate - CLI-friendly package for go-pg migrations management.
  • Go-pg-migrations - A Go package to help write migrations with go-pg/pg.
  • Goavro - A Go package that encodes and decodes Avro data.
  • Godfish - Database migration manager, works with native query language. Support for cassandra, mysql, postgres, sqlite3.
  • Goose - Database migration tool. You can manage your database's evolution by creating incremental SQL or Go scripts.

1 - Atlas: A Database Toolkit. A CLI designed to help companies better work with their data.

Atlas CLI is an open source tool that helps developers manage their database schemas by applying modern DevOps principles. Contrary to existing tools, Atlas intelligently plans schema migrations for you. Atlas users can use the Atlas DDL (data definition language) to describe their desired database schema and use the command-line tool to plan and apply the migrations to their systems.

Quick Installation

On macOS:

brew install ariga/tap/atlas

Click here to read instructions for other platforms.

Getting Started

Get started with Atlas by following the Getting Started docs. This tutorial teaches you how to inspect a database, generate a migration plan and apply the migration to your database.

Features

  • Inspecting a database: easily inspect your database schema by providing a database URL.
atlas schema inspect -u "mysql://root:pass@localhost:3306/example" > atlas.hcl
  • Applying a migration: generate a migration plan to apply on the database by providing an HCL file with the desired Atlas schema.
atlas schema apply -u "mysql://root:pass@localhost:3306/example" -f atlas.hcl
  • Declarative Migrations vs. Versioned Migrations: Atlas offers two workflows. Declarative migrations allow the user to provide a desired state and Atlas gets the schema there instantly (simply using inspect and apply commands). Alternatively, versioned migrations are explicitly defined and assigned a version. Atlas can then bring a schema to the desired version by following the migrations between the current version and the specified one.

View on Github

2 - Avro: Discover SQL schemas and convert them to AVRO schemas. Query SQL records into AVRO bytes.

The purpose of this package is to facilitate the use of AVRO with Go's strong typing.

What is AVRO

Apache AVRO is a data serialization system which relies on JSON schemas.

It provides:

  • Rich data structures
  • A compact, fast, binary data format
  • A container file, to store persistent data
  • Remote procedure call (RPC)

AVRO binary encoded data comes together with its schema and therefore is fully self-describing.

When AVRO data is read, the schema used when writing it is always present. This permits each datum to be written with no per-value overheads, making serialization both fast and small.

When AVRO data is stored in a file, its schema is stored with it, so that files may be processed later by any program. If the program reading the data expects a different schema this can be easily resolved, since both schemas are present.

Examples

Schema Marshal/Unmarshal

package main

import (
  "encoding/json"
  "fmt"

  "github.com/khezen/avro"
)

func main() {
  schemaBytes := []byte(
    `{
      "type": "record",
      "namespace": "test",
      "name": "LongList",
      "aliases": [
        "LinkedLongs"
      ],
      "doc": "linked list of 64 bits integers",
      "fields": [
        {
          "name": "value",
          "type": "long"
        },
        {
          "name": "next",
          "type": [
            "null",
            "LongList"
          ]
        }
      ]
    }`,
  )

  // Unmarshal JSON  bytes to Schema interface
  var anySchema avro.AnySchema
  err := json.Unmarshal(schemaBytes, &anySchema)
  if err != nil {
    panic(err)
  }
  schema := anySchema.Schema()  
  // Marshal Schema interface to JSON bytes
  schemaBytes, err = json.Marshal(schema)
  if err != nil {
    panic(err)
  }
  fmt.Println(string(schemaBytes))
}

View on Github

3 - Bytebase: Safe database schema change and version control for DevOps teams.

Bytebase is a web-based, zero-config, dependency-free database schema change and version control management tool for the DevOps team.

For Developer and DevOps Engineer - Holistic view of database schema changes

Whether working as an IC in a team or managing your own side project, developers using Bytebase will have a holistic view of all the related database info, the ongoing database schema change tasks, and the past database migration history.

For DBA - 10x operational efficiency

A collaborative web-console that allows DBAs to manage database tasks and handle developer tickets much more efficiently than traditional tools.

For Tech Lead - Improve team velocity and reduce risk

Teams using Bytebase will naturally adopt industry best practice for managing database schema changes. Tech leads will see an improved development velocity and reduced outages caused by database changes.

Prerequisites

  • Go (1.19 or later)
  • pnpm
  • Air (must use 1.30.0). This is for backend live reload.

Steps

Install Air v1.30.0. Use 1.30.0 because the newer version changes the behavior and won't shut down the previous service properly.

go install github.com/cosmtrek/air@v1.30.0

Pull source.

git clone https://github.com/bytebase/bytebase

Start backend using air (with live reload).

air -c scripts/.air.toml

Change the open file limit if you encounter "error: too many open files".

ulimit -n 10240

If you need additional runtime parameters such as --backup-bucket, please add them like this:

air -c scripts/.air.toml -- --backup-region us-east-1 --backup-bucket s3:\\/\\/example-bucket --backup-credential ~/.aws/credentials

Start frontend (with live reload).

cd frontend && pnpm i && pnpm dev

Bytebase should now be running at http://localhost:3000, and changing either frontend or backend code will trigger a live reload.

(Optional) Install pre-commit.

cd bytebase
pre-commit install
pre-commit install --hook-type commit-msg

View on Github

4 - Darwin: Database schema evolution library for Go.

Example

package main

import (
	"database/sql"
	"log"

	"github.com/GuiaBolso/darwin"
	_ "github.com/go-sql-driver/mysql"
)

var (
	migrations = []darwin.Migration{
		{
			Version:     1,
			Description: "Creating table posts",
			Script: `CREATE TABLE posts (
						id INT 		auto_increment, 
						title 		VARCHAR(255),
						PRIMARY KEY (id)
					 ) ENGINE=InnoDB CHARACTER SET=utf8;`,
		},
		{
			Version:     2,
			Description: "Adding column body",
			Script:      "ALTER TABLE posts ADD body TEXT AFTER title;",
		},
	}
)

func main() {
	database, err := sql.Open("mysql", "root:@/darwin")

	if err != nil {
		log.Fatal(err)
	}

	driver := darwin.NewGenericDriver(database, darwin.MySQLDialect{})

	d := darwin.New(driver, migrations, nil)
	err = d.Migrate()

	if err != nil {
		log.Println(err)
	}
}

Questions

Q. Why there is not a command line utility?

A. The purpose of this library is just to be a library.

Q. How can I read migrations from file system?

A. You can read the migration files with the standard library and build the migration list.

Q. Can I put more than one statement in the same Script migration?

A. I do not recommend it. Put one database change per migration; if a migration fails, you know exactly which statement caused the error. Also, only Postgres correctly handles rollback of DDL statements in transactions.

To make the versioning less annoying, you can organize your migrations like 1.0, 1.1, 1.2, and so on.

Q. Why are there no downgrade migrations?

A. Please read https://flywaydb.org/documentation/faq#downgrade

Q. Does Darwin perform a roll back if a migration fails?

A. Please read https://flywaydb.org/documentation/faq#rollback

Q. What is the best strategy for dealing with hot fixes?

A. Please read https://flywaydb.org/documentation/faq#hot-fixes

View on Github

5 - Go-fixtures: Django style fixtures for Golang's excellent built-in database/sql library.

Django style fixtures for Golang's excellent built-in database/sql library. Currently only YAML fixtures are supported.

There are two reserved values you can use for datetime fields:

  • ON_INSERT_NOW() will only be used when a row is being inserted
  • ON_UPDATE_NOW() will only be used when a row is being updated

Example YAML fixture:

---

- table: 'some_table'
  pk:
    id: 1
  fields:
    string_field: 'foobar'
    boolean_field: true
    created_at: 'ON_INSERT_NOW()'
    updated_at: 'ON_UPDATE_NOW()'

- table: 'other_table'
  pk:
    id: 2
  fields:
    int_field: 123
    boolean_field: false
    created_at: 'ON_INSERT_NOW()'
    updated_at: 'ON_UPDATE_NOW()'

- table: 'join_table'
  pk:
    some_id: 1

View on Github

6 - Go-pg-migrate: CLI-friendly package for go-pg migrations management.

Installation

Requires Go Modules enabled.

go get github.com/lawzava/go-pg-migrate/v2

Usage

Initialize the migrate with options payload where choices are:

DatabaseURI database connection string. In a format of postgres://user:password@host:port/database?sslmode=disable.

VersionNumberToApply uint value of a migration number up to which the migrations should be applied. When the requested migration number is lower than the currently applied migration number, it will run backward migrations; otherwise it will run forward migrations.

PrintVersionAndExit if true, the currently applied version number will be printed into stdout and the migrations will not be applied.

ForceVersionWithoutMigrations if true, the migrations will not be applied, but they will be registered as applied up to the specified version number.

RefreshSchema if true, public schema will be dropped and recreated before the migrations are applied. Useful for frequent testing and CI environments.

View on Github

7 - Go-pg-migrations: A Go package to help write migrations with go-pg/pg.

Usage

Installation

Because go-pg now has Go modules support, go-pg-migrations also has modules support; it currently depends on v10 of go-pg. To install it, use the following command in a project with a go.mod:

$ go get github.com/robinjoseph08/go-pg-migrations/v3

If you are not yet using Go modules, you can still use v1 of this package.

Running

To see how this package is intended to be used, you can look at the example directory. All you need to do is have a main package (e.g. example); call migrations.Run with the directory you want the migration files to be saved in (which will be the same directory of the main package, e.g. example), an instance of *pg.DB, and os.Args; and log any potential errors that could be returned.

Once this has been set up, then you can use the create, migrate, rollback, help commands like so:

$ go run example/*.go create create_users_table
Creating example/20180812001528_create_users_table.go...

$ go run example/*.go migrate
Running batch 1 with 1 migration(s)...
Finished running "20180812001528_create_users_table"

$ go run example/*.go rollback
Rolling back batch 1 with 1 migration(s)...
Finished rolling back "20180812001528_create_users_table"

$ go run example/*.go help
Usage:
  go run example/*.go [command]

Commands:
  create   - create a new migration in example with the provided name
  migrate  - run any migrations that haven't been run yet
  rollback - roll back the previous run batch of migrations
  help     - print this help text

Examples:
  go run example/*.go create create_users_table
  go run example/*.go migrate
  go run example/*.go rollback
  go run example/*.go help

While this works when you have the Go toolchain installed, there might be a scenario where you have to run migrations and you don't have the toolchain available (e.g. in a scratch or alpine Docker image deployed to production). In that case, you should compile another binary (in addition to your actual application) and copy it into the final image. This will include all of your migrations and allow you to run it by overriding the command when running the Docker container.

View on Github

8 - Goavro: A Go package that encodes and decodes Avro data.

Goavro is a library that encodes and decodes Avro data.

Description

  • Encodes to and decodes from both binary and textual JSON Avro data.
  • Codec is stateless and is safe to use by multiple goroutines.

With the exception of features not yet supported, goavro attempts to be fully compliant with the most recent version of the Avro specification.

Dependency Notice

All usage of gopkg.in has been removed in favor of Go modules. Please update your import paths to github.com/linkedin/goavro/v2. v1 users can still use old versions of goavro by adding a constraint to your go.mod or Gopkg.toml file.

require (
    github.com/linkedin/goavro v1.0.5
)
[[constraint]]
name = "github.com/linkedin/goavro"
version = "=1.0.5"

Usage

Documentation is available via GoDoc.

package main

import (
    "fmt"

    "github.com/linkedin/goavro/v2"
)

func main() {
    codec, err := goavro.NewCodec(`
        {
          "type": "record",
          "name": "LongList",
          "fields" : [
            {"name": "next", "type": ["null", "LongList"], "default": null}
          ]
        }`)
    if err != nil {
        fmt.Println(err)
    }

    // NOTE: May omit fields when using default value
    textual := []byte(`{"next":{"LongList":{}}}`)

    // Convert textual Avro data (in Avro JSON format) to native Go form
    native, _, err := codec.NativeFromTextual(textual)
    if err != nil {
        fmt.Println(err)
    }

    // Convert native Go form to binary Avro data
    binary, err := codec.BinaryFromNative(nil, native)
    if err != nil {
        fmt.Println(err)
    }

    // Convert binary Avro data back to native Go form
    native, _, err = codec.NativeFromBinary(binary)
    if err != nil {
        fmt.Println(err)
    }

    // Convert native Go form to textual Avro data
    textual, err = codec.TextualFromNative(nil, native)
    if err != nil {
        fmt.Println(err)
    }

    // NOTE: Textual encoding will show all fields, even those with values that
    // match their default values
    fmt.Println(string(textual))
    // Output: {"next":{"LongList":{"next":null}}}
}

Also please see the example programs in the examples directory for reference.

View on Github

9 - Godfish: Database migration manager, works with native query language. Support for cassandra, mysql, postgres, sqlite3.

godfish is a database migration manager, similar to the very good dogfish, but written in golang.

goals

  • use the native query language in the migration files, no other high-level DSLs
  • interface with many DBs
  • light on dependencies
  • not terrible error messages

build

Make a CLI binary for the DB you want to use. This tool comes with some driver implementations. Build one like so:

make build-cassandra
make build-mysql
make build-postgres
make build-sqlite3
make build-sqlserver

From there you could move it to $GOPATH/bin, move it to your project or whatever else you need to do.

usage

godfish help
godfish -h
godfish <command> -h

Configuration options are read from command line flags first. If those are not set, then it checks the configuration file.

connecting to the db

Database connection parameters are always read from environment variables. Set:

DB_DSN=

View on Github

10 - Goose: Database migration tool. You can manage your database's evolution by creating incremental SQL or Go scripts.

Goose is a database migration tool. Manage your database schema by creating incremental SQL changes or Go functions.

Starting with v3.0.0 this project adds Go module support, but maintains backwards compatibility with older v2.x.y tags.

Goose supports embedding SQL migrations, which means you'll need go1.16 and up. If using go1.15 or lower, then pin v3.0.1.
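
A minimal sketch of embedding and applying migrations from your own binary (the migrations directory name and the Postgres driver are assumptions for this example):

package main

import (
    "database/sql"
    "embed"
    "log"

    _ "github.com/lib/pq"
    "github.com/pressly/goose/v3"
)

//go:embed migrations/*.sql
var embedMigrations embed.FS

func main() {
    db, err := sql.Open("postgres", "user=postgres dbname=postgres sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    // Serve the embedded migration files to goose.
    goose.SetBaseFS(embedMigrations)

    if err := goose.SetDialect("postgres"); err != nil {
        log.Fatal(err)
    }

    // Apply all pending migrations from the embedded directory.
    if err := goose.Up(db, "migrations"); err != nil {
        log.Fatal(err)
    }
}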

Install

$ go install github.com/pressly/goose/v3/cmd/goose@latest

This will install the goose binary to your $GOPATH/bin directory.

For a lite version of the binary without DB connection dependent commands, use the exclusive build tags:

$ go build -tags='no_postgres no_mysql no_sqlite3' -o goose ./cmd/goose

For macOS users goose is available as a Homebrew Formulae:

$ brew install goose

See the docs for more installation instructions.

Usage

Usage: goose [OPTIONS] DRIVER DBSTRING COMMAND

Drivers:
    postgres
    mysql
    sqlite3
    mssql
    redshift
    tidb
    clickhouse

Examples:
    goose sqlite3 ./foo.db status
    goose sqlite3 ./foo.db create init sql
    goose sqlite3 ./foo.db create add_some_column sql
    goose sqlite3 ./foo.db create fetch_user_data go
    goose sqlite3 ./foo.db up

    goose postgres "user=postgres password=postgres dbname=postgres sslmode=disable" status
    goose mysql "user:password@/dbname?parseTime=true" status
    goose redshift "postgres://user:password@qwerty.us-east-1.redshift.amazonaws.com:5439/db" status
    goose tidb "user:password@/dbname?parseTime=true" status
    goose mssql "sqlserver://user:password@dbname:1433?database=master" status

Options:

  -allow-missing
    	applies missing (out-of-order) migrations
  -certfile string
    	file path to root CA's certificates in pem format (only support on mysql)
  -dir string
    	directory with migration files (default ".")
  -h	print help
  -no-versioning
    	apply migration commands with no versioning, in file order, from directory pointed to
  -s	use sequential numbering for new migrations
  -ssl-cert string
    	file path to SSL certificates in pem format (only support on mysql)
  -ssl-key string
    	file path to SSL key in pem format (only support on mysql)
  -table string
    	migrations table name (default "goose_db_version")
  -v	enable verbose mode
  -version
    	print version

Commands:
    up                   Migrate the DB to the most recent version available
    up-by-one            Migrate the DB up by 1
    up-to VERSION        Migrate the DB to a specific VERSION
    down                 Roll back the version by 1
    down-to VERSION      Roll back to a specific VERSION
    redo                 Re-run the latest migration
    reset                Roll back all migrations
    status               Dump the migration status for the current DB
    version              Print the current version of the database
    create NAME [sql|go] Creates new migration file with the current timestamp
    fix                  Apply sequential ordering to migrations
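
A SQL migration file itself uses annotated up and down sections; for example (the table definition is invented for illustration):

-- +goose Up
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username TEXT NOT NULL
);

-- +goose Down
DROP TABLE users;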

View on Github

Thank you for following this article.

Related videos:

Golang Tools: Database schema migrations

#go #golang #database #schema 

10 Best Libraries for Database Schema Migration in Go