JSON

JSON is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.

Command Line Tool for Generating Dart Floor Entities/Models

A command line tool for generating Dart Floor (a neat SQLite abstraction for Dart) entities/models from JSON files.

Inspired by and based on json_to_model v2.3.1.

Contents

Features

  • Null safety
  • toJson/fromJson
  • @entity classes
  • copyWith generation
  • clone and deepclone
  • nested json classes
  • alter tables and fields
  • INTEGER (int) support
  • REAL (num) support
  • TEXT (String) support
  • BLOB (Uint8List) support

Installation

on pubspec.yaml

dev_dependencies:
  json_to_floor_entity: ^1.1.2 # or the latest version

Install it by running pub get, or use the install option in your editor (VS Code or Android Studio).

What does this library do

Command line tool to convert .json files into immutable .dart models.

Get started

The command runs through your JSON files, infers the type, variable name, import URI, decorator, and class name for each field, and writes them into the templates.

Create or copy .json files into ./jsons/ (the default) at the root of your project, then run flutter pub run json_to_floor_entity.

Examples

Input: consider these files, named product.json and employee.json.

product.json

{
  "id": "123",
  "caseId?": "123",
  "startDate?": "2020-08-08",
  "endDate?": "2020-10-10",
  "placementDescription?": "Description string"
}

employee.json

{
  "id": "123",
  "displayName?": "Jan Jansen",
  "@ignore products?": "$[]product"
}

Output: this will generate the following product.dart and employee.dart.

product.dart

import 'package:floor/floor.dart';

@entity
class Product {

  const Product({
    required this.id,
    this.caseId,
    this.startDate,
    this.endDate,
    this.placementDescription,
  });

  @primaryKey
  final String id;
  final String? caseId;
  final String? startDate;
  final String? endDate;
  final String? placementDescription;

  factory Product.fromJson(Map<String,dynamic> json) => Product(
    id: json['id'] as String,
    caseId: json['caseId'] != null ? json['caseId'] as String : null,
    startDate: json['startDate'] != null ? json['startDate'] as String : null,
    endDate: json['endDate'] != null ? json['endDate'] as String : null,
    placementDescription: json['placementDescription'] != null ? json['placementDescription'] as String : null
  );

  Map<String, dynamic> toJson() => {
    'id': id,
    'caseId': caseId,
    'startDate': startDate,
    'endDate': endDate,
    'placementDescription': placementDescription
  };

  Product clone() => Product(
    id: id,
    caseId: caseId,
    startDate: startDate,
    endDate: endDate,
    placementDescription: placementDescription
  );


  Product copyWith({
    String? id,
    String? caseId,
    String? startDate,
    String? endDate,
    String? placementDescription
  }) => Product(
    id: id ?? this.id,
    caseId: caseId ?? this.caseId,
    startDate: startDate ?? this.startDate,
    endDate: endDate ?? this.endDate,
    placementDescription: placementDescription ?? this.placementDescription,
  );

  @override
  bool operator ==(Object other) => identical(this, other)
    || other is Product && id == other.id && caseId == other.caseId && startDate == other.startDate && endDate == other.endDate && placementDescription == other.placementDescription;

  @override
  int get hashCode => id.hashCode ^ caseId.hashCode ^ startDate.hashCode ^ endDate.hashCode ^ placementDescription.hashCode;
}

employee.dart

import 'package:floor/floor.dart';
import 'product.dart';

@entity
class Employee {

  const Employee({
    required this.id,
    this.displayName,
    this.products,
  });

  @primaryKey
  final String id;
  final String? displayName;
  final List<Product>? products;

  factory Employee.fromJson(Map<String,dynamic> json) => Employee(
    id: json['id'] as String,
    displayName: json['displayName'] != null ? json['displayName'] as String : null
  );

  Map<String, dynamic> toJson() => {
    'id': id,
    'displayName': displayName
  };

  Employee clone() => Employee(
    id: id,
    displayName: displayName,
    products: products?.map((e) => e.clone()).toList()
  );


  Employee copyWith({
    String? id,
    String? displayName,
    List<Product>? products
  }) => Employee(
    id: id ?? this.id,
    displayName: displayName ?? this.displayName,
    products: products ?? this.products,
  );

  @override
  bool operator ==(Object other) => identical(this, other)
    || other is Employee && id == other.id
    && displayName == other.displayName
    && products == other.products;

  @override
  int get hashCode => id.hashCode ^
    displayName.hashCode ^
    products.hashCode;
}

Create a DAO (Data Access Object)

This component is responsible for managing access to the underlying SQLite database. The tool automatically creates a DAO like this:

import 'package:floor/floor.dart';


@dao
abstract class NewsDao {

  @Query('SELECT * FROM News')
  Future<List<News>> findAll();

  @Query('SELECT * FROM News WHERE id = :id')
  Future<News?> findById(int id);

  @insert
  Future<void> add(News entity);
  
  @insert
  Future<void> addList(List<News> entities);

  @update
  Future<void> edit(News entity);

  @update
  Future<void> editList(List<News> entities);

  @delete
  Future<void> remove(News entity);

  @delete
  Future<void> removeList(List<News> entities);

}

These files will not be deleted or updated after they are created.

Create the Database

It has to be an abstract class which extends FloorDatabase. The tool automatically creates the database class like this:

// database.dart
// required package imports
import 'dart:async';
import 'package:floor/floor.dart';
import 'package:sqflite/sqflite.dart' as sqflite;

import 'dao/person_dao.dart';
import 'entity/person.dart';

part 'database.g.dart'; // the generated code will be there

@Database(version: 1, entities: [Person])
abstract class AppDatabase extends FloorDatabase {
  PersonDao get personDao;
}

Getting started

  1. Create a directory jsons (the default) at the root of your project
  2. Put existing JSON files, or create new ones, inside the jsons directory
  3. run
    pub run json_to_floor_entity
    or
    pub run json_to_floor_entity -s assets/api_jsons -o lib/models
    or
    flutter pub run json_to_floor_entity -s assets/api_jsons -o lib/models
    in a Flutter project
  4. run
    flutter packages pub run build_runner build

Usage

You can also use it for plain Dart models.

This package reads each .json file and generates a .dart file, assigning the value's type as the variable type and the key as the variable name.

  • Declare type depending on the JSON value
    Expression: {...: any type}
    Input: {"id": 1, "message": "hello world"}
    Output (declaration): int id; String message;

  • Import model and assign type
    Expression: {...: "$value"}
    Input: {"auth": "$user"}
    Output (declaration): User auth;
    Output (import): import 'user.dart'

  • Import from path
    Expression: {...: "$../pathto/value"}
    Input: {"price": "$../product/price"}
    Output (declaration): Price price;
    Output (import): import '../product/price.dart'

  • Assign a list of a type and import it (can also be recursive)
    Expression: {...: "$[]value"}
    Input: {"addreses": "$[]address"}
    Output (declaration): List<Address> addreses;
    Output (import): import 'address.dart'

  • Import another library (the input value can be an array)
    Expression: {"@import": ...}
    Input: {"@import": "package:otherlibrary/otherlibrary.dart"}
    Output (import): import 'package:otherlibrary/otherlibrary.dart'

  • DateTime type
    Expression: {...: "@datetime"}
    Input: {"createdAt": "@datetime:2020-02-15T15:47:51.742Z"}
    Output (declaration): DateTime createdAt;

  • Enum type
    Expression: {...: "@enum:(enum values separated by ',')"}
    Input: {"@import": "@enum:admin,app_user,normal"}
    Output (declaration): enum UserTypeEnum { Admin, AppUser, Normal }

  • Enum type with values
    Expression: {...: "@enum:(enum values separated by ',')"}
    Input: {"@import": "@enum:admin(0),app_user(1),normal(2)"}
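As a combined illustration of the conventions above, a single hypothetical input file (the file and field names here are invented for the example) might look like this:

```json
{
  "@import": "package:otherlibrary/otherlibrary.dart",
  "id": 1,
  "note?": "optional free text",
  "customer": "$user",
  "items": "$[]product",
  "createdAt": "@datetime:2020-02-15T15:47:51.742Z"
}
```

Per the table, this should produce a class with an int id, a nullable String note, a User customer (importing user.dart), a List<Product> items (importing product.dart), a DateTime createdAt, and the extra library import at the top of the generated file.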

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add json_to_floor_entity

With Flutter:

 $ flutter pub add json_to_floor_entity

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  json_to_floor_entity: ^1.1.2

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:json_to_floor_entity/json_to_floor_entity.dart'; 

Download Details:

Author: zxskigg

Source Code: https://github.com/zxskigg/flutter_json_to_floor_entity

#flutter #json #floor 

Dexter Goodwin

Fluent-json-schema: A Fluent API to Generate JSON Schemas

fluent-json-schema

A fluent API to generate JSON schemas (draft-07) for Node.js and the browser. Framework agnostic.

Features

  • Fluent schema implements JSON Schema draft-07 standards
  • Faster and shorter way to write a JSON Schema via a fluent API
  • Runtime errors for invalid options or keywords misuse
  • JavaScript constants can be used in the JSON schema (e.g. enum, const, default), avoiding discrepancies between model and schema
  • TypeScript definitions
  • Coverage 99%

Install

npm i fluent-json-schema

or

yarn add fluent-json-schema

Usage

const S = require('fluent-json-schema')

const ROLES = {
  ADMIN: 'ADMIN',
  USER: 'USER',
}

const schema = S.object()
  .id('http://foo/user')
  .title('My First Fluent JSON Schema')
  .description('A simple user')
  .prop('email', S.string().format(S.FORMATS.EMAIL).required())
  .prop('password', S.string().minLength(8).required())
  .prop('role', S.string().enum(Object.values(ROLES)).default(ROLES.USER))
  .prop(
    'birthday',
    S.raw({ type: 'string', format: 'date', formatMaximum: '2020-01-01' }) // formatMaximum is an AJV custom keyword
  )
  .definition(
    'address',
    S.object()
      .id('#address')
      .prop('line1', S.anyOf([S.string(), S.null()])) // JSON Schema nullable
      .prop('line2', S.string().raw({ nullable: true })) // Open API / Swagger  nullable
      .prop('country', S.string())
      .prop('city', S.string())
      .prop('zipcode', S.string())
      .required(['line1', 'country', 'city', 'zipcode'])
  )
  .prop('address', S.ref('#address'))

console.log(JSON.stringify(schema.valueOf(), undefined, 2))

Schema generated:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "definitions": {
    "address": {
      "type": "object",
      "$id": "#address",
      "properties": {
        "line1": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ]
        },
        "line2": {
          "type": "string",
          "nullable": true
        },
        "country": {
          "type": "string"
        },
        "city": {
          "type": "string"
        },
        "zipcode": {
          "type": "string"
        }
      },
      "required": ["line1", "country", "city", "zipcode"]
    }
  },
  "type": "object",
  "$id": "http://foo/user",
  "title": "My First Fluent JSON Schema",
  "description": "A simple user",
  "properties": {
    "email": {
      "type": "string",
      "format": "email"
    },
    "password": {
      "type": "string",
      "minLength": 8
    },
    "birthday": {
      "type": "string",
      "format": "date",
      "formatMaximum": "2020-01-01"
    },
    "role": {
      "type": "string",
      "enum": ["ADMIN", "USER"],
      "default": "USER"
    },
    "address": {
      "$ref": "#address"
    }
  },
  "required": ["email", "password"]
}

TypeScript

With "esModuleInterop": true activated in the tsconfig.json:

import S from 'fluent-json-schema'

const schema = S.object()
  .prop('foo', S.string())
  .prop('bar', S.number())
  .valueOf()

With "esModuleInterop": false in the tsconfig.json:

import * as S from 'fluent-json-schema'

const schema = S.object()
  .prop('foo', S.string())
  .prop('bar', S.number())
  .valueOf()

Validation

fluent-json-schema does not validate data against a JSON schema; it only generates schemas. However, many libraries can do the validation for you. Below are a few examples using AJV:

npm i ajv

or

yarn add ajv

Validate an empty model

Snippet:

const Ajv = require('ajv')

const ajv = new Ajv({ allErrors: true })
const validate = ajv.compile(schema.valueOf())
let user = {}
let valid = validate(user)
console.log({ valid }) //=> { valid: false }
console.log(validate.errors) //=> the two 'required' errors shown below

Output:

{valid: false}
errors: [
  {
    keyword: 'required',
    dataPath: '',
    schemaPath: '#/required',
    params: { missingProperty: 'email' },
    message: "should have required property 'email'",
  },
  {
    keyword: 'required',
    dataPath: '',
    schemaPath: '#/required',
    params: { missingProperty: 'password' },
    message: "should have required property 'password'",
  },
]

Validate a partially filled model

Snippet:

user = { email: 'test', password: 'password' }
valid = validate(user)
console.log({ valid })
console.log(validate.errors)

Output:

{valid: false}
errors:
[ { keyword: 'format',
    dataPath: '.email',
    schemaPath: '#/properties/email/format',
    params: { format: 'email' },
    message: 'should match format "email"' } ]

Validate a model with a wrong format attribute

Snippet:

user = { email: 'test@foo.com', password: 'password' }
valid = validate(user)
console.log({ valid })
console.log('errors:', validate.errors)

Output:

{valid: false}
errors: [ { keyword: 'required',
    dataPath: '.address',
    schemaPath: '#definitions/address/required',
    params: { missingProperty: 'country' },
    message: 'should have required property \'country\'' },
  { keyword: 'required',
    dataPath: '.address',
    schemaPath: '#definitions/address/required',
    params: { missingProperty: 'city' },
    message: 'should have required property \'city\'' },
  { keyword: 'required',
    dataPath: '.address',
    schemaPath: '#definitions/address/required',
    params: { missingProperty: 'zipcode' },
    message: 'should have required property \'zipcode\'' } ]

Valid model

Snippet:

user = { email: 'test@foo.com', password: 'password' }
valid = validate(user)
console.log({ valid })

Output:

{valid: true}

Extend schema

Normally, inheritance with JSON Schema is achieved with allOf. However, when .additionalProperties(false) is used, the validator won't understand which properties come from the base schema. S.extend creates a schema by merging the base into the new one, so the validator knows all the properties because it evaluates only a single schema. For example, in a CRUD API, POST /users could use userBaseSchema, while GET /users or PATCH /users use userSchema, which also contains the id, createdAt, and updatedAt fields generated server side.

const S = require('fluent-json-schema')
const userBaseSchema = S.object()
  .additionalProperties(false)
  .prop('username', S.string())
  .prop('password', S.string())

const userSchema = S.object()
  .prop('id', S.string().format('uuid'))
  .prop('createdAt', S.string().format('time'))
  .prop('updatedAt', S.string().format('time'))
  .extend(userBaseSchema)

console.log(userSchema)
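Calling userSchema.valueOf() on the example above should yield roughly the following merged schema (sketched by hand from the description of S.extend, not captured from a real run):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "username": { "type": "string" },
    "password": { "type": "string" },
    "id": { "type": "string", "format": "uuid" },
    "createdAt": { "type": "string", "format": "time" },
    "updatedAt": { "type": "string", "format": "time" }
  }
}
```

The key point is that the validator sees a single flat schema, so additionalProperties: false applies to the base properties and the new ones alike.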

Selecting certain properties of your schema

In addition to extending schemas, it is also possible to reduce them into smaller schemas. This comes in handy when you have a large Fluent Schema, and would like to re-use some of its properties.

Select only properties you want to keep.

const S = require('fluent-json-schema')
const userSchema = S.object()
  .prop('username', S.string())
  .prop('password', S.string())
  .prop('id', S.string().format('uuid'))
  .prop('createdAt', S.string().format('time'))
  .prop('updatedAt', S.string().format('time'))

const loginSchema = userSchema.only(['username', 'password'])

Or remove the properties you don't want to keep.

const S = require('fluent-json-schema')
const personSchema = S.object()
  .prop('name', S.string())
  .prop('age', S.number())
  .prop('id', S.string().format('uuid'))
  .prop('createdAt', S.string().format('time'))
  .prop('updatedAt', S.string().format('time'))

const bodySchema = personSchema.without(['createdAt', 'updatedAt'])

Detect Fluent Schema objects

Every Fluent Schema object contains a boolean isFluentSchema. In this way, you can write your own utilities that understand the Fluent Schema API and improve the user experience of your tool.

const S = require('fluent-json-schema')
const schema = S.object().prop('foo', S.string()).prop('bar', S.number())
console.log(schema.isFluentSchema) // true
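For instance, a helper that accepts either a fluent schema or a plain schema object could use this flag to normalize its input. This is a sketch: toPlainSchema is an invented name, and the second call uses a hand-built stand-in object rather than the real library.

```javascript
// Normalize input: unwrap fluent schema objects, pass plain objects through.
function toPlainSchema(schema) {
  if (schema && schema.isFluentSchema) {
    return schema.valueOf() // fluent schema: extract the plain JSON schema
  }
  return schema // already a plain object
}

// A plain JSON schema passes through unchanged.
const plain = { type: 'string' }
console.log(toPlainSchema(plain) === plain) // true

// A stand-in for a fluent schema object (in real code this would come
// from S.object(), S.string(), etc.)
const fluentLike = {
  isFluentSchema: true,
  valueOf: () => ({ type: 'object' }),
}
console.log(toPlainSchema(fluentLike).type) // 'object'
```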

Acknowledgments

Thanks to Matteo Collina for pushing me to implement this utility! 🙏

Download Details:

Author: Fastify
Source Code: https://github.com/fastify/fluent-json-schema 
License: MIT license

#javascript #json #schema 

Nat Grady

Elastic: R Client for The Elasticsearch HTTP API

elastic   

A general purpose R interface to Elasticsearch

Compatibility

This client is developed following the latest stable release, currently v7.10.0. It is generally compatible with older versions of Elasticsearch. Unlike the Python client, we try to keep as much compatibility as possible within a single version of this client, as that's an easier setup in the R world.

Security

You're fine running ES locally on your machine, but be careful just throwing up ES on a server with a public IP address - make sure to think about security.

  • Elastic has paid products - but probably only applicable to enterprise users
  • DIY security - there are a variety of techniques for securing your Elasticsearch installation. A number of resources are collected in a blog post - tools include putting your ES behind something like Nginx, putting basic auth on top of it, using https, etc.

Installation

Stable version from CRAN

install.packages("elastic")

Development version from GitHub

remotes::install_github("ropensci/elastic")
library('elastic')

Install Elasticsearch

w/ Docker

Pull the official elasticsearch image

# elasticsearch needs to have a version tag. We're pulling 7.10.1 here
docker pull elasticsearch:7.10.1

Then start up a container

docker run -d -p 9200:9200 elasticsearch:7.10.1

Then elasticsearch should be available on port 9200, try curl localhost:9200 and you should get the familiar message indicating ES is on.

If you're using boot2docker, you'll need to use the IP address in place of localhost. Get it by doing boot2docker ip.

on OSX

  • Download zip or tar file from Elasticsearch see here for download, e.g., curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-darwin-x86_64.tar.gz
  • Extract: tar -zxvf elasticsearch-7.10.0-darwin-x86_64.tar.gz
  • Move it: sudo mv elasticsearch-7.10.0 /usr/local
  • Navigate to /usr/local: cd /usr/local
  • Delete symlinked elasticsearch directory: rm -rf elasticsearch
  • Add shortcut: sudo ln -s elasticsearch-7.10.0 elasticsearch (replace version with your version)

You can also install via Homebrew: brew install elasticsearch

Note: Elasticsearch 1.6 and greater requires Java 8 or greater. I downloaded Java 8 from http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and it seemed to work great.

Upgrading Elasticsearch

I am not totally clear on best practice here, but from what I understand, when you upgrade to a new version of Elasticsearch, place old elasticsearch/data and elasticsearch/config directories into the new installation (elasticsearch/ dir). The new elasticsearch instance with replaced data and config directories should automatically update data to the new version and start working. Maybe if you use homebrew on a Mac to upgrade it takes care of this for you - not sure.

Obviously, upgrading Elasticsearch while keeping it running is a different thing (some help here from Elastic).

Start Elasticsearch

  • Navigate to elasticsearch: cd /usr/local/elasticsearch
  • Start elasticsearch: bin/elasticsearch

I create a little bash shortcut called es that does both of the above commands in one step (cd /usr/local/elasticsearch && bin/elasticsearch).

Initialization

The function connect() is used before doing anything else to set the connection details to your remote or local elasticsearch store. The details created by connect() are written to your options for the current session, and are used by elastic functions.

x <- connect(port = 9200)

If you're following along here with a local instance of Elasticsearch, you'll use x below to do more stuff.

For AWS hosted elasticsearch, make sure to specify path = "" and the correct port - transport schema pair.

connect(host = <aws_es_endpoint>, path = "", port = 80, transport_schema  = "http")
  # or
connect(host = <aws_es_endpoint>, path = "", port = 443, transport_schema  = "https")

If you are using Elastic Cloud or an installation with authentication (X-pack), make sure to specify path = "", user = "", pwd = "" and the correct port - transport schema pair.

connect(host = <ec_endpoint>, path = "", user="test", pwd = "1234", port = 9243, transport_schema  = "https")


 

Get some data

Elasticsearch has a bulk load API to load data in fast. The format is pretty weird though. It's sort of JSON, but would pass no JSON linter. I include a few data sets in elastic so it's easy to get up and running, and so when you run examples in this package they'll actually run the same way (hopefully).

I have prepared a non-exported function for creating the unusual format that Elasticsearch wants for bulk data loads. It is somewhat specific to PLOS data (see below), but you could modify it for your purposes; see make_bulk_plos() and make_bulk_gbif().

Shakespeare data

Elasticsearch provides some data on Shakespeare plays. I've provided a subset of this data in this package. Get the path for the file specific to your machine:

shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
# If you're on Elastic v6 or greater, use this one
shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
shakespeare <- type_remover(shakespeare)

Then load the data into Elasticsearch:

make sure to create your connection object with connect()

# x <- connect()  # do this now if you didn't do this above
invisible(docs_bulk(x, shakespeare))

If you need some big data to play with, the shakespeare dataset is a good one to start with. You can get the whole thing and pop it into Elasticsearch (beware: it may take up to 10 minutes or so):

curl -XGET https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json > shakespeare.json
curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json

Public Library of Science (PLOS) data

A dataset included in the elastic package is metadata for PLOS scholarly articles. Get the file path, then load:

if (index_exists(x, "plos")) index_delete(x, "plos")
plosdat <- system.file("examples", "plos_data.json", package = "elastic")
plosdat <- type_remover(plosdat)
invisible(docs_bulk(x, plosdat))

Global Biodiversity Information Facility (GBIF) data

A dataset included in the elastic package is data for GBIF species occurrence records. Get the file path, then load:

if (index_exists(x, "gbif")) index_delete(x, "gbif")
gbifdat <- system.file("examples", "gbif_data.json", package = "elastic")
gbifdat <- type_remover(gbifdat)
invisible(docs_bulk(x, gbifdat))

GBIF geo data with a coordinates element to allow geo_shape queries

if (index_exists(x, "gbifgeo")) index_delete(x, "gbifgeo")
gbifgeo <- system.file("examples", "gbif_geo.json", package = "elastic")
gbifgeo <- type_remover(gbifgeo)
invisible(docs_bulk(x, gbifgeo))

More data sets

There are more datasets formatted for bulk loading in the sckott/elastic_data GitHub repository. Find it at https://github.com/sckott/elastic_data

Search

Search the plos index and only return 1 result

Search(x, index = "plos", size = 1)$hits$hits
#> [[1]]
#> [[1]]$`_index`
#> [1] "plos"
#> 
#> [[1]]$`_type`
#> [1] "_doc"
#> 
#> [[1]]$`_id`
#> [1] "0"
#> 
#> [[1]]$`_score`
#> [1] 1
#> 
#> [[1]]$`_source`
#> [[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0007737"
#> 
#> [[1]]$`_source`$title
#> [1] "Phospholipase C-\u03b24 Is Essential for the Progression of the Normal Sleep Sequence and Ultradian Body Temperature Rhythms in Mice"

Search the plos index, and query for antibody, limit to 1 result

Search(x, index = "plos", q = "antibody", size = 1)$hits$hits
#> [[1]]
#> [[1]]$`_index`
#> [1] "plos"
#> 
#> [[1]]$`_type`
#> [1] "_doc"
#> 
#> [[1]]$`_id`
#> [1] "813"
#> 
#> [[1]]$`_score`
#> [1] 5.18676
#> 
#> [[1]]$`_source`
#> [[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0107638"
#> 
#> [[1]]$`_source`$title
#> [1] "Sortase A Induces Th17-Mediated and Antibody-Independent Immunity to Heterologous Serotypes of Group A Streptococci"

Get documents

Get document with id=4

docs_get(x, index = 'plos', id = 4)
#> $`_index`
#> [1] "plos"
#> 
#> $`_type`
#> [1] "_doc"
#> 
#> $`_id`
#> [1] "4"
#> 
#> $`_version`
#> [1] 1
#> 
#> $`_seq_no`
#> [1] 4
#> 
#> $`_primary_term`
#> [1] 1
#> 
#> $found
#> [1] TRUE
#> 
#> $`_source`
#> $`_source`$id
#> [1] "10.1371/journal.pone.0107758"
#> 
#> $`_source`$title
#> [1] "Lactobacilli Inactivate Chlamydia trachomatis through Lactic Acid but Not H2O2"

Get certain fields

docs_get(x, index = 'plos', id = 4, fields = 'id')
#> $`_index`
#> [1] "plos"
#> 
#> $`_type`
#> [1] "_doc"
#> 
#> $`_id`
#> [1] "4"
#> 
#> $`_version`
#> [1] 1
#> 
#> $`_seq_no`
#> [1] 4
#> 
#> $`_primary_term`
#> [1] 1
#> 
#> $found
#> [1] TRUE

Get multiple documents via the multiget API

Same index and different document ids

docs_mget(x, index = "plos", id = 1:2)
#> $docs
#> $docs[[1]]
#> $docs[[1]]$`_index`
#> [1] "plos"
#> 
#> $docs[[1]]$`_type`
#> [1] "_doc"
#> 
#> $docs[[1]]$`_id`
#> [1] "1"
#> 
#> $docs[[1]]$`_version`
#> [1] 1
#> 
#> $docs[[1]]$`_seq_no`
#> [1] 1
#> 
#> $docs[[1]]$`_primary_term`
#> [1] 1
#> 
#> $docs[[1]]$found
#> [1] TRUE
#> 
#> $docs[[1]]$`_source`
#> $docs[[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0098602"
#> 
#> $docs[[1]]$`_source`$title
#> [1] "Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar"
#> 
#> 
#> 
#> $docs[[2]]
#> $docs[[2]]$`_index`
#> [1] "plos"
#> 
#> $docs[[2]]$`_type`
#> [1] "_doc"
#> 
#> $docs[[2]]$`_id`
#> [1] "2"
#> 
#> $docs[[2]]$`_version`
#> [1] 1
#> 
#> $docs[[2]]$`_seq_no`
#> [1] 2
#> 
#> $docs[[2]]$`_primary_term`
#> [1] 1
#> 
#> $docs[[2]]$found
#> [1] TRUE
#> 
#> $docs[[2]]$`_source`
#> $docs[[2]]$`_source`$id
#> [1] "10.1371/journal.pone.0107757"
#> 
#> $docs[[2]]$`_source`$title
#> [1] "Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition"

Parsing

You can optionally get back raw json from Search(), docs_get(), and docs_mget() setting parameter raw=TRUE.

For example:

(out <- docs_mget(x, index = "plos", id = 1:2, raw = TRUE))
#> [1] "{\"docs\":[{\"_index\":\"plos\",\"_type\":\"_doc\",\"_id\":\"1\",\"_version\":1,\"_seq_no\":1,\"_primary_term\":1,\"found\":true,\"_source\":{\"id\":\"10.1371/journal.pone.0098602\",\"title\":\"Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar\"}},{\"_index\":\"plos\",\"_type\":\"_doc\",\"_id\":\"2\",\"_version\":1,\"_seq_no\":2,\"_primary_term\":1,\"found\":true,\"_source\":{\"id\":\"10.1371/journal.pone.0107757\",\"title\":\"Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition\"}}]}"
#> attr(,"class")
#> [1] "elastic_mget"

Then parse

jsonlite::fromJSON(out)
#> $docs
#>   _index _type _id _version _seq_no _primary_term found
#> 1   plos  _doc   1        1       1             1  TRUE
#> 2   plos  _doc   2        1       2             1  TRUE
#>                     _source.id
#> 1 10.1371/journal.pone.0098602
#> 2 10.1371/journal.pone.0107757
#>                                                                                                                                                _source.title
#> 1 Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar
#> 2                                Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition

Known pain points

  • On secure Elasticsearch servers:
    • HEAD requests don't seem to work, not sure why
    • If you allow only GET requests, a number of functions that require POST requests won't work. A big one is Search(), but you can use Search_uri() to get around this; it uses GET instead of POST, though you can't pass a more complicated query via the body

Screencast

A screencast introducing the package: vimeo.com/124659179

Meta

  • Please report any issues or bugs
  • License: MIT
  • Get citation information for elastic in R doing citation(package = 'elastic')
  • Please note that this package is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Elasticsearch info

Download Details:

Author: Ropensci
Source Code: https://github.com/ropensci/elastic 
License: Unknown, MIT licenses found

#r #http #elasticsearch #json 


TypeScript JSON: 2x Faster JSON Stringify Function with only One Line

TypeScript-JSON

Super-fast runtime type checkers and JSON.stringify() functions, in only one line.

import TSON from "typescript-json";

// RUNTIME TYPE CHECKERS
TSON.assertType<T>(input); // throws exception
TSON.is<T>(input); // returns boolean value
TSON.validate<T>(input); // archives all type errors

// STRINGIFY
TSON.stringify<T>(input); // 5x faster JSON.stringify()

// APPENDIX FUNCTIONS
TSON.application<[T, U, V], "swagger">(); // JSON schema application generator
TSON.create<T>(input); // 2x faster object creator (only one-time construction)

typescript-json is a transformer library providing JSON-related functions.

  • Powerful Runtime type checkers:
    • Performed by only one line, TSON.assertType<T>(input)
    • Only one library which can validate union type
    • Maximum 100x faster than other libraries
  • 5x faster JSON.stringify() function:
    • Performed by only one line: TSON.stringify<T>(input)
    • Only one library which can stringify union type
    • 10,000x faster optimizer construction time than similar libraries

JSON String Conversion Benchmark

Measured on AMD R7 5800HS, ASUS ROG FLOW X13 (numeric option: false)

Setup

NPM Package

First, install typescript-json with the npm install command.

Also, you need additional devDependencies to compile TypeScript code with transformation: install the libraries typescript, ttypescript, and ts-node. Note that ttypescript is not a typo; do not forget to install it.

npm install --save typescript-json

# ENSURE THOSE PACKAGES ARE INSTALLED
npm install --save-dev typescript
npm install --save-dev ttypescript
npm install --save-dev ts-node

tsconfig.json

After the installation, you have to configure your tsconfig.json file like below.

Add a transform property with the value typescript-json/lib/transform to the compilerOptions.plugins array. I also recommend enabling the strict option, to force developers to distinguish whether each property is nullable or undefinable.

You can also configure the additional properties numeric and functional. The first, numeric, controls whether Number.isNaN() and Number.isFinite() are tested on numeric values. The second, functional, controls whether function types are tested. Both options default to true.

{
  "compilerOptions": {
    "strict": true,
    "plugins": [
      {
        "transform": "typescript-json/lib/transform",
        // "functional": true, // test function type
        // "numeric": true, // test `isNaN()` and `isFinite()`
      }
    ]
  }
}

After defining tsconfig.json, you can compile code that uses typescript-json with ttypescript. If you want to run your TypeScript file through ts-node, pass the -C ttypescript argument like below:

# COMPILE
npx ttsc

# WITH TS-NODE
npx ts-node -C ttypescript

webpack

If you're using webpack with ts-loader, configure the webpack.config.js file like below:

const transform = require("typescript-json/lib/transform").default;

module.exports = {
    // I am hiding the rest of the webpack config
    module: {
        rules: [
            {
                test: /\.ts$/,
                exclude: /node_modules/,
                loader: 'ts-loader',
                options: {
                    getCustomTransformers: program => ({
                        before: [transform(program)]
                        // before: [
                        //     transform(program, {
                        //         functional: true,
                        //         numeric: true
                        // })
                        // ]
                    })
                }
            }
        ]
    }
};

Features

Runtime Type Checkers

export function assertType<T>(input: T): T;
export function is<T>(input: T): boolean;
export function validate<T>(input: T): IValidation;

export interface IValidation {
    success: boolean;
    errors: IValidation.IError[];
}
export namespace IValidation {
    export interface IError {
        path: string;
        expected: string;
        value: any;
    }
}

typescript-json provides three runtime type-checker functions.

The first, assertType(), throws a TypeGuardError when the input value does not match its type, the generic argument T. The second, is(), returns a boolean indicating whether the input matches. The last, validate(), collects every type error into an IValidation.errors array.
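As an illustration of what such a generated checker amounts to, here is a hand-written sketch of a type guard for a hypothetical IMember interface. The transformer generates this kind of specialized code for you from the type alone; the actual emitted code differs:

```typescript
// Hypothetical interface, used only for illustration.
interface IMember {
    id: number;
    email: string;
    age?: number;
}

// A sketch of the kind of specialized checker the transformer
// generates for is<IMember>(); every property test is baked in
// at compile time, so no schema object is consulted at runtime.
function isIMember(input: unknown): input is IMember {
    if (typeof input !== "object" || input === null) return false;
    const obj = input as Record<string, unknown>;
    return (
        typeof obj.id === "number" &&
        typeof obj.email === "string" &&
        (obj.age === undefined || typeof obj.age === "number")
    );
}

console.log(isIMember({ id: 1, email: "a@b.c" }));   // true
console.log(isIMember({ id: "1", email: "a@b.c" })); // false
```

With the real library, none of this is written by hand: TSON.is<IMember>(input) expands to an equivalent checker during compilation.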

Comparing these type checkers with similar libraries, typescript-json is much easier to use than all of them except typescript-is. For example, ajv requires a complicated JSON schema definition that is separate from the TypeScript type, whereas typescript-json needs only one line.

Also, only typescript-json can validate union-typed structures exactly. The other libraries can check simple object types, but none of them can validate complicated union types. The funny thing is, ajv requires a JSON schema definition for validation, yet it cannot validate the JSON schema type itself. How contradictory.

Components compared across TSON, T.IS, ajv, io-ts, and C.V.:

  • Easy to use
  • Object (simple)
  • Object (hierarchical)
  • Object (recursive)
  • Object (union, implicit)
  • Object (union, explicit)
  • Array (hierarchical)
  • Array (recursive)
  • Array (recursive, union)
  • Array (R+U, implicit)
  • Ultimate Union Type

  • TSON: typescript-json
  • T.IS: typescript-is
  • C.V.: class-validator

(The per-library support marks of the original table did not survive extraction.)

Furthermore, when union types are involved, typescript-json is dramatically faster than the others.

As the table above indicates, ajv and typescript-is fail in most union-type cases. They also show a huge performance gap versus typescript-json in the time benchmark, which does not even check whether the validation is exact.

The gap is most extreme for the "ultimate union" type, when validating the JSON schema type itself.

Super-fast runtime type checker

Measured on Intel i5-1135g7, Surface Pro 8

Fastest JSON String Converter

export function stringify<T>(input: T): string;

Super-fast JSON string conversion function.

If you call TSON.stringify() instead of the native JSON.stringify(), JSON conversion is about 5x faster. You can perform this super-fast conversion with only one line: TSON.stringify<T>(input).

By contrast, similar libraries like fast-json-stringify require a complicated JSON schema definition. Furthermore, typescript-json can convert complicated structured data that fast-json-stringify cannot.

Comparing performance, typescript-json is about 5x faster when measuring only JSON string conversion time; comparing optimizer construction time, it is up to 10,000x faster.
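To see why a compile-time-specialized serializer can beat the native one, here is a hand-written sketch for a hypothetical IProduct type: because the property names and types are known ahead of time, the output string is assembled without enumerating keys at runtime. This is only an illustration, not the library's generated code:

```typescript
// Hypothetical type, used only for illustration.
interface IProduct {
    id: number;
    name: string;
    price: number;
}

// Sketch of a shape-specialized serializer like the one the
// transformer generates for stringify<IProduct>(): key names are
// baked into the template, only the string field needs escaping.
function stringifyProduct(p: IProduct): string {
    return `{"id":${p.id},"name":${JSON.stringify(p.name)},"price":${p.price}}`;
}

const product: IProduct = { id: 1, name: "pen", price: 2.5 };
console.log(stringifyProduct(product) === JSON.stringify(product)); // true
```

With the real library, TSON.stringify<IProduct>(product) expands to an equivalent specialized serializer during compilation.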

JSON conversion speed on each CPU

AMD CPU shows dramatic improvement

JSON Schema Generation

export function application<
    Types extends unknown[],
    Purpose extends "swagger" | "ajv" = "swagger",
    Prefix extends string = Purpose extends "swagger"
        ? "#/components/schemas"
        : "components#/schemas",
>(): IJsonApplication;

typescript-json also supports JSON schema application generation.

When you need to share your TypeScript types with other languages, this application() function is useful: it generates a JSON schema definition by analyzing your Types. With typescript-json and its application() function, you do not need to write JSON schema definitions manually.

By the way, if the reason you are using application() is to generate Swagger documents, I recommend my other library nestia instead. It automates Swagger document generation by analyzing your entire backend server code.
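For a rough idea of what such a generated application contains, here is a hypothetical swagger-style schema for an illustrative IMember type with properties id (number), email (string), and an optional age (number). The library's actual output is defined by its IJsonApplication type and will differ in detail:

```typescript
// Hand-written sketch (not actual library output) of a swagger-style
// JSON schema for: interface IMember { id: number; email: string; age?: number; }
const memberSchema = {
    $ref: "#/components/schemas/IMember",
    components: {
        schemas: {
            IMember: {
                type: "object",
                properties: {
                    id: { type: "number" },
                    email: { type: "string" },
                    age: { type: "number" },
                },
                // age is optional, so it is absent from "required"
                required: ["id", "email"],
            },
        },
    },
};

console.log(JSON.stringify(memberSchema.components.schemas.IMember.required));
```

The point of application() is that a document like this is derived from the TypeScript type automatically instead of being maintained by hand.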

Appendix

Nestia

https://github.com/samchon/nestia

Automatic SDK and Swagger generator for NestJS, more evolved than ever.

nestia is an evolved SDK and Swagger generator that analyzes your NestJS server code at the compilation level. With nestia and its compilation-level analyzer, you do not need to write any Swagger or class-validator decorators.

Reading the table and example code below, you can feel how the "compilation level" makes nestia stronger.

Components compared across nestia::SDK, nestia::swagger, and @nestjs/swagger:

  • Pure DTO interface
  • Description comments
  • Simple structure
  • Generic type
  • Union type
  • Intersection type
  • Conditional type
  • Auto completion
  • Type hints
  • 5x faster JSON.stringify()
  • Ensure type safety

(The per-library support marks of the original table did not survive extraction.)
// IMPORT SDK LIBRARY GENERATED BY NESTIA
import api from "@samchon/shopping-api";
import { IPage } from "@samchon/shopping-api/lib/structures/IPage";
import { ISale } from "@samchon/shopping-api/lib/structures/ISale";
import { ISaleArticleComment } from "@samchon/shopping-api/lib/structures/ISaleArticleComment";
import { ISaleQuestion } from "@samchon/shopping-api/lib/structures/ISaleQuestion";

export async function trace_sale_question_and_comment
    (connection: api.IConnection): Promise<void>
{
    // LIST UP SALE SUMMARIES
    const index: IPage<ISale.ISummary> = await api.functional.shoppings.sales.index
    (
        connection,
        "general",
        { limit: 100, page: 1 }
    );

    // PICK A SALE
    const sale: ISale = await api.functional.shoppings.sales.at
    (
        connection, 
        index.data[0].id
    );
    console.log("sale", sale);

    // WRITE A QUESTION
    const question: ISaleQuestion = await api.functional.shoppings.sales.questions.store
    (
        connection,
        "general",
        sale.id,
        {
            title: "How to use this product?",
            body: "The description is not fully enough. Can you introduce me more?",
            files: []
        }
    );
    console.log("question", question);

    // WRITE A COMMENT
    const comment: ISaleArticleComment = await api.functional.shoppings.sales.comments.store
    (
        connection,
        "general",
        sale.id,
        question.id,
        {
            body: "p.s) Can you send me a detailed catalogue?",
            anonymous: false
        }
    );
    console.log("comment", comment);
}

Nestia-Helper

https://github.com/samchon/nestia-helper

Helper library of NestJS, using this typescript-json.

nestia-helper is a helper library for NestJS that makes JSON.stringify() of API response bodies about 5x faster. It also supports automatic validation of request bodies.

import helper from "nestia-helper";
import * as nest from "@nestjs/common";

@nest.Controller("bbs/articles")
export class BbsArticlesController
{
    // TSON.stringify() for response body
    @helper.TypedRoute.Get()
    public store(
        // TSON.assertType() for request body
        @helper.TypedBody() input: IBbsArticle.IStore
    ): Promise<IBbsArticle>;
}

Author: samchon
Source code: https://github.com/samchon/typescript-json
License: MIT license
Thai Son

How to Convert a Collection to JSON in Laravel?

This tutorial focuses on how to convert a collection to JSON in Laravel. This post gives you a simple example of converting a Laravel object to JSON.

You can use this example with Laravel 6, Laravel 7, Laravel 8, and Laravel 9.

Sometimes we fetch data from the database and need to convert the Eloquent data to JSON; how do you do that? Don't worry, there are many ways to convert a collection to JSON in Laravel. We will use the toJson() and json_encode() methods to convert an array of objects to JSON in Laravel.

So let's look at the examples below one by one:

Example 1: get() with toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

Output:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Example 2: find() with toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

Output:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

Example 3: json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

Output:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Example 4: Custom collection using toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

Output:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

Example 5: JSON Response

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

Output:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

I hope this helps you...

Source: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

Làm Thế Nào để Chuyển đổi Bộ Sưu Tập Sang JSON Trong Laravel?

¿Cómo Convertir La Colección A JSON En Laravel?

Este tutorial se centra en cómo convertir una colección a json en laravel. Esta publicación le dará un ejemplo simple de laravel convert object to json. 

Puede usar este ejemplo con laravel 6, laravel 7, laravel 8 y laravel 9 también.

A veces, estamos obteniendo datos de la base de datos y necesita convertir datos elocuentes en JSON, entonces, ¿cómo hará esto? no te preocupes, hay muchas formas de convertir la colección a JSON en laravel. Usaremos el método toJson() y json_encode() para convertir la matriz de objetos a JSON en laravel.

así que veamos a continuación uno por un ejemplo:

Ejemplo 1: get() con toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

Producción:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Ejemplo 2: find() con toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

Producción:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

Ejemplo 3: json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

Producción:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Ejemplo 4: Colección personalizada usando toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

Producción:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

Ejemplo 5: Respuesta JSON

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

Producción:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

Espero que te pueda ayudar...

Fuente: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

¿Cómo Convertir La Colección A JSON En Laravel?
Anne  de Morel

Anne de Morel

1660091400

Comment Convertir Une Collection En JSON Dans Laravel ?

Ce tutoriel se concentre sur la façon de convertir une collection en json dans laravel. Cet article vous donnera un exemple simple de conversion d'objet laravel en json. 

Vous pouvez également utiliser cet exemple avec les versions laravel 6, laravel 7, laravel 8 et laravel 9.

Parfois, nous obtenons des données de la base de données et vous devez convertir des données éloquentes en JSON, alors comment allez-vous procéder ? ne vous inquiétez pas, il existe de nombreuses façons de convertir la collection en JSON dans laravel. nous utiliserons les méthodes toJson() et json_encode() pour convertir le tableau d'objets en JSON dans laravel.

alors voyons ci-dessous un par un exemple:

Exemple 1 : get() avec toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

Production:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Exemple 2 : find() avec toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

Production:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

Exemple 3 : json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

Production:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Exemple 4 : Collection personnalisée utilisant toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

Production:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

Exemple 5 : réponse JSON

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

Production:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

J'espère que cela peut vous aider...

Source : https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

Comment Convertir Une Collection En JSON Dans Laravel ?
Rui  Silva

Rui Silva

1660089600

Como Converter Coleção Para JSON Em Laravel?

Este tutorial é focado em como converter coleção para json em laravel. Este post lhe dará um exemplo simples de laravel convert object to json. 

Você pode usar este exemplo com a versão laravel 6, laravel 7, laravel 8 e laravel 9 também.

Às vezes, estamos obtendo dados do banco de dados e você precisa converter dados eloquentes em JSON, então como você fará isso? não se preocupe, existem muitas maneiras de converter a coleção para JSON em laravel. usaremos o método toJson() e json_encode() para converter array de objetos para JSON em laravel.

então vamos ver abaixo um por um exemplo:

Exemplo 1: get() com toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

Resultado:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Exemplo 2: find() com toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

Resultado:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

Exemplo 3: json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

Resultado:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Exemplo 4: Coleção personalizada usando toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

Resultado:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

Exemplo 5: resposta JSON

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

Resultado:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

Espero que possa te ajudar...

Fonte: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

Como Converter Coleção Para JSON Em Laravel?

コレクションをLaravelでJSONに変換する方法は?

このチュートリアルは、laravel でコレクションを json に変換する方法に焦点を当てています。この投稿では、laravel オブジェクトを json に変換する簡単な例を示します。 

この例は、laravel 6、laravel 7、laravel 8、laravel 9 バージョンでも使用できます。

時々、データベースからデータを取得していて、雄弁なデータを JSON に変換する必要があります。laravel でコレクションを JSON に変換する方法はたくさんあります。toJson() および json_encode() メソッドを使用して、laravel でオブジェクト配列を JSON に変換します。

それでは、以下の例を 1 つずつ見てみましょう。

例 1: get() と toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

出力:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

例 2: find() と toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

出力:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

例 3: json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

出力:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

例 4: toJson() を使用したカスタム コレクション

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

出力:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

例 5: JSON レスポンス

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

出力:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

お役に立てば幸いです...

ソース: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

コレクションをLaravelでJSONに変換する方法は?
顾 静

顾 静

1660086000

如何在 Laravel 中将集合转换为 JSON?

本教程重点介绍如何在 laravel 中将集合转换为 json。这篇文章会给你一个简单的 laravel 将对象转换为 json 的例子。 

您也可以将此示例与 laravel 6、laravel 7、laravel 8 和 laravel 9 版本一起使用。

有时,我们从数据库中获取数据,您需要将 eloquent 数据转换为 JSON,那么您将如何做到这一点?不用担心,在 laravel 中有很多方法可以将集合转换为 JSON。我们将在 laravel 中使用 toJson() 和 json_encode() 方法将对象数组转换为 JSON。

那么让我们一一来看下面的例子:

示例 1:get() 与 toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get()
                        ->toJson();
  
        dd($posts);
    }
}

输出:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

示例 2:find() 和 toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $post = Post::find(40)->toJson();
  
        dd($post);
    }
}

Output:

{
   "id":40,
   "title":"Post title 1",
   "slug":null,
   "body":"Post body",
   "created_at":"2022-08-05",
   "updated_at":"2022-08-05T13:21:10.000000Z",
   "status":1
}

Example 3: json_encode()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->take(5)
                        ->get();
  
        $posts = json_encode($posts);
  
        dd($posts);
    }
}

Output:

[
   {
      "id":40,
      "title":"Post title 1",
      "body":"Post body"
   },
   {
      "id":39,
      "title":"Post title 2",
      "body":"Post body"
   },
   {
      "id":38,
      "title":"Post title 3",
      "body":"Post body"
   }
]

Example 4: Custom Collection with toJson()

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
        $posts = collect([
                ['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
                ['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
        ]);
  
        $posts = $posts->toJson();
  
        dd($posts);
    }
}

Output:

[
   {
      "id":1,
      "title":"Title One",
      "body":"Body One"
   },
   {
      "id":2,
      "title":"Title Two",
      "body":"Body Two"
   }
]

Example 5: JSON Response

PostController.php

<?php
  
namespace App\Http\Controllers;
  
use Illuminate\Http\Request;
use App\Models\Post;
  
class PostController extends Controller
{
    /**
     * Write code on Method
     *
     * @return response()
     */
    public function index(Request $request)
    {
  
        $posts = Post::select("id", "title", "body")
                        ->latest()
                        ->get();
  
        return response()->json(['posts' => $posts]);
    }
}

Output:

{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}

I hope it helps you...

Source: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

How to Convert Collection to JSON in Laravel?

How to Convert Collection to JSON in Laravel?

This tutorial is focused on how to convert a collection to JSON in Laravel. This post gives you a simple example of converting an object to JSON in Laravel.

You can use this example with Laravel 6, Laravel 7, Laravel 8 and Laravel 9 as well.

Sometimes you fetch data from the database and need to convert the Eloquent data into JSON. How do you do that? Don't worry, there are many ways to convert a collection to JSON in Laravel. We will use the toJson() and json_encode() methods to convert an array of objects to JSON.

So let's look at the examples below one by one:

See more at: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html

#json #laravel 

How to Convert Collection to JSON in Laravel?
Elian Harber

1659933720

Tormenta: Embedded Object-persistence Layer / Simple JSON Database

⚡ Tormenta

WIP: Master branch is under active development. API still in flux. Not ready for serious use yet.

Tormenta is a functionality layer over the BadgerDB key/value store. It provides simple, embedded-object persistence for Go projects with indexing, data querying capabilities and ORM-like features, including loading of relations. It uses date-based IDs, so it is particularly good for data sets that are naturally chronological, like financial transactions, social media posts etc. Greatly inspired by Storm.

Why would you use this?

Because you want to simplify your data persistence and you don't foresee the need for a multi-server setup in the future. Tormenta relies on an embedded key/value store. It's fast and simple, but embedded, so you won't be able to go multi-server and talk to a central DB. If you can live with that, and without the querying power of SQL, Tormenta gives you simplicity: there are no database servers to run, configure and maintain, no schemas, no SQL, no ORMs etc. You just open a connection to the DB, feed in your Go structs, and get normal Go functions with which to persist, retrieve and query your data. If you've been burned by complex database setups, errors in SQL strings or overly complex ORMs, you might appreciate Tormenta's simplicity.

Features

  • JSON for serialisation of data. Uses the std lib by default, but you can specify custom serialise/unserialise functions, making it a snap to use JSONIter or ffjson for speed
  • Date-stamped UUIDs mean no need to maintain an ID counter, and you get date range querying and a 'created at' field baked in
  • Simple basic API for saving and retrieving your objects
  • Automatic indexing on all fields (can be skipped)
  • Option to index by individual words in strings (split index)
  • More complex querying of indices including exact matches, text prefix, ranges, reverse, limit, offset and order by
  • Combine many index queries with AND/OR logic (but no complex nesting/bracketing of ANDs/ORs)
  • Fast counts and sums using Badger's 'key only' iteration
  • Business logic using 'triggers' on save and get, including the ability to pass a 'context' through a query
  • String / URL parameter -> query builder, for quick construction of queries from URL strings
  • Helpers for loading relations

Quick How To (in place of better docs to come)

  • Add import "github.com/jpincas/tormenta"
  • Add tormenta.Model to structs you want to persist
  • Add tormenta:"-" tag to fields you want to exclude from saving
  • Add tormenta:"noindex" tag to fields you want to exclude from secondary indexing
  • Add tormenta:"split" tag to string fields where you'd like to index each word separately instead of the whole sentence
  • Add tormenta:"nested" tag to struct fields where you'd like to index each member (using the index syntax "toplevelfield.nextlevelfield")
  • Open a DB connection with standard options with db, err := tormenta.Open("mydatadirectory") (don't forget to defer db.Close()). For an auto-deleting test DB, use tormenta.OpenTest
  • If you want faster serialisation, I suggest JSONIter
  • Save a single entity with db.Save(&MyEntity) or multiple (possibly different type) entities in a transaction with db.Save(&MyEntity1, &MyEntity2).
  • Get a single entity by ID with db.Get(&MyEntity, entityID).
  • Construct a query to find single or multiple entities with db.First(&MyEntity) or db.Find(&MyEntities) respectively.
  • Build up the query by chaining methods.
  • Add .From()/.To() to restrict results to a date range (both are optional).
  • Add index-based filters: Match("indexName", value), Range("indexname", start, end) and StartsWith("indexname", "prefix") for a text prefix search.
  • Chain multiple index filters together. Default combination is AND - switch to OR with Or().
  • Shape results with .Reverse(), .Limit()/.Offset() and .Order().
  • Execute the query with .Run(), .Count() or .Sum().
  • Add business logic by specifying .PreSave(), .PostSave() and .PostGet() methods on your structs.

See the example to get a better idea of how to use.

Gotchas

  • Be type-specific when specifying index searches; e.g. Match("int16field", int16(16)) if you are searching on an int16 field. This is due to slight encoding differences between variable/fixed length ints, signed/unsigned ints and floats. If you let the compiler infer the type and the type you are searching on isn't the default int (or int32) or float64, you'll get odd results. I understand this is a pain - perhaps we should switch to a fixed indexing scheme in all cases?
  • 'Defined' time.Time fields e.g. myTime time.Time won't serialise properly as the fields on the underlying struct are unexported and you lose the marshal/unmarshal methods specified by time.Time. If you must use defined time fields, specify custom marshalling functions.

Help Needed / Contributing

  • I don't have a lot of low level Go experience, so I reckon the reflect and/or concurrency code could be significantly improved
  • I could really do with some help setting up some proper benchmarks
  • Load testing or anything similar
  • A performant command-line backup utility that could read raw JSON from keys and write to files in a folder structure, without even going through Tormenta (i.e. just hitting the Badger KV store and writing each key to a json file)

To Do

  • More tests for indexes: more fields, post deletion, interrupted save transactions
  • Nuke/rebuild indices command
  • Documentation / Examples
  • Better protection against unsupported types being passed around as interfaces
  • Fully benchmarked simulation of a real-world use case

Maybe

  • JSON dump/ backup
  • JSON 'pass through' functionality for where you don't need to do any processing and therefore can skip unmarshalling.
  • Partial JSON return, combined with above, using https://github.com/buger/jsonparser

Download Details: 

Author: jpincas
Source Code: https://github.com/jpincas/tormenta 

#go #golang #json 

Tormenta: Embedded Object-persistence Layer / Simple JSON Database
Elian Harber

1659888840

Dcrdata: Decred Block Explorer, with Packages, App for Data Collection

dcrdata

Overview

dcrdata is an original Decred block explorer, with packages and apps for data collection, presentation, and storage. The backend and middleware are written in Go. On the front end, Webpack enables the use of modern javascript features, as well as SCSS for styling.

Release Status

Always run the Current release or on the Current stable branch. Do not use master in production.

| Series | Branch | Latest release tag | dcrd RPC server version required |
| --- | --- | --- | --- |
| Development 6.1 | master | N/A | ^7.0.0 (dcrd v1.7 release) |
| Current 6.0 | 6.0-stable | release-v6.0 | ^6.2.0 (dcrd v1.6 release) |

Repository Overview

../dcrdata                The main Go MODULE. See cmd/dcrdata for the explorer executable.
├── api/types             The exported structures used by the dcrdata and Insight APIs.
├── blockdata             Package blockdata is the primary data collection and
|                           storage hub, and chain monitor.
├── cmd
│   └── dcrdata           MODULE for the dcrdata explorer executable.
│       ├── api           dcrdata's own HTTP API
│       │   └── insight   The Insight API
│       ├── explorer      Powers the block explorer pages.
│       ├── middleware    HTTP router middleware used by the explorer
│       ├── notification  Manages dcrd notifications synchronous data collection.
│       ├── public        Public resources for block explorer (css, js, etc.)
│       └── views         HTML templates for block explorer
├── db
│   ├── cache             Package cache provides a caching layer that is used by dcrpg.
│   ├── dbtypes           Package dbtypes with common data types.
│   └── dcrpg             MODULE and package dcrpg providing PostgreSQL backend.
├── dev                   Shell scripts for maintenance and deployment.
├── docs                  Extra documentation.
├── exchanges             MODULE and package for gathering data from public exchange APIs
│   ├── rateserver        rateserver app, which runs an exchange bot for collecting
│   |                       exchange rate data, and a gRPC server for providing this
│   |                       data to multiple clients like dcrdata.
|   └── ratesproto        Package dcrrates implementing a gRPC protobuf service for
|                           communicating exchange rate data with a rateserver.
├── explorer/types        Types used primarily by the explorer pages.
├── gov                   MODULE for the on- and off-chain governance packages.
│   ├── agendas           Package agendas defines a consensus deployment/agenda DB.
│   └── politeia          Package politeia defines a Politeia proposal DB.
│       ├── piclient      Package piclient provides functions for retrieving data
|       |                   from the Politeia web API.
│       └── types         Package types provides several JSON-tagged structs for
|                           dealing with Politeia data exchange.
├── mempool               Package mempool for monitoring mempool for transactions,
|                           data collection, distribution, and storage.
├── netparams             Package netparams defines the TCP port numbers for the
|                           various networks (mainnet, testnet, simnet).
├── pubsub                Package pubsub implements a websocket-based pub-sub server
|   |                       for blockchain data.
│   ├── democlient        democlient app provides an example for using psclient to
|   |                       register for and receive messages from a pubsub server.
│   ├── psclient          Package psclient is a basic client for the pubsub server.
│   └── types             Package types defines types used by the pubsub client
|                           and server.
├── rpcutils              Package rpcutils contains helper types and functions for
|                           interacting with a chain server via RPC.
├── semver                Defines the semantic version types.
├── stakedb               Package stakedb, for tracking tickets
├── testutil
│   ├── apiload           An HTTP API load testing application
|   └── dbload            A DB load testing application
└── txhelpers             Package txhelpers provides many functions and types for
                            processing blocks, transactions, voting, etc.

Requirements

  • Go 1.17 or 1.18
  • Node.js 16.x or 17.x. Node.js is only used as a build tool, and is not used at runtime.
  • dcrd running with --txindex --addrindex, and synchronized to the current best block on the network. On startup, dcrdata will verify that the dcrd version is compatible.
  • PostgreSQL 11+

Docker Support

Dockerfiles are provided for convenience, but NOT SUPPORTED. See the Docker documentation for more information. The supported dcrdata build instructions are described below.

Building

The dcrdata build process comprises two general steps:

  1. Bundle the static web page assets with Webpack (via the npm tool).
  2. Build the dcrdata executable from the Go source files.

These steps are described in detail in the following sections.

NOTE: The following instructions assume a Unix-like shell (e.g. bash).

Preparation

Install Go

Verify Go installation:

go env GOROOT GOPATH

Ensure $GOPATH/bin is on your $PATH.

Clone the dcrdata repository. It is conventional to put it under GOPATH, but this is no longer necessary (or recommended) with Go modules. For example:

git clone https://github.com/decred/dcrdata $HOME/go-work/github/decred/dcrdata

Install Node.js, which is required to lint and package the static web assets.

Note that none of the above is required at runtime.

Package the Static Web Assets

Webpack, a JavaScript module bundler, is used to compile and package the static assets in the cmd/dcrdata/public folder. Node.js' npm tool is used to install the required Node.js dependencies and build the bundled JavaScript distribution for deployment.

First, install the build dependencies:

cd cmd/dcrdata
npm clean-install # creates node_modules folder fresh

Then, for production, build the webpack bundle:

npm run build # creates public/dist folder

Alternatively, for development, npm can be made to watch for and integrate JavaScript source changes:

npm run watch

See Front End Development for more information.

Building dcrdata with Go

Change to the cmd/dcrdata folder and build:

cd cmd/dcrdata
go build -v

The go tool will process the source code and automatically download dependencies. If the dependencies are configured correctly, there will be no modifications to the go.mod and go.sum files.

Note that performing the above commands with older versions of Go within $GOPATH may require setting GO111MODULE=on.

As a reward for reading this far, you may use the build.sh script to mostly automate the build steps.

Setting build version flags

By default, the version string will be postfixed with "-pre+dev". For example, dcrdata version 5.1.0-pre+dev (Go version go1.12.7). However, it may be desirable to set the "pre" and "dev" values to different strings, such as "beta" or the actual commit hash. To set these values, build with the -ldflags switch as follows:

go build -v -ldflags \
  "-X main.appPreRelease=beta -X main.appBuild=`git rev-parse --short HEAD`"

This produces a string like dcrdata version 6.0.0-beta+750fd6c2 (Go version go1.16.2).
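
The -ldflags -X mechanism simply overwrites package-level string variables at link time. A minimal, self-contained sketch of the pattern follows; the variable names mirror the flags above, but the real dcrdata main package may assemble its version string differently:

```go
package main

import "fmt"

// Defaults produce the "-pre+dev" style version string; building with
// -ldflags "-X main.appPreRelease=beta -X main.appBuild=<hash>"
// replaces them at link time.
var (
	appVersion    = "6.0.0"
	appPreRelease = "pre"
	appBuild      = "dev"
)

func version() string {
	v := appVersion
	if appPreRelease != "" {
		v += "-" + appPreRelease
	}
	if appBuild != "" {
		v += "+" + appBuild
	}
	return v
}

func main() {
	fmt.Println("dcrdata version " + version()) // dcrdata version 6.0.0-pre+dev
}
```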

Runtime Resources

The config file, logs, and data files are stored in the application data folder, which may be specified via the -A/--appdata and -b/--datadir settings. However, the location of the config file may also be set with -C/--configfile. The default paths for your system are shown in the --help description. If encountering errors involving file system paths, check the permissions on these folders to ensure that the user running dcrdata is able to access these paths.

The "public" and "views" folders must be in the same folder as the dcrdata executable. Set read-only permissions as appropriate.

Updating

Update the repository (assuming you have master checked out in GOPATH):

cd $HOME/go-work/github/decred/dcrdata
git pull origin master

Look carefully for errors with git pull, and reset locally modified files if necessary.

Next, build dcrdata and bundle the web assets:

cd cmd/dcrdata
go build -v
npm clean-install
npm run build # or npm run watch

Note that performing the above commands with versions of Go prior to 1.16 within $GOPATH may require setting GO111MODULE=on.

Upgrading Instructions

From v3.x or later

No special actions are required. Simply start the new dcrdata and automatic database schema upgrades and table data patches will begin.

From v2.x or earlier

The database scheme change from dcrdata v2.x to v3.x does not permit an automatic migration. The tables must be rebuilt from scratch:

Drop the old dcrdata database, and create a new empty dcrdata database.

-- Drop the old database.
DROP DATABASE dcrdata;

-- Create a new database with the same "pguser" set in the dcrdata.conf.
CREATE DATABASE dcrdata OWNER dcrdata;

Delete the dcrdata data folder (i.e. corresponding to the datadir setting). By default, datadir is in {appdata}/data:

  • Linux: ~/.dcrdata/data
  • Mac: ~/Library/Application Support/Dcrdata/data
  • Windows: C:\Users\<your-username>\AppData\Local\Dcrdata\data (%localappdata%\Dcrdata\data)

With dcrd synchronized to the network's best block, start dcrdata to begin the initial block data sync.

Getting Started

Configuring PostgreSQL (IMPORTANT! Seriously, read this.)

It is crucial that you configure your PostgreSQL server for your hardware and the dcrdata workload.

Read postgresql-tuning.conf carefully for details on how to make the necessary changes to your system. A helpful online tool for determining good settings for your system is called PGTune. Note that when using this tool, subtract 1.5-2GB from your system RAM so dcrdata itself will have plenty of memory. DO NOT simply use this file in place of your existing postgresql.conf. DO NOT simply copy and paste these settings into the existing postgresql.conf. It is necessary to edit the existing postgresql.conf, reviewing all the settings to ensure the same configuration parameters are not set in two different places in the file (postgres will not complain).

If you tune PostgreSQL to fully utilize remaining RAM, you are limiting the RAM available to the dcrdata process, which will increase as request volume increases and its cache becomes fully utilized. Allocate sufficient memory to dcrdata for your application, and use a reverse proxy such as nginx with cache locking features to prevent simultaneous requests to the same resource.

On Linux, you may wish to use a unix domain socket instead of a TCP connection. The path to the socket depends on the system, but it is commonly /var/run/postgresql. Just set this path in pghost.

Creating the dcrdata Configuration File

Begin with the sample configuration file. With the default appdata directory for the current user on Linux:

cp sample-dcrdata.conf ~/.dcrdata/dcrdata.conf

Then edit dcrdata.conf with your dcrd RPC settings. See the output of dcrdata --help for a list of all options and their default values.

Indexing the Blockchain

If dcrdata has not previously been run with the PostgreSQL database backend, it is necessary to perform a bulk import of blockchain data and generate table indexes. This will be done automatically by dcrdata on a fresh startup. Do NOT interrupt the initial sync or use the browser interface until it is completed.

Note that dcrdata requires that dcrd is running with some optional indexes enabled. By default, these indexes are not turned on when dcrd is installed. To enable them, set the following in dcrd.conf:

txindex=1
addrindex=1

If these parameters are not set, dcrdata will be unable to retrieve transaction details and perform address searches, and will exit with an error mentioning these indexes.

Starting dcrdata

Launch the dcrdata daemon and allow the databases to process new blocks. Concurrent synchronization of both stake and PostgreSQL databases is performed, typically requiring between 1.5 and 8 hours. See System Hardware Requirements for more information. Please reread Configuring PostgreSQL (IMPORTANT! Seriously, read this.) if you have performance issues.

On subsequent launches, only blocks new to dcrdata are processed.

./dcrdata    # don't forget to configure dcrdata.conf in the appdata folder!

Do NOT interrupt the initial sync or use the browser interface until it is completed. Follow the messages carefully, and if you are uncertain of the current sync status, check system resource utilization. Interrupting the initial sync can leave dcrdata and its databases in an unrecoverable or suboptimal state. The main steps of the initial sync process are:

  1. Initial block data import
  2. Indexing
  3. Spending transaction relationship updates
  4. Final DB analysis and indexing
  5. Catch-up to network in normal sync mode
  6. Populate charts historical data
  7. Update Pi repo and parse proposal records (git will be running)
  8. Final catch-up and UTXO cache pre-warming
  9. Update project fund data and then idle

Unlike dcrdata.conf, which must be placed in the appdata folder or explicitly set with -C, the "public" and "views" folders must be in the same folder as the dcrdata executable.

System Hardware Requirements

The time required to sync varies greatly with system hardware and software configuration. The most important factor is the storage medium on the database machine. An SSD (preferably NVMe, not SATA) is REQUIRED. The PostgreSQL operations are extremely disk intensive, especially during the initial synchronization process. Both high throughput and low latencies for fast random accesses are essential.

dcrdata only (PostgreSQL on other host)

Without PostgreSQL, the dcrdata process can get by with:

  • 1 CPU core
  • 2 GB RAM
  • HDD with 8GB free space

dcrdata and PostgreSQL on same host

These specifications assume dcrdata and postgres are running on the same machine.

Minimum:

  • 2 CPU cores
  • 6 GB RAM
  • SSD with 120GB free space (no spinning hard drive for the DB!)

Recommended:

  • 3+ CPU cores
  • 12+ GB RAM
  • NVMe SSD with 120 GB free space

dcrdata Daemon

The cmd/dcrdata folder contains the main package for the dcrdata app, which has several components including:

  1. Block explorer (web interface).
  2. Blockchain monitoring and data collection.
  3. Mempool monitoring and reporting.
  4. Database backend interfaces.
  5. RESTful JSON API (custom and Insight) over HTTP(S).
  6. Websocket-based pub-sub server.
  7. Exchange rate bot and gRPC server.

Block Explorer

After dcrdata syncs with the blockchain server via RPC, by default it will begin listening for HTTP connections on http://127.0.0.1:7777/. This means it starts a web server listening on IPv4 localhost, port 7777. Both the interface and port are configurable. The block explorer and the JSON APIs are both provided by the server on this port.

Note that while dcrdata can be started with HTTPS support, it is recommended to employ a reverse proxy such as Nginx ("engine x"). See sample-nginx.conf for an example Nginx configuration.

APIs

The dcrdata block explorer is exposed by two APIs: a Decred implementation of the Insight API, and its own JSON HTTP API. The Insight API uses the path prefix /insight/api. The dcrdata API uses the path prefix /api. File downloads are served from the /download path.

Insight API

The Insight API is accessible over HTTP via REST or WebSocket.

See the Insight API documentation for further details.

dcrdata API

The dcrdata API is a REST API accessible via HTTP. To call the dcrdata API, use the /api path prefix.

Endpoint List

| Best block | Path | Type |
| --- | --- | --- |
| Summary | /block/best?txtotals=[true\|false] | types.BlockDataBasic |
| Stake info | /block/best/pos | types.StakeInfoExtended |
| Header | /block/best/header | dcrjson.GetBlockHeaderVerboseResult |
| Raw Header (hex) | /block/best/header/raw | string |
| Hash | /block/best/hash | string |
| Height | /block/best/height | int |
| Raw Block (hex) | /block/best/raw | string |
| Size | /block/best/size | int32 |
| Subsidy | /block/best/subsidy | types.BlockSubsidies |
| Transactions | /block/best/tx | types.BlockTransactions |
| Transactions Count | /block/best/tx/count | types.BlockTransactionCounts |
| Verbose block result | /block/best/verbose | dcrjson.GetBlockVerboseResult |

| Block X (block index) | Path | Type |
| --- | --- | --- |
| Summary | /block/X | types.BlockDataBasic |
| Stake info | /block/X/pos | types.StakeInfoExtended |
| Header | /block/X/header | dcrjson.GetBlockHeaderVerboseResult |
| Raw Header (hex) | /block/X/header/raw | string |
| Hash | /block/X/hash | string |
| Raw Block (hex) | /block/X/raw | string |
| Size | /block/X/size | int32 |
| Subsidy | /block/best/subsidy | types.BlockSubsidies |
| Transactions | /block/X/tx | types.BlockTransactions |
| Transactions Count | /block/X/tx/count | types.BlockTransactionCounts |
| Verbose block result | /block/X/verbose | dcrjson.GetBlockVerboseResult |

| Block H (block hash) | Path | Type |
| --- | --- | --- |
| Summary | /block/hash/H | types.BlockDataBasic |
| Stake info | /block/hash/H/pos | types.StakeInfoExtended |
| Header | /block/hash/H/header | dcrjson.GetBlockHeaderVerboseResult |
| Raw Header (hex) | /block/hash/H/header/raw | string |
| Height | /block/hash/H/height | int |
| Raw Block (hex) | /block/hash/H/raw | string |
| Size | /block/hash/H/size | int32 |
| Subsidy | /block/best/subsidy | types.BlockSubsidies |
| Transactions | /block/hash/H/tx | types.BlockTransactions |
| Transactions count | /block/hash/H/tx/count | types.BlockTransactionCounts |
| Verbose block result | /block/hash/H/verbose | dcrjson.GetBlockVerboseResult |

| Block range (X < Y) | Path | Type |
| --- | --- | --- |
| Summary array for blocks on [X,Y] | /block/range/X/Y | []types.BlockDataBasic |
| Summary array with block index step S | /block/range/X/Y/S | []types.BlockDataBasic |
| Size (bytes) array | /block/range/X/Y/size | []int32 |
| Size array with step S | /block/range/X/Y/S/size | []int32 |

| Transaction T (transaction id) | Path | Type |
| --- | --- | --- |
| Transaction details | /tx/T?spends=[true\|false] | types.Tx |
| Transaction details w/o block info | /tx/trimmed/T | types.TrimmedTx |
| Inputs | /tx/T/in | []types.TxIn |
| Details for input at index X | /tx/T/in/X | types.TxIn |
| Outputs | /tx/T/out | []types.TxOut |
| Details for output at index X | /tx/T/out/X | types.TxOut |
| Vote info (ssgen transactions only) | /tx/T/vinfo | types.VoteInfo |
| Ticket info (sstx transactions only) | /tx/T/tinfo | types.TicketInfo |
| Serialized bytes of the transaction | /tx/hex/T | string |
| Same as /tx/trimmed/T | /tx/decoded/T | types.TrimmedTx |

| Transactions (batch) | Path | Type |
| --- | --- | --- |
| Transaction details (POST body is JSON of types.Txns) | /txs?spends=[true\|false] | []types.Tx |
| Transaction details w/o block info | /txs/trimmed | []types.TrimmedTx |

| Address A | Path | Type |
| --- | --- | --- |
| Summary of last 10 transactions | /address/A | types.Address |
| Number and value of spent and unspent outputs | /address/A/totals | types.AddressTotals |
| Verbose transaction result for last 10 transactions | /address/A/raw | types.AddressTxRaw |
| Summary of last N transactions | /address/A/count/N | types.Address |
| Verbose transaction result for last N transactions | /address/A/count/N/raw | types.AddressTxRaw |
| Summary of last N transactions, skipping M | /address/A/count/N/skip/M | types.Address |
| Verbose transaction result for last N transactions, skipping M | /address/A/count/N/skip/M/raw | types.AddressTxRaw |
| Transaction inputs and outputs as a CSV formatted file | /download/address/io/A | CSV file |

| Stake Difficulty (Ticket Price) | Path | Type |
| --- | --- | --- |
| Current sdiff and estimates | /stake/diff | types.StakeDiff |
| Sdiff for block X | /stake/diff/b/X | []float64 |
| Sdiff for block range [X,Y] (X <= Y) | /stake/diff/r/X/Y | []float64 |
| Current sdiff separately | /stake/diff/current | dcrjson.GetStakeDifficultyResult |
| Estimates separately | /stake/diff/estimates | dcrjson.EstimateStakeDiffResult |

| Ticket Pool | Path | Type |
| --- | --- | --- |
| Current pool info (size, total value, and average price) | /stake/pool | types.TicketPoolInfo |
| Current ticket pool, in a JSON object with a "tickets" key holding an array of ticket hashes | /stake/pool/full | []string |
| Pool info for block X | /stake/pool/b/X | types.TicketPoolInfo |
| Full ticket pool at block height or hash H | /stake/pool/b/H/full | []string |
| Pool info for block range [X,Y] (X <= Y) | /stake/pool/r/X/Y?arrays=[true\|false]* | []apitypes.TicketPoolInfo |

The full ticket pool endpoints accept the URL query ?sort=[true|false] for requesting the tickets array in lexicographical order. If a sorted list or list with deterministic order is not required, using sort=false will reduce server load and latency. However, be aware that the ticket order will be random, and will change each time the tickets are requested.

*For the pool info block range endpoint that accepts the arrays url query, a value of true will put all pool values and pool sizes into separate arrays, rather than having a single array of pool info JSON objects. This may make parsing more efficient for the client.

| Votes and Agendas Info | Path | Type |
| --- | --- | --- |
| The current agenda and its status | /stake/vote/info | dcrjson.GetVoteInfoResult |
| All agendas high level details | /agendas | []types.AgendasInfo |
| Details for agenda {agendaid} | /agendas/{agendaid} | types.AgendaAPIResponse |

| Mempool | Path | Type |
| --- | --- | --- |
| Ticket fee rate summary | /mempool/sstx | apitypes.MempoolTicketFeeInfo |
| Ticket fee rate list (all) | /mempool/sstx/fees | apitypes.MempoolTicketFees |
| Ticket fee rate list (N highest) | /mempool/sstx/fees/N | apitypes.MempoolTicketFees |
| Detailed ticket list (fee, hash, size, age, etc.) | /mempool/sstx/details | apitypes.MempoolTicketDetails |
| Detailed ticket list (N highest fee rates) | /mempool/sstx/details/N | apitypes.MempoolTicketDetails |

| Exchanges | Path | Type |
| --- | --- | --- |
| Exchange data summary | /exchanges | exchanges.ExchangeBotState |
| List of available currency codes | /exchanges/codes | []string |

Exchange monitoring is off by default. Server must be started with --exchange-monitor to enable exchange data. The server will set a default currency code. To use a different code, pass URL parameter ?code=[code]. For example, /exchanges?code=EUR.

| Other | Path | Type |
| --- | --- | --- |
| Status | /status | types.Status |
| Health (HTTP 200 or 503) | /status/happy | types.Happy |
| Coin Supply | /supply | types.CoinSupply |
| Coin Supply Circulating (Mined) | /supply/circulating?dcr=[true\|false] | int (default) or float (dcr=true) |
| Endpoint list (always indented) | /list | []string |

All JSON endpoints accept the URL query indent=[true|false]. For example, /stake/diff?indent=true. By default, indentation is off. The characters to use for indentation may be specified with the indentjson string configuration option.

Important Note About Mempool

Although there is mempool data collection and serving, it is very important to keep in mind that the mempool in your node (dcrd) is not likely to be exactly the same as other nodes' mempool. Also, your mempool is cleared out when you shutdown dcrd. So, if you have recently (e.g. after the start of the current ticket price window) started dcrd, your mempool will be missing transactions that other nodes have.

Front End Development

Make sure you have a recent version of node and npm installed.

From the cmd/dcrdata directory, run the following command to install the node modules.

npm clean-install

This will create and install into a directory named node_modules.

You'll also want to run npm clean-install after merging changes from upstream. It is run for you when you use the build script (./dev/build.sh).

For development, there's a webpack script that watches for file changes and automatically bundles. To use it, run the following command in a separate terminal and leave it running while you work. You'll only use this command if you are editing javascript files.

npm run watch

For production, bundle assets via:

npm run build

You will need to run build at least once whenever changes have been made. watch essentially runs build after each file change, but also performs some additional checks.

CSS Guidelines

Webpack compiles SCSS to CSS while bundling. The watch script described above also watches for changes in these files and performs linting to ensure syntax compliance.

Before you write any CSS, see if you can achieve your goal using the classes already available in Bootstrap 4. This helps prevent our stylesheets from getting bloated and makes it easier for things to work well across a wide range of browsers and devices. Please take the time to read the Bootstrap docs.

Note there is a dark mode, so make sure things look good with the dark background as well.

HTML

The core functionality of dcrdata is server-side rendered in Go and designed to work well with javascript disabled. For users with javascript enabled, Turbolinks creates a persistent single page application that handles all HTML rendering.

.tmpl files are cached by the backend and can be reloaded by running killall -USR1 dcrdata from the command line.

Javascript

To encourage code that is idiomatic to the Turbolinks-based execution environment, javascript-based enhancements should use Stimulus controllers with corresponding actions and targets. Keeping things tightly scoped with controllers and modules helps to localize complexity and maintain a clean application lifecycle. When using event handlers, bind and unbind them in the connect and disconnect functions of controllers, which execute when the controller is added to and removed from the DOM.

Web Performance

The core functionality of dcrdata should perform well in low-power device / high-latency scenarios (e.g. a cheap smartphone with poor reception). This means that heavy assets should be lazy loaded only when they are actually needed. Simple tasks like checking a transaction or address should have a very fast initial page load.

Helper Packages

package dbtypes defines the data types used by the DB backends to model the block, transaction, and related blockchain data structures. Functions for converting from standard Decred data types (e.g. wire.MsgBlock) are also provided.

package rpcutils includes helper functions for interacting with a rpcclient.Client.

package stakedb defines the StakeDatabase and ChainMonitor types for efficiently tracking live tickets, with the primary purpose of computing ticket pool value quickly. It uses the database.DB type from github.com/decred/dcrd/database with an ffldb storage backend from github.com/decred/dcrd/database/ffldb. It also makes use of the stake.Node type from github.com/decred/dcrd/blockchain/stake. The ChainMonitor type handles connecting new blocks and chain reorganization in response to notifications from dcrd.

package txhelpers includes helper functions for working with the common types dcrutil.Tx, dcrutil.Block, chainhash.Hash, and others.

Internal-use Packages

Some packages are currently designed only for internal use by other dcrdata packages, but may be of general value in the future.

blockdata defines:

  • The chainMonitor type and its BlockConnectedHandler() method that handles block-connected notifications and triggers data collection and storage.
  • The BlockData type and methods for converting to API types.
  • The blockDataCollector type and its Collect() and CollectHash() methods that are called by the chain monitor when a new block is detected.
  • The BlockDataSaver interface required by chainMonitor for storage of collected data.

dcrpg defines:

  • The ChainDB type, which is the primary exported type from dcrpg, providing an interface for a PostgreSQL database.
  • A large set of lower-level functions to perform a range of queries given a *sql.DB instance and various parameters.
  • The internal package contains the raw SQL statements.

package mempool defines a MempoolMonitor type that can monitor a node's mempool using the OnTxAccepted notification handler to send newly received transaction hashes via a designated channel. Ticket purchases (SSTx) are triggers for mempool data collection, which is handled by the DataCollector type, and data storage, which is handled by any number of objects implementing the MempoolDataSaver interface.

Plans

See the GitHub issue trackers and the project milestones.

Contributing

Yes, please! See CONTRIBUTING.md for details, but here's the gist of it:

  1. Fork the repo.
  2. Create a branch for your work (git checkout -b cool-stuff).
  3. Code something great.
  4. Commit and push to your repo.
  5. Create a pull request.

DO NOT merge from master to your feature branch; rebase.

Also, come chat with us on Matrix in the dcrdata channel!

Author: Decred
Source Code: https://github.com/decred/dcrdata 
License: ISC license

#go #golang #json #restapi #blockchain 

Thierry Perret

How to Easily Create a JSON File in Python

JSON has become the de facto standard for exchanging data between client and server. Python has a built-in package called json for encoding and decoding JSON data. To read and write json data, we have to use the json package. For file handling, Python provides many functions that will do the job.

How to create a JSON file in Python

To create a json file in Python, use the open() function. The open() function takes the file name and mode as arguments. If the file does not exist, it will be created.

Python's with statement is used to open files. The with statement is recommended for working with files because it guarantees that open file descriptors are automatically closed once execution of the program leaves the context of the with statement.

# app.py

import json

with open('new_file.json', 'w') as f:
    print("The json file is created")

In this code, we try to open a file called new_file.json in write ('w') mode. The file does not exist in the file system, so a new file is created in the same folder.

Creating a json file from an existing json file in Python

To create a json file from an existing json file, open the existing file in read mode and read its content, then use the open() function with the with statement in write mode and dump the json data into a new json file.

Say we have the existing file data.json.

{
  "data": [
    {
      "color": "red",
      "value": "#f00"
    },
    {
      "color": "green",
      "value": "#0f0"
    },
    {
      "color": "blue",
      "value": "#00f"
    },
    {
      "color": "black",
      "value": "#000"
    }
  ]
}

Now, we will create a new json file from this data.json file.

# app.py

import json

with open('data.json') as f:
    data = json.load(f)

with open('new_file.json', 'w') as f:
    json.dump(data, f, indent=2)
    print("New json file is created from data.json file")

Output

python3 app.py
New json file is created from data.json file

So, basically, we read the existing json file, create a new json file, and dump the content into that new file.

Conclusion

Using Python's context manager, you can create a json file and open it in write mode. JSON files conveniently end with a .json extension.

To work with json files in Python:

  1. Import the json package.
  2. To read the data, use the load() or loads() function.
  3. Then you process the data.
  4. To modify the data, use the dump() or dumps() function.

It will not always be exactly this, but you will probably follow these steps.
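The steps above can be sketched end to end in a few lines (the file name colors.json is illustrative; the sample file is created first so the snippet is self-contained):

```python
import json  # step 1: import the json package

# Create a small sample file to work with (file name is illustrative).
with open('colors.json', 'w') as f:
    json.dump({"data": [{"color": "red", "value": "#f00"}]}, f)

# Step 2: read the data -- json.load() for file objects, json.loads() for strings.
with open('colors.json') as f:
    doc = json.load(f)

# Step 3: process the data.
doc["data"].append({"color": "green", "value": "#0f0"})

# Step 4: write it back -- json.dump() for files, json.dumps() for strings.
with open('colors.json', 'w') as f:
    json.dump(doc, f, indent=2)

print(len(doc["data"]))  # → 2
```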

Link: https://appdividend.com/2022/03/10/how-to-create-json-file-in-python/

#json #python

Hans Marvin

How to Create A JSON File in Python Easily

JSON has become the de facto standard to exchange data between client and server. Python has an inbuilt package called json for encoding and decoding JSON data. To read and write the json data, we have to use the json package. For file handling, Python provides many functions that will do the job.
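Encoding and decoding with the json package is symmetric: json.dumps() turns a Python object into a JSON string, and json.loads() parses it back. A quick sketch with sample data:

```python
import json

person = {"name": "Jane", "age": 30, "languages": ["python", "go"]}

text = json.dumps(person)      # encode: Python dict -> JSON string
restored = json.loads(text)    # decode: JSON string -> Python dict

print(restored == person)  # → True
```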

In this brief guide, we will share how to create a JSON file in Python easily.

See more at: https://appdividend.com/2022/03/10/how-to-create-json-file-in-python/

#json #python
