Command-line tool for generating Dart Floor (a neat SQLite abstraction) entities/models from JSON files.
Inspired by and based on json_to_model v2.3.1.
Feature | Status |
---|---|
Null safety | ✅ |
toJson/fromJson | ✅ |
@entity classes | ✅ |
copyWith generation | ✅ |
clone and deepclone | ✅ |
Nested JSON classes | ✅ |
Alter tables and fields | ❌ |
INTEGER (int) support | ✅ |
REAL (num) support | ✅ |
TEXT (String) support | ✅ |
BLOB (Uint8List) support | ✅ |
In pubspec.yaml:
dev_dependencies:
  json_to_floor_entity: <latest version>
Install using the pub get command, or if you are using Dart with VS Code or Android Studio, you can use the IDE's install option.
This command-line tool converts .json files into immutable .dart models. It runs through your JSON files, finds the possible types, variable names, import URIs, decorators and class names, and writes them into the templates.
Create or copy .json files into ./jsons/ (the default) at the root of your project, and run flutter pub run json_to_floor_entity.
Input: consider these two files, named product.json and employee.json.
product.json
{
"id": "123",
"caseId?": "123",
"startDate?": "2020-08-08",
"endDate?": "2020-10-10",
"placementDescription?": "Description string"
}
employee.json
{
"id": "123",
"displayName?": "Jan Jansen",
"@ignore products?": "$[]product"
}
Output: this will generate product.dart and employee.dart.
product.dart
import 'package:floor/floor.dart';
@entity
class Product {
const Product({
required this.id,
this.caseId,
this.startDate,
this.endDate,
this.placementDescription,
});
@primaryKey
final String id;
final String? caseId;
final String? startDate;
final String? endDate;
final String? placementDescription;
factory Product.fromJson(Map<String,dynamic> json) => Product(
id: json['id'] as String,
caseId: json['caseId'] != null ? json['caseId'] as String : null,
startDate: json['startDate'] != null ? json['startDate'] as String : null,
endDate: json['endDate'] != null ? json['endDate'] as String : null,
placementDescription: json['placementDescription'] != null ? json['placementDescription'] as String : null
);
Map<String, dynamic> toJson() => {
'id': id,
'caseId': caseId,
'startDate': startDate,
'endDate': endDate,
'placementDescription': placementDescription
};
Product clone() => Product(
id: id,
caseId: caseId,
startDate: startDate,
endDate: endDate,
placementDescription: placementDescription
);
Product copyWith({
String? id,
String? caseId,
String? startDate,
String? endDate,
String? placementDescription
}) => Product(
id: id ?? this.id,
caseId: caseId ?? this.caseId,
startDate: startDate ?? this.startDate,
endDate: endDate ?? this.endDate,
placementDescription: placementDescription ?? this.placementDescription,
);
@override
bool operator ==(Object other) => identical(this, other)
|| other is Product && id == other.id && caseId == other.caseId && startDate == other.startDate && endDate == other.endDate && placementDescription == other.placementDescription;
@override
int get hashCode => id.hashCode ^ caseId.hashCode ^ startDate.hashCode ^ endDate.hashCode ^ placementDescription.hashCode;
}
employee.dart
import 'package:floor/floor.dart';
import 'product.dart';
@entity
class Employee {
const Employee({
required this.id,
this.displayName,
this.products,
});
@primaryKey
final String id;
final String? displayName;
final List<Product>? products;
factory Employee.fromJson(Map<String,dynamic> json) => Employee(
id: json['id'] as String,
displayName: json['displayName'] != null ? json['displayName'] as String : null
);
Map<String, dynamic> toJson() => {
'id': id,
'displayName': displayName
};
Employee clone() => Employee(
id: id,
displayName: displayName,
products: products?.map((e) => e.clone()).toList()
);
Employee copyWith({
String? id,
String? displayName,
List<Product>? products
}) => Employee(
id: id ?? this.id,
displayName: displayName ?? this.displayName,
products: products ?? this.products,
);
@override
bool operator ==(Object other) => identical(this, other)
|| other is Employee && id == other.id
&& displayName == other.displayName
&& products == other.products;
@override
int get hashCode => id.hashCode ^
displayName.hashCode ^
products.hashCode;
}
The DAO component is responsible for managing access to the underlying SQLite database. The tool auto-creates a DAO like this:
import 'package:floor/floor.dart';
@dao
abstract class NewsDao {
@Query('SELECT * FROM News')
Future<List<News>> findAll();
@Query('SELECT * FROM News WHERE id = :id')
Future<News?> findById(int id);
@insert
Future<void> add(News entity);
@insert
Future<void> addList(List<News> entities);
@update
Future<void> edit(News entity);
@update
Future<void> editList(List<News> entities);
@delete
Future<void> remove(News entity);
@delete
Future<void> removeList(List<News> entities);
}
These files will not be deleted or updated after they are created.
### Create the Database
The database has to be an abstract class which extends FloorDatabase. The tool auto-creates it like this:
// database.dart
// required package imports
import 'dart:async';
import 'package:floor/floor.dart';
import 'package:sqflite/sqflite.dart' as sqflite;
import 'dao/person_dao.dart';
import 'entity/person.dart';
part 'database.g.dart'; // the generated code will be there
@Database(version: 1, entities: [Person])
abstract class AppDatabase extends FloorDatabase {
PersonDao get personDao;
}
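Once build_runner has run, Floor generates a builder named after the database class ($FloorAppDatabase here). A minimal usage sketch, assuming the generated part file exists and that PersonDao exposes a findAll() query like the generated NewsDao above (names are illustrative):

```dart
// Build the database via the generated $FloorAppDatabase builder,
// then use the DAO to query it.
Future<void> example() async {
  final database = await $FloorAppDatabase
      .databaseBuilder('app_database.db')
      .build();

  final personDao = database.personDao;
  final people = await personDao.findAll(); // hypothetical query method
  print(people);
}
```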
By default the tool reads JSON files from the jsons directory at the root of your project. Run it with:
pub run json_to_floor_entity
You can also specify source and output directories:
pub run json_to_floor_entity -s assets/api_jsons -o lib/models
flutter pub run json_to_floor_entity -s assets/api_jsons -o lib/models
Then generate the Floor database code:
flutter packages pub run build_runner build
You can also use it for plain Dart models. This package reads a .json file and generates a .dart file, assigning each value's type as the variable type and each key as the variable name.
Description | Expression | Input (example) | Output (declaration) | Output (import) |
---|---|---|---|---|
Declare type depending on the JSON value | {... : any type} | {"id": 1, "message": "hello world"} | int id; String message; | |
Import model and assign type | {... : "$value"} | {"auth": "$user"} | User auth; | import 'user.dart' |
Import from path | {... : "$../pathto/value"} | {"price": "$../product/price"} | Price price; | import '../product/price.dart' |
Assign list of type and import (can also be recursive) | {... : "$[]value"} | {"addreses": "$[]address"} | List<Address> addreses; | import 'address.dart' |
Import other library (input value can be an array) | {"@import" : ...} | {"@import": "package:otherlibrary/otherlibrary.dart"} | | import 'package:otherlibrary/otherlibrary.dart' |
DateTime type | {... : "@datetime"} | {"createdAt": "@datetime:2020-02-15T15:47:51.742Z"} | DateTime createdAt; | |
Enum type | {... : "@enum:(enum values separated by ',')"} | {"@import": "@enum:admin,app_user,normal"} | enum UserTypeEnum { Admin, AppUser, Normal } | |
Enum type with values | {... : "@enum:(enum values separated by ',')"} | {"@import": "@enum:admin(0),app_user(1),normal(2)"} | | |
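For example, a hypothetical user.json combining several of these expressions (the field names here are illustrative, not from the package's docs):

```json
{
  "@import": "package:otherlibrary/otherlibrary.dart",
  "id": 1,
  "createdAt": "@datetime:2020-02-15T15:47:51.742Z",
  "addresses?": "$[]address"
}
```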
Run this command:
With Dart:
$ dart pub add json_to_floor_entity
With Flutter:
$ flutter pub add json_to_floor_entity
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
  json_to_floor_entity: ^1.1.2
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:json_to_floor_entity/json_to_floor_entity.dart';
Download Details:
Author: zxskigg
Source Code: https://github.com/zxskigg/flutter_json_to_floor_entity
A fluent API to generate JSON schemas (draft-07) for Node.js and browser. Framework agnostic.
npm i fluent-json-schema
or
yarn add fluent-json-schema
const S = require('fluent-json-schema')
const ROLES = {
ADMIN: 'ADMIN',
USER: 'USER',
}
const schema = S.object()
.id('http://foo/user')
.title('My First Fluent JSON Schema')
.description('A simple user')
.prop('email', S.string().format(S.FORMATS.EMAIL).required())
.prop('password', S.string().minLength(8).required())
.prop('role', S.string().enum(Object.values(ROLES)).default(ROLES.USER))
.prop(
'birthday',
S.raw({ type: 'string', format: 'date', formatMaximum: '2020-01-01' }) // formatMaximum is an AJV custom keyword
)
.definition(
'address',
S.object()
.id('#address')
.prop('line1', S.anyOf([S.string(), S.null()])) // JSON Schema nullable
.prop('line2', S.string().raw({ nullable: true })) // Open API / Swagger nullable
.prop('country', S.string())
.prop('city', S.string())
.prop('zipcode', S.string())
.required(['line1', 'country', 'city', 'zipcode'])
)
.prop('address', S.ref('#address'))
console.log(JSON.stringify(schema.valueOf(), undefined, 2))
Schema generated:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"address": {
"type": "object",
"$id": "#address",
"properties": {
"line1": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"line2": {
"type": "string",
"nullable": true
},
"country": {
"type": "string"
},
"city": {
"type": "string"
},
"zipcode": {
"type": "string"
}
},
"required": ["line1", "country", "city", "zipcode"]
}
},
"type": "object",
"$id": "http://foo/user",
"title": "My First Fluent JSON Schema",
"description": "A simple user",
"properties": {
"email": {
"type": "string",
"format": "email"
},
"password": {
"type": "string",
"minLength": 8
},
"birthday": {
"type": "string",
"format": "date",
"formatMaximum": "2020-01-01"
},
"role": {
"type": "string",
"enum": ["ADMIN", "USER"],
"default": "USER"
},
"address": {
"$ref": "#address"
}
},
"required": ["email", "password"]
}
With "esModuleInterop": true
activated in the tsconfig.json
:
import S from 'fluent-json-schema'
const schema = S.object()
.prop('foo', S.string())
.prop('bar', S.number())
.valueOf()
With "esModuleInterop": false
in the tsconfig.json
:
import * as S from 'fluent-json-schema'
const schema = S.object()
.prop('foo', S.string())
.prop('bar', S.number())
.valueOf()
Fluent schema does not validate a JSON schema. However, many libraries can do that for you. Below are a few examples using AJV:
npm i ajv
or
yarn add ajv
Snippet:
const Ajv = require('ajv')
const ajv = new Ajv({ allErrors: true })
const validate = ajv.compile(schema.valueOf()) // schema from the example above
let user = {}
let valid = validate(user)
console.log({ valid }) //=> { valid: false }
console.log(validate.errors) //=> array of validation errors
Output:
{valid: false}
errors: [
{
keyword: 'required',
dataPath: '',
schemaPath: '#/required',
params: { missingProperty: 'email' },
message: "should have required property 'email'",
},
{
keyword: 'required',
dataPath: '',
schemaPath: '#/required',
params: { missingProperty: 'password' },
message: "should have required property 'password'",
},
]
Snippet:
user = { email: 'test', password: 'password' }
valid = validate(user)
console.log({ valid })
console.log(validate.errors)
Output:
{valid: false}
errors:
[ { keyword: 'format',
dataPath: '.email',
schemaPath: '#/properties/email/format',
params: { format: 'email' },
message: 'should match format "email"' } ]
Snippet:
user = { email: 'test@foo.com', password: 'password', address: { line1: 'Main St' } } // address is missing country, city and zipcode
valid = validate(user)
console.log({ valid })
console.log('errors:', validate.errors)
Output:
{valid: false}
errors: [ { keyword: 'required',
dataPath: '.address',
schemaPath: '#definitions/address/required',
params: { missingProperty: 'country' },
message: 'should have required property \'country\'' },
{ keyword: 'required',
dataPath: '.address',
schemaPath: '#definitions/address/required',
params: { missingProperty: 'city' },
message: 'should have required property \'city\'' },
{ keyword: 'required',
dataPath: '.address',
schemaPath: '#definitions/address/required',
params: { missingProperty: 'zipcode' },
message: 'should have required property \'zipcode\'' } ]
Snippet:
user = { email: 'test@foo.com', password: 'password' }
valid = validate(user)
console.log({ valid })
Output:
{valid: true}
Normally, inheritance with JSON Schema is achieved with allOf. However, when .additionalProperties(false) is used, the validator won't understand which properties come from the base schema. S.extend creates a schema merging the base into the new one, so the validator knows all the properties because it is evaluating only a single schema. For example, in a CRUD API, POST /users could use the userBaseSchema, while GET /users or PATCH /users use the userSchema, which contains the id, createdAt and updatedAt generated server side.
const S = require('fluent-json-schema')
const userBaseSchema = S.object()
.additionalProperties(false)
.prop('username', S.string())
.prop('password', S.string())
const userSchema = S.object()
.prop('id', S.string().format('uuid'))
.prop('createdAt', S.string().format('time'))
.prop('updatedAt', S.string().format('time'))
.extend(userBaseSchema)
console.log(userSchema)
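If you want to inspect the merged JSON schema itself, you can serialize valueOf(), as in the first example (a hedged one-liner):

```js
console.log(JSON.stringify(userSchema.valueOf(), undefined, 2))
```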
In addition to extending schemas, it is also possible to reduce them into smaller schemas. This comes in handy when you have a large Fluent Schema, and would like to re-use some of its properties.
Select only properties you want to keep.
const S = require('fluent-json-schema')
const userSchema = S.object()
.prop('username', S.string())
.prop('password', S.string())
.prop('id', S.string().format('uuid'))
.prop('createdAt', S.string().format('time'))
.prop('updatedAt', S.string().format('time'))
const loginSchema = userSchema.only(['username', 'password'])
Or remove the properties you don't want to keep.
const S = require('fluent-json-schema')
const personSchema = S.object()
.prop('name', S.string())
.prop('age', S.number())
.prop('id', S.string().format('uuid'))
.prop('createdAt', S.string().format('time'))
.prop('updatedAt', S.string().format('time'))
const bodySchema = personSchema.without(['createdAt', 'updatedAt'])
Every Fluent Schema object contains a boolean isFluentSchema. In this way, you can write your own utilities that understand the Fluent Schema API and improve the user experience of your tool.
const S = require('fluent-json-schema')
const schema = S.object().prop('foo', S.string()).prop('bar', S.number())
console.log(schema.isFluentSchema) // true
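For instance, a small hedged helper (not part of the library) that accepts either a fluent schema or a plain JSON schema object:

```js
// Sketch: normalize input that may be a fluent schema or a raw schema object
const toJsonSchema = (s) => (s && s.isFluentSchema ? s.valueOf() : s)
```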
Thanks to Matteo Collina for pushing me to implement this utility! 🙏
Author: Fastify
Source Code: https://github.com/fastify/fluent-json-schema
License: MIT license
A general purpose R interface to Elasticsearch
This client is developed following the latest stable releases, currently v7.10.0. It is generally compatible with older versions of Elasticsearch. Unlike the Python client, we try to keep as much compatibility as possible within a single version of this client, as that's an easier setup in the R world.
You're fine running ES locally on your machine, but be careful just throwing up ES on a server with a public IP address - make sure to think about security.
Stable version from CRAN
install.packages("elastic")
Development version from GitHub
remotes::install_github("ropensci/elastic")
library('elastic')
w/ Docker
Pull the official elasticsearch image
# elasticsearch needs to have a version tag. We're pulling 7.10.1 here
docker pull elasticsearch:7.10.1
Then start up a container
docker run -d -p 9200:9200 elasticsearch:7.10.1
Then elasticsearch should be available on port 9200; try curl localhost:9200 and you should get the familiar message indicating ES is on.
If you're using boot2docker, you'll need to use the IP address in place of localhost. Get it by doing boot2docker ip.
on OSX
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-darwin-x86_64.tar.gz
tar -zxvf elasticsearch-7.10.0-darwin-x86_64.tar.gz
sudo mv elasticsearch-7.10.0 /usr/local
cd /usr/local
If an older symlinked elasticsearch directory exists, remove it: rm -rf elasticsearch
Then symlink the new version: sudo ln -s elasticsearch-7.10.0 elasticsearch (replace the version with your version)
You can also install via Homebrew: brew install elasticsearch
Note: for the 1.6 and greater upgrades of Elasticsearch, they want you to have java 8 or greater. I downloaded Java 8 from here http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and it seemed to work great.
I am not totally clear on best practice here, but from what I understand, when you upgrade to a new version of Elasticsearch, place the old elasticsearch/data and elasticsearch/config directories into the new installation (the elasticsearch/ dir). The new elasticsearch instance with replaced data and config directories should automatically update data to the new version and start working. Maybe if you use homebrew on a Mac to upgrade it takes care of this for you - not sure.
Obviously, upgrading Elasticsearch while keeping it running is a different thing (some help here from Elastic).
cd /usr/local/elasticsearch
bin/elasticsearch
I create a little bash shortcut called es that does both of the above commands in one step (cd /usr/local/elasticsearch && bin/elasticsearch).
The function connect() is used before doing anything else to set the connection details to your remote or local elasticsearch store. The details created by connect() are written to your options for the current session, and are used by elastic functions.
x <- connect(port = 9200)
If you're following along here with a local instance of Elasticsearch, you'll use x below to do more stuff.
For AWS hosted elasticsearch, make sure to specify path = "" and the correct port - transport schema pair.
connect(host = <aws_es_endpoint>, path = "", port = 80, transport_schema = "http")
# or
connect(host = <aws_es_endpoint>, path = "", port = 443, transport_schema = "https")
If you are using Elastic Cloud or an installation with authentication (X-pack), make sure to specify path = "", user = "", pwd = "" and the correct port - transport schema pair.
connect(host = <ec_endpoint>, path = "", user="test", pwd = "1234", port = 9243, transport_schema = "https")
Elasticsearch has a bulk load API for loading data in fast. The format is pretty weird though: it's sort of JSON, but would pass no JSON linter. I include a few data sets in elastic so it's easy to get up and running, and so when you run examples in this package they'll actually run the same way (hopefully).
I have prepared non-exported functions useful for preparing the weird format that Elasticsearch wants for bulk data loads. They are somewhat specific to PLOS and GBIF data (see below), but you could modify them for your purposes; see make_bulk_plos() and make_bulk_gbif().
Elasticsearch provides some data on Shakespeare plays. I've provided a subset of this data in this package. Get the path for the file specific to your machine:
shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
# If you're on Elastic v6 or greater, use this one
shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
shakespeare <- type_remover(shakespeare)
Then load the data into Elasticsearch:
Make sure to create your connection object with connect():
# x <- connect() # do this now if you didn't do this above
invisible(docs_bulk(x, shakespeare))
If you need some big data to play with, the shakespeare dataset is a good one to start with. You can get the whole thing and pop it into Elasticsearch (beware: may take up to 10 minutes or so):
curl -XGET https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json > shakespeare.json
curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
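Once loaded, you can query it from R like any other index. A quick sketch, assuming the bulk file created an index named shakespeare:

```r
# Search() is the package's main query function; q is a query string
res <- Search(x, index = "shakespeare", q = "henry", size = 1)
res$hits$total
```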
A dataset included in the elastic package is metadata for PLOS scholarly articles. Get the file path, then load:
if (index_exists(x, "plos")) index_delete(x, "plos")
plosdat <- system.file("examples", "plos_data.json", package = "elastic")
plosdat <- type_remover(plosdat)
invisible(docs_bulk(x, plosdat))
A dataset included in the elastic package is data for GBIF species occurrence records. Get the file path, then load:
if (index_exists(x, "gbif")) index_delete(x, "gbif")
gbifdat <- system.file("examples", "gbif_data.json", package = "elastic")
gbifdat <- type_remover(gbifdat)
invisible(docs_bulk(x, gbifdat))
GBIF geo data with a coordinates element, to allow geo_shape queries:
if (index_exists(x, "gbifgeo")) index_delete(x, "gbifgeo")
gbifgeo <- system.file("examples", "gbif_geo.json", package = "elastic")
gbifgeo <- type_remover(gbifgeo)
invisible(docs_bulk(x, gbifgeo))
There are more datasets formatted for bulk loading in the sckott/elastic_data GitHub repository. Find it at https://github.com/sckott/elastic_data
Search the plos index and only return 1 result:
Search(x, index = "plos", size = 1)$hits$hits
#> [[1]]
#> [[1]]$`_index`
#> [1] "plos"
#>
#> [[1]]$`_type`
#> [1] "_doc"
#>
#> [[1]]$`_id`
#> [1] "0"
#>
#> [[1]]$`_score`
#> [1] 1
#>
#> [[1]]$`_source`
#> [[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0007737"
#>
#> [[1]]$`_source`$title
#> [1] "Phospholipase C-\u03b24 Is Essential for the Progression of the Normal Sleep Sequence and Ultradian Body Temperature Rhythms in Mice"
Search the plos index, query for antibody, and limit to 1 result:
Search(x, index = "plos", q = "antibody", size = 1)$hits$hits
#> [[1]]
#> [[1]]$`_index`
#> [1] "plos"
#>
#> [[1]]$`_type`
#> [1] "_doc"
#>
#> [[1]]$`_id`
#> [1] "813"
#>
#> [[1]]$`_score`
#> [1] 5.18676
#>
#> [[1]]$`_source`
#> [[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0107638"
#>
#> [[1]]$`_source`$title
#> [1] "Sortase A Induces Th17-Mediated and Antibody-Independent Immunity to Heterologous Serotypes of Group A Streptococci"
Get document with id=4
docs_get(x, index = 'plos', id = 4)
#> $`_index`
#> [1] "plos"
#>
#> $`_type`
#> [1] "_doc"
#>
#> $`_id`
#> [1] "4"
#>
#> $`_version`
#> [1] 1
#>
#> $`_seq_no`
#> [1] 4
#>
#> $`_primary_term`
#> [1] 1
#>
#> $found
#> [1] TRUE
#>
#> $`_source`
#> $`_source`$id
#> [1] "10.1371/journal.pone.0107758"
#>
#> $`_source`$title
#> [1] "Lactobacilli Inactivate Chlamydia trachomatis through Lactic Acid but Not H2O2"
Get certain fields
docs_get(x, index = 'plos', id = 4, fields = 'id')
#> $`_index`
#> [1] "plos"
#>
#> $`_type`
#> [1] "_doc"
#>
#> $`_id`
#> [1] "4"
#>
#> $`_version`
#> [1] 1
#>
#> $`_seq_no`
#> [1] 4
#>
#> $`_primary_term`
#> [1] 1
#>
#> $found
#> [1] TRUE
Same index and different document ids
docs_mget(x, index = "plos", id = 1:2)
#> $docs
#> $docs[[1]]
#> $docs[[1]]$`_index`
#> [1] "plos"
#>
#> $docs[[1]]$`_type`
#> [1] "_doc"
#>
#> $docs[[1]]$`_id`
#> [1] "1"
#>
#> $docs[[1]]$`_version`
#> [1] 1
#>
#> $docs[[1]]$`_seq_no`
#> [1] 1
#>
#> $docs[[1]]$`_primary_term`
#> [1] 1
#>
#> $docs[[1]]$found
#> [1] TRUE
#>
#> $docs[[1]]$`_source`
#> $docs[[1]]$`_source`$id
#> [1] "10.1371/journal.pone.0098602"
#>
#> $docs[[1]]$`_source`$title
#> [1] "Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar"
#>
#>
#>
#> $docs[[2]]
#> $docs[[2]]$`_index`
#> [1] "plos"
#>
#> $docs[[2]]$`_type`
#> [1] "_doc"
#>
#> $docs[[2]]$`_id`
#> [1] "2"
#>
#> $docs[[2]]$`_version`
#> [1] 1
#>
#> $docs[[2]]$`_seq_no`
#> [1] 2
#>
#> $docs[[2]]$`_primary_term`
#> [1] 1
#>
#> $docs[[2]]$found
#> [1] TRUE
#>
#> $docs[[2]]$`_source`
#> $docs[[2]]$`_source`$id
#> [1] "10.1371/journal.pone.0107757"
#>
#> $docs[[2]]$`_source`$title
#> [1] "Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition"
You can optionally get back raw json from Search(), docs_get(), and docs_mget() by setting the parameter raw=TRUE.
For example:
(out <- docs_mget(x, index = "plos", id = 1:2, raw = TRUE))
#> [1] "{\"docs\":[{\"_index\":\"plos\",\"_type\":\"_doc\",\"_id\":\"1\",\"_version\":1,\"_seq_no\":1,\"_primary_term\":1,\"found\":true,\"_source\":{\"id\":\"10.1371/journal.pone.0098602\",\"title\":\"Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar\"}},{\"_index\":\"plos\",\"_type\":\"_doc\",\"_id\":\"2\",\"_version\":1,\"_seq_no\":2,\"_primary_term\":1,\"found\":true,\"_source\":{\"id\":\"10.1371/journal.pone.0107757\",\"title\":\"Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition\"}}]}"
#> attr(,"class")
#> [1] "elastic_mget"
Then parse
jsonlite::fromJSON(out)
#> $docs
#> _index _type _id _version _seq_no _primary_term found
#> 1 plos _doc 1 1 1 1 TRUE
#> 2 plos _doc 2 1 2 1 TRUE
#> _source.id
#> 1 10.1371/journal.pone.0098602
#> 2 10.1371/journal.pone.0107757
#> _source.title
#> 1 Population Genetic Structure of a Sandstone Specialist and a Generalist Heath Species at Two Levels of Sandstone Patchiness across the Strait of Gibraltar
#> 2 Cigarette Smoke Extract Induces a Phenotypic Shift in Epithelial Cells; Involvement of HIF1\u03b1 in Mesenchymal Transition
Notes:
- HEAD requests don't seem to work, not sure why
- If only GET requests are allowed from your setup, a number of functions that require POST requests obviously then won't work. A big one is Search(), but you can use Search_uri() to get around this, which uses GET instead of POST; however, you can't pass a more complicated query via the body
A screencast introducing the package: vimeo.com/124659179
Get citation information for elastic in R by doing citation(package = 'elastic')
Author: Ropensci
Source Code: https://github.com/ropensci/elastic
License: Unknown, MIT licenses found
Super-fast runtime type checker and JSON.stringify() functions, with only one line.
import TSON from "typescript-json";
// RUNTIME TYPE CHECKERS
TSON.assertType<T>(input); // throws exception
TSON.is<T>(input); // returns boolean value
TSON.validate<T>(input); // archives all type errors
// STRINGIFY
TSON.stringify<T>(input); // 5x faster JSON.stringify()
// APPENDIX FUNCTIONS
TSON.application<[T, U, V], "swagger">(); // JSON schema application generator
TSON.create<T>(input); // 2x faster object creator (only one-time construction)
typescript-json is a transformer library providing JSON-related functions.
- Runtime type checkers: TSON.assertType<T>(input)
- 5x faster JSON.stringify() function: TSON.stringify<T>(input)
(Benchmark measured on AMD R7 5800HS, ASUS ROG FLOW X13; numeric option: false.)
First, install typescript-json with the npm install command. You also need additional devDependencies to compile TypeScript code with transformation, so install the libraries typescript, ttypescript and ts-node as well. Note that ttypescript is not a typo; do not forget to install it.
npm install --save typescript-json
# ENSURE THOSE PACKAGES ARE INSTALLED
npm install --save-dev typescript
npm install --save-dev ttypescript
npm install --save-dev ts-node
After the installation, you have to configure the tsconfig.json file like below. Add a transform property with the value typescript-json/lib/transform to the compilerOptions.plugins array. When configuring, I recommend using the strict option, to force developers to distinguish whether each property is nullable or undefindable.
You can also configure the additional properties numeric and functional. The first, numeric, controls whether to test Number.isNaN() and Number.isFinite() on numeric values. The second, functional, controls whether to test function types. Both options default to true.
{
"compilerOptions": {
"strict": true,
"plugins": [
{
"transform": "typescript-json/lib/transform",
// "functional": true, // test function type
// "numeric": true, // test `isNaN()` and `isFinite()`
}
]
}
}
After the tsconfig.json definition, you can compile code that uses typescript-json with ttypescript. If you want to run your TypeScript file through ts-node, pass the -C ttypescript argument like below:
# COMPILE
npx ttsc
# WITH TS-NODE
npx ts-node -C ttypescript
If you're using webpack with ts-loader, configure the webpack.config.js file like below:
const transform = require("typescript-json/lib/transform").default;
module.exports = {
// I am hiding the rest of the webpack config
module: {
rules: [
{
test: /\.ts$/,
exclude: /node_modules/,
loader: 'ts-loader',
options: {
getCustomTransformers: program => ({
before: [transform(program)]
// before: [
// transform(program, {
// functional: true,
// numeric: true
// })
// ]
})
}
}
]
}
};
export function assertType<T>(input: T): T;
export function is<T>(input: T): boolean;
export function validate<T>(input: T): IValidation;
export interface IValidation {
success: boolean;
errors: IValidation.IError[];
}
export namespace IValidation {
export interface IError {
path: string;
expected: string;
value: any;
}
}
typescript-json provides three runtime type checker functions. The first, assertType(), throws a TypeGuardError when an input value differs from its type, the generic argument T. The second, is(), returns a boolean value indicating whether the input matches or not. The last, validate(), archives all type errors into an IValidation.errors array.
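A minimal usage sketch of the three checkers (the IMember interface and input value are illustrative, not from the library):

```typescript
import TSON from "typescript-json";

interface IMember {
  id: string;
  email: string;
  age?: number;
}

const input: any = JSON.parse('{ "id": "1", "email": "a@b.c" }');

console.log(TSON.is<IMember>(input));        // true or false
TSON.assertType<IMember>(input);             // throws TypeGuardError on mismatch
const result = TSON.validate<IMember>(input);
console.log(result.success, result.errors);  // all collected type errors
```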
Comparing those type checker functions with other similar libraries, typescript-json is much easier to use than the others, except only typescript-is. For example, ajv requires a complicated JSON schema definition that is different from the TypeScript type, whereas typescript-json requires only one line.
Also, only typescript-json can validate union typed structures exactly. All the other libraries can check simple object types, but none of them can validate complicated union types. The funny thing is, ajv requires a JSON schema definition for validation, but it can't validate the JSON schema type itself. How contradictory.
Components | TSON | T.IS | ajv | io-ts | C.V. |
---|---|---|---|---|---|
Easy to use | ✔ | ✔ | ❌ | ❌ | ❌ |
Object (simple) | ✔ | ✔ | ✔ | ✔ | ✔ |
Object (hierarchical) | ✔ | ✔ | ❌ | ✔ | ✔ |
Object (recursive) | ✔ | ✔ | ✔ | ✔ | ✔ |
Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ |
Object (union, explicit) | ✔ | ❌ | ✔ | ✔ | ❌ |
Array (hierarchical) | ✔ | ✔ | ❌ | ✔ | ✔ |
Array (recursive) | ✔ | ✔ | ❌ | ✔ | ✔ |
Array (recursive, union) | ✔ | ✔ | ❌ | ❌ | ❌ |
Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ |
Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ |
- TSON: typescript-json
- T.IS: typescript-is
- C.V.: class-validator
Furthermore, when union types are involved, typescript-json is extremely faster than the others. As you can see from the table above, ajv and typescript-is fail in most union type cases. They also show a huge difference from typescript-json in the time benchmark, which does not care whether the validation is exact or not.
The most extreme difference shows up with the "ultimate union" type, when validating the JSON schema type itself.
Measured on Intel i5-1135g7, Surface Pro 8
export function stringify<T>(input: T): string;
Super-fast JSON string conversion function. If you call TSON.stringify() instead of the native JSON.stringify(), the JSON conversion time is about 5x faster. And you can perform this super-fast conversion with only one line: TSON.stringify<T>(input).
On the other side, similar libraries like fast-json-stringify require a complicated JSON schema definition. Furthermore, typescript-json can convert complicated structured data that fast-json-stringify cannot convert.
Comparing performance, typescript-json is about 5x faster when comparing only the JSON string conversion time; if you compare optimizer construction time, typescript-json is even 10,000x faster.
AMD CPU shows dramatic improvement
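A hedged usage sketch (the IArticle interface is illustrative):

```typescript
import TSON from "typescript-json";

interface IArticle {
  id: number;
  title: string;
  tags: string[];
}

// One line replaces JSON.stringify() for the typed value.
const json: string = TSON.stringify<IArticle>({
  id: 1,
  title: "hello",
  tags: ["a", "b"],
});
```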
export function application<
Types extends unknown[],
Purpose extends "swagger" | "ajv" = "swagger",
Prefix extends string = Purpose extends "swagger"
? "#/components/schemas"
: "components#/schemas",
>(): IJsonApplication;
typescript-json even supports JSON schema application generation. When you need to share your TypeScript types with other languages, this application() function can be useful: it generates a JSON schema definition by analyzing your Types. Therefore, with typescript-json and its application() function, you don't need to write JSON schema definitions manually.
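A hedged sketch, reusing the illustrative IMember and IArticle interfaces from the examples above:

```typescript
import TSON from "typescript-json";

// Generate a swagger-style JSON schema application for the listed types.
const app = TSON.application<[IMember, IArticle], "swagger">();
console.log(JSON.stringify(app, null, 2));
```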
By the way, if the reason you're using application() is to generate swagger documents, I recommend using my other library, nestia. It will automate the swagger document generation by analyzing your entire backend server code.
https://github.com/samchon/nestia
Automatic SDK and Swagger generator for NestJS, more evolved than ever.
nestia is an evolved SDK and Swagger generator which analyzes your NestJS server code at the compilation level. With nestia and its compilation-level analyzer, you don't need to write any swagger or class-validator decorators.
Reading the table and example code below, you can feel how the "compilation level" makes nestia stronger.
Components | nestia ::SDK | nestia ::swagger | @nestjs/swagger |
---|---|---|---|
Pure DTO interface | ✔ | ✔ | ❌ |
Description comments | ✔ | ✔ | ❌ |
Simple structure | ✔ | ✔ | ✔ |
Generic type | ✔ | ✔ | ❌ |
Union type | ✔ | ✔ | ▲ |
Intersection type | ✔ | ✔ | ▲ |
Conditional type | ✔ | ▲ | ❌ |
Auto completion | ✔ | ❌ | ❌ |
Type hints | ✔ | ❌ | ❌ |
5x faster JSON.stringify() | ✔ | ❌ | ❌ |
Ensure type safety | ✅ | ❌ | ❌ |
// IMPORT SDK LIBRARY GENERATED BY NESTIA
import api from "@samchon/shopping-api";
import { IPage } from "@samchon/shopping-api/lib/structures/IPage";
import { ISale } from "@samchon/shopping-api/lib/structures/ISale";
import { ISaleArticleComment } from "@samchon/shopping-api/lib/structures/ISaleArticleComment";
import { ISaleQuestion } from "@samchon/shopping-api/lib/structures/ISaleQuestion";
export async function trace_sale_question_and_comment
(connection: api.IConnection): Promise<void>
{
// LIST UP SALE SUMMARIES
const index: IPage<ISale.ISummary> = await api.functional.shoppings.sales.index
(
connection,
"general",
{ limit: 100, page: 1 }
);
// PICK A SALE
const sale: ISale = await api.functional.shoppings.sales.at
(
connection,
index.data[0].id
);
console.log("sale", sale);
// WRITE A QUESTION
const question: ISaleQuestion = await api.functional.shoppings.sales.questions.store
(
connection,
"general",
sale.id,
{
title: "How to use this product?",
body: "The description is not fully enough. Can you introduce me more?",
files: []
}
);
console.log("question", question);
// WRITE A COMMENT
const comment: ISaleArticleComment = await api.functional.shoppings.sales.comments.store
(
connection,
"general",
sale.id,
question.id,
{
body: "p.s) Can you send me a detailed catalogue?",
anonymous: false
}
);
console.log("comment", comment);
}
https://github.com/samchon/nestia-helper
A helper library for NestJS, using this typescript-json. nestia-helper boosts the JSON.stringify() speed of API responses 5x, and it also supports automatic validation of request bodies.
import helper from "nestia-helper";
import * as nest from "@nestjs/common";
@nest.Controller("bbs/articles")
export class BbsArticlesController
{
// TSON.stringify() for response body
@helper.TypedRoute.Get()
public store(
// TSON.assertType() for request body
@helper.TypedBody() input: IBbsArticle.IStore
): Promise<IBbsArticle>;
}
Author: samchon
Source code: https://github.com/samchon/typescript-json
License: MIT license
This tutorial focuses on how to convert a collection to JSON in Laravel. This post will give you a simple example of converting a Laravel object to JSON.
You can use this example with Laravel 6, Laravel 7, Laravel 8 and Laravel 9 versions as well.
Sometimes we are getting data from the database and you need to convert the Eloquent data into JSON, so how will you do this? Don't worry, there are many ways to convert a collection to JSON in Laravel. We will use the toJson() and json_encode() methods to convert an object array to JSON in Laravel.
So let's see the examples below one by one:
Example 1: get() with toJson()
PostController.php
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\Post;
class PostController extends Controller
{
/**
* Write code on Method
*
* @return response()
*/
public function index(Request $request)
{
$posts = Post::select("id", "title", "body")
->latest()
->get()
->toJson();
dd($posts);
}
}
Output:
[
{
"id":40,
"title":"Post title 1",
"body":"Post body"
},
{
"id":39,
"title":"Post title 2",
"body":"Post body"
},
{
"id":38,
"title":"Post title 3",
"body":"Post body"
}
]
Example 2: find() with toJson()
PostController.php
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\Post;
class PostController extends Controller
{
/**
* Write code on Method
*
* @return response()
*/
public function index(Request $request)
{
$post = Post::find(40)->toJson();
dd($post);
}
}
Output:
{
"id":40,
"title":"Post title 1",
"slug":null,
"body":"Post body",
"created_at":"2022-08-05",
"updated_at":"2022-08-05T13:21:10.000000Z",
"status":1
}
Example 3: json_encode()
PostController.php
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\Post;
class PostController extends Controller
{
/**
* Write code on Method
*
* @return response()
*/
public function index(Request $request)
{
$posts = Post::select("id", "title", "body")
->latest()
->take(5)
->get();
$posts = json_encode($posts);
dd($posts);
}
}
Output:
[
{
"id":40,
"title":"Post title 1",
"body":"Post body"
},
{
"id":39,
"title":"Post title 2",
"body":"Post body"
},
{
"id":38,
"title":"Post title 3",
"body":"Post body"
}
]
Example 4: custom collection using toJson()
PostController.php
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
class PostController extends Controller
{
/**
* Write code on Method
*
* @return response()
*/
public function index(Request $request)
{
$posts = collect([
['id' => 1, 'title' => 'Title One', 'body' => 'Body One'],
['id' => 2, 'title' => 'Title Two', 'body' => 'Body Two'],
]);
$posts = $posts->toJson();
dd($posts);
}
}
Output:
[
{
"id":1,
"title":"Title One",
"body":"Body One"
},
{
"id":2,
"title":"Title Two",
"body":"Body Two"
}
]
Example 5: JSON response
PostController.php
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\Post;
class PostController extends Controller
{
/**
* Write code on Method
*
* @return response()
*/
public function index(Request $request)
{
$posts = Post::select("id", "title", "body")
->latest()
->get();
return response()->json(['posts' => $posts]);
}
}
Output:
{"posts":[{"id":40,"title":"Post title 1","body":"Post body"},{"id":39,"title":"Post title 2","body":"Post body"},{"id":38,"title":"Post title 3","body":"Post body"},{"id":37,"title":"Post title 4","body":"Post body"},{"id":36,"title":"Post title 5","body":"Post body"}]}
I hope it can help you...
Source: https://www.itsolutionstuff.com/post/how-to-convert-collection-to-json-in-laravelexample.html
Tormenta is a functionality layer over the BadgerDB key/value store. It provides simple, embedded-object persistence for Go projects, with indexing, data querying capabilities and ORM-like features, including loading of relations. It uses date-based IDs, so it is particularly good for data sets that are naturally chronological, like financial transactions, social media posts etc. Greatly inspired by Storm.
Why use it? Because you want to simplify your data persistence and you don't foresee the need for a multi-server setup in the future. Tormenta relies on an embedded key/value store. It's fast and simple, but embedded, so you won't be able to go multi-server and talk to a central DB. If you can live with that, and without the querying power of SQL, Tormenta gives you simplicity - there are no database servers to run, configure and maintain, no schemas, no SQL, no ORMs etc. You just open a connection to the DB, feed in your Go structs, and get normal Go functions with which to persist, retrieve and query your data. If you've been burned by complex database setups, errors in SQL strings or overly complex ORMs, you might appreciate Tormenta's simplicity.
"github.com/jpincas/tormenta"
tormenta.Model
to structs you want to persisttormenta:"-"
tag to fields you want to exclude from savingtormenta:"noindex"
tag to fields you want to exclude from secondary indexingtormenta:"split"
tag to string fields where you'd like to index each word separately instead of the the whole sentencetormenta:"nested"
tag to struct fields where you'd like to index each member (using the index syntax "toplevelfield.nextlevelfield")db, err := tormenta.Open("mydatadirectory")
(dont forget to defer db.Close()
). For auto-deleting test DB, use tormenta.OpenTest
db.Save(&MyEntity)
or multiple (possibly different type) entities in a transaction with db.Save(&MyEntity1, &MyEntity2)
.db.Get(&MyEntity, entityID)
.db.First(&MyEntity)
or db.Find(&MyEntities)
respectively.From()/.To()
to restrict result to a date range (both are optional).Match("indexName", value)
, Range("indexname", start, end)
and StartsWith("indexname", "prefix")
for a text prefix search.Or()
..Reverse()
, .Limit()/.Offset()
and Order()
..Run()
, .Count()
or .Sum()
..PreSave()
, .PostSave()
and .PostGet()
methods on your structs.See the example to get a better idea of how to use.
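Putting the pieces together, here is a minimal usage sketch. The Order struct, the "value" index name (assumed to be the lowercased field name) and the (count, error) return shapes of Save and Run are illustrative assumptions rather than verbatim Tormenta API guarantees:

package main

import (
	"fmt"
	"log"

	"github.com/jpincas/tormenta"
)

// Order embeds tormenta.Model, which supplies the date-based ID.
type Order struct {
	tormenta.Model
	Customer string
	Value    float64
	Notes    string `tormenta:"noindex"` // saved, but not secondary-indexed
}

func main() {
	db, err := tormenta.Open("mydatadirectory")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Save two entities in a single transaction.
	// NOTE: the (count, error) return shape is an assumption here.
	if _, err := db.Save(
		&Order{Customer: "alice", Value: 50},
		&Order{Customer: "bob", Value: 150},
	); err != nil {
		log.Fatal(err)
	}

	// Find orders whose Value falls in [100, 200], newest first.
	var orders []Order
	n, err := db.Find(&orders).Range("value", 100.0, 200.0).Reverse().Run()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("matched", n, "orders")
}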
Match("int16field", int(16)")
if you are searching on an int16
field. This is due to slight encoding differences between variable/fixed length ints, signed/unsigned ints and floats. If you let the compiler infer the type and the type you are searching on isn't the default int
(or int32
) or float64
, you'll get odd results. I understand this is a pain - perhaps we should switch to a fixed indexing scheme in all cases?time.Time
fields e.g. myTime time.Time
won't serialise properly as the fields on the underlying struct are unexported and you lose the marshal/unmarshal methods specified by time.Time
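For the second gotcha, a minimal sketch of what such custom marshalling could look like, assuming JSON-based serialisation (MyTime is a hypothetical name):

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// MyTime is a defined time type. time.Time's fields are unexported,
// so a defined type loses time.Time's MarshalJSON/UnmarshalJSON;
// delegating back to them restores proper serialisation.
type MyTime time.Time

func (t MyTime) MarshalJSON() ([]byte, error) {
	return time.Time(t).MarshalJSON()
}

func (t *MyTime) UnmarshalJSON(b []byte) error {
	var tt time.Time
	if err := tt.UnmarshalJSON(b); err != nil {
		return err
	}
	*t = MyTime(tt)
	return nil
}

func main() {
	b, _ := json.Marshal(MyTime(time.Now()))
	fmt.Println(string(b)) // serialises like a plain time.Time
}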
Author: jpincas
Source Code: https://github.com/jpincas/tormenta
1659888840
dcrdata is an original Decred block explorer, with packages and apps for data collection, presentation, and storage. The backend and middleware are written in Go. On the front end, Webpack enables the use of modern javascript features, as well as SCSS for styling.
Always run the Current release or on the Current stable branch. Do not use master
in production.
Series | Branch | Latest release tag | dcrd RPC server version required
---|---|---|---
Development (6.1) | master | N/A | ^7.0.0 (dcrd v1.7 release)
Current (6.0) | 6.0-stable | release-v6.0 | ^6.2.0 (dcrd v1.6 release)
../dcrdata The main Go MODULE. See cmd/dcrdata for the explorer executable.
├── api/types The exported structures used by the dcrdata and Insight APIs.
├── blockdata Package blockdata is the primary data collection and
| storage hub, and chain monitor.
├── cmd
│ └── dcrdata MODULE for the dcrdata explorer executable.
│ ├── api dcrdata's own HTTP API
│ │ └── insight The Insight API
│ ├── explorer Powers the block explorer pages.
│ ├── middleware HTTP router middleware used by the explorer
│ ├── notification Manages dcrd notifications synchronous data collection.
│ ├── public Public resources for block explorer (css, js, etc.)
│ └── views HTML templates for block explorer
├── db
│ ├── cache Package cache provides a caching layer that is used by dcrpg.
│ ├── dbtypes Package dbtypes with common data types.
│ └── dcrpg MODULE and package dcrpg providing PostgreSQL backend.
├── dev Shell scripts for maintenance and deployment.
├── docs Extra documentation.
├── exchanges MODULE and package for gathering data from public exchange APIs
│ ├── rateserver rateserver app, which runs an exchange bot for collecting
│ | exchange rate data, and a gRPC server for providing this
│ | data to multiple clients like dcrdata.
| └── ratesproto Package dcrrates implementing a gRPC protobuf service for
| communicating exchange rate data with a rateserver.
├── explorer/types Types used primarily by the explorer pages.
├── gov MODULE for the on- and off-chain governance packages.
│ ├── agendas Package agendas defines a consensus deployment/agenda DB.
│ └── politeia Package politeia defines a Politeia proposal DB.
│ ├── piclient Package piclient provides functions for retrieving data
| | from the Politeia web API.
│ └── types Package types provides several JSON-tagged structs for
| dealing with Politeia data exchange.
├── mempool Package mempool for monitoring mempool for transactions,
| data collection, distribution, and storage.
├── netparams Package netparams defines the TCP port numbers for the
| various networks (mainnet, testnet, simnet).
├── pubsub Package pubsub implements a websocket-based pub-sub server
| | for blockchain data.
│ ├── democlient democlient app provides an example for using psclient to
| | register for and receive messages from a pubsub server.
│ ├── psclient Package psclient is a basic client for the pubsub server.
│ └── types Package types defines types used by the pubsub client
| and server.
├── rpcutils Package rpcutils contains helper types and functions for
| interacting with a chain server via RPC.
├── semver Defines the semantic version types.
├── stakedb Package stakedb, for tracking tickets
├── testutil
│ ├── apiload An HTTP API load testing application
| └── dbload A DB load testing application
└── txhelpers Package txhelpers provides many functions and types for
processing blocks, transactions, voting, etc.
dcrdata requires dcrd running with --txindex --addrindex, and synchronized to the current best block on the network. On startup, dcrdata will verify that the dcrd version is compatible.
Dockerfiles are provided for convenience, but NOT SUPPORTED. See the Docker documentation for more information. The supported dcrdata build instructions are described below.
The dcrdata build process comprises two general steps:
1. Bundle the static web page assets with Webpack (via the npm tool).
2. Build the dcrdata executable from the Go source files.

These steps are described in detail in the following sections.
NOTE: The following instructions assume a Unix-like shell (e.g. bash).
Verify Go installation:
go env GOROOT GOPATH
Ensure $GOPATH/bin
is on your $PATH
.
Clone the dcrdata repository. It is conventional to put it under GOPATH
, but this is no longer necessary (or recommended) with Go modules. For example:
git clone https://github.com/decred/dcrdata $HOME/go-work/github/decred/dcrdata
Install Node.js, which is required to lint and package the static web assets.
Note that none of the above is required at runtime.
Webpack, a JavaScript module bundler, is used to compile and package the static assets in the cmd/dcrdata/public
folder. Node.js' npm
tool is used to install the required Node.js dependencies and build the bundled JavaScript distribution for deployment.
First, install the build dependencies:
cd cmd/dcrdata
npm clean-install # creates node_modules folder fresh
Then, for production, build the webpack bundle:
npm run build # creates public/dist folder
Alternatively, for development, npm
can be made to watch for and integrate JavaScript source changes:
npm run watch
See Front End Development for more information.
Change to the cmd/dcrdata
folder and build:
cd cmd/dcrdata
go build -v
The go tool will process the source code and automatically download dependencies. If the dependencies are configured correctly, there will be no modifications to the go.mod
and go.sum
files.
Note that performing the above commands with older versions of Go within $GOPATH
may require setting GO111MODULE=on
.
As a reward for reading this far, you may use the build.sh script to mostly automate the build steps.
By default, the version string will be postfixed with "-pre+dev". For example, dcrdata version 5.1.0-pre+dev (Go version go1.12.7)
. However, it may be desirable to set the "pre" and "dev" values to different strings, such as "beta" or the actual commit hash. To set these values, build with the -ldflags
switch as follows:
go build -v -ldflags \
"-X main.appPreRelease=beta -X main.appBuild=`git rev-parse --short HEAD`"
This produces a string like dcrdata version 6.0.0-beta+750fd6c2 (Go version go1.16.2)
.
The config file, logs, and data files are stored in the application data folder, which may be specified via the -A/--appdata
and -b/--datadir
settings. However, the location of the config file may also be set with -C/--configfile
. The default paths for your system are shown in the --help
description. If encountering errors involving file system paths, check the permissions on these folders to ensure that the user running dcrdata is able to access these paths.
The "public" and "views" folders must be in the same folder as the dcrdata
executable. Set read-only permissions as appropriate.
Update the repository (assuming you have master
checked out in GOPATH
):
cd $HOME/go-work/github/decred/dcrdata
git pull origin master
Look carefully for errors with git pull
, and reset locally modified files if necessary.
Next, build dcrdata
and bundle the web assets:
cd cmd/dcrdata
go build -v
npm clean-install
npm run build # or npm run watch
Note that performing the above commands with versions of Go prior to 1.16 within $GOPATH
may require setting GO111MODULE=on
.
No special actions are required. Simply start the new dcrdata and automatic database schema upgrades and table data patches will begin.
The database schema change from dcrdata v2.x to v3.x does not permit an automatic migration. The tables must be rebuilt from scratch:
Drop the old dcrdata database, and create a new empty dcrdata database.
-- Drop the old database.
DROP DATABASE dcrdata;
-- Create a new database with the same "pguser" set in the dcrdata.conf.
CREATE DATABASE dcrdata OWNER dcrdata;
Delete the dcrdata data folder (i.e. corresponding to the datadir
setting). By default, datadir
is in {appdata}/data
:
- Linux: ~/.dcrdata/data
- macOS: ~/Library/Application Support/Dcrdata/data
- Windows: C:\Users\<your-username>\AppData\Local\Dcrdata\data (%localappdata%\Dcrdata\data)

With dcrd synchronized to the network's best block, start dcrdata to begin the initial block data sync.
It is crucial that you configure your PostgreSQL server for your hardware and the dcrdata workload.
Read postgresql-tuning.conf carefully for details on how to make the necessary changes to your system. A helpful online tool for determining good settings for your system is PGTune. Note that when using this tool, subtract 1.5-2GB from your system RAM so dcrdata itself will have plenty of memory. DO NOT simply use this file in place of your existing postgresql.conf. DO NOT simply copy and paste these settings into the existing postgresql.conf. It is necessary to edit the existing postgresql.conf, reviewing all the settings to ensure the same configuration parameters are not set in two different places in the file (postgres will not complain).
If you tune PostgreSQL to fully utilize remaining RAM, you are limiting the RAM available to the dcrdata process, which will increase as request volume increases and its cache becomes fully utilized. Allocate sufficient memory to dcrdata for your application, and use a reverse proxy such as nginx with cache locking features to prevent simultaneous requests to the same resource.
On Linux, you may wish to use a unix domain socket instead of a TCP connection. The path to the socket depends on the system, but it is commonly /var/run/postgresql
. Just set this path in pghost
.
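For example, in dcrdata.conf:
pghost=/var/run/postgresql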
Begin with the sample configuration file. With the default appdata
directory for the current user on Linux:
cp sample-dcrdata.conf ~/.dcrdata/dcrdata.conf
Then edit dcrdata.conf with your dcrd RPC settings. See the output of dcrdata --help
for a list of all options and their default values.
If dcrdata has not previously been run with the PostgreSQL database backend, it is necessary to perform a bulk import of blockchain data and generate table indexes. This will be done automatically by dcrdata
on a fresh startup. Do NOT interrupt the initial sync or use the browser interface until it is completed.
Note that dcrdata requires that dcrd is running with some optional indexes enabled. By default, these indexes are not turned on when dcrd is installed. To enable them, set the following in dcrd.conf:
txindex=1
addrindex=1
If these parameters are not set, dcrdata will be unable to retrieve transaction details and perform address searches, and will exit with an error mentioning these indexes.
Launch the dcrdata daemon and allow the databases to process new blocks. Concurrent synchronization of both stake and PostgreSQL databases is performed, typically requiring 1.5 to 8 hours. See System Hardware Requirements for more information. Please reread Configuring PostgreSQL (IMPORTANT! Seriously, read this.) if you have performance issues.
On subsequent launches, only blocks new to dcrdata are processed.
./dcrdata # don't forget to configure dcrdata.conf in the appdata folder!
Do NOT interrupt the initial sync or use the browser interface until it is completed. Follow the messages carefully, and if you are uncertain of the current sync status, check system resource utilization. Interrupting the initial sync can leave dcrdata and its databases in an unrecoverable or suboptimal state. The main steps of the initial sync process are:
Unlike dcrdata.conf, which must be placed in the appdata
folder or explicitly set with -C
, the "public" and "views" folders must be in the same folder as the dcrdata
executable.
The time required to sync varies greatly with system hardware and software configuration. The most important factor is the storage medium on the database machine. An SSD (preferably NVMe, not SATA) is REQUIRED. The PostgreSQL operations are extremely disk intensive, especially during the initial synchronization process. Both high throughput and low latencies for fast random accesses are essential.
Without PostgreSQL, the dcrdata process can get by with:
These specifications assume dcrdata and postgres are running on the same machine.
Minimum:
Recommended:
The cmd/dcrdata
folder contains the main
package for the dcrdata
app, which has several components including:
After dcrdata syncs with the blockchain server via RPC, by default it will begin listening for HTTP connections on http://127.0.0.1:7777/
. This means it starts a web server listening on IPv4 localhost, port 7777. Both the interface and port are configurable. The block explorer and the JSON APIs are both provided by the server on this port.
Note that while dcrdata can be started with HTTPS support, it is recommended to employ a reverse proxy such as Nginx ("engine x"). See sample-nginx.conf for an example Nginx configuration.
The dcrdata block explorer is exposed by two APIs: a Decred implementation of the Insight API, and its own JSON HTTP API. The Insight API uses the path prefix /insight/api
. The dcrdata API uses the path prefix /api
. File downloads are served from the /download
path.
The Insight API is accessible via HTTP via REST or WebSocket.
See the Insight API documentation for further details.
The dcrdata API is a REST API accessible via HTTP. To call the dcrdata API, use the /api
path prefix.
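For example, assuming the default listen address noted above, the current best block height can be fetched with:
curl http://127.0.0.1:7777/api/block/best/height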
Best block | Path | Type |
---|---|---|
Summary | /block/best?txtotals=[true|false] | types.BlockDataBasic |
Stake info | /block/best/pos | types.StakeInfoExtended |
Header | /block/best/header | dcrjson.GetBlockHeaderVerboseResult |
Raw Header (hex) | /block/best/header/raw | string |
Hash | /block/best/hash | string |
Height | /block/best/height | int |
Raw Block (hex) | /block/best/raw | string |
Size | /block/best/size | int32 |
Subsidy | /block/best/subsidy | types.BlockSubsidies |
Transactions | /block/best/tx | types.BlockTransactions |
Transactions Count | /block/best/tx/count | types.BlockTransactionCounts |
Verbose block result | /block/best/verbose | dcrjson.GetBlockVerboseResult |
Block X (block index) | Path | Type |
---|---|---|
Summary | /block/X | types.BlockDataBasic |
Stake info | /block/X/pos | types.StakeInfoExtended |
Header | /block/X/header | dcrjson.GetBlockHeaderVerboseResult |
Raw Header (hex) | /block/X/header/raw | string |
Hash | /block/X/hash | string |
Raw Block (hex) | /block/X/raw | string |
Size | /block/X/size | int32 |
Subsidy | /block/X/subsidy | types.BlockSubsidies |
Transactions | /block/X/tx | types.BlockTransactions |
Transactions Count | /block/X/tx/count | types.BlockTransactionCounts |
Verbose block result | /block/X/verbose | dcrjson.GetBlockVerboseResult |
Block H (block hash) | Path | Type |
---|---|---|
Summary | /block/hash/H | types.BlockDataBasic |
Stake info | /block/hash/H/pos | types.StakeInfoExtended |
Header | /block/hash/H/header | dcrjson.GetBlockHeaderVerboseResult |
Raw Header (hex) | /block/hash/H/header/raw | string |
Height | /block/hash/H/height | int |
Raw Block (hex) | /block/hash/H/raw | string |
Size | /block/hash/H/size | int32 |
Subsidy | /block/hash/H/subsidy | types.BlockSubsidies |
Transactions | /block/hash/H/tx | types.BlockTransactions |
Transactions count | /block/hash/H/tx/count | types.BlockTransactionCounts |
Verbose block result | /block/hash/H/verbose | dcrjson.GetBlockVerboseResult |
Block range (X < Y) | Path | Type |
---|---|---|
Summary array for blocks on [X,Y] | /block/range/X/Y | []types.BlockDataBasic |
Summary array with block index step S | /block/range/X/Y/S | []types.BlockDataBasic |
Size (bytes) array | /block/range/X/Y/size | []int32 |
Size array with step S | /block/range/X/Y/S/size | []int32 |
Transaction T (transaction id) | Path | Type |
---|---|---|
Transaction details | /tx/T?spends=[true|false] | types.Tx |
Transaction details w/o block info | /tx/trimmed/T | types.TrimmedTx |
Inputs | /tx/T/in | []types.TxIn |
Details for input at index X | /tx/T/in/X | types.TxIn |
Outputs | /tx/T/out | []types.TxOut |
Details for output at index X | /tx/T/out/X | types.TxOut |
Vote info (ssgen transactions only) | /tx/T/vinfo | types.VoteInfo |
Ticket info (sstx transactions only) | /tx/T/tinfo | types.TicketInfo |
Serialized bytes of the transaction | /tx/hex/T | string |
Same as /tx/trimmed/T | /tx/decoded/T | types.TrimmedTx |
Transactions (batch) | Path | Type |
---|---|---|
Transaction details (POST body is JSON of types.Txns ) | /txs?spends=[true|false] | []types.Tx |
Transaction details w/o block info | /txs/trimmed | []types.TrimmedTx |
Address A | Path | Type |
---|---|---|
Summary of last 10 transactions | /address/A | types.Address |
Number and value of spent and unspent outputs | /address/A/totals | types.AddressTotals |
Verbose transaction result for last 10 transactions | /address/A/raw | types.AddressTxRaw |
Summary of last N transactions | /address/A/count/N | types.Address |
Verbose transaction result for last N transactions | /address/A/count/N/raw | types.AddressTxRaw |
Summary of last N transactions, skipping M | /address/A/count/N/skip/M | types.Address |
Verbose transaction result for last N transactions, skipping M | /address/A/count/N/skip/M/raw | types.AddressTxRaw |
Transaction inputs and outputs as a CSV formatted file. | /download/address/io/A | CSV file |
Stake Difficulty (Ticket Price) | Path | Type |
---|---|---|
Current sdiff and estimates | /stake/diff | types.StakeDiff |
Sdiff for block X | /stake/diff/b/X | []float64 |
Sdiff for block range [X,Y] (X <= Y) | /stake/diff/r/X/Y | []float64 |
Current sdiff separately | /stake/diff/current | dcrjson.GetStakeDifficultyResult |
Estimates separately | /stake/diff/estimates | dcrjson.EstimateStakeDiffResult |
Ticket Pool | Path | Type |
---|---|---|
Current pool info (size, total value, and average price) | /stake/pool | types.TicketPoolInfo |
Current ticket pool, in a JSON object with a "tickets" key holding an array of ticket hashes | /stake/pool/full | []string |
Pool info for block X | /stake/pool/b/X | types.TicketPoolInfo |
Full ticket pool at block height or hash H | /stake/pool/b/H/full | []string |
Pool info for block range [X,Y] (X <= Y) | /stake/pool/r/X/Y?arrays=[true|false] * | []apitypes.TicketPoolInfo |
The full ticket pool endpoints accept the URL query ?sort=[true|false]
for requesting the tickets array in lexicographical order. If a sorted list or list with deterministic order is not required, using sort=false
will reduce server load and latency. However, be aware that the ticket order will be random, and will change each time the tickets are requested.
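For example, to request the full ticket pool in lexicographical order (again assuming the default listen address):
curl "http://127.0.0.1:7777/api/stake/pool/full?sort=true"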
*For the pool info block range endpoint that accepts the arrays
url query, a value of true
will put all pool values and pool sizes into separate arrays, rather than having a single array of pool info JSON objects. This may make parsing more efficient for the client.
Votes and Agendas Info | Path | Type |
---|---|---|
The current agenda and its status | /stake/vote/info | dcrjson.GetVoteInfoResult |
All agendas high level details | /agendas | []types.AgendasInfo |
Details for agenda {agendaid} | /agendas/{agendaid} | types.AgendaAPIResponse |
Mempool | Path | Type |
---|---|---|
Ticket fee rate summary | /mempool/sstx | apitypes.MempoolTicketFeeInfo |
Ticket fee rate list (all) | /mempool/sstx/fees | apitypes.MempoolTicketFees |
Ticket fee rate list (N highest) | /mempool/sstx/fees/N | apitypes.MempoolTicketFees |
Detailed ticket list (fee, hash, size, age, etc.) | /mempool/sstx/details | apitypes.MempoolTicketDetails |
Detailed ticket list (N highest fee rates) | /mempool/sstx/details/N | apitypes.MempoolTicketDetails |
Exchanges | Path | Type |
---|---|---|
Exchange data summary | /exchanges | exchanges.ExchangeBotState |
List of available currency codes | /exchanges/codes | []string |
Exchange monitoring is off by default. Server must be started with --exchange-monitor
to enable exchange data. The server will set a default currency code. To use a different code, pass URL parameter ?code=[code]
. For example, /exchanges?code=EUR
.
Other | Path | Type |
---|---|---|
Status | /status | types.Status |
Health (HTTP 200 or 503) | /status/happy | types.Happy |
Coin Supply | /supply | types.CoinSupply |
Coin Supply Circulating (Mined) | /supply/circulating?dcr=[true|false] | int (default) or float (dcr=true ) |
Endpoint list (always indented) | /list | []string |
All JSON endpoints accept the URL query indent=[true|false]
. For example, /stake/diff?indent=true
. By default, indentation is off. The characters to use for indentation may be specified with the indentjson
string configuration option.
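For example, an indented response for the current stake difficulty (default listen address assumed):
curl "http://127.0.0.1:7777/api/stake/diff?indent=true"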
Although there is mempool data collection and serving, it is very important to keep in mind that the mempool in your node (dcrd) is not likely to be exactly the same as other nodes' mempool. Also, your mempool is cleared out when you shut down dcrd. So, if you have recently (e.g. after the start of the current ticket price window) started dcrd, your mempool will be missing transactions that other nodes have.
Make sure you have a recent version of node and npm installed.
From the cmd/dcrdata directory, run the following command to install the node modules.
npm clean-install
This will create and install into a directory named node_modules
.
You'll also want to run npm clean-install
after merging changes from upstream. It is run for you when you use the build script (./dev/build.sh
).
For development, there's a webpack script that watches for file changes and automatically bundles. To use it, run the following command in a separate terminal and leave it running while you work. You'll only use this command if you are editing javascript files.
npm run watch
For production, bundle assets via:
npm run build
You will need to at least build
if changes have been made. watch
essentially runs build
after file changes, but also performs some additional checks.
Webpack compiles SCSS to CSS while bundling. The watch
script described above also watches for changes in these files and performs linting to ensure syntax compliance.
Before you write any CSS, see if you can achieve your goal by using existing classes available in Bootstrap 4. This helps prevent our stylesheets from getting bloated and makes it easier for things to work well across a wide range of browsers & devices. Please take the time to read the docs.
Note there is a dark mode, so make sure things look good with the dark background as well.
The core functionality of dcrdata is server-side rendered in Go and designed to work well with javascript disabled. For users with javascript enabled, Turbolinks creates a persistent single page application that handles all HTML rendering.
.tmpl files are cached by the backend, and can be reloaded via running killall -USR1 dcrdata
from the command line.
To encourage code that is idiomatic to a Turbolinks-based execution environment, javascript-based enhancements should use Stimulus controllers with corresponding actions and targets. Keeping things tightly scoped with controllers and modules helps to localize complexity and maintain a clean application lifecycle. When using event handlers, bind them in the controller's connect() function and unbind them in disconnect(), which execute when the controller is added to and removed from the DOM.
The core functionality of dcrdata should perform well in low power device / high latency scenarios (eg. a cheap smart phone with poor reception). This means that heavy assets should be lazy loaded when they are actually needed. Simple tasks like checking a transaction or address should have a very fast initial page load.
package dbtypes
defines the data types used by the DB backends to model the block, transaction, and related blockchain data structures. Functions for converting from standard Decred data types (e.g. wire.MsgBlock
) are also provided.
package rpcutils
includes helper functions for interacting with a rpcclient.Client
.
package stakedb
defines the StakeDatabase
and ChainMonitor
types for efficiently tracking live tickets, with the primary purpose of computing ticket pool value quickly. It uses the database.DB
type from github.com/decred/dcrd/database
with an ffldb storage backend from github.com/decred/dcrd/database/ffldb
. It also makes use of the stake.Node
type from github.com/decred/dcrd/blockchain/stake
. The ChainMonitor
type handles connecting new blocks and chain reorganization in response to notifications from dcrd.
package txhelpers
includes helper functions for working with the common types dcrutil.Tx
, dcrutil.Block
, chainhash.Hash
, and others.
Some packages are currently designed only for internal use by other dcrdata packages, but may be of general value in the future.
Package blockdata defines:

- The chainMonitor type and its BlockConnectedHandler() method that handles block-connected notifications and triggers data collection and storage.
- The BlockData type and methods for converting to API types.
- The blockDataCollector type and its Collect() and CollectHash() methods that are called by the chain monitor when a new block is detected.
- The BlockDataSaver interface required by chainMonitor for storage of collected data.

Package dcrpg defines:
- The ChainDB type, which is the primary exported type from dcrpg, providing an interface for a PostgreSQL database backed by a *sql.DB instance and various parameters.

Package mempool defines a MempoolMonitor type that can monitor a node's mempool using the OnTxAccepted notification handler to send newly received transaction hashes via a designated channel. Ticket purchases (SSTx) are triggers for mempool data collection, which is handled by the DataCollector type, and data storage, which is handled by any number of objects implementing the MempoolDataSaver interface.
See the GitHub issue trackers and the project milestones.
Yes, please! See CONTRIBUTING.md for details, but here's the gist of it: create a feature branch for your work (git checkout -b cool-stuff), commit your changes there, and open a pull request. DO NOT merge from master to your feature branch; rebase.
Also, come chat with us on Matrix in the dcrdata channel!
Author: Decred
Source Code: https://github.com/decred/dcrdata
License: ISC license
1659797832
JSON has become the de facto standard for exchanging data between client and server. Python has a built-in package called json for encoding and decoding JSON data. To read and write json data, we have to use the json package. For file handling, Python provides many functions that will do the job.
To create a json file in Python, use the open() function together with the with statement. The open() function takes the file name and the mode as arguments. If the file is not there, it will be created.
The Python with statement is used to open files. The with statement is recommended for working with files because it guarantees that open file descriptors are closed automatically after program execution leaves the context of the with statement.
# app.py
import json
with open('new_file.json', 'w') as f:
print("The json file is created")
In this code, we try to open a file called new_file.json in w mode. The file does not exist in the file system, so a new file is created in the same folder.
To create a json file from an existing json file, open the existing file in read mode, read its contents, then use open() with the with statement in write mode and dump the json data into the new json file.
Let's say we have the existing file data.json.
{
"data": [
{
"color": "red",
"value": "#f00"
},
{
"color": "green",
"value": "#0f0"
},
{
"color": "blue",
"value": "#00f"
},
{
"color": "black",
"value": "#000"
}
]
}
Now we will create a new json file from this data.json file.
# app.py
import json
with open('data.json') as f:
data = json.load(f)
with open('new_file.json', 'w') as f:
json.dump(data, f, indent=2)
print("New json file is created from data.json file")
python3 app.py
New json file is created from data.json file
So, basically, we read the existing json file, create a new json file, and dump the content into that new file.
Using Python's context manager, you can create a json file and open it in write mode. JSON files conveniently end with a .json extension.
To work with json files in Python: import the json package, open the file with the open() function inside a with statement, then use json.load() to read or json.dump() to write.
This is not always the case, but you will probably follow these steps.
Lien : https://appdividend.com/2022/03/10/how-to-create-json-file-in-python/
#json #python
1659790620
JSON has become the de facto standard to exchange data between client and server. Python has an inbuilt package called json for encoding and decoding JSON data. To read and write the json data, we have to use the json package. For file handling, Python provides many functions that will do the job.
In this brief guide, we will share how to create a JSON file in Python easily.
See more at: https://appdividend.com/2022/03/10/how-to-create-json-file-in-python/
#json #python