About
Waves protobuf schemas repository
How to use
For Maven, add the dependency to your pom.xml:
<dependency>
    <groupId>com.wavesplatform</groupId>
    <artifactId>protobuf-schemas</artifactId>
    <version>{version}</version>
</dependency>
For SBT/ScalaPB:
1. Add the dependency to your build.sbt:
libraryDependencies += "com.wavesplatform" % "protobuf-schemas" % "{version}" % "protobuf-src" intransitive()
2. Configure ScalaPB to compile external schemas with:
inConfig(Compile)(Seq(
  PB.protoSources in Compile := Seq(PB.externalIncludePath.value),
  includeFilter in PB.generate := new SimpleFileFilter((f: File) => f.getName.endsWith(".proto") && f.getParent.endsWith("waves")),
  PB.targets += scalapb.gen(flatPackage = true) -> sourceManaged.value
))
3. If you use a SNAPSHOT version, add this line:
resolvers += Resolver.sonatypeRepo("snapshots")
See ScalaPB docs for more info.
NPM package: @waves/protobuf-serialization.
It contains generated JavaScript classes and TypeScript definitions as well as the raw proto files. The default build uses CommonJS and includes all of the proto files. We used pbjs to build the JavaScript classes and pbts to build the TypeScript definitions.
You could also make your own custom build from the raw .proto files, for example if you want to use only a subset of the proto schemas or gRPC services. They can be found in the @waves/protobuf-serialization/proto directory.
long.js is used for 64-bit integer types such as int64 and uint64.
Example:
npm install --save @waves/protobuf-serialization
import { waves } from '@waves/protobuf-serialization';
const block = new waves.Block();
// block.header = ... set the necessary fields here
const buffer = waves.Block.encode(block).finish();
const blockDecoded = waves.Block.decode(buffer);
For C#, add App.config and packages.config to your C# solution, then add
<ItemGroup>
    <Protobuf Include="proto\waves\*.proto" OutputDir="waves\%(RelativePath)" GrpcServices="None" />
    <Protobuf Include="proto\waves\events\*.proto" OutputDir="waves\events\%(RelativePath)" GrpcServices="None" />
    <Protobuf Include="proto\waves\node\grpc\*.proto" OutputDir="waves\node\grpc\%(RelativePath)" GrpcServices="Both" />
</ItemGroup>
to your .csproj file. After this, just build your project.
Alternatively, you can use the protoc utility, for example: protoc --csharp_out=RelativePath --proto_path=RelativePathToProtoDir RelativePathToProtoFile
There is also a NuGet package, WavesPlatform.ProtobufSchema, containing this project.
For Rust, add the dependency to your Cargo.toml:
[dependencies]
waves-protobuf-schemas = { git = "https://github.com/wavesplatform/protobuf-schemas" }
How to generate sources locally
Use mvn package to create JAR artifacts:
protobuf-schemas-{version}-protobuf-src.jar - raw .proto files
protobuf-schemas-{version}.jar - protoc-generated Java classes
Generating Python sources requires Python 3 or later. Run the following commands from the root of this repository to generate Python sources in /target/python:
python3 -m venv .venv
. .venv/bin/activate
pip install grpcio grpcio-tools base58
git clone https://github.com/wavesplatform/protobuf-schemas.git
python -m grpc_tools.protoc --proto_path=./protobuf-schemas/proto --python_out=. --grpc_python_out=. `find ./protobuf-schemas/proto -type f`
Tweak the --python_out and --grpc_python_out parameters to generate files elsewhere (the target path should likely be absolute). Now you can use the generated classes:
import grpc
from waves.events.grpc.blockchain_updates_pb2_grpc import BlockchainUpdatesApiStub
from waves.events.grpc.blockchain_updates_pb2 import SubscribeRequest
from base58 import b58encode

# Render an asset id as base58, falling back to 'WAVES' for the empty (native) asset id.
def asset_id(asset_id_bytes):
    return b58encode(asset_id_bytes) if len(asset_id_bytes) > 0 else 'WAVES'

# Print the block id, the transaction ids and the balance changes of a single update.
def print_update(update):
    update_id = b58encode(update.id)
    print(f'block {update_id}:')
    for (tx_id, tx_state_update) in zip(update.append.transaction_ids, update.append.transaction_state_updates):
        print(f'  tx {b58encode(tx_id)}:')
        for balance in tx_state_update.balances:
            print(f'    {b58encode(balance.address)}: {balance.amount_before} -> {balance.amount_after.amount} [{asset_id(balance.amount_after.asset_id)}]')

# Subscribe to blockchain updates for a fixed height range.
with grpc.insecure_channel('grpc.wavesnodes.com:6881') as channel:
    for block in BlockchainUpdatesApiStub(channel).Subscribe(SubscribeRequest(from_height=3135450, to_height=3135470)):
        print_update(block.update)
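The classes used above are generated from waves/events/grpc/blockchain_updates.proto in this repository. As a rough, simplified sketch reconstructed from the example (the field numbers, exact types, and any omitted fields are assumptions; consult the actual .proto files for the authoritative definitions), the service looks roughly like this:
syntax = "proto3";
package waves.events.grpc;

service BlockchainUpdatesApi {
  // Streams one event per blockchain update in the requested height range.
  rpc Subscribe (SubscribeRequest) returns (stream SubscribeEvent);
}

message SubscribeRequest {
  int32 from_height = 1;
  int32 to_height = 2;
}

message SubscribeEvent {
  // The update carries the block id, appended transaction ids and balance state updates.
  waves.events.BlockchainUpdated update = 1;
}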
Download Details:
Author: wavesplatform
Source Code: https://github.com/wavesplatform/protobuf-schemas
#waves #blockchain #smartcontract #schemas #rust
This is part 3 of “MS SQL Server - Zero to Hero”, and in this article we will be discussing schemas in SQL Server. Before getting into this article, please consider visiting the previous articles in this series, listed below.
In part one, we learned the basics of data, databases, database management systems, the types of DBMS, and SQL.
#sql server #benefits of schemas #create schema in sql #database schemas #how to create schema in sql server #schemas #schemas in sql server #sql server schemas #what is schema in sql server
Since Confluent Platform version 5.5, Avro is no longer the only schema in town. Protobuf and JSON schemas are now supported as first-class citizens in Confluent universe. But before I go on explaining how to use Protobuf with Kafka, let’s answer one often-asked question:
When applications communicate through a pub-sub system, they exchange messages and those messages need to be understood and agreed upon by all the participants in the communication. Additionally, you would like to detect and prevent changes to the message format that would make messages unreadable for some of the participants.
That’s where a schema comes in: it represents a contract between the participants in communication, just like an API represents a contract between a service and its consumers. And just as REST APIs can be described using OpenAPI (Swagger), the messages in Kafka can be described using Avro, Protobuf, or JSON schemas.
Schemas describe the structure of the data, specifying which fields a message contains and the data type of each field.
In addition, together with Schema Registry, schemas prevent a producer from sending poison messages - malformed data that consumers cannot interpret. Schema Registry will detect if breaking changes are about to be introduced by the producer and can be configured to reject such changes. An example of a breaking change would be deleting a mandatory field from the schema.
Similar to Apache Avro, Protobuf is a method of serializing structured data. A message format is defined in a .proto file and you can generate code from it in many languages including Java, Python, C++, C#, Go and Ruby. Unlike Avro, Protobuf does not serialize schema with the message. So, in order to deserialize the message, you need the schema in the consumer.
Here’s an example of a Protobuf schema containing one message type:
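A minimal sketch of such a schema could look like the following (the package, message name, and fields are illustrative placeholders, not taken from the original article):
syntax = "proto3";
package com.example.events;

// A single, self-contained message type.
message SimpleMessage {
  string content = 1;    // the payload text
  string date_time = 2;  // when the message was produced, e.g. an ISO-8601 string
}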
#integration #apache kafka #schema #apache kafka tutorial #protobuf
Recently, I have seen several questions like “what’s the difference between JSON-LD and JSON Schema” or “can I use JSON Schema and Schema.org”. I come from a linked-data background (which is close to the world of Schema.org) but have recently started using JSON Schema a lot, and I have to admit that there is no trivial answer to these questions. There is the obvious similarity in the standard names, like “Schema” and “JSON”. If you compare the Schema.org page for Person to this example on the JSON Schema page, you have to admit that they kind of look alike. Combine this with the fact that Schema.org touts JSON-LD, which by design looks very much like regular JSON, and the confusion is complete. So there definitely are enough reasons to write this article.
JSON Schema is to JSON what XML Schema is to XML. It allows you to specify the structure of a JSON document. You can state that the field “email” must follow a certain regular expression or that an address has “street_name”, “number”, and “street_type” fields. Michael Droettboom’s book “Understanding JSON Schema” illustrates validation quite nicely with red & green examples.
The main use case for JSON Schema seems to be in JSON APIs where it plays two major roles:
As with all things related to code, reuse is a good idea. JSON Schema has the ability to import schemas using the $ref keyword. There are also efforts to share schemas. JSON Schema Store is one example. Its main use case is to support syntax highlighting for editors, for instance when editing a swagger file. At the time of writing, it contains over 250 schemas including — drum-roll please / you certainly guessed it — Schema.org. These describe things like Action and Place. So the idea could be to centrally define JSON Schema building blocks that can be re-used in different APIs, making it easier to consume them, maybe even to the point where intelligent software can interact with APIs automatically. But before we get carried away, let’s have a look at Schema.org.
#schema #swagger #json-ld #json-schema
Schema delegation is often used together with schema stitching. Schema stitching is a process of combining multiple GraphQL schemas together. It simplifies the creation of a gateway schema — especially when there are multiple services. Schema stitching automatically sets up delegation for root fields that already exist in the stitched-together schemas. New root fields (as well as any new non-root fields) require new resolvers.
#graphql #schema delegation #schema