gRPC, REST’s up-and-coming competitor, approaches synchronous communication from another angle, offering protocol buffers and typed contracts. What does that mean for your project?
In today’s technology landscape, most projects require the use of APIs. APIs bridge communication between services that may represent a single, complex system but may also reside on separate machines or use multiple, incompatible networks or languages.
Many standard technologies address the interservice communication needs of distributed systems, such as REST, SOAP, GraphQL, or gRPC. While REST is a favored approach, gRPC is a worthy contender, offering high performance, typed contracts, and excellent tooling.
Representational state transfer (REST) is a means of retrieving or manipulating a service’s data. A REST API is generally built on the HTTP protocol, using a URI to select a resource and an HTTP verb (e.g., GET, PUT, POST) to select the desired operation. Request and response bodies contain data that is specific to the operation, while their headers provide metadata. To illustrate, let’s look at a simplified example of retrieving a product via a REST API.
Here, we request a product resource with an ID of 11 and direct the API to respond in JSON format:
GET /products/11 HTTP/1.1
Accept: application/json
Given this request, our response (irrelevant headers omitted) may look like:
HTTP/1.1 200 OK
Content-Type: application/json
{
  "id": 11,
  "name": "Purple Bowtie",
  "sku": "purbow",
  "price": {
    "amount": 100,
    "currencyCode": "USD"
  }
}
While JSON may be human-readable, it is not optimal when used between services. The repetitive nature of referencing property names—even when compressed—can lead to bloated messages. Let’s look at an alternative to address this concern.
gRPC Remote Procedure Call (gRPC) is an open-source, contract-based, cross-platform communication protocol that simplifies and manages interservice communication by exposing a set of functions to external clients.
Built on top of HTTP/2, gRPC leverages features such as bidirectional streaming and built-in Transport Layer Security (TLS). gRPC enables more efficient communication through serialized binary payloads. It uses protocol buffers by default as its mechanism for structured data serialization, similar to REST’s use of JSON.
Unlike JSON, however, protocol buffers are more than a serialization format; they also comprise a contract definition language, kept in .proto files. (We'll follow proto3, the latest protocol buffer language specification.) The remote functions that are available on a service are listed inside the service node of the .proto file. As developers, we get to define these functions and their parameters using protocol buffers' rich type system. This system supports various numeric and date types, lists, dictionaries, and nullables to define our input and output messages.
These service definitions need to be available to both the server and the client. Unfortunately, there is no default mechanism to share these definitions aside from providing direct access to the .proto file itself.
This example .proto file defines a function to return a product entry, given an ID:
syntax = "proto3";
package product;
service ProductCatalog {
rpc GetProductDetails (ProductDetailsRequest) returns (ProductDetailsReply);
}
message ProductDetailsRequest {
int32 id = 1;
}
message ProductDetailsReply {
int32 id = 1;
string name = 2;
string sku = 3;
Price price = 4;
}
message Price {
float amount = 1;
string currencyCode = 2;
}
Snippet 1: ProductCatalog Service Definition
The strict typing and field ordering of proto3 make message deserialization considerably less taxing than parsing JSON.
To recap, the most significant points when comparing REST vs. gRPC are:
| | REST | gRPC |
|---|---|---|
| Cross-platform | Yes | Yes |
| Message Format | Custom but generally JSON or XML | Protocol buffers |
| Message Payload Size | Medium/Large | Small |
| Processing Complexity | Higher (text parsing) | Lower (well-defined binary structure) |
| Browser Support | Yes (native) | Yes (via gRPC-Web) |
Where less-strict contracts and frequent additions to the payload are expected, JSON and REST are great fits. When contracts tend to stay more static and speed is of the utmost importance, gRPC generally wins out. In most projects I have worked on, gRPC has proved to be lighter and more performant than REST.
Let’s build a streamlined project to explore how simple it is to adopt gRPC.
To get started, we will create a .NET 6 project in Visual Studio 2022 Community Edition (VS). We will select the ASP.NET Core gRPC Service template and name the project (we'll use InventoryAPI) and the solution that contains it (Inventory).
Now, let's choose the .NET 6.0 (Long-term support) option for our framework.
Now that we've created the project, VS displays a sample gRPC service definition named Greeter. We will repurpose Greeter's core files to suit our needs:
- Replace the contents of greet.proto with Snippet 1, renaming the file product.proto.
- Replace the contents of the GreeterService.cs file with Snippet 2, renaming the file ProductCatalogService.cs.
using Grpc.Core;
using Product;

namespace InventoryAPI.Services
{
    public class ProductCatalogService : ProductCatalog.ProductCatalogBase
    {
        public override Task<ProductDetailsReply> GetProductDetails(
            ProductDetailsRequest request, ServerCallContext context)
        {
            return Task.FromResult(new ProductDetailsReply
            {
                Id = request.Id,
                Name = "Purple Bowtie",
                Sku = "purbow",
                Price = new Price
                {
                    Amount = 100,
                    CurrencyCode = "USD"
                }
            });
        }
    }
}
Snippet 2: ProductCatalogService
The service now returns a hardcoded product. To make our new API runnable, we need only change the service registration in Program.cs to reference the new service name: we rename app.MapGrpcService<GreeterService>(); to app.MapGrpcService<ProductCatalogService>();.
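For orientation, here is roughly what Program.cs looks like after this change. This is a minimal sketch based on the standard ASP.NET Core gRPC template; your generated file may contain extra lines, such as the fallback GET endpoint the template adds:

using InventoryAPI.Services;

var builder = WebApplication.CreateBuilder(args);

// Register gRPC with the dependency injection container.
builder.Services.AddGrpc();

var app = builder.Build();

// Map our repurposed service in place of the template's GreeterService.
app.MapGrpcService<ProductCatalogService>();

app.Run();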
While we may be tempted to try it, we cannot test our gRPC service through a browser aimed at its endpoint. If we were to attempt this, we would receive an error message indicating that communication with gRPC endpoints must be made through a gRPC client.
To test our service, let's use VS's basic Console App template and create a gRPC client to call the API. I named mine InventoryApp.
For expediency, let's reference a relative file path by which we will share our contract. We will add the reference manually to the .csproj file, then update the path and set Client mode. Note: I recommend you become familiar with and have confidence in your local folder structure before using relative referencing.
Here are the .proto references, as they appear in both the service and client project files.
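These references are ordinary Grpc.Tools Protobuf items. A minimal sketch follows; the client-side relative path is an assumption that depends on your local folder layout:

<!-- Service project file (InventoryAPI.csproj) -->
<ItemGroup>
  <Protobuf Include="Protos\product.proto" GrpcServices="Server" />
</ItemGroup>

<!-- Client project file (InventoryApp.csproj), after pasting and editing.
     The relative path below is illustrative; adjust it to your folder structure. -->
<ItemGroup>
  <Protobuf Include="..\InventoryAPI\Protos\product.proto" GrpcServices="Client" />
</ItemGroup>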
Now, to call our service, we'll replace the contents of Program.cs. Our code will accomplish a number of objectives: open a channel to the service's address (the port was randomly assigned at project creation; check the launchSettings.json file for the actual value), create a client over that channel, send a request for product details, and print the response.
using System.Text.Json;
using Grpc.Net.Client;
using Product;
var channel = GrpcChannel.ForAddress("https://localhost:7200");
var client = new ProductCatalog.ProductCatalogClient(channel);
var request = new ProductDetailsRequest
{
Id = 1
};
var response = await client.GetProductDetailsAsync(request);
Console.WriteLine(JsonSerializer.Serialize(response, new JsonSerializerOptions
{
WriteIndented = true
}));
Console.ReadKey();
Snippet 3: New Program.cs
To test our code, in VS, we'll right-click the solution and choose Set Startup Projects. In the Solution Property Pages dialog, we'll select Multiple startup projects and set the Action of both projects (InventoryAPI and InventoryApp) to Start.
Now we can start the solution by clicking Start in the VS toolbar (or by pressing the F5 key). Two new console windows will display: one to tell us the service is listening, the other to show us details of the retrieved product.
Now let’s use another method to connect the gRPC client to our service’s definition. The most client-accessible contract-sharing solution is to make our definitions available through a URL. Other options are either very brittle (file shared through a path) or require more effort (contract shared through a native package). Sharing through a URL (as SOAP and Swagger/OpenAPI do) is flexible and requires less code.
To get started, we make the .proto file available as static content. We will update the project file manually because the file's build action must remain "Protobuf compiler"; if this setting were changed through the VS UI, the build would break. Our change directs the build to copy the .proto file so that it may be served from a web address. Our first step, then, is to add Snippet 4 to the InventoryAPI.csproj file:
<ItemGroup>
<Content Update="Protos\product.proto">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
</ItemGroup>
<ItemGroup>
<Content Include="Protos\product.proto" CopyToPublishDirectory="PreserveNewest" />
</ItemGroup>
Snippet 4: Code to Add to the InventoryAPI Service Project File
Next, we insert the code in Snippet 5 at the top of the Program.cs file to set up an endpoint to return our .proto file:
using System.Net.Mime;
using Microsoft.AspNetCore.StaticFiles;
using Microsoft.Extensions.FileProviders;
Snippet 5: Namespace Imports
And now, we add Snippet 6 just before app.Run(), also in the Program.cs file:
var provider = new FileExtensionContentTypeProvider();
provider.Mappings.Clear();
provider.Mappings[".proto"] = MediaTypeNames.Text.Plain;
app.UseStaticFiles(new StaticFileOptions
{
FileProvider = new PhysicalFileProvider(Path.Combine(app.Environment.ContentRootPath, "Protos")),
RequestPath = "/proto",
ContentTypeProvider = provider
});
app.UseRouting();
Snippet 6: Code to Make .proto Files Accessible Through the API
With Snippets 4-6 added, the contents of the .proto file should be visible in the browser.
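As a quick check, we can also fetch the file with curl, assuming the HTTPS address used earlier in the article (https://localhost:7200); the -k flag skips validation of the local development certificate:

$ curl -k https://localhost:7200/proto/product.proto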
Now we want to create a new console client that we will connect to our existing server with VS's dependency wizard. The issue is that this wizard doesn't talk HTTP/2, so we need to adjust our server to talk over HTTP/1 and start the server. With our server now making its .proto file available, we can build a new test client that hooks into our server via the gRPC wizard:
- In the server's appsettings.json file, change the Protocol field (found at the path Kestrel.EndpointDefaults.Protocols) to read Http1.
- To serve the .proto information, the server must be running. Originally, we started both the previous client and our server from VS's Set Startup Projects dialog. Adjust the server solution to start only the server project, then start the solution. (Now that we have modified the HTTP version, our old client can no longer communicate with the server.)
- Create another console client; I named mine InventoryAppConnected.
- Using VS's Connected Services wizard, add a service reference that points to the http version of the service address (remember to grab the randomly generated port number from launchSettings.json).
Feel free to check your work against the sample code for this example. Since, under the hood, VS has generated the same client we used in our first round of testing, we can reuse the contents of the Program.cs file from the previous service verbatim.
When we change a contract, we need to modify our client gRPC definition to match the updated .proto definition. To do so, we need only access VS's Connected Services and refresh the relevant service entry. Now, our gRPC project is complete, and it's easy to keep our service and client in sync.
Our gRPC implementation provides a firsthand glimpse into the benefits of using gRPC. REST and gRPC each have their own ideal use cases depending on contract type. However, when both options fit, I encourage you to try gRPC—it’ll put you ahead of the curve in the future of APIs.
Original article source at: https://www.toptal.com/
There are mainly 3 building blocks in any Kalix application – Entities, Views, and Actions. Views, as the name suggests, are responsible for viewing the data. Although we can fetch data using the entity_key defined in the domain, we use Views for more flexibility and customization.
This blog is a brief discussion on Views with a Value-Entity in Kalix and how we can use them in our application.
Note: To read about the basics of gRPC and Entities visit the following links
1. gRPC Descriptors
2. Types of entities in Kalix
3. Event Sourced entities in Kalix (1) (2)
To create a View, we must follow the following steps:
a) Create a domain file
The domain file represents the data that the View will display; i.e., domain files are the source of data for a View. We also require this domain file for the second step, which uses these domain models to perform operations like CRUD.
message TodoItemState {
string item_id = 1;
string title = 2;
string description = 3;
string added_by = 4;
bool done = 5;
}
b) Create a service for an entity (a Value Entity or an Event Sourced Entity) defined in the domain file above. This service defines the remote procedure calls that are responsible for producing state changes for the Value Entity.
service TodoListService {
option (kalix.codegen) = {
value_entity: {
name: "todolist.domain.TodoItem",
entity_type: "todolist",
state: "todolist.domain.TodoItemState"
}
};
rpc AddItem(TodoItem) returns (google.protobuf.Empty) {
option (google.api.http) = {
post: "/todoitem/add"
body: "*"
};
}
rpc GetItemById(GetItemByIdRequest) returns (TodoItem) {
option (google.api.http) = {
get: "/todoitem/{item_id}"
};
}
// Can add more such RPC Calls
}
message TodoItem {
string item_id = 1 [(kalix.field).entity_key = true];
string title = 2;
string description = 3;
string added_by = 4;
bool done = 5;
}
message GetItemByIdRequest {
string item_id = 1 [(kalix.field).entity_key = true];
}
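Because these RPCs carry google.api.http annotations, the Kalix proxy also exposes them over plain HTTP. As a sketch, assuming the local proxy is listening on its default port 9000, we could exercise the entity like this:

$ curl -X POST localhost:9000/todoitem/add \
    -H "Content-Type: application/json" \
    -d '{"item_id": "1", "title": "Write blog", "description": "Views in Kalix", "added_by": "alice", "done": false}'

$ curl localhost:9000/todoitem/1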
c) Create a View for the domain defined above. We use this View to query information based on our custom implementation.
service TodoListByName {
option(kalix.codegen) = {
view: {}
};
rpc UpdateTodoList(domain.TodoItemState) returns (domain.TodoItemState) {
option(kalix.method).eventing.in = {
value_entity: "todolist"
};
option(kalix.method).view.update = {
table: "todolist"
};
}
rpc GetTodoListItems(GetByNameRequest) returns (stream
domain.TodoItemState) {
option(kalix.method).view.query = {
query: "SELECT * FROM todolist where added_by = :name"
};
}
}
message GetByNameRequest {
string name = 1;
}
d) At last, we have to register our View in the main class. This binding is a necessary step; without it, our View will not be served.
def createKalix(): Kalix = {
KalixFactory.withComponents(
new TodoItem(_),
new TodoListByNameView(_))
}
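Once registered, the View can be called like any other gRPC service. As an illustration only (the fully qualified service name and port below are hypothetical; substitute the proto package and port from your own project), a grpcurl query might look like this:

$ grpcurl -plaintext \
    -d '{"name": "alice"}' \
    localhost:9000 \
    todolist.view.TodoListByName/GetTodoListItems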
CQRS stands for Command Query Responsibility Segregation. From its name, we can infer that it discusses segregating or differentiating responsibilities for querying (SELECT) records from tables and inserting/updating/deleting them from the table.
Kalix also follows the CQRS principle. The entities are responsible for creating, updating, and deleting records, and the Views are accountable for the querying part as we have seen above.
There are two sides in CQRS: the read side and the write side. The write side is responsible for storing state changes, and the read side retrieves data. Thus, changing the state of a View does not happen in the same transaction as changing or persisting the entity's state; Kalix projects these state changes onto the Views.
In Kalix, state changes occur in the state store (for value entities) or the event journal (for event-sourced entities). There is a time delay between a state change and the moment that data becomes queryable.
The data layer in Kalix is entirely abstracted, and as developers, we don't have access to how it is implemented internally. Nonetheless, we can take advantage of the CQRS pattern, especially in enterprise applications where there is more reading activity than writing activity, and Views are the means to achieve it.
References:
To read more visit the Official Kalix Documentation page
GitHub link to the complete source code provided above
To read about Kalix and its advantages visit here
Original article source at: https://blog.knoldus.com/
In this tutorial, you will learn about what's new in gRPC for .NET 7, including:
* Performance improvements
* Create RESTful services with gRPC JSON transcoding
* gRPC apps on Azure App Services
gRPC is a high-performance RPC framework used by developers around the world to build fast apps.
High-performance services with gRPC: What's new in .NET 7
#dotnet #grpc
Node.js and serving highly dynamic content to over 500k devices in near real-time
Growing your service from several thousand users to hundreds of thousands is often a painful, yet rewarding experience. We'll explore our journey in scaling Node.js-based applications that provide gRPC for last-mile communication and AMQP for inter-service communication, and that interact with PostgreSQL, Redis, and ClickHouse databases. You will learn about ways to achieve that scale: get to know the tools and techniques used to identify and measure baseline performance, drill down to the root cause of specific performance issues, analyze them, and make conscious decisions about ways to solve these problems.
#node #nodejs #grpc
A modern OS for Kubernetes.
Talos is a modern OS for running Kubernetes: secure, immutable, and minimal. Talos is fully open source, production-ready, and supported by the people at Sidero Labs. All system management is done via an API; there is no shell or interactive console.
For instructions on deploying and managing Talos, see the Documentation.
If you're interested in this project and would like to help in engineering efforts or have general usage questions, we are happy to have you! We hold a weekly meeting that all audiences are welcome to attend.
We would appreciate your feedback so that we can make Talos even better! To do so, you can take our survey.
You can subscribe to this meeting by joining the community forum above.
Note: You can convert the meeting hours to your local time.
Contributions are welcomed and appreciated! See Contributing for our guidelines.
Author: Siderolabs
Source Code: https://github.com/siderolabs/talos
License: MPL-2.0 license
This repository contains a gRPC Swift API and code generator.
It is intended for use with Apple's SwiftProtobuf support for Protocol Buffers. Both projects contain code generation plugins for protoc, Google's Protocol Buffer compiler, and both contain libraries of supporting code that is needed to build and run the generated code.
APIs and generated code are provided for both gRPC clients and servers, and can be built either with Xcode or the Swift Package Manager. Support is provided for all four gRPC API styles (Unary, Server Streaming, Client Streaming, and Bidirectional Streaming) and connections can be made either over secure (TLS) or insecure channels.
gRPC Swift has recently been rewritten on top of SwiftNIO as opposed to the core library provided by the gRPC project.
| Version | Implementation | Branch | protoc Plugin | Support |
|---|---|---|---|---|
| 1.x | SwiftNIO | main | protoc-gen-grpc-swift | Actively developed and supported |
| 0.x | gRPC C library | cgrpc | protoc-gen-swiftgrpc | No longer developed; security fixes only |
The remainder of this README refers to the 1.x version of gRPC Swift.
gRPC Swift's platform support is identical to the platform support of Swift NIO.
The earliest supported version of Swift for gRPC Swift releases are as follows:
| gRPC Swift Version | Earliest Swift Version |
|---|---|
| 1.0.0 ..< 1.8.0 | 5.2 |
| 1.8.0 ..< 1.11.0 | 5.4 |
| 1.11.0... | 5.5 |
Versions of clients and services which use Swift's Concurrency support are available from gRPC Swift 1.8.0 and require Swift 5.6 or newer.
There are two parts to gRPC Swift: the gRPC library and an API code generator.
The Swift Package Manager is the preferred way to get gRPC Swift. Simply add the package dependency to your Package.swift:
dependencies: [
.package(url: "https://github.com/grpc/grpc-swift.git", from: "1.9.0")
]
...and depend on "GRPC" in the necessary targets:
.target(
  name: ...,
  dependencies: [.product(name: "GRPC", package: "grpc-swift")]
)
Xcode
From Xcode 11 it is possible to add Swift Package dependencies to Xcode projects and link targets to products of those packages; this is the easiest way to integrate gRPC Swift with an existing xcodeproj.
Manual Integration
Alternatively, gRPC Swift can be manually integrated into a project: generate an Xcode project for the package with swift package generate-xcodeproj, add it as a subproject, and link your targets against GRPC.
protoc Plugins
PluginsBinary releases of protoc
, the Protocol Buffer Compiler, are available on GitHub.
To build the plugins, run make plugins
in the main directory. This uses the Swift Package Manager to build both of the necessary plugins: protoc-gen-swift
, which generates Protocol Buffer support code and protoc-gen-grpc-swift
, which generates gRPC interface code.
To install these plugins, just copy the two executables (protoc-gen-swift
and protoc-gen-grpc-swift
) that show up in the main directory into a directory that is part of your PATH
environment variable. Alternatively the full path to the plugins can be specified when using protoc
.
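As a sketch, once both plugins are on your PATH, an invocation might look like this (the input file and output directory are placeholders):

$ protoc path/to/echo.proto \
    --swift_out=Sources/Generated \
    --grpc-swift_out=Sources/Generated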
The plugins are available from homebrew and can be installed with:
$ brew install swift-protobuf grpc-swift
gRPC Swift has a number of tutorials and examples available. They are split across two directories:
- /Sources/Examples contains examples which do not require additional dependencies and may be built using the Swift Package Manager.
- /Examples contains examples which rely on external dependencies or may not be built by the Swift Package Manager (such as an iOS app).
Some of the examples are accompanied by tutorials.
The docs directory contains documentation, including:
- options for the protoc plugin in docs/plugin.md
- notes on TLS in docs/tls.md
- notes on keepalive in docs/keepalive.md
- notes on Apple platform support in docs/apple-platforms.md
Please see SECURITY.md.
Please get involved! See our guidelines for contributing.
Author: grpc
Source Code: https://github.com/grpc/grpc-swift
License: Apache-2.0 license
gRPC to JSON proxy generator following the gRPC HTTP spec
The gRPC-Gateway is a plugin of the Google protocol buffers compiler protoc. It reads protobuf service definitions and generates a reverse-proxy server which translates a RESTful HTTP API into gRPC. This server is generated according to the google.api.http annotations in your service definitions.
This helps you provide your APIs in both gRPC and RESTful style at the same time.
You can read our docs at:
We use the gRPC-Gateway to serve millions of API requests per day, and have been since 2018 and through all of that, we have never had any issues with it.
- William Mill, Ad Hoc
gRPC is great -- it generates API clients and server stubs in many programming languages, it is fast, easy-to-use, bandwidth-efficient and its design is combat-proven by Google. However, you might still want to provide a traditional RESTful JSON API as well. Reasons can range from maintaining backward-compatibility, supporting languages or clients that are not well supported by gRPC, to simply maintaining the aesthetics and tooling involved with a RESTful JSON architecture.
This project aims to provide that HTTP+JSON interface to your gRPC service. A small amount of configuration in your service to attach HTTP semantics is all that's needed to generate a reverse-proxy with this library.
The following instructions assume you are using Go Modules for dependency management. Use a tool dependency to track the versions of the following executable packages:
// +build tools
package tools
import (
_ "github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway"
_ "github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2"
_ "google.golang.org/grpc/cmd/protoc-gen-go-grpc"
_ "google.golang.org/protobuf/cmd/protoc-gen-go"
)
Run go mod tidy to resolve the versions. Install by running
$ go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc
This will place four binaries in your $GOBIN:
- protoc-gen-grpc-gateway
- protoc-gen-openapiv2
- protoc-gen-go
- protoc-gen-go-grpc
Make sure that your $GOBIN is in your $PATH.
You may alternatively download the binaries from the GitHub releases page. We generate SLSA3 signatures using the OpenSSF's slsa-framework/slsa-github-generator during the release process. To verify a release binary:
1. Download attestation.intoto.jsonl from the GitHub releases page.
2. Run slsa-verifier -artifact-path <the-binary> -provenance attestation.intoto.jsonl -source github.com/grpc-ecosystem/grpc-gateway -tag <the-tag>
Alternatively, see the section on remotely managed plugin versions below.
Define your gRPC service using protocol buffers (your_service.proto):
syntax = "proto3";
package your.service.v1;
option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";
message StringMessage {
string value = 1;
}
service YourService {
rpc Echo(StringMessage) returns (StringMessage) {}
}
Generate gRPC stubs
This step generates the gRPC stubs that you can use to implement the service and consume from clients:
Here's an example buf.gen.yaml you can use to generate the stubs with buf:
version: v1
plugins:
- name: go
out: gen/go
opt:
- paths=source_relative
- name: go-grpc
out: gen/go
opt:
- paths=source_relative
With this file in place, you can generate your files using buf generate.
For a complete example of using buf generate to generate protobuf stubs, see the boilerplate repo. For more information on generating the stubs with buf, see the official documentation.
If you are using protoc to generate stubs, here's an example of what a command might look like:
protoc -I . \
--go_out ./gen/go/ --go_opt paths=source_relative \
--go-grpc_out ./gen/go/ --go-grpc_opt paths=source_relative \
your/service/v1/your_service.proto
Implement your service in gRPC as usual.
Generate reverse-proxy using protoc-gen-grpc-gateway
At this point, you have 3 options:
1. No further modifications, relying on the default mapping to HTTP semantics. This will work with any .proto file, but will not allow setting HTTP paths, request parameters or similar.
2. Additional .proto modifications to use a custom mapping, annotating the .proto file to set custom HTTP mappings (see the sketch after this list).
3. No further .proto modifications, but use of an external configuration file to set custom HTTP mappings.
There is also a standalone=true
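For option 2, the custom mapping is declared inline with google.api.http annotations. A sketch of the earlier service with an annotated method follows; the HTTP path here is an illustrative choice, not one mandated by the project:

syntax = "proto3";

package your.service.v1;

option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

import "google/api/annotations.proto";

message StringMessage {
  string value = 1;
}

service YourService {
  rpc Echo(StringMessage) returns (StringMessage) {
    // Expose this RPC as an HTTP POST with a JSON body.
    option (google.api.http) = {
      post: "/v1/example/echo"
      body: "*"
    };
  }
}

With a mapping like this in place, the generated reverse proxy translates a POST to /v1/example/echo with a JSON body into a gRPC Echo call.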
option to generate a file that can live in its own package, separate from the files generated by the source protobuf file.
Write an entrypoint for the HTTP reverse-proxy server
package main
import (
"context"
"flag"
"net/http"
"github.com/golang/glog"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
gw "github.com/yourorg/yourrepo/proto/gen/go/your/service/v1/your_service" // Update
)
var (
// command-line options:
// gRPC server endpoint
grpcServerEndpoint = flag.String("grpc-server-endpoint", "localhost:9090", "gRPC server endpoint")
)
func run() error {
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// Register gRPC server endpoint
// Note: Make sure the gRPC server is running properly and accessible
mux := runtime.NewServeMux()
opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
err := gw.RegisterYourServiceHandlerFromEndpoint(ctx, mux, *grpcServerEndpoint, opts)
if err != nil {
return err
}
// Start HTTP server (and proxy calls to gRPC server endpoint)
return http.ListenAndServe(":8081", mux)
}
func main() {
flag.Parse()
defer glog.Flush()
if err := run(); err != nil {
glog.Fatal(err)
}
}
(Optional) Generate OpenAPI definitions using protoc-gen-openapiv2
Here's what a buf.gen.yaml file might look like:
version: v1
plugins:
- name: go
out: gen/go
opt:
- paths=source_relative
- name: go-grpc
out: gen/go
opt:
- paths=source_relative
- name: grpc-gateway
out: gen/go
opt:
- paths=source_relative
- name: openapiv2
out: gen/openapiv2
To use the custom protobuf annotations supported by protoc-gen-openapiv2, we need another dependency added to our protobuf generation step. If you are using buf, you can add the buf.build/grpc-ecosystem/grpc-gateway dependency to your deps array:
version: v1
name: buf.build/yourorg/myprotos
deps:
- buf.build/googleapis/googleapis
- buf.build/grpc-ecosystem/grpc-gateway
With protoc (just the swagger file):
protoc -I . --openapiv2_out ./gen/openapiv2 \
--openapiv2_opt logtostderr=true \
your/service/v1/your_service.proto
If you are using protoc to generate stubs, you will need to copy the protobuf files from the protoc-gen-openapiv2/options directory of this repository and provide them to protoc when running.
Note that this plugin also supports generating OpenAPI definitions for unannotated methods; use the generate_unbound_methods option to enable this.
It is possible with the HTTP mapping for a gRPC service method to create duplicate mappings with the only difference being constraints on the path parameter. For example, /v1/{name=projects/*} and /v1/{name=organizations/*} both become /v1/{name}. When this occurs, the plugin will rename the path parameter with a "_1" (or "_2", etc.) suffix to differentiate the different operations. So, in the above example, the 2nd path would become /v1/{name_1=organizations/*}. This can also cause OpenAPI clients to URL encode the "/" that is part of the path parameter, as that is what OpenAPI defines in the specification. To allow gRPC gateway to accept the URL-encoded slash and still route the request, use the UnescapingModeAllCharacters or UnescapingModeLegacy (which is the default currently, though it may change in future versions). See Customizing Your Gateway for more information.
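The unescaping mode is configured on the mux at startup. A minimal sketch, assuming the v2 runtime package:

package main

import "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"

func newMux() *runtime.ServeMux {
	// Accept URL-encoded slashes inside path parameters and still
	// route the request to the matching handler.
	return runtime.NewServeMux(
		runtime.WithUnescapingMode(runtime.UnescapingModeAllCharacters),
	)
}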
As an alternative to all of the above, you can use buf with remote plugins to manage plugin versions and generation. An example buf.gen.yaml using remote plugin generation looks like this:
version: v1
plugins:
- remote: buf.build/library/plugins/go:v1.27.1-1
out: gen/go
opt:
- paths=source_relative
- remote: buf.build/library/plugins/go-grpc:v1.1.0-2
out: gen/go
opt:
- paths=source_relative
- remote: buf.build/grpc-ecosystem/plugins/grpc-gateway:v2.6.0-1
out: gen/go
opt:
- paths=source_relative
- remote: buf.build/grpc-ecosystem/plugins/openapiv2:v2.6.0-1
out: gen/openapiv2
This requires no local installation of any plugins. Be careful to use the same version of the generator as the runtime library, i.e. if using v2.6.0-1, run
$ go get github.com/grpc-ecosystem/grpc-gateway/v2@v2.6.0
to get the same version of the runtime in your go.mod.
This GopherCon UK 2019 presentation from our maintainer @JohanBrandhorst provides a good intro to using the gRPC-Gateway. It uses the following boilerplate repo as a base: https://github.com/johanbrandhorst/grpc-gateway-boilerplate.
When using buf to generate stubs, flags and parameters are passed through the opt field in your buf.gen.yaml file, for example:
version: v1
plugins:
- name: grpc-gateway
out: gen/go
opt:
- paths=source_relative
- grpc_api_configuration=path/to/config.yaml
- standalone=true
During code generation with protoc, flags to gRPC-Gateway tools must be passed through protoc using one of 2 patterns:
1. As part of the --<tool_suffix>_out protoc parameter: --<tool_suffix>_out=<flags>:<path>
--grpc-gateway_out=logtostderr=true,repeated_path_param_separator=ssv:.
--openapiv2_out=logtostderr=true,repeated_path_param_separator=ssv:.
2. Using additional --<tool_suffix>_opt parameters: --<tool_suffix>_opt=<flag>[,<flag>]*
--grpc-gateway_opt logtostderr=true,repeated_path_param_separator=ssv
# or separately
--grpc-gateway_opt logtostderr=true --grpc-gateway_opt repeated_path_param_separator=ssv
--openapiv2_opt logtostderr=true,repeated_path_param_separator=ssv
# or separately
--openapiv2_opt logtostderr=true --openapiv2_opt repeated_path_param_separator=ssv
More examples are available under the examples directory:
- proto/examplepb/echo_service.proto, proto/examplepb/a_bit_of_everything.proto, proto/examplepb/unannotated_echo_service.proto: service definitions
- proto/examplepb/echo_service.pb.go, proto/examplepb/a_bit_of_everything.pb.go, proto/examplepb/unannotated_echo_service.pb.go: [generated] stubs of the services
- proto/examplepb/echo_service.pb.gw.go, proto/examplepb/a_bit_of_everything.pb.gw.go, proto/examplepb/unannotated_echo_service.pb.gw.go: [generated] reverse proxies for the services
- proto/examplepb/unannotated_echo_service.yaml: gRPC API Configuration for unannotated_echo_service.proto
- server/main.go: service implementation
- main.go: entrypoint of the generated reverse proxy
To use the same port for custom HTTP handlers (e.g. serving swagger.json), gRPC-Gateway, and a gRPC server, see this example by CoreOS (and its accompanying blog post).
- HTTP headers with the Grpc-Metadata- prefix are mapped to gRPC metadata (prefixed with grpcgateway-).
- gRPC timeouts can be set through the inbound HTTP Grpc-Timeout header.
Other gRPC features are not mapped, but patches are welcome.
- The HTTP request's source IP is added as the X-Forwarded-For gRPC request header.
- The HTTP request's host is added as the X-Forwarded-Host gRPC request header.
- The HTTP Authorization header is added as the authorization gRPC request header.
- Remaining permanent HTTP headers are prefixed with grpcgateway- and added with their values to the gRPC request headers.
- Path parameters support patterns such as /api/v1/{name=projects/*/topics/*} or /prefix/{path=organizations/**}.
.See CONTRIBUTING.md.
Author: grpc-ecosystem
Source Code: https://github.com/grpc-ecosystem/grpc-gateway
License: BSD-3-Clause license
While migrating to a microservices based architecture provides many benefits, it also brings up a lot of challenges. Unlike monolithic architectures, microservice architectures have to deal with coordinating, organizing, and managing a collection of different services with different scaling needs.
Often overlooked from a developer’s perspective, HTTP client libraries are clunky and require code that defines paths, handles parameters, and deals with responses in bytes. gRPC abstracts all of this away and makes network calls feel like any other function calls defined for a struct.
This talk will show how Kubernetes and gRPC, two popular open source projects, fit together, rooted in experience Google has gained running microservices at scale.
gRPC is a platform-neutral RPC framework, based on HTTP/2 and Protobuf, used to build highly performant and scalable APIs. gRPC benefits from new features introduced in HTTP/2 like framing, bidirectional streaming, header compression, multiplexing, and flow control. gRPC is not just a blueprint for high-performance RPC, but also provides a methodology to generate services and clients in multiple languages. This talk is based on our experience contributing to the network intelligence center's Simulator, which itself used gRPC for all intra-server communication.
This talk will demonstrate the core concepts of gRPC and Kubernetes, and show you how to combine these two technologies to create scalable and performant microservices rooted in lessons learned from Google’s experience.
If you're interested in a dead simple way to add gRPC load balancing to your Kubernetes services, regardless of what language it's written in, this talk is surely going to be beneficial.
gRPC is relatively new, but its fast-growing ecosystem and community will definitely make an impact in microservice development. Since gRPC is an open standard, all mainstream programming languages support it, it will help viewers to understand how ideal it is for working in a microservice environment.
#microservices #grpc #kubernetes
Public interface definitions of Google APIs for Dart and gRPC.
In your pubspec.yaml file add:
dependencies:
  protobuf_google: any
Then, in your Dart code, import the library:
import 'package:protobuf_google/protobuf_google.dart';
Run this command:
With Dart:
$ dart pub add protobuf_google
With Flutter:
$ flutter pub add protobuf_google
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
  protobuf_google: ^0.0.1
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:protobuf_google/protobuf_google.dart';
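As a quick smoke test, here is a hypothetical sketch; it assumes the package exports the generated well-known types (such as Timestamp) the way the underlying Dart protobuf tooling does:

import 'package:protobuf_google/protobuf_google.dart';

void main() {
  // Build a google.protobuf.Timestamp from the current time and
  // print its proto3 JSON representation.
  final ts = Timestamp.fromDateTime(DateTime.now().toUtc());
  print(ts.toProto3Json());
}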
Download Details:
Author: xclud
Source Code: https://github.com/xclud/dart_protobuf_google/
We can use Postman for gRPC requests. (At the time of writing, this feature is in the BETA stage.)
In this article, we are going to use the Postman tool (as a gRPC client) to test our gRPC API. I am going to use a simple gRPC server that I developed while writing this article.
You should be able to test any gRPC server methods by the end of this article.
Start the gRPC server
Our gRPC server has:
- 2 message types
- 1 service
- 4 RPCs in that service
syntax = "proto3";
option go_package = "simple/testgrpc";
message SimpleRequest{
string request_need = 1;
}
message SimpleResponse{
string response = 1;
}
service SimpleService{
// unary RPC
rpc RPCRequest(SimpleRequest) returns (SimpleResponse);
// Server Streaming
rpc ServerStreaming(SimpleRequest) returns (stream SimpleResponse);
// Client Streaming
rpc ClientStreaming(stream SimpleRequest) returns (SimpleResponse);
// Bi-Directional Streaming
rpc StreamingBiDirectional(stream SimpleRequest) returns (stream SimpleResponse);
}
I have started our gRPC server, which is listening at localhost:8090.
Postman gRPC Client
There are a few ways to point Postman at a gRPC server:
- by importing the proto file
- by using server reflection (for this, our server needs server reflection enabled as an optional extension, which helps clients construct requests at runtime without having stub information precompiled into the client; see the sketch after this list)
- by using Protobuf APIs created in the Postman workspace (alternatively, you can import the proto file and then create an API from the imported proto)
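Since this server is written in Go, the server reflection mentioned above can be enabled with the grpc-go reflection package. A minimal sketch follows; the commented registration call is a placeholder for your generated code:

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", "localhost:8090")
	if err != nil {
		log.Fatal(err)
	}

	s := grpc.NewServer()
	// Register the generated service implementation here, e.g.:
	// testgrpc.RegisterSimpleServiceServer(s, &server{})

	// Expose the server's descriptors so clients like Postman can
	// discover methods without importing a local .proto file.
	reflection.Register(s)

	if err := s.Serve(lis); err != nil {
		log.Fatal(err)
	}
}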
We will go with the first option and then create the API. (Postman takes care of creating the API once you import the proto file.)
Click Import as API.
Now we are ready, with our server definition in Postman, having imported our proto file.
Now click Select a method to choose the gRPC method to invoke.
At this point Postman knows about our server, so the RPCs defined in our server will be available in the dropdown.
While sending a request, you can authorize the request with the help of the Authorization widget if needed. You can also send some Metadata if required.
Select a method
Select the RPCRequest method, which is our server's unary RPC, and simply click the Invoke button to invoke the RPC. We received our response.
empty request with response
Postman knows about our server, so it has a Generate Example Message feature.
Click Generate Example Message at the bottom of the Message widget and invoke the RPC.
Generate a message and receive the response
Wow! We received our response.
Now we will try a client streaming RPC in Postman.
Once you select the client streaming method under Select a method, Postman opens a widget to send our request stream.
Stream
Now click Generate Example Message to generate a message, or compose one, and click Send.
send a stream message
Repeat this step as many times as you like, since it is a client streaming RPC. Once you are done with your message stream, click End Streaming.
And you will get the response back.
Client streaming response
We received our response.
Now select the ServerStreaming method and invoke the RPC with our message. We will get our stream of responses.
Server streaming
We received the stream responses.
Select the StreamingBiDirectional method and invoke the RPC to test bidirectional streaming.
Once I sent the request, I received the response for that request (that is how the server is implemented).
send and receive
After sending requests and receiving responses, end the stream.
Bidirectional streaming
You can save the requests in Postman Collections as usual.
The API that we created by importing our proto file can be reused.
Summary
Now you know how to use Postman as a gRPC client.
Link: https://faun.pub/test-and-debug-your-grpc-request-postman-9d97a42826b4
#postman #grpc #go
Cete
Cete is a distributed key value store server written in Go built on top of BadgerDB.
It provides functions through gRPC (HTTP/2 + Protocol Buffers) or traditional RESTful API (HTTP/1.1 + JSON).
Cete implements Raft consensus algorithm by hashicorp/raft. It achieve consensus across all the instances of the nodes, ensuring that every change made to the system is made to a quorum of nodes, or none at all.
Cete makes it easy to bring up a cluster of BadgerDB instances (a cete of badgers).
Once you have satisfied the dependencies, build Cete for Linux as follows:
$ mkdir -p ${GOPATH}/src/github.com/mosuka
$ cd ${GOPATH}/src/github.com/mosuka
$ git clone https://github.com/mosuka/cete.git
$ cd cete
$ make build
If you want to build for another platform, set the GOOS and GOARCH environment variables. For example, build for macOS as follows:
$ make GOOS=darwin build
When the build succeeds, you can see the binary file like so:
$ ls ./bin
cete
If you want to test your changes, run the following command:
$ make test
To build distribution packages, run one of the following:
$ make GOOS=linux dist
$ make GOOS=darwin dist
| CLI Flag | Environment variable | Configuration File | Description |
|---|---|---|---|
| --config-file | - | - | config file; if omitted, cete.yaml in /etc and the home directory will be searched |
| --id | CETE_ID | id | node ID |
| --raft-address | CETE_RAFT_ADDRESS | raft_address | Raft server listen address |
| --grpc-address | CETE_GRPC_ADDRESS | grpc_address | gRPC server listen address |
| --http-address | CETE_HTTP_ADDRESS | http_address | HTTP server listen address |
| --data-directory | CETE_DATA_DIRECTORY | data_directory | data directory which stores the key-value store data and Raft logs |
| --peer-grpc-address | CETE_PEER_GRPC_ADDRESS | peer_grpc_address | listen address of the existing gRPC server in the joining cluster |
| --certificate-file | CETE_CERTIFICATE_FILE | certificate_file | path to the client server TLS certificate file |
| --key-file | CETE_KEY_FILE | key_file | path to the client server TLS key file |
| --common-name | CETE_COMMON_NAME | common_name | certificate common name |
| --log-level | CETE_LOG_LEVEL | log_level | log level |
| --log-file | CETE_LOG_FILE | log_file | log file |
| --log-max-size | CETE_LOG_MAX_SIZE | log_max_size | max size of a log file in megabytes |
| --log-max-backups | CETE_LOG_MAX_BACKUPS | log_max_backups | max backup count of log files |
| --log-max-age | CETE_LOG_MAX_AGE | log_max_age | max age of a log file in days |
| --log-compress | CETE_LOG_COMPRESS | log_compress | compress a log file |
Starting Cete is easy, as follows:
$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1
You can get the node information with the following command:
$ ./bin/cete node | jq .
or the following URL:
$ curl -X GET http://localhost:8000/v1/node | jq .
The result of the above command is:
{
"node": {
"raft_address": ":7000",
"metadata": {
"grpc_address": ":9000",
"http_address": ":8000"
},
"state": "Leader"
}
}
You can check the health status of the node.
$ ./bin/cete healthcheck | jq .
Also provides the following REST APIs
This endpoint always returns 200 and should be used to check Cete health.
$ curl -X GET http://localhost:8000/v1/liveness_check | jq .
This endpoint returns 200 when Cete is ready to serve traffic (i.e. respond to queries).
$ curl -X GET http://localhost:8000/v1/readiness_check | jq .
To put a key-value, execute the following command:
$ ./bin/cete set 1 value1
or, you can use the RESTful API as follows:
$ curl -X PUT 'http://127.0.0.1:8000/v1/data/1' --data-binary value1
$ curl -X PUT 'http://127.0.0.1:8000/v1/data/2' -H "Content-Type: image/jpeg" --data-binary @/path/to/photo.jpg
To get a key-value, execute the following command:
$ ./bin/cete get 1
or, you can use the RESTful API as follows:
$ curl -X GET 'http://127.0.0.1:8000/v1/data/1'
You can see the result. The result of the above command is:
value1
Deleting a value by key, execute the following command:
$ ./bin/cete delete 1
or, you can use the RESTful API as follows:
$ curl -X DELETE 'http://127.0.0.1:8000/v1/data/1'
Cete makes it easy to bring up a cluster. A Cete node is already running, but a single node is not fault tolerant. If you need to increase the fault tolerance, bring up 2 more data nodes like so:
$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000
The above example shows each Cete node running on the same host, so each node must listen on different ports. This would not be necessary if each node ran on a different host.
This instructs each new node to join an existing node; each node recognizes the joining clusters when started. So you now have a 3-node cluster, and that way you can tolerate the failure of 1 node. You can check the cluster with the following command:
$ ./bin/cete cluster | jq .
or, you can use the RESTful API as follows:
$ curl -X GET 'http://127.0.0.1:8000/v1/cluster' | jq .
You can see the result in JSON format. The result of the above command is:
{
"cluster": {
"nodes": {
"node1": {
"raft_address": ":7000",
"metadata": {
"grpc_address": ":9000",
"http_address": ":8000"
},
"state": "Leader"
},
"node2": {
"raft_address": ":7001",
"metadata": {
"grpc_address": ":9001",
"http_address": ":8001"
},
"state": "Follower"
},
"node3": {
"raft_address": ":7002",
"metadata": {
"grpc_address": ":9002",
"http_address": ":8002"
},
"state": "Follower"
}
},
"leader": "node1"
}
}
We recommend an odd number of nodes, 3 or more, in the cluster. In failure scenarios, data loss is inevitable on a single node, so avoid deploying single nodes.
In the above example, the node joins the cluster at startup, but you can also join a node that was already started in standalone mode to the cluster later, as follows:
$ ./bin/cete join --grpc-addr=:9000 node2 127.0.0.1:9001
or, you can use the RESTful API as follows:
$ curl -X PUT 'http://127.0.0.1:8000/v1/cluster/node2' --data-binary '
{
"raft_address": ":7001",
"metadata": {
"grpc_address": ":9001",
"http_address": ":8001"
}
}
'
To remove a node from the cluster, execute the following command:
$ ./bin/cete leave --grpc-addr=:9000 node2
or, you can use the RESTful API as follows:
$ curl -X DELETE 'http://127.0.0.1:8000/v1/cluster/node2'
The following command stores a key-value pair via any node in the cluster:
$ ./bin/cete set 1 value1 --grpc-address=:9000
So, you can get the document from the node specified by the above command as follows:
$ ./bin/cete get 1 --grpc-address=:9000
You can see the result. The result of the above command is:
value1
You can also get the same document from other nodes in the cluster as follows:
$ ./bin/cete get 1 --grpc-address=:9001
$ ./bin/cete get 1 --grpc-address=:9002
You can see the result. The result of the above command is:
value1
You can build the Docker container image like so:
$ make docker-build
You can also use the Docker container image already registered in docker.io like so:
$ docker pull mosuka/cete:latest
See https://hub.docker.com/r/mosuka/cete/tags/
You can also use the Docker container image already registered in docker.io like so:
$ docker pull mosuka/cete:latest
Running a Cete data node on Docker. Start Cete node like so:
$ docker run --rm --name cete-node1 \
-p 7000:7000 \
-p 8000:8000 \
-p 9000:9000 \
mosuka/cete:latest cete start \
--id=node1 \
--raft-address=:7000 \
--grpc-address=:9000 \
--http-address=:8000 \
--data-directory=/tmp/cete/node1
You can execute the command in docker container as follows:
$ docker exec -it cete-node1 cete node --grpc-address=:9000
Cete supports HTTPS access, ensuring that all communication between clients and a cluster is encrypted.
One way to generate the necessary resources is via openssl. For example:
$ openssl req -x509 -nodes -newkey rsa:4096 -keyout ./etc/cete-key.pem -out ./etc/cete-cert.pem -days 365 -subj '/CN=localhost'
Generating a 4096 bit RSA private key
............................++
........++
writing new private key to 'key.pem'
Starting a node with HTTPS enabled, node-to-node encryption, and with the above configuration file. It is assumed the HTTPS X.509 certificate and key are at the paths server.crt and key.pem respectively.
$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
You can access the cluster by adding a flag, such as the following command:
$ ./bin/cete cluster --grpc-address=:9000 --certificate-file=./cert.pem --common-name=localhost | jq .
or
$ curl -X GET https://localhost:8000/v1/cluster --cacert ./cert.pem | jq .
Author: Mosuka
Source Code: https://github.com/mosuka/cete
License: Apache-2.0 license