Getting Started With the Best API Protocol: gRPC vs. REST

gRPC, REST’s up-and-coming competitor, approaches synchronous communication from another angle, offering protocol buffers and typed contracts. What does that mean for your project?

In today’s technology landscape, most projects require the use of APIs. APIs bridge communication between services that may represent a single, complex system but may also reside on separate machines or use multiple, incompatible networks or languages.

Many standard technologies address the interservice communication needs of distributed systems, such as REST, SOAP, GraphQL, or gRPC. While REST is a favored approach, gRPC is a worthy contender, offering high performance, typed contracts, and excellent tooling.

REST Overview

Representational state transfer (REST) is a means of retrieving or manipulating a service’s data. A REST API is generally built on the HTTP protocol, using a URI to select a resource and an HTTP verb (e.g., GET, PUT, POST) to select the desired operation. Request and response bodies contain data that is specific to the operation, while their headers provide metadata. To illustrate, let’s look at a simplified example of retrieving a product via a REST API.

Here, we request a product resource with an ID of 11 and direct the API to respond in JSON format:

GET /products/11 HTTP/1.1
Accept: application/json

Given this request, our response (irrelevant headers omitted) may look like:

HTTP/1.1 200 OK
Content-Type: application/json

{ "id": 11, "name": "Purple Bowtie", "sku": "purbow", "price": { "amount": 100, "currencyCode": "USD" } }

While JSON may be human-readable, it is not optimal when used between services. The repetitive nature of referencing property names—even when compressed—can lead to bloated messages. Let’s look at an alternative to address this concern.

gRPC Overview

gRPC (gRPC Remote Procedure Calls) is an open-source, contract-based, cross-platform communication protocol that simplifies and manages interservice communication by exposing a set of functions to external clients.

Built on top of HTTP/2, gRPC leverages features such as bidirectional streaming and built-in Transport Layer Security (TLS). gRPC enables more efficient communication through serialized binary payloads. It uses protocol buffers by default as its mechanism for structured data serialization, similar to REST’s use of JSON.

Unlike JSON, however, protocol buffers are more than a serialized format. They include three other major parts:

  • A contract definition language found in .proto files (We’ll follow proto3, the latest protocol buffer language specification.)
  • Generated accessor-function code
  • Language-specific runtime libraries

The remote functions that are available on a service (defined in a .proto file) are listed inside the service node in the protocol buffer file. As developers, we get to define these functions and their parameters using protocol buffers’ rich type system. This system supports various numeric and date types, lists, dictionaries, and nullables to define our input and output messages.

These service definitions need to be available to both the server and the client. Unfortunately, there is no default mechanism to share these definitions aside from providing direct access to the .proto file itself.

This example .proto file defines a function to return a product entry, given an ID:

syntax = "proto3";

package product;

service ProductCatalog {
    rpc GetProductDetails (ProductDetailsRequest) returns (ProductDetailsReply);
}

message ProductDetailsRequest {
    int32 id = 1;
}

message ProductDetailsReply {
    int32 id = 1;
    string name = 2;
    string sku = 3;
    Price price = 4;
}

message Price {
    float amount = 1;
    string currencyCode = 2;
}

Snippet 1: ProductCatalog Service Definition

The strict typing and field ordering of proto3 make message deserialization considerably less taxing than parsing JSON.

Comparing REST vs. gRPC

To recap, the most significant points when comparing REST vs. gRPC are:

                      | REST                             | gRPC
Cross-platform        | Yes                              | Yes
Message Format        | Custom but generally JSON or XML | Protocol buffers
Message Payload Size  | Medium/Large                     | Small
Processing Complexity | Higher (text parsing)            | Lower (well-defined binary structure)
Browser Support       | Yes (native)                     | Yes (via gRPC-Web)

Where less-strict contracts and frequent additions to the payload are expected, JSON and REST are great fits. When contracts tend to stay more static and speed is of the utmost importance, gRPC generally wins out. In most projects I have worked on, gRPC has proved to be lighter and more performant than REST.

gRPC Service Implementation

Let’s build a streamlined project to explore how simple it is to adopt gRPC.

Creating the API Project

To get started, we will create a .NET 6 project in Visual Studio 2022 Community Edition (VS). We will select the ASP.NET Core gRPC Service template, name the project InventoryAPI, and name the solution Inventory.

[Screenshot: the "Configure your new project" dialog in Visual Studio 2022, with the Project name set to "InventoryAPI", the Location set to "C:\MyInventoryService", and the Solution name set to "Inventory"; "Place solution and project in the same directory" is left unchecked.]

 

Now, let’s choose the .NET 6.0 (Long-term support) option for our framework:

 

[Screenshot: the "Additional information" dialog in Visual Studio 2022, with ".NET 6.0 (Long-term support)" selected in the Framework dropdown; "Enable Docker" is left unchecked.]

Defining Our Product Service

Now that we’ve created the project, VS displays a sample gRPC service named Greeter, defined in a sample .proto file. We will repurpose Greeter’s core files to suit our needs.

  • To create our contract, we will replace the contents of greet.proto with Snippet 1, renaming the file product.proto.
  • To create our service, we will replace the contents of the GreeterService.cs file with Snippet 2, renaming the file ProductCatalogService.cs.
using Grpc.Core;
using Product;

namespace InventoryAPI.Services
{
    public class ProductCatalogService : ProductCatalog.ProductCatalogBase
    {
        public override Task<ProductDetailsReply> GetProductDetails(
            ProductDetailsRequest request, ServerCallContext context)
        {
            return Task.FromResult(new ProductDetailsReply
            {
                Id = request.Id,
                Name = "Purple Bowtie",
                Sku = "purbow",
                Price = new Price
                {
                    Amount = 100,
                    CurrencyCode = "USD"
                }
            });
        }
    }
}

Snippet 2: ProductCatalogService

The service now returns a hardcoded product. To make the service work, we need only change the service registration in Program.cs to reference the new service name. In our case, we will rename app.MapGrpcService<GreeterService>(); to app.MapGrpcService<ProductCatalogService>(); to make our new API runnable.

Fair Warning: Not Your Standard Protocol Test

While we may be tempted to try it, we cannot test our gRPC service through a browser aimed at its endpoint. If we were to attempt this, we would receive an error message indicating that communication with gRPC endpoints must be made through a gRPC client.

Creating the Client

To test our service, let’s use VS’s basic Console App template and create a gRPC client to call the API. I named mine InventoryApp.

For expediency, let’s reference a relative file path by which we will share our contract. We will add the reference manually to the .csproj file. Then, we’ll update the path and set Client mode. Note: I recommend you become familiar with and have confidence in your local folder structure before using relative referencing.

Here are the .proto references, as they appear in both the service and client project files:

Service project file (code to copy to the client project file):

  <ItemGroup>
    <Content Update="Protos\product.proto" GrpcServices="Server" />
  </ItemGroup>

Client project file (after pasting and editing):

  <ItemGroup>
    <Protobuf Include="..\InventoryAPI\Protos\product.proto" GrpcServices="Client" />
  </ItemGroup>

Now, to call our service, we’ll replace the contents of Program.cs. Our code will accomplish a number of objectives:

  1. Create a channel that represents the location of the service endpoint (the port may vary, so consult the launchSettings.json file for the actual value).
  2. Create the client object.
  3. Construct a simple request.
  4. Send the request.
using System.Text.Json;
using Grpc.Net.Client;
using Product;

var channel = GrpcChannel.ForAddress("https://localhost:7200");
var client = new ProductCatalog.ProductCatalogClient(channel);

var request = new ProductDetailsRequest
{
    Id = 1
};

var response = await client.GetProductDetailsAsync(request);

Console.WriteLine(JsonSerializer.Serialize(response, new JsonSerializerOptions
{
    WriteIndented = true
}));
Console.ReadKey();

Snippet 3: New Program.cs

Preparing for Launch

To test our code, in VS, we’ll right-click the solution and choose Set Startup Projects. In the Solution Property Pages dialog, we’ll:

  • Select the radio button beside Multiple startup projects, and in the Action drop-down menu, set both projects (InventoryAPI and InventoryApp) to Start.
  • Click OK.

Now we can start the solution by clicking Start in the VS toolbar (or by pressing the F5 key). Two new console windows will display: one to tell us the service is listening, the other to show us details of the retrieved product.

gRPC Contract Sharing

Now let’s use another method to connect the gRPC client to our service’s definition. The most client-accessible contract-sharing solution is to make our definitions available through a URL. Other options are either very brittle (file shared through a path) or require more effort (contract shared through a native package). Sharing through a URL (as SOAP and Swagger/OpenAPI do) is flexible and requires less code.

To get started, make the .proto file available as static content. We will edit the project file manually rather than through the VS properties UI: the file's build action is set to "Protobuf compiler," and changing that setting through the UI would break the build. The markup we add instead directs the build to copy the .proto file to the output so that it can be served from a web address. Our first step, then, is to add Snippet 4 to the InventoryAPI.csproj file:

  <ItemGroup>
    <Content Update="Protos\product.proto">
      <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </Content>
  </ItemGroup>

  <ItemGroup>
    <Content Include="Protos\product.proto" CopyToPublishDirectory="PreserveNewest" />
  </ItemGroup>

Snippet 4: Code to Add to the InventoryAPI Service Project File

Next, we insert the code in Snippet 5 at the top of the Program.cs file to set up an endpoint to return our .proto file:

using System.Net.Mime;
using Microsoft.AspNetCore.StaticFiles;
using Microsoft.Extensions.FileProviders;

Snippet 5: Namespace Imports

And now, we add Snippet 6 just before app.Run(), also in the Program.cs file:

var provider = new FileExtensionContentTypeProvider();
provider.Mappings.Clear();
provider.Mappings[".proto"] = MediaTypeNames.Text.Plain;
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(app.Environment.ContentRootPath, "Protos")),
    RequestPath = "/proto",
    ContentTypeProvider = provider
});

app.UseRouting();

Snippet 6: Code to Make .proto Files Accessible Through the API

With Snippets 4-6 added, the contents of the .proto file should be visible in the browser.

A New Test Client

Now we want to create a new console client and connect it to our existing server using VS's connected services (dependency) wizard. The catch is that this wizard doesn't speak HTTP/2, so we need to adjust our server to talk HTTP/1 and then start it. With our server making its .proto file available, we can build a new test client that hooks into it via the gRPC wizard.

  1. To change our server to talk over HTTP/1, we'll edit the appsettings.json file:
    1. Adjust the Protocol field (found at the path Kestrel.EndpointDefaults.Protocols) to read Http1.
    2. Save the file.
  2. For our new client to read this proto information, the server must be running. Originally, we started both the previous client and our server from VS’s Set Startup Projects dialog. Adjust the server solution to start only the server project, then start the solution. (Now that we have modified the HTTP version, our old client can no longer communicate with the server.)
  3. Next, create the new test client. Launch another instance of VS. We’ll repeat the steps as detailed in the Creating the API Project section, but this time, we’ll choose the Console App template. We’ll name our project and solution InventoryAppConnected.
  4. With the client chassis created, we’ll connect to our gRPC server. Expand the new project in the VS Solution Explorer.
    1. Right-click Dependencies and, in the context menu, select Manage Connected Services.
    2. On the Connected Services tab, click Add a service reference and choose gRPC.
    3. In the Add Service Reference dialog, choose the URL option and enter the HTTP version of the service address (remember to grab the randomly generated port number from launchSettings.json).
    4. Click Finish to add a service reference that can be easily maintained.

Feel free to check your work against the sample code for this example. Since, under the hood, VS has generated the same client we used in our first round of testing, we can reuse the contents of the Program.cs file from the previous client verbatim.

When we change a contract, we need to modify our client gRPC definition to match the updated .proto definition. To do so, we need only access VS’s Connected Services and refresh the relevant service entry. Now, our gRPC project is complete, and it’s easy to keep our service and client in sync.

Your Next Project Candidate: gRPC

Our gRPC implementation provides a firsthand glimpse into the benefits of using gRPC. REST and gRPC each have their own ideal use cases depending on contract type. However, when both options fit, I encourage you to try gRPC—it’ll put you ahead of the curve in the future of APIs.

Original article source at: https://www.toptal.com/

#grpc #rest 


How to Implement Views in Kalix

Introduction

There are mainly three building blocks in any Kalix application: Entities, Views, and Actions. Views, as the name suggests, are responsible for viewing data. Although we can fetch data using the entity_key defined in the domain, we use Views when we need more flexibility and customization.

This blog is a brief discussion on Views with a Value-Entity in Kalix and how we can use them in our application.

Note: To read about the basics of gRPC and Entities, visit the following links:
1. gRPC Descriptors
2. Types of entities in Kalix
3. Event Sourced entities in Kalix (1) (2)

Creating a View in Kalix

To create a View, we follow these steps:

a) Create a domain file.
The domain file represents the data that the View will display; that is, domain files are the source of data for a View. We also require this domain file for the second step, which uses these domain models to perform operations like CRUD.

message TodoItemState {
  string item_id = 1;
  string title = 2;
  string description = 3;
  string added_by = 4;
  bool done = 5;
}

b) Create a service for an Entity (a Value Entity or an Event Sourced Entity) whose state is defined in the domain file above. This service defines the remote procedure calls responsible for producing state changes for the Value Entity.

service TodoListService {
  option (kalix.codegen) = {
    value_entity: {
      name: "todolist.domain.TodoItem",
      entity_type: "todolist",
      state: "todolist.domain.TodoItemState"
    }
  };

  rpc AddItem(TodoItem) returns (google.protobuf.Empty) {
    option (google.api.http) = {
      post: "/todoitem/add"
      body: "*"
    };
  }

  rpc GetItemById(GetItemByIdRequest) returns (TodoItem) {
    option (google.api.http) = {
      get: "/todoitem/{item_id}"
    };
  }

  // Can add more such RPC calls
}

message TodoItem {
  string item_id = 1 [(kalix.field).entity_key = true];
  string title = 2;
  string description = 3;
  string added_by = 4;
  bool done = 5;
}

message GetItemByIdRequest {
  string item_id = 1 [(kalix.field).entity_key = true];
}

c) Create a View for the domain defined above. We use this View to query information based on our custom implementation.

  • Inform the code generator that a view needs to be created.
  • Next, we define a method that links our domain model with a table. This allows Kalix to update the table when there is a change in the state of the domain.
  • Next, we define another method responsible for fetching records from the table defined in the previous step. We use SQL queries here to get records.
service TodoListByName {
  option (kalix.codegen) = {
    view: {}
  };

  rpc UpdateTodoList(domain.TodoItemState) returns (domain.TodoItemState) {
    option (kalix.method).eventing.in = {
      value_entity: "todolist"
    };
    option (kalix.method).view.update = {
      table: "todolist"
    };
  }

  rpc GetTodoListItems(GetByNameRequest) returns (stream domain.TodoItemState) {
    option (kalix.method).view.query = {
      query: "SELECT * FROM todolist where added_by = :name"
    };
  }
}

message GetByNameRequest {
  string name = 1;
}

d) Finally, we have to register our View in the main class. This is a kind of binding and a necessary step; without it, our View will not be available.

  def createKalix(): Kalix = {
    KalixFactory.withComponents(
      new TodoItem(_),
      new TodoListByNameView(_))
  }

CQRS

CQRS stands for Command Query Responsibility Segregation. As the name suggests, it is about segregating the responsibility for querying (SELECT) records from a table from the responsibility for inserting, updating, and deleting them.

Kalix also follows the CQRS principle. The entities are responsible for creating, updating, and deleting records, and the Views are accountable for the querying part as we have seen above.

There are two sides in CQRS: the read side and the write side. The write side is responsible for storing state changes, and the read side retrieves data. Thus, updating a View does not happen in the same transaction as changing or persisting the state; Kalix projects these state changes onto the Views.

In Kalix, state changes occur in the state store (for value entities) or the event journal (for event-sourced entities). There is a delay between a state change and the moment that data becomes queryable.

Conclusion

The data layer in Kalix is entirely abstracted away, and as developers we don't have access to how it is implemented internally. Nonetheless, we can take advantage of the CQRS pattern, especially in enterprise applications where there is more reading activity than writing activity, and Views are the means to achieve that.

References:
To read more visit the Official Kalix Documentation page
GitHub link to the complete source code provided above
To read about Kalix and its advantages visit here

Original article source at: https://blog.knoldus.com/

#grpc #views #action 

Debbie Clay

What’s New in gRPC for .NET 7

In this tutorial, you will learn about what’s new in gRPC for .NET 7, including:

* Performance improvements
* Create RESTful services with gRPC JSON transcoding
* gRPC apps on Azure App Service

gRPC is a high-performance RPC framework used by developers around the world to build fast apps. This video covers building high-performance services with gRPC and what's new in .NET 7.

#dotnet #grpc

Thomas Granger

Node.js and Serving Highly Dynamic Content in Real-time

Node.js and serving highly dynamic content to over 500k devices in near real-time

Growing your service from several thousand users to hundreds of thousands is often a painful, yet rewarding, experience. We'll explore our journey in scaling Node.js-based applications that provide gRPC for last-mile communication, use AMQP for inter-service communication, and interact with PostgreSQL, Redis, and ClickHouse databases. You will learn about ways to achieve that scale: get to know the tools and techniques used to identify and measure baseline performance, drill down to the root cause of specific performance issues, analyze them, and make conscious decisions about ways to solve these problems.

#node #nodejs #grpc 

Elian Harber

Talos Linux: A Modern OS for Kubernetes

Talos Linux

A modern OS for Kubernetes.


Talos is a modern OS for running Kubernetes: secure, immutable, and minimal. Talos is fully open source, production-ready, and supported by the people at Sidero Labs. All system management is done via an API; there is no shell or interactive console. Benefits include:

  • Security: Talos reduces your attack surface: It's minimal, hardened, and immutable. All API access is secured with mutual TLS (mTLS) authentication.
  • Predictability: Talos eliminates configuration drift, reduces unknown factors by employing immutable infrastructure ideology, and delivers atomic updates.
  • Evolvability: Talos simplifies your architecture, increases your agility, and always delivers current stable Kubernetes and Linux versions.

Documentation

For instructions on deploying and managing Talos, see the Documentation.

Community

If you're interested in this project and would like to help in engineering efforts or have general usage questions, we are happy to have you! We hold a weekly meeting that all audiences are welcome to attend.

We would appreciate your feedback so that we can make Talos even better! To do so, you can take our survey.

Office Hours

You can subscribe to this meeting by joining the community forum above.

Note: You can convert the meeting hours to your local time.

Contributing

Contributions are welcomed and appreciated! See Contributing for our guidelines.

Download Details:

Author: Siderolabs
Source Code: https://github.com/siderolabs/talos 
License: MPL-2.0 license

#go #golang #linux #kubernetes #grpc 

Rupert Beatty

GRPC-swift: The Swift Language Implementation Of GRPC

gRPC Swift

This repository contains a gRPC Swift API and code generator.

It is intended for use with Apple's SwiftProtobuf support for Protocol Buffers. Both projects contain code generation plugins for protoc, Google's Protocol Buffer compiler, and both contain libraries of supporting code that is needed to build and run the generated code.

APIs and generated code are provided for both gRPC clients and servers, and can be built either with Xcode or the Swift Package Manager. Support is provided for all four gRPC API styles (Unary, Server Streaming, Client Streaming, and Bidirectional Streaming), and connections can be made over either secure (TLS) or insecure channels.

Versions

gRPC Swift has recently been rewritten on top of SwiftNIO as opposed to the core library provided by the gRPC project.

Version | Implementation | Branch | protoc Plugin         | Support
1.x     | SwiftNIO       | main   | protoc-gen-grpc-swift | Actively developed and supported
0.x     | gRPC C library | cgrpc  | protoc-gen-swiftgrpc  | No longer developed; security fixes only

The remainder of this README refers to the 1.x version of gRPC Swift.

Supported Platforms

gRPC Swift's platform support is identical to the platform support of Swift NIO.

The earliest supported version of Swift for gRPC Swift releases are as follows:

gRPC Swift Version | Earliest Swift Version
1.0.0 ..< 1.8.0    | 5.2
1.8.0 ..< 1.11.0   | 5.4
1.11.0 ...         | 5.5

Versions of clients and services which use Swift's concurrency support are available from gRPC Swift 1.8.0 and require Swift 5.6 or newer.

Getting gRPC Swift

There are two parts to gRPC Swift: the gRPC library and an API code generator.

Getting the gRPC library

Swift Package Manager

The Swift Package Manager is the preferred way to get gRPC Swift. Simply add the package dependency to your Package.swift:

dependencies: [
  .package(url: "https://github.com/grpc/grpc-swift.git", from: "1.9.0")
]

...and depend on "GRPC" in the necessary targets:

.target(
  name: ...,
  dependencies: [.product(name: "GRPC", package: "grpc-swift")]
)

Xcode

From Xcode 11 it is possible to add Swift Package dependencies to Xcode projects and link targets to products of those packages; this is the easiest way to integrate gRPC Swift with an existing xcodeproj.

Manual Integration

Alternatively, gRPC Swift can be manually integrated into a project:

  1. Build an Xcode project: swift package generate-xcodeproj,
  2. Add the generated project to your own project, and
  3. Add a build dependency on GRPC.

Getting the protoc Plugins

Binary releases of protoc, the Protocol Buffer Compiler, are available on GitHub.

To build the plugins, run make plugins in the main directory. This uses the Swift Package Manager to build both of the necessary plugins: protoc-gen-swift, which generates Protocol Buffer support code and protoc-gen-grpc-swift, which generates gRPC interface code.

To install these plugins, just copy the two executables (protoc-gen-swift and protoc-gen-grpc-swift) that show up in the main directory into a directory that is part of your PATH environment variable. Alternatively the full path to the plugins can be specified when using protoc.

Homebrew

The plugins are available from homebrew and can be installed with:

    $ brew install swift-protobuf grpc-swift

Examples

gRPC Swift has a number of tutorials and examples available. They are split across two directories:

  • /Sources/Examples contains examples which do not require additional dependencies and may be built using the Swift Package Manager.
  • /Examples contains examples which rely on external dependencies or may not be built by the Swift Package Manager (such as an iOS app).

Some of the examples are accompanied by tutorials, including:

  • A quick start guide for creating and running your first gRPC service.
  • A basic tutorial covering the creation and implementation of a gRPC service using all four call types as well as the code required to setup and run a server and make calls to it using a generated client.
  • An interceptors tutorial covering how to create and use interceptors with gRPC Swift.

Documentation

The docs directory contains further documentation.

Security

Please see SECURITY.md.

Contributing

Please get involved! See our guidelines for contributing.

Download Details:

Author: grpc
Source Code: https://github.com/grpc/grpc-swift 
License: Apache-2.0 license

#swift #grpc #protocol

Elian Harber

gRPC to JSON proxy generator following the gRPC HTTP spec

gRPC-Gateway

gRPC to JSON proxy generator following the gRPC HTTP spec

About

The gRPC-Gateway is a plugin of the Google protocol buffers compiler protoc. It reads protobuf service definitions and generates a reverse-proxy server which translates a RESTful HTTP API into gRPC. This server is generated according to the google.api.http annotations in your service definitions.

This helps you provide your APIs in both gRPC and RESTful style at the same time.

Docs

You can read our docs at:

Testimonials

We use the gRPC-Gateway to serve millions of API requests per day, and have been since 2018 and through all of that, we have never had any issues with it.

- William Mill, Ad Hoc

Background

gRPC is great -- it generates API clients and server stubs in many programming languages, it is fast, easy-to-use, bandwidth-efficient and its design is combat-proven by Google. However, you might still want to provide a traditional RESTful JSON API as well. Reasons can range from maintaining backward-compatibility, supporting languages or clients that are not well supported by gRPC, to simply maintaining the aesthetics and tooling involved with a RESTful JSON architecture.

This project aims to provide that HTTP+JSON interface to your gRPC service. A small amount of configuration in your service to attach HTTP semantics is all that's needed to generate a reverse-proxy with this library.

Installation

Compile from source

The following instructions assume you are using Go Modules for dependency management. Use a tool dependency to track the versions of the following executable packages:

// +build tools

package tools

import (
    _ "github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway"
    _ "github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2"
    _ "google.golang.org/grpc/cmd/protoc-gen-go-grpc"
    _ "google.golang.org/protobuf/cmd/protoc-gen-go"
)

Run go mod tidy to resolve the versions. Install by running

$ go install \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
    google.golang.org/protobuf/cmd/protoc-gen-go \
    google.golang.org/grpc/cmd/protoc-gen-go-grpc

This will place four binaries in your $GOBIN:

  • protoc-gen-grpc-gateway
  • protoc-gen-openapiv2
  • protoc-gen-go
  • protoc-gen-go-grpc

Make sure that your $GOBIN is in your $PATH.

Download the binaries

You may alternatively download the binaries from the GitHub releases page. We generate SLSA3 signatures using the OpenSSF's slsa-framework/slsa-github-generator during the release process. To verify a release binary:

  1. Install the verification tool from slsa-framework/slsa-verifier#installation.
  2. Download the provenance file attestation.intoto.jsonl from the GitHub releases page.
  3. Run the verifier:
slsa-verifier -artifact-path <the-binary> -provenance attestation.intoto.jsonl -source github.com/grpc-ecosystem/grpc-gateway -tag <the-tag>

Alternatively, see the section on remotely managed plugin versions below.

Usage

Define your gRPC service using protocol buffers

your_service.proto:

 syntax = "proto3";
 package your.service.v1;
 option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

 message StringMessage {
   string value = 1;
 }

 service YourService {
   rpc Echo(StringMessage) returns (StringMessage) {}
 }

Generate gRPC stubs

This step generates the gRPC stubs that you can use to implement the service and consume from clients:

Here's an example buf.gen.yaml you can use to generate the stubs with buf:

version: v1
plugins:
  - name: go
    out: gen/go
    opt:
      - paths=source_relative
  - name: go-grpc
    out: gen/go
    opt:
      - paths=source_relative

With this file in place, you can generate your files using buf generate.

For a complete example of using buf generate to generate protobuf stubs, see the boilerplate repo. For more information on generating the stubs with buf, see the official documentation.

If you are using protoc to generate stubs, here's an example of what a command might look like:

protoc -I . \
    --go_out ./gen/go/ --go_opt paths=source_relative \
    --go-grpc_out ./gen/go/ --go-grpc_opt paths=source_relative \
    your/service/v1/your_service.proto

Implement your service in gRPC as usual.

Generate reverse-proxy using protoc-gen-grpc-gateway

At this point, you have 3 options:

  • no further modifications, use the default mapping to HTTP semantics (method, path, etc.)
    • this will work on any .proto file, but will not allow setting HTTP paths, request parameters or similar
  • additional .proto modifications to use a custom mapping
    • relies on parameters in the .proto file to set custom HTTP mappings
  • no .proto modifications, but use an external configuration file
    • relies on an external configuration file to set custom HTTP mappings
    • mostly useful when the source proto file isn't under your control
  1. Using the default mapping
  2. With custom annotations
  3. External configuration: If you do not want to (or cannot) modify the proto file for use with gRPC-Gateway, you can alternatively use an external gRPC Service Configuration file. Check our documentation for more information. This is best combined with the standalone=true option to generate a file that can live in its own package, separate from the files generated by the source protobuf file.

Write an entrypoint for the HTTP reverse-proxy server

package main

import (
  "context"
  "flag"
  "net/http"

  "github.com/golang/glog"
  "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/insecure"

  gw "github.com/yourorg/yourrepo/proto/gen/go/your/service/v1/your_service"  // Update
)

var (
  // command-line options:
  // gRPC server endpoint
  grpcServerEndpoint = flag.String("grpc-server-endpoint",  "localhost:9090", "gRPC server endpoint")
)

func run() error {
  ctx := context.Background()
  ctx, cancel := context.WithCancel(ctx)
  defer cancel()

  // Register gRPC server endpoint
  // Note: Make sure the gRPC server is running properly and accessible
  mux := runtime.NewServeMux()
  opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
  err := gw.RegisterYourServiceHandlerFromEndpoint(ctx, mux,  *grpcServerEndpoint, opts)
  if err != nil {
    return err
  }

  // Start HTTP server (and proxy calls to gRPC server endpoint)
  return http.ListenAndServe(":8081", mux)
}

func main() {
  flag.Parse()
  defer glog.Flush()

  if err := run(); err != nil {
    glog.Fatal(err)
  }
}

(Optional) Generate OpenAPI definitions using protoc-gen-openapiv2

Here's what a buf.gen.yaml file might look like:

version: v1
plugins:
  - name: go
    out: gen/go
    opt:
      - paths=source_relative
  - name: go-grpc
    out: gen/go
    opt:
      - paths=source_relative
  - name: grpc-gateway
    out: gen/go
    opt:
      - paths=source_relative
  - name: openapiv2
    out: gen/openapiv2

To use the custom protobuf annotations supported by protoc-gen-openapiv2, we need another dependency added to our protobuf generation step. If you are using buf, you can add the buf.build/grpc-ecosystem/grpc-gateway dependency to your deps array:

version: v1
name: buf.build/yourorg/myprotos
deps:
  - buf.build/googleapis/googleapis
  - buf.build/grpc-ecosystem/grpc-gateway

With protoc (just the swagger file):

protoc -I . --openapiv2_out ./gen/openapiv2 \
    --openapiv2_opt logtostderr=true \
    your/service/v1/your_service.proto

If you are using protoc to generate stubs, you will need to copy the protobuf files from the protoc-gen-openapiv2/options directory of this repository and provide them to protoc when running it.

Note that this plugin also supports generating OpenAPI definitions for unannotated methods; use the generate_unbound_methods option to enable this.

With the HTTP mapping for a gRPC service method, it is possible to create duplicate mappings whose only difference is the constraints on the path parameter.

/v1/{name=projects/*} and /v1/{name=organizations/*} both become /v1/{name}. When this occurs, the plugin renames the path parameter with a "_1" (or "_2", etc.) suffix to differentiate the operations. So in the above example, the second path would become /v1/{name_1=organizations/*}. This can also cause OpenAPI clients to URL-encode the "/" that is part of the path parameter, as that is what the OpenAPI specification defines. To allow the gRPC-Gateway to accept the URL-encoded slash and still route the request, use UnescapingModeAllCharacters or UnescapingModeLegacy (which is currently the default, though this may change in future versions). See Customizing Your Gateway for more information.
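
For illustration, here is a minimal sketch (assuming the v2 runtime package and its WithUnescapingMode ServeMux option) of configuring the gateway mux to accept URL-encoded slashes as described above:

package main

import (
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func main() {
	// Pass URL-encoded characters (including "/") in path parameters through
	// to the gRPC method instead of rejecting or re-splitting them.
	mux := runtime.NewServeMux(
		runtime.WithUnescapingMode(runtime.UnescapingModeAllCharacters),
	)

	// ... register the generated *HandlerFromEndpoint functions on mux here ...

	if err := http.ListenAndServe(":8081", mux); err != nil {
		panic(err)
	}
}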

Usage with remote plugins

As an alternative to all of the above, you can use buf with remote plugins to manage plugin versions and generation. An example buf.gen.yaml using remote plugin generation looks like this:

version: v1
plugins:
  - remote: buf.build/library/plugins/go:v1.27.1-1
    out: gen/go
    opt:
      - paths=source_relative
  - remote: buf.build/library/plugins/go-grpc:v1.1.0-2
    out: gen/go
    opt:
      - paths=source_relative
  - remote: buf.build/grpc-ecosystem/plugins/grpc-gateway:v2.6.0-1
    out: gen/go
    opt:
      - paths=source_relative
  - remote: buf.build/grpc-ecosystem/plugins/openapiv2:v2.6.0-1
    out: gen/openapiv2

This requires no local installation of any plugins. Be careful to use the same version of the generator as the runtime library, i.e. if using v2.6.0-1, run

$ go get github.com/grpc-ecosystem/grpc-gateway/v2@v2.6.0

to get the same version of the runtime in your go.mod.

Video intro

This GopherCon UK 2019 presentation from our maintainer @JohanBrandhorst provides a good intro to using the gRPC-Gateway. It uses the following boilerplate repo as a base: https://github.com/johanbrandhorst/grpc-gateway-boilerplate.

Parameters and flags

When using buf to generate stubs, flags and parameters are passed through the opt field in your buf.gen.yaml file, for example:

version: v1
plugins:
  - name: grpc-gateway
    out: gen/go
    opt:
      - paths=source_relative
      - grpc_api_configuration=path/to/config.yaml
      - standalone=true

During code generation with protoc, flags to gRPC-Gateway tools must be passed through protoc using one of 2 patterns:

  • as part of the --<tool_suffix>_out protoc parameter: --<tool_suffix>_out=<flags>:<path>
--grpc-gateway_out=logtostderr=true,repeated_path_param_separator=ssv:.
--openapiv2_out=logtostderr=true,repeated_path_param_separator=ssv:.
  • using additional --<tool_suffix>_opt parameters: --<tool_suffix>_opt=<flag>[,<flag>]*
--grpc-gateway_opt logtostderr=true,repeated_path_param_separator=ssv
# or separately
--grpc-gateway_opt logtostderr=true --grpc-gateway_opt repeated_path_param_separator=ssv
--openapiv2_opt logtostderr=true,repeated_path_param_separator=ssv
# or separately
--openapiv2_opt logtostderr=true --openapiv2_opt repeated_path_param_separator=ssv

More examples

More examples are available under the examples directory.

  • proto/examplepb/echo_service.proto, proto/examplepb/a_bit_of_everything.proto, proto/examplepb/unannotated_echo_service.proto: service definition
    • proto/examplepb/echo_service.pb.go, proto/examplepb/a_bit_of_everything.pb.go, proto/examplepb/unannotated_echo_service.pb.go: [generated] stub of the service
    • proto/examplepb/echo_service.pb.gw.go, proto/examplepb/a_bit_of_everything.pb.gw.go, proto/examplepb/unannotated_echo_service.pb.gw.go: [generated] reverse proxy for the service
    • proto/examplepb/unannotated_echo_service.yaml: gRPC API Configuration for unannotated_echo_service.proto
  • server/main.go: service implementation
  • main.go: entrypoint of the generated reverse proxy

To use the same port for custom HTTP handlers (e.g. serving swagger.json), gRPC-Gateway, and a gRPC server, see this example by CoreOS (and its accompanying blog post).
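
For context, the heart of that single-port setup is typically a small handler that splits traffic by protocol and content type. Here is a minimal sketch (assuming the combined listener is served over TLS, since gRPC requires HTTP/2; the certificate paths are placeholders):

package main

import (
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

// grpcHandlerFunc sends HTTP/2 requests carrying a gRPC content type to the
// gRPC server and everything else (gateway mux, swagger.json, ...) to the
// plain HTTP handler, so both can share a single TLS port.
func grpcHandlerFunc(grpcServer *grpc.Server, otherHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r)
		} else {
			otherHandler.ServeHTTP(w, r)
		}
	})
}

func main() {
	grpcServer := grpc.NewServer()
	httpMux := http.NewServeMux()
	// Register gRPC services on grpcServer and HTTP/gateway handlers on httpMux here.

	// TLS is required so that HTTP/2 (and therefore gRPC) can be negotiated.
	_ = http.ListenAndServeTLS(":8443", "server.crt", "server.key",
		grpcHandlerFunc(grpcServer, httpMux))
}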

Features

Supported

  • Generating JSON API handlers.
  • Method parameters in the request body.
  • Method parameters in the request path.
  • Method parameters in the query string.
  • Enum fields in the path parameter (including repeated enum fields).
  • Mapping streaming APIs to newline-delimited JSON streams.
  • Mapping HTTP headers with Grpc-Metadata- prefix to gRPC metadata (prefixed with grpcgateway-)
  • Optionally emitting API definitions for OpenAPI (Swagger) v2.
  • Setting gRPC timeouts through inbound HTTP Grpc-Timeout header.
  • Partial support for gRPC API Configuration files as an alternative to annotation.
  • Automatically translating PATCH requests into Field Mask gRPC requests. See the docs for more information.

No plan to support

But patches are welcome.

  • Method parameters in HTTP headers.
  • Handling trailer metadata.
  • Encoding request/response body in XML.
  • True bi-directional streaming.

Mapping gRPC to HTTP

  • How gRPC error codes map to HTTP status codes in the response.
  • HTTP request source IP is added as X-Forwarded-For gRPC request header.
  • HTTP request host is added as X-Forwarded-Host gRPC request header.
  • HTTP Authorization header is added as authorization gRPC request header.
  • Remaining Permanent HTTP header keys (as specified by the IANA here) are prefixed with grpcgateway- and added with their values to gRPC request header.
  • HTTP headers that start with 'Grpc-Metadata-' are mapped to gRPC metadata (prefixed with grpcgateway-).
  • While configurable, the default {un,}marshaling uses protojson.
  • The path template used to map gRPC service methods to HTTP endpoints supports the google.api.http path template syntax. For example, /api/v1/{name=projects/*/topics/*} or /prefix/{path=organizations/**}.

Contribution

See CONTRIBUTING.md.

Download Details:

Author: grpc-ecosystem
Source Code: https://github.com/grpc-ecosystem/grpc-gateway 
License: BSD-3-Clause license

#go #golang #grpc #rest 

Charles Cooper

Scalable Microservices with gRPC and Kubernetes

This tutorial will demonstrate the core concepts of gRPC and Kubernetes, and show you how to combine these two technologies to create scalable and performant microservices rooted in lessons learned from Google’s experience.

While migrating to a microservices-based architecture provides many benefits, it also brings up a lot of challenges. Unlike monolithic architectures, microservice architectures have to deal with coordinating, organizing, and managing a collection of different services with different scaling needs.
Often overlooked from a developer's perspective, HTTP client libraries are clunky and require code that defines paths, handles parameters, and deals with responses in bytes. gRPC abstracts all of this away and makes network calls feel like any other function calls defined for a struct.
This talk shows how to combine Kubernetes and gRPC, two popular open source projects built on the experience Google has gained running microservices at scale.
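
To make that point concrete, here is a minimal sketch of what a call through a generated Go gRPC client looks like (the pb package, Greeter service, and address are hypothetical placeholders, not taken from the talk):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/yourorg/hello/gen/go/hello/v1" // hypothetical generated package
)

func main() {
	// Dial the service; no paths, query parameters, or byte handling involved.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The generated client makes the RPC read like an ordinary method call.
	client := pb.NewGreeterClient(conn)
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage())
}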

gRPC is a platform-neutral RPC framework, based on HTTP/2 and Protobuf, used to build highly performant and scalable APIs. gRPC benefits from new features introduced in HTTP/2 like framing, bidirectional streaming, header compression, multiplexing, and flow control. gRPC is not just a blueprint for high-performance RPC; it also provides a methodology to generate services and clients in multiple languages. This talk is based on our experience contributing to the Network Intelligence Center's Simulator, which itself uses gRPC for all intra-server communication.

If you're interested in a dead-simple way to add gRPC load balancing to your Kubernetes services, regardless of what language they're written in, this talk is surely going to be beneficial.
gRPC is relatively new, but its fast-growing ecosystem and community will definitely make an impact on microservice development. Since gRPC is an open standard supported by all mainstream programming languages, this talk will help viewers understand how well suited it is to a microservice environment.

#microservices #grpc #kubernetes


Public interface Definitions of Google APIs for Dart and GRPC

Public interface definitions of Google APIs for Dart and gRPC.

Getting started

In your pubspec.yaml file add:

dependencies:
  protobuf_google: any

Usage

import 'package:protobuf_google/protobuf_google.dart';

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add protobuf_google

With Flutter:

 $ flutter pub add protobuf_google

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  protobuf_google: ^0.0.1

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:protobuf_google/protobuf_google.dart'; 

Download Details:

Author: xclud

Source Code: https://github.com/xclud/dart_protobuf_google/

#dart #grpc #android 

Minh Nguyet

How to Test and Debug Your gRPC Requests in Postman

We can use Postman for gRPC requests. (At the time of writing, this feature is in BETA.)

In this article, we will use the Postman tool (as a gRPC client) to test our gRPC API. I will use a simple gRPC server that I developed while writing this article.

By the end of this article, you will be able to test any gRPC server method.

Starting the gRPC Server

A High-Level Overview of Our Server

Our gRPC server has:

2 message types

  • SimpleRequest
  • SimpleResponse

1 service

  • SimpleService

4 RPCs in that service

  • RPCRequest - a unary RPC
  • ServerStreaming - a server streaming RPC
  • ClientStreaming - a client streaming RPC
  • StreamingBiDirectional - a bidirectional streaming RPC

The Proto File

syntax = "proto3";

option go_package = "simple/testgrpc";

message SimpleRequest{
    string request_need = 1;
}

message SimpleResponse{
    string response = 1;
}

service SimpleService{
    // unary RPC
    rpc RPCRequest(SimpleRequest) returns (SimpleResponse);
    // Server Streaming
    rpc ServerStreaming(SimpleRequest) returns (stream SimpleResponse);
    // Client Streaming
    rpc ClientStreaming(stream SimpleRequest) returns (SimpleResponse);
    // Bi-Directional Streaming
    rpc StreamingBiDirectional(stream SimpleRequest) returns (stream SimpleResponse);
}

I have started our gRPC server, listening at localhost:8090.

Postman gRPC Client

  • Open Postman. Click New.
  • Select gRPC Request.
  • A request window will open.
  • Enter the server URL. (Our server does not require TLS, so we do not select it.)
  • Now we have to select a method to invoke. For that, Postman needs to know about our gRPC server. We can make Postman aware of our server in three ways:

- by importing the proto file

- by using server reflection (for this, our server needs server reflection enabled as an optional extension, to assist clients in constructing requests at runtime without stub information precompiled into the client; a minimal sketch of enabling it in a Go server follows this list)

- by using Protobuf APIs created in the Postman workspace (alternatively, you can import the proto file and then create an API with the imported proto)
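
For reference, here is a minimal sketch of what enabling server reflection might look like in a Go gRPC server (the listen address and the commented-out service registration are placeholders for the simple server described above, not its actual code):

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":8090") // assumed listen address
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	s := grpc.NewServer()
	// Register the SimpleService implementation here, e.g.:
	// pb.RegisterSimpleServiceServer(s, &simpleServer{})

	// Expose the gRPC reflection service so that clients such as Postman
	// can discover services and build requests at runtime without the .proto file.
	reflection.Register(s)

	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}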

We will go with the first option and then create an API. (Postman takes care of creating the API once you import the proto file.)

  • Click Next. Postman asks us to import our proto file as an API so it can be reused.
  • Provide an API name and a version name. (If you don't have one yet, create it by typing the name you want; Postman will offer to create it.)

Click Import as API.

Now our server definition is available in Postman through the imported proto file.

Now click Select a method to choose the gRPC method we want to invoke.

At this point, Postman knows about our server, so the RPCs defined in it are available in the dropdown menu.

While sending a request, you can authorize it with the help of the Authorization widget if needed. You can also send some metadata if required.

Unary Request

[Screenshot: selecting a method]

Select the RPCRequest method, which is our server's unary RPC, and simply click the Invoke button to call the RPC; we receive our response.

[Screenshot: an empty request with its response]

Postman knows about our server, so it offers a Generate Example Message feature.

Click Generate Example Message at the bottom of the Message widget and invoke the RPC.

[Screenshot: a generated message and the received response]

There we go! We received our response.

Client Streaming

Now we will try the client streaming RPC in Postman.

Once you choose the client streaming method under Select a method, Postman opens a widget for sending our stream of requests.

[Screenshot: the streaming widget]

Now click Generate Example Message to create a message (or compose one yourself) and click Send.

[Screenshot: sending stream messages]

Repeat this step as many times as you like, since this is a client streaming RPC. Once you have finished your stream of messages, click End Streaming.

And you will receive the response.

[Screenshot: the client streaming response]

We received our response.

Server Streaming

Now select the ServerStreaming method and invoke the RPC with our message; we will receive the responses from the stream.

[Screenshot: server streaming responses]

We received the streamed responses.

Bidirectional Streaming

Select the StreamingBiDirectional method and invoke the RPC to test bidirectional streaming.

After I send a request, I receive a response for that request (that is how the server is implemented).

[Screenshot: sending and receiving]

After sending requests and receiving responses, end the stream.

[Screenshot: bidirectional streaming]

You can save these requests in a Postman Collection as usual.

API

The API that we created by importing our proto file can be reused.

Summary

Now you know how to use Postman as a gRPC client.

Link: https://faun.pub/test-and-debug-your-grpc-request-postman-9d97a42826b4

#postman #grpc #go

Elian Harber

Cete: A Distributed Key Value Store Server Written in Go

Cete

Cete is a distributed key value store server written in Go built on top of BadgerDB.
It provides functions through gRPC (HTTP/2 + Protocol Buffers) or a traditional RESTful API (HTTP/1.1 + JSON).
Cete implements the Raft consensus algorithm via hashicorp/raft. It achieves consensus across all node instances, ensuring that every change made to the system is made to a quorum of nodes, or none at all.
Cete makes it easy to bring up a cluster of BadgerDB instances (a cete of badgers).

Features

  • Easy deployment
  • Bringing up cluster
  • Database replication
  • An easy-to-use HTTP API
  • CLI is also available
  • Docker container image is available

Building Cete

Once you have satisfied the dependencies, build Cete for Linux as follows:

$ mkdir -p ${GOPATH}/src/github.com/mosuka
$ cd ${GOPATH}/src/github.com/mosuka
$ git clone https://github.com/mosuka/cete.git
$ cd cete
$ make build

If you want to build for another platform, set the GOOS and GOARCH environment variables. For example, build for macOS as follows:

$ make GOOS=darwin build

Binaries

You can see the binary file when the build succeeds, like so:

$ ls ./bin
cete

Testing Cete

If you want to test your changes, run the following command:

$ make test

Packaging Cete

Linux

$ make GOOS=linux dist

macOS

$ make GOOS=darwin dist

Configure Cete

CLI Flag            | Environment variable   | Configuration File | Description
--config-file       | -                      | -                  | config file. if omitted, cete.yaml in /etc and home directory will be searched
--id                | CETE_ID                | id                 | node ID
--raft-address      | CETE_RAFT_ADDRESS      | raft_address       | Raft server listen address
--grpc-address      | CETE_GRPC_ADDRESS      | grpc_address       | gRPC server listen address
--http-address      | CETE_HTTP_ADDRESS      | http_address       | HTTP server listen address
--data-directory    | CETE_DATA_DIRECTORY    | data_directory     | data directory which stores the key-value store data and Raft logs
--peer-grpc-address | CETE_PEER_GRPC_ADDRESS | peer_grpc_address  | listen address of the existing gRPC server in the joining cluster
--certificate-file  | CETE_CERTIFICATE_FILE  | certificate_file   | path to the client server TLS certificate file
--key-file          | CETE_KEY_FILE          | key_file           | path to the client server TLS key file
--common-name       | CETE_COMMON_NAME       | common_name        | certificate common name
--log-level         | CETE_LOG_LEVEL         | log_level          | log level
--log-file          | CETE_LOG_FILE          | log_file           | log file
--log-max-size      | CETE_LOG_MAX_SIZE      | log_max_size       | max size of a log file in megabytes
--log-max-backups   | CETE_LOG_MAX_BACKUPS   | log_max_backups    | max backup count of log files
--log-max-age       | CETE_LOG_MAX_AGE       | log_max_age        | max age of a log file in days
--log-compress      | CETE_LOG_COMPRESS      | log_compress       | compress a log file

Starting Cete node

Starting a Cete node is easy:

$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1

You can get the node information with the following command:

$ ./bin/cete node | jq .

or the following URL:

$ curl -X GET http://localhost:8000/v1/node | jq .

The result of the above command is:

{
  "node": {
    "raft_address": ":7000",
    "metadata": {
      "grpc_address": ":9000",
      "http_address": ":8000"
    },
    "state": "Leader"
  }
}

Health check

You can check the health status of the node.

$ ./bin/cete healthcheck | jq .

It also provides the following REST APIs:

Liveness probe

This endpoint always returns 200 and should be used to check Cete health.

$ curl -X GET http://localhost:8000/v1/liveness_check | jq .

Readiness probe

This endpoint returns 200 when Cete is ready to serve traffic (i.e. respond to queries).

$ curl -X GET http://localhost:8000/v1/readiness_check | jq .

Putting a key-value

To put a key-value, execute the following command:

$ ./bin/cete set 1 value1

or, you can use the RESTful API as follows:

$ curl -X PUT 'http://127.0.0.1:8000/v1/data/1' --data-binary value1
$ curl -X PUT 'http://127.0.0.1:8000/v1/data/2' -H "Content-Type: image/jpeg" --data-binary @/path/to/photo.jpg

Getting a key-value

To get a key-value, execute the following command:

$ ./bin/cete get 1

or, you can use the RESTful API as follows:

$ curl -X GET 'http://127.0.0.1:8000/v1/data/1'

You can see the result. The result of the above command is:

value1

Deleting a key-value

To delete a value by key, execute the following command:

$ ./bin/cete delete 1

or, you can use the RESTful API as follows:

$ curl -X DELETE 'http://127.0.0.1:8000/v1/data/1'

Bringing up a cluster

Cete makes it easy to bring up a cluster. A Cete node is already running, but it is not fault-tolerant. If you need to increase the fault tolerance, bring up 2 more data nodes like so:

$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000

The above example shows each Cete node running on the same host, so each node must listen on different ports. This would not be necessary if each node ran on a different host.

This instructs each new node to join an existing node; each node recognizes the joining cluster when started. So you now have a 3-node cluster, which can tolerate the failure of 1 node. You can check the cluster with the following command:

$ ./bin/cete cluster | jq .

or, you can use the RESTful API as follows:

$ curl -X GET 'http://127.0.0.1:8000/v1/cluster' | jq .

You can see the result in JSON format. The result of the above command is:

{
  "cluster": {
    "nodes": {
      "node1": {
        "raft_address": ":7000",
        "metadata": {
          "grpc_address": ":9000",
          "http_address": ":8000"
        },
        "state": "Leader"
      },
      "node2": {
        "raft_address": ":7001",
        "metadata": {
          "grpc_address": ":9001",
          "http_address": ":8001"
        },
        "state": "Follower"
      },
      "node3": {
        "raft_address": ":7002",
        "metadata": {
          "grpc_address": ":9002",
          "http_address": ":8002"
        },
        "state": "Follower"
      }
    },
    "leader": "node1"
  }
}

We recommend an odd number of nodes, three or more, in the cluster: writes require a majority quorum, so a 3-node cluster tolerates one failure and a 5-node cluster tolerates two. With a single node, data loss is inevitable in failure scenarios, so avoid deploying single nodes.

In the above example, the nodes join the cluster at startup, but you can also join a node that was started in standalone mode to the cluster later, as follows:

$ ./bin/cete join --grpc-addr=:9000 node2 127.0.0.1:9001

or, you can use the RESTful API as follows:

$ curl -X PUT 'http://127.0.0.1:8000/v1/cluster/node2' --data-binary '
{
  "raft_address": ":7001",
  "metadata": {
    "grpc_address": ":9001",
    "http_address": ":8001"
  }
}
'

To remove a node from the cluster, execute the following command:

$ ./bin/cete leave --grpc-addr=:9000 node2

or, you can use the RESTful API as follows:

$ curl -X DELETE 'http://127.0.0.1:8000/v1/cluster/node2'

You can put a key-value through any node in the cluster. For example, the following command writes through the node listening on gRPC port 9000:

$ ./bin/cete set 1 value1 --grpc-address=:9000 

You can then get that key-value from the same node as follows:

$ ./bin/cete get 1 --grpc-address=:9000

You can see the result. The result of the above command is:

value1

You can also get the same key-value from the other nodes in the cluster as follows:

$ ./bin/cete get 1 --grpc-address=:9001
$ ./bin/cete get 1 --grpc-address=:9002

You can see the result. The result of the above commands is:

value1

Cete on Docker

Building Cete Docker container image on localhost

You can build the Docker container image like so:

$ make docker-build

Pulling Cete Docker container image from docker.io

You can also use the Docker container image already registered in docker.io like so:

$ docker pull mosuka/cete:latest

See https://hub.docker.com/r/mosuka/cete/tags/

Running Cete node on Docker

To run a Cete data node on Docker, start it like so:

$ docker run --rm --name cete-node1 \
    -p 7000:7000 \
    -p 8000:8000 \
    -p 9000:9000 \
    mosuka/cete:latest cete start \
      --id=node1 \
      --raft-address=:7000 \
      --grpc-address=:9000 \
      --http-address=:8000 \
      --data-directory=/tmp/cete/node1

You can execute commands inside the Docker container as follows:

$ docker exec -it cete-node1 cete node --grpc-address=:9000

Securing Cete

Cete supports HTTPS access, ensuring that all communication between clients and a cluster is encrypted.

Generating a certificate and private key

One way to generate the necessary resources is via openssl. For example:

$ openssl req -x509 -nodes -newkey rsa:4096 -keyout ./etc/cete-key.pem -out ./etc/cete-cert.pem -days 365 -subj '/CN=localhost'
Generating a 4096 bit RSA private key
............................++
........++
writing new private key to './etc/cete-key.pem'

Secure cluster example

Start the nodes with HTTPS and node-to-node encryption enabled. It is assumed that the X.509 certificate and private key are at ./etc/cert.pem and ./etc/key.pem, respectively.

$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost

You can access the cluster by adding the TLS-related flags, for example:

$ ./bin/cete cluster --grpc-address=:9000 --certificate-file=./cert.pem --common-name=localhost | jq .

or

$ curl -X GET https://localhost:8000/v1/cluster --cacert ./cert.pem | jq .
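
To make the same HTTPS call from Go (mirroring curl --cacert), you can load the self-signed certificate into a custom CA pool. This is a minimal sketch using only the standard library; adjust the certificate path and port to match your setup.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Trust the self-signed certificate generated earlier.
	pem, err := os.ReadFile("./cert.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		panic("failed to parse certificate")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	resp, err := client.Get("https://localhost:8000/v1/cluster")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}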

Author: Mosuka
Source Code: https://github.com/mosuka/cete 
License: Apache-2.0 license

#go #golang #rest #grpc 

How to Test and Debug Your gRPC Request in Postman

We can use Postman for gRPC requests. (At the time of writing, this feature is in the BETA stage.)

In this article, we are going to use the Postman tool (as a gRPC client) to test our gRPC API. I am going to use a simple gRPC server that I developed while writing this article.

By the end of this article, you should be able to test any gRPC server method.

Start the gRPC server

High-level overview of our server

Our gRPC server has

2 message types

  • SimpleRequest
  • SimpleResponse

1 service

  • SimpleService

4 RPCs in that service

  • RPCRequest - a unary RPC
  • ServerStreaming - a server streaming RPC
  • ClientStreaming - a client streaming RPC
  • StreamingBiDirectional - a bidirectional streaming RPC

Proto file

syntax = "proto3";

option go_package = "simple/testgrpc";

message SimpleRequest{
    string request_need = 1;
}

message SimpleResponse{
    string response = 1;
}

service SimpleService{
    // unary RPC
    rpc RPCRequest(SimpleRequest) returns (SimpleResponse);
    // Server Streaming
    rpc ServerStreaming(SimpleRequest) returns (stream SimpleResponse);
    // Client Streaming
    rpc ClientStreaming(stream SimpleRequest) returns (SimpleResponse);
    // Bi-Directional Streaming
    rpc StreamingBiDirectional(stream SimpleRequest) returns (stream SimpleResponse);
}

I have started our gRPC server, which is listening on localhost:8090.
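
The author's actual server code is linked from the article; purely for orientation, here is a minimal sketch of what a Go implementation of the unary RPC might look like. The pb import path comes from the go_package option above, and the generated identifiers (RegisterSimpleServiceServer, UnimplementedSimpleServiceServer, GetRequestNeed, and the stream types used later) are assumptions based on standard protoc-gen-go / protoc-gen-go-grpc output. The streaming handlers are sketched later in the article.

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "simple/testgrpc" // assumed import path, taken from the go_package option above
)

// simpleServer is a hypothetical implementation of SimpleService.
type simpleServer struct {
	pb.UnimplementedSimpleServiceServer // assumed generated embed for forward compatibility
}

// RPCRequest handles the unary RPC: one request in, one response out.
func (s *simpleServer) RPCRequest(ctx context.Context, req *pb.SimpleRequest) (*pb.SimpleResponse, error) {
	return &pb.SimpleResponse{Response: "got: " + req.GetRequestNeed()}, nil
}

func main() {
	lis, err := net.Listen("tcp", "localhost:8090") // the address used in this article
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterSimpleServiceServer(s, &simpleServer{})
	log.Fatal(s.Serve(lis))
}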

 

Postman gRPC client

  • Open Postman. Click New.
  • Select gRPC Request.
  • A request window will open.
  • Enter the server URL. (Our server does not require TLS, so we do not select it.)
  • Now we need to select a method to invoke. For that, Postman has to know about our gRPC server. We can make Postman aware of our server in three ways:

- by importing the proto file

- by using server reflection (for this, our server needs to expose server reflection as an optional extension, which helps clients construct requests at runtime without stub information precompiled into the client); a minimal Go sketch of enabling this follows below

- by using Protobuf APIs created in the Postman workspace (you can also import the proto file and then create the API with the imported proto)

We will choose the first option and then create the API. (Postman takes care of creating the API once you import the proto file.)
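
If you would rather use the second option, server reflection, the Go server only needs to register the grpc-go reflection service. This fragment extends the earlier hedged server sketch rather than being a standalone program:

// Add this import to the sketch above:
//
//     "google.golang.org/grpc/reflection"
//
// and register the reflection service right after registering SimpleService:

s := grpc.NewServer()
pb.RegisterSimpleServiceServer(s, &simpleServer{})
reflection.Register(s) // lets clients such as Postman discover services and methods at runtime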

  • Click Next. Postman asks us to import our proto file as an API so it can be reused.
  • Provide the API name and version name. (If you do not have one yet, type the name you want and Postman will offer to create it.)

Click Import as API.

Now, having imported our proto file, our server definition is ready in Postman.

Now click Select a method to choose the gRPC method we want to invoke.

At this point Postman knows about our server, so the RPCs defined on our server are available in the dropdown.

When sending a request, you can authorize it with the Authorization widget if necessary. You can also send metadata if needed.

Unary request

Select a method

Select the RPCRequest method, which is our server's unary RPC, and simply click the Invoke button to invoke the RPC; we get our response.

empty request with the response

Postman knows about our server, so it can generate an example message.

Click Generate Example Message at the bottom of the message widget and invoke the RPC.

Generating a message and getting the response

Wow! We got our response.

Client streaming

Now we will try the client streaming RPC in Postman.

Once you select the client streaming method under Select a method, Postman opens a widget for sending our request stream.

Stream

Now click Generate Example Message to generate a message, or compose one yourself, and click Send.

send stream message

Repeat this step as many times as you like, since this is a client streaming RPC. When you are done with your message stream, click End Streaming.

And you will get the response back.

Client streaming response

We got our response.
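
The single reply at the end is the defining behavior of a client-streaming RPC. On the server side (continuing the hedged Go sketch from earlier, with the same assumed generated identifiers, plus "fmt" and "io" in the imports), the handler typically reads requests until io.EOF and only then sends one response:

// ClientStreaming reads the whole request stream, then replies exactly once.
// pb.SimpleService_ClientStreamingServer is the assumed generated stream type.
func (s *simpleServer) ClientStreaming(stream pb.SimpleService_ClientStreamingServer) error {
	count := 0
	for {
		_, err := stream.Recv()
		if err == io.EOF {
			// The client clicked "End Streaming": send the single response.
			return stream.SendAndClose(&pb.SimpleResponse{
				Response: fmt.Sprintf("received %d messages", count),
			})
		}
		if err != nil {
			return err
		}
		count++
	}
}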

Server streaming

Now select the ServerStreaming method and invoke the RPC with our message. We will get our streamed responses.

Server streaming

We got the stream responses.

Bidirectional streaming

Select the StreamingBiDirectional method and invoke the RPC to test bidirectional streaming.

As soon as I sent a request, I received a response to that request (that is how the server is implemented).

send and receive

After sending the request and receiving the response, end the stream.

Bidirectional streaming
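
The per-message replies described above correspond to a server loop that answers each request as it arrives. Again, this is only a hedged sketch using the assumed generated stream type, continuing the earlier server example:

// StreamingBiDirectional answers every request as soon as it arrives,
// which is why Postman shows a reply immediately after each message.
func (s *simpleServer) StreamingBiDirectional(stream pb.SimpleService_StreamingBiDirectionalServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil // the client ended the stream
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&pb.SimpleResponse{Response: "echo: " + req.GetRequestNeed()}); err != nil {
			return err
		}
	}
}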

You can save requests in Postman Collections as usual.

API

The API we created by importing our proto file can be reused.

Summary

Now you know how to use Postman as a gRPC client.

Link: https://faun.pub/test-and-debug-your-grpc-request-postman-9d97a42826b4

#postman #grpc #go
