Spring

Spring makes programming Java quicker, easier, and safer for everybody. Spring’s focus on speed, simplicity, and productivity has made it the world's most popular Java framework.

Annotations In Java

How to create Custom Annotations In Java?

https://javatechonline.com/annotations-in-java/ 
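
To give a quick taste of what the linked article walks through, here is a minimal sketch of declaring a custom annotation; the @Loggable name and its value element are hypothetical examples, not taken from the article:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical custom annotation: kept at runtime so it can be read via reflection,
// and restricted to method declarations.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Loggable {
    String value() default ""; // optional label for the log entry
}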

#java #javaprogramming #javaprogramminglanguage #javadeveloper #javadevelopment #javadevelopers #javadeveloperjobs #javabackend #javabackenddeveloper #javafullstack #javafullstackdeveloper #javaprogrammer #javatraining #javaee #j2ee #j2eedeveloper #java8 #microservices #microservicesarchitecture #microservice #microservicios #mvcframework #mvc #collections #sorting #javacode #spring #springframework #javaspring #javaspringboot #springmvc #springboot #interviewquestions #interviewpreparation #interview #annotation

Distributed Tracing with OpenTelemetry, Spring Cloud Sleuth, Kafka, and Jaeger

Distributed tracing gives you insight into how a particular service performs as part of the whole in a distributed software system. It tracks and records requests from their point of origin to their destination, along with the systems they pass through.

In this article, we'll implement distributed tracing in three Spring Boot microservices using OpenTelemetry, Spring Cloud Sleuth, Kafka, and Jaeger.

First, let's look at some of the basic terms in distributed tracing.

Span: Represents a single unit of work within the system. Spans can be nested within one another to model the decomposition of the work. For example, one span might be a call to a REST endpoint, a child span might be that endpoint calling another one, and so on in a different service.

Trace: A collection of spans that all share the same root span, or, more simply, all the spans that were created as a direct result of the original request. The hierarchy of spans (each with its own parent span, up to the root span) can be used to form directed acyclic graphs showing the request's path as it travels through the various components.

OpenTelemetry

OpenTelemetry, also known as OTel for short, is a vendor-neutral, open source observability framework for instrumenting, generating, collecting, and exporting telemetry data such as traces, metrics, and logs. As a Cloud Native Computing Foundation (CNCF) incubating project, OTel aims to provide unified, vendor-agnostic sets of libraries and APIs, mainly for collecting data and transferring it somewhere. OTel is becoming the world standard for generating and managing telemetry data, and it is being widely adopted.

Spring Cloud Sleuth

Sleuth is a project managed and maintained by the Spring Cloud team, aimed at integrating distributed tracing functionality into Spring Boot applications. It ships as a typical Spring Starter, so just by adding it as a dependency, the auto-configuration handles all the integration and instrumentation across the app. Here are a few things Sleuth instruments out of the box:

  • requests received at Spring MVC controllers (REST endpoints)
  • requests over messaging technologies like Kafka or MQ
  • requests made with RestTemplate, WebClient, etc.

Sleuth adds an interceptor to ensure that all tracing information is passed along with the requests. Each time a call is made, a new span is created. It is closed upon receiving the response.

Sleuth can trace your requests and messages so that you can correlate that communication with the corresponding log entries. You can also export the tracing information to an external system to visualize latency.
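
Beyond the automatic instrumentation, Sleuth also exposes a Tracer abstraction for creating custom spans by hand. Here is a minimal sketch, assuming Spring Cloud Sleuth 3.x on the classpath; the EnrichmentService class and the span name are hypothetical:

import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;
import org.springframework.stereotype.Service;

@Service
public class EnrichmentService { // hypothetical service, for illustration only

    private final Tracer tracer;

    public EnrichmentService(Tracer tracer) {
        this.tracer = tracer;
    }

    public void enrich() {
        // Start a new span as a child of the current trace context
        Span span = tracer.nextSpan().name("enrich-customer").start();
        try (Tracer.SpanInScope ws = tracer.withSpan(span)) {
            // ... unit of work; log lines written here carry the trace and span IDs
        } finally {
            span.end(); // always close the span so it gets exported
        }
    }
}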

Jaeger

Jaeger was originally built by teams at Uber and was open-sourced in 2015. It was accepted as a Cloud Native incubating project in 2017 and graduated in 2019. As part of CNCF, Jaeger is a recognized project in cloud-native architectures. Its source code is written mainly in Go. Jaeger's architecture includes:

  • Instrumentation libraries
  • Collectors
  • Query service and web UI
  • Database storage

Similar to Jaeger, Zipkin also provides the same set of components in its architecture. Although Zipkin is an older project, Jaeger has a more modern and scalable design. For this example, we have chosen Jaeger as the backend.

Tracing System Design

Let's design three Spring Boot microservices:

  • customer-service-bff: using the backend-for-frontend pattern, this service sits between the UI and the backend. It is called by a UI web app, which in turn calls the backend customer service via REST API calls.
  • customer-service: a simple customer CRUD service. Besides persisting data to its database on CRUD operations, it also publishes events to Kafka when creating, updating, or deleting a customer record (see the publisher sketch after this list).
  • order-service: listens on the Kafka topic and consumes the customer created/updated/deleted events.
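
To make the Kafka-publishing side concrete, here is a minimal sketch of what such a publisher inside customer-service could look like. The CustomerEventPublisher class and the "customer" topic name are assumptions for illustration; with Sleuth's messaging instrumentation on the classpath, the trace context is propagated in the record headers automatically:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class CustomerEventPublisher { // hypothetical name, for illustration only

    private final KafkaTemplate<String, String> kafkaTemplate;

    public CustomerEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishCustomerEvent(String customerId, String eventJson) {
        // "customer" is an assumed topic name; Sleuth adds trace headers to the record
        kafkaTemplate.send("customer", customerId, eventJson);
    }
}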

The three microservices are designed to:

  • communicate via REST API (customer-service-bff and customer-service)
  • communicate via event-driven pub/sub through Kafka (customer-service and order-service)

This is to observe how OpenTelemetry, combined with Spring Cloud Sleuth, handles automatic instrumentation of the code and generates and transmits the tracing data. The dotted lines above capture the path of the tracing data: exported by the microservices, it travels to the OpenTelemetry Collector via OTLP (OpenTelemetry Protocol), and the Collector in turn processes and exports the trace data to the Jaeger backend for storage and querying.

Using a monorepo, we have the following project structure:

Step 1: Add POM Dependencies

This is the key to implementing distributed tracing using OTel and Spring Cloud Sleuth. Our goal is not to have to instrument our code manually, so we rely on these dependencies to do what they are designed for: automatically instrument our code, in addition to tracing the implementation, exporting telemetry data to the OTel Collector, etc.


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-otel-dependencies</artifactId>
            <version>${spring-cloud-sleuth-otel.version}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-sleuth-brave</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-otel-autoconfigure</artifactId>
    </dependency>
    <dependency>
        <groupId>io.opentelemetry</groupId>
        <artifactId>opentelemetry-exporter-otlp-trace</artifactId>
    </dependency>
</dependencies>
  • spring-cloud-dependencies: the Spring Cloud dependencies
  • spring-cloud-sleuth-otel-dependencies: the Spring Cloud Sleuth OpenTelemetry dependencies
  • spring-cloud-starter-sleuth: Sleuth integrates with the OpenZipkin Brave tracer via the bridge that is available in the spring-cloud-sleuth-brave module. Since we are not using Zipkin for the backend, we have to exclude spring-cloud-sleuth-brave from the spring-cloud-starter-sleuth dependency and instead add the spring-cloud-sleuth-otel-autoconfigure dependency. This replaces the default Brave-based tracing implementation with the OpenTelemetry-based one.
  • opentelemetry-exporter-otlp-trace: this is the component in Spring Cloud Sleuth OTel that sends traces to an OpenTelemetry Collector.

Step 2: Configure OpenTelemetry

OpenTelemetry Collector Endpoint

For each microservice, we need to add the following configuration to application.yml (see the sample snippet in the section below). spring.sleuth.otel.exporter.otlp.endpoint is mainly for configuring the OTel Collector endpoint. It tells the exporter, Sleuth in our case, to send the tracing data via OTLP to the specified collector endpoint, http://otel-collector:4317. Note that the otel-collector host in the endpoint URL comes from the docker-compose service for the otel-collector image.

Probabilistic Sampling of Tracing Data

The spring.sleuth.otel.config.trace-id-ratio-based property defines the sampling probability of the tracing data. It samples a fraction of traces based on the fraction given to the sampler. Probability sampling allows OpenTelemetry tracing users to lower span collection costs through randomized sampling techniques. If the ratio is less than 1.0, some traces will not be exported. For this example, we will configure sampling to be 1.0, i.e., 100%.

For additional OTel Spring Cloud Sleuth properties, see the common application properties.

spring:
  application:
    name: customer-service
  sleuth:
    otel:
      config:
        trace-id-ratio-based: 1.0
      exporter:
        otlp:
          endpoint: http://otel-collector:4317

OpenTelemetry Configuration File

We need an OTel configuration file, otel-config.yaml, at the project root. Its content is as follows. This configuration file defines the behaviors of the OTel receivers, processors, and exporters. As we can see, we define our receivers to listen on gRPC and HTTP, a processor that uses batching, and exporters for jaeger and logging.

extensions:
  memory_ballast:
    size_mib: 512
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  logging:
    logLevel: debug
  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ logging, jaeger ]
  extensions: [ memory_ballast, zpages ]

Step 3: docker-compose to Tie Everything Together

Let's look at the docker containers we need to spin up in order to run these three microservices and observe their distributed tracing; the first three microservices were explained in the section above.

  • customer-service-bff
  • customer-service
  • order-service
  • postgres-customer: database for customer-service
  • postgres-order: database for order-service
  • jaeger-all-in-one: a single image that runs all the Jaeger backend components and the UI
  • otel-collector: the engine of OpenTelemetry tracing; it receives, processes, and exports the tracing data to the backend
  • zookeeper: tracks the status of the nodes in the Kafka cluster and maintains a list of Kafka topics and messages
  • kafka: pub/sub event streaming processing platform
services:

  customer-service-bff:
    image: customer-service-bff:0.0.1-SNAPSHOT
    ports:
      - "8080:8080"
    depends_on:
      - zookeeper
      - kafka

  customer-service:
    image: customer-service:0.0.1-SNAPSHOT
    ports:
      - "8081:8081"
    depends_on:
      - zookeeper
      - kafka
      - postgres-customer
    environment:
      - SPRING_DATASOURCE_JDBC-URL=jdbc:postgresql://postgres-customer:5432/customerservice
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=postgres
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update

  order-service:
    image: order-service:0.0.1-SNAPSHOT
    ports:
      - "8082:8082"
    depends_on:
      - zookeeper
      - kafka
      - postgres-order
    environment:
      - SPRING_DATASOURCE_JDBC-URL=jdbc:postgresql://postgres-order:5432/orderservice
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=postgres
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update

  postgres-customer:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=customerservice

  postgres-order:
    image: postgres
    ports:
      - "5431:5431"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=orderservice

  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"

  otel-collector:
    image: otel/opentelemetry-collector:0.47.0
    command: [ "--config=/etc/otel-collector-config.yaml" ]
    volumes:
      - ./otel-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "13133:13133" # health_check extension
      - "4317"        # OTLP gRPC receiver
      - "55670:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Run docker-compose up -d to bring up all nine containers:

Step 4: Tracing Data in Action

Happy Path

Now, let's hit our customer-service-bff, the entry point of the flow, to create a new customer.

Open the Jaeger UI at http://localhost:16686/, search by service customer-service-bff, and click the Find Traces button. This is what we see for the create-customer trace: it spanned three services, six spans in total, and took 82.35 ms.

Besides the trace timeline view (screenshot above), Jaeger also provides a graph view (select Trace Graph in the top-right dropdown):

The log output in docker for the three microservices shows the same trace ID, highlighted in red, and different span IDs according to the application name (the application names and their corresponding span IDs are highlighted in matching colors). In the case of customer-service, the same span ID is passed from the REST API request to the Kafka publisher request.

Error Scenario

Let's pause our customer-service PostgreSQL database in docker and repeat the create-customer flow from customer-service-bff. We get a 500 internal server error, as expected. Checking Jaeger, we see the following trace, with the exception stack trace complaining about a SocketTimeoutException, again as expected.

Identifying Long-Running Spans

The Jaeger UI allows us to search for traces that exceed a specified maximum duration. For example, we can search for all traces that took longer than 1000 ms. We can then drill down into the long-running traces to investigate their root causes.

Summary

In this story, we unpacked distributed tracing through the lens of OpenTelemetry, Spring Cloud Sleuth, and Jaeger, verifying the auto-instrumentation of distributed tracing in both REST API calls and Kafka pub/sub. I hope this story gives you a better understanding of these tracing frameworks and tools, especially OpenTelemetry, and how it fundamentally changes the way we do observability in distributed systems.

The source code for this story can be found in my GitHub repository.

Happy coding!

This story was originally published at https://betterprogramming.pub/distributed-tracing-with-opentelemetry-spring-cloud-sleuth-kafka-and-jaeger-939e35f45821

#jaeger #opentelemetry #spring #cloud #kafka 

Distributed Tracing with OpenTelemetry, Spring Cloud Sleuth, Kafka, and Jaeger


Nacos ECO Project for Spring Boot & Java

Nacos Spring Boot Project

Alibaba Nacos ships the main core features of cloud-native applications, including:

  • Service Discovery and Service Health Check
  • Dynamic Configuration Management
  • Dynamic DNS Service
  • Service and MetaData Management

The Nacos Spring Boot Project is based on it and embraces the Spring Boot ecosystem so that developers can build Spring Boot applications rapidly.

The Nacos Spring Boot Project consists of two parts: nacos-config-spring-boot and nacos-discovery-spring-boot.

The nacos-config-spring-boot module is used for Dynamic Configuration Management and Service and Metadata Management.

The nacos-discovery-spring-boot module is used for Service Discovery, Service Health Check, and Dynamic DNS Service.
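
As a rough illustration of the config module, a Spring Boot app can bind a Nacos data ID and inject auto-refreshed values. This is a minimal sketch in the style of the project's quick start; the "example" data ID and the useLocalCache property are placeholders, and it assumes nacos.config.server-addr is set in application.properties:

import com.alibaba.nacos.api.config.annotation.NacosValue;
import com.alibaba.nacos.spring.context.annotation.config.NacosPropertySource;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Binds the "example" data ID from the Nacos server and keeps it auto-refreshed
@SpringBootApplication
@NacosPropertySource(dataId = "example", autoRefreshed = true)
public class NacosConfigApplication {

    // Injected from Nacos config, falling back to false when the key is absent
    @NacosValue(value = "${useLocalCache:false}", autoRefreshed = true)
    private boolean useLocalCache;

    public static void main(String[] args) {
        SpringApplication.run(NacosConfigApplication.class, args);
    }
}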

Samples

Nacos Config Sample

Nacos Discovery Sample

Dependencies & Compatibility

Version: 0.2.x / 2.x.x ( branch master )

Dependencies           Compatibility
Java                   1.8+
Spring Boot            2.0.3.RELEASE
Nacos-Spring-Context   1.1.0

Version: 0.1.x / 1.x.x ( branch: 1.x )

Dependencies           Compatibility
Java                   1.7+
Spring Boot            1.4.1.RELEASE
Nacos-Spring-Context   1.1.0

Quick Start

Nacos Config Quick Start

Nacos Discovery Quick Start

For more information about Nacos Spring, see Nacos Spring Project.

Related Projects

Download Details:
Author: nacos-group
Source Code: https://github.com/nacos-group/nacos-spring-boot-project
License: Apache-2.0 license

#spring #springboot #java

Nacos ECO Project for Spring Boot & Java

Quarkus Vs Springboot: Differences and Similarities Between Frameworks

quarkus-vs-springboot

A demo project that demonstrates the differences and similarities between the frameworks

Features:

  • Controller
    • REST interface to read all orders
    • REST interface to post a new order
    • New orders are sent to the order service using Kafka
    • Existing orders are retrieved from the order service over HTTP
  • Service
    • Postgres database
    • Kafka consumer for new orders
    • REST interface for all orders (uses paging; see the sketch after this list)
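
For the spring-boot variant, the paged "all orders" REST interface could look like this minimal sketch; the Order entity, OrderRepository, and OrderResource names are hypothetical, assuming Spring Data's Pageable support:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical JPA entity; mapped to "orders" since ORDER is a reserved SQL word
@Entity
@Table(name = "orders")
class Order {
    @Id
    @GeneratedValue
    Long id;
    String description;
}

interface OrderRepository extends PagingAndSortingRepository<Order, Long> {
    // findAll(Pageable) is inherited, which is all the paging endpoint needs
}

@RestController
class OrderResource {

    private final OrderRepository orderRepository;

    OrderResource(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // Spring maps ?page=0&size=20 request parameters onto the Pageable argument
    @GetMapping("/orders")
    Page<Order> allOrders(Pageable pageable) {
        return orderRepository.findAll(pageable);
    }
}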

demo

There is an action.sh script in /scripts. Use this to perform the following steps:

First for spring-boot:

  • ./action.sh build-image spring-boot order-controller
  • ./action.sh start-pods spring-boot order-controller 
    This will barely be able to start the pods in time and by doing so use a lot of resources.
  • ./action.sh replicas spring-boot order-controller 30 
    This will take forever, with a lot of CrashLoopBackOff. Better do this in small increments of 4 replicas at a time.
  • ./action.sh drain 
    This will drain one of the nodes and move all its pods to another node at once. Watch the load increase, and wait a long time for all the pods to be running again.
  • ./action.sh uncordon
  • ./action.sh replicas spring-boot order-controller 1

Second for quarkus:

  • ./action.sh build-image quarkus order-controller
  • ./action.sh start-pods quarkus order-controller 
    This will quickly start the pods and use little resources.
  • ./action.sh replicas quarkus order-controller 30 
    There might be some restarts but it reaches the 30 pods quite quickly.
  • ./action.sh drain 
    This will drain one of the nodes and move all pods to another node at once. See how quickly this finishes and moves all the drained pods to the other node.
  • ./action.sh uncordon
  • ./action.sh replicas quarkus order-controller 1

Download Details:
Author: gupbeheer
Source Code: https://github.com/gupbeheer/quarkus-vs-springboot
License:

#spring #springboot #java #quarkus 

Quarkus Vs Springboot: Differences and Similarities Between Frameworks

How to Build Front & Backend Using SpringBoot & Thymeleaf

SpringBoot-Thymeleaf-Projects

Project1- SpringBoot-Thymeleaf-Book-Project-Mapping-NoDB

  • We'll start by showing how to display the elements of a List in a Thymeleaf page and how to bind a list of objects as user inputs in a Thymeleaf form.
  • Here, we've added a List of Book objects as a model attribute sent to the view, where we display it as an HTML table with Title and Author columns, a "No Books Available" row when the list is empty, and an Add Book action (see the controller sketch after this list).
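
A minimal sketch of the controller side could look like the following; the BookController name, the /books mapping, and the sample data are hypothetical:

import java.util.Arrays;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class BookController { // hypothetical name, for illustration only

    // Simple POJO matching the Title/Author columns of the table
    public static class Book {
        private final String title;
        private final String author;

        public Book(String title, String author) {
            this.title = title;
            this.author = author;
        }

        public String getTitle() { return title; }
        public String getAuthor() { return author; }
    }

    @GetMapping("/books")
    public String showBooks(Model model) {
        // The "books" model attribute backs the th:each iteration in the view
        model.addAttribute("books", Arrays.asList(
                new Book("Clean Code", "Robert C. Martin")));
        return "books"; // resolves to the books.html Thymeleaf template
    }
}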

Project2-SpringBoot-Thymeleaf-Hibernate-Planet-Project

  • Working with Enums in Thymeleaf
  • Planet Project: Let's start by adding the Spring Boot starter for Thymeleaf to our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>RELEASE</version>
</dependency>

  • We're going to be working with planets, so let's define our Solar enum:
 public enum Solar {
   MERCURY, VENUS, EARTH, MARS, JUPITER, SATURN, URANUS, NEPTUNE
 }
  • Now, let's create our Planet class:
 public class Planet {
   private String name;
   private Solar solar;

  // Standard getters/setters
 }

Download Details:
Author: Urunov
Source Code: https://github.com/Urunov/SpringBoot-Thymeleaf-FrontBackend-Projects
License:

#spring #springboot #java #Thymeleaf

How to Build Front & Backend Using SpringBoot & Thymeleaf

Instructions for Spring Boot, Hibernate & Database

Spring-DAO-ORM-JEE-Web-AOP-Core-Boot

Spring Framework Architecture and Spring Framework Runtime details....

SpringBoot-DAO-ORM-Web

More valuable information also here (let's go)

Project-1. Spring Boot REST API JDBC MySQL Gradle

  • A Gradle project that provides Spring Boot and JDBC (using Gradle).

Project-2. Spring Boot REST API JDBC MySQL Maven

  • A small Family Member project that provides a Spring Boot and JDBC template (using MySQL) implementation. In this case, Spring Boot uses a Maven configuration, and the database is accessed via JDBC (template only, not JPA).

Project-3. SpringMVC-Boot2-JSP-JPA-Hibernate5-MySQL

Project-4. SpringBoot-ToDo-Project

  • A Todo project that provides a Spring Boot and JDBC template (using MySQL) implementation. In this case, Spring Boot uses a Maven configuration, and the database is accessed via JDBC (template only, not JPA).
  • It mainly covers two functions: adding a daily todo item, and listing all the todo items from the DB.

Project-5. SpringBoot-UploadFiles and Image

  • Spring Boot + Thymeleaf + Web: a project that uploads files into the existing source code (upload) folder.

Project6 - Spring Boot Upload and Download Files with MySQL DB

  • Spring Boot + Web + MySQL + Thymeleaf.
  • Upload and download files stored in the DB (MySQL in the example).

Project7 - Spring Boot REST API - JPA- Hibernate-Thymeleaf

  • Spring Boot + Web + MySQL + Thymeleaf.
  • A project for government population control related to a People Information CRUD process.
    -> Project1 - Spring Boot REST API - MySQL - Thymeleaf: adding and updating people in the DB (MySQL), using Spring Boot, MySQL, Hibernate + Thymeleaf template
    -> Project2 - Spring Boot REST API - MySQL - Thymeleaf - Many-to-Many: adding and updating people in the DB (MySQL), plus a SEARCH bar, using Spring Boot, MySQL, Hibernate + Thymeleaf template
    -> Project3 - Spring Boot REST API - MySQL - Thymeleaf - country selection, time selection, difference (backend Thymeleaf): adding and updating people in the DB (MySQL), using Spring Boot, MySQL, Hibernate + Thymeleaf template

Project8 -Simple-Build-CRUD-MySQL-JPA-Hibernate-okta-OAuth2

  • Spring Boot + Web + MySQL + Thymeleaf + Security.
  • Ongoing project (updating).

Project9-RealProject-Populations

  • Spring Boot
  • Hibernate JPA
  • MySQL
  • Security
  • Sorting/Pagination
  • Converting
  • Admin/User Controllers

Population_Final3

Core data for Spring Boot with Database details....

This provides the database implementation in Spring Boot. Indeed, we should briefly introduce here the concepts of Spring, Spring Boot, JDBC, JPA, and H2.

Download Details:
Author: 
Source Code: 
License:

#spring #springboot #java #database #hibernate 

Instructions for Spring Boot, Hibernate & Database

Spring Boot and Kafka Practical Results

Microservice - Modern Application

  1. Monolith vs Microservices - An Analysis
  2. Design Principles | Boundaries around microservices | Guidelines to follow when designing microservices applications
  3. Microservices: Design Patterns

🌠 More about Microservices :

  • Microservices Decomposition Pattern: By Domain and subdomain
  • Microservices Decomposition Pattern: Strangler Fig Pattern
  • Microservices Decomposition Pattern: Sidecar Pattern
  • Microservices Decomposition Pattern: Service Mesh
  • Microservices Database Pattern: Database per service & Shared Database per service
  • Microservices Database Pattern: CQRS - Command Query Responsibility Segregation
  • Microservices Database Pattern: Data Consistency - Eventual vs Strong Consistency
  • Microservices Database Pattern: Event-Driven Architecture
  • Microservices Database Pattern: Event Sourcing
  • Microservices Database Pattern: 2-Phase Commit
  • Microservices Database Pattern: SAGA
  • Microservices Database Pattern: Summary
  • Microservices Communication: How microservices understand each other (i.e., how they connect)
  • Microservices Communication: Synchronous vs Asynchronous
  • Microservices Communication: HTTP & REST
  • Microservices Communication: Message Based Communication
  • Microservices Communication: GraphQL
  • Microservices Integration Patterns: API Gateway
  • Microservices Integration Patterns: Aggregator Pattern
    • Chained Pattern
    • Branch Pattern
  • Microservices Integration Pattern: Clientside UI Composition Pattern
  • Microservices Observable Pattern: Health Check and Performance Metrics
  • Microservices Cross Cutting Concern Pattern: Service Registry and Discovery
  • Microservices Cross Cutting Concern Pattern: Load Balancer
  • Microservices Cross Cutting Concern Pattern: External Configuration
  • Microservices Deployment Patterns: What is Container ? What is VM? Container vs VM
  • Microservices Deployment Patterns: Multiple Service Instances per Host & Service Instance per Host | Service Instance per VM | Service Instance per Container
  • Microservices Deployment Patterns: Serverless Pattern
  • Microservices Deployment Patterns: Blue-Green | Canary | Rolling Patterns

Practical Microservices Architecture vs source code

Monolith vs Microservices - An Analysis

What is Monolith Architecture?

  • Single jar/war file for whole application
  • Issues
    • Less flexible for large team and code base
    • Overload IDE
    • Continuous development is difficult
    • Scaling the app is difficult
    • Scaling development is difficult
    • Technology stack change is difficult

Reference architecture - Monolithic

What is Microservice Architecture?

=> A set of loosely coupled, collaborating services. Each service has several characteristics:

  • Highly maintainable and testable
  • Loosely coupled with other services
  • Independently deployable
  • Capable of being developed by a small team
  • Services can be developed independent of each other
  • Communication among services via HTTP/REST/AMQP
  • Service granularity (how small a service is and how to size its capacity, logically)
  • Linguistic approach
  • Technology agnostic

More about Microservices:

    • Microservices is a specialization of an implementation approach for service-oriented architecture (SOA) used to build flexible, independently deployable software systems.
    • Followed the introduction of DevOps
    • Strategy - "Do one thing and do it well".

Important References:

🔥 Microservices Antipatterns | 🔥 CAP Theorem | Reference architecture - Microservice

Download Details:
Author: Urunov
Source Code: https://github.com/Urunov/Microservice-Modern-Application
License:

#spring #springboot #java #Kafka #Microservice

Spring Boot and Kafka Practical Results

How To Create A GraphQL Server with Spring Boot & Java

Getting started with Spring Boot

This is a tutorial for people who want to create a GraphQL server in Java. It requires some Spring Boot and Java knowledge, and while we give a brief introduction to GraphQL, the focus of this tutorial is on developing a GraphQL server in Java.

GraphQL in 3 minutes

GraphQL is a query language to retrieve data from a server. It is an alternative to REST, SOAP or gRPC in some way.

Let's suppose we want to query the details for a specific book from an online store backend.

With GraphQL you send the following query to the server to get the details for the book with the id "book-1":

{
  bookById(id: "book-1"){
    id
    name
    pageCount
    author {
      firstName
      lastName
    }
  }
}

This is not JSON (even though it looks deliberately similar); it is a GraphQL query. It basically says:

  • query a book with a specific id
  • get me the id, name, pageCount and author from that book
  • for the author I want to know the firstName and lastName

The response is normal JSON:

{
  "bookById":
  {
    "id":"book-1",
    "name":"Harry Potter and the Philosopher's Stone",
    "pageCount":223,
    "author": {
      "firstName":"Joanne",
      "lastName":"Rowling"
    }
  }
}

One very important property of GraphQL is that it is statically typed: the server knows exactly the shape of every object you can query, and any client can actually "introspect" the server and ask for the so-called "schema". The schema describes what queries are possible and what fields you can get back. (Note: when we refer to schema here, we always refer to a "GraphQL Schema", which is not related to other schemas like "JSON Schema" or "Database Schema".)

The schema for the above query looks like this:

type Query {
  bookById(id: ID): Book
}

type Book {
  id: ID
  name: String
  pageCount: Int
  author: Author
}

type Author {
  id: ID
  firstName: String
  lastName: String
}

This tutorial will focus on how to implement a GraphQL server with exactly this schema in Java.

We've barely scratched the surface of what's possible with GraphQL. Further information can be found on the official page: https://graphql.github.io/learn/

GraphQL Java Overview

GraphQL Java is the Java (server) implementation for GraphQL. There are several repositories in the GraphQL Java Github org. The most important one is the GraphQL Java Engine which is the basis for everything else.

GraphQL Java Engine itself is only concerned with executing queries. It doesn't deal with any HTTP or JSON related topics. For these aspects, we will use the GraphQL Java Spring Boot adapter which takes care of exposing our API via Spring Boot over HTTP.

The main steps of creating a GraphQL Java server are:

  1. Defining a GraphQL Schema.
  2. Deciding on how the actual data for a query is fetched.

Our example API: getting book details

Our example app will be a simple API to get details for a specific book. This is in no way a comprehensive API, but it is enough for this tutorial.

Create a Spring Boot app

The easiest way to create a Spring Boot app is to use the "Spring Initializr" at https://start.spring.io/.

Select:

  • Gradle Project
  • Java
  • Spring Boot 2.1.x

For the project metadata we use:

  • Group: com.graphql-java.tutorial
  • Artifact: book-details

As a dependency, we just select Web.

A click on Generate Project will give you a ready-to-use Spring Boot app. All subsequently mentioned files and paths will be relative to this generated project.

We are adding three dependencies to our project inside the dependencies section of build.gradle:

The first two are GraphQL Java and GraphQL Java Spring, and then we also add Google Guava. Guava is not strictly needed, but it will make our life a little bit easier.

The dependencies will look like this:

dependencies {
    implementation 'com.graphql-java:graphql-java:11.0' // NEW
    implementation 'com.graphql-java:graphql-java-spring-boot-starter-webmvc:1.0' // NEW
    implementation 'com.google.guava:guava:26.0-jre' // NEW
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

Schema

We are creating a new file schema.graphqls in src/main/resources with the following content:

type Query {
  bookById(id: ID): Book
}

type Book {
  id: ID
  name: String
  pageCount: Int
  author: Author
}

type Author {
  id: ID
  firstName: String
  lastName: String
}

This schema defines one top level field (in the type Query): bookById which returns the details of a specific book.

It also defines the type Book which has the fields: id, name, pageCount and author. author is of type Author, which is defined after Book.

The Domain Specific Language shown above which is used to describe a schema is called Schema Definition Language or SDL. More details about it can be found here.

Once we have this file, we need to "bring it to life" by reading and parsing it, and by adding code to fetch data for it.

We create a new GraphQLProvider class in the package com.graphqljava.tutorial.bookdetails with an init method which will create a GraphQL instance:

@Component
public class GraphQLProvider {

    private GraphQL graphQL;

    @Bean
    public GraphQL graphQL() {
        return graphQL;
    }

    @PostConstruct
    public void init() throws IOException {
        URL url = Resources.getResource("schema.graphqls");
        String sdl = Resources.toString(url, Charsets.UTF_8);
        GraphQLSchema graphQLSchema = buildSchema(sdl);
        this.graphQL = GraphQL.newGraphQL(graphQLSchema).build();
    }

    private GraphQLSchema buildSchema(String sdl) {
      // TODO: we will create the schema here later
    }
}

We use Guava Resources to read the file from our classpath, then create a GraphQLSchema and GraphQL instance. This GraphQL instance is exposed as a Spring Bean via the graphQL() method annotated with @Bean. The GraphQL Java Spring adapter will use that GraphQL instance to make our schema available via HTTP on the default url /graphql.

What we still need to do is to implement the buildSchema method which creates the GraphQLSchema instance and wires in code to fetch data:

@Autowired
GraphQLDataFetchers graphQLDataFetchers;

private GraphQLSchema buildSchema(String sdl) {
    TypeDefinitionRegistry typeRegistry = new SchemaParser().parse(sdl);
    RuntimeWiring runtimeWiring = buildWiring();
    SchemaGenerator schemaGenerator = new SchemaGenerator();
    return schemaGenerator.makeExecutableSchema(typeRegistry, runtimeWiring);
}

private RuntimeWiring buildWiring() {
    return RuntimeWiring.newRuntimeWiring()
            .type(newTypeWiring("Query")
                    .dataFetcher("bookById", graphQLDataFetchers.getBookByIdDataFetcher()))
            .type(newTypeWiring("Book")
                    .dataFetcher("author", graphQLDataFetchers.getAuthorDataFetcher()))
            .build();
}

TypeDefinitionRegistry is the parsed version of our schema file. SchemaGenerator combines the TypeDefinitionRegistry with RuntimeWiring to actually make the GraphQLSchema.

buildWiring uses the graphQLDataFetchers bean to actually register two DataFetchers:

  • One to retrieve a book with a specific ID
  • One to get the author for a specific book.

DataFetcher and how to implement the GraphQLDataFetchers bean is explained in the next section.

Overall the process of creating a GraphQL and GraphQLSchema instance looks like this:

Creating GraphQL

DataFetchers

Probably the most important concept for a GraphQL Java server is a DataFetcher: A DataFetcher fetches the data for one field while the query is executed.

While GraphQL Java is executing a query, it calls the appropriate DataFetcher for each field it encounters in the query. A DataFetcher is an interface with a single method, taking a single argument of type DataFetchingEnvironment:

public interface DataFetcher<T> {
    T get(DataFetchingEnvironment dataFetchingEnvironment) throws Exception;
}

Important: Every field from the schema has a DataFetcher associated with it. If you don't specify any DataFetcher for a specific field, then the default PropertyDataFetcher is used. We will discuss this later in more detail.

We are creating a new class GraphQLDataFetchers which contains a sample list of books and authors.

The full implementation, which we will look at in detail soon, looks like this:

@Component
public class GraphQLDataFetchers {

    private static List<Map<String, String>> books = Arrays.asList(
            ImmutableMap.of("id", "book-1",
                    "name", "Harry Potter and the Philosopher's Stone",
                    "pageCount", "223",
                    "authorId", "author-1"),
            ImmutableMap.of("id", "book-2",
                    "name", "Moby Dick",
                    "pageCount", "635",
                    "authorId", "author-2"),
            ImmutableMap.of("id", "book-3",
                    "name", "Interview with the vampire",
                    "pageCount", "371",
                    "authorId", "author-3")
    );

    private static List<Map<String, String>> authors = Arrays.asList(
            ImmutableMap.of("id", "author-1",
                    "firstName", "Joanne",
                    "lastName", "Rowling"),
            ImmutableMap.of("id", "author-2",
                    "firstName", "Herman",
                    "lastName", "Melville"),
            ImmutableMap.of("id", "author-3",
                    "firstName", "Anne",
                    "lastName", "Rice")
    );

    public DataFetcher getBookByIdDataFetcher() {
        return dataFetchingEnvironment -> {
            String bookId = dataFetchingEnvironment.getArgument("id");
            return books
                    .stream()
                    .filter(book -> book.get("id").equals(bookId))
                    .findFirst()
                    .orElse(null);
        };
    }

    public DataFetcher getAuthorDataFetcher() {
        return dataFetchingEnvironment -> {
            Map<String,String> book = dataFetchingEnvironment.getSource();
            String authorId = book.get("authorId");
            return authors
                    .stream()
                    .filter(author -> author.get("id").equals(authorId))
                    .findFirst()
                    .orElse(null);
        };
    }
}

Source of the data

We are getting our books and authors from a static list inside the class. This is done just for this tutorial. It is very important to understand that GraphQL doesn't dictate in any way where the data comes from. This is the power of GraphQL: it can come from a static in-memory list, from a database, or from an external service.

Book DataFetcher

Our first method getBookByIdDataFetcher returns a DataFetcher implementation which takes a DataFetchingEnvironment and returns a book. In our case this means we need to get the id argument from the bookById field and find the book with this specific id. If we can't find it, we just return null.

The "id" in String bookId = dataFetchingEnvironment.getArgument("id"); is the "id" from the bookById query field in the schema:

type Query {
  bookById(id: ID): Book
}
...

Author DataFetcher

Our second method getAuthorDataFetcher returns a DataFetcher for getting the author for a specific book. Compared to the previously described book DataFetcher, we don't have an argument, but we have a book instance. The result of the DataFetcher from the parent field is made available via getSource. This is an important concept to understand: the DataFetchers for each field in GraphQL are called in a top-down fashion, and the parent's result is the source property of the child DataFetchingEnvironment.

We then use the previously fetched book to get the authorId and look for that specific author in the same way we look for a specific book.

Default DataFetchers

We only implement two DataFetchers. As mentioned above, if you don't specify one, the default PropertyDataFetcher is used. In our case it means Book.id, Book.name, Book.pageCount, Author.id, Author.firstName and Author.lastName all have a default PropertyDataFetcher associated with them.

A PropertyDataFetcher tries to look up a property on a Java object in multiple ways. In the case of a java.util.Map, it simply looks up the property by key. This works perfectly fine for us because the keys of the book and author Maps are the same as the fields specified in the schema. For example, in the schema we define the field pageCount for the Book type, and the book DataFetcher returns a Map with a key pageCount. Because the field name is the same as the key in the Map ("pageCount"), the PropertyDataFetcher works for us.

Let's assume for a second we have a mismatch and the book Map has a key totalPages instead of pageCount.

// In the GraphQLDataFetchers class
// Rename key from 'pageCount' to 'totalPages'
private static List<Map<String, String>> books = Arrays.asList(
        ImmutableMap.of("id", "book-1",
                "name", "Harry Potter and the Philosopher's Stone",
                "totalPages", "223",
                "authorId", "author-1"),
        ImmutableMap.of("id", "book-2",
                "name", "Moby Dick",
                "totalPages", "635",
                "authorId", "author-2"),
        ImmutableMap.of("id", "book-3",
                "name", "Interview with the vampire",
                "totalPages", "371",
                "authorId", "author-3")
);

This would result in a null value for pageCount for every book, because the PropertyDataFetcher can't fetch the right value. In order to fix that you would have to register a new DataFetcher for Book.pageCount which looks like this:

// In the GraphQLProvider class
private RuntimeWiring buildWiring() {
    return RuntimeWiring.newRuntimeWiring()
            .type(newTypeWiring("Query")
                    .dataFetcher("bookById", graphQLDataFetchers.getBookByIdDataFetcher()))
            .type(newTypeWiring("Book")
                    .dataFetcher("author", graphQLDataFetchers.getAuthorDataFetcher())
                    // This line is new: we need to register the additional DataFetcher
                    .dataFetcher("pageCount", graphQLDataFetchers.getPageCountDataFetcher()))
            .build();
}

// In the GraphQLDataFetchers class
// Implement the DataFetcher
public DataFetcher getPageCountDataFetcher() {
    return dataFetchingEnvironment -> {
        Map<String,String> book = dataFetchingEnvironment.getSource();
        return book.get("totalPages");
    };
}

This DataFetcher would fix that problem by looking up the right key in the book Map. (Again: we don't need that for our example, because we don't have a naming mismatch)

Try out the API

This is all you actually need to build a working GraphQL API. After starting the Spring Boot application, the API is available on http://localhost:8080/graphql.
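
Before reaching for a GUI client, you can also verify the endpoint directly: a GraphQL HTTP request is just a POST with a JSON body containing a query field. Here is a minimal, self-contained Java sketch using only java.net, with no extra dependencies:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class GraphQLClientDemo {
    public static void main(String[] args) throws Exception {
        // The bookById query from the beginning, wrapped as {"query": "..."}
        String payload = "{\"query\":\"{ bookById(id: \\\"book-1\\\") "
                + "{ id name pageCount author { firstName lastName } } }\"}";
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/graphql").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        // Print the JSON response returned by the GraphQL endpoint
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            sc.useDelimiter("\\A");
            System.out.println(sc.hasNext() ? sc.next() : "");
        }
    }
}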

The easiest way to try out and explore a GraphQL API is to use a tool like GraphQL Playground. Download it and run it.

After starting it you will be asked for a URL, enter "http://localhost:8080/graphql".

After that, you can query our example API and you should get back the result we mentioned above in the beginning. It should look something like this:

GraphQL Playground

Link: https://www.graphql-java.com/tutorials/getting-started-with-spring-boot/#try-out-the-api

#spring #springboot #java #graphql #api

How To Create A GraphQL Server with Spring Boot & Java

Spring Boot Graphql Example with Hibernate JPA

spring-boot-graphql-example

This is a simple Maven-based Java example that uses Spring Boot, an H2 embedded in-memory database, and Hibernate ORM to stand up a GraphQL service. This example is self-contained and ready to play with after running mvn spring-boot:run.

The example defines a basic JPA-annotated data model containing a single entity, Person (a minimal sketch follows). When Spring Boot runs, it takes that entity definition and creates an in-memory H2 embedded database with pre-populated dummy data. The GraphQL service is then started, and the H2 data is offered as a queryable repository accessible through the GraphQL API found at http://localhost:8080/.
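
As a rough sketch only (the actual entity lives in the repository), a JPA-annotated Person matching the schema's Person output type might look like:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Sketch of the Person entity; field names mirror the GraphQL output type shown later
@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String middleName;
    private String lastName;
    private int age;

    // standard getters and setters omitted for brevity
}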

Compiling and Running

This project was compiled and tested using JDK8 and Maven 3.6.1.

Starting the Service

Run the following to compile and run the example GraphQL service.

mvn spring-boot:run

The service is ready for use when you see a similar log line as below:

2019-07-16 12:15:11.053  INFO 67805 --- [  restartedMain] c.o.s.graphql.GraphQLSpringBootApp       : Started GraphQLSpringBootApp in 6.457 seconds (JVM running for 6.931)

Stopping the Service

Use CTRL+C to stop.

Exploring the GraphQL Service

In addition to a GraphQL runtime and API, this project starts up two graphical interfaces, GraphiQL and H2-Console to play around with.

GraphiQL

A web console that can be used to explore the schema and test querying the GraphQL API. Found at: http://localhost:8080

The left-hand pane is used to input your client-side GraphQL queries. The right-hand pane displays the result returned back from this GraphQL service.

alt GraphiQL

H2-Console

A web console to manage the H2 in-memory database. Found at: http://localhost:8080/h2-console (login credentials are found in application.properites)

alt H2-Console

Project Structure

There aren't many files in this project, which is quite impressive considering this example starts up a GraphQL service and serves dummy data from an H2 embedded database.

The classes defined in this example are quite small and succinct. They essentially define how the GraphQL runtime serves the H2 data for query requests.

  • GraphQLSpringBootApp.java - entry point of the GraphQL service, also defines where the entity model is located
  • Person.java - entity model
  • PersonRepository.java - defines the CRUD operations against the Person table in the H2 embedded database
  • PersonQuery.java - defines how the 'allPeople' query returns data by using the PersonRepository (see the resolver sketch after this list)
  • PersonMutator.java - defines how the 'createPerson' mutator persists a new Person and returns that Person to confirm success
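
A resolver like PersonQuery can be very small. The following is a sketch assuming the graphql-java-tools style of resolvers; the exact base interface and wiring in the repository may differ:

import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import org.springframework.stereotype.Component;

// Sketch of a query resolver: maps the 'allPeople' schema field onto the repository
@Component
public class PersonQuery implements GraphQLQueryResolver {

    private final PersonRepository personRepository;

    public PersonQuery(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    public Iterable<Person> allPeople() {
        // CrudRepository.findAll() backs the allPeople query
        return personRepository.findAll();
    }
}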

These classes define the abstracted client input types. They are distinct from the Output types, Person in this case.

  • CreatePersonInput.java
  • UpdateNameInput.java
  • UpdateAgeInput.java

These classes define how errors are handled implicitly and explicitly. Their role is to discern between server-side and client-side errors and how they should be displayed.

  • GraphQLErrorAdapter.java
  • InvalidArgumentException.java
  • PersonNotFoundException.java

The following 3 files are used for configuration, schema definition and dummy data population:

  • src/main/resource/application.properties - defines the hibernate connection to the H2 embedded database
  • src/main/resource/data.sql - dummy data to go in the H2 database on startup
  • src/main/resource/schema.graphqls - GraphQL schema

Other files:

  • pom.xml - defines the dependencies needed to build the project
  • src/main/webapp/index.html - the GraphiQL web interface

GraphQL Schema

The schema in this example defines an output type called Person, some abstracted input types with their corresponding mutations, and a few queries. The use of input types like CreatePersonInput is to abstract the actual creation of the Person away from the client, resulting in a separation of concerns between the client request and the server response.

Note: It's important that the member variables of the Person entity match the output type fields of Person in the GraphQL schema. The same applies for input type objects.

schema {
    query: Query
    mutation: Mutation
}

type Person {
    id: ID!
    firstName: String!
    middleName: String
    lastName: String!
    age: Int!
}

type Query {
    person(id: ID!): Person
    allPeople: [Person]
}

type Mutation {
    createPerson(input: CreatePersonInput!) : Person!
    deletePerson(id: ID!) : Boolean
    updateName(input: UpdateNameInput!) : Person!
    updateAge(input: UpdateAgeInput!) : Person!
}

input CreatePersonInput {
    firstName: String!
    middleName: String
    lastName: String!
}

input UpdateNameInput{
    id: ID!
    firstName: String
    middleName: String
    lastName: String
}

input UpdateAgeInput{
    id: ID!
    age: Int!
}

Example Queries

Get all people.

{
  allPeople {
    id
    firstName
    middleName
    lastName
  }
}

Find a person by id.

{
  person (id: 3){
    id,
    firstName,
    middleName,
    lastName
  }
}

Creating a person.

mutation CreatePerson($input: CreatePersonInput!) {
  createPerson(input: $input) {
    id
    firstName
    middleName
    lastName
  }
}

{ # query variables
  "input": {
   "firstName": "Tim", 
   "middleName": "Alfred", 
   "lastName": "Adams"
    }
}

Update a person's age:

mutation UpdateAge($input: UpdateAgeInput!) {
  updateAge(input: $input) {
    id
    firstName
    middleName
    lastName
    age
  }
}

{ # query variables
  "input": {
    "id": 1,
    "age": 34
  }
}

Delete a person.

mutation {
  deletePerson(id:1)
}

Simplifying Boiler-plate Code

To reduce boiler-plate code, this example takes advantage of the spring framework's CrudRepository class and the lombok framework.

Defining the PersonRepository takes no more than four lines of code:

package com.ohair.stephen.graphql.repositories;

import com.ohair.stephen.graphql.model.Person;
import org.springframework.data.repository.CrudRepository;

public interface PersonRepository extends CrudRepository<Person, Long> {
}

Defining a simple input type pojo:

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class CreatePersonInput {
    private String firstName;
    private String middleName;
    private String lastName;
}

Connecting to Other Databases

There are plenty of examples connecting to MySQL databases but not many for Oracle PL/SQL.

So here is an example of what to add to this project to connect to an Oracle 12 database:

pom.xml

Remove the H2 dependency and add:

<dependency> <!-- Oracle JDBC driver -->
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc7</artifactId>
    <version>12.1.0.1</version>
</dependency>

application.properties: This configuration can be used to connect to an Oracle 12 database using Hibernate.

spring.datasource.url=jdbc:oracle:thin:@<DATABASE_URL>:<PORT>/<SID>
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver.class=oracle.jdbc.driver.OracleDriver
spring.jpa.database-platform=org.hibernate.dialect.Oracle10gDialect
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true

Note: be very careful not to clobber an existing DB schema; ensure spring.jpa.hibernate.ddl-auto=none is included as above.

Link: https://github.com/Urunov/SpringBoot-GraphQL-FullStack-Projects/tree/master/Spring-Boot-Graphql-MySQL-Hibernate-JPA

#spring #springboot #java #graphql

Spring Boot Graphql Example with Hibernate JPA