Coding Life

January 9, 2023

Node.js Microservices using RabbitMQ - Message Queueing

We are going to use RabbitMQ to communicate between different microservices in Node.js. This example uses the Direct exchange type in a logging system.
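
A direct exchange delivers a message to every queue whose binding key exactly matches the message's routing key. A toy in-memory sketch of that routing rule (written in Go rather than the video's Node.js; the queue names are assumptions, not taken from the video):

```go
package main

import "fmt"

// directExchange models AMQP direct-exchange routing: a message is
// delivered to every queue bound with a binding key that exactly
// equals the message's routing key. Toy in-memory model only.
type directExchange struct {
	bindings map[string][]string // binding key -> queue names
	queues   map[string][]string // queue name -> delivered messages
}

func newDirectExchange() *directExchange {
	return &directExchange{bindings: map[string][]string{}, queues: map[string][]string{}}
}

func (x *directExchange) bind(queue, key string) {
	x.bindings[key] = append(x.bindings[key], queue)
}

func (x *directExchange) publish(routingKey, msg string) {
	for _, q := range x.bindings[routingKey] { // exact match only
		x.queues[q] = append(x.queues[q], msg)
	}
}

func main() {
	x := newDirectExchange()
	// The logging layout from the video: one consumer takes info logs,
	// another takes both warnings and errors (queue names assumed).
	x.bind("info_queue", "info")
	x.bind("warn_error_queue", "warning")
	x.bind("warn_error_queue", "error")

	x.publish("info", "user signed in")
	x.publish("error", "db connection lost")

	fmt.Println(x.queues["info_queue"])       // [user signed in]
	fmt.Println(x.queues["warn_error_queue"]) // [db connection lost]
}
```

A message published with an unbound routing key is simply dropped, which is the behaviour of a real direct exchange with no matching binding.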

Timestamps
---------------------

0:00 - Introduction
0:20 - Explaining the system to develop
1:50 - Creating the Logger Microservice (Producer/Publisher)
2:44 - Steps to create a Producer
3:08 - Creating the Producer class
11:33 - Creating the API
15:27 - Testing the API and the producer with Postman
17:30 - RabbitMQ Management UI
19:05 - Creating the Info Microservice (First Consumer)
19:12 - Steps to create a Consumer
20:34 - Creating the consumeMessages function
27:26 - Testing the Info Consumer
29:50 - Analyzing the changes in RabbitMQ Management
30:33 - Creating the WarningAndError Microservice (Second Consumer)
32:34 - Testing the WarningAndError Consumer
32:55 - Testing the whole application
36:10 - Analyzing the changes in RabbitMQ Management
36:48 - Taking a look at our Initial Diagram
37:25 - Explaining the importance of ACK
39:17 - Final Recap

Source Code : https://github.com/charbelh3/RabbitMQ-Logger-Example 

Subscribe: https://www.youtube.com/@Computerix/featured 

#nodejs #rabbitmq 

Anil Sakhiya

December 8, 2022

Facebook Messenger Clone with RabbitMQ, Microservices and NestJS

In this tutorial, we'll learn about microservices and RabbitMQ in NestJS, both conceptually and practically, by creating a Facebook Messenger clone. We will also learn about and use Docker to easily set up our microservice architecture for the Facebook Messenger clone.

00:00 - Introduction
00:28 - Prerequisites
05:56 - System Design [RabbitMQ]
09:53 - RabbitMQ Fast Version
11:36 - RabbitMQ
12:43 - Coding (FB Messenger Clone)

#nestjs #rabbitmq #microservices #docker

Charles Cooper

November 9, 2022

Microservice Architectures & System Design with Python, Kubernetes, RabbitMQ, MongoDB, MySQL

Microservice Architecture and System Design with Python & Kubernetes – Full Course

Learn about software system design and microservices. This course is a hands-on approach to learning about microservice architectures and distributed systems using Python, Kubernetes, RabbitMQ, MongoDB, and MySQL.

⭐️ Contents ⭐️
(0:00:00) Intro
(0:01:02) Overview 
(0:02:47) Installation & Setup
(0:10:16) Auth Service Code
(0:32:25) Auth Flow Overview & JWTs
(0:53:04) Auth Service Deployment
(0:56:08) Auth Dockerfile
(1:20:05) Kubernetes
(1:37:26) Gateway Service Code
(1:42:34) MongoDB & GridFs
(1:47:04) Architecture Overview (RabbitMQ)
(1:49:50) Synchronous Interservice Communication
(1:50:49) Asynchronous Interservice Communication
(1:53:19) Strong Consistency
(1:54:07) Eventual Consistency
(2:19:16) RabbitMQ
(2:21:16) Gateway Service Deployment
(2:35:34) Kubernetes Ingress
(2:46:28) Kubernetes StatefulSet
(2:51:18) RabbitMQ Deployment
(3:09:35) Converter Service Code
(3:33:43) Converter Service Deployment
(4:21:09) Checkpoint
(4:22:11) Update Gateway Service
(4:31:46) Notification Service Code
(4:43:24) Notification Service Deployment
(4:51:55) Sanity Check
(5:05:54) End

Kubernetes API Reference: https://kubernetes.io/docs/reference/kubernetes-api/ 

⭐️ References ⭐️
https://www.mongodb.com/docs/ 
https://www.rabbitmq.com/documentation.html 
https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers 
https://docs.microsoft.com/en-us/azure/architecture/microservices/design/interservice-communication 
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore 

#microservice #python #kubernetes #rabbitmq #mongodb #mysql

Amqp: Go Client for AMQP 0.9.1

Go RabbitMQ Client Library (Unmaintained Fork)

Beware of Abandonware

This repository is NOT ACTIVELY MAINTAINED. Consider using a different fork instead: rabbitmq/amqp091-go. In case of questions, start a discussion in that repo or use other RabbitMQ community resources.

Project Maturity

This project has been used in production systems for many years. As of 2022, this repository is NOT ACTIVELY MAINTAINED.

This repository is very strict about any potential public API changes. You may want to consider rabbitmq/amqp091-go which is more willing to adapt the API.

Supported Go Versions

This library supports the two most recent Go release series, currently 1.10 and 1.11.

Supported RabbitMQ Versions

This project supports RabbitMQ versions starting with 2.0 but is primarily tested against reasonably recent 3.x releases. Some features and behaviours may be server version-specific.

Goals

Provide a functional interface that closely represents the AMQP 0.9.1 model targeted to RabbitMQ as a server. This includes the minimum necessary to interact with the semantics of the protocol.

Non-goals

Things not intended to be supported.

  • Auto reconnect and re-synchronization of client and server topologies.
    • Reconnection would require understanding the error paths when the topology cannot be declared on reconnect. This would require a new set of types and code paths that are best suited at the call-site of this package. AMQP has a dynamic topology that needs all peers to agree. If this doesn't happen, the behavior is undefined. Instead of producing a possible interface with undefined behavior, this package is designed to be simple for the caller to implement the necessary connection-time topology declaration so that reconnection is trivial and encapsulated in the caller's application code.
  • AMQP Protocol negotiation for forward or backward compatibility.
    • 0.9.1 is stable and widely deployed. Versions 0.10 and 1.0 are divergent specifications that change the semantics and wire format of the protocol. We will accept patches for other protocol support but have no plans for implementation ourselves.
  • Anything other than PLAIN and EXTERNAL authentication mechanisms.
    • Keeping the mechanisms interface modular makes it possible to extend outside of this package. If other mechanisms prove to be popular, then we would accept patches to include them in this package.
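
Since reconnection is deliberately left to the caller, here is a minimal sketch of what call-site retry logic might look like. This is pure Go; the connect callback is a hypothetical stand-in for amqp.Dial plus the caller's topology declaration, not part of the streadway/amqp API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConnRefused simulates a broker that is not reachable yet.
var errConnRefused = errors.New("connection refused")

// connectFn stands in for amqp.Dial plus the caller's topology
// declaration (exchanges, queues, bindings) - a hypothetical helper
// for illustration only.
type connectFn func() error

// withReconnect retries connect with a fixed backoff until it
// succeeds or maxAttempts is exhausted, returning the last error.
func withReconnect(connect connectFn, maxAttempts int, backoff time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = connect(); lastErr == nil {
			return nil // connected and topology declared
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	attempts := 0
	// Fails twice, then succeeds - simulating a broker coming back up.
	flaky := func() error {
		attempts++
		if attempts < 3 {
			return errConnRefused
		}
		return nil
	}
	if err := withReconnect(flaky, 5, time.Millisecond); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("connected after %d attempts\n", attempts)
}
```

Because the topology declaration lives inside the connect callback, a successful retry re-creates exchanges, queues, and bindings before anything resumes, which is exactly the call-site responsibility the README describes.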

Usage

See the 'examples' subdirectory for simple producer and consumer executables. If you have a use-case in mind which isn't well-represented by the examples, please file an issue.

Documentation

Use Godoc documentation for reference and usage.

RabbitMQ tutorials in Go are also available.

Contributing

Pull requests are very much welcomed. Create your pull request on a non-master branch, make sure a test or example is included that covers your change, and ensure your commits represent coherent changes that include a reason for the change.

To run the integration tests, make sure you have RabbitMQ running on any host, export the environment variable AMQP_URL=amqp://host/ and run go test -tags integration. TravisCI will also run the integration tests.

Thanks to the community of contributors.

Download Details:

Author: streadway
Source Code: https://github.com/streadway/amqp 
License: BSD-2-Clause license

#go #golang #rabbitmq 

PHP-amqplib: The Most Widely Used PHP Client for RabbitMQ

PHP-amqplib 

This library is a pure PHP implementation of the AMQP 0-9-1 protocol. It's been tested against RabbitMQ.

The library was used for the PHP examples of RabbitMQ in Action and the official RabbitMQ tutorials.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Project Maintainers

Thanks to videlalvaro and postalservice14 for creating php-amqplib.

The package is now maintained by Ramūnas Dronga, Luke Bakken and several VMware engineers working on RabbitMQ.

Supported RabbitMQ Versions

Starting with version 2.0 this library uses AMQP 0.9.1 by default and thus requires RabbitMQ 2.0 or a later version. Usually server upgrades do not require any application code changes, since the protocol changes very infrequently, but please conduct your own testing before upgrading.

Supported RabbitMQ Extensions

Since the library uses AMQP 0.9.1 we added support for the following RabbitMQ extensions:

  • Exchange to Exchange Bindings
  • Basic Nack
  • Publisher Confirms
  • Consumer Cancel Notify

Extensions that modify existing methods like alternate exchanges are also supported.

Related libraries

enqueue/amqp-lib is an AMQP interop-compatible wrapper.

AMQProxy is a proxy library with connection and channel pooling/reusing. This allows for lower connection and channel churn when using php-amqplib, leading to less CPU usage of RabbitMQ.

Setup

Ensure you have composer installed, then run the following command:

$ composer require php-amqplib/php-amqplib

That will fetch the library and its dependencies into your vendor folder. Then you can add the following to your .php files in order to use the library:

require_once __DIR__.'/vendor/autoload.php';

Then you need to use the relevant classes, for example:

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

Usage

With RabbitMQ running, open two terminals, and in the first one execute the following commands to start the consumer:

$ cd php-amqplib/demo
$ php amqp_consumer.php

Then in the other terminal run:

$ cd php-amqplib/demo
$ php amqp_publisher.php some text to publish

You should see the message arrive at the process in the other terminal.

To stop the consumer, send it the quit message:

$ php amqp_publisher.php quit

If you need to listen to the sockets used to connect to RabbitMQ, see the example in the non-blocking consumer:

$ php amqp_consumer_non_blocking.php

Change log

Please see CHANGELOG for more information on what has changed recently.

API Documentation

http://php-amqplib.github.io/php-amqplib/

Tutorials

To not repeat ourselves, if you want to learn more about this library, please refer to the official RabbitMQ tutorials.

More Examples

  • amqp_ha_consumer.php: demos the use of mirrored queues.
  • amqp_consumer_exclusive.php and amqp_publisher_exclusive.php: demos fanout exchanges using exclusive queues.
  • amqp_consumer_fanout_{1,2}.php and amqp_publisher_fanout.php: demos fanout exchanges with named queues.
  • amqp_consumer_pcntl_heartbeat.php: demos signal-based heartbeat sender usage.
  • basic_get.php: demos obtaining messages from the queues by using the basic get AMQP call.

Multiple hosts connections

If you have a cluster of multiple nodes to which your application can connect, you can start a connection with an array of hosts. To do that you should use the create_connection static method.

For example:

$connection = AMQPStreamConnection::create_connection([
    ['host' => HOST1, 'port' => PORT, 'user' => USER, 'password' => PASS, 'vhost' => VHOST],
    ['host' => HOST2, 'port' => PORT, 'user' => USER, 'password' => PASS, 'vhost' => VHOST]
],
$options);

This code will try to connect to HOST1 first, and connect to HOST2 if the first connection fails. The method returns a connection object for the first successful connection. Should all connections fail, it throws the exception from the last connection attempt.

See demo/amqp_connect_multiple_hosts.php for more examples.
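
The failover behaviour described above is independent of PHP; a minimal sketch of the same semantics in Go (the dial function is a hypothetical stand-in for one connection attempt against a single host entry):

```go
package main

import (
	"errors"
	"fmt"
)

// errUnreachable simulates a cluster node that cannot be reached.
var errUnreachable = errors.New("host unreachable")

// dialFn is a hypothetical stand-in for a single connection attempt.
type dialFn func(host string) (string, error)

// connectFirst mirrors create_connection's documented semantics: try
// each host in order, return the first successful connection, and if
// every attempt fails, surface the error from the last attempt.
func connectFirst(hosts []string, dial dialFn) (string, error) {
	var lastErr error
	for _, h := range hosts {
		conn, err := dial(h)
		if err == nil {
			return conn, nil
		}
		lastErr = err
	}
	return "", lastErr
}

func main() {
	dial := func(host string) (string, error) {
		if host == "host1" {
			return "", errUnreachable // first node is down
		}
		return "conn:" + host, nil
	}
	conn, _ := connectFirst([]string{"host1", "host2"}, dial)
	fmt.Println(conn) // conn:host2
}
```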

Batch Publishing

Let's say you have a process that generates a bunch of messages that are going to be published to the same exchange using the same routing_key and options like mandatory. Then you could make use of the batch_basic_publish library feature. You can batch messages like this:

$msg = new AMQPMessage($msg_body);
$ch->batch_basic_publish($msg, $exchange);

$msg2 = new AMQPMessage($msg_body);
$ch->batch_basic_publish($msg2, $exchange);

and then send the batch like this:

$ch->publish_batch();

When do we publish the message batch?

Let's say our program needs to read from a file and then publish one message per line. Depending on the message size, you will have to decide when it's better to send the batch. You could send it every 50 messages, or every hundred. That's up to you.

Optimized Message Publishing

Another way to speed up your message publishing is by reusing the AMQPMessage message instances. You can create your new message like this:

$properties = array('content_type' => 'text/plain', 'delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT);
$msg = new AMQPMessage($body, $properties);
$ch->basic_publish($msg, $exchange);

Now let's say that you want to change the message body for future messages while keeping the same properties; that is, your messages will still be text/plain and the delivery_mode will still be AMQPMessage::DELIVERY_MODE_PERSISTENT. If you create a new AMQPMessage instance for every published message, those properties would have to be re-encoded in the AMQP binary format every time. You can avoid all that by reusing the AMQPMessage and resetting the message body like this:

$msg->setBody($body2);
$ch->basic_publish($msg, $exchange);

Truncating Large Messages

AMQP imposes no limit on the size of messages; if a very large message is received by a consumer, PHP's memory limit may be reached within the library before the callback passed to basic_consume is called.

To avoid this, you can call the method AMQPChannel::setBodySizeLimit(int $bytes) on your channel instance. Body sizes exceeding this limit will be truncated and delivered to your callback with an AMQPMessage::$is_truncated flag set to true. The property AMQPMessage::$body_size will reflect the true body size of a received message, which will be higher than strlen(AMQPMessage::getBody()) if the message has been truncated.

Note that all data above the limit is read from the AMQP Channel and immediately discarded, so there is no way to retrieve it within your callback. If you have another consumer which can handle messages with larger payloads, you can use basic_reject or basic_nack to tell the server (which still has a complete copy) to forward it to a Dead Letter Exchange.

By default, no truncation will occur. To disable truncation on a Channel that has had it enabled, pass 0 (or null) to AMQPChannel::setBodySizeLimit().

Connection recovery

Some RabbitMQ clients use automated connection recovery mechanisms to reconnect and recover channels and consumers after network errors.

Since this client is single-threaded, you can set up connection recovery using the exception handling mechanism.

Exceptions which might be thrown in case of connection errors:

PhpAmqpLib\Exception\AMQPConnectionClosedException
PhpAmqpLib\Exception\AMQPIOException
\RuntimeException
\ErrorException

Other exceptions might be thrown while the connection remains open. It's always a good idea to clean up the old connection when handling an exception, before reconnecting.

For example, if you want to set up a recovering connection:

$connection = null;
$channel = null;
while(true){
    try {
        $connection = new AMQPStreamConnection(HOST, PORT, USER, PASS, VHOST);
        // Your application code goes here.
        do_something_with_connection($connection);
    } catch(AMQPRuntimeException $e) {
        echo $e->getMessage();
        cleanup_connection($connection);
        usleep(WAIT_BEFORE_RECONNECT_uS);
    } catch(\RuntimeException $e) {
        cleanup_connection($connection);
        usleep(WAIT_BEFORE_RECONNECT_uS);
    } catch(\ErrorException $e) {
        cleanup_connection($connection);
        usleep(WAIT_BEFORE_RECONNECT_uS);
    }
}

A full example is in demo/connection_recovery_consume.php.

This code will reconnect and retry the application code every time the exception occurs. Some exceptions can still be thrown and should not be handled as a part of reconnection process, because they might be application errors.

This approach makes sense mostly for consumer applications, producers will require some additional application code to avoid publishing the same message multiple times.

This was the simplest example; in a real-life application you might want to limit the retry count and perhaps gradually increase the wait time between reconnection attempts.

You can find a more extensive example in #444.

UNIX Signals

If you have the PCNTL extension installed, signal dispatch will be handled when the consumer is not processing a message.

$pcntlHandler = function ($signal) {
    switch ($signal) {
        case \SIGTERM:
        case \SIGUSR1:
        case \SIGINT:
            // some stuff before stop consumer e.g. delete lock etc
            pcntl_signal($signal, SIG_DFL); // restore handler
            posix_kill(posix_getpid(), $signal); // kill self with signal, see https://www.cons.org/cracauer/sigint.html
        case \SIGHUP:
            // some stuff to restart consumer
            break;
        default:
            // do nothing
    }
};

pcntl_signal(\SIGTERM, $pcntlHandler);
pcntl_signal(\SIGINT,  $pcntlHandler);
pcntl_signal(\SIGUSR1, $pcntlHandler);
pcntl_signal(\SIGHUP,  $pcntlHandler);

To disable this feature, just define the constant AMQP_WITHOUT_SIGNALS as true:

<?php
define('AMQP_WITHOUT_SIGNALS', true);

... more code

Signal-based Heartbeat

If you have installed PCNTL extension and are using PHP 7.1 or greater, you can register a signal-based heartbeat sender.

<?php

$sender = new PCNTLHeartbeatSender($connection);
$sender->register();
... code
$sender->unregister();

Debugging

If you want to know what's going on at a protocol level then add the following constant to your code:

<?php
define('AMQP_DEBUG', true);

... more code

?>

Benchmarks

To run the publishing/consume benchmark type:

$ make benchmark

Tests

To successfully run the tests you need to first have a stock RabbitMQ broker running locally. Then, run the tests like this:

$ make test

Contributing

Please see CONTRIBUTING for details.

Using AMQP 0.8

If you still want to use the old version of the protocol, you can do so by setting the following constant in your configuration code:

define('AMQP_PROTOCOL', '0.8');

The default value is '0.9.1'.

Providing your own autoloader

If for some reason you don't want to use Composer, then you need to have an autoloader in place for the library classes. People have reported using this autoloader with success.

Original README:

Below is the original README file content. Credit goes to the original authors.

PHP library implementing the Advanced Message Queuing Protocol (AMQP).

The library is a port of the Python code of py-amqplib: http://barryp.org/software/py-amqplib/

It has been tested with the RabbitMQ server.

Project home page: http://code.google.com/p/php-amqplib/

For discussion, please join the group:

http://groups.google.com/group/php-amqplib-devel

For bug reports, please use bug tracking system at the project page.

Patches are very welcome!

Author: Vadim Zaliva lord@crocodile.org

Download Details:

Author: php-amqplib
Source Code: https://github.com/php-amqplib/php-amqplib 
License: LGPL-2.1 license

#php #rabbitmq 

Coding Life

August 9, 2022

How to Use AMQP Messaging via RabbitMQ in a Spring Boot Application

This tutorial will guide you through using AMQP messaging via RabbitMQ in a Spring Boot application, and through configuring the message converters to switch from default Java deserialization to JSON.

GitHub:
https://github.com/Java-Techie-jt/springboot-rabbitmq-example 

Subscribe: https://www.youtube.com/c/JavaTechie/featured 

#rabbitmq #springboot  

Elian Harber

August 7, 2022

Garagemq: AMQP Message Broker Implemented with Golang

GarageMQ 

GarageMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP). It is compatible with any AMQP or RabbitMQ client (tested with streadway/amqp and php-amqplib).

Goals of this project

  • Have fun and learn a lot
  • Implement AMQP message broker in Go
  • Make protocol compatible with RabbitMQ and standard AMQP 0-9-1.

Demo

Simple demo server on Digital Ocean, 2 GB Memory / 25 GB Disk / FRA1 - Ubuntu Docker 17.12.0~ce on 16.04

Server: 46.101.117.78
Port: 5672
Admin port: 15672
Login: guest
Password: guest
Virtual host: /

Installation and Building

Docker

The quick way to start with GarageMQ is by using Docker. You can build it on your own or pull it from Docker Hub:

docker pull amplitudo/garagemq
docker run --name garagemq -p 5672:5672 -p 15672:15672 amplitudo/garagemq

or

go get -u github.com/valinurovam/garagemq/...
cd $GOPATH/src/github.com/valinurovam/garagemq
docker build -t garagemq .
docker run --name garagemq -p 5672:5672 -p 15672:15672 garagemq

Go get

You can also use go get:

go get -u github.com/valinurovam/garagemq/...
cd $GOPATH/src/github.com/valinurovam/garagemq
make build.all && make run

Execution flags

Flag         | Default        | Description                      | ENV
--config     | default config | Config path                      | GMQ_CONFIG
--log-file   | stdout         | Log file path or stdout, stderr  | GMQ_LOG_FILE
--log-level  | info           | Logger level                     | GMQ_LOG_LEVEL
--hprof      | false          | Enable or disable hprof profiler | GMQ_HPROF
--hprof-host | 0.0.0.0        | Profiler host                    | GMQ_HPROF_HOST
--hprof-port | 8080           | Profiler port                    | GMQ_HPROF_PORT

Default config params

# Proto name to implement (amqp-rabbit or amqp-0-9-1)
proto: amqp-rabbit
# User list
users:
  - username: guest
    password: 084e0343a0486ff05530df6c705c8bb4 # guest md5
# Server TCP settings
tcp:
  ip: 0.0.0.0
  port: 5672
  nodelay: false
  readBufSize: 196608
  writeBufSize: 196608
# Admin-server settings
admin:
  ip: 0.0.0.0
  port: 15672
queue:
  shardSize: 8192
  maxMessagesInRam: 131072
# DB settings
db:
  # default path 
  defaultPath: db
  # backend engine (badger or buntdb) 
  engine: badger
# Default virtual host path  
vhost:
  defaultPath: /
# Security check rule (md5 or bcrypt)
security:
  passwordCheck: md5
connection:
  channelsMax: 4096
  frameMaxSize: 65536
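
The guest user's password in the config above is stored as a plain MD5 hex digest (see the md5 passwordCheck rule). A quick sketch of how that digest is derived:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// md5Hex returns the lowercase hex MD5 digest of s - the format the
// config's users section stores and the md5 passwordCheck compares.
func md5Hex(s string) string {
	sum := md5.Sum([]byte(s))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Matches the digest stored for the guest user in the config above.
	fmt.Println(md5Hex("guest")) // 084e0343a0486ff05530df6c705c8bb4
}
```

Note that MD5 is fast to brute-force, which is presumably why the config also offers bcrypt as the passwordCheck rule.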

Performance tests

Performance tests were run with the load testing tool https://github.com/rabbitmq/rabbitmq-perf-test on this test machine:

MacBook Pro (15-inch, 2016)
Processor 2,6 GHz Intel Core i7
Memory 16 GB 2133 MHz LPDDR3

Persistent messages

./bin/runjava com.rabbitmq.perf.PerfTest --exchange test -uri amqp://guest:guest@localhost:5672 --queue test --consumers 10 --producers 5 --qos 100 -flag persistent
...
...
id: test-235131-686, sending rate avg: 53577 msg/s
id: test-235131-686, receiving rate avg: 51941 msg/s

Transient messages

./bin/runjava com.rabbitmq.perf.PerfTest --exchange test -uri amqp://guest:guest@localhost:5672 --queue test --consumers 10 --producers 5 --qos 100
...
...
id: test-235231-085, sending rate avg: 71247 msg/s
id: test-235231-085, receiving rate avg: 69009 msg/s

Internals

Backend for durable entities

The database backend can be changed through the config option db.engine:

db:
  defaultPath: db
  engine: badger
db:
  defaultPath: db
  engine: buntdb

QOS

The basic.qos method is implemented for both standard AMQP and RabbitMQ modes. In standard AMQP, qos by default applies to the connection (global=true) or to the channel (global=false). In RabbitMQ mode, qos applies to the channel (global=true) or to each new consumer (global=false).

Admin server

The administration server is available on the standard port :15672 and is read-only at the moment. More screenshots are in the /readme folder.

TODO

  •  Optimize binds
  •  Replication and clusterization
  •  Own backend for durable entities and persistent messages
  •  Migrate to message reference counting

Contribution

Contribution of any kind is always welcome and appreciated. Contribution guidelines are a work in progress.

Author: Valinurovam
Source Code: https://github.com/valinurovam/garagemq 
License: MIT license

#go #golang #queue #rabbitmq 

Idiomatic, Fast and Well-maintained JRuby Client for RabbitMQ

March Hare, a JRuby RabbitMQ Client

March Hare is an idiomatic, fast and well-maintained (J)Ruby DSL on top of the RabbitMQ Java client. It strives to combine the strong parts of the Java client with over 4 years of experience using and developing the Ruby amqp gem and Bunny.

Why March Hare

  • Concurrency support on the JVM is excellent, with many tools & approaches available. Let's make use of it.
  • The RabbitMQ Java client is rock solid and supports every RabbitMQ feature. Very nice.
  • It is screaming fast thanks to all the heavy lifting being done in the pretty efficient & lightweight Java code.
  • It uses synchronous APIs where it makes sense and asynchronous APIs where it makes sense. Some other Ruby RabbitMQ clients only use one or the other.
  • The amqp gem has a certain amount of baggage it cannot drop because of backwards compatibility concerns. March Hare is a clean-room design, much more open to radical new ideas.

What March Hare is not

March Hare is not

  • A replacement for the RabbitMQ Java client
  • A replacement for Bunny, the most popular Ruby RabbitMQ client
  • A long running "work queue" service

Project Maturity

March Hare has been around since 2011 and can be considered a mature library.

It is based on the RabbitMQ Java client, which is officially supported by the RabbitMQ team at VMware.

Installation, Dependency

With Rubygems

gem install march_hare

With Bundler

gem "march_hare", "~> 4.4"

Documentation

Guides

MarchHare documentation guides are mostly complete.

Examples

Several code examples are available. Our test suite also has many code examples that demonstrate various parts of the API.

Reference

API reference is available.

Supported Ruby Versions

March Hare supports JRuby 9.0 or later.

Supported JDK Versions

March Hare requires JDK 8 or later.

Change Log

See ChangeLog.md.

Continuous Integration

CI is hosted by travis-ci.org

Testing

You'll need a running RabbitMQ instance with all defaults and the management plugin enabled on your local machine to run the specs.

To boot one via docker you can use:

docker run -p 5672:5672 -p 15672:15672 rabbitmq:3-management

And then you can run the specs using rspec:

bundle exec rspec

Author: ruby-amqp
Source code: https://github.com/ruby-amqp/march_hare
License: MIT license

#ruby-on-rails  #ruby #rabbitmq 

Flash Sale System AKA Using Spring Boot, RabbitMQ, Redis, MySQL

FlashSaleSystem

Project highlights

Distributed system scheme

From a single machine to a cluster, it is easy to scale horizontally simply by adding servers to cope with greater traffic and concurrency

System optimization

Browser cache, Nginx cache, page cache, object cache, and RabbitMQ asynchronous order queueing reduce network traffic, reduce database pressure, and improve the system's concurrent processing capability.

In-depth microservice skills

SpringBoot/RabbitMQ/Redis/MySQL, based on the most popular Java microservices framework

The security policy

Graphic verification codes, rate limiting and bot prevention, interface address hiding: various security mechanisms reject automated ticket-grabbing bots.

Server design ideas

The bottleneck is the database's ability to handle requests. After a large number of requests are sent to the database, the database may time out or break down due to its limited processing capability. So the idea is to try to intercept requests upstream of the system.

For workloads that read a lot (inventory reads) and write a little (order creation), rely more on caching: serving inventory queries from the cache reduces database operations.

Cache, application, and database clusters with load balancing; asynchronous message processing.

The front end can restrict normal users' operations through JS, and static resources can be cached on a CDN and in the user's browser.

The overall architecture

architecture

Main questions

How do we ensure thread safety and prevent overselling when inventory is deducted in Redis?

Redis has a DECR command that performs the decrement atomically.
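
The need for atomicity can be demonstrated without Redis. A language-agnostic sketch in Go, with an atomic counter standing in for Redis DECR, where 100 concurrent buyers compete for 10 units and exactly 10 purchases succeed:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// tryBuy atomically decrements stock, mirroring Redis DECR: if the
// result is negative the stock was already exhausted, so the
// deduction is rolled back and the purchase rejected.
func tryBuy(stock *int64) bool {
	if atomic.AddInt64(stock, -1) < 0 {
		atomic.AddInt64(stock, 1) // roll back, nothing left to sell
		return false
	}
	return true
}

func main() {
	var stock int64 = 10
	var sold int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // 100 concurrent buyers
		wg.Add(1)
		go func() {
			defer wg.Done()
			if tryBuy(&stock) {
				atomic.AddInt64(&sold, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("sold:", sold) // exactly 10, never oversold
}
```

A non-atomic read-then-write (read the stock, check, then decrement) would let two buyers both see stock = 1 and both succeed; the atomic decrement makes that interleaving impossible, which is exactly what Redis DECR provides.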

How do we limit traffic and prevent robot access?

This is done with an interceptor: a custom annotation marks a method and specifies how many times it may be accessed per unit of time; requests beyond that limit are intercepted.

The interceptor extends HandlerInterceptorAdapter and overrides the preHandle method, which records the visit frequency in Redis as a key/value pair with an expiry. Finally, you register the interceptor in the project by extending WebMvcConfigurerAdapter and overriding the addInterceptors() method.
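
The Redis key with an expiry behaves like a fixed-window counter. A language-agnostic sketch of that check (in Go rather than the project's Java, with an in-memory map standing in for Redis; all names here are hypothetical):

```go
package main

import "fmt"

// limiter is a fixed-window rate limiter: each key may be used at
// most maxHits times per window of windowTicks ticks (seconds, in
// the interceptor's case). The in-memory map mimics the Redis
// key/value pair with an expiry that preHandle checks.
type limiter struct {
	maxHits     int
	windowTicks int
	winStart    int
	counts      map[string]int
}

func newLimiter(maxHits, windowTicks int) *limiter {
	return &limiter{maxHits: maxHits, windowTicks: windowTicks, counts: map[string]int{}}
}

// allow reports whether a request for key at time tick is within the
// limit; when the window has lapsed (the Redis key would have
// expired), the counter resets.
func (l *limiter) allow(key string, tick int) bool {
	if tick-l.winStart >= l.windowTicks {
		l.winStart = tick
		l.counts = map[string]int{}
	}
	l.counts[key]++
	return l.counts[key] <= l.maxHits
}

func main() {
	l := newLimiter(5, 60) // at most 5 hits per 60-second window
	for i := 1; i <= 7; i++ {
		fmt.Printf("request %d allowed: %v\n", i, l.allow("user:42", 0))
	}
	fmt.Println("next window:", l.allow("user:42", 61)) // counter has reset
}
```

In the real project the counter lives in Redis so the limit holds across all application servers; the expiry on the Redis key plays the role of the window reset above.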

The detailed guide to building such a system

Chap01: Integrate Mybatis and Redis

Chap02: MD5 encryption and global exception handler

Chap03: Implement distributed session via Redis

Chap04: Implement the flash sale function

Chap05: Using JMeter for pressure testing

Chap06: Page cache and object cache

Chap07: Integrate RabbitMQ and optimize the interface

Chap08: Optimizing the flash sale system after integrating RabbitMQ

Chap09: Dynamic flash sale URL, mathematical formula verification code and interface rate limiting

Chap10: Concluding the project

Download details:

Author: codesssss
Source code: https://github.com/codesssss/FlashSale

#spring #springboot #java #mysql #rabbitmq 

CQRS Design Pattern Implementation with Java, RabbitMQ, MySQL

CQRS Design Pattern Java

This repository contains a CQRS implementation in Java. I've written this code base step by step on Medium, in Turkish, under the title "Java ile CQRS Design Pattern | Docker, Elasticsearch, RabbitMQ, Spring, MySQL".

Setup

There are several basic steps below that we need to execute.

Docker Compose && Environment

First, we need to execute the docker-compose.yml file given below to set up the environment. The compose file is already here: docker-compose.yml

version: "3.9"

services:
  database:
    container_name: classifieds_mysql_container
    image: mysql:latest
    restart: always
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: classifieds
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    volumes:
      - mysql_database:/var/lib/mysql

  rabbitmq:
    container_name: classifieds_rabbitmq_container
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"

  elasticsearch:
    container_name: classifieds_elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - "9300:9300"
      - "9200:9200"

volumes:
  mysql_database:
  esdata:

Then start the containers:

docker-compose up

MySQL Database Table

After executing the docker-compose file, we need to create the classified table (an entity we use throughout the application) on MySQL. The database connection information is already defined in the docker-compose.yml file; after connecting to MySQL, we use the schema below.

CREATE TABLE `classified` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `title` varchar(100) DEFAULT NULL,
  `price` double DEFAULT NULL,
  `detail` text,
  `categoryId` bigint DEFAULT NULL,
  PRIMARY KEY (`id`)
) AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

Elasticsearch Index Creating And Mapping

We need to create an index on Elasticsearch to represent the database table. If you need to check the Elasticsearch container status, you may use the cURL command below.

curl -XGET "http://localhost:9200/_cat/health?format=json&pretty"

Create Index with mapping on Elasticsearch:

curl --location --request PUT 'http://localhost:9200/classifieds' \
--header 'Content-Type: application/json' \
--data-raw '{
    "settings": {
        "index": {
            "number_of_shards": 1,
            "number_of_replicas": 1
        }
    },
    "mappings": {
        "properties": {
            "id": {
                "type": "long"
            },
            "title": {
                "type": "text"
            },
            "price": {
                "type": "double"
            },
            "detail": {
                "type": "text"
            },
            "categoryId": {
                "type": "long"
            }
        }
    }
}'

If there is no error, the mapping has been created on Elasticsearch. We can use the cURL command below to display it.

curl -XGET "http://localhost:9200/classifieds/_mapping?pretty&format=json"

It returns the mapping of the index we just created.

RabbitMQ

We bound the default RabbitMQ ports in the docker-compose file. If we need to check RabbitMQ's status, we can open the management dashboard at http://localhost:15672
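Besides the dashboard, the broker can also be checked from the command line through RabbitMQ's HTTP management API (this assumes the stock image's default guest/guest credentials):

```shell
# Query the management plugin's HTTP API for a broker overview;
# adjust the credentials if you have configured your own user.
curl -s -u guest:guest http://localhost:15672/api/overview
```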

Demo


Usage

Sending a request to the API creates the record on MySQL and publishes a RabbitMQ event that will update Elasticsearch:

curl --location --request POST 'http://localhost:8080/classifieds' \
--header 'Content-Type: application/json' \
--data-raw '{
    "title": "Macbook Pro 2019",
    "detail": "Sahibinden çok temiz Macbook Pro 2019.",
    "price": 27894,
    "categoryId": 47
}'

Reading the classified list from Elasticsearch:

curl --location --request GET 'http://localhost:8080/classifieds'

Download details:
Author: yusufyilmazfr
Source code: https://github.com/yusufyilmazfr/cqrs-design-pattern-java

#spring #java #springboot #elasticsearch #rabbitmq #docker #mysql 

CQRS Design Pattern Implementation with Java, RabbitMQ, MySQL
Minh  Nguyet

Minh Nguyet

1659105556

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes

This article will show you how to run ActiveMQ on Kubernetes and integrate it with your application through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we will build and run two Spring Boot applications: the first runs in multiple instances and receives messages from a queue, while the second sends messages to that queue. To test the ActiveMQ cluster we will use Kind. The consumer application connects to the cluster using several different modes, which we will discuss in detail.

You can find many articles about other message brokers such as RabbitMQ or Kafka on my blog. If you want to read about RabbitMQ on Kubernetes, please refer to that article. To learn more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. I have not written much about ActiveMQ before, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while Rabbit is based on an extension of AMQP 0.9.

Source Code

If you want to try it yourself, you can always take a look at my source code. To do that, clone my GitHub repository and go to the messaging directory. There you will find three Spring Boot applications: simple-producer, simple-consumer, and simple-counter. After that, just follow my instructions. Let's get started.

Integrating Spring Boot with ActiveMQ

Let's start with the integration between our Spring Boot applications and the ActiveMQ Artemis broker. In fact, ActiveMQ Artemis is the base of a commercial product provided by Red Hat called AMQ Broker. Red Hat actively develops a Spring Boot starter for ActiveMQ and an operator for running it on Kubernetes. To access the starter, you need to include the Red Hat Maven repository in your pom.xml file:

<repository>
  <id>red-hat-ga</id>
  <url>https://maven.repository.redhat.com/ga</url>
</repository>

After that, you can include the starter in your Maven pom.xml:

<dependency>
  <groupId>org.amqphub.spring</groupId>
  <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
  <version>2.5.6</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Then we just need to enable JMS for our application with the @EnableJms annotation:

@SpringBootApplication
@EnableJms
public class SimpleConsumer {

   public static void main(String[] args) {
      SpringApplication.run(SimpleConsumer.class, args);
   }

}

Our application is very simple: it just receives and prints incoming messages. The receiving method has to be annotated with @JmsListener, and the destination field contains the name of the target queue.

@Service
public class Listener {

   private static final Logger LOG = LoggerFactory
      .getLogger(Listener.class);

   @JmsListener(destination = "test-1")
   public void processMsg(SimpleMessage message) {
      LOG.info("============= Received: " + message);
   }

}

Here is the class representing our message:

public class SimpleMessage implements Serializable {

   private Long id;
   private String source;
   private String content;

   public SimpleMessage() {
   }

   public SimpleMessage(Long id, String source, String content) {
      this.id = id;
      this.source = source;
      this.content = content;
   }

   // ... GETTERS AND SETTERS

   @Override
   public String toString() {
      return "SimpleMessage{" +
              "id=" + id +
              ", source='" + source + '\'' +
              ", content='" + content + '\'' +
              '}';
   }
}

Finally, we need to set the connection configuration. With the AMQP Spring Boot starter this is very simple: we just have to set the amqphub.amqp10jms.remoteUrl property. For now, we will rely on an environment variable set at the level of the Kubernetes Deployment.

amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}

The producer application is quite similar. Instead of an annotation for receiving messages, we use Spring's JmsTemplate to create and send messages to the target queue. The send method is exposed as the HTTP endpoint POST /producer/send.

@RestController
@RequestMapping("/producer")
public class ProducerController {

   private static long id = 1;
   private final JmsTemplate jmsTemplate;
   @Value("${DESTINATION}")
   private String destination;

   public ProducerController(JmsTemplate jmsTemplate) {
      this.jmsTemplate = jmsTemplate;
   }

   @PostMapping("/send")
   public SimpleMessage send(@RequestBody SimpleMessage message) {
      if (message.getId() == null) {
          message.setId(id++);
      }
      jmsTemplate.convertAndSend(destination, message);
      return message;
   }
}

Create a Kind Cluster with Nginx Ingress

Our sample applications are ready. Before deploying them, we need to prepare a local Kubernetes cluster. We will deploy an ActiveMQ cluster consisting of three brokers there, so our Kubernetes cluster will also consist of three nodes. Consequently, three instances of the consumer application run on Kubernetes and connect to the ActiveMQ brokers over the AMQP protocol. There is also a single instance of the producer application that sends messages on demand. Here is our architecture diagram.

activemq-spring-boot-kubernetes-arch

To run a multi-node Kubernetes cluster locally, we will use Kind. We will test not only communication over the AMQP protocol, but also expose the ActiveMQ management console over HTTP. Since ActiveMQ uses headless Services to expose the web console, we have to create and configure an Ingress on Kind to access it. Let's get started.

In the first step, we will create a Kind cluster consisting of a control plane and three workers. The configuration has to be prepared properly to run the Nginx Ingress Controller: we should add the ingress-ready label to one worker node and expose ports 80 and 443. Here is the final version of the Kind configuration file:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      protocol: TCP  
  - role: worker
  - role: worker

Now, let's create a Kind cluster by executing the following command:

$ kind create cluster --config kind-config.yaml

If your cluster has been created successfully, you should see similar confirmation output.

After that, let's install the Nginx Ingress Controller. It is just a single command:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Let's verify the installation:

$ kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS  AGE
ingress-nginx-admission-create-wbbzh        0/1     Completed   0         1m
ingress-nginx-admission-patch-ws2mv         0/1     Completed   0         1m
ingress-nginx-controller-86b6d5756c-rkbmz   1/1     Running     0         1m

Install ActiveMQ Artemis on Kubernetes

Finally, we can proceed with the ActiveMQ Artemis installation. First, let's install the required CRDs. You can find all the YAML manifests inside the operator's repository on GitHub.

$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator

The manifests with the CRDs are located in the deploy/crds directory:

$ kubectl create -f ./deploy/crds

After that, we can install the operator:

$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml

To create a cluster, we have to create an ActiveMQArtemis object. It contains the number of brokers that are part of the cluster (1). We should also set an acceptor to expose the AMQP port outside each broker pod (2). Of course, we will also expose the management console (3).

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 3 # (1)
    image: placeholder
    messageMigration: true
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "250m"
        memory: "512Mi"
  acceptors: # (2)
    - name: amqp
      protocols: amqp
      port: 5672
      connectionsAllowed: 5
  console: # (3)
    expose: true

Once the ActiveMQArtemis object is created, the operator starts the deployment process and creates a StatefulSet object:

$ kubectl get statefulset
NAME        READY   AGE
ex-aao-ss   3/3     1m

It starts all three broker pods sequentially:

$ kubectl get pod -l application=ex-aao-app
NAME          READY   STATUS    RESTARTS    AGE
ex-aao-ss-0   1/1     Running   0           5m
ex-aao-ss-1   1/1     Running   0           3m
ex-aao-ss-2   1/1     Running   0           1m

Let's display the list of Services created by the operator. Each broker has a dedicated Service exposing the AMQP port (ex-aao-amqp-*) and one for the web console (ex-aao-wconsj-*):

activemq-spring-boot-kubernetes-services

The operator automatically creates an Ingress object for each web-console Service. We will modify them by adding different hosts. Let's say it is the one.activemq.com domain for the first broker, two.activemq.com for the second, and so on.
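One way to set those hosts is to patch each operator-generated Ingress. Here is a sketch for the first broker's console Ingress, assuming it defines a single rule:

```shell
# Hypothetical patch: point the first broker's web-console Ingress
# at the one.activemq.com virtual host (repeat for the other brokers).
kubectl patch ingress ex-aao-wconsj-0-svc-ing --type=json \
  -p '[{"op": "replace", "path": "/spec/rules/0/host", "value": "one.activemq.com"}]'
```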

$ kubectl get ing
NAME                      CLASS    HOSTS                ADDRESS     PORTS   AGE
ex-aao-wconsj-0-svc-ing   <none>   one.activemq.com     localhost   80      1h
ex-aao-wconsj-1-svc-ing   <none>   two.activemq.com     localhost   80      1h
ex-aao-wconsj-2-svc-ing   <none>   three.activemq.com   localhost   80      1h

After creating the ingresses, we have to add the following line to /etc/hosts:

127.0.0.1    one.activemq.com two.activemq.com three.activemq.com

Now we can access the management console, for example for the third broker, at http://three.activemq.com/console.

activemq-spring-boot-kubernetes-console

Once the broker is ready, we can define a test queue. The name of that queue is test-1.

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: address-1
spec:
  addressName: address-1
  queueName: test-1
  routingType: anycast
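Assuming the two custom resources above are saved to local files (the file names here are hypothetical), they can be applied with kubectl:

```shell
# Create the three-broker cluster and the test queue address.
kubectl apply -f broker.yaml     # the ActiveMQArtemis object
kubectl apply -f address.yaml    # the ActiveMQArtemisAddress object
```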

Run the Spring Boot Applications on Kubernetes and Connect to ActiveMQ

Now, let's deploy the consumer application. In the Deployment manifest we have to set the ActiveMQ cluster connection URL. But wait... how do we connect? The three brokers are exposed through three separate Kubernetes Services. Fortunately, the AMQP Spring Boot starter supports this scenario: we can put the addresses of all three brokers inside the failover section. Let's try it and see what happens.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-consumer
  template:
    metadata:
      labels:
        app: simple-consumer
    spec:
      containers:
      - name: simple-consumer
        image: piomin/simple-consumer
        env:
          - name: ARTEMIS_URL
            value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
        resources:
          limits:
            memory: 256Mi
            cpu: 500m
          requests:
            memory: 128Mi
            cpu: 250m

The application is prepared for deployment with Skaffold. If you run the skaffold dev command, you will deploy all three instances of the consumer application and see their logs. What is the result? All the instances connect to the first URL from the list.

Fortunately, there is a failover parameter that helps distribute client connections more evenly across multiple remote peers. With the failover.randomize option, the URIs are shuffled randomly before the client attempts to connect to one of them. Let's replace the ARTEMIS_URL env in the Deployment manifest with the following line:

failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true

The distribution between broker instances looks a little better. Of course, the result is random, so you may get different results.
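What failover.randomize does can be illustrated with a plain-Java sketch (the URIs are the broker Services from the manifest above; the real shuffling happens inside the JMS client, so this is only an illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FailoverShuffle {

    // Returns a randomly ordered copy of the broker URI list, mimicking
    // what failover.randomize=true does before a connection attempt.
    static List<String> randomized(List<String> uris) {
        List<String> copy = new ArrayList<>(uris);
        Collections.shuffle(copy);
        return copy;
    }

    public static void main(String[] args) {
        List<String> brokers = List.of(
                "amqp://ex-aao-amqp-0-svc:5672",
                "amqp://ex-aao-amqp-1-svc:5672",
                "amqp://ex-aao-amqp-2-svc:5672");
        // The client attempts the first URI of the shuffled list first,
        // so each consumer instance may start with a different broker.
        System.out.println(randomized(brokers).get(0));
    }
}
```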

The first way to distribute connections properly is through a dedicated Kubernetes Service. We do not have to rely on the Services created automatically by the operator; we can create our own Service that load-balances across all available broker pods.

kind: Service
apiVersion: v1
metadata:
  name: ex-aao-amqp-lb
spec:
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
  type: ClusterIP
  selector:
    application: ex-aao-app

Now we can resign from the failover section on the client side and rely entirely on Kubernetes mechanisms.

spec:
  containers:
  - name: simple-consumer
    image: piomin/simple-consumer
    env:
      - name: ARTEMIS_URL
        value: amqp://ex-aao-amqp-lb:5672

This time we won't see anything interesting in the application logs, because all the instances connect to the same URL. Instead, we can verify the distribution in the management web console, which lists the consumers connected to each ActiveMQ instance.

The result is the same for every instance: the consumer application instances have been distributed evenly among all the available brokers inside the cluster.

Now we will deploy the producer application. We use the same Kubernetes Service to connect to the ActiveMQ cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
            - name: DESTINATION
              value: test-1
          ports:
            - containerPort: 8080

Since we have to call its HTTP endpoint, let's create a Service for the producer application:

apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  type: ClusterIP
  selector:
    app: simple-producer
  ports:
  - port: 8080

Let's deploy the producer application using Skaffold with port forwarding enabled:

$ skaffold dev --port-forward

Once the Deployment is running, we can send a test message by executing the following command:

$ curl http://localhost:8080/producer/send \
  -d "{\"source\":\"test\",\"content\":\"Hello\"}" \
  -H "Content-Type:application/json"

Advanced Configuration

If you need more advanced traffic distribution between the brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override a configuration property at runtime. Here is a very simple example: right after startup, the application calls an external service over HTTP, which returns the next instance number.

@Configuration
public class AmqpConfig {

    @PostConstruct
    public void init() {
        RestTemplate t = new RestTemplateBuilder().build();
        int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
        System.setProperty("amqphub.amqp10jms.remoteUrl",
                "amqp://ex-aao-amqp-" + x + "-svc:5672");
    }

}

Here is the implementation of the counter application. It just increments a number and returns it modulo the number of broker instances. Of course, we could create a more advanced implementation that, for example, returns the broker instance running on the same Kubernetes node as the application pod.

@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {

   private static int c = 0;

   public static void main(String[] args) {
      SpringApplication.run(CounterApp.class, args);
   }

   @Value("${DIVIDER:0}")
   int divider;

   @GetMapping
   public Integer count() {
      if (divider > 0)
         return c++ % divider;
      else
         return c++;
   }
}
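With DIVIDER=3 (one per broker), successive calls to the endpoint cycle through the broker indexes 0, 1, 2, 0, 1, 2, and so on. The modulo logic in isolation (a standalone sketch, not the Spring controller itself):

```java
public class CounterDemo {

    private static int c = 0;
    private static final int DIVIDER = 3; // number of broker instances

    // Same round-robin logic as the /counter endpoint above.
    static int count() {
        return DIVIDER > 0 ? c++ % DIVIDER : c++;
    }

    public static void main(String[] args) {
        StringBuilder sequence = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            sequence.append(count());
        }
        System.out.println(sequence); // prints "012012"
    }
}
```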

Final Thoughts

ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you have learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. It can be managed declaratively on Kubernetes thanks to the ActiveMQ Artemis Operator, and you can easily integrate it with Spring Boot using the dedicated starter, which offers various configuration options and is developed by Red Hat and the community.

Link: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/

#kubernetes #springboot #java #rabbitmq 

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes

Как запустить ActiveMQ Artemis с Spring Boot в Kubernetes

В этой статье вы узнаете, как запустить ActiveMQ в Kubernetes и интегрировать его с вашим приложением через Spring Boot. Мы развернем кластерный брокер ActiveMQ с помощью выделенного оператора . Затем мы собираемся создать и запустить два приложения Spring Boot. Первый из них работает в нескольких экземплярах и получает сообщения из очереди, а второй отправляет сообщения в эту очередь. Чтобы протестировать кластер ActiveMQ, мы будем использовать Kind . Потребительское приложение подключается к кластеру, используя несколько разных режимов. Мы подробно обсудим эти режимы.

Вы можете найти много статей о других брокерах сообщений, таких как RabbitMQ или Kafka, в моем блоге. Если вы хотите прочитать о RabbitMQ в Kubernetes, обратитесь к этой статье . Чтобы узнать больше об интеграции Kafka и Spring Boot, вы можете прочитать статью о Kafka Streams и Spring Cloud Stream, доступную здесь . Раньше я мало писал об ActiveMQ, но это тоже очень популярный брокер сообщений. Например, он поддерживает последнюю версию протокола AMQP, а Rabbit основан на их расширении AMQP 0.9.

Исходный код

Если вы хотите попробовать это самостоятельно, вы всегда можете взглянуть на мой исходный код. Для этого вам нужно клонировать мой  репозиторий GitHub . Затем перейдите в messagingкаталог. Там вы найдете три приложения Spring Boot: simple-producer, simple-consumerи simple-counter. После этого вы должны просто следовать моим инструкциям. Давайте начнем.

Интеграция Spring Boot с ActiveMQ

Начнем с интеграции между нашими приложениями Spring Boot и брокером ActiveMQ Artemis. По сути, ActiveMQ Artemis является основой коммерческого продукта, предоставляемого Red Hat, под названием AMQ Broker . Red Hat активно разрабатывает стартер Spring Boot для ActiveMQ и оператор для его запуска в Kubernetes. Чтобы получить доступ к Spring Boot, вам необходимо включить репозиторий Red Hat Maven в свой pom.xmlфайл:

<repository>
  <id>red-hat-ga</id>
  <url>https://maven.repository.redhat.com/ga</url>
</repository>

После этого вы можете включить стартер в свой Maven pom.xml:

<dependency>
  <groupId>org.amqphub.spring</groupId>
  <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
  <version>2.5.6</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Затем нам просто нужно включить JMS для нашего приложения с @EnableJMSаннотацией:

@SpringBootApplication
@EnableJms
public class SimpleConsumer {

   public static void main(String[] args) {
      SpringApplication.run(SimpleConsumer.class, args);
   }

}

Наше приложение очень простое. Он просто получает и печатает входящее сообщение. Метод получения сообщений должен быть аннотирован с помощью @JmsListener. Поле destinationсодержит имя целевой очереди.

@Service
public class Listener {

   private static final Logger LOG = LoggerFactory
      .getLogger(Listener.class);

   @JmsListener(destination = "test-1")
   public void processMsg(SimpleMessage message) {
      LOG.info("============= Received: " + message);
   }

}

Вот класс, представляющий наше сообщение:

public class SimpleMessage implements Serializable {

   private Long id;
   private String source;
   private String content;

   public SimpleMessage() {
   }

   public SimpleMessage(Long id, String source, String content) {
      this.id = id;
      this.source = source;
      this.content = content;
   }

   // ... GETTERS AND SETTERS

   @Override
   public String toString() {
      return "SimpleMessage{" +
              "id=" + id +
              ", source='" + source + '\'' +
              ", content='" + content + '\'' +
              '}';
   }
}

Наконец, нам нужно установить параметры конфигурации подключения. Со стартером AMQP Spring Boot это очень просто. Нам просто нужно установить свойство amqphub.amqp10jms.remoteUrl. На данный момент мы собираемся использовать переменную среды, установленную на уровне Kubernetes Deployment.

amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}

Приложение производителя очень похоже. Вместо аннотации для получения сообщений мы используем Spring JmsTemplateдля создания и отправки сообщений в целевую очередь. Метод отправки сообщений предоставляется как POST /producer/sendконечная точка HTTP.

@RestController
@RequestMapping("/producer")
public class ProducerController {

   private static long id = 1;
   private final JmsTemplate jmsTemplate;
   @Value("${DESTINATION}")
   private String destination;

   public ProducerController(JmsTemplate jmsTemplate) {
      this.jmsTemplate = jmsTemplate;
   }

   @PostMapping("/send")
   public SimpleMessage send(@RequestBody SimpleMessage message) {
      if (message.getId() == null) {
          message.setId(id++);
      }
      jmsTemplate.convertAndSend(destination, message);
      return message;
   }
}

Создайте кластер Kind с помощью Nginx Ingress.

Наши примеры приложений готовы. Перед их развертыванием нам нужно подготовить локальный кластер Kubernetes. Мы развернём там кластер ActiveMQ, состоящий из трёх брокеров. Поэтому наш кластер Kubernetes также будет состоять из трех нод. Следовательно, в Kubernetes работает три экземпляра пользовательского приложения. Они подключаются к брокерам ActiveMQ по протоколу AMQP. Существует также один экземпляр приложения производителя, которое отправляет сообщения по запросу. Вот схема нашей архитектуры.

activemq-весна-загрузка-kubernetes-арка

Для локального запуска многоузлового кластера Kubernetes мы будем использовать Kind. Мы проверим не только связь по протоколу AMQP, но и предоставим консоль управления ActiveMQ через HTTP. Поскольку ActiveMQ использует безголовые службы для предоставления веб-консоли, мы должны создать и настроить Ingress on Kind для доступа к ней. Давайте начнем.

На первом этапе мы собираемся создать кластер Kind. Он состоит из плоскости управления и трех рабочих. Конфигурация должна быть правильно подготовлена ​​для запуска Nginx Ingress Controller. Мы должны добавить ingress-readyметку к одному рабочему узлу и открыть порты 80и файлы 443. Вот окончательная версия конфигурационного файла Kind:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      protocol: TCP  
  - role: worker
  - role: worker

Теперь давайте создадим кластер Kind, выполнив следующую команду:

$ kind create cluster --config kind-config.yaml

Если ваш кластер был успешно создан, вы должны увидеть аналогичную информацию:

После этого давайте установим Nginx Ingress Controller. Это всего лишь одна команда:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Проверим установку:

$ kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS  AGE
ingress-nginx-admission-create-wbbzh        0/1     Completed   0         1m
ingress-nginx-admission-patch-ws2mv         0/1     Completed   0         1m
ingress-nginx-controller-86b6d5756c-rkbmz   1/1     Running     0         1m

Установите ActiveMQ Artemis в Kubernetes

Наконец, мы можем перейти к установке ActiveMQ Artemis. Во-первых, давайте установим необходимые CRD. Вы можете найти все манифесты YAML в репозитории оператора на GitHub.

$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator

Манифесты с CRD находятся в deploy/crdsкаталоге:

$ kubectl create -f ./deploy/crds

После этого мы можем установить оператор:

$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml

Чтобы создать кластер, мы должны создать ActiveMQArtemisобъект. Он содержит ряд брокеров, входящих в состав кластера (1) . Мы также должны установить метод доступа, чтобы открыть порт AMQP за пределами каждого модуля брокера (2) . Разумеется, мы также выставим консоль управления (3) .

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 3 # (1)
    image: placeholder
    messageMigration: true
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "250m"
        memory: "512Mi"
  acceptors: # (2)
    - name: amqp
      protocols: amqp
      port: 5672
      connectionsAllowed: 5
  console: # (3)
    expose: true

После ActiveMQArtemisсоздания оператор запускает процесс развертывания. Он создает StatefulSetобъект:

$ kubectl get statefulset
NAME        READY   AGE
ex-aao-ss   3/3     1m

Он последовательно запускает все три пода с брокерами:

$ kubectl get pod -l application=ex-aao-app
NAME          READY   STATUS    RESTARTS    AGE
ex-aao-ss-0   1/1     Running   0           5m
ex-aao-ss-1   1/1     Running   0           3m
ex-aao-ss-2   1/1     Running   0           1m

Выведем список Services, созданный оператором. Для Serviceкаждого брокера имеется отдельный порт для доступа к порту AMQP ( ex-aao-amqp-*) и веб-консоли ( ex-aao-wsconsj-*):

ActiveMQ-весна-загрузка-кубернеты-сервисы

Оператор автоматически создает объекты Ingress для каждой веб-консоли Service. Мы изменим их, добавив разные хосты. Допустим, это one.activemq.comдомен для первого брокера, two.activemq.comдля второго брокера и т. д.

$ kubectl get ing    
NAME                      CLASS    HOSTS                  ADDRESS     PORTS   AGE
ex-aao-wconsj-0-svc-ing   <none>   one.activemq.com       localhost   80      1h
ex-aao-wconsj-1-svc-ing   <none>   two.activemq.com       localhost   80      1h
ex-aao-wconsj-2-svc-ing   <none>   three.activemq.com                  localhost   80      1h

После создания входов нам нужно будет добавить следующую строку в /etc/hosts.

127.0.0.1    one.activemq.com two.activemq.com three.activemq.com

Теперь мы получаем доступ к консоли управления, например, для третьего брокера по следующему URL-адресу http://three.activemq.com/console .

ActiveMQ-весна-загрузка-кубернетес-консоль

Как только брокер будет готов, мы можем определить тестовую очередь. Имя этой очереди test-1.

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: address-1
spec:
  addressName: address-1
  queueName: test-1
  routingType: anycast

Запустите приложение Spring Boot в Kubernetes и подключитесь к ActiveMQ.

Теперь давайте развернем потребительское приложение. В Deploymentманифесте мы должны указать URL-адрес подключения к кластеру ActiveMQ. Но подождите… как его подключить? Есть три брокера, использующие три отдельных Kubernetes Service. К счастью, стартер AMQP Spring Boot поддерживает его. failoverВнутри раздела мы можем указать адреса трех брокеров . Давайте попробуем, чтобы увидеть, что произойдет.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-consumer
  template:
    metadata:
      labels:
        app: simple-consumer
    spec:
      containers:
      - name: simple-consumer
        image: piomin/simple-consumer
        env:
          - name: ARTEMIS_URL
            value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
        resources:
          limits:
            memory: 256Mi
            cpu: 500m
          requests:
            memory: 128Mi
            cpu: 250m

Приложение подготовлено для развертывания с помощью Skaffold. Если вы запустите skaffold devкоманду, вы развернете и увидите журналы всех трех экземпляров потребительского приложения. Каков результат? Все экземпляры подключаются к первому URL из списка, как показано ниже.

К счастью, существует параметр аварийного переключения, который помогает более равномерно распределять клиентские подключения между несколькими удаленными одноранговыми узлами. С этой failover.randomizeопцией URI случайным образом перемешиваются перед попыткой подключения к одному из них. Давайте заменим ARTEMIS_URLenv в Deploymentманифесте следующей строкой:

failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true

Распределение между экземплярами брокера выглядит немного лучше. Конечно, результат случайный, поэтому вы можете получить разные результаты.

The first way to distribute connections is through a dedicated Kubernetes Service. We don't have to use the Services created automatically by the operator. We can create our own Service that load-balances across all available broker pods.

kind: Service
apiVersion: v1
metadata:
  name: ex-aao-amqp-lb
spec:
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
  type: ClusterIP
  selector:
    application: ex-aao-app

Now we can drop the failover section on the client side and rely entirely on Kubernetes mechanisms.

spec:
  containers:
  - name: simple-consumer
    image: piomin/simple-consumer
    env:
      - name: ARTEMIS_URL
        value: amqp://ex-aao-amqp-lb:5672

This time we won't see anything interesting in the application logs, because all instances connect to the same URL. We can verify the distribution across all broker instances using, for example, the management web console. Here is the list of consumers on the first ActiveMQ instance:

You will get exactly the same result for the second instance. All consumer application instances have been distributed evenly across all available brokers inside the cluster.

Now we are going to deploy the producer application. We use the same Kubernetes Service to connect to the ActiveMQ cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
            - name: DESTINATION
              value: test-1
          ports:
            - containerPort: 8080

Since we need to call an HTTP endpoint, let's create a Service for the producer application:

apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  type: ClusterIP
  selector:
    app: simple-producer
  ports:
  - port: 8080

Let's deploy the producer application using Skaffold with port forwarding enabled:

$ skaffold dev --port-forward

Here is the list of our Deployments:

To send a test message, just execute the following command:

$ curl http://localhost:8080/producer/send \
  -d "{\"source\":\"test\",\"content\":\"Hello\"}" \
  -H "Content-Type:application/json"
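The same request can be built programmatically with the JDK's HTTP client. This is a sketch, not part of the original article; the port-forwarded address is an assumption that you should adjust to your environment, and actually sending the request is left commented out:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SendTestMessage {

    // Builds the same POST request as the curl command above.
    static HttpRequest build(String baseUrl) {
        String json = "{\"source\":\"test\",\"content\":\"Hello\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/producer/send"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("http://localhost:8080");
        System.out.println(req.method() + " " + req.uri());
        // To actually send it (requires the port-forward to be running):
        // java.net.http.HttpClient.newHttpClient()
        //     .send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```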

Advanced Configuration

If you need more advanced traffic distribution between the brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override a configuration property at runtime. Here is a very simple example. After the application starts, we call an external service over HTTP. It returns the number of the next instance.

@Configuration
public class AmqpConfig {

    @PostConstruct
    public void init() {
        RestTemplate t = new RestTemplateBuilder().build();
        int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
        System.setProperty("amqphub.amqp10jms.remoteUrl",
                "amqp://ex-aao-amqp-" + x + "-svc:5672");
    }

}

Here is the implementation of the counter application. It simply increments a number and takes it modulo the number of broker instances. Of course, we could create a more advanced implementation and, for example, ensure a connection to the broker instance running on the same Kubernetes node as the application pod.

@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {

   private static int c = 0;

   public static void main(String[] args) {
      SpringApplication.run(CounterApp.class, args);
   }

   @Value("${DIVIDER:0}")
   int divider;

   @GetMapping
   public Integer count() {
      if (divider > 0)
         return c++ % divider;
      else
         return c++;
   }
}
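The round-robin property of the counter can be checked with a quick standalone simulation (illustrative; CounterSim and its DIVIDER constant are hypothetical names, mirroring the controller above without the Spring wiring):

```java
import java.util.Arrays;

public class CounterSim {

    private static int c = 0;
    private static final int DIVIDER = 3; // number of broker instances

    // Same logic as CounterApp.count(): cycles through 0..DIVIDER-1.
    static int count() {
        return DIVIDER > 0 ? c++ % DIVIDER : c++;
    }

    public static void main(String[] args) {
        int[] hits = new int[DIVIDER];
        for (int i = 0; i < 9; i++) {
            hits[count()]++;
        }
        // Nine calls land on each broker index exactly three times.
        System.out.println(Arrays.toString(hits)); // prints [3, 3, 3]
    }
}
```

Note that CounterApp keeps its state in a plain static int, so this only distributes evenly while a single counter instance is running; scaling the counter itself would require shared state.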

Final Thoughts

ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. Thanks to the ActiveMQ Artemis Operator, it can be managed declaratively on Kubernetes. You can also easily integrate it with Spring Boot using a dedicated starter. It offers a variety of configuration options and is actively developed by Red Hat and the community.

Link: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/

#kubernetes #springboot #java #rabbitmq 

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes
Thierry  Perret

Thierry Perret

1659083820

Comment Exécuter ActiveMQ Artemis Avec Spring Boot Sur Kubernetes

Cet article vous apprendra à exécuter ActiveMQ sur Kubernetes et à l'intégrer à votre application via Spring Boot. Nous allons déployer un courtier ActiveMQ en cluster à l'aide d'un opérateur dédié . Ensuite, nous allons créer et exécuter deux applications Spring Boot. Le premier d'entre eux s'exécute dans plusieurs instances et reçoit des messages de la file d'attente, tandis que le second envoie des messages à cette file d'attente. Afin de tester le cluster ActiveMQ, nous allons utiliser Kind . L'application consommateur se connecte au cluster en utilisant plusieurs modes différents. Nous allons détailler ces modes.

Vous pouvez trouver de nombreux articles sur d'autres courtiers de messages comme RabbitMQ ou Kafka sur mon blog. Si vous souhaitez en savoir plus sur RabbitMQ sur Kubernetes, veuillez vous référer à cet article . Afin d'en savoir plus sur l'intégration de Kafka et Spring Boot, vous pouvez lire l'article sur Kafka Streams et Spring Cloud Stream disponible ici . Auparavant, je n'écrivais pas beaucoup sur ActiveMQ, mais c'est aussi un courtier de messages très populaire. Par exemple, il prend en charge la dernière version du protocole AMQP, tandis que Rabbit est basé sur leur extension AMQP 0.9.

Code source

Si vous souhaitez l'essayer par vous-même, vous pouvez toujours jeter un œil à mon code source. Pour ce faire, vous devez cloner mon  référentiel GitHub . Allez ensuite dans le messagingrépertoire. Vous y trouverez trois applications Spring Boot : simple-producer, simple-consumeret simple-counter. Après cela, vous n'aurez plus qu'à suivre mes instructions. Commençons.

Intégrer Spring Boot à ActiveMQ

Commençons par l'intégration entre nos applications Spring Boot et le courtier ActiveMQ Artemis. En fait, ActiveMQ Artemis est la base du produit commercial fourni par Red Hat appelé AMQ Broker . Red Hat développe activement un démarreur Spring Boot pour ActiveMQ et un opérateur pour l'exécuter sur Kubernetes. Pour accéder à Spring Boot, vous devez inclure le référentiel Red Hat Maven dans votre pom.xmlfichier :

<repository>
  <id>red-hat-ga</id>
  <url>https://maven.repository.redhat.com/ga</url>
</repository>

Après cela, vous pouvez inclure un starter dans votre Maven pom.xml:

<dependency>
  <groupId>org.amqphub.spring</groupId>
  <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
  <version>2.5.6</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Ensuite, il nous suffit d'activer JMS pour notre application avec l' @EnableJMSannotation :

@SpringBootApplication
@EnableJms
public class SimpleConsumer {

   public static void main(String[] args) {
      SpringApplication.run(SimpleConsumer.class, args);
   }

}

Notre application est très simple. Il reçoit et imprime simplement un message entrant. La méthode de réception des messages doit être annotée avec @JmsListener. Le destinationchamp contient le nom d'une file d'attente cible.

@Service
public class Listener {

   private static final Logger LOG = LoggerFactory
      .getLogger(Listener.class);

   @JmsListener(destination = "test-1")
   public void processMsg(SimpleMessage message) {
      LOG.info("============= Received: " + message);
   }

}

Voici la classe qui représente notre message :

public class SimpleMessage implements Serializable {

   private Long id;
   private String source;
   private String content;

   public SimpleMessage() {
   }

   public SimpleMessage(Long id, String source, String content) {
      this.id = id;
      this.source = source;
      this.content = content;
   }

   // ... GETTERS AND SETTERS

   @Override
   public String toString() {
      return "SimpleMessage{" +
              "id=" + id +
              ", source='" + source + '\'' +
              ", content='" + content + '\'' +
              '}';
   }
}

Enfin, nous devons définir les paramètres de configuration de la connexion. Avec le démarreur AMQP Spring Boot, c'est très simple. Nous avons juste besoin de définir la propriété amqphub.amqp10jms.remoteUrl. Pour l'instant, nous allons nous baser sur la variable d'environnement définie au niveau de Kubernetes Deployment.

amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}

L'application du producteur est assez similaire. Au lieu de l'annotation pour recevoir des messages, nous utilisons Spring JmsTemplatepour produire et envoyer des messages à la file d'attente cible. La méthode d'envoi de messages est exposée en tant que point de POST /producer/sendterminaison HTTP.

@RestController
@RequestMapping("/producer")
public class ProducerController {

   private static long id = 1;
   private final JmsTemplate jmsTemplate;
   @Value("${DESTINATION}")
   private String destination;

   public ProducerController(JmsTemplate jmsTemplate) {
      this.jmsTemplate = jmsTemplate;
   }

   @PostMapping("/send")
   public SimpleMessage send(@RequestBody SimpleMessage message) {
      if (message.getId() == null) {
          message.setId(id++);
      }
      jmsTemplate.convertAndSend(destination, message);
      return message;
   }
}

Créer un cluster Kind avec Nginx Ingress

Nos exemples d'applications sont prêts. Avant de les déployer, nous devons préparer le cluster Kubernetes local. Nous y déploierons le cluster ActiveMQ composé de trois brokers. Par conséquent, notre cluster Kubernetes sera également composé de trois nœuds. Par conséquent, trois instances de l'application grand public s'exécutent sur Kubernetes. Ils se connectent aux courtiers ActiveMQ via le protocole AMQP. Il existe également une seule instance de l'application producteur qui envoie des messages à la demande. Voici le schéma de notre architecture.

activemq-spring-boot-kubernetes-arch

Afin d'exécuter localement un cluster Kubernetes multi-nœuds, nous utiliserons Kind. Nous allons non seulement tester la communication via le protocole AMQP, mais également exposer la console de gestion ActiveMQ via HTTP. Étant donné qu'ActiveMQ utilise des services sans tête pour exposer une console Web, nous devons créer et configurer Ingress sur Kind pour y accéder. Commençons.

Dans un premier temps, nous allons créer un cluster Kind. Il se compose d'un plan de contrôle et de trois travailleurs. La configuration doit être préparée correctement pour exécuter le contrôleur d'entrée Nginx. Nous devrions ajouter l' ingress-readyétiquette à un seul nœud de travail et exposer les ports 80et 443. Voici la version finale d'un fichier de configuration Kind :

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      protocol: TCP  
  - role: worker
  - role: worker

Créons maintenant un cluster Kind en exécutant la commande suivante :

$ kind create cluster --config kind-config.yaml

Si votre cluster a été créé avec succès, vous devriez voir des informations similaires :

Après cela, installons le contrôleur d'entrée Nginx. C'est juste une seule commande :

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Vérifions l'installation :

$ kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS  AGE
ingress-nginx-admission-create-wbbzh        0/1     Completed   0         1m
ingress-nginx-admission-patch-ws2mv         0/1     Completed   0         1m
ingress-nginx-controller-86b6d5756c-rkbmz   1/1     Running     0         1m

Installer ActiveMQ Artemis sur Kubernetes

Enfin, nous pouvons procéder à l'installation d'ActiveMQ Artemis. Tout d'abord, installons les CRD requis. Vous pouvez trouver tous les manifestes YAML dans le référentiel de l'opérateur sur GitHub.

$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator

Les manifestes avec CRD se trouvent dans le deploy/crdsrépertoire :

$ kubectl create -f ./deploy/crds

Après cela, nous pouvons installer l'opérateur :

$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml

Pour créer un cluster, nous devons créer l' ActiveMQArtemisobjet. Il contient un certain nombre de courtiers faisant partie du cluster (1) . Nous devons également définir l'accesseur, pour exposer le port AMQP en dehors de chaque pod de courtier (2) . Bien entendu, nous exposerons également la console de gestion (3) .

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 3 # (1)
    image: placeholder
    messageMigration: true
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "250m"
        memory: "512Mi"
  acceptors: # (2)
    - name: amqp
      protocols: amqp
      port: 5672
      connectionsAllowed: 5
  console: # (3)
    expose: true

Une fois le ActiveMQArtemisest créé, l'opérateur démarre le processus de déploiement. Il crée l' StatefulSetobjet :

$ kubectl get statefulset
NAME        READY   AGE
ex-aao-ss   3/3     1m

Il démarre les trois pods avec des courtiers de manière séquentielle :

$ kubectl get pod -l application=ex-aao-app
NAME          READY   STATUS    RESTARTS    AGE
ex-aao-ss-0   1/1     Running   0           5m
ex-aao-ss-1   1/1     Running   0           3m
ex-aao-ss-2   1/1     Running   0           1m

Affichons une liste de Services créés par l'opérateur. Il y a un seul Servicepar courtier pour exposer le port AMQP ( ex-aao-amqp-*) et la console Web ( ex-aao-wsconsj-*) :

activemq-spring-boot-kubernetes-services

L'opérateur crée automatiquement des objets Ingress pour chaque console Web Service. Nous allons les modifier en ajoutant différents hébergeurs. Disons que c'est le one.activemq.comdomaine du premier courtier, two.activemq.comdu deuxième courtier, etc.

$ kubectl get ing    
NAME                      CLASS    HOSTS                  ADDRESS     PORTS   AGE
ex-aao-wconsj-0-svc-ing   <none>   one.activemq.com       localhost   80      1h
ex-aao-wconsj-1-svc-ing   <none>   two.activemq.com       localhost   80      1h
ex-aao-wconsj-2-svc-ing   <none>   three.activemq.com                  localhost   80      1h

Après avoir créé les entrées, nous devrons ajouter la ligne suivante dans /etc/hosts.

127.0.0.1    one.activemq.com two.activemq.com three.activemq.com

Maintenant, nous accédons à la console de gestion, par exemple pour le troisième courtier sous l'URL suivante http://three.activemq.com/console .

activemq-spring-boot-console-kubernetes

Une fois que le courtier est prêt, nous pouvons définir une file d'attente de test. Le nom de cette file d'attente est test-1.

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: address-1
spec:
  addressName: address-1
  queueName: test-1
  routingType: anycast

Exécutez l'application Spring Boot sur Kubernetes et connectez-vous à ActiveMQ

Maintenant, déployons l'application grand public. Dans le Deploymentmanifeste, nous devons définir l'URL de connexion du cluster ActiveMQ. Mais attendez… comment le connecter ? Trois courtiers sont exposés à l'aide de trois Kubernetes distincts Service. Heureusement, le démarreur AMQP Spring Boot le prend en charge. Nous pouvons définir les adresses de trois courtiers à l'intérieur de la failoversection. Essayons pour voir ce qui va se passer.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-consumer
  template:
    metadata:
      labels:
        app: simple-consumer
    spec:
      containers:
      - name: simple-consumer
        image: piomin/simple-consumer
        env:
          - name: ARTEMIS_URL
            value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
        resources:
          limits:
            memory: 256Mi
            cpu: 500m
          requests:
            memory: 128Mi
            cpu: 250m

L'application est prête à être déployée avec Skaffold. Si vous exécutez la skaffold devcommande, vous déploierez et consulterez les journaux des trois instances de l'application grand public. Quel est le résultat ? Toutes les instances se connectent à la première URL de la liste, comme indiqué ci-dessous.

Heureusement, il existe un paramètre de basculement qui aide à répartir les connexions client plus uniformément sur plusieurs pairs distants. Avec cette failover.randomizeoption, les URI sont mélangés de manière aléatoire avant de tenter de se connecter à l'un d'entre eux. Remplaçons l' ARTEMIS_URLenv dans le Deploymentmanifeste par la ligne suivante :

failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true

La répartition entre les instances de courtier semble légèrement meilleure. Bien sûr, le résultat est aléatoire, vous pouvez donc obtenir des résultats différents.

La première façon de distribuer les connexions est via le Kubernetes dédié Service. Nous n'avons pas à tirer parti des services créés automatiquement par l'opérateur. Nous pouvons créer le nôtre Servicequi équilibre la charge entre tous les pods disponibles avec les courtiers.

kind: Service
apiVersion: v1
metadata:
  name: ex-aao-amqp-lb
spec:
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
  type: ClusterIP
  selector:
    application: ex-aao-app

Désormais, nous pouvons démissionner de la failoversection côté client et nous fier entièrement aux mécanismes de Kubernetes.

spec:
  containers:
  - name: simple-consumer
    image: piomin/simple-consumer
    env:
      - name: ARTEMIS_URL
        value: amqp://ex-aao-amqp-lb:5672

Cette fois, nous ne verrons rien dans les journaux d'application, car toutes les instances se connectent à la même URL. Nous pouvons vérifier une distribution entre toutes les instances de courtier en utilisant par exemple la console Web de gestion. Voici une liste de consommateurs sur la première instance d'ActiveMQ :

Ci-dessous, vous obtiendrez exactement les mêmes résultats pour la deuxième instance. Toutes les instances d'application grand public ont été réparties équitablement entre tous les courtiers disponibles au sein du cluster.

Maintenant, nous allons déployer l'application producteur. Nous utilisons le même Kubernetes Servicepour connecter le cluster ActiveMQ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
            - name: DESTINATION
              value: test-1
          ports:
            - containerPort: 8080

Since we need to call an HTTP endpoint, let's create a Service for the producer application:

apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  type: ClusterIP
  selector:
    app: simple-producer
  ports:
  - port: 8080

Let's deploy the producer application using Skaffold with port forwarding enabled:

$ skaffold dev --port-forward

Here is a list of our Deployments:

To send a test message, just run the following command:

$ curl http://localhost:8080/producer/send \
  -d "{\"source\":\"test\",\"content\":\"Hello\"}" \
  -H "Content-Type:application/json"

Advanced Configuration

If you need more advanced traffic distribution between the brokers inside the cluster, you can achieve it in several ways. For example, we can override the configuration property dynamically at runtime. Here is a very simple example. After the application starts, it calls an external service over HTTP, which returns the next instance number.

@Configuration
public class AmqpConfig {

    @PostConstruct
    public void init() {
        RestTemplate t = new RestTemplateBuilder().build();
        int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
        System.setProperty("amqphub.amqp10jms.remoteUrl",
                "amqp://ex-aao-amqp-" + x + "-svc:5672");
    }

}

Here is the implementation of the counter application. It simply increments a counter and returns it modulo the number of broker instances. Of course, we could create a more advanced implementation and provide, for example, a connection to the broker instance running on the same Kubernetes node as the application pod.

@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {

   private static int c = 0;

   public static void main(String[] args) {
      SpringApplication.run(CounterApp.class, args);
   }

   @Value("${DIVIDER:0}")
   int divider;

   @GetMapping
   public Integer count() {
      if (divider > 0)
         return c++ % divider;
      else
         return c++;
   }
}
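
The round-robin behavior of the `count()` method above can be illustrated with a plain-Java sketch. The class name `RoundRobinCounter` is hypothetical; it mirrors the increment-and-modulo logic with a divider of 2, matching the two broker instances used in this article.

```java
// Minimal sketch mirroring the counter app's logic: increment a counter
// and take the modulo by the number of broker instances (here, 2).
public class RoundRobinCounter {
    private int c = 0;
    private final int divider;

    public RoundRobinCounter(int divider) {
        this.divider = divider;
    }

    public int next() {
        // With divider > 0, instance numbers cycle 0, 1, 0, 1, ...
        // With divider == 0, the raw counter value is returned instead.
        return divider > 0 ? c++ % divider : c++;
    }

    public static void main(String[] args) {
        RoundRobinCounter counter = new RoundRobinCounter(2);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            sb.append(counter.next());
        }
        System.out.println(sb); // prints "0101"
    }
}
```

Each successive call, like each successive HTTP request to `/counter`, yields the index of the next broker Service that a starting application instance should connect to.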

Final Thoughts

ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you have learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. It can be managed declaratively on Kubernetes thanks to the ActiveMQ Artemis Operator. You can also integrate it easily with Spring Boot using a dedicated starter. It provides various configuration options and is actively developed by Red Hat and the community.

Lien : https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/

#kubernetes #springboot #java #rabbitmq 

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes
Hans  Marvin

Hans Marvin

1659076485

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes

This article will teach you how to run ActiveMQ on Kubernetes and integrate it with your app through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we are going to build and run two Spring Boot apps. The first of them is running in multiple instances and receiving messages from the queue, while the second is sending messages to that queue. In order to test the ActiveMQ cluster, we will use Kind. The consumer app connects to the cluster using several different modes. We will discuss those modes in detail.

You can find a lot of articles about other message brokers like RabbitMQ or Kafka on my blog. If you would like to read about RabbitMQ on Kubernetes, please refer to that article. In order to find out more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. Previously I didn't write much about ActiveMQ, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while Rabbit is based on its own extension of AMQP 0.9.

See more at: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/

#kubernetes #springboot #java #rabbitmq 

How to Run ActiveMQ Artemis with Spring Boot on Kubernetes