1673236819
We are going to use RabbitMQ to communicate between different microservices in Node.js. This example uses the direct exchange type in a logging system.
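The direct-exchange rule the example relies on can be modeled in a few lines (an illustrative Python sketch, not the tutorial's Node.js code): a direct exchange delivers a message to every queue whose binding key exactly equals the message's routing key.

```python
# Illustrative model of AMQP direct-exchange routing (not a real broker):
# a message goes to every queue bound with a key equal to its routing key.
def route_direct(bindings, routing_key):
    """bindings: list of (queue_name, binding_key) pairs."""
    return [queue for queue, key in bindings if key == routing_key]

# Hypothetical queue names mirroring the logging system's two consumers.
bindings = [
    ("info_logs", "info"),
    ("warn_error_logs", "warning"),
    ("warn_error_logs", "error"),
]

print(route_direct(bindings, "error"))  # only the warning/error queue
print(route_direct(bindings, "info"))   # only the info queue
```

This is why the Info consumer never sees warning or error messages: routing is an exact string match on the binding key, not a pattern match.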
Timestamps
---------------------
0:00 - Introduction
0:20 - Explaining the system to develop
1:50 - Creating the Logger Microservice (Producer/Publisher)
2:44 - Steps to create a Producer
3:08 - Creating the Producer class
11:33 - Creating the API
15:27 - Testing the API and the producer with Postman
17:30 - RabbitMQ Management UI
19:05 - Creating the Info Microservice (First Consumer)
19:12 - Steps to create a Consumer
20:34 - Creating the consumeMessages function
27:26 - Testing the Info Consumer
29:50 - Analyzing the changes in RabbitMQ Management
30:33 - Creating the WarningAndError Microservice (Second Consumer)
32:34 - Testing the WarningAndError Consumer
32:55 - Testing the whole application
36:10 - Analyzing the changes in RabbitMQ Management
36:48 - Taking a look at our initial diagram
37:25 - Explaining the importance of ACK
39:17 - Final Recap
Source Code : https://github.com/charbelh3/RabbitMQ-Logger-Example
Subscribe: https://www.youtube.com/@Computerix/featured
1670482672
In this tutorial, we'll learn about microservices and RabbitMQ in NestJS, both conceptually and practically, by creating a Facebook Messenger clone. We will also learn about and use Docker to easily set up the microservice architecture for the clone.
00:00 - Introduction
00:28 - Prerequisites
05:56 - System_Design [RabbitMQ]
09:53 - RabbitMQ_Fast_Version
11:36 - RabbitMQ
12:43 - Coding (FB Messenger Clone)
#nestjs #rabbitmq #microservices #docker
1667954913
Learn about software system design and microservices. This course is a hands-on approach to learning about microservice architectures and distributed systems using Python, Kubernetes, RabbitMQ, MongoDB, and MySQL.
⭐️ Contents ⭐️
(0:00:00) Intro
(0:01:02) Overview
(0:02:47) Installation & Setup
(0:10:16) Auth Service Code
(0:32:25) Auth Flow Overview & JWTs
(0:53:04) Auth Service Deployment
(0:56:08) Auth Dockerfile
(1:20:05) Kubernetes
(1:37:26) Gateway Service Code
(1:42:34) MongoDB & GridFs
(1:47:04) Architecture Overview (RabbitMQ)
(1:49:50) Synchronous Interservice Communication
(1:50:49) Asynchronous Interservice Communication
(1:53:19) Strong Consistency
(1:54:07) Eventual Consistency
(2:19:16) RabbitMQ
(2:21:16) Gateway Service Deployment
(2:35:34) Kubernetes Ingress
(2:46:28) Kubernetes StatefulSet
(2:51:18) RabbitMQ Deployment
(3:09:35) Converter Service Code
(3:33:43) Converter Service Deployment
(4:21:09) Checkpoint
(4:22:11) Update Gateway Service
(4:31:46) Notification Service Code
(4:43:24) Notification Service Deployment
(4:51:55) Sanity Check
(5:05:54) End
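Several of the chapters above (Architecture Overview, Asynchronous Interservice Communication) revolve around the competing-consumers pattern: many workers pull from one shared queue, so each message is handled exactly once. A minimal, broker-free Python model of that idea:

```python
# Minimal model of the competing-consumers pattern: several worker threads
# pull from one shared queue, so each message is processed exactly once.
import queue
import threading

def run_workers(messages, num_workers):
    q = queue.Queue()
    for m in messages:
        q.put(m)
    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                m = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            with lock:
                processed.append(m)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed

result = run_workers(list(range(100)), num_workers=4)
print(len(result))
```

With a real broker like RabbitMQ, the queue lives in the broker and the "workers" are separate consumer processes, but the delivery guarantee being modeled is the same: one queue, many consumers, each message delivered to exactly one of them.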
Kubernetes API Reference: https://kubernetes.io/docs/reference/kubernetes-api/
⭐️ References ⭐️
https://www.mongodb.com/docs/
https://www.rabbitmq.com/documentation.html
https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
https://docs.microsoft.com/en-us/azure/architecture/microservices/design/interservice-communication
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore
#microservice #python #kubernetes #rabbitmq #mongodb #mysql
1666639260
This repository is NOT ACTIVELY MAINTAINED. Consider using a different fork instead: rabbitmq/amqp091-go. In case of questions, start a discussion in that repo or use other RabbitMQ community resources.
This project has been used in production systems for many years. As of 2022, this repository is NOT ACTIVELY MAINTAINED.
This repository is very strict about any potential public API changes. You may want to consider rabbitmq/amqp091-go which is more willing to adapt the API.
This library supports the two most recent Go release series, currently 1.10 and 1.11.
This project supports RabbitMQ versions starting with 2.0, but is primarily tested against reasonably recent 3.x releases. Some features and behaviours may be server version-specific.
The goal is to provide a functional interface that closely represents the AMQP 0.9.1 model targeted at RabbitMQ as a server. This includes the minimum necessary to interact with the semantics of the protocol; anything beyond that is intentionally not supported.
See the 'examples' subdirectory for simple producers and consumers executables. If you have a use-case in mind which isn't well-represented by the examples, please file an issue.
Use Godoc documentation for reference and usage.
RabbitMQ tutorials in Go are also available.
Pull requests are very much welcomed. Create your pull request on a non-master branch, make sure a test or example is included that covers your change and your commits represent coherent changes that include a reason for the change.
To run the integration tests, make sure you have RabbitMQ running on any host, export the environment variable AMQP_URL=amqp://host/, and run go test -tags integration. TravisCI will also run the integration tests.
Thanks to the community of contributors.
Author: streadway
Source Code: https://github.com/streadway/amqp
License: BSD-2-Clause license
1665032783
This library is a pure PHP implementation of the AMQP 0-9-1 protocol. It's been tested against RabbitMQ.
The library was used for the PHP examples of RabbitMQ in Action and the official RabbitMQ tutorials.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Thanks to videlalvaro and postalservice14 for creating php-amqplib.
The package is now maintained by Ramūnas Dronga, Luke Bakken and several VMware engineers working on RabbitMQ.
Starting with version 2.0, this library uses AMQP 0.9.1 by default and thus requires RabbitMQ 2.0 or a later version. Server upgrades usually do not require any application code changes, since the protocol changes very infrequently, but please conduct your own testing before upgrading.
Since the library uses AMQP 0.9.1, support was added for several RabbitMQ extensions. Extensions that modify existing methods, such as alternate exchanges, are also supported.
enqueue/amqp-lib is an AMQP interop-compatible wrapper.
AMQProxy is a proxy library with connection and channel pooling/reuse. It allows for lower connection and channel churn when using php-amqplib, leading to less CPU usage on the RabbitMQ server.
Ensure you have composer installed, then run the following command:
$ composer require php-amqplib/php-amqplib
That will fetch the library and its dependencies into your vendor folder. Then you can add the following to your .php files to use the library:
require_once __DIR__.'/vendor/autoload.php';
Then you need to use the relevant classes, for example:
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
With RabbitMQ running, open two terminals. In the first one, execute the following commands to start the consumer:
$ cd php-amqplib/demo
$ php amqp_consumer.php
Then, in the other terminal, run:
$ cd php-amqplib/demo
$ php amqp_publisher.php some text to publish
You should see the message arrive in the process on the other terminal. To stop the consumer, send it the quit message:
$ php amqp_publisher.php quit
If you need to listen to the sockets used to connect to RabbitMQ, see the example in the non-blocking consumer:
$ php amqp_consumer_non_blocking.php
Please see the CHANGELOG for more information on what has changed recently.
http://php-amqplib.github.io/php-amqplib/
To avoid repeating ourselves: if you want to learn more about this library, please refer to the official RabbitMQ tutorials.
amqp_ha_consumer.php: demos the use of mirrored queues.
amqp_consumer_exclusive.php and amqp_publisher_exclusive.php: demo fanout exchanges using exclusive queues.
amqp_consumer_fanout_{1,2}.php and amqp_publisher_fanout.php: demo fanout exchanges with named queues.
amqp_consumer_pcntl_heartbeat.php: demos signal-based heartbeat sender usage.
basic_get.php: demos obtaining messages from the queues by using the basic get AMQP call.
If you have a cluster of multiple nodes to which your application can connect, you can start a connection with an array of hosts. To do that you should use the create_connection static method. For example:
$connection = AMQPStreamConnection::create_connection([
['host' => HOST1, 'port' => PORT, 'user' => USER, 'password' => PASS, 'vhost' => VHOST],
['host' => HOST2, 'port' => PORT, 'user' => USER, 'password' => PASS, 'vhost' => VHOST]
],
$options);
This code will try to connect to HOST1 first, and connect to HOST2 if the first connection fails. The method returns a connection object for the first successful connection. Should all connections fail, it will throw the exception from the last connection attempt. See demo/amqp_connect_multiple_hosts.php for more examples.
Let's say you have a process that generates a bunch of messages that are going to be published to the same exchange using the same routing_key and options like mandatory. Then you could make use of the batch_basic_publish library feature. You can batch messages like this:
$msg = new AMQPMessage($msg_body);
$ch->batch_basic_publish($msg, $exchange);
$msg2 = new AMQPMessage($msg_body);
$ch->batch_basic_publish($msg2, $exchange);
and then send the batch like this:
$ch->publish_batch();
Let's say our program needs to read from a file and then publish one message per line. Depending on the message size, you will have to decide when it's better to send the batch. You could send it every 50 messages, or every hundred. That's up to you.
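The flush-every-N idea can be sketched as a tiny batching wrapper. This is an illustrative Python model of the policy, not php-amqplib's API; `send_batch` stands in for the library's publish_batch call:

```python
# Sketch of size-based batch flushing: buffer messages and flush
# automatically once the batch reaches a chosen threshold.
class BatchPublisher:
    def __init__(self, send_batch, batch_size=50):
        self.send_batch = send_batch  # callable taking a list of messages
        self.batch_size = batch_size
        self.buffer = []

    def publish(self, message):
        self.buffer.append(message)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send_batch(self.buffer)
            self.buffer = []

sent = []
p = BatchPublisher(sent.append, batch_size=3)
for line in ["a", "b", "c", "d"]:
    p.publish(line)
p.flush()  # flush the remainder at end of input
print(sent)
```

Note the final flush: whatever threshold you pick, a trailing partial batch must still be sent when the input runs out.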
Another way to speed up your message publishing is by reusing the AMQPMessage instances. You can create your new message like this:
$properties = array('content_type' => 'text/plain', 'delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT);
$msg = new AMQPMessage($body, $properties);
$ch->basic_publish($msg, $exchange);
Now let's say that while you want to change the message body for future messages, you will keep the same properties; that is, your messages will still be text/plain and the delivery_mode will still be AMQPMessage::DELIVERY_MODE_PERSISTENT. If you create a new AMQPMessage instance for every published message, those properties would have to be re-encoded into the AMQP binary format every time. You can avoid all that by reusing the AMQPMessage and resetting the message body like this:
$msg->setBody($body2);
$ch->basic_publish($msg, $exchange);
AMQP imposes no limit on the size of messages; if a very large message is received by a consumer, PHP's memory limit may be reached within the library before the callback passed to basic_consume is called.
To avoid this, you can call the method AMQPChannel::setBodySizeLimit(int $bytes) on your channel instance. Body sizes exceeding this limit will be truncated and delivered to your callback with the AMQPMessage::$is_truncated flag set to true. The property AMQPMessage::$body_size will reflect the true body size of a received message, which will be higher than strlen(AMQPMessage::getBody()) if the message has been truncated.
Note that all data above the limit is read from the AMQP channel and immediately discarded, so there is no way to retrieve it within your callback. If you have another consumer which can handle messages with larger payloads, you can use basic_reject or basic_nack to tell the server (which still has a complete copy) to forward it to a Dead Letter Exchange.
By default, no truncation will occur. To disable truncation on a channel that has had it enabled, pass 0 (or null) to AMQPChannel::setBodySizeLimit().
Some RabbitMQ clients use automated connection recovery mechanisms to reconnect and recover channels and consumers in case of network errors. Since this client is single-threaded, you can set up connection recovery using an exception handling mechanism. Exceptions which might be thrown in case of connection errors:
PhpAmqpLib\Exception\AMQPConnectionClosedException
PhpAmqpLib\Exception\AMQPIOException
\RuntimeException
\ErrorException
Some other exceptions might be thrown while the connection is still intact. It's always a good idea to clean up the old connection when handling an exception, before reconnecting.
For example, if you want to set up a recovering connection:
$connection = null;
$channel = null;
while(true){
try {
$connection = new AMQPStreamConnection(HOST, PORT, USER, PASS, VHOST);
// Your application code goes here.
do_something_with_connection($connection);
} catch(AMQPRuntimeException $e) {
echo $e->getMessage();
cleanup_connection($connection);
usleep(WAIT_BEFORE_RECONNECT_uS);
} catch(\RuntimeException $e) {
cleanup_connection($connection);
usleep(WAIT_BEFORE_RECONNECT_uS);
} catch(\ErrorException $e) {
cleanup_connection($connection);
usleep(WAIT_BEFORE_RECONNECT_uS);
}
}
A full example is in demo/connection_recovery_consume.php.
This code will reconnect and retry the application code every time the exception occurs. Some exceptions can still be thrown and should not be handled as part of the reconnection process, because they might be application errors.
This approach makes sense mostly for consumer applications; producers will require some additional application code to avoid publishing the same message multiple times.
This was the simplest possible example; in a real-life application you might want to control the retry count and perhaps gradually increase the wait time between reconnection attempts.
You can find a more extensive example in #444.
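A retry counter with a growing wait time, as suggested above, might look like this. This is an illustrative Python sketch of the reconnect policy only; in the PHP code the `connect` callable would correspond to constructing a new AMQPStreamConnection:

```python
# Sketch of a bounded-retry reconnect policy with exponential backoff.
def reconnect_with_backoff(connect, max_retries=5, base_wait=0.1, sleep=None):
    import time
    sleep = sleep or time.sleep
    wait = base_wait
    for attempt in range(1, max_retries + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries, surface the error
            sleep(wait)
            wait *= 2  # back off: 0.1s, 0.2s, 0.4s, ...

# Hypothetical connector that succeeds on the third attempt.
attempts = []
def connect():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("broker not ready")
    return "connected"

waits = []
print(reconnect_with_backoff(connect, sleep=waits.append))  # connected
print(waits)  # [0.1, 0.2]
```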
If you have the PCNTL extension installed, signals will be dispatched when the consumer is not processing a message:
$pcntlHandler = function ($signal) {
switch ($signal) {
case \SIGTERM:
case \SIGUSR1:
case \SIGINT:
// some stuff before stop consumer e.g. delete lock etc
pcntl_signal($signal, SIG_DFL); // restore handler
posix_kill(posix_getpid(), $signal); // kill self with signal, see https://www.cons.org/cracauer/sigint.html
case \SIGHUP:
// some stuff to restart consumer
break;
default:
// do nothing
}
};
pcntl_signal(\SIGTERM, $pcntlHandler);
pcntl_signal(\SIGINT, $pcntlHandler);
pcntl_signal(\SIGUSR1, $pcntlHandler);
pcntl_signal(\SIGHUP, $pcntlHandler);
To disable this feature, just define the constant AMQP_WITHOUT_SIGNALS as true:
<?php
define('AMQP_WITHOUT_SIGNALS', true);
... more code
If you have the PCNTL extension installed and are using PHP 7.1 or greater, you can register a signal-based heartbeat sender:
<?php
$sender = new PCNTLHeartbeatSender($connection);
$sender->register();
... code
$sender->unregister();
If you want to know what's going on at a protocol level then add the following constant to your code:
<?php
define('AMQP_DEBUG', true);
... more code
?>
To run the publishing/consume benchmark type:
$ make benchmark
To successfully run the tests you need to first have a stock RabbitMQ broker running locally. Then run the tests like this:
$ make test
Please see CONTRIBUTING for details.
If you still want to use the old version of the protocol then you can do it by setting the following constant in your configuration code:
define('AMQP_PROTOCOL', '0.8');
The default value is '0.9.1'.
If for some reason you don't want to use Composer, then you need to have an autoloader in place for the library classes. People have reported using this autoloader with success.
Below is the original README file content. Credit goes to the original authors.
PHP library implementing Advanced Message Queuing Protocol (AMQP).
The library is a port of the Python code of py-amqplib http://barryp.org/software/py-amqplib/
It has been tested with the RabbitMQ server.
Project home page: http://code.google.com/p/php-amqplib/
For discussion, please join the group:
http://groups.google.com/group/php-amqplib-devel
For bug reports, please use bug tracking system at the project page.
Patches are very welcome!
Author: Vadim Zaliva lord@crocodile.org
Author: php-amqplib
Source Code: https://github.com/php-amqplib/php-amqplib
License: LGPL-2.1 license
1660013045
This tutorial will guide you through using AMQP messaging via RabbitMQ in a Spring Boot application, and through configuring the message converters to switch from default Java deserialization to JSON.
GitHub:
https://github.com/Java-Techie-jt/springboot-rabbitmq-example
1659868440
GarageMQ
GarageMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP). It is compatible with any AMQP or RabbitMQ client (tested with streadway/amqp and php-amqplib).
A simple demo server runs on DigitalOcean (2 GB memory / 25 GB disk, FRA1, Ubuntu 16.04 with Docker 17.12.0~ce):
Server | Port | Admin port | Login | Password | Virtual Host |
---|---|---|---|---|---|
46.101.117.78 | 5672 | 15672 | guest | guest | / |
amqp://guest:guest@46.101.117.78:5672
The quick way to start with GarageMQ is by using Docker. You can build the image on your own or pull it from Docker Hub:
docker pull amplitudo/garagemq
docker run --name garagemq -p 5672:5672 -p 15672:15672 amplitudo/garagemq
or
go get -u github.com/valinurovam/garagemq/...
cd $GOPATH/src/github.com/valinurovam/garagemq
docker build -t garagemq .
docker run --name garagemq -p 5672:5672 -p 15672:15672 garagemq
Alternatively, build and run it directly with Go and make:
go get -u github.com/valinurovam/garagemq/...
cd $GOPATH/src/github.com/valinurovam/garagemq
make build.all && make run
Flag | Default | Description | ENV |
---|---|---|---|
--config | default config | Config path | GMQ_CONFIG |
--log-file | stdout | Log file path or stdout , stderr | GMQ_LOG_FILE |
--log-level | info | Logger level | GMQ_LOG_LEVEL |
--hprof | false | Enable or disable hprof profiler | GMQ_HPROF |
--hprof-host | 0.0.0.0 | Profiler host | GMQ_HPROF_HOST |
--hprof-port | 8080 | Profiler port | GMQ_HPROF_PORT |
# Proto name to implement (amqp-rabbit or amqp-0-9-1)
proto: amqp-rabbit
# User list
users:
- username: guest
password: 084e0343a0486ff05530df6c705c8bb4 # guest md5
# Server TCP settings
tcp:
ip: 0.0.0.0
port: 5672
nodelay: false
readBufSize: 196608
writeBufSize: 196608
# Admin-server settings
admin:
ip: 0.0.0.0
port: 15672
queue:
shardSize: 8192
maxMessagesInRam: 131072
# DB settings
db:
# default path
defaultPath: db
# backend engine (badger or buntdb)
engine: badger
# Default virtual host path
vhost:
defaultPath: /
# Security check rule (md5 or bcrypt)
security:
passwordCheck: md5
connection:
channelsMax: 4096
frameMaxSize: 65536
Performance tests with load testing tool https://github.com/rabbitmq/rabbitmq-perf-test on test-machine:
MacBook Pro (15-inch, 2016)
Processor 2,6 GHz Intel Core i7
Memory 16 GB 2133 MHz LPDDR3
./bin/runjava com.rabbitmq.perf.PerfTest --exchange test -uri amqp://guest:guest@localhost:5672 --queue test --consumers 10 --producers 5 --qos 100 -flag persistent
...
...
id: test-235131-686, sending rate avg: 53577 msg/s
id: test-235131-686, receiving rate avg: 51941 msg/s
./bin/runjava com.rabbitmq.perf.PerfTest --exchange test -uri amqp://guest:guest@localhost:5672 --queue test --consumers 10 --producers 5 --qos 100
...
...
id: test-235231-085, sending rate avg: 71247 msg/s
id: test-235231-085, receiving rate avg: 69009 msg/s
The database backend is selectable through the db.engine config option:
db:
defaultPath: db
engine: badger
db:
defaultPath: db
engine: buntdb
The basic.qos method is implemented for both standard AMQP and RabbitMQ modes. In standard AMQP mode, qos applies per connection (global=true) or per channel (global=false). In RabbitMQ mode, it applies per channel (global=true) or per new consumer (global=false).
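The RabbitMQ-mode distinction can be sketched as a delivery check: with global=false each consumer gets its own prefetch budget, while with global=true all consumers on the channel share one. An illustrative Python model (not GarageMQ's internals):

```python
# Model of RabbitMQ-style basic.qos accounting: may a consumer receive
# another message, given unacked counts and the prefetch limit?
def can_deliver(unacked_by_consumer, consumer, prefetch, global_qos):
    if global_qos:
        # one shared limit for all consumers on the channel
        return sum(unacked_by_consumer.values()) < prefetch
    # separate limit per consumer
    return unacked_by_consumer.get(consumer, 0) < prefetch

unacked = {"c1": 2, "c2": 0}
print(can_deliver(unacked, "c2", prefetch=2, global_qos=False))  # True
print(can_deliver(unacked, "c2", prefetch=2, global_qos=True))   # False
```

With global=false, c2 has no unacked messages of its own, so it can receive more; with global=true, c1's two unacked messages already exhaust the shared channel budget.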
The administration server is available at the standard port :15672 and is read-only at the moment. The main page is shown above; more screenshots are in the /readme folder.
Contributions of any kind are always welcome and appreciated. Contribution guidelines are a work in progress.
Author: Valinurovam
Source Code: https://github.com/valinurovam/garagemq
License: MIT license
1659790500
March Hare is an idiomatic, fast, and well-maintained (J)Ruby DSL on top of the RabbitMQ Java client. It strives to combine the strong parts of the Java client with over 4 years of experience using and developing the Ruby amqp gem and Bunny.
March Hare is not
March Hare has been around since 2011 and can be considered a mature library.
It is based on the RabbitMQ Java client, which is officially supported by the RabbitMQ team at VMware.
gem install march_hare
gem "march_hare", "~> 4.4"
MarchHare documentation guides are mostly complete.
Several code examples are available. Our test suite also has many code examples that demonstrate various parts of the API.
API reference is available.
March Hare supports JRuby 9.0 or later.
March Hare requires JDK 8 or later.
See ChangeLog.md.
CI is hosted by travis-ci.org
You'll need a running RabbitMQ instance with all defaults and management plugin enabled on your local machine to run the specs.
To boot one via docker you can use:
docker run -p 5672:5672 -p 15672:15672 rabbitmq:3-management
And then you can run the specs using rspec:
bundle exec rspec
Author: ruby-amqp
Source code: https://github.com/ruby-amqp/march_hare
License: MIT license
1659748440
From a single machine to a cluster, the system scales horizontally simply by adding servers to cope with greater traffic and concurrency.
Browser cache, Nginx cache, page cache, object cache, and asynchronous ordering via a RabbitMQ queue reduce network traffic and database pressure and improve the system's concurrent processing capability.
SpringBoot/RabbitMQ/Redis/MySQL, based on the most popular Java microservice stack.
Graphic verification codes, rate limiting, interface address hiding, and various other security mechanisms reject bot ticket-grabbing.
The bottleneck is the database's ability to handle requests: once a large number of requests hit the database, it may time out or break down due to its limited processing capacity. So the idea is to intercept requests as far upstream in the system as possible.
For applications that read a lot (inventory queries) and write a little (order creation), use caching aggressively: serving inventory queries from the cache reduces database operations.
Cache, application, and database clusters with load balancing; asynchronous message processing.
The front end can restrict normal users' operations through JavaScript, and static resources can be cached by a CDN and the user's browser.
Redis provides a DECR command that decrements a counter atomically.
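The reason an atomic decrement matters: a separate check-then-decrement can oversell under concurrency, while a single atomic decrement cannot. A small illustrative Python model of the flash-sale stock deduction (a lock stands in for Redis's single-threaded atomicity):

```python
# Why the decrement must be atomic: model stock deduction the way an
# atomic Redis DECR behaves, decrementing first and judging the result.
import threading

stock = {"n": 5}
lock = threading.Lock()
sold = []

def try_buy():
    with lock:  # stands in for Redis executing DECR atomically
        stock["n"] -= 1
        remaining = stock["n"]
    if remaining >= 0:
        sold.append(1)  # sale succeeds
    # a negative result means sold out; the request is rejected

threads = [threading.Thread(target=try_buy) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(sold))  # never more than the initial stock of 5
```

Even with 20 concurrent buyers, exactly 5 sales succeed; any non-atomic read-then-write version of this logic could sell more units than exist.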
Rate limiting is done with an interceptor: a custom annotation marks a method and specifies how many times it may be accessed per unit of time, and requests beyond that limit are intercepted.
The interceptor extends HandlerInterceptorAdapter and overrides the preHandle method, where the access frequency is synchronized to Redis as a key/value pair with an expiry. Finally, the interceptor is registered in the project by extending WebMvcConfigurerAdapter and overriding the addInterceptors() method.
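The interceptor's counting logic (allow at most N hits per key within a validity window, reject above the threshold) can be sketched like this. An illustrative Python model, with an in-memory dict standing in for the Redis key/value pair with TTL:

```python
# Fixed-window rate limiter: allow at most `limit` hits per `window`
# seconds for each key, mirroring a Redis counter with an expiring key.
import time

class RateLimiter:
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.counters = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.window:  # the "key expired": start fresh
            start, count = now, 0
        if count >= self.limit:
            return False  # over the limit: intercept the request
        self.counters[key] = (start, count + 1)
        return True

rl = RateLimiter(limit=3, window=60)
print([rl.allow("user1", now=0) for _ in range(4)])  # [True, True, True, False]
print(rl.allow("user1", now=61))                     # True, window rolled over
```

In the real project the counter lives in Redis so the limit holds across application instances; this sketch only shows the per-key windowing decision.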
Chap01: Integrate MyBatis and Redis
Chap02: MD5 encryption and global exception handler
Chap03: Implement distributed sessions via Redis
Chap04: Implement the flash sale function
Chap05: Using JMeter for pressure testing
Chap06: Page cache and object cache
Chap07: Integrate RabbitMQ and optimize the interface
Chap08: Optimizing the flash sale system after integrating RabbitMQ
Chap10: Concluding the project
Author: codesssss
Source code: https://github.com/codesssss/FlashSale
#spring #springboot #java #mysql #rabbitmq
1659128460
This repository contains a CQRS implementation in Java. I've written this code base step by step in my Turkish-language Medium series called "Java ile CQRS Design Pattern | Docker, Elasticsearch, RabbitMQ, Spring, MySQL".
There are several basic steps below that we need to execute.
First, we need to run the docker-compose.yml file given below to set up the environment. The compose file is already in the repository: docker-compose.yml
version: "3.9"
services:
database:
container_name: classifieds_mysql_container
image: mysql:latest
restart: always
ports:
- "3307:3306"
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: classifieds
MYSQL_USER: user
MYSQL_PASSWORD: password
volumes:
- mysql_database:/var/lib/mysql
rabbitmq:
container_name: classifieds_rabbitmq_container
image: rabbitmq:3-management
ports:
- "5672:5672"
- "15672:15672"
elasticsearch:
container_name: classifieds_elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
volumes:
- esdata:/usr/share/elasticsearch/data
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
logging:
driver: none
ports:
- "9300:9300"
- "9200:9200"
volumes:
mysql_database:
esdata:
docker-compose up
After running docker-compose, we need to create the classified table on MySQL; that is the entity we use throughout the application. The database connection information is already defined in the docker-compose.yml file. After connecting to MySQL, use the schema below.
CREATE TABLE `classified` (
`id` bigint NOT NULL AUTO_INCREMENT,
`title` varchar(100) DEFAULT NULL,
`price` double DEFAULT NULL,
`detail` text,
`categoryId` bigint DEFAULT NULL,
PRIMARY KEY (`id`)
) AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
We need to create an index on Elasticsearch to represent the database table there. If you want to check the Elasticsearch container's status, you can use the cURL command below.
curl -XGET "http://localhost:9200/_cat/health?format=json&pretty"
Create Index with mapping on Elasticsearch:
curl --location --request PUT 'http://localhost:9200/classifieds' \
--header 'Content-Type: application/json' \
--data-raw '{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
}
},
"mappings": {
"properties": {
"id": {
"type": "long"
},
"title": {
"type": "text"
},
"price": {
"type": "double"
},
"detail": {
"type": "text"
},
"categoryId": {
"type": "long"
}
}
}
}'
If there is no error, we will see the mapping on Elasticsearch. We can use the cURL command below to display it.
curl -XGET "http://localhost:9200/classifieds/_mapping?pretty&format=json"
It shows us the created index's mapping.
We bound the default RabbitMQ ports in the docker-compose file. If we need to check RabbitMQ's status, we can open the RabbitMQ dashboard at http://localhost:15672
Sending a request to the API creates the data on MySQL and then publishes a RabbitMQ event that updates Elasticsearch:
curl --location --request POST 'http://localhost:8080/classifieds' \
--header 'Content-Type: application/json' \
--data-raw '{
"title": "Macbook Pro 2019",
"detail": "Sahibinden çok temiz Macbook Pro 2019.",
"price": 27894,
"categoryId": 47
}'
Reading classified list from Elasticsearch:
curl --location --request GET 'http://localhost:8080/classifieds'
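The write-then-propagate flow above (MySQL write model, RabbitMQ event, Elasticsearch read model) can be reduced to a minimal in-memory sketch. This is illustrative Python, not the project's Java code; the dicts and list stand in for MySQL, Elasticsearch, and RabbitMQ respectively:

```python
# Minimal CQRS flow: commands mutate the write model and emit events;
# a consumer applies events to a separate read model.
write_model = {}   # stands in for MySQL
event_queue = []   # stands in for RabbitMQ
read_model = {}    # stands in for Elasticsearch

def create_classified(cid, title, price):
    write_model[cid] = {"title": title, "price": price}
    event_queue.append(("classified_created", cid))

def consume_events():
    while event_queue:
        kind, cid = event_queue.pop(0)
        if kind == "classified_created":
            # project the write-side record into the read side
            read_model[cid] = dict(write_model[cid])

create_classified(1, "Macbook Pro 2019", 27894)
consume_events()
print(read_model[1]["title"])  # Macbook Pro 2019
```

The key property is that reads never touch the write model: until the event is consumed, the read side lags behind, which is exactly the eventual consistency a real RabbitMQ-based projection exhibits.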
Author: yusufyilmazfr
Source code: https://github.com/yusufyilmazfr/cqrs-design-pattern-java
License:
#spring #java #springboot #elasticsearch #rabbitmq #docker #mysql
1659105556
This article will show you how to run ActiveMQ on Kubernetes and integrate it with your application through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we will build and run two Spring Boot applications. The first of them runs in multiple instances and receives messages from a queue, while the second sends messages to that queue. To test the ActiveMQ cluster, we will use Kind. The consumer application connects to the cluster using several different modes, which we will discuss in detail.
You can find many articles about other message brokers such as RabbitMQ or Kafka on my blog. If you want to read about RabbitMQ on Kubernetes, please refer to that article. To learn more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. I haven't written much about ActiveMQ before, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while RabbitMQ is based on extensions of AMQP 0.9.
If you would like to try it yourself, you can always look at my source code. To do that, clone my GitHub repository and go into the messaging directory. You will find three Spring Boot applications there: simple-producer, simple-consumer, and simple-counter. After that, just follow my instructions. Let's begin.
Let's start with the integration between our Spring Boot applications and the ActiveMQ Artemis broker. In fact, ActiveMQ Artemis is the base of the commercial product offered by Red Hat called AMQ Broker. Red Hat actively develops a Spring Boot starter for ActiveMQ and an operator for running it on Kubernetes. To use the starter, you need to include the Red Hat Maven repository in your pom.xml file:
<repository>
<id>red-hat-ga</id>
<url>https://maven.repository.redhat.com/ga</url>
</repository>
Then you can include the starter in your Maven pom.xml:
<dependency>
<groupId>org.amqphub.spring</groupId>
<artifactId>amqp-10-jms-spring-boot-starter</artifactId>
<version>2.5.6</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
Then we just need to enable JMS for our application with the @EnableJms annotation:
@SpringBootApplication
@EnableJms
public class SimpleConsumer {
public static void main(String[] args) {
SpringApplication.run(SimpleConsumer.class, args);
}
}
Our application is very simple: it just receives and prints an incoming message. The receiving method must be annotated with @JmsListener. The destination field contains the name of the target queue.
@Service
public class Listener {
private static final Logger LOG = LoggerFactory
.getLogger(Listener.class);
@JmsListener(destination = "test-1")
public void processMsg(SimpleMessage message) {
LOG.info("============= Received: " + message);
}
}
Here is the class representing our message:
public class SimpleMessage implements Serializable {
private Long id;
private String source;
private String content;
public SimpleMessage() {
}
public SimpleMessage(Long id, String source, String content) {
this.id = id;
this.source = source;
this.content = content;
}
// ... GETTERS AND SETTERS
@Override
public String toString() {
return "SimpleMessage{" +
"id=" + id +
", source='" + source + '\'' +
", content='" + content + '\'' +
'}';
}
}
Finally, we need to provide the connection settings. With the AMQP Spring Boot starter this is very simple: we only need to set the amqphub.amqp10jms.remoteUrl property. For now, we will rely on an environment variable set at the Kubernetes Deployment level.
amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}
The producer application is quite similar. Instead of an annotation for receiving messages, we use Spring's JmsTemplate to create and send messages to the target queue. The send method is exposed as the POST /producer/send HTTP endpoint.
@RestController
@RequestMapping("/producer")
public class ProducerController {
private static long id = 1;
private final JmsTemplate jmsTemplate;
@Value("${DESTINATION}")
private String destination;
public ProducerController(JmsTemplate jmsTemplate) {
this.jmsTemplate = jmsTemplate;
}
@PostMapping("/send")
public SimpleMessage send(@RequestBody SimpleMessage message) {
if (message.getId() == null) {
message.setId(id++);
}
jmsTemplate.convertAndSend(destination, message);
return message;
}
}
Our sample applications are ready. Before deploying them, we need to prepare a local Kubernetes cluster. We will deploy an ActiveMQ cluster consisting of three brokers there, so our Kubernetes cluster will also consist of three nodes. Consequently, three instances of the consumer application run on Kubernetes, connecting to the ActiveMQ brokers over the AMQP protocol. There is also a single instance of the producer application that sends messages on demand. Here is our architecture diagram.
To run a multi-node Kubernetes cluster locally, we will use Kind. We will test not only communication over the AMQP protocol but also the ActiveMQ management console over HTTP. Since ActiveMQ uses headless services to expose the web console, we have to create and configure an Ingress on Kind to access it. Let's begin.
In the first step, we will create a Kind cluster. It consists of one control plane and three workers. The configuration must be prepared correctly to run the Nginx Ingress Controller: we should add the ingress-ready label to a single worker node and expose ports 80 and 443. Here is the final version of the Kind configuration file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- role: worker
- role: worker
Now, let's create a Kind cluster by executing the following command:
$ kind create cluster --config kind-config.yaml
If your cluster was created successfully, you should see similar information:
Next, let's install the Nginx Ingress Controller. It is just a single command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Let's verify the installation:
$ kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-wbbzh 0/1 Completed 0 1m
ingress-nginx-admission-patch-ws2mv 0/1 Completed 0 1m
ingress-nginx-controller-86b6d5756c-rkbmz 1/1 Running 0 1m
Finally, we can proceed to the installation of ActiveMQ Artemis. First, let's install the required CRDs. You can find all the YAML manifests inside the operator's repository on GitHub.
$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator
The manifests with the CRDs are located in the deploy/crds directory:
$ kubectl create -f ./deploy/crds
After that, we can install the operator:
$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml
To create a cluster, we have to create an ActiveMQArtemis object. It contains the number of brokers that are part of the cluster (1). We should also set an acceptor to expose the AMQP port outside each broker pod (2). Of course, we will also expose the management console (3).
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
name: ex-aao
spec:
deploymentPlan:
size: 3 # (1)
image: placeholder
messageMigration: true
resources:
limits:
cpu: "500m"
memory: "1024Mi"
requests:
cpu: "250m"
memory: "512Mi"
acceptors: # (2)
- name: amqp
protocols: amqp
port: 5672
connectionsAllowed: 5
console: # (3)
expose: true
Once the ActiveMQArtemis object has been created, the operator starts the deployment process. It creates a StatefulSet object:
$ kubectl get statefulset
NAME READY AGE
ex-aao-ss 3/3 1m
It starts all three pods with brokers sequentially:
$ kubectl get pod -l application=ex-aao-app
NAME READY STATUS RESTARTS AGE
ex-aao-ss-0 1/1 Running 0 5m
ex-aao-ss-1 1/1 Running 0 3m
ex-aao-ss-2 1/1 Running 0 1m
Let's display the list of Services created by the operator. Each broker has a separate Service for exposing the AMQP port (ex-aao-amqp-*) and the web console (ex-aao-wconsj-*):
The operator automatically creates an Ingress object for each web console Service. We will modify them by adding different hosts. Let's say it is the one.activemq.com domain for the first broker, two.activemq.com for the second broker, and so on.
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ex-aao-wconsj-0-svc-ing <none> one.activemq.com localhost 80 1h
ex-aao-wconsj-1-svc-ing <none> two.activemq.com localhost 80 1h
ex-aao-wconsj-2-svc-ing <none> three.activemq.com localhost 80 1h
After creating the ingresses, we have to add the following line to /etc/hosts:
127.0.0.1 one.activemq.com two.activemq.com three.activemq.com
Now we can access the management console, for example for the third broker, at the following URL: http://three.activemq.com/console .
Once the broker is ready, we can define a test queue. The name of that queue is test-1.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
name: address-1
spec:
addressName: address-1
queueName: test-1
routingType: anycast
Now, let's deploy the consumer application. In the Deployment manifest, we have to set the ActiveMQ cluster connection URL. But wait... how do we connect to it? There are three brokers exposed using three separate Kubernetes Services. Fortunately, the AMQP Spring Boot starter supports this: we can set the addresses of all three brokers inside the failover section. Let's try it and see what happens.
apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-consumer
spec:
replicas: 3
selector:
matchLabels:
app: simple-consumer
template:
metadata:
labels:
app: simple-consumer
spec:
containers:
- name: simple-consumer
image: piomin/simple-consumer
env:
- name: ARTEMIS_URL
value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
resources:
limits:
memory: 256Mi
cpu: 500m
requests:
memory: 128Mi
cpu: 250m
The application is prepared for deployment with Skaffold. If you run the skaffold dev command, you will deploy and see the logs of all three instances of the consumer application. What is the result? All instances connect to the first URL from the list, as shown below.
Fortunately, there is a failover parameter that helps distribute client connections more evenly across multiple remote peers. With the failover.randomize option, the URIs are randomly shuffled before the client attempts to connect to one of them. Let's replace the ARTEMIS_URL env in the Deployment manifest with the following line:
failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true
The distribution across broker instances looks a little better. Of course, the result is random, so you may get different results.
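What failover.randomize does can be illustrated with a minimal sketch. The broker URIs below match the Service names used in this example; the shuffle here only imitates what the Qpid JMS client does internally before its first connection attempt, it is not the client's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FailoverShuffle {
    public static void main(String[] args) {
        List<String> uris = new ArrayList<>(List.of(
                "amqp://ex-aao-amqp-0-svc:5672",
                "amqp://ex-aao-amqp-1-svc:5672",
                "amqp://ex-aao-amqp-2-svc:5672"));
        // Without randomization, every consumer instance tries the
        // first URI and ends up connected to broker 0.
        System.out.println("first attempt (no randomize): " + uris.get(0));
        // With failover.randomize=true the URI list is shuffled first,
        // so different instances start with different brokers.
        Collections.shuffle(uris);
        System.out.println("first attempt (randomize):    " + uris.get(0));
    }
}
```

Since each consumer pod shuffles independently, the spread is statistical rather than guaranteed, which matches the "a little better" result observed above.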
The first way to distribute connections is through a dedicated Kubernetes Service. We don't have to rely on the services created automatically by the operator. We can create our own Service that load-balances across all available pods with brokers.
kind: Service
apiVersion: v1
metadata:
name: ex-aao-amqp-lb
spec:
ports:
- name: amqp
protocol: TCP
port: 5672
type: ClusterIP
selector:
application: ex-aao-app
Now we can give up the failover section on the client side and rely entirely on Kubernetes mechanisms.
spec:
containers:
- name: simple-consumer
image: piomin/simple-consumer
env:
- name: ARTEMIS_URL
value: amqp://ex-aao-amqp-lb:5672
This time we won't see anything interesting in the application logs, because all instances connect to the same URL. We can verify the distribution across all broker instances using, for example, the management web console. Here is the list of consumers on the first ActiveMQ instance:
Below, you will get exactly the same results for the second instance. All consumer application instances have been distributed evenly across all available brokers inside the cluster.
Now we are going to deploy the producer application. We use the same Kubernetes Service to connect to the ActiveMQ cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-producer
spec:
replicas: 3
selector:
matchLabels:
app: simple-producer
template:
metadata:
labels:
app: simple-producer
spec:
containers:
- name: simple-producer
image: piomin/simple-producer
env:
- name: ARTEMIS_URL
value: amqp://ex-aao-amqp-lb:5672
- name: DESTINATION
value: test-1
ports:
- containerPort: 8080
Since we have to call an HTTP endpoint, let's create a Service for the producer application:
apiVersion: v1
kind: Service
metadata:
name: simple-producer
spec:
type: ClusterIP
selector:
app: simple-producer
ports:
- port: 8080
Let's deploy the producer application using Skaffold with port forwarding enabled:
$ skaffold dev --port-forward
Here is the list of our Deployments:
To send a test message, just execute the following command:
$ curl http://localhost:8080/producer/send \
-d "{\"source\":\"test\",\"content\":\"Hello\"}" \
-H "Content-Type:application/json"
If you need more advanced traffic distribution between the brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override the configuration property at runtime. Here is a very simple example: after the application starts, we call an external service over HTTP, which returns the number of the next instance.
@Configuration
public class AmqpConfig {
@PostConstruct
public void init() {
RestTemplate t = new RestTemplateBuilder().build();
int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
System.setProperty("amqphub.amqp10jms.remoteUrl",
"amqp://ex-aao-amqp-" + x + "-svc:5672");
}
}
Here is the implementation of the counter application. It just increments a number and takes it modulo the number of broker instances. Of course, we could create a more advanced implementation and, for example, connect to the broker instance running on the same Kubernetes node as the application pod.
@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {
private static int c = 0;
public static void main(String[] args) {
SpringApplication.run(CounterApp.class, args);
}
@Value("${DIVIDER:0}")
int divider;
@GetMapping
public Integer count() {
if (divider > 0)
return c++ % divider;
else
return c++;
}
}
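Stripped of the Spring annotations, the counter logic above is just a modulo round-robin. A minimal standalone sketch (class and method names here are mine, not from the article) shows the sequence it hands out to successive callers:

```java
public class RoundRobinCounter {
    private int c = 0;
    private final int divider;

    public RoundRobinCounter(int divider) {
        this.divider = divider;
    }

    // Mirrors CounterApp.count(): with divider=3 it yields 0,1,2,0,1,2,...
    // and with divider=0 it just keeps incrementing.
    public int next() {
        return divider > 0 ? c++ % divider : c++;
    }

    public static void main(String[] args) {
        RoundRobinCounter counter = new RoundRobinCounter(3);
        for (int i = 0; i < 6; i++) {
            System.out.print(counter.next() + " "); // prints: 0 1 2 0 1 2
        }
        System.out.println();
    }
}
```

Each returned number is interpolated into the broker Service name (amqp://ex-aao-amqp-N-svc:5672), so successive application instances are pinned to successive brokers.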
ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you have learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. It can be managed declaratively on Kubernetes thanks to the ActiveMQ Artemis Operator, and you can easily integrate it with Spring Boot using the dedicated starter, which provides various configuration options and is actively developed by Red Hat and the community.
Link: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/
#kubernetes #springboot #java #rabbitmq
1659091080
本文將教您如何在 Kubernetes 上運行 ActiveMQ,並通過 Spring Boot 將其與您的應用程序集成。我們將使用專門的operator部署一個集群的 ActiveMQ 代理。然後我們將構建並運行兩個 Spring Boot 應用程序。第一個在多個實例中運行並從隊列接收消息,而第二個是向該隊列發送消息。為了測試 ActiveMQ 集群,我們將使用Kind。消費者應用程序使用幾種不同的模式連接到集群。我們將詳細討論這些模式。
你可以在我的博客上找到很多關於其他消息代理(如 RabbitMQ 或 Kafka)的文章。如果您想了解 Kubernetes 上的 RabbitMQ,請參閱那篇文章。要了解有關 Kafka 和 Spring Boot 集成的更多信息,您可以在此處閱讀有關 Kafka Streams 和 Spring Cloud Stream 的文章。之前我沒有寫太多關於 ActiveMQ 的文章,但它也是一個非常流行的消息代理。例如,它支持最新版本的 AMQP 協議,而 Rabbit 則是基於它們對 AMQP 0.9 的擴展。
如果您想自己嘗試一下,可以隨時查看我的源代碼。為此,您需要克隆我的 GitHub 存儲庫。然後進入messaging
目錄。您將找到三個 Spring Boot 應用程序simple-producer
:simple-consumer
和simple-counter
. 之後,您應該按照我的指示進行操作。讓我們開始。
讓我們從 Spring Boot 應用程序和 ActiveMQ Artemis 代理之間的集成開始。實際上,ActiveMQ Artemis 是 Red Hat 提供的名為AMQ Broker的商業產品的基礎。Red Hat 積極開發了一個用於 ActiveMQ 的 Spring Boot 啟動器和一個在 Kubernetes 上運行它的操作符。為了訪問 Spring Boot,您需要在pom.xml
文件中包含 Red Hat Maven 存儲庫:
<repository>
<id>red-hat-ga</id>
<url>https://maven.repository.redhat.com/ga</url>
</repository>
之後,您可以在 Maven 中包含一個啟動器pom.xml
:
<dependency>
<groupId>org.amqphub.spring</groupId>
<artifactId>amqp-10-jms-spring-boot-starter</artifactId>
<version>2.5.6</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
然後,我們只需要使用@EnableJMS
註解為我們的應用程序啟用 JMS:
@SpringBootApplication
@EnableJms
public class SimpleConsumer {
public static void main(String[] args) {
SpringApplication.run(SimpleConsumer.class, args);
}
}
我們的應用程序非常簡單。它只是接收並打印傳入的消息。接收消息的方法應該用 註釋@JmsListener
。該destination
字段包含目標隊列的名稱。
@Service
public class Listener {
private static final Logger LOG = LoggerFactory
.getLogger(Listener.class);
@JmsListener(destination = "test-1")
public void processMsg(SimpleMessage message) {
LOG.info("============= Received: " + message);
}
}
這是代表我們信息的類:
public class SimpleMessage implements Serializable {
private Long id;
private String source;
private String content;
public SimpleMessage() {
}
public SimpleMessage(Long id, String source, String content) {
this.id = id;
this.source = source;
this.content = content;
}
// ... GETTERS AND SETTERS
@Override
public String toString() {
return "SimpleMessage{" +
"id=" + id +
", source='" + source + '\'' +
", content='" + content + '\'' +
'}';
}
}
最後,我們需要設置連接配置設置。使用 AMQP Spring Boot 啟動器非常簡單。我們只需要設置屬性amqphub.amqp10jms.remoteUrl
。現在,我們將基於在 Kubernetes 級別設置的環境變量Deployment
。
amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}
生產者應用程序非常相似。我們使用 SpringJmsTemplate
生成消息並將消息發送到目標隊列,而不是用於接收消息的註解。發送消息的方法公開為 HTTPPOST /producer/send
端點。
@RestController
@RequestMapping("/producer")
public class ProducerController {
private static long id = 1;
private final JmsTemplate jmsTemplate;
@Value("${DESTINATION}")
private String destination;
public ProducerController(JmsTemplate jmsTemplate) {
this.jmsTemplate = jmsTemplate;
}
@PostMapping("/send")
public SimpleMessage send(@RequestBody SimpleMessage message) {
if (message.getId() == null) {
message.setId(id++);
}
jmsTemplate.convertAndSend(destination, message);
return message;
}
}
我們的示例應用程序已準備就緒。在部署它們之前,我們需要準備本地 Kubernetes 集群。我們將在那裡部署由三個代理組成的 ActiveMQ 集群。因此,我們的 Kubernetes 集群也將由三個節點組成。因此,在 Kubernetes 上運行了三個消費者應用程序實例。它們通過 AMQP 協議連接到 ActiveMQ 代理。還有一個生產者應用程序實例可以按需發送消息。這是我們的架構圖。
為了在本地運行多節點 Kubernetes 集群,我們將使用 Kind。我們不僅會測試通過 AMQP 協議的通信,還會通過 HTTP 公開 ActiveMQ 管理控制台。因為 ActiveMQ 使用無頭服務來公開 Web 控制台,所以我們必須在 Kind 上創建和配置 Ingress 才能訪問它。讓我們開始。
第一步,我們將創建一個 Kind 集群。它由一個控制平面和三個工作人員組成。必須正確準備配置才能運行 Nginx 入口控制器。我們應該將ingress-ready
標籤添加到單個工作節點並公開端口80
和443
. 這是 Kind 配置文件的最終版本:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- role: worker
- role: worker
現在,讓我們通過執行以下命令創建一個 Kind 集群:
$ kind create cluster --config kind-config.yaml
如果您的集群已成功創建,您應該會看到類似的信息:
之後,讓我們安裝 Nginx Ingress Controller。它只是一個命令:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
讓我們驗證安裝:
$ kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-wbbzh 0/1 Completed 0 1m
ingress-nginx-admission-patch-ws2mv 0/1 Completed 0 1m
ingress-nginx-controller-86b6d5756c-rkbmz 1/1 Running 0 1m
最後,我們可以繼續安裝 ActiveMQ Artemis。首先,讓我們安裝所需的 CRD。您可以在 GitHub 上的操作員存儲庫中找到所有 YAML 清單。
$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator
帶有 CRD 的清單位於deploy/crds
目錄中:
$ kubectl create -f ./deploy/crds
之後,我們可以安裝操作符:
$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml
為了創建集群,我們必須創建ActiveMQArtemis
對象。它包含許多作為集群(1)一部分的代理。我們還應該設置訪問器,以在每個代理 pod (2)之外公開 AMQP 端口。當然,我們也會暴露管理控制台(3)。
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
name: ex-aao
spec:
deploymentPlan:
size: 3 # (1)
image: placeholder
messageMigration: true
resources:
limits:
cpu: "500m"
memory: "1024Mi"
requests:
cpu: "250m"
memory: "512Mi"
acceptors: # (2)
- name: amqp
protocols: amqp
port: 5672
connectionsAllowed: 5
console: # (3)
expose: true
創建完成ActiveMQArtemis
後,操作員將開始部署過程。它創建StatefulSet
對象:
$ kubectl get statefulset
NAME READY AGE
ex-aao-ss 3/3 1m
它按順序使用代理啟動所有三個 pod:
$ kubectl get pod -l application=ex-aao-app
NAME READY STATUS RESTARTS AGE
ex-aao-ss-0 1/1 Running 0 5m
ex-aao-ss-1 1/1 Running 0 3m
ex-aao-ss-2 1/1 Running 0 1m
讓我們顯示Service
操作員創建的 s 列表。Service
每個代理都有一個用於公開 AMQP 端口 ( ex-aao-amqp-*
) 和 Web 控制台 ( ex-aao-wsconsj-*
):
操作員會自動為每個 Web 控制台創建 Ingress 對象Service
。我們將通過添加不同的主機來修改它們。假設這是one.activemq.com
第一個代理、two.activemq.com
第二個代理等的域。
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ex-aao-wconsj-0-svc-ing <none> one.activemq.com localhost 80 1h
ex-aao-wconsj-1-svc-ing <none> two.activemq.com localhost 80 1h
ex-aao-wconsj-2-svc-ing <none> three.activemq.com localhost 80 1h
After creating the ingresses, we have to add the following line to /etc/hosts:
127.0.0.1 one.activemq.com two.activemq.com three.activemq.com
Now we can access the management console, e.g. for the third broker under the URL http://three.activemq.com/console.
Once the broker is ready, we can define a test queue. The name of that queue is test-1.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: address-1
spec:
  addressName: address-1
  queueName: test-1
  routingType: anycast
Now, let's deploy the consumer app. In the Deployment manifest, we have to set the ActiveMQ cluster connection URL. But wait… how do we connect? The three brokers are exposed using three separate Kubernetes Services. Fortunately, the AMQP Spring Boot starter supports that: we can set the addresses of all three brokers inside the failover section. Let's try it and see what happens.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-consumer
  template:
    metadata:
      labels:
        app: simple-consumer
    spec:
      containers:
        - name: simple-consumer
          image: piomin/simple-consumer
          env:
            - name: ARTEMIS_URL
              value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
          resources:
            limits:
              memory: 256Mi
              cpu: 500m
            requests:
              memory: 128Mi
              cpu: 250m
The app is ready for deployment with Skaffold. If you run the skaffold dev command, you will deploy it and follow the logs of all three instances of the consumer app. What is the result? All instances connect to the first URL on the list.
Fortunately, there is a failover parameter that helps distribute client connections more evenly across multiple remote peers. With the failover.randomize option, the URIs are randomly shuffled before an attempt is made to connect to one of them. Let's replace the ARTEMIS_URL env in the Deployment manifest with the following line:
failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true
The distribution between broker instances looks slightly better now. Of course, the result is random, so you may get different numbers.
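Under the hood, failover.randomize simply shuffles the URI list before a connection attempt, so each client instance is likely to start with a different broker. A self-contained sketch of that behaviour (illustrative only, not the actual qpid-jms code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FailoverShuffleDemo {
    public static void main(String[] args) {
        // The same three broker URIs as in the Deployment manifest.
        List<String> uris = new ArrayList<>(List.of(
                "amqp://ex-aao-amqp-0-svc:5672",
                "amqp://ex-aao-amqp-1-svc:5672",
                "amqp://ex-aao-amqp-2-svc:5672"));
        // failover.randomize=true shuffles the list before the first connect
        // attempt, so different client instances tend to pick different brokers.
        Collections.shuffle(uris);
        System.out.println(uris.get(0)); // the broker this client would try first
    }
}
```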
The first way to distribute connections is through a dedicated Kubernetes Service. We don't have to rely on the Services created automatically by the operator. We can create our own Service that load balances across all the available broker pods.
kind: Service
apiVersion: v1
metadata:
  name: ex-aao-amqp-lb
spec:
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
  type: ClusterIP
  selector:
    application: ex-aao-app
Now we can drop the failover section on the client side and rely entirely on the Kubernetes mechanism.
spec:
  containers:
    - name: simple-consumer
      image: piomin/simple-consumer
      env:
        - name: ARTEMIS_URL
          value: amqp://ex-aao-amqp-lb:5672
This time we won't see anything interesting in the application logs, since all instances connect to the same URL. We can verify the distribution across the broker instances using, for example, the management web console. Here is the list of consumers on the first ActiveMQ instance; you will get exactly the same result for the second instance. All consumer app instances have been distributed evenly across the available brokers within the cluster.
Now we will deploy the producer app. We use the same Kubernetes Service to connect to the ActiveMQ cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
            - name: DESTINATION
              value: test-1
          ports:
            - containerPort: 8080
Since we have to call its HTTP endpoint, let's create a Service for the producer app:
apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  type: ClusterIP
  selector:
    app: simple-producer
  ports:
    - port: 8080
Let's deploy the producer app using Skaffold with port forwarding enabled:
$ skaffold dev --port-forward
Here is the list of our Deployments.
In order to send a test message, just execute the following command (the endpoint echoes the message back, assigning an id if one wasn't provided):
$ curl http://localhost:8080/producer/send \
-d "{\"source\":\"test\",\"content\":\"Hello\"}" \
-H "Content-Type:application/json"
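The id field in the response follows from the controller shown in the Spring Boot section: an id is generated only when the request omits one. A standalone sketch of just that logic (hypothetical class, not part of the repo):

```java
public class IdAssignDemo {
    private static long id = 1;

    // Mirrors ProducerController.send(): assign the next id only when none is given.
    static Long assignId(Long requestedId) {
        return (requestedId == null) ? id++ : requestedId;
    }

    public static void main(String[] args) {
        System.out.println(assignId(null)); // first auto-assigned id
        System.out.println(assignId(42L));  // an explicit id is kept as-is
        System.out.println(assignId(null)); // the counter continues where it left off
    }
}
```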
If you need more advanced traffic distribution between the brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override the configuration property at runtime. Here is a very simple example: after the app starts, we call an external service over HTTP, which returns the number of the next instance to use.
@Configuration
public class AmqpConfig {

    @PostConstruct
    public void init() {
        RestTemplate t = new RestTemplateBuilder().build();
        int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
        System.setProperty("amqphub.amqp10jms.remoteUrl",
                "amqp://ex-aao-amqp-" + x + "-svc:5672");
    }

}
Here is the implementation of the counter app. It simply increments a number and takes it modulo the number of broker instances. Of course, we could build a more advanced implementation that, for example, returns a connection to the broker instance running on the same Kubernetes node as the application pod.
@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {

    private static int c = 0;

    public static void main(String[] args) {
        SpringApplication.run(CounterApp.class, args);
    }

    @Value("${DIVIDER:0}")
    int divider;

    @GetMapping
    public Integer count() {
        if (divider > 0)
            return c++ % divider;
        else
            return c++;
    }

}
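The rotation this produces is easy to check in isolation. Assuming DIVIDER=3 (one slot per broker Service), successive calls cycle through the broker indices (a standalone sketch, not the original service):

```java
public class CounterDemo {
    public static void main(String[] args) {
        int divider = 3; // number of broker instances, as in the DIVIDER env var
        StringBuilder out = new StringBuilder();
        // Six consecutive calls rotate through broker indices 0, 1, 2 twice,
        // which is what spreads clients round-robin across the broker Services.
        for (int c = 0; c < 6; c++) {
            out.append(c % divider);
            if (c < 5) out.append(' ');
        }
        System.out.println(out);
    }
}
```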
ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. Thanks to the ActiveMQ Artemis Operator, it can be managed declaratively on Kubernetes. You can also easily integrate it with Spring Boot using a dedicated starter. It provides various configuration options and is actively developed by Red Hat and the community.
Link: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/
#kubernetes #springboot #java #rabbitmq
1659083820
This article will teach you how to run ActiveMQ on Kubernetes and integrate it with your app through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we are going to build and run two Spring Boot apps. The first of them runs in multiple instances and receives messages from the queue, while the second sends messages to that queue. In order to test the ActiveMQ cluster, we will use Kind. The consumer app connects to the cluster using several different modes. We will discuss those modes in detail.
You can find a lot of articles about other message brokers like RabbitMQ or Kafka on my blog. If you would like to read about RabbitMQ on Kubernetes, please refer to that article. In order to find out more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. Previously I didn't write much about ActiveMQ, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while Rabbit is based on its extension of AMQP 0.9.
If you would like to try it by yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. Then go to the messaging directory. There you will find three Spring Boot apps: simple-producer, simple-consumer, and simple-counter. After that, you should just follow my instructions. Let's begin.
Let's start with the integration between our Spring Boot apps and the ActiveMQ Artemis broker. In fact, ActiveMQ Artemis is the base of the commercial product provided by Red Hat called AMQ Broker. Red Hat actively develops a Spring Boot starter for ActiveMQ and an operator for running it on Kubernetes. To use the starter, you need to include the Red Hat Maven repository in your pom.xml file:
<repository>
    <id>red-hat-ga</id>
    <url>https://maven.repository.redhat.com/ga</url>
</repository>
After that, you can include the starter in your Maven pom.xml:
<dependency>
    <groupId>org.amqphub.spring</groupId>
    <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
    <version>2.5.6</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Then, we just need to enable JMS for our application with the @EnableJms annotation:
@SpringBootApplication
@EnableJms
public class SimpleConsumer {

    public static void main(String[] args) {
        SpringApplication.run(SimpleConsumer.class, args);
    }

}
Our application is very simple. It just receives and prints an incoming message. The method that receives messages needs to be annotated with @JmsListener. The destination field contains the name of the target queue.
@Service
public class Listener {

    private static final Logger LOG = LoggerFactory
            .getLogger(Listener.class);

    @JmsListener(destination = "test-1")
    public void processMsg(SimpleMessage message) {
        LOG.info("============= Received: " + message);
    }

}
Here is the class that represents our message:
public class SimpleMessage implements Serializable {

    private Long id;
    private String source;
    private String content;

    public SimpleMessage() {
    }

    public SimpleMessage(Long id, String source, String content) {
        this.id = id;
        this.source = source;
        this.content = content;
    }

    // ... GETTERS AND SETTERS

    @Override
    public String toString() {
        return "SimpleMessage{" +
                "id=" + id +
                ", source='" + source + '\'' +
                ", content='" + content + '\'' +
                '}';
    }
}
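Combining the listener's log prefix with the toString() format above, a received message appears in the consumer logs like this (a self-contained sketch that re-declares the formatting; it does not need a running broker):

```java
public class ListenerLogDemo {
    // Re-declares SimpleMessage's toString() format from the class above,
    // so the expected log line can be shown without a broker connection.
    static String format(Long id, String source, String content) {
        return "SimpleMessage{" +
                "id=" + id +
                ", source='" + source + '\'' +
                ", content='" + content + '\'' +
                '}';
    }

    public static void main(String[] args) {
        // The Listener prepends "============= Received: " to the message.
        System.out.println("============= Received: " + format(1L, "test", "Hello"));
    }
}
```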
Finally, we need to set the connection configuration. With the AMQP Spring Boot starter, it is very simple: we just need to set the amqphub.amqp10jms.remoteUrl property. For now, we will rely on an environment variable set at the Kubernetes Deployment level.
amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}
The producer app is pretty similar. Instead of the annotation for receiving messages, we use Spring's JmsTemplate to produce and send messages to the target queue. The method for sending messages is exposed as the HTTP POST /producer/send endpoint.
@RestController
@RequestMapping("/producer")
public class ProducerController {

    private static long id = 1;

    private final JmsTemplate jmsTemplate;

    @Value("${DESTINATION}")
    private String destination;

    public ProducerController(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @PostMapping("/send")
    public SimpleMessage send(@RequestBody SimpleMessage message) {
        if (message.getId() == null) {
            message.setId(id++);
        }
        jmsTemplate.convertAndSend(destination, message);
        return message;
    }
}
Our sample applications are ready. Before deploying them, we need to prepare the local Kubernetes cluster. We will deploy an ActiveMQ cluster consisting of three brokers there, so our Kubernetes cluster will also consist of three nodes. Three instances of the consumer app run on Kubernetes and connect to the ActiveMQ brokers over the AMQP protocol. There is also a single instance of the producer app that sends messages on demand. Here is the diagram of our architecture.
In order to run a multi-node Kubernetes cluster locally, we will use Kind. We will not only test communication over the AMQP protocol, but also expose the ActiveMQ management console over HTTP. Since ActiveMQ uses headless services to expose the web console, we need to create and configure an Ingress on Kind to access it. Let's begin.
In the first step, we will create a Kind cluster. It consists of a control plane and three workers. The configuration needs to be prepared properly in order to run the Nginx Ingress controller: we should add the ingress-ready label to a single worker node and expose ports 80 and 443. Here is the final version of the Kind configuration file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker
Now, let's create a Kind cluster by running the following command:
$ kind create cluster --config kind-config.yaml
If your cluster has been created successfully, you should see the confirmation output printed by Kind.
After that, let's install the Nginx Ingress controller. It is just a single command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Let's verify the installation:
$ kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-wbbzh 0/1 Completed 0 1m
ingress-nginx-admission-patch-ws2mv 0/1 Completed 0 1m
ingress-nginx-controller-86b6d5756c-rkbmz 1/1 Running 0 1m
Link: https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/
#kubernetes #springboot #java #rabbitmq