🎥 YouTube video: https://www.youtube.com/watch?v=nXyKM-xds2M
✍️ blog post: https://blog.kie.org/2022/03/content-based-routing-with-quarkus-and-kogito.html
This demo is a further iteration of rule-based Kafka message routing, specifically for HL7 healthcare messages, using Quarkus, Kogito, the Drools DMN Engine, Apache Camel and AtlasMap.
This demo is inspired by the Intelligent message routing for healthcare use case from Red Hat's iDaaS project, focusing on the integration of Apache Camel and the Drools DMN Engine. For the previous demo, you can reference this demo recording and some additional technical details in this blog post. The previous demo code is accessible here.
This project now uses Quarkus, the Supersonic Subatomic Java Framework.
- Quarkus
- Apache Camel
- AtlasMap
- Kogito
- DMN standard for defining the business rules logic
- Maven 3.8.3 or later
- Java 11 or later
- Docker installation for running integration tests and manual demo
Showcase the ability to route data in real time and to create new topics on demand, helping facilitate information processing and address business needs in real time.
- HL7 `ADT` messages from the `MMS` application are routed to the `MMSAllADT` Kafka topic
- HL7 `ADT` messages from the `MMS` application for `A03` "Patient Discharge" are also routed to the `MMSDischarges` Kafka topic
- etc.
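The routing rules above can be sketched in plain Java. This is only an illustration (the class and method names are hypothetical): in the demo, the actual decision logic is defined in DMN and evaluated by the Kogito/Drools engine.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the routing rules listed above: every ADT message
// from the MMS application goes to MMSAllADT; A03 "Patient Discharge"
// messages additionally go to MMSDischarges.
public class RoutingRules {

    static List<String> route(String sendingApp, String messageType, String triggerEvent) {
        List<String> topics = new ArrayList<>();
        if ("MMS".equals(sendingApp) && "ADT".equals(messageType)) {
            topics.add("MMSAllADT");
            if ("A03".equals(triggerEvent)) {
                topics.add("MMSDischarges");
            }
        }
        return topics;
    }

    public static void main(String[] args) {
        System.out.println(route("MMS", "ADT", "A03")); // [MMSAllADT, MMSDischarges]
        System.out.println(route("MMS", "ADT", "A02")); // [MMSAllADT]
    }
}
```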
Content-based routing often needs to account for data mapping and the extraction of relevant fields from the original message; this is achieved with AtlasMap:
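The kind of field extraction AtlasMap performs here can be approximated with plain string handling on the pipe-delimited HL7 v2 segments. This sketch (class and method names are hypothetical, not part of the demo code) pulls the sending application (MSH-3) and the message type (MSH-9) out of an MSH segment:

```java
// Illustrative sketch of HL7 v2 field extraction on the MSH segment.
public class Hl7Fields {

    // MSH-1 is the field separator itself, so MSH field n maps to parts[n - 1].
    static String field(String mshSegment, int index) {
        String[] parts = mshSegment.split("\\|");
        return parts[index - 1];
    }

    public static void main(String[] args) {
        String msh = "MSH|^~\\&|MMS|DH|LABADT|DH|201301011226||ADT^A03|HL7MSG00001|P|2.3|";
        String sendingApp = field(msh, 3);             // MSH-3: "MMS"
        String[] msgType = field(msh, 9).split("\\^"); // MSH-9: "ADT^A03"
        System.out.println(sendingApp + " " + msgType[0] + " " + msgType[1]); // MMS ADT A03
    }
}
```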
Routing rules are implemented with DMN:
An integration test during the Maven build phase exercises the same Camel route used for intelligent routing of HL7 messages.
You can run your application in dev mode that enables live coding using:
./mvnw compile quarkus:dev
NOTE: Quarkus now ships with a Dev UI, which is available in dev mode only at http://localhost:8080/q/dev/.
The application can be packaged using:
./mvnw package
It produces the `quarkus-run.jar` file in the `target/quarkus-app/` directory. Be aware that it's not an über-jar, as the dependencies are copied into the `target/quarkus-app/lib/` directory.
If you want to build an über-jar, execute the following command:
./mvnw package -Dquarkus.package.type=uber-jar
The application will be runnable using `java -jar target/quarkus-app/quarkus-run.jar`.
The demo can be run locally with the successfully built solution artifacts, following the steps below.
Launch a simple Kafka cluster, where the messages will be routed to the different Kafka topics:
docker-compose up -d
Launch this Quarkus application:
java -jar target/quarkus-app/quarkus-run.jar
Sending an HL7 message for `A03` will show it being routed to both the `MMSAllADT` and `MMSDischarges` Kafka topics:
curl --location --request POST 'localhost:8080/hl7v2/new' \
--header 'Content-Type: text/plain' \
--data-raw 'MSH|^~\&|MMS|DH|LABADT|DH|201301011226||ADT^A03|HL7MSG00001|P|2.3|
EVN|A01|201301011223||
PID|||MRN12345^5^M11||APPLESEED^JOHN^A^III||19710101|M||C|1 DATICA STREET^^MADISON^WI^53005-1020|GL|(414)379-1212|(414)271-3434||S||MRN12345001^2^M10|123456789|987654^NC|
NK1|1|APPLESEED^BARBARA^J|WIFE||||||NK^NEXT OF KIN
PV1|1|I|2000^2012^01||||004777^GOOD^SIDNEY^J.|||SUR||||ADM|A0|'
Sending an HL7 message for `A02` will show it being routed to the `MMSAllADT` Kafka topic only:
curl --location --request POST 'localhost:8080/hl7v2/new' \
--header 'Content-Type: text/plain' \
--data-raw 'MSH|^~\&|MMS|1|||20050110114442||ADT^A02|59910287|P|2.3|||
EVN|A02|20050110114442|||||
PID|1||10006579^^^1^MRN^1||DUCK^DONALD^D||19241010|M||1|111^DUCK ST^^FOWL^CA^999990000^^M|1|8885551212|8885551212|1|2||40007716^^^AccMgr^VN^1|123121234|||||||||||NO
PV1|1|I|IN1^214^1^1^^^S|3||PREOP^101^|37^DISNEY^WALT^^^^^^AccMgr^^^^CI|||01||||1|||37^DISNEY^WALT^^^^^^AccMgr^^^^CI|2|40007716^^^AccMgr^VN|4|||||||||||||||||||1||I|||20050110045253||||||'
Stop the demo by quitting the Java application, then stop the simple Kafka cluster from the root of this project using:
docker-compose down
Configure a local `.env` file in this root directory with:
CLIENT_ID=srvc-acct-...
CLIENT_SECRET=...
TOKEN_URL=https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token
KAFKA_BROKERCONNECT= ... .kafka.rhcloud.com:443
To run a Dockerized Kafdrop instance connecting to the managed Kafka, configure a local `kafdrop.properties.env` file with:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="srvc-acct-..." password="...";