-
This solution pattern builds on top an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.
+
This solution pattern builds on top of an event-driven architecture to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing, and search indexing tools.
With respect to the story goals and targeted use cases, it’s recommended to adopt an Enterprise Integration Pattern for data integration; more specifically, the Change Data Capture (CDC) pattern.
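For context on what CDC delivers downstream: each Debezium change event is an envelope carrying the row state before and after the change plus an operation code. Below is a minimal sketch of decoding such an envelope; the op/before/after fields are standard Debezium, while the sample event and the handling logic are illustrative assumptions rather than code from this solution pattern.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Minimal sketch: decoding a Debezium change-event envelope.
// The op/before/after fields are standard Debezium; the sample event
// and the handling logic are illustrative assumptions.
public class ChangeEventSketch {
    public static void main(String[] args) throws Exception {
        String event = """
            {"op": "u",
             "before": {"id": 42, "total": 10.0},
             "after":  {"id": 42, "total": 12.5}}""";

        JsonNode envelope = new ObjectMapper().readTree(event);
        String op = envelope.get("op").asText(); // c = create, u = update, d = delete
        switch (op) {
            case "c", "u" -> System.out.println("Upsert row: " + envelope.get("after"));
            case "d"      -> System.out.println("Delete row: " + envelope.get("before"));
            default       -> System.out.println("Unhandled op: " + op);
        }
    }
}
```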
diff --git a/solution-pattern-modernization-cdc/single-page-pre.html b/solution-pattern-modernization-cdc/single-page-pre.html
index 0b1d539..b5dea98 100644
--- a/solution-pattern-modernization-cdc/single-page-pre.html
+++ b/solution-pattern-modernization-cdc/single-page-pre.html
@@ -286,7 +286,7 @@
-
One could think about changing the service to push the data not only to its own database, but also to elasticsearch. It becomes a distributed system where the core data operations are no longer handled in single transactions. Be aware: this is yet another anti-pattern, called dual write.
+
One could think about changing the service to push the data not only to its own database, but also to Elasticsearch. The system then becomes distributed, with core data operations no longer handled in single transactions. Be aware: this is yet another anti-pattern, called dual write.
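To make the failure mode concrete, here is a hypothetical dual-write sketch; the repository and search-client interfaces are invented for illustration. If the index call fails, or the process crashes between the two calls, the database and the index silently diverge, and there is no single transaction to roll back.

```java
// Hypothetical sketch of the dual-write anti-pattern; the repository and
// search-client interfaces below are illustrative assumptions.
public class OrderService {
    interface OrderRepository { void save(Order order); }     // relational database
    interface SearchIndexClient { void index(Order order); }  // Elasticsearch
    record Order(long id, double total) {}

    private final OrderRepository repository;
    private final SearchIndexClient searchIndex;

    OrderService(OrderRepository repository, SearchIndexClient searchIndex) {
        this.repository = repository;
        this.searchIndex = searchIndex;
    }

    void createOrder(Order order) {
        repository.save(order);   // commits in the database transaction
        searchIndex.index(order); // separate remote call: a failure or crash here
                                  // leaves the two stores inconsistent
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(
                order -> System.out.println("DB commit: " + order),
                order -> { throw new RuntimeException("search index is down"); });
        service.createOrder(new Order(1L, 99.90)); // the database keeps the row,
                                                   // the index never sees it
    }
}
```

CDC sidesteps this by treating the database’s own transaction log as the single source of truth and deriving the index update from it.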
@@ -336,7 +336,7 @@ <
-
This solution pattern builds on top an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.
+
This solution pattern builds on top of an event-driven architecture to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing, and search indexing tools.
With respect to the story goals and targeted use cases, it’s recommended to adopt an Enterprise Integration Pattern for data integration; more specifically, the Change Data Capture (CDC) pattern.
@@ -481,7 +481,7 @@
-
Debezium streams the data them over to Kafka. The event streaming solution can be hosted on-premise or on the cloud. In this implementation, we are using Red Hat Managed OpenShift Streams for Apache Kafka.
+
Next, Debezium streams the data over to Kafka. The event streaming solution can be hosted on premises or in the cloud. In this implementation, we are using AMQ Streams, Red Hat’s Kubernetes-native Apache Kafka distribution.
An integration microservice, sales-streams, reacts to the events captured by Debezium and published on three topics, among them sale-change-event and lineitem-change-event.
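As a rough illustration of the consuming side, a plain Kafka consumer subscribed to these change-event topics could look like the sketch below. The bootstrap address and group id are assumptions, only the two topics named above are subscribed to, and the actual sales-streams service may well be built on a higher-level framework.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Rough sketch of a consumer for the Debezium change-event topics.
// Bootstrap address and group id are assumptions.
public class SalesStreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sales-streams");          // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Only the two topics named in the text; the third is not named there.
            consumer.subscribe(List.of("sale-change-event", "lineitem-change-event"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // React to the captured change, e.g. aggregate it or re-index it.
                    System.out.printf("topic=%s key=%s value=%s%n",
                            record.topic(), record.key(), record.value());
                }
            }
        }
    }
}
```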
@@ -572,7 +572,7 @@ Summary
-
The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (can be on a private or public cloud depending on how you provision the demo) consuming a managed OpenShift Streams for Apache Kafka. OpenShift streams is heart of this solution - it’s a resilient and highly available Kafka instance managed by Red Hat, where all the topics reside and where all services can receive and send all events from/to.
+
The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (which can be on a private or public cloud, depending on how you provision the demo), using an Apache Kafka broker cluster running in the same OpenShift instance.
This design is only possible by basing the architecture on the Change Data Capture pattern, delivered here with Debezium and Kafka connectors.
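As a hedged sketch of how such a connector could be registered: Kafka Connect exposes a REST API that accepts a JSON connector definition via POST /connectors. A PostgreSQL connector is shown, though the demo’s source database may differ, and every hostname, credential, and table name below is an illustrative assumption, not the demo’s actual configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch: registering a Debezium connector through the Kafka
// Connect REST API. Hostnames, credentials, and table names are assumptions.
public class RegisterConnectorSketch {
    public static void main(String[] args) throws Exception {
        String body = """
            {
              "name": "sales-db-connector",
              "config": {
                "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
                "database.hostname": "legacy-db",
                "database.port": "5432",
                "database.user": "cdc",
                "database.password": "secret",
                "database.dbname": "sales",
                "topic.prefix": "sales",
                "table.include.list": "public.sale,public.line_item"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://kafka-connect:8083/connectors")) // assumption
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```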
diff --git a/solution-pattern-modernization-cdc/single-page.html b/solution-pattern-modernization-cdc/single-page.html
index dd2d686..8d985b7 100644
--- a/solution-pattern-modernization-cdc/single-page.html
+++ b/solution-pattern-modernization-cdc/single-page.html
@@ -596,7 +596,7 @@
Summary
-
The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (can be on a private or public cloud depending on how you provision the demo) consuming a managed OpenShift Streams for Apache Kafka. OpenShift streams is heart of this solution - it’s a resilient and highly available Kafka instance managed by Red Hat, where all the topics reside and where all services can receive and send all events from/to.
+
The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (which can be on a private or public cloud, depending on how you provision the demo).
This design is only possible by basing the architecture on the Change Data Capture pattern, delivered here with Debezium and Kafka connectors.