diff --git a/sitemap.xml b/sitemap.xml
index 6aa909f..e3e060c 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,38 +2,38 @@
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/01-pattern.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/02-architecture.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/03-demo.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/04-devresources.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/appendix-a.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/content-overview.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/index.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/single-page-pre.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
https://redhat-solution-patterns.github.io/solution-pattern-modernization-cdc/solution-pattern-modernization-cdc/single-page.html
-2023-02-01T10:00:44.844Z
+2023-06-06T08:54:26.999Z
diff --git a/solution-pattern-modernization-cdc/01-pattern.html b/solution-pattern-modernization-cdc/01-pattern.html
index 90f1cb2..2f48675 100644
--- a/solution-pattern-modernization-cdc/01-pattern.html
+++ b/solution-pattern-modernization-cdc/01-pattern.html
@@ -278,7 +278,7 @@

The story behind this solution pattern

Technical overview

-This solution pattern builds on top an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.
+This solution pattern builds on top of an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.

With respect to the story goals and targeted use cases, it's recommended to adopt an Enterprise Integration Pattern for data integration, more specifically the Change Data Capture (CDC) pattern.

diff --git a/solution-pattern-modernization-cdc/02-architecture.html b/solution-pattern-modernization-cdc/02-architecture.html
index 65e9db1..984a6e2 100644
--- a/solution-pattern-modernization-cdc/02-architecture.html
+++ b/solution-pattern-modernization-cdc/02-architecture.html
@@ -264,7 +264,7 @@

-One could think about changing the service to push the data not only to its own database, but also to elasticsearch. It becomes a distributed system where the core data operations are no longer handled in single transactions. Be aware: this is yet another anti-pattern, called dual write.
+One could think about changing the service to push the data not only to its own database, but also to ElasticSearch. It becomes a distributed system where the core data operations are no longer handled in single transactions. Be aware: this is yet another anti-pattern, called dual write.
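To make the hazard concrete, here is a minimal Java sketch of that dual-write approach; every class and method name in it is hypothetical, not part of this pattern's code. Because the two writes are not covered by a single transaction, a failure or crash between them leaves the database and the search index silently out of sync, which is exactly the gap that CDC closes.

    // Illustrative sketch of the dual-write anti-pattern; all names are hypothetical.
    public class DualWriteSketch {

        interface ProductRepository { void save(String productJson); }   // stands in for the legacy relational database
        interface SearchIndexClient { void index(String productJson); }  // stands in for the Elasticsearch client

        private final ProductRepository repository;
        private final SearchIndexClient searchIndex;

        DualWriteSketch(ProductRepository repository, SearchIndexClient searchIndex) {
            this.repository = repository;
            this.searchIndex = searchIndex;
        }

        void createProduct(String productJson) {
            repository.save(productJson);   // write #1: committed by the service's own database transaction
            searchIndex.index(productJson); // write #2: outside that transaction, can fail or be lost independently
            // With CDC, the service performs only write #1; Debezium captures the committed change
            // and a downstream consumer updates the search index from the change event.
        }
    }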

@@ -408,7 +408,7 @@

-Debezium streams the data over to Kafka. The event streaming solution can be hosted on-premises or in the cloud. In this implementation, we are using Red Hat Managed OpenShift Streams for Apache Kafka.
+Next, Debezium streams the data over to Kafka. The event streaming solution can be hosted on-premises or in the cloud. In this implementation, we are using AMQ Streams, Red Hat's Kubernetes-native Apache Kafka distribution.

  • An integration microservice, sales-streams, reacts to events captured by Debezium and published on three topics, corresponding to sale-change-event and lineitem-change-event.
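Purely as an illustration of what such an integration service consumes, here is a minimal Java sketch of a Kafka consumer subscribed to the Debezium change-event topics listed later in this pattern (retail.updates.public.sale and retail.updates.public.line_item). The bootstrap server, group id and class name are assumptions; this is not the actual sales-streams code.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ChangeEventConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "sales-streams-sketch");    // placeholder consumer group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // One topic per captured table, as created in the provisioning steps of this pattern.
                consumer.subscribe(List.of("retail.updates.public.sale", "retail.updates.public.line_item"));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        // Each value is a Debezium change event: its payload carries the "before" and
                        // "after" row state plus an "op" field (c=create, u=update, d=delete).
                        System.out.printf("%s -> %s%n", record.topic(), record.value());
                    }
                }
            }
        }
    }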

diff --git a/solution-pattern-modernization-cdc/03-demo.html b/solution-pattern-modernization-cdc/03-demo.html
index 1f1e3fc..541dedc 100644
--- a/solution-pattern-modernization-cdc/03-demo.html
+++ b/solution-pattern-modernization-cdc/03-demo.html
@@ -260,13 +260,7 @@

    Ansible playbook and enjoy the demo.

@@ -303,7 +297,7 @@

2.2.2. Preparing the platforms

-  • OpenShift cluster (version >= 4.9) with cluster-admin privileges.
+  • OpenShift cluster (version >= 4.12) with cluster-admin privileges.

@@ -362,22 +344,7 @@

-If you have access to rhpds, you can request and use an OpenShift 4.10 Workshop environment.
-  • Access to OpenShift Streams for Apache Kafka.
-If it's your first time using OpenShift Streams, don't worry. It's a zero-cost service for developers and everyone can try it out. You can register and order your instance at https://red.ht/TryKafka.
+If you have access to RHDP, you can request and use an OpenShift 4.12 Workshop environment.

@@ -390,165 +357,15 @@

    2.3. Provisioning the demo

-The solution's components and services can be automatically provisioned using an ansible playbook.
+The solution's components and services can be automatically provisioned using an Ansible playbook.

The following steps will guide you through setting up an instance of OpenShift Streams for Apache Kafka and its resources, plus provisioning the demo services using Ansible.

-2.3.1. Provisioning OpenShift Streams (Kafka)
-
-Before moving ahead to the steps of provisioning the services within your OpenShift cluster, first you should provision and configure your Kafka instance.
-
-If you need detailed instructions on how to provision, configure and operate your managed Kafka instance, please check the step-by-step Getting Started with OpenShift Streams for Apache Kafka guide.
-
-See below a straightforward guide to create your instance:
-
-1. Navigate to https://console.redhat.com and log in with your Red Hat Account ID;
-2. Select the Service Account menu and create a new Service Account to connect to your Kafka instance;
-   Take note of the service account id and password, you'll need both during the provisioning.
-3. Next, in the left menu, select Application and Data Services → Streams for Apache Kafka → Kafka instances;
-4. Create a new Kafka instance;
-   • Use a name of your choice. You can use the default values for creating the instance.
-5. Once your instance is ready, click on the instance and open the "Connection" tab. Take note of the following data:
-   • Bootstrap server (e.g. cdc-kafka-caah-ekucfsh--lhhsqa.bf2.kafka.rhcloud.com:443)
-   [image: kafka instance connection info]
-6. Configure the ACL for your Service Account. The Service Account should have the following permissions:
-   • read, write, create permissions for all topics
-   • read permissions for all consumer groups
-   • If you have the rhoas CLI installed, you can execute the following commands to log in to the service, select your Kafka instance and add the proper configuration, replacing srvc-acct-9999 with your service account client id:
-   If you do not use the right service account id, the deployed services will throw an authentication error.
-   rhoas login
-   rhoas kafka list
-   rhoas kafka use
-   rhoas kafka acl grant-access --producer --consumer --service-account srvc-acct-9999 --topic all --group all -y
-7. Create the following topics, all with 1 partition:
-   • retail.sale-aggregated
-   • retail.expense-event
-   • retail.updates.public.line_item
-   • retail.updates.public.sale
-   • retail.updates.public.customer
-   • retail.updates.public.product
-   [image: kafka instance topics]
-   • If you are using the rhoas CLI, you can create the topics with these commands:
-   rhoas kafka topic create --name=retail.sale-aggregated --partitions=1
-   rhoas kafka topic create --name=retail.updates.public.customer --partitions=1
-   rhoas kafka topic create --name=retail.updates.public.product --partitions=1
-   rhoas kafka topic create --name=retail.updates.public.sale --partitions=1
-   rhoas kafka topic create --name=retail.updates.public.line_item --partitions=1
-   rhoas kafka topic create --name=retail.expense-event --partitions=1

-2.3.2. Installing the demo
+2.3.1. Installing the demo

-This solution pattern offers an easy installation process through ansible automation and helm charts. To get your environment up and running, follow the steps below:
+This solution pattern offers an easy installation process through Ansible automation, Red Hat OpenShift GitOps and Helm charts. To get your environment up and running, follow the steps below:

@@ -556,35 +373,16 @@

Clone the repository below to your workstation:

-   git clone https://github.com/solution-pattern-cdc/ansible.git
+   git clone https://github.com/solution-pattern-cdc/ansible.git
+   cd ansible

-Copy the inventories/inventory.template file to inventories/inventory;
-Remember the OpenShift Streams values we took note of? It's time to use them. In the inventories/inventory file, provide the connection details for your Kafka instance:
-   • rhosak_bootstrap_server: Bootstrap server of your managed Kafka instance;
-   • rhosak_service_account_client_id: Client ID of your Service Account;
-   • rhosak_service_account_client_secret: Client Secret of your Service Account;

Run the Ansible playbook:

-   ansible-playbook -i inventories/inventory playbooks/install.yml
+   ansible-playbook playbooks/install.yml

@@ -616,7 +414,7 @@

      2.4. Accessing the services

-You can access the three services that exposes a UI through the exposed routes. Use one of the two options below to get the routes:
+You can access the three services that expose a UI through the exposed routes. Use one of the two options below to get the routes:

        @@ -624,7 +422,7 @@

-Two new functionalities are now part of the retail solution: -1. Enhanced search capabilities for products -1. Cashback wallet for customers
+Two new functionalities are now part of the retail solution:
+
+1. Enhanced search capabilities for products
+2. Cashback wallet for customers

Both solutions are built on top of an event-driven architecture, which means that all services are integrated in an orchestration where each one executes its own operations when relevant events are published in the ecosystem.

@@ -673,10 +479,10 @@

-1. Use the search service to see existing data that is available in the ElasticSearch index;
+1. Use the search service to see existing data that is available in the ElasticSearch index;
-2. Add a new product directly to the retail database (legacy), to check the ecosystem behavior;
+2. Add a new product directly to the retail database (legacy), to check the ecosystem behavior;
3. Confirm that the new product shows up in the search;

@@ -688,7 +494,7 @@

-1. Using your browser, open the search service.
+1. Using your browser, open the search service.

@@ -731,25 +537,25 @@
@@ -870,17 +672,17 @@

-PostgreSQL database used by the legacy services;
+PostgreSQL database used by the legacy services
Persistence

            @@ -967,7 +769,7 @@

[image: cashback wallet customer id]
@@ -975,7 +777,7 @@
[image: simulate purchase]
@@ -983,7 +785,7 @@
[image: simulate purchase result]
@@ -1000,7 +802,7 @@
[image: kafdrop sales aggregated messages]
@@ -1041,7 +843,7 @@

            3.3.2. Looking behind the scenes - cashback solution

-Differently than the search capability that only requires the integration layer (Retail DB → ElasticSearch), to create cashback wallets we'll need to process and enrich the data before we use it. We will also need to guarantee the synchronization between the customer data in the retail-db and the cashback-db.
+Unlike the search capability, which only requires the integration layer (Retail DB → ElasticSearch), creating cashback wallets requires processing and enriching the data before we use it. We also need to guarantee the synchronization between the customer data in the retail-db and the cashback-db.

@@ -1049,7 +851,7 @@

-…sales-streams) before we actually do the cashback operations in another service (cashback-service);
+Debezium then tracks and publishes events to two topics, one for each respective table, and one event for each tracked row that is added or updated. But notice that in order to apply the cashback calculation business logic, we keep in mind good design and architecture practices for microservices, where each microservice is supposed to do one thing, and do it well. So, the event data aggregation, processing and enrichment will be executed by one service (sales-streams) before we actually do the cashback operations in another service (cashback-service);
              Here’s another way to explain this:

@@ -1057,7 +859,7 @@
@@ -1180,7 +982,7 @@

+        from("kafka:{{kafka.expenses.topic.name}}?groupId={{kafka.cashback_processor.consumer.group}}" + (1)
            @@ -1280,7 +1082,7 @@ 

-The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (can be on a private or public cloud depending on how you provision the demo) consuming a managed OpenShift Streams for Apache Kafka. OpenShift Streams is the heart of this solution - it's a resilient and highly available Kafka instance managed by Red Hat, where all the topics reside and where all services can receive and send all events from/to.
+The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (on a private or public cloud, depending on how you provision the demo), using an Apache Kafka broker cluster running in the same OpenShift instance.

This design is only possible by designing the architecture based on the Change Data Capture pattern - which was delivered with Debezium and Kafka Connectors.

diff --git a/solution-pattern-modernization-cdc/appendix-a.html b/solution-pattern-modernization-cdc/appendix-a.html
index f2114bf..e28d0c2 100644
--- a/solution-pattern-modernization-cdc/appendix-a.html
+++ b/solution-pattern-modernization-cdc/appendix-a.html
@@ -289,7 +289,7 @@

Components configuration

-All the customization of the services is externalized using OpenShift secrets. As an example, let's check the connection information for the cashback-connector service.
+All the customization of the services is externalized using OpenShift secrets. As an example, let's check the connection information for the cashback-connector service.
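As a hedged illustration of what that externalization means at runtime (this is not the cashback-connector source, and the environment variable names are assumptions), a containerized Java service typically just reads the values that the Secret injects as environment variables:

    // Minimal sketch: read connection settings injected from an OpenShift Secret as env vars.
    public class ConnectorConfigSketch {
        public static void main(String[] args) {
            String dbUrl = envOrDefault("DATABASE_URL", "jdbc:postgresql://retail-db:5432/retail"); // assumed names
            String dbUser = envOrDefault("DATABASE_USER", "retail");
            String bootstrapServers = envOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");

            // Configuration is read at startup only: rotating the Secret and restarting the pod
            // changes the connection details without rebuilding the image.
            System.out.printf("Connecting to %s as %s, streaming to %s%n", dbUrl, dbUser, bootstrapServers);
        }

        private static String envOrDefault(String name, String fallback) {
            String value = System.getenv(name);
            return (value == null || value.isBlank()) ? fallback : value;
        }
    }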

              @@ -302,7 +302,7 @@

            @@ -336,7 +336,7 @@

Modernization boosted by event-driven architecture and enterprise integration patterns

-This solution pattern builds on top an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.
+This solution pattern builds on top of an event-driven architecture in order to support the extension of the legacy stack. The architecture includes new microservices, event streaming, event processing and search indexing tools.

With respect to the story goals and targeted use cases, it's recommended to adopt an Enterprise Integration Pattern for data integration, more specifically the Change Data Capture (CDC) pattern.

            @@ -481,7 +481,7 @@

-Debezium streams the data over to Kafka. The event streaming solution can be hosted on-premises or in the cloud. In this implementation, we are using Red Hat Managed OpenShift Streams for Apache Kafka.
+Next, Debezium streams the data over to Kafka. The event streaming solution can be hosted on-premises or in the cloud. In this implementation, we are using AMQ Streams, Red Hat's Kubernetes-native Apache Kafka distribution.

2. An integration microservice, sales-streams, reacts to events captured by Debezium and published on three topics, corresponding to sale-change-event and lineitem-change-event.

            @@ -572,7 +572,7 @@
-The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (can be on a private or public cloud depending on how you provision the demo) consuming a managed OpenShift Streams for Apache Kafka. OpenShift Streams is the heart of this solution - it's a resilient and highly available Kafka instance managed by Red Hat, where all the topics reside and where all services can receive and send all events from/to.
+The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (on a private or public cloud, depending on how you provision the demo), using an Apache Kafka broker cluster running in the same OpenShift instance.

This design is only possible by designing the architecture based on the Change Data Capture pattern - which was delivered with Debezium and Kafka Connectors.

diff --git a/solution-pattern-modernization-cdc/single-page.html b/solution-pattern-modernization-cdc/single-page.html
index dd2d686..8d985b7 100644
--- a/solution-pattern-modernization-cdc/single-page.html
+++ b/solution-pattern-modernization-cdc/single-page.html
@@ -596,7 +596,7 @@
-The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (can be on a private or public cloud depending on how you provision the demo) consuming a managed OpenShift Streams for Apache Kafka. OpenShift Streams is the heart of this solution - it's a resilient and highly available Kafka instance managed by Red Hat, where all the topics reside and where all services can receive and send all events from/to.
+The solution is built on top of a hybrid cloud model, with containerized services running on OpenShift (on a private or public cloud, depending on how you provision the demo).

This design is only possible by designing the architecture based on the Change Data Capture pattern - which was delivered with Debezium and Kafka Connectors.