From ecc1da10c791dfd48e69dead464e7cccd2e5b083 Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Tue, 19 Sep 2023 13:40:23 +0100 Subject: [PATCH 1/7] Renaming main files for the med-diagnosis pattern for modular docs --- .../{cluster-sizing.adoc => med-cluster-sizing.adoc} | 0 .../{getting-started.adoc => med-getting-started.adoc} | 0 ...as-for-customization.adoc => med-ideas-for-customization.adoc} | 0 .../{troubleshooting.adoc => med-troubleshooting.adoc} | 0 4 files changed, 0 insertions(+), 0 deletions(-) rename content/patterns/medical-diagnosis/{cluster-sizing.adoc => med-cluster-sizing.adoc} (100%) rename content/patterns/medical-diagnosis/{getting-started.adoc => med-getting-started.adoc} (100%) rename content/patterns/medical-diagnosis/{ideas-for-customization.adoc => med-ideas-for-customization.adoc} (100%) rename content/patterns/medical-diagnosis/{troubleshooting.adoc => med-troubleshooting.adoc} (100%) diff --git a/content/patterns/medical-diagnosis/cluster-sizing.adoc b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc similarity index 100% rename from content/patterns/medical-diagnosis/cluster-sizing.adoc rename to content/patterns/medical-diagnosis/med-cluster-sizing.adoc diff --git a/content/patterns/medical-diagnosis/getting-started.adoc b/content/patterns/medical-diagnosis/med-getting-started.adoc similarity index 100% rename from content/patterns/medical-diagnosis/getting-started.adoc rename to content/patterns/medical-diagnosis/med-getting-started.adoc diff --git a/content/patterns/medical-diagnosis/ideas-for-customization.adoc b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc similarity index 100% rename from content/patterns/medical-diagnosis/ideas-for-customization.adoc rename to content/patterns/medical-diagnosis/med-ideas-for-customization.adoc diff --git a/content/patterns/medical-diagnosis/troubleshooting.adoc b/content/patterns/medical-diagnosis/med-troubleshooting.adoc similarity index 100% rename from 
content/patterns/medical-diagnosis/troubleshooting.adoc rename to content/patterns/medical-diagnosis/med-troubleshooting.adoc From 01a488df56ff39e00df099547b5d121f6cbab07d Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Wed, 20 Sep 2023 22:37:39 +0100 Subject: [PATCH 2/7] created modules for all main file, except for getting started --- .../patterns/medical-diagnosis/_index.adoc | 78 +------ .../medical-diagnosis/med-cluster-sizing.adoc | 94 +-------- .../med-ideas-for-customization.adoc | 23 +-- .../med-troubleshooting.adoc | 190 +----------------- modules/med-about-cluster-sizing.adoc | 41 ++++ modules/med-about-customizing-pattern.adoc | 18 ++ modules/med-about-makefile.adoc | 31 +++ modules/med-about-medical-diagnosis.adoc | 46 +++++ modules/med-architecture-schema.adoc | 29 +++ modules/med-ocp-cluster-sizing.adoc | 47 +++++ modules/med-troubleshooting-deployment.adoc | 164 +++++++++++++++ 11 files changed, 386 insertions(+), 375 deletions(-) create mode 100644 modules/med-about-cluster-sizing.adoc create mode 100644 modules/med-about-customizing-pattern.adoc create mode 100644 modules/med-about-makefile.adoc create mode 100644 modules/med-about-medical-diagnosis.adoc create mode 100644 modules/med-architecture-schema.adoc create mode 100644 modules/med-ocp-cluster-sizing.adoc create mode 100644 modules/med-troubleshooting-deployment.adoc diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index ad77c1b49..d3f77eead 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -22,84 +22,14 @@ ci: medicaldiag :toc: :imagesdir: /images :_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images -[id="about-med-diag-pattern"] -= About the {med-pattern} - -Background:: - -This validated pattern is based on a demo implementation of an automated data pipeline for chest 
X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs. - -This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment. - -Workflow:: - -* Ingest chest X-rays from a simulated X-ray machine and puts them into an `objectStore` based on Ceph. -* The `objectStore` sends a notification to a Kafka topic. -* A KNative Eventing listener to the topic triggers a KNative Serving function. -* An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images. -* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, anonymized, and full metrics collected from Prometheus. - -This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video]. - -image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"] - -//[NOTE] -//==== -//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio]. -//==== - -[id="about-solution-med"] -== About the solution elements - -The solution aids the understanding of the following: -* How to use a GitOps approach to keep in control of configuration and operations. -* How to deploy AI/ML technologies for medical diagnosis using GitOps. 
- -The {med-pattern} uses the following products and technologies: - -* {rh-ocp} for container orchestration -* {rh-gitops}, a GitOps continuous delivery (CD) solution -* {rh-amq-first}, an event streaming platform based on the Apache Kafka -* {rh-serverless-first} for event-driven applications -* {rh-ocp-data-first} for cloud native storage capabilities -* {grafana-op} to manage and share Grafana dashboards, data sources, and so on -* S3 storage - -[id="about-architecture-med"] -== About the architecture - -[IMPORTANT] -==== -Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release. -==== - -image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"] - -Components are running on OpenShift either at the data center, at the medical facility, or public cloud running OpenShift. - -[id="about-physical-schema-med"] -=== About the physical schema - -The following diagram shows the components that are deployed with the various networks that connect them. - -image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"] - -The following diagram shows the components that are deployed with the the data flows and API calls between them. 
- -image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"] +include::modules/comm-attributes.adoc[] -== Recorded demo +include::modules/med-about-medical-diagnosis.adoc[leveloffset=+1] -link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]] +include::modules/med-architecture-schema.adoc[leveloffset=+1] [id="next-steps_med-diag-index"] == Next steps -* Getting started link:getting-started[Deploy the Pattern] -//We have relevant links on the patterns page +* Getting started link:getting-started[Deploy the Pattern] \ No newline at end of file diff --git a/content/patterns/medical-diagnosis/med-cluster-sizing.adoc b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc index 7f4c9584b..b70524ee6 100644 --- a/content/patterns/medical-diagnosis/med-cluster-sizing.adoc +++ b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc @@ -7,97 +7,9 @@ aliases: /medical-diagnosis/cluster-sizing/ :toc: :imagesdir: /images :_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images -[id="about-openshift-cluster-sizing-med"] -= About OpenShift cluster sizing for the {med-pattern} - -To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster: - -|=== -| Name | Kind | Namespace | Description - -| Medical Diagnosis Hub -| Application -| medical-diagnosis-hub -| Hub GitOps management - -| {rh-gitops} -| Operator -| openshift-operators -| {rh-gitops-short} - -| {rh-ocp-data-first} -| Operator -| openshift-storage -| Cloud Native storage solution - -| {rh-amq-streams} -| Operator -| openshift-operators -| AMQ Streams provides Apache Kafka access - -| {rh-serverless-first} -| Operator -| - knative-serving (knative-eventing) -| Provides access to Knative Serving and Eventing functions -|=== - -//AI: Removed the following 
since we have CI status linked on the patterns page -//[id="tested-platforms-cluster-sizing"] -//== Tested Platforms -: Removed the following in favor of the link to OCP docs -//[id="general-openshift-minimum-requirements-cluster-sizing"] -//== General OpenShift Minimum Requirements -The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. - -For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation]. - -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images - -[id="med-openshift-cluster-size"] -=== About {med-pattern} OpenShift cluster size - -The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. - -For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators. -//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure) -[NOTE] -==== -You might want to add resources when more developers are working on building their applications. -==== - -The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes. 
- -[cols="^,^,^,^"] -|=== -| Node type | Number of nodes | Cloud provider | Instance type - -| Control plane and worker -| 3 and 3 -| Google Cloud -| n1-standard-8 - -| Control plane and worker -| 3 and 3 -| Amazon Cloud Services -| m5.2xlarge +include::modules/comm-attributes.adoc[] -| Control plane and worker -| 3 and 3 -| Microsoft Azure -| Standard_D8s_v3 -|=== +include::modules/med-about-cluster-sizing.adoc[leveloffset=+1] -[role="_additional-resources"] -.Additional resource -* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types] -* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure] -* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide] -//Removed section for instance types as we did for MCG +include::modules/med-ocp-cluster-sizing.adoc[leveloffset=+1] diff --git a/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc index fba7350e2..88c73a08f 100644 --- a/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc +++ b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc @@ -8,25 +8,4 @@ aliases: /medical-diagnosis/ideas-for-customization/ :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images - -[id="about-customizing-pattern-med"] -= About customizing the pattern {med-pattern} - -One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? 
Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA? -The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}. - -[id="understanding-different-ways-to-use-med-pattern"] -== Understanding different ways to use the {med-pattern} - -. The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. -. The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints. -. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. -. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. 
An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. - -These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application. - -//We have relevant links on the patterns page -//AI: Why does this point to AEG though? https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs] +include::modules/med-about-customizing-pattern.adoc[leveloffset=+1] diff --git a/content/patterns/medical-diagnosis/med-troubleshooting.adoc b/content/patterns/medical-diagnosis/med-troubleshooting.adoc index a7b59e0c4..ee4e8ee27 100644 --- a/content/patterns/medical-diagnosis/med-troubleshooting.adoc +++ b/content/patterns/medical-diagnosis/med-troubleshooting.adoc @@ -9,192 +9,6 @@ aliases: /medical-diagnosis/troubleshooting/ :_content-type: REFERENCE include::modules/comm-attributes.adoc[] -[id="med-understanding-the-makefile-troubleshooting"] -=== Understanding the Makefile +include::modules/med-about-makefile.adoc[leveloffset=+1] -The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when needing to make a change to a config within the pattern by running a `make upgrade` which allows us to refresh the bootstrap resources without having to tear down the pattern or cluster. - -[id="about-make-install-make-deploy-troubleshooting"] -==== About the make install and make deploy commands - -Running `make install` within the pattern application triggers a `make deploy` from `/common` directory. This initializes the `common` components of the pattern framework and install a helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops} are deployed. 
- -After components from the `common` directory are installed, the remaining tasks within the `make install` target run. -//AI: Check which are these other tasks - -[id="make-vault-init-make-load-secrets-troubleshooting"] -==== About the make vault-init and make load-secrets commands - -The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../getting-started/#preparing-for-deployment[Getting Started]. - -If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../getting-started/#preparing-for-deployment[Getting Started]. - -[id="make-bootstrap-make-upgrade-troubleshooting"] -==== About the make bootstrap and make upgrade commands -The `make bootstrap` command is the target used for deploying the application specific components of the pattern. It is the final step in the initial `make install` target. You might want to consider running the `make upgrade` command instead of the `make bootstrap` command directly. - -Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For instance, if a value was missed and the chart was not rendered correctly, executing `make upgrade` command after fixing the value would be necessary. - -You might want to review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile` respectively. - -[id="troubleshooting-the-pattern-deployment-troubleshooting"] -=== Troubleshooting the Pattern Deployment - -Occasionally the pattern will encounter issues during the deployment. 
This can happen for any number of reasons, but most often it is because of either a change within the operator itself or a change in the {olm-first}, which determines which operators are available in the operator catalog. Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation. To ensure that the operator is in the catalog, run the following command: - -[source,terminal] ---- -$ oc get packagemanifests | grep ---- - -When an issue occurs with the operator itself, you can verify the status of the `subscription` and make sure that there are no warnings. An additional option is to log in to the OpenShift Console, click *Operators*, and check the status of the operator. - -Other issues encountered could be with a specific application within the pattern misbehaving. Most of the pattern is deployed into the `xraylab-1` namespace. Other components, like ODF, are deployed into `openshift-storage`, and the OpenShift Serverless Operators are deployed into the `knative-serving` and `knative-eventing` namespaces. - -[NOTE] -==== -Use the Grafana dashboard to assist with debugging and identifying the issue. -==== - -''' -Problem:: No information is being processed in the dashboard. - -Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator is, by design, *scaled to 0*: -+ -[source,terminal] ----- -$ oc scale -n xraylab-1 dc/image-generator --replicas=1 ----- -+ -Alternatively, complete the following steps: - -. Navigate to the {rh-ocp} web console, and select *Workloads → DeploymentConfigs*. -. Select `image-generator` and scale the pod to 1 or more. -//AI: Needs review - -''' -Problem:: When you browse to the *xraylab* Grafana dashboard, there are no images in the right pane, only a security warning. - -Solution:: The certificates for the OpenShift cluster are untrusted by your system. 
The easiest way to solve this is to open a browser and go to the s3-rgw route (oc get route -n openshift-storage), then acknowledge and accept the security warning. - -''' -Problem:: In the dashboard interface, no metrics data is available. - -Solution:: There is likely something wrong with the Prometheus Data Source for the grafana dashboard. You can check the status of the data source by executing the following: -+ -[source,terminal] ----- -$ oc get grafanadatasources -n xraylab-1 ----- -+ -Ensure that the Prometheus data source exists and that the status is available. This could potentially be the token from the service account, for example, grafana-serviceaccount, that is provided to the data source as a bearer token. - -''' -Problem:: The dashboard is showing red in the corners of the dashboard panes. -+ -image::medical-edge/medDiag-noDB.png[link="/images/medical-edge/medDiag-noDB.png"] - -Solution:: This is most likely due to the *xraylab* database not being available or misconfigured. Please check the database and ensure that it is functioning properly. - -. Ensure that the database is populated with the correct tables: -+ -[source,terminal] ----- -$ oc exec -it xraylabdb-1- bash -$ mysql -u root - -USE xraylabdb; - -SHOW tables; ----- -+ -.Example output -[source,terminal] ----- - -Welcome to the MariaDB monitor. Commands end with ; or \g. -Your MariaDB connection id is 75 -Server version: 10.3.32-MariaDB MariaDB Server - -Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. - -Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. - -MariaDB [(none)]> USE xraylabdb; -Database changed -MariaDB [xraylabdb]> show tables; -+---------------------+ -| Tables_in_xraylabdb | -+---------------------+ -| images_anonymized | -| images_processed | -| images_uploaded | -+---------------------+ -3 rows in set (0.000 sec) ----- -+ -. 
Verify that the password set in the `values-secret.yaml` file is working: -+ -[source,terminal] ----- -$ oc exec -it xraylabdb-1- bash -$ mysql -u xraylab -D xraylabdb -h xraylabdb -p - ----- -+ -If you are able to log in successfully, then your password has been configured correctly in Vault and the External Secrets Operator, and has been mounted to the database correctly. - -''' -Problem:: The image-generator is scaled correctly, but the dashboard is not updating. - -Solution:: The serverless eventing function might not be able to fetch the notifications from ODF and therefore is not triggering the knative-serving function to scale up. You may want to check the logs of the `rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-` pod in the `openshift-storage` namespace. -+ -[source,terminal] ----- -$ oc logs -n openshift-storage -f -c rgw ----- -+ -You should see the `PUT` statement with a status code of `200`. -+ -Ensure that the `kafkasource`, `kservice`, and `kafkatopic` resources are created: -+ -[source,terminal] ----- -$ oc get -n xraylab-1 kafkasource ----- -+ -.Example output -[source,terminal] ----- -NAME TOPICS BOOTSTRAPSERVERS READY REASON AGE -xray-images ["xray-images"] ["xray-cluster-kafka-bootstrap.xraylab-1.svc:9092"] True 23m ----- -+ -[source,terminal] ----- -$ oc get -n xraylab-1 kservice ----- -+ -.Example output -[source,terminal] ----- -NAME URL LATESTCREATED LATESTREADY READY REASON -risk-assessment https://risk-assessment-xraylab-1.apps. 
risk-assessment-00001 risk-assessment-00001 True ----- -+ -[source,terminal] ----- -$ oc get -n xraylab-1 kafkatopics ----- -+ -.Example output -[source,terminal] ----- -NAME CLUSTER PARTITIONS REPLICATION FACTOR READY -consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a xray-cluster 50 1 True -strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 xray-cluster 1 3 True -strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b xray-cluster 1 1 True -xray-images xray-cluster 1 1 True ----- - -''' +include::modules/med-troubleshooting-deployment.adoc[leveloffset=+1] \ No newline at end of file diff --git a/modules/med-about-cluster-sizing.adoc b/modules/med-about-cluster-sizing.adoc new file mode 100644 index 000000000..722835156 --- /dev/null +++ b/modules/med-about-cluster-sizing.adoc @@ -0,0 +1,41 @@ + +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="about-openshift-cluster-sizing-med"] += About OpenShift cluster sizing for the {med-pattern} + +To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster: + +|=== +| Name | Kind | Namespace | Description + +| Medical Diagnosis Hub +| Application +| medical-diagnosis-hub +| Hub GitOps management + +| {rh-gitops} +| Operator +| openshift-operators +| {rh-gitops-short} + +| {rh-ocp-data-first} +| Operator +| openshift-storage +| Cloud Native storage solution + +| {rh-amq-streams} +| Operator +| openshift-operators +| AMQ Streams provides Apache Kafka access + +| {rh-serverless-first} +| Operator +| - knative-serving (knative-eventing) +| Provides access to Knative Serving and Eventing functions +|=== + +//AI: Removed the following since we have CI status linked on the patterns page +//[id="tested-platforms-cluster-sizing"] +//== Tested Platforms \ No newline at end of file diff --git a/modules/med-about-customizing-pattern.adoc 
b/modules/med-about-customizing-pattern.adoc new file mode 100644 index 000000000..e0b1e14a9 --- /dev/null +++ b/modules/med-about-customizing-pattern.adoc @@ -0,0 +1,18 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="about-customizing-pattern-med"] += About customizing the {med-pattern} + +One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment: how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by government privacy laws or HIPAA? +The {med-pattern} can meet either of these requirements by using {serverless-short} and {ocp-data-short}. + +[id="understanding-different-ways-to-use-med-pattern"] +== Understanding different ways to use the {med-pattern} + +. The {med-pattern} scans X-ray images to determine the probability that a patient has pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan CT images for anomalies in the body such as sepsis, cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. +. The Transportation Security Agency (TSA) could use the {med-pattern} to enhance its existing scanning capabilities and detect, with higher probability, restricted items carried on a person or hidden away in a piece of luggage. 
With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but not necessarily metallic, such as a firearm or a knife. The model is also trained to dismiss those items that are authorized, ultimately saving passengers from being stopped and searched at security checkpoints. +. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. +. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, captured under different types of light, could be analyzed to help expose defects before packaging and distribution. A defective item could then be routed to a defect area. + +These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application. diff --git a/modules/med-about-makefile.adoc b/modules/med-about-makefile.adoc new file mode 100644 index 000000000..d61fd7496 --- /dev/null +++ b/modules/med-about-makefile.adoc @@ -0,0 +1,31 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="med-understanding-the-makefile-troubleshooting"] +=== Understanding the Makefile + +The Makefile is the entry point for the pattern: it bootstraps the pattern onto the cluster. After the initial bootstrapping, the Makefile is not required for ongoing operations. However, it is often useful when you need to change a configuration within the pattern, because running `make upgrade` refreshes the bootstrap resources without tearing down the pattern or the cluster. 
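Before running the Makefile targets described above, a small preflight check can save a failed bootstrap. The helper below is a sketch, not part of the pattern's Makefile, and it assumes `values-secret.yaml` lives in the current directory or in `$HOME`; check the Getting Started documentation for the exact location your pattern expects.

```shell
# Hypothetical preflight helper, not part of the pattern's Makefile.
# Assumption: values-secret.yaml is expected in the current directory
# or in $HOME (verify the exact location in the Getting Started docs).
check_values_secret() {
    for f in ./values-secret.yaml "$HOME/values-secret.yaml"; do
        if [ -f "$f" ]; then
            echo "found: $f"
            return 0
        fi
    done
    echo "values-secret.yaml not found; create it before running 'make install'"
    return 1
}

check_values_secret || true
```

If the file is missing, the Makefile itself exits with an error at the `load-secrets` step, so this check simply surfaces the problem earlier.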
+ +[id="about-make-install-make-deploy-command"] +==== About the make install and make deploy commands + +Running `make install` within the pattern application triggers a `make deploy` from the `/common` directory. This initializes the `common` components of the pattern framework and installs a Helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops}, are deployed. + +After components from the `common` directory are installed, the remaining tasks within the `make install` target run. +//AI: Check which are these other tasks + +[id="about-make-vault-init-make-load-secrets-commands"] +==== About the make vault-init and make load-secrets commands + +The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install Vault from a {helm-chart} and load the secrets from the `values-secret.yaml` file that you created during link:../getting-started/#preparing-for-deployment[Getting Started]. + +If `values-secret.yaml` does not exist, `make` exits with an error stating that the file is missing. If the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error that describes the formatting problem. To verify the format of the secret, see link:../getting-started/#preparing-for-deployment[Getting Started]. + +[id="about-make-bootstrap-make-upgrade-commands"] +==== About the make bootstrap and make upgrade commands +The `make bootstrap` command is the target used for deploying the application-specific components of the pattern. It is the final step in the initial `make install` target. You might want to consider running the `make upgrade` command instead of the `make bootstrap` command directly. + +Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. 
For instance, if a value was missed and the chart was not rendered correctly, execute the `make upgrade` command after fixing the value. + +You might want to review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile`, respectively. + diff --git a/modules/med-about-medical-diagnosis.adoc b/modules/med-about-medical-diagnosis.adoc new file mode 100644 index 000000000..92e533f0a --- /dev/null +++ b/modules/med-about-medical-diagnosis.adoc @@ -0,0 +1,46 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="about-med-diag-pattern"] += About the {med-pattern} + +Background:: + +This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veterans Affairs. + +This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern, including Operators, namespace creation, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment. + +Workflow:: + +* Ingests chest X-rays from a simulated X-ray machine and puts them into an `objectStore` based on Ceph. +* The `objectStore` sends a notification to a Kafka topic. +* A KNative Eventing listener on the topic triggers a KNative Serving function. +* An ML-trained model running in a container makes a risk assessment of pneumonia for incoming images. +* A Grafana dashboard displays the pipeline in real time, along with incoming, processed, and anonymized images, and full metrics collected from Prometheus. + +This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video]. 
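The workflow stages above (object store, Kafka topic, Knative eventing and serving) can be sanity-checked stage by stage from a terminal. This is a sketch: the `xraylab-1` namespace and the resource kinds are taken from this pattern's troubleshooting documentation, and the helper degrades gracefully when the `oc` CLI is not on the PATH.

```shell
# Walk the pipeline stages described above: Kafka source, Knative
# service, Kafka topics. Adjust the namespace if your deployment differs.
pipeline_status() {
    ns="${1:-xraylab-1}"
    if ! command -v oc >/dev/null 2>&1; then
        echo "oc not available; run this from a cluster-connected shell"
        return 0
    fi
    for kind in kafkasource kservice kafkatopics; do
        echo "--- $kind in $ns ---"
        oc get -n "$ns" "$kind" || echo "no $kind found in $ns"
    done
}

pipeline_status
```

A healthy deployment shows the `xray-images` Kafka source and the `risk-assessment` Knative service in `Ready: True` state, as in the example outputs in the troubleshooting section.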
+ +image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"] + +//[NOTE] +//==== +//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio]. +//==== + +[id="about-solution-med"] +== About the solution elements + +The solution aids the understanding of the following: + +* How to use a GitOps approach to stay in control of configuration and operations. +* How to deploy AI/ML technologies for medical diagnosis using GitOps. + +The {med-pattern} uses the following products and technologies: + +* {rh-ocp} for container orchestration +* {rh-gitops}, a GitOps continuous delivery (CD) solution +* {rh-amq-first}, an event streaming platform based on Apache Kafka +* {rh-serverless-first} for event-driven applications +* {rh-ocp-data-first} for cloud-native storage capabilities +* {grafana-op} to manage and share Grafana dashboards, data sources, and so on +* S3 storage \ No newline at end of file diff --git a/modules/med-architecture-schema.adoc b/modules/med-architecture-schema.adoc new file mode 100644 index 000000000..f44328282 --- /dev/null +++ b/modules/med-architecture-schema.adoc @@ -0,0 +1,29 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="about-architecture-med"] +== About the architecture + +[IMPORTANT] +==== +Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release. +==== + +image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"] + +Components run on OpenShift at the data center, at the medical facility, or in the public cloud.
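As a quick orientation to where these components land once deployed, the following hedged sketch lists the namespaces that this pattern's troubleshooting section refers to; verify the names against your own deployment rather than treating them as authoritative:

```shell
# Assumed namespace layout (taken from this pattern's troubleshooting notes):
# most workloads in xraylab-1, ODF in openshift-storage, and OpenShift
# Serverless in the knative-* namespaces. On a live cluster, compare against
# the output of `oc get namespaces`.
for ns in xraylab-1 openshift-storage knative-serving knative-eventing; do
  printf 'component namespace: %s\n' "$ns"
done
```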
+ +[id="about-physical-schema-med"] +=== About the physical schema + +The following diagram shows the components that are deployed with the various networks that connect them. + +image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"] + +The following diagram shows the components that are deployed with the the data flows and API calls between them. + +image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"] + +== Recorded demo + +link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]] diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc new file mode 100644 index 000000000..6cae0e0ff --- /dev/null +++ b/modules/med-ocp-cluster-sizing.adoc @@ -0,0 +1,47 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="med-openshift-cluster-size"] +=== About {med-pattern} OpenShift cluster size + +The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. + +For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators. + +The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. + +For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation]. 
+ + +[NOTE] +==== +You might want to add resources when more developers are working on building their applications. +==== + +The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes. + +[cols="^,^,^,^"] +|=== +| Node type | Number of nodes | Cloud provider | Instance type + +| Control plane and worker +| 3 and 3 +| Google Cloud +| n1-standard-8 + +| Control plane and worker +| 3 and 3 +| Amazon Web Services +| m5.2xlarge + +| Control plane and worker +| 3 and 3 +| Microsoft Azure +| Standard_D8s_v3 +|=== + +[role="_additional-resources"] +.Additional resources +* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types] +* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure] +* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide] diff --git a/modules/med-troubleshooting-deployment.adoc b/modules/med-troubleshooting-deployment.adoc new file mode 100644 index 000000000..98ce07ef3 --- /dev/null +++ b/modules/med-troubleshooting-deployment.adoc @@ -0,0 +1,164 @@ +:_content-type: REFERENCE +:imagesdir: ../../images + +[id="troubleshooting-the-pattern-deployment-troubleshooting"] +=== Troubleshooting the pattern deployment + +Occasionally, the pattern encounters issues during deployment. This can happen for any number of reasons, but most often it is because of a change within the operator itself or a change in the {olm-first}, which determines which operators are available in the operator catalog. Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation.
To ensure that the operator is in the catalog, run the following command: + +[source,terminal] +---- +$ oc get packagemanifests | grep <operator-name> +---- + +When an issue occurs with the operator itself, you can verify the status of the `subscription` and make sure that there are no warnings. An additional option is to log in to the OpenShift console, click *Operators*, and check the status of the operator. + +Other issues that you might encounter involve a specific application within the pattern misbehaving. Most of the pattern is deployed into the `xraylab-1` namespace. Other components, like ODF, are deployed into the `openshift-storage` namespace, and the OpenShift Serverless Operators are deployed into the `knative-serving` and `knative-eventing` namespaces. + +[NOTE] +==== +Use the Grafana dashboard to assist with debugging and identifying the issue. +==== + +''' +Problem:: No information is being processed in the dashboard. + +Solution:: Most often, this is because the `image-generator` deploymentConfig needs to be scaled up. By design, the `image-generator` is *scaled to 0*: ++ +[source,terminal] +---- +$ oc scale -n xraylab-1 dc/image-generator --replicas=1 +---- ++ +Alternatively, complete the following steps: + +. Navigate to the {rh-ocp} web console, and select *Workloads → DeploymentConfigs*. +. Select `image-generator` and scale the pod to 1 or more. +//AI: Needs review + +''' +Problem:: When browsing to the *xraylab* Grafana dashboard, there are no images in the right pane, only a security warning. + +Solution:: The certificates for the OpenShift cluster are untrusted by your system. The easiest way to solve this is to open a browser, go to the s3-rgw route (`oc get route -n openshift-storage`), and then acknowledge and accept the security warning. + +''' +Problem:: In the dashboard interface, no metrics data is available. + +Solution:: There is likely something wrong with the Prometheus data source for the Grafana dashboard.
You can check the status of the data source by running the following command: ++ +[source,terminal] +---- +$ oc get grafanadatasources -n xraylab-1 +---- ++ +Ensure that the Prometheus data source exists and that its status is available. The issue could also be with the service account token, for example from the `grafana-serviceaccount` service account, that is provided to the data source as a bearer token. + +''' +Problem:: The dashboard shows red in the corners of its panes. ++ +image::medical-edge/medDiag-noDB.png[link="/images/medical-edge/medDiag-noDB.png"] + +Solution:: This is most likely due to the *xraylab* database not being available or being misconfigured. Check the database and ensure that it is functioning properly. + +. Ensure that the database is populated with the correct tables: ++ +[source,terminal] +---- +$ oc exec -it xraylabdb-1- bash +$ mysql -u root + +USE xraylabdb; + +SHOW tables; +---- ++ +.Example output +[source,terminal] +---- + +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 75 +Server version: 10.3.32-MariaDB MariaDB Server + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +MariaDB [(none)]> USE xraylabdb; +Database changed +MariaDB [xraylabdb]> show tables; ++---------------------+ +| Tables_in_xraylabdb | ++---------------------+ +| images_anonymized | +| images_processed | +| images_uploaded | ++---------------------+ +3 rows in set (0.000 sec) +---- ++ +. Verify that the password set in the `values-secret.yaml` file works: ++ +[source,terminal] +---- +$ oc exec -it xraylabdb-1- bash +$ mysql -u xraylab -D xraylabdb -h xraylabdb -p + +---- ++ +If you can log in successfully, your password is configured correctly in Vault, synced by the External Secrets Operator, and mounted to the database. + +''' +Problem:: The image-generator is scaled correctly, but the dashboard is not updating.
+ +Solution:: The serverless eventing function might not be able to fetch the notifications from ODF and is therefore not triggering the knative-serving function to scale up. You may want to check the logs of the `rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-` pod in the `openshift-storage` namespace. ++ +[source,terminal] +---- +$ oc logs -n openshift-storage -f -c rgw +---- ++ +You should see the `PUT` statement with a status code of `200`. ++ +Ensure that the `kafkasource`, `kservice`, and `kafkatopic` resources are created: ++ +[source,terminal] +---- +$ oc get -n xraylab-1 kafkasource +---- ++ +.Example output +[source,terminal] +---- +NAME TOPICS BOOTSTRAPSERVERS READY REASON AGE +xray-images ["xray-images"] ["xray-cluster-kafka-bootstrap.xraylab-1.svc:9092"] True 23m +---- ++ +[source,terminal] +---- +$ oc get -n xraylab-1 kservice +---- ++ +.Example output +[source,terminal] +---- +NAME URL LATESTCREATED LATESTREADY READY REASON +risk-assessment https://risk-assessment-xraylab-1.apps.
risk-assessment-00001 risk-assessment-00001 True +---- ++ +[source,terminal] +---- +$ oc get -n xraylab-1 kafkatopics +---- ++ +.Example output +[source,terminal] +---- +NAME CLUSTER PARTITIONS REPLICATION FACTOR READY +consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a xray-cluster 50 1 True +strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 xray-cluster 1 3 True +strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b xray-cluster 1 1 True +xray-images xray-cluster 1 1 True +---- + +''' \ No newline at end of file From e0d37172bf1912735f04597a8f9f94293d15bd8c Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Mon, 9 Oct 2023 16:58:03 +0100 Subject: [PATCH 3/7] reorg the getting started sections for better flow of info --- content/blog/2021-12-31-medical-diagnosis.md | 2 +- content/learn/importing-a-cluster.adoc | 4 +- content/learn/vault.adoc | 4 +- .../patterns/medical-diagnosis/_index.adoc | 2 +- .../medical-diagnosis/med-cluster-sizing.adoc | 4 +- .../med-getting-started.adoc | 380 +----------------- .../med-ideas-for-customization.adoc | 3 +- .../med-troubleshooting.adoc | 3 +- modules/med-about-makefile.adoc | 4 +- modules/med-deploying-med-diag-pattern.adoc | 28 ++ modules/med-ocp-cluster-sizing.adoc | 4 +- modules/med-preparing-for-deployment.adoc | 167 ++++++++ ...ed-setup-aws-s3-bucket-with-utilities.adoc | 32 ++ ...sing-ocp-gitops-to-check-app-progress.adoc | 44 ++ modules/med-viewing-grafana-dashboard.adoc | 75 ++++ 15 files changed, 378 insertions(+), 378 deletions(-) create mode 100644 modules/med-deploying-med-diag-pattern.adoc create mode 100644 modules/med-preparing-for-deployment.adoc create mode 100644 modules/med-setup-aws-s3-bucket-with-utilities.adoc create mode 100644 modules/med-using-ocp-gitops-to-check-app-progress.adoc create mode 100644 modules/med-viewing-grafana-dashboard.adoc diff --git a/content/blog/2021-12-31-medical-diagnosis.md 
b/content/blog/2021-12-31-medical-diagnosis.md index 035583283..a628a2086 100644 --- a/content/blog/2021-12-31-medical-diagnosis.md +++ b/content/blog/2021-12-31-medical-diagnosis.md @@ -30,7 +30,7 @@ For a recorded demo deploying the pattern and seeing the dashboards available to --- -To deploy this pattern, follow the instructions outlined on the [getting-started](https://validatedpatterns.io/medical-diagnosis/getting-started/) page. +To deploy this pattern, follow the instructions outlined on the [getting-started](https://validatedpatterns.io/medical-diagnosis/med-getting-started/) page. ### What's happening? diff --git a/content/learn/importing-a-cluster.adoc b/content/learn/importing-a-cluster.adoc index 36f16e71e..559c14f7d 100644 --- a/content/learn/importing-a-cluster.adoc +++ b/content/learn/importing-a-cluster.adoc @@ -112,7 +112,7 @@ If you use the command line tools above you need to explicitly indicate that the We do this by adding the label referenced in the managedSite's `clusterSelector`. -1. Find the new cluster. +. Find the new cluster. + [source,terminal] @@ -120,7 +120,7 @@ We do this by adding the label referenced in the managedSite's `clusterSelector` oc get managedclusters.cluster.open-cluster-management.io ---- -1. Apply the label. +. Apply the label. + [source,terminal] diff --git a/content/learn/vault.adoc b/content/learn/vault.adoc index f7c6317e0..4a8ea98ee 100644 --- a/content/learn/vault.adoc +++ b/content/learn/vault.adoc @@ -14,12 +14,12 @@ include::modules/comm-attributes.adoc[] = Deploying HashiCorp Vault in a validated pattern [id="prerequisites"] -= Prerequisites +== Prerequisites You have deployed/installed a validated pattern using the instructions provided for that pattern. This should include setting having logged into the cluster using `oc login` or setting you `KUBECONFIG` environment variable and running a `./pattern.sh make install`. 
[id="setting-up-hashicorp-vault"] -= Setting up HashiCorp Vault +== Setting up HashiCorp Vault Any validated pattern that uses HashiCorp Vault already has deployed Vault as part of the `./pattern.sh make install`. To verify that Vault is installed you can first see that the `vault` project exists and then select the Workloads/Pods: diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index d3f77eead..ac498c674 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -32,4 +32,4 @@ include::modules/med-architecture-schema.adoc[leveloffset=+1] [id="next-steps_med-diag-index"] == Next steps -* Getting started link:getting-started[Deploy the Pattern] \ No newline at end of file +* link:med-getting-started[Deploy the pattern] \ No newline at end of file diff --git a/content/patterns/medical-diagnosis/med-cluster-sizing.adoc b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc index b70524ee6..49e9e5426 100644 --- a/content/patterns/medical-diagnosis/med-cluster-sizing.adoc +++ b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc @@ -1,7 +1,7 @@ --- -title: Cluster Sizing +title: Cluster sizing weight: 20 -aliases: /medical-diagnosis/cluster-sizing/ +aliases: /medical-diagnosis/med-cluster-sizing/ --- :toc: diff --git a/content/patterns/medical-diagnosis/med-getting-started.adoc b/content/patterns/medical-diagnosis/med-getting-started.adoc index 3fff9f7f6..c3c9b4979 100644 --- a/content/patterns/medical-diagnosis/med-getting-started.adoc +++ b/content/patterns/medical-diagnosis/med-getting-started.adoc @@ -1,7 +1,7 @@ --- -title: Getting Started +title: Getting started weight: 10 -aliases: /medical-diagnosis/getting-started/ +aliases: /medical-diagnosis/med-getting-started/ --- :toc: @@ -9,18 +9,13 @@ aliases: /medical-diagnosis/getting-started/ :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -//Module to be included 
-//:_content-type: PROCEDURE -//:imagesdir: ../../../images -[id="deploying-med-pattern"] -= Deploying the {med-pattern} - -.Prerequisites +[id="general-prerequisites_{context}"] += Prerequisites * An OpenShift cluster ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console]. - ** Select *Services* -> *Containers* -> *Create cluster*. - ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../medical-diagnosis/cluster-sizing[sizing your cluster]. + ** Select *OpenShift* -> *Clusters* -> *Create cluster*. + ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../medical-diagnosis/med-cluster-sizing[sizing your cluster]. * A GitHub account and a token for it with repositories permissions, to read from and write to your forks. * An S3-capable Storage set up in your public or private cloud for the x-ray images * The Helm binary, see link:https://helm.sh/docs/intro/install/[Installing Helm] @@ -31,367 +26,24 @@ For installation tooling dependencies, see link:https://validatedpatterns.io/lea The {med-pattern} does not have a dedicated hub or edge cluster. ==== -[id="setting-up-an-s3-bucket-for-the-xray-images-getting-started"] -=== Setting up an S3 Bucket for the xray-images - -An S3 bucket is required for image processing. -For information about creating a bucket in AWS S3, see the <> section. 
+[id="setting-up-storage-for-xray-images"] +== Setting up storage for the X-ray images -For information about creating the buckets on other cloud providers, see the following links: +Setting up storage is required for image processing.For information about creating the buckets on other cloud providers, see the following links: * link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html[AWS S3] * link:https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal[Azure Blob Storage] * link:https://cloud.google.com/storage/docs/quickstart-console[GCP Cloud Storage] -//Module to be included -//:_content-type: PROCEDURE -//:imagesdir: ../../../images - -[id="utilities"] -= Utilities -//AI: Update the use of community and VP post naming tier update - -To use the link:https://github.com/validatedpatterns/utilities[utilities] that are available, export some environment variables for your cloud provider. - -.Example for AWS. Ensure that you replace values with your keys: - -[source,terminal] ----- -export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX -export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ----- - -Create the S3 bucket and copy over the data from the validated patterns public bucket to the created bucket for your demo. You can do this on the cloud providers console or you can use the scripts that are provided in link:https://github.com/validatedpatterns/utilities[utilities] repository. - -[source,terminal] ----- -$ python s3-create.py -b mytest-bucket -r us-west-2 -p -$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2 ----- - -.Example output - -image:/videos/bucket-setup.svg[Bucket setup] - -Note the name and URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:` - -[id="preparing-for-deployment"] -= Preparing for deployment -.Procedure - -. 
Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes. -. Clone the forked copy of this repository. -+ -[source,terminal] ----- -$ git clone git@github.com:/medical-diagnosis.git ----- - -. Create a local copy of the Helm values file that can safely include credentials. -+ -[WARNING] -==== -Do not commit this file. You do not want to push personal credentials to GitHub. -==== -+ -Run the following commands: -+ -[source,terminal] ----- -$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml -$ vi ~/values-secret-medical-diagnosis.yaml ----- -+ -.Example `values-secret.yaml` file - -[source,yaml] ----- -version "2.0" -secrets: - # NEVER COMMIT THESE VALUES TO GIT - - # Database login credentials and configuration - - name: xraylab - fields: - - name: database-user - value: xraylab - - name: database-host - value: xraylabdb - - name: database-db - value: xraylabdb - - name: database-master-user - value: xraylab - - name: database-password - onMissingValue: generate - vaultPolicy: validatedPatternDefaultPolicy - - name: database-root-password - onMissingValue: generate - vaultPolicy: validatedPatternDefaultPolicy - - name: database-master-password - onMissingValue: generate - vaultPolicy: validatedPatternDefaultPolicy - - # Grafana Dashboard admin user/password - - name: grafana - fields: - - name: GF_SECURITY_ADMIN_USER: - value: root - - name: GF_SECURITY_ADMIN_PASSWORD: - onMissingValue: generate - vaultPolicy: validatedPatternDefaultPolicy ----- -+ -By default, Vault password policy generates the passwords for you. However, you can create your own passwords. -+ -[NOTE] -==== -When defining a custom password for the database users, avoid using the `$` special character as it gets interpreted by the shell and will ultimately set the incorrect desired password. -==== - -. 
To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands: -+ -[source,terminal] ----- -$ git checkout -b my-branch -$ vi values-global.yaml ----- -+ -Replace instances of PROVIDE_ with your specific configuration -+ -[source,yaml] ----- - ...omitted - datacenter: - cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP - storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi - region: PROVIDE_CLOUD_REGION #us-east-2 - clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName - domain: PROVIDE_DNS_DOMAIN #example.com - - s3: - # Values for S3 bucket access - # Replace with AWS region where S3 bucket was created - # Replace and with your OpenShift cluster values - # bucketSource: "https://s3..amazonaws.com/" - bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray - # Bucket base name used for xray images - bucketBaseName: "xray-source" ----- -+ -[source,terminal] ----- -$ git add values-global.yaml -$ git commit values-global.yaml -$ git push origin my-branch ----- - -. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you do use the Operator, skip to <>. - -. To preview the changes that will be implemented to the Helm charts, run the following command: -+ -[source,terminal] ----- -$ ./pattern.sh make show ----- - -. Login to your cluster by running the following command: -+ -[source,terminal] ----- -$ oc login ----- -+ -Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path: -+ -[source,terminal] ----- - export KUBECONFIG=~/ ----- - -[id="check-the-values-files-before-deployment"] -== Check the values files before deployment - -To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required. 
- -You must review the following values files before deploying the {med-pattern}: - -|=== -| Values File | Description - -| values-secret.yaml -| Values file that includes the secret parameters required by the pattern - -| values-global.yaml -| File that contains all the global values used by Helm to deploy the pattern -|=== - -[NOTE] -==== -Before you run the `./pattern.msh make install` command, ensure that you have the correct values for: -``` -- domain -- clusterName -- cloudProvider -- storageClassName -- region -- bucketSource -``` -==== - -//image::/videos/predeploy.svg[link="/videos/predeploy.svg"] - -//Module to be included -//:_content-type: PROCEDURE -//:imagesdir: ../../../images -[id="med-deploy-pattern_{context}"] -= Deploy - -. To apply the changes to your cluster, run the following command: -+ -[source,terminal] ----- -$ ./pattern.sh make install ----- -+ -If the installation fails, you can go over the instructions and make updates, if required. -To continue the installation, run the following command: -+ -[source,terminal] ----- -$ ./pattern.sh make update ----- -+ -This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation. -+ -image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"] - -. Verify that the Operators have been installed. -.. To verify, in the {ocp} web console, navigate to *Operators* → *Installed Operators* page. -.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} is listed in the list of installed Operators. 
- - -//Module to be included -//:_content-type: PROCEDURE -//:imagesdir: ../../../images -[id="using-openshift-gitops-to-check-on-application-progress-getting-started"] -== Using OpenShift GitOps to check on Application progress - -To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator. - -. Obtain the ArgoCD URLs and passwords. -+ -The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow the instructions below to find them, however you choose to deploy the pattern. -+ -Display the fully qualified domain names, and matching login credentials, for -all ArgoCD instances: -+ -[source,terminal] ----- -ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster` -CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'` -eval $CMD ----- -+ -.Example output -+ -[source,text] ----- -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -hub-gitops-server hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com hub-gitops-server https passthrough/Redirect None -# admin.password -xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -cluster cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com cluster 8080 reencrypt/Allow None -kam kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com kam 8443 passthrough/None None -openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com openshift-gitops-server https passthrough/Redirect None -# admin.password -FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6 ----- -+ -[IMPORTANT] -==== -Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance. -==== - -. Check that all applications are synchronized. 
There are thirteen different ArgoCD `applications` that are deployed as part of this pattern. - - -//Module to be included -//:_content-type: PROCEDURE -//:imagesdir: ../../../images -[id="viewing-the-grafana-based-dashboard-getting-started"] -== Viewing the Grafana based dashboard - -. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to the Routes for project `openshift-storage``. Click the URL for the `s3-rgw`. -+ -image::medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"] -+ -Ensure that you see some XML and not the access denied error message. -+ -image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"] - -. While still looking at Routes, change the project to `xraylab-1`. Click the URL for the `image-server`. Ensure that you do not see an access denied error message. You must to see a `Hello World` message. -+ -image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"] - -. Turn on the image file flow. There are three ways to go about this. -+ -You can go to the command-line (make sure you have KUBECONFIG set, or are logged into the cluster. -+ -[source,terminal] ----- -$ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1 ----- -+ -Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the `xraylab-1` project. -+ -image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"] -+ -Right-click on the `image-generator` pod icon and select `Edit Pod count`. -+ -image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"] -+ -Up the pod count from `0` to `1` and save. -+ -image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"] -+ -Alternatively, you can have the same outcome on the Administrator console. 
-+ -Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project `xraylab-1`. -Click `image-generator` and increase the pod count to 1. -+ -image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"] +include::modules/med-setup-aws-s3-bucket-with-utilities.adoc[leveloffset=+2] +include::modules/med-preparing-for-deployment.adoc[leveloffset=+1] -//Module to be included -//:_content-type: PROCEDURE -//:imagesdir: ../../../images -[id="making-some-changes-on-the-dashboard-getting-started"] -== Making some changes on the dashboard +include::modules/med-deploying-med-diag-pattern.adoc[leveloffset=+1] -You can change some of the parameters and watch how the changes effect the dashboard. +[id="post-deployment-configuration_{context}"] +== Post-deployment configuration -. You can increase or decrease the number of image generators. -+ -[source,terminal] ----- -$ oc scale deploymentconfig/image-generator --replicas=2 ----- -+ -Check the dashboard. -+ -[source,terminal] ----- -$ oc scale deploymentconfig/image-generator --replicas=0 ----- -+ -Watch the dashboard stop processing images. +include::modules/med-using-ocp-gitops-to-check-app-progress.adoc[leveloffset=+2] -. You can also simulate the change of the AI model version - as it's only an environment variable in the Serverless Service configuration. -+ -[source,terminal] ----- -$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]' ----- -+ -This changes the model version value, and the `revisionTimestamp` in the annotations, which triggers a redeployment of the service. 
+include::modules/med-viewing-grafana-dashboard.adoc[leveloffset=+2] diff --git a/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc index 88c73a08f..16763f775 100644 --- a/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc +++ b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc @@ -1,11 +1,12 @@ --- title: Ideas for customization weight: 50 -aliases: /medical-diagnosis/ideas-for-customization/ +aliases: /medical-diagnosis/med-ideas-for-customization/ --- :toc: :imagesdir: /images :_content-type: ASSEMBLY + include::modules/comm-attributes.adoc[] include::modules/med-about-customizing-pattern.adoc[leveloffset=+1] diff --git a/content/patterns/medical-diagnosis/med-troubleshooting.adoc b/content/patterns/medical-diagnosis/med-troubleshooting.adoc index ee4e8ee27..d3e7fcef0 100644 --- a/content/patterns/medical-diagnosis/med-troubleshooting.adoc +++ b/content/patterns/medical-diagnosis/med-troubleshooting.adoc @@ -1,12 +1,13 @@ --- title: Troubleshooting weight: 40 -aliases: /medical-diagnosis/troubleshooting/ +aliases: /medical-diagnosis/med-troubleshooting/ --- :toc: :imagesdir: /images :_content-type: REFERENCE + include::modules/comm-attributes.adoc[] include::modules/med-about-makefile.adoc[leveloffset=+1] diff --git a/modules/med-about-makefile.adoc b/modules/med-about-makefile.adoc index d61fd7496..d6b21b16f 100644 --- a/modules/med-about-makefile.adoc +++ b/modules/med-about-makefile.adoc @@ -17,9 +17,9 @@ After components from the `common` directory are installed, the remaining tasks [id="about-make-vault-init-make-load-secrets-commands"] ==== About the make vault-init and make load-secrets commands -The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. 
These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../getting-started/#preparing-for-deployment[Getting Started]. +The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../med-getting-started/#preparing-for-deployment[Getting Started]. -If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../getting-started/#preparing-for-deployment[Getting Started]. +If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../med-getting-started/#preparing-for-deployment[Getting Started]. [id="about-make-bootstrap-make-upgrade-commands"] ==== About the make bootstrap and make upgrade commands diff --git a/modules/med-deploying-med-diag-pattern.adoc b/modules/med-deploying-med-diag-pattern.adoc new file mode 100644 index 000000000..f1ef75aeb --- /dev/null +++ b/modules/med-deploying-med-diag-pattern.adoc @@ -0,0 +1,28 @@ +:_content-type: PROCEDURE +:imagesdir: ../../../images + +[id="med-deploy-pattern_{context}"] += Deploying the {med-pattern} + +. To apply the changes to your cluster, run the following command: ++ +[source,terminal] +---- +$ ./pattern.sh make install +---- ++ +If the installation fails, you can go over the instructions and make updates, if required. 
+To continue the installation, run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make update
+----
++
+This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process, which can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation.
++
+image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"]
+
+. Verify that the Operators have been installed.
+.. To verify, in the {ocp} web console, navigate to the *Operators* → *Installed Operators* page.
+.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} is listed in the list of installed Operators.
\ No newline at end of file
diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc
index 6cae0e0ff..bb25fe849 100644
--- a/modules/med-ocp-cluster-sizing.adoc
+++ b/modules/med-ocp-cluster-sizing.adoc
@@ -2,7 +2,7 @@
 :imagesdir: ../../images
 
 [id="med-openshift-cluster-size"]
-=== About {med-pattern} OpenShift cluster size
+== About {med-pattern} OpenShift cluster size
 
 The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. 
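The console-based Operator verification step above can also be scripted against the Operator Lifecycle Manager's ClusterServiceVersion objects. A minimal sketch, assuming `oc get csv -n openshift-operators` output; the sample output below uses hypothetical operator names and versions so the filter can be demonstrated without cluster access:

```shell
# With cluster access you would pipe live output instead:
#   oc get csv -n openshift-operators --no-headers | awk '$NF != "Succeeded"'
# Hypothetical sample output stands in for the live cluster here:
csv_output='NAME                     DISPLAY            VERSION   PHASE
odf-operator.v4.12.0     OpenShift Data F.  4.12.0    Succeeded
openshift-gitops.v1.8.0  Red Hat GitOps     1.8.0     Installing'

# Print any CSV whose install phase (last column) is not "Succeeded".
pending=$(echo "$csv_output" | awk 'NR > 1 && $NF != "Succeeded" {print $1}')
echo "$pending"
```

An empty result means every listed Operator reached the `Succeeded` phase.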
@@ -41,7 +41,7 @@ The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or
 |===
 
 [role="_additional-resources"]
-.Additional resource
+.Additional resources
 * link:https://aws.amazon.com/ec2/instance-types/[AWS instance types]
 * link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure]
 * link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide]
diff --git a/modules/med-preparing-for-deployment.adoc b/modules/med-preparing-for-deployment.adoc
new file mode 100644
index 000000000..5323ec590
--- /dev/null
+++ b/modules/med-preparing-for-deployment.adoc
@@ -0,0 +1,167 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="preparing-for-deployment"]
+= Preparing to deploy the {med-pattern}
+
+.Procedure
+
+. Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes.
+. Clone the forked copy of this repository.
++
+[source,terminal]
+----
+$ git clone git@github.com:<your-GitHub-username>/medical-diagnosis.git
+----
+
+. Create a local copy of the Helm values file that can safely include credentials.
++
+[WARNING]
+====
+Do not commit this file. You do not want to push personal credentials to GitHub. 
+====
++
+Run the following commands:
++
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
+$ vi ~/values-secret-medical-diagnosis.yaml
+----
++
+.Example `values-secret.yaml` file
+
+[source,yaml]
+----
+version: "2.0"
+secrets:
+  # NEVER COMMIT THESE VALUES TO GIT
+
+  # Database login credentials and configuration
+  - name: xraylab
+    fields:
+    - name: database-user
+      value: xraylab
+    - name: database-host
+      value: xraylabdb
+    - name: database-db
+      value: xraylabdb
+    - name: database-master-user
+      value: xraylab
+    - name: database-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+    - name: database-root-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+    - name: database-master-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+
+  # Grafana Dashboard admin user/password
+  - name: grafana
+    fields:
+    - name: GF_SECURITY_ADMIN_USER
+      value: root
+    - name: GF_SECURITY_ADMIN_PASSWORD
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+----
++
+By default, the Vault password policy generates the passwords for you. However, you can create your own passwords.
++
+[NOTE]
+====
+When defining a custom password for the database users, avoid the `$` special character because the shell interprets it, which ultimately sets an unintended password.
+====
+
+. 
To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands:
++
+[source,terminal]
+----
+$ git checkout -b my-branch
+$ vi values-global.yaml
+----
++
+Replace the instances of `PROVIDE_` with your specific configuration:
++
+[source,yaml]
+----
+ ...omitted
+ datacenter:
+   cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP
+   storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi
+   region: PROVIDE_CLOUD_REGION #us-east-2
+   clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName
+   domain: PROVIDE_DNS_DOMAIN #example.com
+
+   s3:
+     # Values for S3 bucket access
+     # Replace <region> with the AWS region where the S3 bucket was created
+     # Replace <clustername> and <domain> with your OpenShift cluster values
+     # bucketSource: "https://s3.<region>.amazonaws.com/"
+     bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray
+     # Bucket base name used for xray images
+     bucketBaseName: "xray-source"
+----
++
+[source,terminal]
+----
+$ git add values-global.yaml
+$ git commit -m "Update values-global.yaml for my cluster" values-global.yaml
+$ git push origin my-branch
+----
+
+. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you use the Operator to deploy the pattern, skip to the _Verification_ section of this procedure.
+
+. To preview the changes that will be implemented to the Helm charts, run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make show
+----
+
+. Log in to your cluster by running the following command:
++
+[source,terminal]
+----
+$ oc login
+----
++
+Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path:
++
+[source,terminal]
+----
+$ export KUBECONFIG=~/<path-to-kubeconfig>
+----
+
+.Verification
+
+To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required. 
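As an additional quick check before committing, you can scan the edited file for unreplaced placeholders. A minimal sketch; the helper name is illustrative and not part of the pattern tooling, and the demonstration runs against a generated sample file rather than your real `values-global.yaml`:

```shell
# Hypothetical helper: list any PROVIDE_ placeholders left in a values file.
check_placeholders() {
  grep -o 'PROVIDE_[A-Z_]*' "$1" | sort -u
}

# Demonstration against a temporary sample file; point it at your edited
# values-global.yaml instead when preparing a real deployment.
sample=$(mktemp)
cat > "$sample" <<'EOF'
datacenter:
  cloudProvider: PROVIDE_CLOUD_PROVIDER
  region: us-east-2
EOF
leftover=$(check_placeholders "$sample")
echo "$leftover"
rm -f "$sample"
```

An empty result means every placeholder has been replaced.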
+
+You must review the following `values*` files before deploying the {med-pattern}:
+
+|===
+| Values File | Description
+
+| values-secret.yaml
+| Values file that includes the secret parameters required by the pattern
+
+| values-global.yaml
+| File that contains all the global values used by Helm to deploy the pattern
+|===
+
+[NOTE]
+====
+Before you run the `./pattern.sh make install` command, ensure that you have the correct values for:
+----
+- domain
+- clusterName
+- cloudProvider
+- storageClassName
+- region
+- bucketSource
+----
+====
+
+//image::/videos/predeploy.svg[link="/videos/predeploy.svg"]
\ No newline at end of file
diff --git a/modules/med-setup-aws-s3-bucket-with-utilities.adoc b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
new file mode 100644
index 000000000..7aa7740de
--- /dev/null
+++ b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
@@ -0,0 +1,32 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="setting-up-s3-bucket-for-xray-images"]
+= Using {solution-name-upstream} utilities to set up an AWS S3 bucket
+
+To use the link:https://github.com/validatedpatterns/utilities/tree/main/aws-tools[aws-tools], complete the following steps:
+
+.Procedure
+
+. Export the following environment variables for AWS. Ensure that you replace the values with your keys:
+
+[source,terminal]
+----
+export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
+export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+----
+
+. Create the S3 bucket and copy over the data from the {solution-name-upstream} public bucket to the created bucket for your demo. 
You can do this on the cloud provider's console or you can use the scripts that are provided in the link:https://github.com/validatedpatterns/utilities[utilities] repository:
+
+[source,terminal]
+----
+$ python s3-create.py -b mytest-bucket -r us-west-2 -p
+$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
+----
+
+.Example output
+
+image:/videos/bucket-setup.svg[Bucket setup]
+
+Make a note of the bucket name and URL for further pattern configuration. For example, you must update these values in the `s3:` section of the `values-global.yaml` file.
+
diff --git a/modules/med-using-ocp-gitops-to-check-app-progress.adoc b/modules/med-using-ocp-gitops-to-check-app-progress.adoc
new file mode 100644
index 000000000..4d2ad5042
--- /dev/null
+++ b/modules/med-using-ocp-gitops-to-check-app-progress.adoc
@@ -0,0 +1,44 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="using-openshift-gitops-to-check-application-progress"]
+== Using {rh-gitops-short} to check application progress
+
+To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator.
+
+. Obtain the ArgoCD URLs and passwords.
++
+The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow these instructions to find them, regardless of how you choose to deploy the pattern. 
++
+Display the fully qualified domain names and matching login credentials for
+all ArgoCD instances:
++
+[source,terminal]
+----
+ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
+CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
+eval $CMD
+----
++
+.Example output
++
+[source,text]
+----
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+hub-gitops-server hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com hub-gitops-server https passthrough/Redirect None
+# admin.password
+xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+cluster cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com cluster 8080 reencrypt/Allow None
+kam kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com kam 8443 passthrough/None None
+openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com openshift-gitops-server https passthrough/Redirect None
+# admin.password
+FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6
+----
++
+[IMPORTANT]
+====
+Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance.
+====
+
+. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
diff --git a/modules/med-viewing-grafana-dashboard.adoc b/modules/med-viewing-grafana-dashboard.adoc
new file mode 100644
index 000000000..d99e013cc
--- /dev/null
+++ b/modules/med-viewing-grafana-dashboard.adoc
@@ -0,0 +1,75 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="viewing-the-grafana-based-dashboard-getting-started"]
+= Viewing the Grafana-based dashboard
+
+. Accept the SSL certificates on the browser for the dashboard. 
In the {ocp} web console, go to the Routes for the `openshift-storage` project. Click the URL for the `s3-rgw`.
++
+image::medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"]
++
+Ensure that you see some XML and not the access denied error message.
++
+image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]
+
+. While still looking at Routes, change the project to `xraylab-1`. Click the URL for the `image-server`. Ensure that you do not see an access denied error message. You must see a `Hello World` message.
++
+image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"]
+
+. Turn on the image file flow. There are three ways to do this.
++
+You can use the command line (ensure that you have `KUBECONFIG` set, or are logged in to the cluster):
++
+[source,terminal]
+----
+$ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1
+----
++
+Or you can go to the OpenShift UI, change the view from Administrator to Developer, and select Topology. From there, select the `xraylab-1` project.
++
+image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"]
++
+Right-click the `image-generator` pod icon and select `Edit Pod count`.
++
+image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"]
++
+Increase the pod count from `0` to `1` and save.
++
+image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"]
++
+Alternatively, you can have the same outcome on the Administrator console.
++
+Go to the OpenShift UI under Workloads, select Deploymentconfigs for the `xraylab-1` project.
+Click `image-generator` and increase the pod count to `1`.
++
+image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"]
+
+[id="customizing-dashboard"]
+== Customizing the dashboard
+
+You can change some of the parameters and watch how the changes affect the dashboard. 
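One subtle detail in the `oc patch` command used in the steps that follow is the shell quoting around `$(date +%F_%T)`: the single-quoted JSON is closed, the shell-expanded timestamp is inserted inside double quotes, and the single quote is reopened. A standalone sketch of the same trick, building only the first of the two replace operations so the resulting payload can be inspected without a cluster:

```shell
# Build the JSON Patch payload outside oc to see what the quoting produces.
# The '...'"$ts"'...' sequence splices the expanded timestamp into the
# otherwise single-quoted (and therefore unexpanded) JSON string.
ts=$(date +%F_%T)
patch='[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$ts"'"}]'
echo "$patch"
```

Because the annotation value changes on every run, Knative sees a new revision and redeploys the service.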
+ +. To increase or decrease the number of image generators, run the following command: ++ +[source,terminal] +---- +$ oc scale deploymentconfig/image-generator --replicas=2 +---- ++ +Check the dashboard. ++ +[source,terminal] +---- +$ oc scale deploymentconfig/image-generator --replicas=0 +---- ++ +Watch the dashboard stop processing images. + +. You can also simulate the change of the AI model version, which is an environment variable in the Serverless Service configuration. ++ +[source,terminal] +---- +$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]' +---- ++ +This changes the model version value, and the `revisionTimestamp` in the annotations, which triggers a redeployment of the service. From 4b9a96f8629df8d6475d8ff3e3514dadd9cdfc0b Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Mon, 9 Oct 2023 17:45:37 +0100 Subject: [PATCH 4/7] fixed linking issues due to updated filenames --- modules/med-about-customizing-pattern.adoc | 10 +++++----- modules/med-about-medical-diagnosis.adoc | 2 +- modules/med-ocp-cluster-sizing.adoc | 2 +- modules/med-setup-aws-s3-bucket-with-utilities.adoc | 6 +++--- modules/med-troubleshooting-deployment.adoc | 4 ++-- 5 files changed, 12 insertions(+), 12 deletions(-) diff --git a/modules/med-about-customizing-pattern.adoc b/modules/med-about-customizing-pattern.adoc index e0b1e14a9..32d37b6e7 100644 --- a/modules/med-about-customizing-pattern.adoc +++ b/modules/med-about-customizing-pattern.adoc @@ -2,7 +2,7 @@ :imagesdir: ../../images [id="about-customizing-pattern-med"] -= About customizing the pattern {med-pattern} += About customizing the {med-pattern} One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. 
The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA? The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}. @@ -10,9 +10,9 @@ The {med-pattern} can answer the call to either of these requirements by using [id="understanding-different-ways-to-use-med-pattern"] == Understanding different ways to use the {med-pattern} -. The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. -. The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints. -. 
Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. -. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. +* The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. +* The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints. +* Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. 
For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. +* Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application. diff --git a/modules/med-about-medical-diagnosis.adoc b/modules/med-about-medical-diagnosis.adoc index 92e533f0a..6323a8631 100644 --- a/modules/med-about-medical-diagnosis.adoc +++ b/modules/med-about-medical-diagnosis.adoc @@ -43,4 +43,4 @@ The {med-pattern} uses the following products and technologies: * {rh-serverless-first} for event-driven applications * {rh-ocp-data-first} for cloud native storage capabilities * {grafana-op} to manage and share Grafana dashboards, data sources, and so on -* S3 storage \ No newline at end of file +* Storage, such as AWS S3 buckets \ No newline at end of file diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc index bb25fe849..07fa311b0 100644 --- a/modules/med-ocp-cluster-sizing.adoc +++ b/modules/med-ocp-cluster-sizing.adoc @@ -2,7 +2,7 @@ :imagesdir: ../../images [id="med-openshift-cluster-size"] -== About {med-pattern} OpenShift cluster size +== About OpenShift cluster size for the {med-pattern} The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. 
diff --git a/modules/med-setup-aws-s3-bucket-with-utilities.adoc b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
index 7aa7740de..ad722fae1 100644
--- a/modules/med-setup-aws-s3-bucket-with-utilities.adoc
+++ b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
@@ -9,7 +9,7 @@ To use the link:https://github.com/validatedpatterns/utilities/tree/main/aws-too
 .Procedure
 
 . Export the following environment variables for AWS. Ensure that you replace the values with your keys:
-
++
 [source,terminal]
 ----
 export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
@@ -17,13 +17,13 @@ export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 ----
 
 . Create the S3 bucket and copy over the data from the {solution-name-upstream} public bucket to the created bucket for your demo. You can do this on the cloud provider's console or you can use the scripts that are provided in the link:https://github.com/validatedpatterns/utilities[utilities] repository:
-
++
 [source,terminal]
 ----
 $ python s3-create.py -b mytest-bucket -r us-west-2 -p
 $ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
 ----
-
++
 .Example output
 
 image:/videos/bucket-setup.svg[Bucket setup]
diff --git a/modules/med-troubleshooting-deployment.adoc b/modules/med-troubleshooting-deployment.adoc
index 98ce07ef3..bc226ae64 100644
--- a/modules/med-troubleshooting-deployment.adoc
+++ b/modules/med-troubleshooting-deployment.adoc
@@ -1,8 +1,8 @@
 :_content-type: REFERENCE
-:imagesdir: ../../images
+:imagesdir: ../../../images
 
 [id="troubleshooting-the-pattern-deployment-troubleshooting"]
-=== Troubleshooting the Pattern Deployment
+=== Troubleshooting the pattern deployment
 
 Occasionally the pattern will encounter issues during the deployment. This can happen for any number of reasons, but most often it is because of either a change within the operator itself or something has changed in the {olm-first} which determines which operators are available in the operator catalog. 
Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation. To ensure that the operator is in the catalog, run the following command: From 6ec2563b3e8dd759dc9b4fcd61abf0ca056cfaa6 Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Thu, 16 Nov 2023 00:02:27 +0000 Subject: [PATCH 5/7] Addressed review comments and made some more minor edits --- .vale.ini | 17 ++++++++++++++++ modules/med-about-cluster-sizing.adoc | 2 +- modules/med-about-customizing-pattern.adoc | 14 +++++++++---- modules/med-about-makefile.adoc | 20 +++++++++---------- modules/med-about-medical-diagnosis.adoc | 2 +- modules/med-deploying-med-diag-pattern.adoc | 4 ++-- modules/med-ocp-cluster-sizing.adoc | 9 ++++++--- modules/med-preparing-for-deployment.adoc | 6 ++++-- ...ed-setup-aws-s3-bucket-with-utilities.adoc | 2 +- modules/med-troubleshooting-deployment.adoc | 6 +++--- modules/med-viewing-grafana-dashboard.adoc | 14 +++++++------ 11 files changed, 63 insertions(+), 33 deletions(-) create mode 100644 .vale.ini diff --git a/.vale.ini b/.vale.ini new file mode 100644 index 000000000..339770a3b --- /dev/null +++ b/.vale.ini @@ -0,0 +1,17 @@ +StylesPath = .vale/styles + +MinAlertLevel = suggestion + +Packages = RedHat, AsciiDoc +Vocab = OpenShiftDocs + +# Ignore files in dirs starting with `.` to avoid raising errors for `.vale/fixtures/*/testinvalid.adoc` files +[[!.]*.adoc] +BasedOnStyles = RedHat, AsciiDoc, + +# Optional: pass doc attributes to asciidoctor before linting +#[asciidoctor] +#openshift-enterprise = YES + +# Disabling rules (NO) +RedHat.ReleaseNotes = NO diff --git a/modules/med-about-cluster-sizing.adoc b/modules/med-about-cluster-sizing.adoc index 722835156..719764c4b 100644 --- a/modules/med-about-cluster-sizing.adoc +++ b/modules/med-about-cluster-sizing.adoc @@ -5,7 +5,7 @@ [id="about-openshift-cluster-sizing-med"] = About OpenShift cluster sizing for the {med-pattern} -To understand cluster sizing requirements for the {med-pattern}, 
consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:
+The {med-pattern} deploys the following components on the datacenter or the hub OpenShift cluster:
 
 |===
 | Name | Kind | Namespace | Description
diff --git a/modules/med-about-customizing-pattern.adoc b/modules/med-about-customizing-pattern.adoc
index 32d37b6e7..53bd2582b 100644
--- a/modules/med-about-customizing-pattern.adoc
+++ b/modules/med-about-customizing-pattern.adoc
@@ -4,14 +4,20 @@
 [id="about-customizing-pattern-med"]
 = About customizing the {med-pattern}
 
-One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA?
-The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}.
+One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment:
+
+* How would your workload best consume the pattern framework?
+
+* Do your consumers require on-demand or near real-time responses when using your application?
+
+* Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA?
+
+The {med-pattern} can address any of these requirements by using {serverless-short} and {ocp-data-short}. 
[id="understanding-different-ways-to-use-med-pattern"]
 == Understanding different ways to use the {med-pattern}
 
-* The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease.
-* The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints.
+* The {med-pattern} scans X-ray images to determine the probability that a patient has pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan computed tomography (CT) images for anomalies in the body such as sepsis, cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. 
+* The United States Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized, which can save passengers from being stopped and searched at security checkpoints. * Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. * Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. diff --git a/modules/med-about-makefile.adoc b/modules/med-about-makefile.adoc index d6b21b16f..365fa239f 100644 --- a/modules/med-about-makefile.adoc +++ b/modules/med-about-makefile.adoc @@ -2,30 +2,30 @@ :imagesdir: ../../images [id="med-understanding-the-makefile-troubleshooting"] -=== Understanding the Makefile += Understanding the Makefile -The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when needing to make a change to a config within the pattern by running a `make upgrade` which allows us to refresh the bootstrap resources without having to tear down the pattern or cluster. 
+The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when you need to make a change to a config within the pattern. Run the `make upgrade` command to refresh the bootstrap resources without having to tear down the pattern or cluster. [id="about-make-install-make-deploy-command"] -==== About the make install and make deploy commands +== About the make install and make deploy commands -Running `make install` within the pattern application triggers a `make deploy` from `/common` directory. This initializes the `common` components of the pattern framework and install a helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops} are deployed. +Running `make install` within the pattern application triggers a `make deploy` from the `/common` directory. This initializes the `common` components of the pattern framework and installs a Helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops}, are deployed. -After components from the `common` directory are installed, the remaining tasks within the `make install` target run. +After you have installed the components from the `common` directory, the pattern runs the remaining tasks within the `make install` target. //AI: Check which are these other tasks [id="about-make-vault-init-make-load-secrets-commands"] -==== About the make vault-init and make load-secrets commands +== About the make vault-init and make load-secrets commands The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../med-getting-started/#preparing-for-deployment[Getting Started]. 
-If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../med-getting-started/#preparing-for-deployment[Getting Started]. +If `values-secret.yaml` does not exist, `make` will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../med-getting-started/#preparing-for-deployment[Getting Started]. [id="about-make-bootstrap-make-upgrade-commands"] -==== About the make bootstrap and make upgrade commands +== About the make bootstrap and make upgrade commands The `make bootstrap` command is the target used for deploying the application specific components of the pattern. It is the final step in the initial `make install` target. You might want to consider running the `make upgrade` command instead of the `make bootstrap` command directly. -Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For instance, if a value was missed and the chart was not rendered correctly, executing `make upgrade` command after fixing the value would be necessary. +Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For example, if you miss a value and the chart does not render correctly, you must run the `make upgrade` command after fixing the value. -You might want to review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile` respectively. 
+Review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile`, respectively. diff --git a/modules/med-about-medical-diagnosis.adoc b/modules/med-about-medical-diagnosis.adoc index 6323a8631..b6bb1a2f3 100644 --- a/modules/med-about-medical-diagnosis.adoc +++ b/modules/med-about-medical-diagnosis.adoc @@ -6,7 +6,7 @@ Background:: -This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs. +This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that {redhat} developed for the US Department of Veterans Affairs. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment. diff --git a/modules/med-deploying-med-diag-pattern.adoc b/modules/med-deploying-med-diag-pattern.adoc index f1ef75aeb..23910093a 100644 --- a/modules/med-deploying-med-diag-pattern.adoc +++ b/modules/med-deploying-med-diag-pattern.adoc @@ -11,7 +11,7 @@ $ ./pattern.sh make install ---- + -If the installation fails, you can go over the instructions and make updates, if required. +If the installation fails, review the instructions and make any required updates. 
To continue the installation, run the following command: + [source,terminal] @@ -19,7 +19,7 @@ To continue the installation, run the following command: $ ./pattern.sh make update ---- + -This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation. +This step might take up to twenty minutes to complete, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation. + image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"] diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc index 07fa311b0..f80bfd95a 100644 --- a/modules/med-ocp-cluster-sizing.adoc +++ b/modules/med-ocp-cluster-sizing.adoc @@ -6,12 +6,15 @@ The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. -For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators. +For {med-pattern}, the OpenShift cluster size must be larger than a standard cluster to support the compute and storage demands of OpenShift Data Foundations and other Operators. -The minimum requirements for an {ocp} cluster depend on your installation platform. 
For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. +The minimum requirements for an {ocp} cluster depend on your installation platform, for example: -For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation]. +* For AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS] + +* For bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. +For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation]. [NOTE] ==== diff --git a/modules/med-preparing-for-deployment.adoc b/modules/med-preparing-for-deployment.adoc index 5323ec590..ce060c363 100644 --- a/modules/med-preparing-for-deployment.adoc +++ b/modules/med-preparing-for-deployment.adoc @@ -72,7 +72,7 @@ By default, Vault password policy generates the passwords for you. However, you + [NOTE] ==== -When defining a custom password for the database users, avoid using the `$` special character because it gets interpreted by the shell and will ultimately set the incorrect desired password. 
+When defining a custom password for the database users, avoid using the `$` special character because it gets interpreted by the shell and will ultimately set an incorrect password. ==== . To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands: @@ -105,6 +105,8 @@ Replace instances of PROVIDE_ with your specific configuration bucketBaseName: "xray-source" ---- + +Save the values-global.yaml file and commit it to your branch: + [source,terminal] ---- $ git add values-global.yaml @@ -137,7 +139,7 @@ Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path: .Verification -To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required. +To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make any required updates. You must review the following `values*` files before deploying the {med-pattern}: diff --git a/modules/med-setup-aws-s3-bucket-with-utilities.adoc b/modules/med-setup-aws-s3-bucket-with-utilities.adoc index ad722fae1..4291111a6 100644 --- a/modules/med-setup-aws-s3-bucket-with-utilities.adoc +++ b/modules/med-setup-aws-s3-bucket-with-utilities.adoc @@ -28,5 +28,5 @@ $ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us image:/videos/bucket-setup.svg[Bucket setup] -Make a note of the name and the URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:` +Make a note of the name and the URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:`. 
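The `$` restriction in the password note above is a plain shell-quoting issue, and it is easy to demonstrate. The following is an illustrative sketch (the password value is made up): inside double quotes the shell treats `$ecretpw` as a variable reference and expands it, silently mangling the intended value.

```shell
# Inside double quotes the shell expands $ecretpw as a (likely unset)
# variable, truncating the intended password. Single quotes keep the
# $ literal.
unset ecretpw
good='my$ecretpw'    # single quotes: stored literally as my$ecretpw
bad="my$ecretpw"     # double quotes: $ecretpw expands to nothing
echo "good=$good"
echo "bad=$bad"
```

Running this prints `good=my$ecretpw` but `bad=my`, which is exactly the kind of silent corruption the note warns about.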
diff --git a/modules/med-troubleshooting-deployment.adoc b/modules/med-troubleshooting-deployment.adoc index bc226ae64..a0a058df6 100644 --- a/modules/med-troubleshooting-deployment.adoc +++ b/modules/med-troubleshooting-deployment.adoc @@ -2,7 +2,7 @@ :imagesdir: ../../../images [id="troubleshooting-the-pattern-deployment-troubleshooting"] -=== Troubleshooting the pattern deployment += Troubleshooting the pattern deployment Occasionally the pattern will encounter issues during the deployment. This can happen for any number of reasons, but most often it is because of either a change within the operator itself or something has changed in the {olm-first} which determines which operators are available in the operator catalog. Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation. To ensure that the operator is in the catalog, run the following command: @@ -23,7 +23,7 @@ Use the grafana dashboard to assist with debugging and identifying the issue ''' Problem:: No information is being processed in the dashboard -Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*; +Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*: + [source,terminal] ---- @@ -97,7 +97,7 @@ MariaDB [xraylabdb]> show tables; 3 rows in set (0.000 sec) ---- + -. Verify the password set in the `values-secret.yaml` is working +. 
Verify that the password set in the `values-secret.yaml` file is working: + [source,terminal] ---- diff --git a/modules/med-viewing-grafana-dashboard.adoc b/modules/med-viewing-grafana-dashboard.adoc index d99e013cc..baf9657bc 100644 --- a/modules/med-viewing-grafana-dashboard.adoc +++ b/modules/med-viewing-grafana-dashboard.adoc @@ -16,16 +16,17 @@ image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw + image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"] -. Turn on the image file flow. There are three ways to go about this. +. Turn on the image file flow. There are three methods to do this. + -You can go to the command-line (make sure you have KUBECONFIG set, or are logged into the cluster. +-- +* Method 1: Go to the command line and log in to the cluster. Ensure you have exported the `KUBECONFIG` file. + [source,terminal] ---- $ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1 ---- + -Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the `xraylab-1` project. +* Method 2: Go to the {opc} web console and change the view from the *Administrator* perspective to the *Developer* perspective and select *Topology*. From there, select the `xraylab-1` project. + image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"] + @@ -33,16 +34,17 @@ Right-click on the `image-generator` pod icon and select `Edit Pod count`. + image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"] + -Up the pod count from `0` to `1` and save. +Increase the pod count from `0` to `1` and save. + image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"] + -Alternatively, you can have the same outcome on the Administrator console. +* Method 3: Go to the {opc} web console and change to the *Administrator* perspective. 
+ -Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project `xraylab-1`. +Under *Workloads*, select *DeploymentConfigs* for *Project:xraylab-1*. Click `image-generator` and increase the pod count to 1. + image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"] +-- [id="customizing-dashboard"] == Customizing the dashboard From b1913b37dc90d39b678a56454190891b022e1341 Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Fri, 17 Nov 2023 19:46:47 +0000 Subject: [PATCH 6/7] trying to fix the htmltest errors --- content/blog/2021-12-31-medical-diagnosis.md | 2 +- content/patterns/ansible-edge-gitops/installation-details.md | 2 +- content/patterns/medical-diagnosis/_index.adoc | 2 +- content/patterns/medical-diagnosis/med-troubleshooting.adoc | 1 - modules/med-deploying-med-diag-pattern.adoc | 2 +- modules/med-preparing-for-deployment.adoc | 2 +- 6 files changed, 5 insertions(+), 6 deletions(-) diff --git a/content/blog/2021-12-31-medical-diagnosis.md b/content/blog/2021-12-31-medical-diagnosis.md index a628a2086..43de51bcb 100644 --- a/content/blog/2021-12-31-medical-diagnosis.md +++ b/content/blog/2021-12-31-medical-diagnosis.md @@ -30,7 +30,7 @@ For a recorded demo deploying the pattern and seeing the dashboards available to --- -To deploy this pattern, follow the instructions outlined on the [getting-started](https://validatedpatterns.io/medical-diagnosis/med-getting-started/) page. +To deploy this pattern, follow the instructions outlined on the [Getting started](/patterns/medical-diagnosis/med-getting-started/) page. ### What's happening? 
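The Makefile module earlier in this series describes how `make` fails fast when `values-secret.yaml` is missing or malformed. A minimal pre-flight check in the same spirit can be sketched as follows. This is a hypothetical helper, not the actual Makefile logic, and the top-level `secrets:` key it greps for is an assumption about the file layout:

```shell
# Hypothetical pre-flight check mirroring the documented behavior:
# fail early if the secrets file is missing or not minimally well-formed.
check_secrets() {
  if [ ! -f "$1" ]; then
    echo "ERROR: $1 not found"
    return 1
  fi
  # Rough format check: require an assumed 'secrets:' top-level key.
  if ! grep -q '^secrets:' "$1"; then
    echo "ERROR: $1 has no top-level 'secrets:' key"
    return 2
  fi
  echo "OK: $1 looks usable"
}

# Demo against a throwaway file rather than a real values-secret.yaml.
tmp=$(mktemp)
printf 'secrets:\n  xraylabdb:\n    db-password: example\n' > "$tmp"
check_secrets "$tmp"
rm -f "$tmp"
```

A check like this catches the "file does not exist" and "file is improperly formatted" failure modes before any cluster work starts.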
diff --git a/content/patterns/ansible-edge-gitops/installation-details.md b/content/patterns/ansible-edge-gitops/installation-details.md index 792629e37..86ffe5b7e 100644 --- a/content/patterns/ansible-edge-gitops/installation-details.md +++ b/content/patterns/ansible-edge-gitops/installation-details.md @@ -93,7 +93,7 @@ OpenShift GitOps is central to this pattern as it is responsible for installing # ODF (OpenShift Data Foundations) -ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/getting-started/) for details on the Medical Edge pattern's use of storage). +ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/med-getting-started/) for details on the Medical Edge pattern's use of storage). Please note that this chart will create a Noobaa S3 bucket named nb.epoch_timestamp.cluster-domain which will not be destroyed when the cluster is destroyed. 
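The installation-details note above says the chart creates a Noobaa S3 bucket named `nb.epoch_timestamp.cluster-domain` that is not destroyed with the cluster. A small sketch of how one might flag such leftovers by name; the bucket list is canned here, since a real check would pull names from the cloud provider's CLI:

```shell
# Sketch: spot leftover Noobaa buckets by the nb.<epoch>.<cluster-domain>
# naming convention described above. The sample names are made up.
is_noobaa_bucket() {
  echo "$1" | grep -Eq '^nb\.[0-9]+\..+'
}

for b in nb.1700000000.hub.example.com xray-source mytest-bucket; do
  if is_noobaa_bucket "$b"; then
    echo "leftover candidate: $b"
  fi
done
```

Only the first name matches the convention, so only it is reported as a cleanup candidate.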
diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index ac498c674..77c5cfba8 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -32,4 +32,4 @@ include::modules/med-architecture-schema.adoc[leveloffset=+1] [id="next-steps_med-diag-index"] == Next steps -* link:med-getting-started[Deploy the pattern] \ No newline at end of file +* link:med-getting-started/#med-deploy-pattern[Deploying the Medical Diagnosis pattern] \ No newline at end of file diff --git a/content/patterns/medical-diagnosis/med-troubleshooting.adoc b/content/patterns/medical-diagnosis/med-troubleshooting.adoc index d3e7fcef0..8b5ce7c58 100644 --- a/content/patterns/medical-diagnosis/med-troubleshooting.adoc +++ b/content/patterns/medical-diagnosis/med-troubleshooting.adoc @@ -7,7 +7,6 @@ aliases: /medical-diagnosis/med-troubleshooting/ :toc: :imagesdir: /images :_content-type: REFERENCE - include::modules/comm-attributes.adoc[] include::modules/med-about-makefile.adoc[leveloffset=+1] diff --git a/modules/med-deploying-med-diag-pattern.adoc b/modules/med-deploying-med-diag-pattern.adoc index 23910093a..0ce0db2ad 100644 --- a/modules/med-deploying-med-diag-pattern.adoc +++ b/modules/med-deploying-med-diag-pattern.adoc @@ -1,7 +1,7 @@ :_content-type: PROCEDURE :imagesdir: ../../../images -[id="med-deploy-pattern_{context}"] +[id="med-deploy-pattern"] = Deploying the {med-pattern} . 
To apply the changes to your cluster, run the following command: diff --git a/modules/med-preparing-for-deployment.adoc b/modules/med-preparing-for-deployment.adoc index ce060c363..0c42ca9a0 100644 --- a/modules/med-preparing-for-deployment.adoc +++ b/modules/med-preparing-for-deployment.adoc @@ -106,7 +106,7 @@ Replace instances of PROVIDE_ with your specific configuration ---- + Save the values-global.yaml file and commit it to your branch: - ++ [source,terminal] ---- $ git add values-global.yaml From d14ae641cb09ef4073a6d961bba901f4f115298b Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Fri, 17 Nov 2023 19:57:12 +0000 Subject: [PATCH 7/7] trying a setting for the cannonical links by updating the htmltest.yml --- .htmltest.yml | 4 +++- modules/med-ocp-cluster-sizing.adoc | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/.htmltest.yml b/.htmltest.yml index ddb5b5cdc..f7eeb2f0b 100644 --- a/.htmltest.yml +++ b/.htmltest.yml @@ -1,2 +1,4 @@ DirectoryPath: public/ -IgnoreDirectoryMissingTrailingSlash: true \ No newline at end of file +IgnoreDirectoryMissingTrailingSlash: true +IgnoreCanonicalBrokenLinks: false +TestFilesConcurrently: true \ No newline at end of file diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc index f80bfd95a..c26e70b58 100644 --- a/modules/med-ocp-cluster-sizing.adoc +++ b/modules/med-ocp-cluster-sizing.adoc @@ -10,7 +10,7 @@ For {med-pattern}, the OpenShift cluster size must be larger than a standard clu The minimum requirements for an {ocp} cluster depend on your installation platform, for example: -* For AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS] +* For AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS]. 
* For bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].
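Much of the troubleshooting guidance in this series reduces to "fix the input, then re-run `make upgrade`". When scripting that loop, a bounded retry wrapper is a common shape. This is a generic sketch with a stand-in step, not part of the actual pattern tooling:

```shell
# Generic bounded retry, as one might wrap around a command such as
# `./pattern.sh make upgrade`. `deploy_step` below is a stand-in.
retry() {
  max="$1"; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts"
      return 1
    fi
    attempt=$((attempt + 1))
    echo "retrying (attempt $attempt)..."
  done
  echo "succeeded on attempt $attempt"
}

# Stand-in step that fails twice, then succeeds, to exercise the loop.
count_file=$(mktemp)
echo 0 > "$count_file"
deploy_step() {
  n=$(cat "$count_file"); n=$((n + 1)); echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 5 deploy_step
```

With the stand-in step, the wrapper retries twice and then reports success on the third attempt; with a persistent failure it gives up after the configured maximum.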