diff --git a/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc b/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
index bbfa15941..be56488cf 100644
--- a/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
+++ b/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
@@ -58,11 +58,11 @@ spec:
 <1> The workflow defines an input schema
 <2> The workflow requires an OpenAPI specification file to make a REST invocation
 
-The `Hello Service` workflow int the example offer two options. You can either create two `ConfigMaps`, each for one file, to have a clear separation of concerns or group them into one.
+The `Hello Service` workflow in the example offers two options. You can either create two `ConfigMaps`, each for one file, to have a clear separation of concerns, or group them into one.
 
 From the operator perspective, it won't make any difference since both files will be available for the workflow application at runtime.
 
-To make it simple, you can create only one `ConfigMap`. Navigate into the directory where your resources files are available and create the config map using following command:
+To make it simple, you can create only one `ConfigMap`. Navigate into the directory where your resource files are available and create the config map using the following command:
 
 .Creating a ConfigMap from the current directory
 [source,bash,subs="attributes+"]
 ----
@@ -81,7 +81,7 @@ You should have a `ConfigMap` with two data entries similar to this one:
 [source,yaml,subs="attributes+"]
 ----
 kind: ConfigMap
-apiVersion: v1w
+apiVersion: v1
 metadata:
   name: service-files
 data:
diff --git a/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc b/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
index 8e5368fee..a4aef68d2 100644
--- a/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
+++ b/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
@@ -6,7 +6,7 @@
 // links
 :kogito_serverless_operator_url: https://github.com/apache/incubator-kie-kogito-serverless-operator/
 
-Uder the hood, {operator_name} supports several services that ehnance its capabilities. For example
+Under the hood, {operator_name} supports several services that enhance its capabilities. For example,
 xref:data-index/data-index-core-concepts.adoc[Data Index] or xref:job-services/core-concepts.adoc[Job service].
 Please take a look at these guides to learn more about them.
 
@@ -19,7 +19,7 @@ By default, workflows deployed by the operator use an embedded version of xref:d
 .Prerequisites
 * The {operator_name} installed. See xref:cloud/operator/install-serverless-operator.adoc[] guide
-* A postgresql database. Required when you are targetting non-embedded postgresql versions of supporting services. We recommend creating a postgresql deployment in your cluster. Please note your credentials.
+* A postgresql database. Required if you plan to use non-embedded postgresql versions of the supporting services. We recommend creating a postgresql deployment in your cluster. Take note of your credentials.
 
 [#deploy-supporting-services]
 == Deploy supporting services
@@ -27,7 +27,7 @@ By default, workflows deployed by the operator use an embedded version of xref:d
 [#deploy-data-index-service]
 === Data Index
 
-You can deploy Data Index via `SonataFlowPlatform` configuration. The operator will then configure all new workflows, with the "preview" profile, to use that Data Index.
+You can deploy Data Index via `SonataFlowPlatform` configuration. The operator will then configure all new workflows, with the "preview" or "gitops" profile, to use that Data Index.
 
 Following is a basic configuration. It will deploy an ephemeral Data Index to the same namespace as the `SonataFlowPlatform`.
@@ -96,7 +96,7 @@ spec:
     image: <5>
 ----
 
-<1> Determines whether "preview" profile workflows should be configured to use this service, defaults to `true`
+<1> Determines whether "preview" or "gitops" profile workflows should be configured to use this service, defaults to `true`
 <2> Secret key of your postgresql credentials user, defaults to `POSTGRESQL_USER`
 <3> PostgreSql JDBC URL
 <4> Number of Data Index pods, defaults to `1`
@@ -104,7 +104,7 @@
 [#deploy-job-service]
-=== Jobs Service
+=== Job Service
 
 You can deploy Job Service via `SonataFlowPlatform` configuration. The operator will then configure all new workflows, with the "preview" profile, to use that Job Service.
@@ -175,7 +175,7 @@ spec:
     image: <5>
 ----
 
-<1> Determines whether "preview" profile workflows should be configured to use this service, defaults to `true`
+<1> Determines whether "preview" or "gitops" profile workflows should be configured to use this service, defaults to `true`
 <2> Secret key of your postgresql credentials user, defaults to `POSTGRESQL_USER`
 <3> PostgreSql JDBC URL
 <4> Number of Job Service pods, defaults to `1`
diff --git a/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc b/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
index ec91f132d..f3fc0f34a 100644
--- a/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
+++ b/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
@@ -84,7 +84,7 @@ The following table lists the possible `Conditions`.
 |===
 
-In normal conditions, the Workflow will transition from `Running` to `WaitingForDeployment`and to `Running` condition. In case something wrong happens, consult the section xref:cloud/operator/developing-workflows.adoc#troubleshooting[workflow troubleshooting in development mode].
+Under normal conditions, the Workflow will transition from `Running` to `WaitingForDeployment` and to the `Running` condition. If something goes wrong, consult the section xref:cloud/operator/developing-workflows.adoc#troubleshooting[Workflow Troubleshooting in development mode].
 
 == Preview Profile Conditions
diff --git a/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc b/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
index 6979fe367..9155e6aaf 100644
--- a/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
+++ b/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
@@ -17,7 +17,7 @@ start the development on your local machine using our guides.
 . Install link:{minikube_start_url}[minikube] or link:{kind_install_url}[kind].
 . Install link:{kubectl_install_url}[Kubernetes CLI].
 . Install link:{knative_quickstart_url}[Knative using quickstart]. This will also set up Knative Serving and Eventing for you and the cluster should be running.
-. Install the xref:cloud/operator/install-serverless-operator.adoc#_sonataflow_operator_manual_installation[{product_name} operator manually].
+. Install the xref:cloud/operator/install-serverless-operator.adoc#_sonataflow_operator_manual_installation[{operator_name} manually].
 . Install xref:testing-and-troubleshooting/kn-plugin-workflow-overview.adoc[Knative Workflow CLI].
 . Install link:{visual_studio_code_url}[Visual Studio Code] with link:{visual_studio_code_swf_extension_url}[our extension] that simplifies development of workflows by providing visual aids and auto-complete features.
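
For reference, a minimal sketch of the ConfigMap creation step referenced in the referencing-resource-files hunks above. The actual command is not shown in this diff; the ConfigMap name `service-files` is taken from the example output, and `<target-namespace>` is a placeholder you must replace.

.Creating a ConfigMap from the current directory (illustrative sketch)
[source,bash]
----
# Sketch only: bundles every file in the current directory (for example the input
# schema and the OpenAPI specification) into one ConfigMap named "service-files".
# <target-namespace> is a placeholder for the namespace where the workflow runs.
kubectl create configmap service-files --from-file=. -n <target-namespace>
----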
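Similarly, a minimal `SonataFlowPlatform` sketch for the supporting-services hunks. The field names under `spec.services` and the resource name are assumptions inferred from the callouts in the diff; verify them against the `SonataFlowPlatform` CRD installed by your operator version before use.

.Enabling ephemeral supporting services (illustrative sketch, field names assumed)
[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform   # assumed name
spec:
  services:
    dataIndex:
      enabled: true   # matches callout <1>: opt "preview"/"gitops" workflows into the service
    jobService:
      enabled: true   # same toggle, here for the Job Service
----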
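Finally, a rough sketch of the environment-preparation sequence from the preparing-environment hunk, assuming the `kn` CLI with the quickstart and workflow plugins is already installed; the linked guides remain the authoritative instructions.

.Local environment bootstrap (illustrative sketch)
[source,bash]
----
kn quickstart minikube                # creates a minikube cluster with Knative Serving and Eventing
kubectl get pods -n knative-serving   # sanity check that Knative came up
kn workflow --help                    # confirms the Knative Workflow CLI plugin is available
----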