diff --git a/Dockerfile b/Dockerfile index 7d3d2d0..f70b269 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,6 @@
-# docker build -t zeebe-cherry-officepdf:1.0.0 .
+# docker build -t pierre-yves-monnet/processautomator:1.5.0 .
+# JDK 17: openjdk:17-alpine
+# JDK 21: alpine/java:21-jdk
FROM openjdk:17-alpine
EXPOSE 9081
COPY target/process-execution-automator-*-exec.jar /app.jar
diff --git a/README.md b/README.md index 66879c8..865a1cd 100644 --- a/README.md +++ b/README.md @@ -87,6 +87,9 @@ The flow scenario has a duration and objective to verify.
You can specify objectives: produce 1000 Process Instances, end 500 process instances, and produce 300 tasks in a user task.
+The method for conducting a [Load Test](doc/howRunLoadTest/README.md) is available here.
+
+
Visit [Load Test Scenario](doc/loadtestscenario/README.md) and the [Load test Tutorial](doc/loadtestscenario/Tutorial.md)
## Scenario @@ -106,7 +109,7 @@
The scenario does not contain any server information. It has only the server's name.
Process-Automator references a list of servers in the configuration in multiple ways:
* serverConnection : String, containing a list of connections separated by ;
* serverList: List of records.
-* camunda7 : information to connnect a Camunda 7 server
+* camunda7 : information to connect a Camunda 7 server
* camunda8 : information to connect a Camunda 8 server
* camunda8Saas: information to connect a Camunda 8 Saas server
@@ -185,6 +188,7 @@ The application runs only these roles. Doing that, in a cluster, it's possible to
## server connection
+
### String connection
The string contains a list of connections, separated by a semicolon (";").
@@ -215,18 +219,21 @@ The following parameters depend on the type.
**CAMUNDA_8_SAAS**
-* zeebeCloudRegion,
-* zeebeCloudClusterId,
-* zeebeCloudClientId,
-* zeebeCloudOAuthUrl,
-* zeebeCloudAudience,
+* zeebeSaasRegion,
+* zeebeSaasClusterId,
+* zeebeSaasClientId,
* clientSecret,
-* OperateUserName,
-* OperateUserPassword,
-* OperateUrl,
+* zeebeAudience,
+* OperateClientId,
+* OperateClientPassword,
+* TaskClientId,
+* TaskClientSecret,
* ExecutionThreads,
* MaxJobActive
+
+
+
**Example**
````yaml
@@ -323,4 +330,39 @@
mvn spring-boot:build-image
The docker image is built using the Dockerfile present at the root level.
-Push the image to ghcr.io/camunda-community-hub/process-execution-automator:
+
+Push the image to
+```
+ghcr.io/camunda-community-hub/process-execution-automator:
+```
+
+## Detail
+
+Run the command
+````
+mvn clean install
+````
+Now, create a docker image
+````
+docker build -t pierre-yves-monnet/processautomator:1.5.2 . 
+
+````
+
+Push the image to the Camunda hub (you must be logged in to the docker registry first)
+
+````
+docker tag pierre-yves-monnet/processautomator:1.5.2 ghcr.io/camunda-community-hub/process-execution-automator:1.5.2
+docker push ghcr.io/camunda-community-hub/process-execution-automator:1.5.2
+
+````
+
+
+Tag as the latest:
+````
+docker tag pierre-yves-monnet/processautomator:1.5.2 ghcr.io/camunda-community-hub/process-execution-automator:latest
+docker push ghcr.io/camunda-community-hub/process-execution-automator:latest
+````
+
+Check on
+https://github.com/camunda-community-hub/process-execution-automator/pkgs/container/process-execution-automator
diff --git a/doc/howRunLoadTest/README.md b/doc/howRunLoadTest/README.md new file mode 100644 index 0000000..52cfb23 --- /dev/null +++ b/doc/howRunLoadTest/README.md @@ -0,0 +1,635 @@
+# How to conduct a load test
+
+# Introduction
+
+The Zeebe engine has different parameters that drive its performance.
+
+* The number of partitions is a primary parameter.
+
+Process instances are distributed across partitions.
+The more partitions you have, the more cases you can handle simultaneously.
+However, having too many partitions introduces a delay in service tasks: to search for a job, workers must address all partitions.
+
+* The cluster size, which is the number of pods you create to host the Zeebe engine
+* The service tasks: how many workers do you need? More workers increase the throughput,
+  but they also increase the network load on the Zeebe Gateway and then on the Zeebe engine.
+
+* Data is exported to ElasticSearch and reindexed by Operate to display it.
+  Multiple ElasticSearch and Operate pods may be needed.
+
+The best way to find the correct configuration is to simulate the process load. The peak load must be used, because changing the number of partitions afterward is impossible. Downsizing the number of nodes (but not the partition count or cluster size) remains possible.
+
+This is why identifying the goal is crucial.
+
+# Identify the goal
+
+The goal must be identified carefully to cover the next three years, plus a margin.
+Keep in mind that it is not possible at the moment to change the main parameters of a cluster (partitions, cluster size).
+Refrain from inflating the goal: you may end up with a large cluster, which is costly when unnecessary.
+
+Let's take an example.
+
+A system may peak at 1,800 service tasks/second for 30 minutes, but the average for the rest of the day is about 800 service tasks/second.
+
+Does it make sense to configure the system to manage the peak, or to absorb it over 2 or 3 hours?
+
+For example, take a cluster managing 1,000 tasks/second. The number of extra tasks not handled by this cluster during the peak is
+
+```
+(1800-1000) * 60 * 30 = 1,440,000
+```
+
+How long will it take to absorb these tasks? A cluster able to manage 1,000 service tasks per second leaves a bandwidth of 1000-800 = 200 tasks/second.
+
+So, absorbing the backlog takes 1,440,000 (service tasks) / 200 (service tasks/s) / 60 (seconds/mn) = 120 minutes.
+
+The cluster will be able to absorb the peak in 2 hours and will be 1800/1000 = 1.8 times smaller.
+
+
+
+## Number of process instances completed per period
+
+The number of process instances created and executed in a period.
+
+Example:
+* 10,000 process instances per day, spread regularly over 20 hours
+
+* Or 4,000 process instances per day, with a peak of 120 process instances per second to absorb over a 2-hour duration
+
+If the platform must absorb the peak, then the throughput at the peak is your goal. 
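+To replay this arithmetic with other numbers, here is a minimal sketch in Java (the figures are the hypothetical ones from the example above, not measurements):
+
+````java
+public class PeakAbsorption {
+  public static void main(String[] args) {
+    int peakRate = 1800;         // service tasks/s during the peak
+    int clusterRate = 1000;      // service tasks/s the cluster can sustain
+    int averageRate = 800;       // service tasks/s during the rest of the day
+    int peakDurationS = 30 * 60; // 30-minute peak
+
+    // Tasks accumulated during the peak: (1800-1000) * 1800 s = 1,440,000
+    long backlog = (long) (peakRate - clusterRate) * peakDurationS;
+    // Spare bandwidth after the peak: 1000-800 = 200 tasks/s
+    int spare = clusterRate - averageRate;
+    // Time to absorb the backlog: 1,440,000 / 200 / 60 = 120 minutes
+    long minutes = backlog / spare / 60;
+    System.out.println(backlog + " tasks, absorbed in " + minutes + " minutes");
+  }
+}
+````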
+
+
+## Latency
+
+The latency is the time for one process instance to be completed. The goal may be to create and complete 120 process instances per second,
+with 95% of the completions done in under 4 minutes.
+A platform can sustain this throughput while individual executions take longer, due to the batch mechanism. Meeting a tight latency objective generally means
+increasing the platform resources to ensure every execution runs as fast as possible.
+
+
+## Replication factor
+
+The replication factor determines the number of brokers that are updated in real time. For example, with a replication factor of 3, 3 copies of the data are updated in real time.
+
+According to the Raft protocol, the system continues to process as long as a quorum of replicas is reachable. With a replication factor of 3,
+```
+quorum = 3/2 + 1 = 2
+```
+So, if one broker dies, the system continues to work. If a second broker dies, data is safe, but the system will pause, waiting for a broker to restart.
+
+Is this solution acceptable? To reduce the risk, multiple data centers can be used, with a broker placed in each one.
+Losing one data center is then acceptable.
+One customer may consider the risk of two data centers being down simultaneously extremely rare and acceptable, while another may decide not to take this risk.
+Then, a replication factor of 5 may be better.
+
+A factor of 4 has no advantage: the quorum is 4/2+1=3, so the system pauses if two brokers are down. With 5, the quorum is 5/2+1=3, so two brokers can be down.
+Using multiple regions and a replication factor of 3 can be reasonable, too. With 2 regions, 2 brokers will be in the same region, so if that region dies, the cluster pauses.
+To reach the quorum again, a new broker can be started in the surviving region, or three regions can be used, with one broker in each region.
+
+This strategy impacts performance. Communication between regions (or between data centers) is generally slower.
+A partition may then handle only, say, 120 service tasks per second instead of 150. Thus, more partitions are needed to handle the same throughput.
+
+## Machine, memory
+
+A Zeebe server is not very CPU- or memory-consuming, but this aspect must be taken into account in the load test.
+The advice here is to target the maximum environment the customer can use to size the platform. Then, if the platform is less used over time, it's possible to downsize it.
+
+
+
+# Inputs
+
+The following inputs are mandatory to simulate the Load:
+
+* The different processes running on the platform
+
+* The Load on each process: how many process instances to create per period
+
+* The path in each process: a process may have a decision gateway, where one path contains 3 service tasks and the second 20 service tasks.
+  Which path do process instances follow? 20% on the first path and 80% on the second?
+
+* Do you have iterations? How many iterations on average?
+
+
+* Data is essential. The more data you carry, the more pressure you put on the Zeebe engine to manipulate and store it.
+
+* The service tasks and their execution times. This is important to determine the number of workers.
+  For example, a task may be called 2000 times per minute, but its execution requires 1 minute.
+  The workload is then 2000 * 1 mn of work every minute: 2,000 worker threads are necessary (for example, 20 worker pods with 100 threads each) to address that.
+  The more worker pods in the platform, the more pressure you put on the Zeebe engine, and you may need more partitions to absorb this pressure. 
+
+
+With this information, a platform test can be set up, and a tool can load the platform. Service tasks can be simulated.
+
+
+# Worker implementation
+
+When you implement a service task worker, you can set up multiple threads and ask for several jobs at a time.
+
+Let's say you set up, for the service task "credit-charging", 3 threads and 3 simultaneous jobs. This service task takes 1 to 5 seconds to answer.
+
+Visit https://docs.camunda.io/docs/components/best-practices/development/writing-good-workers/#blocking--synchronous-code-and-thread-pools
+
+## Synchronous
+In synchronous mode, the work is done inside the handle() method.
+Doing that, the Zeebe client:
+* requests 3 jobs
+* dispatches the 3 jobs to 3 different threads (calling the handle() method)
+* waits until the 3 jobs are finished before asking for a new batch of 3 jobs.
+If the execution time varies between 1 and 5 seconds, the client waits for the longest execution before asking for the next jobs: it waits 5 seconds. So, the pod may show low CPU usage.
+
+Some threads are idle while a new job is requested, so the worker does not run at 100% efficiency.
+* To get 100% efficiency, use only one thread.
+* Or request a batch larger than the number of threads: thread 1 picks up a new job when it finishes. The issue remains at the end of the batch.
+
+## Synchronous Thread (reactive programming)
+
+A solution involves creating a new thread in the handle() method to do the work. All handle() methods then return quickly, and the Zeebe client asks for a new batch.
+
+https://blog.bernd-ruecker.com/writing-good-workers-for-camunda-cloud-61d322cad862
+
+By doing that, there is no limitation if one job requires more time: the Java client immediately asks Zeebe for new jobs.
+Many threads are created, and the main issue is overflowing the Java machine. If the treatment sends a request to an external service and keeps one thread to capture the answer, this is acceptable. However, if the treatment is a Java execution, this method can overflow the Java machine.
+
+## Synchronous Limited Thread
+
+The idea is to control the number of threads that can run simultaneously, removing the main issue of the previous implementation. A java.util.concurrent class (a semaphore) is used to manage a limited number of tokens. The handle() method must first get a token before creating a new thread. If it can't get one, it waits, and the handle() method is frozen. The Zeebe client then stops requesting new jobs from Zeebe.
+
+## Asynchronous call
+
+The asynchronous call sends the complete() feedback before the worker executes the treatment.
+This implementation has these aspects:
+* The process execution is faster: the task is immediately released, and the process instance can advance to the next step, even if the treatment is not yet performed.
+* Because the feedback is sent before the treatment, returning any values is impossible: the process instance has already advanced, and it may even be finished.
+* If the treatment faces an issue, it is not possible to send an error or ask for a retry.
+* The feedback is sent immediately, but the Zeebe client still does not ask for new jobs until all handle() methods are finished.
+The asynchronous call is an option for implementing a worker, but the number of concerns is significant, and this implementation is not recommended.
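+As an illustration of the Synchronous Limited Thread pattern described above, here is a minimal sketch with the Zeebe Java client. The gateway address, job type, token count, and doTheWork() method are hypothetical placeholders:
+
+````java
+import io.camunda.zeebe.client.ZeebeClient;
+import java.util.Map;
+import java.util.concurrent.Semaphore;
+
+public class LimitedThreadWorker {
+  // Hypothetical limit: at most 50 treatments run at the same time
+  private static final Semaphore TOKENS = new Semaphore(50);
+
+  public static void main(String[] args) {
+    ZeebeClient client = ZeebeClient.newClientBuilder()
+        .gatewayAddress("localhost:26500") // assumption: local gateway
+        .usePlaintext()
+        .build();
+    client.newWorker()
+        .jobType("credit-charging") // hypothetical job type
+        .handler((jobClient, job) -> {
+          // Blocks when no token is available: handle() freezes, so the
+          // Zeebe client stops requesting new jobs until a treatment ends.
+          TOKENS.acquire();
+          new Thread(() -> {
+            try {
+              doTheWork(job.getVariablesAsMap()); // placeholder treatment
+              jobClient.newCompleteCommand(job.getKey()).send().join();
+            } finally {
+              TOKENS.release();
+            }
+          }).start();
+        })
+        .maxJobsActive(50)
+        .open();
+  }
+
+  private static void doTheWork(Map<String, Object> variables) {
+    // simulate the treatment
+  }
+}
+````
+
+The acquire()/release() pair is what turns job polling into a natural throttle: when all tokens are busy, polling pauses by itself.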
+## Conclusion
+The synchronous implementation is simple and should handle 80% of the use cases.
+If the number of tasks to execute is significant, and the treatment time may vary from one task to another, then the Synchronous Limited Thread is the best option. Even when all treatments take the same time, this implementation increases efficiency by about 30%; to handle a given throughput, the number of pods can then be reduced by 30%. The limited implementation is the best because the non-limited one can end up with a pod whose resources are exhausted, and a local test can hide the issue when the number of tasks is not significant.
+
+# Tooling
+
+It's essential to be able to run a load test on demand.
+Your platform is up and running. You deployed the process on it. How do you run a load test?
+Two tools exist:
+
+## C8 Benchmark
+This tool creates process instances on the flow and increases the creation frequency: it starts creating 10 process instances per second, increases the frequency to 15, then 20, and so on. The goal is to find the maximum Load the platform can handle at one moment, when performance reaches a plateau. With this tool, you can simulate some service tasks, but the simulations are in "synchronous thread mode."
+
+## Process Automator
+This tool is used to make the load test as close as possible to reality. You create a scenario, asking to create 10 process instances per second, or with a different frequency, for example, 10 process instances every 15 minutes.
+It can simulate users, executing user tasks at a given frequency.
+There are different ways to simulate a service task: synchronously, using a synchronous thread, or using the synchronous limited thread mode. Variables can be updated during the service task.
+It is possible to execute multiple scenarios simultaneously to load various processes with different frequencies.
+Last, the tool can simulate a platform: do you want to set up 100 pods running the service task "credit-charging" and 200 pods running "customer-credit"? The tool accepts a configuration describing this deployment. Then, you can see how Zeebe reacts when 300 pods request jobs via the gateway, and size the gateway component correctly. In the same way, you can select the number of threads of each simulated service task.
+
+
+# Principle
+Change one parameter at a time: if multiple parameters change between two executions, the effect of each one cannot be understood.
+
+Choose a reasonable duration for a test. Calculate the time to execute one process instance: for example, adding up the time of all service tasks, a process instance needs 4 minutes.
+This implies a warmup of 4 minutes minimum; adding 20% gives 5 minutes.
+With Zeebe, when a worker has nothing to do, it sleeps a little before asking again. This means that when tasks arrive, nothing may be picked up for 10 or 20 seconds.
+When multiple workers listen to a service task, enough tasks must arrive to wake up all workers. This is why, during the warmup, the task throughput increases slowly.
+To measure the throughput, this warmup period must be over.
+
+Using too long a duration is counterproductive: you reduce the number of tests you can run.
+
+In the end, when you reach the expected throughput, run an extended test (1 or 2 hours); some issues become visible only after some time.
+If Operate's importer does not keep up, it's hard to detect in a short run.
+If the cluster is built on a resource with unstable performance, the issue may not be visible in a 10-minute test.
+
+# Warmup
+When a process instance is created, the Zeebe engine processes it. 
In general, the process has a service task.
+Say 100 process instances are created every 15 seconds: then, 100 jobs are created.
+
+A long polling strategy is used to reduce the amount of communication between workers and the cluster. The worker contacts the cluster, and if there is nothing to process, the request is held. After a timeout, the cluster returns the answer to the worker: there is nothing to do.
+Then, the worker uses a back-off strategy: it makes no sense to ask again immediately, so it sleeps for a few seconds before asking.
+On the second call, if there is still nothing to do, it doubles the waiting time before asking again.
+
+When jobs arrive, some time may pass before a worker picks them up.
+With multiple workers, the first batch may wake up only a few of them (the first worker may capture the first 100 jobs).
+
+A process may have multiple service tasks with the same strategy. The first instance may need 2 minutes to arrive at the end of the process, but under Load, when all workers are up and running, only 20 seconds are required.
+
+Here comes the goal: do you want to verify a complete load batch (for example, the time to process 13,000 process instances, wake-up included),
+or a steady load, analyzing 10 minutes out of a load period of more than 6 hours daily?
+
+In the second case, it's important to warm up the platform before analyzing the result. In our example, the first two minutes are not representative.
+
+You can calculate the warmup time or watch a metric (process instances are completed, and the "completed process instances" curve starts and reaches a plateau).
+
+
+# How to start
+
+A technical estimation can be done to run a first load test.
+
+A cluster has multiple parameters, and the main ones are
+* The number of partitions
+* the cluster size
+* The replication factor
+
+## replication factor
+The replication factor comes directly from the goal. So, fix it and forget it.
+
+## cluster size
+The cluster size is the number of brokers, i.e., the number of pods in the cluster.
+
+The primary advice on the `cluster size` is to link it to the `partition number`. Why?
+
+The number of partitions determines the number of Leaders. Each partition has one Leader and (replication factor - 1) Followers.
+If the cluster size = number of partitions, then there is one Leader on each broker.
+Of course, this may change: a broker can die, and then a new Leader is elected, which means one broker hosts two Leaders at that moment.
+But this is not the nominal behavior, and the cluster is then in a degraded mode, so performance may be slower.
+If this is not acceptable, the cluster size can be larger than the number of partitions, but even then, you don't know for sure that a broker does not host two Leaders.
+If you want this guarantee, you must set
+```
+cluster size = number of partitions * replication factor
+```
+
+Having cluster size > number of partitions means some brokers host only followers.
+During execution, a follower only records what the Leader streams (it saves the data), which does not consume a lot of CPU.
+In terms of throughput, this situation reduces the CPU on the Leader brokers (each hosts one less follower) but causes some pods to be underused.
+
+Having cluster size < number of partitions means one broker hosts multiple Leaders. Some pods may use more CPU and react slower than others.
+This may cause a heterogeneous cluster, which may affect the result.
+
+Linking the cluster size to the number of partitions is more reliable because the system is homogeneous. Adding a partition is then more predictable.
+
+Note: you can run a load test based on the throughput expected in the next two years. Then, you can create a cluster with the number of partitions determined by the load test, and reduce the cluster size to reduce the cost of the cluster.
+The cluster will have fewer pods. The minimum cluster size is the replication factor: it makes no sense to have two replicas of the same partition on the same broker, because if the broker (the pod) stops, two replicas are lost.
+
+Then, when you need to increase the performance, scale the cluster size
+https://docs.camunda.io/docs/self-managed/zeebe-deployment/operations/cluster-scaling/
+
+## number of partitions
+
+This is the main parameter to play with.
+How do you estimate the first number of partitions?
+
+According to the SaaS estimation, a cluster (sized with three partitions) can handle 500 service tasks/second.
+See https://docs.camunda.io/docs/components/best-practices/architecture/sizing-your-environment/#camunda-8-saas
+
+This means a partition can handle
+```
+500/3 = 166 service tasks per second
+```
+
+This approach implies estimating the number of service tasks per second.
+To do that, start from the goal in process instances, look into the processes to estimate the path, and determine the number of service tasks per process instance. Count sending a message as a service task.
+
+Attention: This is an estimation to start with. Of course, a process instance with a large payload will require more partitions to handle the required throughput.
+Processes with a lot of gateways and multi-instance loops have an impact, too.
+
+This method gives a first estimation.
+
+Then, estimate the number of workers you need for each service task. A worker can handle 250 threads. Add 1 partition for every 5 workers.
+
+Multiple other factors will impact the number of partitions, such as latency between regions or between data centers, disk speed, and ElasticSearch throughput.
+
+
+## Identify the number of workers
+
+Identifying the number of workers for a service task takes some work.
+For example, let's say a service task needs 8 seconds to execute a task.
+If you have a throughput of 1 service task per minute, one thread can handle this throughput.
+
+But if there is a throughput of 10 service tasks every minute, more than 1 thread is needed: one thread can handle 60/8=7.5 tasks per minute.
+And what about if the requirement is 750 service tasks per second?
+
+The simple way to calculate the number of workers is to calculate
+* The Capacity of one thread: how many tasks a thread can handle in a period
+* the Load: how many tasks must be performed in the same period
+
+First, fix the period. To make the calculation simple, pick it according to the time needed to execute a task. If a task runs in 12 seconds, choose the minute.
+If a task runs in milliseconds, choose the second.
+
+Let's choose the minute.
+The Capacity is the number of tasks a thread can run in a minute.
+
+A service task needs 8 seconds. So, the Capacity is
+
+````
+Capacity/mn = 60 s / 8 s = 7.5
+````
+
+Each minute, a thread worker can run 7.5 tasks.
+
+
+The Load is the number of tasks to be executed in the same period (the minute).
+The requirement is 750 tasks/s
+````
+Load/mn = 750 tasks/s * 60 = 45,000
+````
+How many thread workers do we need?
+
+````
+Nb Thread workers = Load/Capacity = 45,000/7.5 = 6,000
+````
+
+
+In Camunda 8, a worker can handle multiple threads; the number depends on the type and implementation of the worker.
+If the worker needs 8 seconds and consumes CPU during these 8 seconds (it merges images into a PDF, for example), a worker can handle 10 to 50 threads.
+If the worker calls an external service using the reactive programming or thread pattern, it may handle 200 or 1,000 threads, maybe more.
+
+If the worker uses the classical pattern and it's acceptable from the CPU point of view, a worker can host 200 to 250 threads; beyond that, the Java machine and the Zeebe client manage too many threads, and performance decreases.
+6,000/250 = 24 pods may be necessary for the simulation.
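+A minimal sketch of this calculation in Java (the 8-second duration, the 750 tasks/s requirement, and the 250 threads per pod are the hypothetical figures used above):
+
+````java
+public class WorkerSizing {
+  public static void main(String[] args) {
+    double taskDurationS = 8;   // execution time of one task
+    double loadPerSecond = 750; // required tasks per second
+    int threadsPerPod = 250;    // classical-pattern upper bound
+
+    double capacityPerMn = 60 / taskDurationS;             // 7.5 tasks/mn per thread
+    double loadPerMn = loadPerSecond * 60;                 // 45,000 tasks/mn
+    double threads = loadPerMn / capacityPerMn;            // 6,000 worker threads
+    long pods = (long) Math.ceil(threads / threadsPerPod); // 24 pods
+    System.out.printf("%.0f worker threads, %d pods%n", threads, pods);
+  }
+}
+````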
+
+
+# Understand the main concept
+
+With Zeebe, a partition is a complete server. So, adding a partition increases the throughput, but other factors come into play:
+* adding partitions increases the work for the Zeebe gateway to search for jobs for workers; a gateway may have to be added
+* adding partitions implies adding a pod. Can the physical node handle that? Is a new node necessary?
+
+Adding a partition is maybe not the solution: a worker may be underestimated, and the throughput not reached because a lot of jobs are waiting on a task.
+
+Monitoring the cluster during the load test and detecting any suspended jobs is also mandatory.
+
+During the load test, these metrics must be checked.
+
+## Partitions
+A partition is a logical server. When a process instance is created, the Zeebe Gateway chooses a partition.
+Then, all operations are redirected to this partition. A second partition doubles the cluster's Capacity.
+This is why the number of partitions is one of the main parameters to play with.
+
+## Backpressure
+Process instances are created. Workers query the cluster to get jobs to execute. Jobs are executed, and workers contact the cluster again to send the result.
+Partitions handle these requests, but there may be too many requests to process.
+This may happen when there is too much creation, or when an execution takes time: a multi-instance with 1,000 instances to create requires some milliseconds or seconds for the engine to process.
+
+When a partition receives too many requests, it rejects them. This is the back pressure.
+Having some back pressure from time to time is acceptable. The Zeebe client manages it: a process instance creation order is retried, and a worker delays its request for new jobs.
+However, having more than 1% on a partition is counterproductive: even more requests are sent (because the client retries), and it indicates that the cluster can't handle the throughput.
+
+Visit https://docs.camunda.io/docs/self-managed/zeebe-deployment/operations/backpressure/
+
+## Worker: synchronous or asynchronous
+
+Workers can be implemented in multiple ways. The principal impact on the cluster is how jobs are accepted and completed.
+
+Let's take a worker with a handle method like this:
+```
+handle(job)
+{
+  // do the job
+  job.complete().send().join()
+}
+```
+
+This implementation executes the job, sends the answer, and waits for the status.
+Then, and only after the answer, does the thread return to the Zeebe client library. The library then asks for new jobs, immediately or not.
+
+If a number of jobs is requested in the subscription (for example, 20), then the library collects 20 jobs (or fewer) and starts 20 threads. But it waits until 70% of these jobs are finished before asking for a new batch.
+
+This method is excellent when the worker is not under pressure. But different methods can improve the speed:
+* do not join() the complete command: the worker does not wait for the answer. If you need to log the result, use a callback such as ".exceptionally()" or ".whenComplete()": your code is called, but not in that thread
+* use the "streamEnabled(true)" parameter: the library does not wait for 70% of the batch to finish before getting new jobs
+* use the reactive programming method: in the handler, a thread is created or a request is sent, the result is managed in a new thread, and the current thread returns, so new jobs can be accepted
+
+
+Visit the "writing good workers" and "C8 workers implementation" documentation.
+https://docs.camunda.io/docs/components/best-practices/development/writing-good-workers/#blocking--synchronous-code-and-thread-pools
+
+https://github.com/camunda-community-hub/C8-workers-implementation-
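+A minimal sketch combining two of these ideas with the Zeebe Java client (the gateway address and job type are hypothetical; streamEnabled is the client option mentioned above):
+
+````java
+import io.camunda.zeebe.client.ZeebeClient;
+
+public class NonBlockingWorker {
+  public static void main(String[] args) {
+    ZeebeClient client = ZeebeClient.newClientBuilder()
+        .gatewayAddress("localhost:26500") // assumption: local gateway
+        .usePlaintext()
+        .build();
+    client.newWorker()
+        .jobType("customer-credit") // hypothetical job type
+        .handler((jobClient, job) -> {
+          // do the job ...
+          // send the completion without join(): the thread is released at once
+          jobClient.newCompleteCommand(job.getKey())
+              .send()
+              .whenComplete((result, error) -> {
+                if (error != null) {
+                  System.err.println("Completion failed: " + error.getMessage());
+                }
+              });
+        })
+        .maxJobsActive(50)
+        .streamEnabled(true) // jobs are pushed; no wait on the 70% threshold
+        .open();
+  }
+}
+````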
+## Flows: Zeebe, Exporter, Reindex
+
+The flow is the following on the creation of a process instance:
+* an application asks to create a process instance; it connects to the Zeebe Gateway, which chooses a partition, and then connects to the Leader of the partition
+* the Leader registers the order in the stream, sends the information to the followers, and waits for the acknowledgment
+* when the Leader gets the acknowledgment, it executes the order (creates the process instance), returns the information to the client, and sends the status to the followers
+* when the followers return a status, a pointer in the stream advances
+* a second pointer in the stream exports data to ElasticSearch as raw data
+* Operate monitors the ElasticSearch raw data and imports it into the Operate indexes
+* TaskList and Optimize do the same.
+
+One simple order goes through multiple components before it is visible in Operate. During a load test, all these components are under pressure.
+
+To execute a service task, the operation is different:
+* a worker connects to the Zeebe Gateway to get jobs related to a topic
+* the Zeebe gateway contacts all partitions to collect jobs
+* each partition registers the lock (worker W3453 locks job 9989) and replicates the information to its followers
+* the Zeebe gateway returns the list to the worker
+
+When the worker completes a job:
+* the worker sends the status via the Zeebe gateway to the partition
+* the partition contacts its followers to register the result
+* then, the partition returns the confirmation to the worker
+* this information is exported to ElasticSearch and will be imported by Operate/TaskList/Optimize
+
+
+# Check metrics
+During the load test, the Grafana page and Operate are the primary sources of information to consider.
+
+
+
+## Check the throughput
+Four throughputs must be checked:
+
+**Creation of process instances**
+Does the tooling create enough process instances? If not, the tool must be checked first. If the tool is not the issue, the cluster may be: partitions may need to be added to face the throughput. Check the back pressure and the GRPC latency.
+
+**Process completion**
+This is the main parameter directly related to the goal. Check the other parameters if it is lower than the expected goal.
+In general, this curve starts after the creation curve: if a process instance needs at least 1 minute to complete, then this curve will have a minimum one-minute delay.
+See the warmup strategy.
+
+**Jobs creation**
+This is an essential factor. If the job creation is insufficient, the goal can't be reached. 
+**Jobs completion**
+This is the first metric to reach completion. If its level does not reach the goal, it may be due to multiple factors. See the Action-Reaction section.
+
+## CPU/memory on the different pods
+Check the CPU on each component. If a component overflows, add more resources.
+
+
+## GRPC latency
+
+The different GRPC latencies are important to check.
+**Creation**
+The creation must be as smooth as possible. The time to create a process instance should stay under 50 ms. The target is the 10 ms bucket.
+
+**get jobs**
+Zeebe handles a get-jobs call via a long polling method when there is nothing to do. Requests at the "infinite" level are normal: they are the sign that you have workers waiting for jobs.
+At the opposite end, most requests must be under 50 ms.
+Having requests in the 500 ms-10 seconds range is not a good situation: there are jobs, but it takes time for the workers to catch them.
+
+**Complete job**
+A worker completes a job and sends the request to the cluster. Like process instance creation, the answer must be as fast as possible.
+
+## ElasticSearch exporter
+
+This metric is important because the failure mode is vicious.
+Zeebe has two pointers in the stream:
+* one to follow the execution
+* and one to follow the export to ElasticSearch.
+
+The Zeebe cluster can be correctly sized to handle the whole load, but the two pointers will diverge if ElasticSearch is too slow.
+The stream is saved in memory and on disk. The stream grows, and at one moment, when the disk is full, Zeebe pauses the execution.
+Backpressure becomes visible, and the throughput falls to the ElasticSearch throughput.
+The difficulty here is that the situation becomes visible only when the disk is full.
+
+The ElasticSearch pointer is normally behind: Zeebe writes records in batches rather than record by record to be more efficient. However, the difference must stay manageable.
+To check that, verify the positions of the Zeebe pointer and the ElasticSearch exporter. The exporter may be behind by 800 or 1,000 positions, but the difference must be stable.
+
+
+Note: it is acceptable if the cluster faces a peak and the disk absorbs the situation.
+
+
+## Operate
+Via Operate, check the number of jobs per task. If the number of workers is underestimated, jobs accumulate on a service task.
+This may be related to the GRPC latency (there are enough workers, but the completion takes too long) or to insufficient workers to handle the throughput.
+
+The second effect is Operate's importer.
+Zeebe exports data to a raw index, and then Operate imports the data into the Operate indexes.
+If the importer is behind, Operate shows the situation from the past.
+The simple way to detect the situation is at the end of the load test. Stop creations and workers: Zeebe now finishes its work.
+For about one minute, information in Operate should still change (the importer finishes importing the last records),
+but if the data still changes after this time, the Operate importer is too slow and must be scaled.
+
+
+# Procedures
+
+## Broker size scaling
+
+The number of pods (the cluster size) can be changed dynamically.
+
+Check the procedure here:
+
+https://docs.camunda.io/docs/self-managed/zeebe-deployment/operations/cluster-scaling/
+
+Attention: keep the values.yaml with the same number;
+otherwise, any helm upgrade will downsize the cluster to the original value.
+Worse: you may lose data at that moment. When you downscale, the first step consists of sending
+the request to the cluster via a REST call. The cluster will then move partitions between pods.
+Only when this operation is finished is it possible to reduce the number of pods.
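+For illustration, scaling up might look like the following. The release name, namespace, and broker count are assumptions, and the management endpoint is the one described in the cluster-scaling documentation linked above; verify both against your installation:
+
+````
+# 1. Add pods (example: from 3 to 6 brokers)
+kubectl scale statefulset camunda-zeebe --replicas=6 -n camunda
+
+# 2. Ask the cluster to redistribute partitions over brokers 0..5
+#    (gateway management port, usually 9600)
+curl -X POST "http://localhost:9600/actuator/cluster/brokers" \
+     -H "Content-Type: application/json" \
+     -d '["0","1","2","3","4","5"]'
+
+# 3. Align values.yaml (zeebe.clusterSize: 6) before the next helm upgrade
+````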
+
+
+## Operate scaling
+
+There are two methods to scale Operate.
+
+### Multiple threads to import
+
+These parameters drive the importer. Each time Operate detects a new record to import, it submits the record to a thread pool.
+Then, a thread performs the import.
+There are two thread pools:
+* one to read records from the ElasticSearch Zeebe indexes
+* one to write records to the Operate indexes
+
+It is possible to change the number of threads in these pools. The default value is 1.
+
+The first method is to change the value of
+camunda.operate.importer.threadsCount
+You can change these parameters via the env section
+
+````yaml
+env:
+  - name: CAMUNDA_OPERATE_IMPORTER_READERTHREADSCOUNT
+    value: "10"
+
+  - name: CAMUNDA_OPERATE_IMPORTER_THREADSCOUNT
+    value: "20"
+````
+
+An efficient rule is to double the importer thread count compared to the reader: for one reader thread, set two importer threads.
+
+
+You have to check the CPU on the Operate pod. If the CPU and memory are already high, it's counterproductive to increase the number of threads.
+
+https://docs.camunda.io/docs/self-managed/operate-deployment/importer-and-archiver/#scaling
+
+
+### Create multiple pods
+Operate contains 3 applications:
+
+* the Importer application
+* the Webapp
+* the Archiver
+
+The Webapp application can be scaled without any constraint. This is the UI.
+The Importer and the Archiver can be scaled, but each instance must have a unique number and know the total number of instances.
+
+This can be done via
+
+
+````yaml
+env:
+  - name: CAMUNDA_OPERATE_IMPORTERENABLED
+    value: "true"
+  - name: CAMUNDA_OPERATE_ARCHIVERENABLED
+    value: "false"
+  - name: CAMUNDA_OPERATE_WEBAPPENABLED
+    value: "false"
+  - name: CAMUNDA_OPERATE_CLUSTERNODE_NODECOUNT
+    value: "3"
+  - name: CAMUNDA_OPERATE_CLUSTERNODE_CURRENTNODEID
+    value: "0"
+````
+Three deployments (each with replicas: 1 and a different CURRENTNODEID, going from 0 to 2) must be specified.
+The total number of nodes is at most the total number of partitions.
+It's possible to have fewer importers than partitions: a node then imports multiple partitions.
+This is why the NODECOUNT is essential: each node calculates the partitions it must import.
+
+ATTENTION: only one Operate pod must import a given partition. If you decide to go in that direction:
+* set up the Helm chart to create a pod only for the Archiver or the Web Application, and disable the importer. Then, Helm creates all the other components (Kubernetes services)
+* create your own Kubernetes Deployment files. If you set the NODECOUNT to 5, you must create 5 (and only 5) deployments, each with a different CURRENTNODEID. These deployments must have replicas: 1
+
+At any time, you can stop this collection of pods and change it, moving from 5 to 10 if needed.
+
+
+# Action-Reaction
+
+## No back pressure, but throughput is lower than expected
+
+Check the workers first: the number of jobs per task in Operate and the GRPC "get jobs" latency tell you whether jobs are waiting for workers. In that case, increasing the number of workers or threads, not partitions, is the fix.
+
+
+## Backpressure
+Back pressure means a partition receives too many requests to handle.
+It may be due to:
+* too many requests
+* Leaders taking time to contact followers, especially when followers are in a different region
+* a disk that is too slow (check the IO metrics)
+* an ElasticSearch exporter that is too slow (check the exporter pointer)
+
+A simple solution is to increase the number of partitions. If you have a 5% backpressure, start by adding 5% more partitions. 
+
+
+## Platform is not stable
+
+The GRPC throughput is not stable.
+
+
+![NotStable_GrpcLatency.png](images/NotStable_GrpcLatency.png)
+
+![notStable_Throughput.png](images/notStable_Throughput.png)
+
+![NotStable_backpressure.png](images/NotStable_backpressure.png)
+
+It may come from multiple factors:
+* The disk throughput is not stable. This is typical with an NFS disk
+* Check the infrastructure: CPU, memory, network
+* Search for what changed at that moment: were new process instances started? More workers?
+
+## Low GRPC throughput
+
+The GRPC latency is high for completing jobs. Partitions have a lot of operations to perform.
+
+![LowGrpc.png](images/LowGrpc.png)
+
+
+Increase the number of partitions to share the Load.
+
+## Low creation of process instances
+The creation throughput is behind the expected one.
+Zeebe is normally speedy at creation, reaching 1,000 PI/second quickly.
+
+Check the tool that creates the process instances, or increase the number of partitions. Also, check the GRPC latency during creation.
+
+## Operate is behind reality
+At the end of a load test, stop the creation and the workers.
+Check the values in Operate. After one minute, do the values still change?
+
+If yes, the Operate importer is behind and needs to be scaled.
\ No newline at end of file
diff --git a/doc/howRunLoadTest/images/LowGrpc.png b/doc/howRunLoadTest/images/LowGrpc.png new file mode 100644 index 0000000..1fb590c Binary files /dev/null and b/doc/howRunLoadTest/images/LowGrpc.png differ
diff --git a/doc/howRunLoadTest/images/NotStable_GrpcLatency.png b/doc/howRunLoadTest/images/NotStable_GrpcLatency.png new file mode 100644 index 0000000..cc5f9e5 Binary files /dev/null and b/doc/howRunLoadTest/images/NotStable_GrpcLatency.png differ
diff --git a/doc/howRunLoadTest/images/NotStable_backpressure.png b/doc/howRunLoadTest/images/NotStable_backpressure.png new file mode 100644 index 0000000..7115899 Binary files /dev/null and b/doc/howRunLoadTest/images/NotStable_backpressure.png differ
diff --git a/doc/howRunLoadTest/images/notStable_Throughput.png b/doc/howRunLoadTest/images/notStable_Throughput.png new file mode 100644 index 0000000..ef5b658 Binary files /dev/null and b/doc/howRunLoadTest/images/notStable_Throughput.png differ
diff --git a/doc/loadtestscenario/Tutorial.md b/doc/loadtestscenario/Tutorial.md index a7839be..18f203e 100644 --- a/doc/loadtestscenario/Tutorial.md +++ b/doc/loadtestscenario/Tutorial.md @@ -45,7 +45,7 @@ he will decide to process the URL.
200 orders must be processed every 30 seconds. An order contains multiple sub-searches. This may vary from 10 to 20.
-To check the peak, the test will consider the number of sub-search is 20.
+The test will consider the number of sub-searches to be 20 to reach the peak.
The user task will be simulated to accept each request in less than 2 seconds.
@@ -84,7 +84,7 @@ The scenario created is
![Process Automator Scenario](images/ProcessAutomatorScenario.png)
**STARTEVENT**
-Two types of Start event is created: one for the main flow (5 process instance per second) and the second one.
+Two types of Start events are created: one for the main flow (5 process instances per second) and one for the user task flow.
For the user task
@@ -122,7 +122,7 @@ For the user task
````
-Then, one service task simulator per service task and one for the user task.
+Then, one service task simulator is used per service task, and one is used for the user task. 
````json
[
@@ -215,7 +215,7 @@ On Intellij, run this command
### Via the application
-Specify in the application parameter what you want to run.
+Specify what you want to run in the application parameter.
`````yaml
Automator.startup:
@@ -235,7 +235,7 @@ Run the command
mvn spring-boot:run
````
-or via Intellij:
+Or via Intellij:
![Intellij Automator Execution](images/IntellijAutomatorApplication.png)
Note: The application will start the scenario automatically but will not stop.
@@ -247,13 +247,13 @@ To be close to the final platform, let's run the process-automator not locally b
The main point is to provide the scenario to the pod.
-Create a config map for the scenario
+Create a config map for the scenario.
````
cd doc/loadtestscenario/
kubectl create configmap crawurlscnmap --from-file=resources/C8CrawlUrlScn.json
````
-How this scenario is accessible in the pod? Check the `ku-c8CrawUrl.yaml` file
+How is this scenario accessible in the pod? Check the `ku-c8CrawUrl.yaml` file
1. Create a volume and mount the configMap in that volume
````yaml
@@ -291,7 +291,7 @@ Follow the advance
````
kubectl get pods
````
-Identify the correct pods, and access the log
+Identify the correct pods and access the log.
````
kubectl logs -f ku-processautomator-xxxxxx
````
@@ -299,7 +299,7 @@
### Generate the Docker image again
-An alternative consists of placing the scenario under `src/resources/` and building a new image.
+An alternative involves placing the scenario under `src/resources/` and building a new image.
Build the docker image via the build command. Replace `pierreyvesmonnet` with your docker user ID,
@@ -340,7 +340,7 @@ o.c.a.engine.flow.RunScenarioFlows : [SERVICETASK crawl-store-main#0] RUNN
## Check the result
-Via the CLI or via the command line, the first execution is
+Via the CLI or the command line, the first execution is
![First execution](images/RunCrawlUrl-1.png)
@@ -356,20 +356,82 @@ To improve the performance, the number of worker
The requirement is 200 process instances every 30 seconds. Let's base the calculation per minute. This is then 400 process instances/minute.
+| Name | Value |
+|--------------------|------------:|
+| Process instances | 200 PI/30 s |
+| Per minute | 400 PI / mn |
+
The first task needs 2 seconds duration. To execute 400 process instances, it will need 2*400=800 s. Because this throughput is required by minute, multiple workers must do it in parallel.
-One worker has a throughput of 60 s per 60 s. Workers are mandatory to handle 800 s, 800/60=13.3 (so, 14).
+One worker has a throughput of 60 s per 60 s: workers must handle 800 s of work per minute, and 800/60 = 13.3, so 14 worker threads are needed.
+
+The simple way to calculate the number of workers is to calculate
+* The Capacity of one thread. The Capacity is how many tasks a thread can handle in a period
+* the Load. The Load is how many tasks must be performed in a period
+
+First, fix the "period of time." To make the calculation simple, use the unit of time to execute a task. If a task is running in 12 seconds, choose the minute.
+If a task is running in milliseconds, choose the second.
+
+Let's choose the minute. 
+
+Capacity:
+```
+Capacity(mn) = 60/Duration(s)
+```
+
+| Name | Duration | Capacity/mn |
+|---------------|---------:|------------:|
+| Retrieve Work | 2 s | 30 |
+| Search | 10 s | 6 |
+| Message | 1 s | 60 |
+| Add | 5 s | 12 |
+| Filter | 1 s | 60 |
+| Store | 1 s | 60 |
+
+Load:
+```
+Load(mn) = NumberOfTasks/s * 60
+```
+
+Load is the number of process instances per minute (400) * number of tasks per process instance.
+
+| Name | Loop | Load/mn |
+|---------------|-----:|--------:|
+| Retrieve Work | 1 | 400 |
+| Search | 10 | 4000 |
+| Message | 10 | 4000 |
+| Add | 10 | 4000 |
+| Filter | 10 | 4000 |
+| Store | 10 | 4000 |
+
+
+The number of worker threads is the Load divided by the Capacity.
+```
+Number of worker threads = Load / Capacity
+```
+
+| Name | Load | Capacity | Worker threads |
+|---------------|----------------:|---------:|---------------:|
+| Retrieve Work | 400 | 30 | 13.3 |
+| Search | 4000 | 6 | 666.7 |
+| Message | 4000 | 60 | 66.7 |
+| Add | 4000 | 12 | 333.3 |
+| Filter | 4000 | 60 | 66.7 |
+| Store | 4000 | 60 | 66.7 |
+
+How many worker threads can a worker (a pod) handle? It depends on the implementation.
+* If the implementation uses a lot of CPU, a worker can support only 5 threads. More may overflow the CPU.
+* If the implementation is very light, calling an external service and using Reactive Programming, a worker can support 500 to 1,000 worker threads
+
-This can be done in different ways:
-* one application(pod) manage multiple threads. A worker with 14 threads is mandatory (one thread= one worker)
-* or multiple applications(pods), with one thread, can be used (14 applications/pods)
-* A mix of the two approaches is possible. The adjustment is made according to the resource.
+Visit https://docs.camunda.io/docs/components/best-practices/development/writing-good-workers/,
+https://blog.bernd-ruecker.com/writing-good-workers-for-camunda-cloud-61d322cad862
+and
+https://github.com/camunda-community-hub/C8-workers-implementation-
-If the treatment of the worker is to manage a movie, one pod can maybe deal with two or three workers at the same time.
-So, to handle 14 workers, 14/3=5 pods may be necessary.
-From the Zeebe client point of view, a pod can manage up to 200 threads after the multithreading is less efficient.
+In the "classical" implementation, with little CPU and memory consumption, a worker can manage up to 200 threads; beyond that, multithreading becomes less efficient.
We are in a simulation in our scenario, so the only limit is about 200 threads per pod.
@@ -417,7 +479,7 @@ kubectl delete -f ku-c8CrawlUrlMultiple.yaml
During the load test, access the Grafana page.
-**Throughput / process Instance creation per second**
+**Throughput/process Instance creation per second**
This is the first indicator. Do you have enough process instances created per second?
In our example, the scenario creates 200 Process Instances / 30 seconds. The graph should show this level.
![Process Instance creation per second ](images/ThroughputProcessInstanceCreationPerSecond.png)
-**Throughput / process Instance completion per second**
+**Throughput/process Instance completion per second**
This is the last indicator: if the scenario's objective consists of completing process instances, it should move
to the same level as the creation. Executing a process may need time,
so this graph should be symmetric but may start after.
**Job Creation per second**
-Job creation and job completion are the second key factors. 
Creating process instances is generally not a big deal for Zeebe. Executing jobs (service tasks) is more challenging.
+The second key factor is job creation and job completion. Creating process instances is generally not a big deal for Zeebe, but executing jobs (service tasks) is more challenging.
For example, in our example, for a process instance, there are 1+(10*4)=41 service tasks.
Creating 200 Process Instances / 30 seconds means 200*2*41=16400 jobs/minute, 273 jobs/second.
@@ -448,7 +510,7 @@ throughput.
![Job Completion per second ](images/ThroughputJobCompletionPerSecond.png)
**CPU Usage**
-CPU and Memory usage is part of the excellent health of the platform. Elasticsearch is, in general, the most consumer for the CPU.
+CPU and Memory usage are key indicators of the platform's health. Elasticsearch is, in general, the most CPU-consuming component.
If Zeebe is close to the value allocated, it's time to increase it or create new partitions.
![CPU Usage](images/CPU.png)
@@ -459,19 +521,19 @@ If Zeebe is close to the value allocated, it's time to increase it or create new
**Gateway**
The gateway is the first actor for the worker. Each worker communicates to a gateway, which asks Zeebe's broker.
-If the response time is terrible, increasing the number of gateways is the first option. However, the issue may come from the number of partitions: there may be insufficient partitions, and Zeebe needs time to process the request.
+If the response time is bad, increasing the number of gateways is the first option. However, the issue may come from the number of partitions: there may be insufficient partitions, and Zeebe needs time to process the request.
![Grafana Gateway](images/Gateway.png)
**GRPC**
-GRPC graph is essential to see how the network is doing and if all the traffic gets a correct response time.
+The GRPC graph is essential for assessing the network's performance and determining whether all traffic gets a correct response time.
If the response is high, consider increasing the number of gateways or partitions.
![Grafana GRPC](images/GRPC.png)
**GRPC Jobs Latency**
-Jobs latency is essential. This metric gives the time when a worker asks for a job or submits a job the time Zeebe considers the request. If the response is high, consider increasing the number of gateways or partitions.
+Jobs latency is essential. This metric shows the time between a worker asking for (or submitting) a job and Zeebe taking the request into account. If the response is high, consider increasing the number of gateways or partitions.
![Jobs Latency](images/GRPCJobsLatency.png)
@@ -492,7 +554,7 @@ Zeebe maintains a stream to execute a process instance. In this stream, two poin
Where there is a lot of data to process, the Elasticsearch pointer may lag behind the execution: the stream grows.
This may not be a big deal if, at one moment, the flow slows down, then the second pointer
-will catch up. But if this is not the situation, the stream may reach the PVC limit. If this happens, then
+will catch up. However, if that is not the case, the stream may reach the PVC limit. If this happens, then
the first pointer will slow down, and the Zeebe Engine will stop accepting new jobs: the speed will then be capped by the slowest component.
In the case of a high throughput test, it is nice to keep an eye on this indicator. 
If the positions differ a lot, you should enlarge the test period to check the performance when the stream is full because this
@@ -521,7 +583,7 @@ Looking at Operate, we can identify which service task was the bottleneck.
![Operate](test_1/test-1-operate.png)
-Attention: When the test is finished, you must stop the cluster as soon as possible. Because
+Attention: When the test is finished, you must stop the cluster immediately. Because
multiple pods are created to execute service tasks. If you don't stop these workers, they will continue to process jobs.
Note: To access this log after the creation, do a
@@ -557,7 +619,7 @@ replicas: 3
During the execution, this log shows up
````
STARTEVENT Step #1-STARTEVENT CrawlUrl-StartEvent-CrawlUrl-01#0 Error at creation: [Can't create in process [CrawlUrl] :Expected to execute the comma
-nd on one of the partitions, but all failed; there are no more partitions available to retry. Please try again. If the error persists contact your zeebe operator]
+nd on one of the partitions, but all failed; there are no more partitions available to retry. Please try again. If the error persists, contact your zeebe operator]
````
Looking at the Grafana overview, one partition gets a backpressure
![Back pressure](test_2/test-2-Backpressure.png)
@@ -622,7 +684,7 @@
The execution went correctly. The back pressure is very low
![back pressure](test_3/test-3-Backpressure.png)
-CPU stays at a normal level
+CPU stays at an average level
![](test_3/test-3-CPU.png)
Jobs Latency stays under a reasonable level.
![](test_3/test-3-JobsLatency.png)
-Jobs per second can now reach 400 per second as a peak and then run at 300 per second. This level is stable.
+Jobs per second can reach 400 per second as a peak and then run at 300 per second. This level is stable.
![](test_3/test-3-JobsPerSeconds.png)
-At the end, Operate show that all process are mainly processed. Just some tasks are pending in
+At the end, Operate shows that almost all process instances are completed. Just some tasks are pending in
a worker (this node may have stopped before the others)
![](test_3/test-3-Operate.png)
Objectives are mainly reached: 3800 process instances were processed.
-The reliqua comes from the startup of the different pods: the cluster starts different pods on the scenario.
+The remainder comes from the startup of the different pods: the cluster starts the different pods of the scenario at different times.
````log
2023-09-07 19:10:53.400 INFO 1 --- [AutomatorSetup1] o.c.a.engine.flow.RunScenarioFlows : Objective: SUCCESS type CREATED label [Creation} processId[CrawlUrl] reach 4010 (objective is 4000 ) analysis [Objective Creation: ObjectiveCreation[
@@ -662,7 +724,7 @@
To ensure the sizing is correct, we make a new test with more input
* The number of process instances is set to 250 / 30 seconds (requirement is 200 / 30 seconds) - this is a 25% increase
* Increase the number of threads in each worker by 25 %
-Load the new scenario
+Load the new scenario.
````
cd doc/loadtestscenario
kubectl create configmap crawurlscnmap250 --from-file=resources/C8CrawlUrlScn250.json
````
@@ -692,4 +754,4 @@
alysis [}
# Conclusion
Using the scenario and Process-Automator tool helps determine the platform's correct sizing.
-Analysis tools (Grafana, Operate) are essential to qualify the platform.
+Analysis tools (Grafana, Operate) are essential to qualify the platform. 
\ No newline at end of file
diff --git a/doc/scenarioreference/README.md b/doc/scenarioreference/README.md index 3c18979..76752b2 100644 --- a/doc/scenarioreference/README.md +++ b/doc/scenarioreference/README.md @@ -194,18 +194,19 @@
certain position, you may want to simulate the worker. Then, the Process-Automator simulates the
service task. The real worker should be deactivated then.
If the service task is not found, then the scenario will have an error.
-| Parameter | Explanation | Example |
-|--------------------|------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
-| name | name of the step, optional | "name": "get Score" |
-| type | Specify the type (SERVICETASK) | "type": "SERVICETASK" |
-| delay | Deplay to wait before looking for the task, in ISO 8601 | "delay" : "PT0.1S" waits 100 ms |
+| Parameter | Explanation | Example |
+|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
+| name | name of the step, optional | "name": "get Score" |
+| type | Specify the type (SERVICETASK) | "type": "SERVICETASK" |
+| delay | Delay to wait before looking for the task, in ISO 8601 | "delay" : "PT0.1S" waits 100 ms |
| waitingTime | Wait for maximum this time before returning an error. Process-Automator queries the engine every 500 ms until this delay. Default value is 5 minutes | "waitingTime" : "PT10S" |
-| taskId | Activity ID to query | "activityId": "review" |
-| topic | Topic to search the task (mandatory in C8) | "topic" : "get-score" |
-| variables | List of variables (JSON file) to update | "variables": {"amount": 450, "account": "myBankAccount", "colors": ["blue","red"]} |
+| taskId | Activity ID to query | "activityId": "review" |
+| topic | Topic to search the task (mandatory in C8) | "topic" : "get-score" |
+| streamEnabled | Specify if the worker uses the streamEnabled function. Default is true. | "streamEnabled": true |
+| variables | List of variables (JSON file) to update | "variables": {"amount": 450, "account": "myBankAccount", "colors": ["blue","red"]} |
| variablesOperation | List of variables, but the value is an operation | |
-| modeExecution | Implementation: options are CLASSICAL, THREAD, THREADTOKEN. Default is CLASSICAL | "modeExecution" : "CLASSICAL" |
-| numberOfExecutions | Number of execution, the task may be multi-instance. Default is 1 | "numberOfExecutions" : 3 |
+| modeExecution | Implementation: options are CLASSICAL, THREAD, THREADTOKEN. Default is CLASSICAL | "modeExecution" : "CLASSICAL" |
+| numberOfExecutions | Number of executions; the task may be multi-instance. Default is 1 | "numberOfExecutions" : 3 |
There are different implementations for the worker. Choose the one you will use for the simulation. 
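+For illustration, a SERVICETASK step combining these parameters could look like this (the topic and variable values are hypothetical):
+
+````json
+{
+  "name": "get Score",
+  "type": "SERVICETASK",
+  "topic": "get-score",
+  "delay": "PT0.1S",
+  "waitingTime": "PT10S",
+  "streamEnabled": true,
+  "modeExecution": "CLASSICAL",
+  "numberOfExecutions": 3,
+  "variables": {"amount": 450, "account": "myBankAccount"}
+}
+````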
diff --git a/pom.xml b/pom.xml index 8ee1824..b0c956c 100644 --- a/pom.xml +++ b/pom.xml @@ -5,16 +5,18 @@ org.camunda.community.automator process-execution-automator - 1.4.0 + + 1.5.2 + 17 ${java.version} ${java.version} - 8.3.0 - 8.3.0 + 8.5.7 + 8.5.5 7.19.0 @@ -48,15 +50,18 @@ + + io.camunda.spring spring-boot-starter-camunda - ${zeebe.version} + ${version.zeebe} + io.camunda zeebe-client-java - ${zeebe-client.version} + ${version.zeebe-client} @@ -91,17 +96,18 @@ - - io.camunda - camunda-operate-client-java - 8.1.8.1 - + + io.camunda camunda-tasklist-client-java - 1.6.1 + 8.5.3.5 + + + javax.xml.bind @@ -212,10 +218,13 @@ org.apache.maven.plugins maven-compiler-plugin - 3.10.1 + 3.13.0 17 17 + + -parameters + diff --git a/src/main/frontend/README.md b/src/main/frontend/README.md index 58beeac..21ae553 100644 --- a/src/main/frontend/README.md +++ b/src/main/frontend/README.md @@ -17,7 +17,9 @@ You may also see any lint errors in the console. ### `npm test` Launches the test runner in the interactive watch mode.\ -See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. +See the section +about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more +information. ### `npm run build` @@ -27,44 +29,58 @@ It correctly bundles React in production mode and optimizes the build for the be The build is minified and the filenames include the hashes.\ Your app is ready to be deployed! -See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. +See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for +more information. ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can't go back!** -If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. +If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. +This command will remove the single build dependency from your project. -Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own. +Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, +ESLint, etc) right into your project so you have full control over them. All of the commands +except `eject` will still work, but they will point to the copied scripts so you can tweak them. At +this point you're on your own. -You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it. +You don't have to ever use `eject`. The curated feature set is suitable for small and middle +deployments, and you shouldn't feel obligated to use this feature. However we understand that this +tool wouldn't be useful if you couldn't customize it when you are ready for it. ## Learn More -You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). 
+You can learn more in +the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the [React documentation](https://reactjs.org/). ### Code Splitting -This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) ### Analyzing the Bundle Size -This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) ### Making a Progressive Web App -This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) ### Advanced Configuration -This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) ### Deployment -This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) ### `npm run build` fails to minify -This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) +This section has moved +here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) diff --git a/src/main/frontend/public/index.html b/src/main/frontend/public/index.html index aa069f2..ef8f7c0 100644 --- a/src/main/frontend/public/index.html +++ b/src/main/frontend/public/index.html @@ -1,43 +1,43 @@ - - - - - - - - - - + + - React App - - - -
- + React App + + + +
+ - + To begin the development, run `npm start` or `yarn start`. + To create a production bundle, use `npm run build` or `yarn build`. +--> + diff --git a/src/main/frontend/src/App.js b/src/main/frontend/src/App.js index 3784575..53ceec4 100644 --- a/src/main/frontend/src/App.js +++ b/src/main/frontend/src/App.js @@ -5,7 +5,7 @@ function App() { return (
- logo + logo

Edit src/App.js and save to reload.

diff --git a/src/main/frontend/src/App.test.js b/src/main/frontend/src/App.test.js index 1f03afe..ed340df 100644 --- a/src/main/frontend/src/App.test.js +++ b/src/main/frontend/src/App.test.js @@ -1,8 +1,8 @@ -import { render, screen } from '@testing-library/react'; +import {render, screen} from '@testing-library/react'; import App from './App'; test('renders learn react link', () => { - render(); + render(); const linkElement = screen.getByText(/learn react/i); expect(linkElement).toBeInTheDocument(); }); diff --git a/src/main/frontend/src/index.css b/src/main/frontend/src/index.css index ec2585e..7d30ace 100644 --- a/src/main/frontend/src/index.css +++ b/src/main/frontend/src/index.css @@ -1,13 +1,13 @@ body { margin: 0; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', - 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', - sans-serif; + 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', + sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } code { font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New', - monospace; + monospace; } diff --git a/src/main/frontend/src/index.js b/src/main/frontend/src/index.js index d563c0f..8fa4036 100644 --- a/src/main/frontend/src/index.js +++ b/src/main/frontend/src/index.js @@ -7,7 +7,7 @@ import reportWebVitals from './reportWebVitals'; const root = ReactDOM.createRoot(document.getElementById('root')); root.render( - + ); diff --git a/src/main/frontend/src/logo.svg b/src/main/frontend/src/logo.svg index 9dfc1c0..21c4c2e 100644 --- a/src/main/frontend/src/logo.svg +++ b/src/main/frontend/src/logo.svg @@ -1 +1,8 @@ - \ No newline at end of file + + + + + + + \ No newline at end of file diff --git a/src/main/frontend/src/reportWebVitals.js b/src/main/frontend/src/reportWebVitals.js index 5253d3a..b0320f3 100644 --- a/src/main/frontend/src/reportWebVitals.js +++ b/src/main/frontend/src/reportWebVitals.js @@ -1,6 +1,6 @@ const reportWebVitals = onPerfEntry => { if (onPerfEntry && onPerfEntry instanceof Function) { - import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => { + import('web-vitals').then(({getCLS, getFID, getFCP, getLCP, getTTFB}) => { getCLS(onPerfEntry); getFID(onPerfEntry); getFCP(onPerfEntry); diff --git a/src/main/java/org/camunda/automator/AutomatorCLI.java b/src/main/java/org/camunda/automator/AutomatorCLI.java index 2b911d3..0ac51ba 100644 --- a/src/main/java/org/camunda/automator/AutomatorCLI.java +++ b/src/main/java/org/camunda/automator/AutomatorCLI.java @@ -2,6 +2,7 @@ import org.camunda.automator.bpmnengine.BpmnEngine; import org.camunda.automator.configuration.BpmnEngineList; +import org.camunda.automator.configuration.ConfigurationStartup; import org.camunda.automator.definition.Scenario; import org.camunda.automator.engine.AutomatorException; import org.camunda.automator.engine.RunParameters; @@ -30,6 +31,9 @@ public class AutomatorCLI implements CommandLineRunner { @Autowired BpmnEngineList engineConfiguration; + @Autowired + ConfigurationStartup configurationStartup; + public static void main(String[] args) { isRunningCLI = true; SpringApplication app = new SpringApplication(AutomatorCLI.class); @@ -122,6 +126,19 @@ public void run(String[] args) { File folderRecursive = null; RunParameters runParameters = new RunParameters(); + runParameters.setExecution(true) + .setServerName(configurationStartup.getServerName()) + .setLogLevel(configurationStartup.getLogLevelEnum()) + 
.setCreation(configurationStartup.isPolicyExecutionCreation()) + .setServiceTask(configurationStartup.isPolicyExecutionServiceTask()) + .setUserTask(configurationStartup.isPolicyExecutionUserTask()) + .setWarmingUp(configurationStartup.isPolicyExecutionWarmingUp()) + .setDeploymentProcess(configurationStartup.isPolicyDeployProcess()) + .setDeepTracking(configurationStartup.deepTracking()); + List filterService = configurationStartup.getFilterService(); + if (filterService != null) { + runParameters.setFilterExecutionServiceTask(filterService); + } Integer overrideNumberOfExecution = null; int i = 0; ACTION action = null; @@ -193,7 +210,7 @@ public void run(String[] args) { serverDefinition = engineConfiguration.getByServerName(serverName); if (serverDefinition == null) { - throw new AutomatorException("Check configuration: name[" + serverName + throw new AutomatorException("Check configuration: Server name (from parameter)[" + serverName + "] does not exist in the list of servers in application.yaml file"); } diff --git a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngine.java b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngine.java index 8bda01c..c0cedd6 100644 --- a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngine.java +++ b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngine.java @@ -105,7 +105,7 @@ List searchUserTasksByProcessInstance(String processInstanceId, String u /** * @param workerId workerId * @param topic topic to register - * @param streamEnable true if the stream enable is open + * @param streamEnabled true if the stream enable is open * @param lockTime lock time for the job * @param jobHandler C7: must implement ExternalTaskHandler. C8: must implement JobHandler * @param backoffSupplier backOffStrategy @@ -113,7 +113,7 @@ List searchUserTasksByProcessInstance(String processInstanceId, String u */ RegisteredTask registerServiceTask(String workerId, String topic, - boolean streamEnable, + boolean streamEnabled, Duration lockTime, Object jobHandler, FixedBackoffSupplier backoffSupplier); diff --git a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineConfigurationInstance.java b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineConfigurationInstance.java index 44051a2..578fab0 100644 --- a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineConfigurationInstance.java +++ b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineConfigurationInstance.java @@ -7,13 +7,13 @@ */ public class BpmnEngineConfigurationInstance { - public static BpmnEngineList getZeebeSaas(String zeebeGatewayAddress, String zeebeSecurityPlainText) { + public static BpmnEngineList getZeebeSaas(String zeebeGatewayAddress, Boolean zeebePlainText) { BpmnEngineList bpmEngineConfiguration = new BpmnEngineList(); BpmnEngineList.BpmnServerDefinition serverDefinition = new BpmnEngineList.BpmnServerDefinition(); serverDefinition.serverType = BpmnEngineList.CamundaEngine.CAMUNDA_8; serverDefinition.zeebeGatewayAddress = zeebeGatewayAddress; - serverDefinition.zeebeSecurityPlainText = zeebeSecurityPlainText; + serverDefinition.zeebePlainText = zeebePlainText; bpmEngineConfiguration.addExplicitServer(serverDefinition); return bpmEngineConfiguration; @@ -53,7 +53,7 @@ public static BpmnEngineList getCamundaSaas8(String zeebeCloudRegister, serverDefinition.serverType = BpmnEngineList.CamundaEngine.CAMUNDA_8; serverDefinition.zeebeSaasRegion = zeebeCloudRegion; serverDefinition.zeebeSaasClusterId = zeebeCloudClusterId; - serverDefinition.zeebeSaasClientId = 
zeebeCloudClientId; + serverDefinition.zeebeClientId = zeebeCloudClientId; bpmEngineConfiguration.addExplicitServer(serverDefinition); diff --git a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineFactory.java b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineFactory.java index 06a959f..6cc16f4 100644 --- a/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineFactory.java +++ b/src/main/java/org/camunda/automator/bpmnengine/BpmnEngineFactory.java @@ -7,6 +7,7 @@ package org.camunda.automator.bpmnengine; import org.camunda.automator.bpmnengine.camunda7.BpmnEngineCamunda7; +import org.camunda.automator.bpmnengine.camunda8.BenchmarkStartPiExceptionHandlingStrategy; import org.camunda.automator.bpmnengine.camunda8.BpmnEngineCamunda8; import org.camunda.automator.bpmnengine.dummy.BpmnEngineDummy; import org.camunda.automator.configuration.BpmnEngineList; @@ -15,18 +16,28 @@ import java.util.EnumMap; import java.util.Map; -/** +/* * This can't be a Component, to be used in AutomatorAPI */ public class BpmnEngineFactory { private static final BpmnEngineFactory bpmnEngineFactory = new BpmnEngineFactory(); Map cacheEngine = new EnumMap<>(BpmnEngineList.CamundaEngine.class); + BenchmarkStartPiExceptionHandlingStrategy benchmarkStartPiExceptionHandlingStrategy = null; + + private BpmnEngineFactory() { + // use the getInstance() method + } public static BpmnEngineFactory getInstance() { return bpmnEngineFactory; } + public static BpmnEngineFactory getInstance(BenchmarkStartPiExceptionHandlingStrategy benchmarkStartPiExceptionHandlingStrategy) { + bpmnEngineFactory.benchmarkStartPiExceptionHandlingStrategy = benchmarkStartPiExceptionHandlingStrategy; + return bpmnEngineFactory; + } + public BpmnEngine getEngineFromConfiguration(BpmnEngineList.BpmnServerDefinition serverDefinition, boolean logDebug) throws AutomatorException { BpmnEngine engine = cacheEngine.get(serverDefinition.serverType); @@ -42,9 +53,13 @@ public BpmnEngine getEngineFromConfiguration(BpmnEngineList.BpmnServerDefinition engine = switch (serverDefinition.serverType) { case CAMUNDA_7 -> new BpmnEngineCamunda7(serverDefinition, logDebug); - case CAMUNDA_8 -> BpmnEngineCamunda8.getFromServerDefinition(serverDefinition, logDebug); + case CAMUNDA_8 -> + BpmnEngineCamunda8.getFromServerDefinition(serverDefinition, benchmarkStartPiExceptionHandlingStrategy, + logDebug); - case CAMUNDA_8_SAAS -> BpmnEngineCamunda8.getFromServerDefinition(serverDefinition, logDebug); + case CAMUNDA_8_SAAS -> + BpmnEngineCamunda8.getFromServerDefinition(serverDefinition, benchmarkStartPiExceptionHandlingStrategy, + logDebug); case DUMMY -> new BpmnEngineDummy(serverDefinition); diff --git a/src/main/java/org/camunda/automator/bpmnengine/camunda8/BpmnEngineCamunda8.java b/src/main/java/org/camunda/automator/bpmnengine/camunda8/BpmnEngineCamunda8.java index 68605cf..9404ffa 100644 --- a/src/main/java/org/camunda/automator/bpmnengine/camunda8/BpmnEngineCamunda8.java +++ b/src/main/java/org/camunda/automator/bpmnengine/camunda8/BpmnEngineCamunda8.java @@ -1,21 +1,36 @@ package org.camunda.automator.bpmnengine.camunda8; +import io.camunda.common.auth.Authentication; +import io.camunda.common.auth.JwtConfig; +import io.camunda.common.auth.JwtCredential; +import io.camunda.common.auth.Product; +import io.camunda.common.auth.SaaSAuthentication; +import io.camunda.common.auth.SaaSAuthenticationBuilder; +import io.camunda.common.auth.SimpleAuthentication; +import io.camunda.common.auth.SimpleConfig; +import io.camunda.common.auth.SimpleCredential; 
+import io.camunda.common.auth.identity.IdentityConfig; +import io.camunda.common.auth.identity.IdentityContainer; +import io.camunda.common.json.SdkObjectMapper; +import io.camunda.identity.sdk.Identity; +import io.camunda.identity.sdk.IdentityConfiguration; import io.camunda.operate.CamundaOperateClient; -import io.camunda.operate.auth.AuthInterface; -import io.camunda.operate.dto.FlownodeInstance; -import io.camunda.operate.dto.FlownodeInstanceState; -import io.camunda.operate.dto.ProcessInstance; -import io.camunda.operate.dto.ProcessInstanceState; -import io.camunda.operate.dto.SearchResult; +import io.camunda.operate.CamundaOperateClientBuilder; import io.camunda.operate.exception.OperateException; +import io.camunda.operate.model.FlowNodeInstance; +import io.camunda.operate.model.FlowNodeInstanceState; +import io.camunda.operate.model.ProcessInstance; +import io.camunda.operate.model.ProcessInstanceState; +import io.camunda.operate.model.SearchResult; import io.camunda.operate.search.DateFilter; -import io.camunda.operate.search.FlownodeInstanceFilter; +import io.camunda.operate.search.FlowNodeInstanceFilter; import io.camunda.operate.search.ProcessInstanceFilter; import io.camunda.operate.search.SearchQuery; import io.camunda.operate.search.Sort; import io.camunda.operate.search.SortOrder; import io.camunda.operate.search.VariableFilter; import io.camunda.tasklist.CamundaTaskListClient; +import io.camunda.tasklist.CamundaTaskListClientBuilder; import io.camunda.tasklist.dto.Pagination; import io.camunda.tasklist.dto.Task; import io.camunda.tasklist.dto.TaskList; @@ -44,9 +59,10 @@ import org.camunda.automator.engine.flow.FixedBackoffSupplier; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.springframework.beans.factory.annotation.Autowired; import java.io.File; +import java.net.URI; +import java.net.URL; import java.time.Duration; import java.util.ArrayList; import java.util.HashMap; @@ -61,7 +77,9 @@ public class BpmnEngineCamunda8 implements BpmnEngine { public static final String THIS_IS_A_COMPLETE_IMPOSSIBLE_VARIABLE_NAME = "ThisIsACompleteImpossibleVariableName"; public static final int SEARCH_MAX_SIZE = 100; + public static final String SAAS_AUTHENTICATE_URL = "https://login.cloud.camunda.io/oauth/token"; private final Logger logger = LoggerFactory.getLogger(BpmnEngineCamunda8.class); + private final BenchmarkStartPiExceptionHandlingStrategy exceptionHandlingStrategy; boolean hightFlowMode = false; /** * It is not possible to search user task for a specfic processInstance. So, to realize this, a marker is created in each process instance. 
Retrieving the user task, @@ -73,23 +91,23 @@ public class BpmnEngineCamunda8 implements BpmnEngine { private ZeebeClient zeebeClient; private CamundaOperateClient operateClient; private CamundaTaskListClient taskClient; - @Autowired - private BenchmarkStartPiExceptionHandlingStrategy exceptionHandlingStrategy; // Default private BpmnEngineList.CamundaEngine typeCamundaEngine = BpmnEngineList.CamundaEngine.CAMUNDA_8; - private BpmnEngineCamunda8() { + private BpmnEngineCamunda8(BenchmarkStartPiExceptionHandlingStrategy exceptionHandlingStrategy) { + this.exceptionHandlingStrategy = exceptionHandlingStrategy; } /** * Constructor from existing object * * @param serverDefinition server definition - * @param logDebug if true, operation will be log as debug level + * @param logDebug if true, operation will be logged as debug level */ public static BpmnEngineCamunda8 getFromServerDefinition(BpmnEngineList.BpmnServerDefinition serverDefinition, + BenchmarkStartPiExceptionHandlingStrategy benchmarkStartPiExceptionHandlingStrategy, boolean logDebug) { - BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(); + BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(benchmarkStartPiExceptionHandlingStrategy); bpmnEngineCamunda8.serverDefinition = serverDefinition; return bpmnEngineCamunda8; @@ -98,25 +116,26 @@ public static BpmnEngineCamunda8 getFromServerDefinition(BpmnEngineList.BpmnServ /** * Constructor to specify a Self Manage Zeebe Address por a Zeebe Saas * - * @param zeebeSelfGatewayAddress Self Manage : zeebe address - * @param zeebeSelfSecurityPlainText Self Manage: Plain text - * @param operateUrl URL to access Operate - * @param operateUserName Operate user name - * @param operateUserPassword Operate password - * @param tasklistUrl Url to access TaskList + * @param zeebeSelfGatewayAddress Self Manage : zeebe address + * @param zeebePlainText Self Manage: Plain text + * @param operateUrl URL to access Operate + * @param operateUserName Operate user name + * @param operateUserPassword Operate password + * @param tasklistUrl Url to access TaskList */ public static BpmnEngineCamunda8 getFromCamunda8(String zeebeSelfGatewayAddress, - String zeebeSelfSecurityPlainText, + Boolean zeebePlainText, String operateUrl, String operateUserName, String operateUserPassword, - String tasklistUrl) { - BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(); + String tasklistUrl, + BenchmarkStartPiExceptionHandlingStrategy benchmarkStartPiExceptionHandlingStrategy) { + BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(benchmarkStartPiExceptionHandlingStrategy); bpmnEngineCamunda8.serverDefinition = new BpmnEngineList.BpmnServerDefinition(); bpmnEngineCamunda8.serverDefinition.serverType = BpmnEngineList.CamundaEngine.CAMUNDA_8; bpmnEngineCamunda8.serverDefinition = new BpmnEngineList.BpmnServerDefinition(); bpmnEngineCamunda8.serverDefinition.zeebeGatewayAddress = zeebeSelfGatewayAddress; - bpmnEngineCamunda8.serverDefinition.zeebeSecurityPlainText = zeebeSelfSecurityPlainText; + bpmnEngineCamunda8.serverDefinition.zeebePlainText = zeebePlainText; /* @@ -133,7 +152,6 @@ public static BpmnEngineCamunda8 getFromCamunda8(String zeebeSelfGatewayAddress, /** * Constructor to specify a Self Manage Zeebe Address por a Zeebe Saas * - * @param zeebeSaasCloudRegister Saas Cloud Register information * @param zeebeSaasCloudRegion Saas Cloud region * @param zeebeSaasCloudClusterId Saas Cloud ClusterID * @param zeebeSaasCloudClientId Saas Cloud ClientID @@ -143,20 +161,18 @@ 
public static BpmnEngineCamunda8 getFromCamunda8(String zeebeSelfGatewayAddress, * @param operateUserPassword Operate password * @param tasklistUrl Url to access TaskList */ - public static BpmnEngineCamunda8 getFromCamunda8SaaS( - - String zeebeSaasCloudRegister, - String zeebeSaasCloudRegion, - String zeebeSaasCloudClusterId, - String zeebeSaasCloudClientId, - String zeebeSaasOAuthUrl, - String zeebeSaasAudience, - String zeebeSaasClientSecret, - String operateUrl, - String operateUserName, - String operateUserPassword, - String tasklistUrl) { - BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(); + public static BpmnEngineCamunda8 getFromCamunda8SaaS(String zeebeSaasCloudRegion, + String zeebeSaasCloudClusterId, + String zeebeSaasAudience, + String zeebeSaasCloudClientId, + String zeebeSaasClientSecret, + String zeebeSaasAuthenticationUrl, + String operateUrl, + String operateUserName, + String operateUserPassword, + String tasklistUrl, + BenchmarkStartPiExceptionHandlingStrategy benchmarkStartPiExceptionHandlingStrategy) { + BpmnEngineCamunda8 bpmnEngineCamunda8 = new BpmnEngineCamunda8(benchmarkStartPiExceptionHandlingStrategy); bpmnEngineCamunda8.serverDefinition = new BpmnEngineList.BpmnServerDefinition(); bpmnEngineCamunda8.serverDefinition.serverType = BpmnEngineList.CamundaEngine.CAMUNDA_8; @@ -166,10 +182,10 @@ public static BpmnEngineCamunda8 getFromCamunda8SaaS( */ bpmnEngineCamunda8.serverDefinition.zeebeSaasRegion = zeebeSaasCloudRegion; bpmnEngineCamunda8.serverDefinition.zeebeSaasClusterId = zeebeSaasCloudClusterId; - bpmnEngineCamunda8.serverDefinition.zeebeSaasClientId = zeebeSaasCloudClientId; - bpmnEngineCamunda8.serverDefinition.zeebeSaasClientSecret = zeebeSaasClientSecret; - bpmnEngineCamunda8.serverDefinition.zeebeSaasOAuthUrl = zeebeSaasOAuthUrl; - bpmnEngineCamunda8.serverDefinition.zeebeSaasAudience = zeebeSaasAudience; + bpmnEngineCamunda8.serverDefinition.zeebeClientId = zeebeSaasCloudClientId; + bpmnEngineCamunda8.serverDefinition.zeebeClientSecret = zeebeSaasClientSecret; + bpmnEngineCamunda8.serverDefinition.authenticationUrl = zeebeSaasAuthenticationUrl; + bpmnEngineCamunda8.serverDefinition.zeebeAudience = zeebeSaasAudience; /* * Connection to Operate @@ -188,151 +204,21 @@ public void init() { public void connection() throws AutomatorException { - final String defaultAddress = "localhost:26500"; - final String envVarAddress = System.getenv("ZEEBE_ADDRESS"); - - // connection is critical, so let build the analysis - StringBuilder analysis = new StringBuilder(); - - - boolean isOk = true; - - isOk = stillOk(serverDefinition.name, "ZeebeConnection", analysis, false, isOk); this.typeCamundaEngine = this.serverDefinition.serverType; - - final ZeebeClientBuilder clientBuilder; - AuthInterface saOperate; - io.camunda.tasklist.auth.AuthInterface saTaskList; - - // ---------------------------- Camunda Saas - if (BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS.equals(this.typeCamundaEngine)) { - String gatewayAddressCloud = - serverDefinition.zeebeSaasClusterId + "." + serverDefinition.zeebeSaasRegion + ".zeebe.camunda.io:443"; - stillOk(gatewayAddressCloud, "GatewayAddress", analysis, false, true); - stillOk(serverDefinition.zeebeSaasClientId, "ClientId", analysis, false, true); - - /* Connect to Camunda Cloud Cluster, assumes that credentials are set in environment variables. 
- * See JavaDoc on class level for details - */ - isOk = stillOk(serverDefinition.zeebeSaasOAuthUrl, "OAutorisationServerUrl", analysis, true, isOk); - isOk = stillOk(serverDefinition.zeebeSaasClientId, "ClientId", analysis, true, isOk); - isOk = stillOk(serverDefinition.zeebeSaasClientSecret, "ClientSecret", analysis, true, isOk); - - try { - String audience = serverDefinition.zeebeSaasAudience != null ? serverDefinition.zeebeSaasAudience : ""; - OAuthCredentialsProvider credentialsProvider = new OAuthCredentialsProviderBuilder() // formatting - .authorizationServerUrl(serverDefinition.zeebeSaasOAuthUrl) - .audience(audience) - .clientId(serverDefinition.zeebeSaasClientId) - .clientSecret(serverDefinition.zeebeSaasClientSecret) - .build(); - - clientBuilder = ZeebeClient.newClientBuilder() - .gatewayAddress(gatewayAddressCloud) - .credentialsProvider(credentialsProvider); - - } catch (Exception e) { - zeebeClient = null; - throw new AutomatorException( - "BadCredential[" + serverDefinition.name + "] Analysis:" + analysis + " : " + e.getMessage()); - } - - saOperate = new io.camunda.operate.auth.SaasAuthentication(serverDefinition.zeebeSaasClientId, - serverDefinition.zeebeSaasClientSecret); - saTaskList = new io.camunda.tasklist.auth.SaasAuthentication(serverDefinition.zeebeSaasClientId, - serverDefinition.zeebeSaasClientSecret); - - typeCamundaEngine = BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS; - - //---------------------------- Camunda 8 Self Manage - } else if (serverDefinition.zeebeGatewayAddress != null && !this.serverDefinition.zeebeGatewayAddress.trim() - .isEmpty()) { - isOk = stillOk(serverDefinition.zeebeGatewayAddress, "GatewayAddress", analysis, true, isOk); - - // connect to local deployment; assumes that authentication is disabled - clientBuilder = ZeebeClient.newClientBuilder() - .gatewayAddress(serverDefinition.zeebeGatewayAddress) - .usePlaintext(); - saOperate = new io.camunda.operate.auth.SimpleAuthentication(serverDefinition.operateUserName, - serverDefinition.operateUserPassword, serverDefinition.operateUrl); - saTaskList = new io.camunda.tasklist.auth.SimpleAuthentication(serverDefinition.operateUserName, - serverDefinition.operateUserPassword); - typeCamundaEngine = BpmnEngineList.CamundaEngine.CAMUNDA_8; - } else - throw new AutomatorException("Invalid configuration"); - - // ---------------- connection - boolean zeebeOk = false; - boolean operateOk = false; - boolean tasklistOk = false; + StringBuilder analysis = new StringBuilder(); try { - isOk = stillOk(serverDefinition.workerExecutionThreads, "ExecutionThread", analysis, false, isOk); - - analysis.append(" ExecutionThread["); - analysis.append(serverDefinition.workerExecutionThreads); - analysis.append("] MaxJobsActive["); - analysis.append(serverDefinition.workerMaxJobsActive); - analysis.append("] "); - if (serverDefinition.workerMaxJobsActive == -1) { - serverDefinition.workerMaxJobsActive = serverDefinition.workerExecutionThreads; - analysis.append("No workerMaxJobsActive defined, align to the number of threads, "); - } - if (serverDefinition.workerExecutionThreads > serverDefinition.workerMaxJobsActive) { - logger.error( - "Camunda8 [{}] Incorrect definition: the workerExecutionThreads {} must be <= workerMaxJobsActive {} , else ZeebeClient will not fetch enough jobs to feed threads", - serverDefinition.name, serverDefinition.workerExecutionThreads, serverDefinition.workerMaxJobsActive); - } - - if (!isOk) - throw new AutomatorException("Invalid configuration " + analysis); - - 
clientBuilder.numJobWorkerExecutionThreads(serverDefinition.workerExecutionThreads); - clientBuilder.defaultJobWorkerMaxJobsActive(serverDefinition.workerMaxJobsActive); - - analysis.append("Zeebe connection..."); - zeebeClient = clientBuilder.build(); + connectZeebe(analysis); + connectOperate(analysis); + connectTaskList(analysis); + logger.info("Zeebe: OK, Operate: OK, TaskList:OK {}", analysis); - // simple test - Topology join = zeebeClient.newTopologyRequest().send().join(); - - // Actually, if an error arrived, an exception is thrown - analysis.append(join != null ? "successfully," : "error"); - zeebeOk = join != null; - - isOk = stillOk(serverDefinition.operateUrl, "operateUrl", analysis, false, isOk); - - analysis.append("Operate connection..."); - operateClient = new CamundaOperateClient.Builder().operateUrl(serverDefinition.operateUrl) - .authentication(saOperate) - .build(); - analysis.append("successfully,"); - operateOk = true; - - // TaskList is not mandatory - if (serverDefinition.taskListUrl != null && !serverDefinition.taskListUrl.isEmpty()) { - isOk = stillOk(serverDefinition.taskListUrl, "taskListUrl", analysis, false, isOk); - analysis.append("Tasklist ..."); - - taskClient = new CamundaTaskListClient.Builder().taskListUrl(serverDefinition.taskListUrl) - .authentication(saTaskList) - .build(); - analysis.append("successfully,"); - tasklistOk = true; - } - //get tasks assigned to demo - logger.info("Zeebe: OK, Operate: OK, TaskList:OK " + analysis.toString()); - - } catch (Exception e) { + } catch (AutomatorException e) { zeebeClient = null; - throw new AutomatorException("NoConnection[" + serverDefinition.name // server name - + "] Zeebe:" + (zeebeOk ? "OK" : "FAIL") // zeebe status - + ", Operate:" + (operateOk ? "OK" : "FAIL") // Operate status - + ", Tasklist:" + (tasklistOk ? 
"OK" : "FAIL") // taskList status - + ", Analysis:" + analysis + " fail : " + e.getMessage()); + throw e; } } - public void disconnection() throws AutomatorException { + public void disconnection() { // nothing to do here } @@ -510,7 +396,7 @@ public List searchUserTasks(String userTaskId, int maxResult) throws Aut @Override public RegisteredTask registerServiceTask(String workerId, String topic, - boolean streamEnable, + boolean streamEnabled, Duration lockTime, Object jobHandler, FixedBackoffSupplier backoffSupplier) { @@ -530,7 +416,7 @@ public RegisteredTask registerServiceTask(String workerId, .jobType(topic) .handler((JobHandler) jobHandler) .timeout(lockTime) - .streamEnabled(streamEnable) // according the parameter + .streamEnabled(streamEnabled) .name(workerId); if (backoffSupplier != null) { @@ -555,9 +441,13 @@ public void executeUserTask(String userTaskId, String userId, Map searchServiceTasks(String processInstanceId, String serviceTaskId, String topic, int maxResult) throws AutomatorException { try { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } long processInstanceIdLong = Long.parseLong(processInstanceId); - ProcessInstanceFilter processInstanceFilter = new ProcessInstanceFilter.Builder().parentKey(processInstanceIdLong) + ProcessInstanceFilter processInstanceFilter = ProcessInstanceFilter.builder() + .parentKey(processInstanceIdLong) .build(); SearchQuery processInstanceQuery = new SearchQuery.Builder().filter(processInstanceFilter).size(100).build(); @@ -630,17 +520,22 @@ public void throwBpmnServiceTask(String serviceTaskId, public List searchTasksByProcessInstanceId(String processInstanceId, String taskId, int maxResult) throws AutomatorException { try { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } + // impossible to filter by the task name/ task tyoe, so be ready to get a lot of flowNode and search the correct onee - FlownodeInstanceFilter flownodeFilter = new FlownodeInstanceFilter.Builder().processInstanceKey( - Long.valueOf(processInstanceId)).build(); + FlowNodeInstanceFilter flownodeFilter = FlowNodeInstanceFilter.builder() + .processInstanceKey(Long.valueOf(processInstanceId)) + .build(); SearchQuery flownodeQuery = new SearchQuery.Builder().filter(flownodeFilter).size(maxResult).build(); - List flownodes = operateClient.searchFlownodeInstances(flownodeQuery); + List flownodes = operateClient.searchFlowNodeInstances(flownodeQuery); return flownodes.stream().filter(t -> taskId.equals(t.getFlowNodeId())).map(t -> { TaskDescription taskDescription = new TaskDescription(); taskDescription.taskId = t.getFlowNodeId(); taskDescription.type = getTaskType(t.getType()); // to implement - taskDescription.isCompleted = FlownodeInstanceState.COMPLETED.equals(t.getState()); // to implement + taskDescription.isCompleted = FlowNodeInstanceState.COMPLETED.equals(t.getState()); // to implement return taskDescription; }).toList(); @@ -653,9 +548,12 @@ public List searchProcessInstanceByVariable(String processId Map filterVariables, int maxResult) throws AutomatorException { try { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } + // impossible to filter by the task name/ task tyoe, so be ready to get a lot of flowNode and search the correct onee - ProcessInstanceFilter processInstanceFilter = new ProcessInstanceFilter.Builder().bpmnProcessId(processId) - .build(); + ProcessInstanceFilter processInstanceFilter 
= ProcessInstanceFilter.builder().bpmnProcessId(processId).build(); SearchQuery processInstanceQuery = new SearchQuery.Builder().filter(processInstanceFilter) .size(maxResult) @@ -707,12 +605,17 @@ else if (taskTypeC8.equals("PARALLEL_GATEWAY")) @Override public Map getVariables(String processInstanceId) throws AutomatorException { try { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } + // impossible to filter by the task name/ task tyoe, so be ready to get a lot of flowNode and search the correct onee - VariableFilter variableFilter = new VariableFilter.Builder().processInstanceKey(Long.valueOf(processInstanceId)) + VariableFilter variableFilter = VariableFilter.builder() + .processInstanceKey(Long.valueOf(processInstanceId)) .build(); SearchQuery variableQuery = new SearchQuery.Builder().filter(variableFilter).build(); - List listVariables = operateClient.searchVariables(variableQuery); + List listVariables = operateClient.searchVariables(variableQuery); Map variables = new HashMap<>(); listVariables.forEach(t -> variables.put(t.getName(), t.getValue())); @@ -730,11 +633,15 @@ public Map getVariables(String processInstanceId) throws Automat /* ******************************************************************** */ public long countNumberOfProcessInstancesCreated(String processId, DateFilter startDate, DateFilter endDate) throws AutomatorException { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } + SearchQuery.Builder queryBuilder = new SearchQuery.Builder(); try { int cumul = 0; SearchResult searchResult = null; - queryBuilder = queryBuilder.filter(new ProcessInstanceFilter.Builder().bpmnProcessId(processId).build()); + queryBuilder = queryBuilder.filter(ProcessInstanceFilter.builder().bpmnProcessId(processId).build()); queryBuilder.sort(new Sort("key", SortOrder.ASC)); int maxLoop = 0; do { @@ -757,12 +664,16 @@ public long countNumberOfProcessInstancesCreated(String processId, DateFilter st public long countNumberOfProcessInstancesEnded(String processId, DateFilter startDate, DateFilter endDate) throws AutomatorException { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } + SearchQuery.Builder queryBuilder = new SearchQuery.Builder(); try { int cumul = 0; SearchResult searchResult = null; - queryBuilder = queryBuilder.filter(new ProcessInstanceFilter.Builder().bpmnProcessId(processId) + queryBuilder = queryBuilder.filter(ProcessInstanceFilter.builder().bpmnProcessId(processId) // .startDate(startDate) // .endDate(endDate) .state(ProcessInstanceState.COMPLETED).build()); @@ -793,24 +704,27 @@ public long countNumberOfProcessInstancesEnded(String processId, DateFilter star /* ******************************************************************** */ public long countNumberOfTasks(String processId, String taskId) throws AutomatorException { + if (operateClient == null) { + throw new AutomatorException("No Operate connection was provided"); + } try { int cumul = 0; - SearchResult searchResult = null; + SearchResult searchResult = null; int maxLoop = 0; do { maxLoop++; SearchQuery.Builder queryBuilder = new SearchQuery.Builder(); - queryBuilder = queryBuilder.filter(new FlownodeInstanceFilter.Builder().flowNodeId(taskId).build()); + queryBuilder = queryBuilder.filter(FlowNodeInstanceFilter.builder().flowNodeId(taskId).build()); queryBuilder.sort(new Sort("key", SortOrder.ASC)); if (searchResult != null && 
!searchResult.getItems().isEmpty()) { queryBuilder.searchAfter(searchResult.getSortValues()); } SearchQuery searchQuery = queryBuilder.build(); searchQuery.setSize(SEARCH_MAX_SIZE); - searchResult = operateClient.searchFlownodeInstanceResults(searchQuery); + searchResult = operateClient.searchFlowNodeInstanceResults(searchQuery); cumul += (long) searchResult.getItems().size(); } while (searchResult.getItems().size() >= SEARCH_MAX_SIZE && maxLoop < 1000); return cumul; @@ -850,7 +764,7 @@ public String getSignature() { String signature = typeCamundaEngine.toString() + " "; if (typeCamundaEngine.equals(BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS)) signature += - "Cloud ClientId[" + serverDefinition.zeebeSaasClientId + "] ClusterId[" + serverDefinition.zeebeSaasClusterId + "Cloud ClientId[" + serverDefinition.zeebeClientId + "] ClusterId[" + serverDefinition.zeebeSaasClusterId + "]"; else signature += "Address[" + serverDefinition.zeebeGatewayAddress + "]"; @@ -872,30 +786,455 @@ public ZeebeClient getZeebeClient() { return zeebeClient; } + + + /* ******************************************************************** */ + /* */ + /* Connection to each component */ + /* */ + /* ******************************************************************** */ + + private void connectZeebe(StringBuilder analysis) throws AutomatorException { + + // connection is critical, so let build the analysis + + boolean isOk = true; + + isOk = stillOk(serverDefinition.name, "ZeebeConnection", analysis, false, true, isOk); + this.typeCamundaEngine = this.serverDefinition.serverType; + + ZeebeClientBuilder clientBuilder; + + // ---------------------------- Camunda Saas + if (BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS.equals(this.typeCamundaEngine)) { + String gatewayAddressCloud = + serverDefinition.zeebeSaasClusterId + "." + serverDefinition.zeebeSaasRegion + ".zeebe.camunda.io:443"; + isOk = stillOk(gatewayAddressCloud, "GatewayAddress", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientId, "ClientId", analysis, true, true, isOk); + + /* Connect to Camunda Cloud Cluster, assumes that credentials are set in environment variables. + * See JavaDoc on class level for details + */ + isOk = stillOk(serverDefinition.authenticationUrl, "authenticationUrl", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeAudience, "zeebeAudience", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientId, "ClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientSecret, "ClientSecret", analysis, true, true, isOk); + + try { + + OAuthCredentialsProvider credentialsProvider = new OAuthCredentialsProviderBuilder() // formatting + .authorizationServerUrl( + serverDefinition.authenticationUrl != null ? 
serverDefinition.authenticationUrl : SAAS_AUTHENTICATE_URL) + .audience(serverDefinition.zeebeAudience) + .clientId(serverDefinition.zeebeClientId) + .clientSecret(serverDefinition.zeebeClientSecret) + .build(); + + clientBuilder = ZeebeClient.newClientBuilder() + .gatewayAddress(gatewayAddressCloud) + .credentialsProvider(credentialsProvider); + + } catch (Exception e) { + zeebeClient = null; + throw new AutomatorException( + "BadCredential[" + serverDefinition.name + "] Analysis:" + analysis + " : " + e.getMessage()); + } + } + + //---------------------------- Camunda 8 Self Manage + else if (BpmnEngineList.CamundaEngine.CAMUNDA_8.equals(this.typeCamundaEngine)) { + isOk = stillOk(serverDefinition.zeebeGatewayAddress, "GatewayAddress", analysis, true, true, isOk); + if (serverDefinition.isAuthenticationUrl()) { + isOk = stillOk(serverDefinition.authenticationUrl, "authenticationUrl", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeAudience, "zeebeAudience", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientId, "zeebeClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientSecret, "zeebeClientSecret", analysis, true, false, isOk); + isOk = stillOk(serverDefinition.zeebePlainText, "zeebePlainText", analysis, true, true, isOk); + + try { + OAuthCredentialsProvider credentialsProvider = new OAuthCredentialsProviderBuilder() // builder + .authorizationServerUrl(serverDefinition.authenticationUrl) + .audience(serverDefinition.zeebeAudience) + .clientId(serverDefinition.zeebeClientId) + .clientSecret(serverDefinition.zeebeClientSecret) + .build(); + clientBuilder = ZeebeClient.newClientBuilder() + .gatewayAddress(serverDefinition.zeebeGatewayAddress) + .defaultTenantId(serverDefinition.zeebeTenantId == null ? 
"" : serverDefinition.zeebeTenantId) + .credentialsProvider(credentialsProvider); + if (Boolean.TRUE.equals(serverDefinition.zeebePlainText)) + clientBuilder.usePlaintext(); + + } catch (Exception e) { + zeebeClient = null; + logger.error("Can't connect to Server[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "BadCredential[" + serverDefinition.name + "] Analysis:" + analysis + " : " + e.getMessage()); + } + } else { + // connect to local deployment; assumes that authentication is disabled + clientBuilder = ZeebeClient.newClientBuilder() + .gatewayAddress(serverDefinition.zeebeGatewayAddress) + .usePlaintext(); + } + } else + throw new AutomatorException("Invalid configuration"); + + // ---------------- connection + try { + isOk = stillOk(serverDefinition.workerExecutionThreads, "ExecutionThread", analysis, false, true, isOk); + + analysis.append(" ExecutionThread["); + analysis.append(serverDefinition.workerExecutionThreads); + analysis.append("] MaxJobsActive["); + analysis.append(serverDefinition.workerMaxJobsActive); + analysis.append("] "); + if (serverDefinition.workerMaxJobsActive == -1) { + serverDefinition.workerMaxJobsActive = serverDefinition.workerExecutionThreads; + analysis.append("No workerMaxJobsActive defined, align to the number of threads, "); + } + if (serverDefinition.workerExecutionThreads > serverDefinition.workerMaxJobsActive) { + logger.error( + "Camunda8 [{}] Incorrect definition: the workerExecutionThreads {} must be <= workerMaxJobsActive {} , else ZeebeClient will not fetch enough jobs to feed threads", + serverDefinition.name, serverDefinition.workerExecutionThreads, serverDefinition.workerMaxJobsActive); + } + + if (!isOk) + throw new AutomatorException("Invalid configuration " + analysis); + + clientBuilder.numJobWorkerExecutionThreads(serverDefinition.workerExecutionThreads); + clientBuilder.defaultJobWorkerMaxJobsActive(serverDefinition.workerMaxJobsActive); + + analysis.append("Zeebe connection..."); + zeebeClient = clientBuilder.build(); + + // simple test + Topology join = zeebeClient.newTopologyRequest().send().join(); + + // Actually, if an error arrived, an exception is thrown + + analysis.append(join != null ? 
"successfully, " : "error, "); + + } catch (Exception e) { + zeebeClient = null; + logger.error("Can't connect to Server[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to Server[" + serverDefinition.name + "] Analysis:" + analysis + " Fail : " + e.getMessage()); + } + } + + /** + * Connect Operate + * + * @param analysis to cpmplete the analysis + * @throws AutomatorException in case of error + */ + private void connectOperate(StringBuilder analysis) throws AutomatorException { + if (!serverDefinition.isOperate()) { + analysis.append("No operate connection required, "); + return; + } + analysis.append("Operate connection..."); + + boolean isOk = true; + isOk = stillOk(serverDefinition.operateUrl, "operateUrl", analysis, true, true, isOk); + + CamundaOperateClientBuilder camundaOperateClientBuilder = new CamundaOperateClientBuilder(); + // ---------------------------- Camunda Saas + if (BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS.equals(this.typeCamundaEngine)) { + + try { + isOk = stillOk(serverDefinition.zeebeSaasRegion, "zeebeSaasRegion", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeSaasClusterId, "zeebeSaasClusterId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientId, "zeebeClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeClientSecret, "zeebeClientSecret", analysis, true, false, isOk); + + URL operateUrl = URI.create("https://" + serverDefinition.zeebeSaasRegion + ".operate.camunda.io/" + + serverDefinition.zeebeSaasClusterId).toURL(); + + SaaSAuthenticationBuilder saaSAuthenticationBuilder = SaaSAuthentication.builder(); + JwtConfig jwtConfig = new JwtConfig(); + jwtConfig.addProduct(Product.TASKLIST, + new JwtCredential(serverDefinition.zeebeClientId, serverDefinition.zeebeClientSecret, + serverDefinition.operateAudience != null ? serverDefinition.operateAudience : "operate.camunda.io", + serverDefinition.authenticationUrl != null ? 
+ serverDefinition.authenticationUrl : + SAAS_AUTHENTICATE_URL)); + + Authentication saasAuthentication = SaaSAuthentication.builder() + .withJwtConfig(jwtConfig) + .withJsonMapper(new SdkObjectMapper()) + .build(); + + camundaOperateClientBuilder.authentication(saasAuthentication) + .operateUrl(serverDefinition.operateUrl) + .setup() + .build(); + + } catch (Exception e) { + zeebeClient = null; + logger.error("Can't connect to SaaS environemnt[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to SaaS environment[" + serverDefinition.name + "] Analysis:" + analysis + " fail : " + + e.getMessage()); + } + + //---------------------------- Camunda 8 Self Manage + } else if (BpmnEngineList.CamundaEngine.CAMUNDA_8.equals(this.typeCamundaEngine)) { + + isOk = stillOk(serverDefinition.zeebeGatewayAddress, "GatewayAddress", analysis, true, true, isOk); + + try { + if (serverDefinition.isAuthenticationUrl()) { + isOk = stillOk(serverDefinition.authenticationUrl, "authenticationUrl", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.operateClientId, "operateClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.operateClientSecret, "operateClientSecret", analysis, true, false, isOk); + + IdentityConfiguration identityConfiguration = new IdentityConfiguration.Builder().withBaseUrl( + serverDefinition.identityUrl) + .withIssuer(serverDefinition.authenticationUrl) + .withIssuerBackendUrl(serverDefinition.authenticationUrl) + .withClientId(serverDefinition.operateClientId) + .withClientSecret(serverDefinition.operateClientSecret) + .withAudience(serverDefinition.operateAudience) + .build(); + Identity identity = new Identity(identityConfiguration); + + IdentityConfig identityConfig = new IdentityConfig(); + identityConfig.addProduct(Product.OPERATE, new IdentityContainer(identity, identityConfiguration)); + + JwtConfig jwtConfig = new JwtConfig(); + jwtConfig.addProduct(Product.OPERATE, new JwtCredential(serverDefinition.operateClientId, // clientId + serverDefinition.operateClientSecret, // clientSecret + "zeebe-api", // audience + serverDefinition.authenticationUrl)); + + io.camunda.common.auth.SelfManagedAuthenticationBuilder identityAuthenticationBuilder = io.camunda.common.auth.SelfManagedAuthentication.builder(); + identityAuthenticationBuilder.withJwtConfig(jwtConfig); + identityAuthenticationBuilder.withIdentityConfig(identityConfig); + + Authentication identityAuthentication = identityAuthenticationBuilder.build(); + camundaOperateClientBuilder.authentication(identityAuthentication) + .operateUrl(serverDefinition.operateUrl) + .setup() + .build(); + + } else { + // Simple authentication + isOk = stillOk(serverDefinition.operateUserName, "operateUserName", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.operateUserPassword, "operateUserPassword", analysis, true, false, isOk); + + SimpleCredential simpleCredential = new SimpleCredential(serverDefinition.operateUrl, + serverDefinition.operateUserName, serverDefinition.operateUserPassword); + + SimpleConfig jwtConfig = new io.camunda.common.auth.SimpleConfig(); + jwtConfig.addProduct(Product.OPERATE, simpleCredential); + + io.camunda.common.auth.SimpleAuthenticationBuilder simpleAuthenticationBuilder = SimpleAuthentication.builder(); + simpleAuthenticationBuilder.withSimpleConfig(jwtConfig); + + Authentication simpleAuthentication = simpleAuthenticationBuilder.build(); + camundaOperateClientBuilder.authentication(simpleAuthentication) + 
.operateUrl(serverDefinition.operateUrl) + .setup() + .build(); + } + } catch (Exception e) { + logger.error("Can't connect to SaaS environment[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to SaaS environment[" + serverDefinition.name + "] Analysis:" + analysis + " fail : " + + e.getMessage()); + } + + } else + throw new AutomatorException("Invalid configuration"); + + if (!isOk) + throw new AutomatorException("Invalid configuration " + analysis); + + // ---------------- connection + try { + + operateClient = camundaOperateClientBuilder.build(); + + analysis.append("successfully, "); + + } catch (Exception e) { + logger.error("Can't connect to Server[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to Server[" + serverDefinition.name + "] Analysis:" + analysis + " Fail : " + e.getMessage()); + } + } + + /** + * Connect to TaskList + * + * @param analysis complete the analysis + * @throws AutomatorException in case of error + */ + private void connectTaskList(StringBuilder analysis) throws AutomatorException { + + if (!serverDefinition.isTaskList()) { + analysis.append("No TaskList connection required, "); + return; + } + analysis.append("Tasklist ..."); + + boolean isOk = true; + isOk = stillOk(serverDefinition.taskListUrl, "taskListUrl", analysis, true, true, isOk); + + CamundaTaskListClientBuilder taskListBuilder = CamundaTaskListClient.builder(); + // ---------------------------- Camunda Saas + if (BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS.equals(this.typeCamundaEngine)) { + try { + isOk = stillOk(serverDefinition.zeebeSaasRegion, "zeebeSaasRegion", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.zeebeSaasClusterId, "zeebeSaasClusterId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.taskListClientId, "taskListClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.taskListClientSecret, "taskListClientSecret", analysis, true, false, isOk); + + String taskListUrl = "https://" + serverDefinition.zeebeSaasRegion + ".tasklist.camunda.io/" + + serverDefinition.zeebeSaasClusterId; + + taskListBuilder.taskListUrl(taskListUrl) + .saaSAuthentication(serverDefinition.taskListClientId, serverDefinition.taskListClientSecret); + } catch (Exception e) { + logger.error("Can't connect to SaaS environemnt[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to SaaS environment[" + serverDefinition.name + "] Analysis:" + analysis + " fail : " + + e.getMessage()); + } + + //---------------------------- Camunda 8 Self Manage + } else if (BpmnEngineList.CamundaEngine.CAMUNDA_8.equals(this.typeCamundaEngine)) { + + if (serverDefinition.isAuthenticationUrl()) { + isOk = stillOk(serverDefinition.taskListClientId, "taskListClientId", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.taskListClientSecret, "taskListClientSecret", analysis, true, false, isOk); + isOk = stillOk(serverDefinition.authenticationUrl, "authenticationUrl", analysis, true, true, isOk); + isOk = stillOk(serverDefinition.taskListKeycloakUrl, "taskListKeycloakUrl", analysis, true, true, isOk); + + taskListBuilder.taskListUrl(serverDefinition.taskListUrl) + .selfManagedAuthentication(serverDefinition.taskListClientId, serverDefinition.taskListClientSecret, + serverDefinition.taskListKeycloakUrl); + } else { + isOk = stillOk(serverDefinition.taskListUserName, "User", analysis, true, true, isOk); + isOk = 
stillOk(serverDefinition.taskListUserPassword, "Password", analysis, true, false, isOk); + + SimpleConfig simpleConf = new SimpleConfig(); + simpleConf.addProduct(Product.TASKLIST, + new SimpleCredential(serverDefinition.taskListUrl, serverDefinition.taskListUserName, + serverDefinition.taskListUserPassword)); + Authentication auth = SimpleAuthentication.builder().withSimpleConfig(simpleConf).build(); + + taskListBuilder.taskListUrl(serverDefinition.taskListUrl) + .authentication(auth) + .cookieExpiration(Duration.ofSeconds(5)); + } + } else + throw new AutomatorException("Invalid configuration"); + + if (!isOk) + throw new AutomatorException("Invalid configuration " + analysis); + + // ---------------- connection + try { + + taskClient = taskListBuilder.build(); + analysis.append("successfully, "); + + } catch (Exception e) { + logger.error("Can't connect to Server[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to Server[" + serverDefinition.name + "] Analysis:" + analysis + " Fail : " + e.getMessage()); + } + + /* 1.6.1 + boolean isOk = true; + io.camunda.tasklist.auth.AuthInterface saTaskList; + + // ---------------------------- Camunda Saas + if (BpmnEngineList.CamundaEngine.CAMUNDA_8_SAAS.equals(this.typeCamundaEngine)) { + try { + saTaskList = new io.camunda.tasklist.auth.SaasAuthentication(serverDefinition.zeebeSaasClientId, + serverDefinition.zeebeSaasClientSecret); + } catch (Exception e) { + logger.error("Can't connect to SaaS environment[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to SaaS environment[" + serverDefinition.name + "] Analysis:" + analysis + " fail : " + + e.getMessage()); + } + + //---------------------------- Camunda 8 Self Manage + } else if (BpmnEngineList.CamundaEngine.CAMUNDA_8.equals(this.typeCamundaEngine)) { + saTaskList = new io.camunda.tasklist.auth.SimpleAuthentication(serverDefinition.operateUserName, + serverDefinition.operateUserPassword); + } else + throw new AutomatorException("Invalid configuration"); + + if (!isOk) + throw new AutomatorException("Invalid configuration " + analysis); + + // ---------------- connection + try { + isOk = stillOk(serverDefinition.taskListUrl, "taskListUrl", analysis, false, isOk); + analysis.append("Tasklist ..."); + + taskClient = new CamundaTaskListClient.Builder().taskListUrl(serverDefinition.taskListUrl) + .authentication(saTaskList) + .build(); + analysis.append("successfully, "); + //get tasks assigned to demo + logger.info("Zeebe: OK, Operate: OK, TaskList:OK " + analysis); + + } catch (Exception e) { + logger.error("Can't connect to Server[{}] Analysis:{} : {}", serverDefinition.name, analysis, e); + throw new AutomatorException( + "Can't connect to Server[" + serverDefinition.name + "] Analysis:" + analysis + " Fail : " + e.getMessage()); + } + */ + + } + /** * add in analysis and check the consistence * - * @param value value to check - * @param message name of parameter - * @param analysis analysis builder - * @param check true if the value must not be null or empty - * @param wasOkBefore previous value, is returned if this check is Ok + * @param value value to check + * @param message name of parameter + * @param analysis analysis builder + * @param check true if the value must not be null or empty + * @param displayValueInAnalysis true if the value can be added in the analysis + * @param wasOkBefore previous value, is returned if this check is Ok * @return previous value is ok 
   *         false otherwise
   */
-  private boolean stillOk(Object value, String message, StringBuilder analysis, boolean check, boolean wasOkBefore) {
+  private boolean stillOk(Object value,
+                          String message,
+                          StringBuilder analysis,
+                          boolean check,
+                          boolean displayValueInAnalysis,
+                          boolean wasOkBefore) {
     analysis.append(message);
-    analysis.append(" [");
-    analysis.append(value);
-    analysis.append(" ]");
+    analysis.append("[");
+    analysis.append(getDisplayValue(value, displayValueInAnalysis));
+    analysis.append("], ");

     if (check) {
-      if (value == null || (value instanceof String && ((String) value).isEmpty())) {
+      if (value == null || (value instanceof String valueString && valueString.isEmpty())) {
         analysis.append("No ");
         analysis.append(message);
+        logger.error("Check failed {} value:[{}]", message, getDisplayValue(value, displayValueInAnalysis));
         return false;
       }
     }
     return wasOkBefore;
   }

+  private String getDisplayValue(Object value, boolean displayValueInAnalysis) {
+    if (value == null)
+      return "null";
+    if (displayValueInAnalysis)
+      return value.toString();
+    if (value.toString().length() <= 3)
+      return "***";
+    return value.toString().substring(0, 3) + "***";
+  }
 }
diff --git a/src/main/java/org/camunda/automator/configuration/BpmnEngineList.java b/src/main/java/org/camunda/automator/configuration/BpmnEngineList.java
index ff57985..3d11dfc 100644
--- a/src/main/java/org/camunda/automator/configuration/BpmnEngineList.java
+++ b/src/main/java/org/camunda/automator/configuration/BpmnEngineList.java
@@ -27,9 +27,22 @@ public class BpmnEngineList {
   public static final String CONF_WORKER_MAX_JOBS_ACTIVE = "workerMaxJobsActive";
   public static final String CONF_WORKER_EXECUTION_THREADS = "workerExecutionThreads";
   public static final String CONF_TASK_LIST_URL = "taskListUrl";
+  public static final String CONF_TASK_LIST_USER = "taskListUserName";
+  public static final String CONF_TASK_LIST_PASSWORD = "taskListUserPassword";
+  public static final String CONF_TASK_LIST_CLIENT_ID = "taskListClientId";
+  public static final String CONF_TASK_LIST_CLIENT_SECRET = "taskListClientSecret";
+  // Example taskListKeycloakUrl: "http://localhost:18080/auth/realms/camunda-platform"
+  public static final String CONF_TASK_LIST_KEYCLOAK_URL = "taskListKeycloakUrl";
+
+  public static final String CONF_IDENTITY_URL = "identityUrl";
   public static final String CONF_OPERATE_URL = "operateUrl";
   public static final String CONF_OPERATE_USER_PASSWORD = "operateUserPassword";
   public static final String CONF_OPERATE_USER_NAME = "operateUserName";
+  public static final String CONF_AUTHENTICATIONURL = "authenticationUrl";
+  public static final String CONF_OPERATE_CLIENT_ID = "operateClientId";
+  public static final String CONF_OPERATE_CLIENT_SECRET = "operateClientSecret";
+  public static final String CONF_OPERATE_AUDIENCE = "operateAudience";
+
   public static final String CONF_ZEEBE_GATEWAY_ADDRESS = "zeebeGatewayAddress";
   public static final String CONF_URL = "url";
   public static final String CONF_TYPE = "type";
@@ -38,11 +51,12 @@ public class BpmnEngineList {
   public static final String CONF_TYPE_V_CAMUNDA_7 = "camunda7";

   public static final String CONF_ZEEBE_SAAS_REGION = "region";
-  public static final String CONF_ZEEBE_SAAS_SECRET = "secret";
+  public static final String CONF_ZEEBE_SECRET = "zeebeClientSecret";
   public static final String CONF_ZEEBE_SAAS_CLUSTER_ID = "clusterId";
-  public static final String CONF_ZEEBE_SAAS_CLIENT_ID = "clientId";
-  public static final String CONF_ZEEBE_SAAS_OAUTHURL = "oAuthUrl";
-  public static final String CONF_ZEEBE_SAAS_AUDIENCE = "audience";
+  public static final String CONF_ZEEBE_CLIENT_ID = "zeebeClientId";
+  public static final String CONF_ZEEBE_AUDIENCE = "zeebeAudience";
+  public static final String CONF_ZEEBE_PLAINTEXT = "zeebePlainText";
+
+  public static final String ZEEBE_DEFAULT_AUDIENCE = "zeebe.camunda.io";

   static Logger logger = LoggerFactory.getLogger(BpmnEngineList.class);

@@ -70,21 +84,22 @@ public void init() {
       for (BpmnServerDefinition server : allServers) {
         String serverDetails = "Configuration Server Type[" + server.serverType + "] ";
         if (server.serverType == null) {
-          logger.error("ServerType not declared for server [" + server.name + "]");
+          logger.error("ServerType not declared for server [{}]", server.name);
           return;
         }

         serverDetails += switch (server.serverType) {
           case CAMUNDA_8 -> "ZeebeGatewayAddress [" + server.zeebeGatewayAddress + "]";
-          case CAMUNDA_8_SAAS -> "ZeebeClientId [" + server.zeebeSaasClientId + "] ClusterId["
-              + server.zeebeSaasClusterId + "] RegionId[" + server.zeebeSaasRegion + "]";
+          case CAMUNDA_8_SAAS ->
+              "ZeebeClientId [" + server.zeebeClientId + "] ClusterId[" + server.zeebeSaasClusterId + "] RegionId["
+                  + server.zeebeSaasRegion + "]";
           case CAMUNDA_7 -> "Camunda7URL [" + server.camunda7ServerUrl + "]";
           case DUMMY -> "Dummy";
         };
         logger.info(serverDetails);
       }
     } catch (Exception e) {
-      logger.error("Error during initialization : " + e.getMessage());
+      logger.error("Error during initialization : {}", e.getMessage());
     }
   }

@@ -120,7 +135,7 @@ public BpmnEngineList.BpmnServerDefinition getByServerName(String serverName) th
    * @return a server
    * @throws AutomatorException on any error
    */
-  public BpmnEngineList.BpmnServerDefinition getByServerType(CamundaEngine serverType) throws AutomatorException {
+  public BpmnEngineList.BpmnServerDefinition getByServerType(CamundaEngine serverType) {
     Optional first = allServers.stream()
         .filter(t -> sameType(t.serverType, serverType))
         .findFirst();
@@ -176,48 +191,82 @@ private List getFromServersList() throws AutomatorExceptio
     for (Map serverMap : configurationServersEngine.getServersList()) {
       count++;
       BpmnServerDefinition bpmnServerDefinition = new BpmnServerDefinition();
-      bpmnServerDefinition.name = getString("name", serverMap, null, "ServerList #" + count);
+      bpmnServerDefinition.name = getString("name", serverMap, null, "ServerList #" + count, true);
       String contextLog = "ServerList #" + count + " Name [" + bpmnServerDefinition.name + "]";

       bpmnServerDefinition.workerMaxJobsActive = getInteger(CONF_WORKER_MAX_JOBS_ACTIVE, serverMap,
           DEFAULT_VALUE_MAX_JOBS_ACTIVE, contextLog);

-      if (CONF_TYPE_V_CAMUNDA_7.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog))) {
+      if (CONF_TYPE_V_CAMUNDA_7.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog, true))) {
         bpmnServerDefinition.serverType = CamundaEngine.CAMUNDA_7;
-        bpmnServerDefinition.camunda7ServerUrl = getString(CONF_URL, serverMap, null, contextLog);
+        bpmnServerDefinition.camunda7ServerUrl = getString(CONF_URL, serverMap, null, contextLog, true);
         if (bpmnServerDefinition.camunda7ServerUrl == null)
           throw new AutomatorException(
               "Incorrect Definition - [url] expected for [" + CONF_TYPE_V_CAMUNDA_7 + "] type " + contextLog);
       }
-      if (CONF_TYPE_V_CAMUNDA_8.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog))) {
+
+      if (CONF_TYPE_V_CAMUNDA_8.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog, true))) {
         bpmnServerDefinition.serverType = CamundaEngine.CAMUNDA_8;
-
bpmnServerDefinition.zeebeGatewayAddress = getString(CONF_ZEEBE_GATEWAY_ADDRESS, serverMap, null, contextLog); - bpmnServerDefinition.operateUserName = getString(CONF_OPERATE_USER_NAME, serverMap, null, contextLog); - bpmnServerDefinition.operateUserPassword = getString(CONF_OPERATE_USER_PASSWORD, serverMap, null, contextLog); - bpmnServerDefinition.operateUrl = getString(CONF_OPERATE_URL, serverMap, null, contextLog); - bpmnServerDefinition.taskListUrl = getString(CONF_TASK_LIST_URL, serverMap, null, contextLog); + bpmnServerDefinition.zeebeGatewayAddress = getString(CONF_ZEEBE_GATEWAY_ADDRESS, serverMap, null, contextLog, + true); + bpmnServerDefinition.zeebeClientId = getString(CONF_ZEEBE_CLIENT_ID, serverMap, null, contextLog, false); + bpmnServerDefinition.zeebeClientSecret = getString(CONF_ZEEBE_SECRET, serverMap, null, contextLog, false); + bpmnServerDefinition.zeebeAudience = getString(CONF_ZEEBE_AUDIENCE, serverMap, ZEEBE_DEFAULT_AUDIENCE, + contextLog, false); + bpmnServerDefinition.zeebePlainText = getBoolean(CONF_ZEEBE_PLAINTEXT, serverMap, true, contextLog, false); + bpmnServerDefinition.authenticationUrl = getString(CONF_AUTHENTICATIONURL, serverMap, null, contextLog, false); + + bpmnServerDefinition.identityUrl = getString(CONF_IDENTITY_URL, serverMap, null, contextLog, false); + bpmnServerDefinition.operateUrl = getString(CONF_OPERATE_URL, serverMap, null, contextLog, false); + bpmnServerDefinition.operateUserName = getString(CONF_OPERATE_USER_NAME, serverMap, "Demo", contextLog, false); + bpmnServerDefinition.operateUserPassword = getString(CONF_OPERATE_USER_PASSWORD, serverMap, "Demo", contextLog, + false); + bpmnServerDefinition.operateClientId = getString(CONF_OPERATE_CLIENT_ID, serverMap, null, contextLog, false); + bpmnServerDefinition.operateClientSecret = getString(CONF_OPERATE_CLIENT_SECRET, serverMap, null, contextLog, + false); + bpmnServerDefinition.operateAudience = getString(CONF_OPERATE_AUDIENCE, serverMap, null, contextLog, false); + + bpmnServerDefinition.taskListUrl = getString(CONF_TASK_LIST_URL, serverMap, null, contextLog, false); + bpmnServerDefinition.taskListUserName = getString(CONF_TASK_LIST_USER, serverMap, null, contextLog, false); + bpmnServerDefinition.taskListUserPassword = getString(CONF_TASK_LIST_PASSWORD, serverMap, null, contextLog, + false); + bpmnServerDefinition.taskListClientId = getString(CONF_TASK_LIST_CLIENT_ID, serverMap, null, contextLog, false); + bpmnServerDefinition.taskListClientSecret = getString(CONF_TASK_LIST_CLIENT_SECRET, serverMap, null, contextLog, + false); + bpmnServerDefinition.taskListKeycloakUrl = getString(CONF_TASK_LIST_KEYCLOAK_URL, serverMap, null, contextLog, + false); + bpmnServerDefinition.workerExecutionThreads = getInteger(CONF_WORKER_EXECUTION_THREADS, serverMap, DEFAULT_VALUE_EXECUTION_THREADS, contextLog); if (bpmnServerDefinition.zeebeGatewayAddress == null) throw new AutomatorException( "Incorrect Definition - [zeebeGatewayAddress] expected for [" + CONF_TYPE_V_CAMUNDA_8 + "] type"); } - if (CONF_TYPE_V_CAMUNDA_8_SAAS.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog))) { + + if (CONF_TYPE_V_CAMUNDA_8_SAAS.equalsIgnoreCase(getString(CONF_TYPE, serverMap, null, contextLog, true))) { bpmnServerDefinition.serverType = CamundaEngine.CAMUNDA_8_SAAS; - bpmnServerDefinition.zeebeSaasRegion = getString(CONF_ZEEBE_SAAS_REGION, serverMap, null, contextLog); - bpmnServerDefinition.zeebeSaasClientSecret = getString(CONF_ZEEBE_SAAS_SECRET, serverMap, null, contextLog); - 
bpmnServerDefinition.zeebeSaasClusterId = getString(CONF_ZEEBE_SAAS_CLUSTER_ID, serverMap, null, contextLog);
-        bpmnServerDefinition.zeebeSaasClientId = getString(CONF_ZEEBE_SAAS_CLIENT_ID, serverMap, null, contextLog);
-        bpmnServerDefinition.zeebeSaasOAuthUrl = getString(CONF_ZEEBE_SAAS_OAUTHURL, serverMap, null, contextLog);
-        bpmnServerDefinition.zeebeSaasAudience = getString(CONF_ZEEBE_SAAS_AUDIENCE, serverMap, null, contextLog);
+        bpmnServerDefinition.zeebeSaasRegion = getString(CONF_ZEEBE_SAAS_REGION, serverMap, null, contextLog, true);
+        bpmnServerDefinition.zeebeSaasClusterId = getString(CONF_ZEEBE_SAAS_CLUSTER_ID, serverMap, null, contextLog,
+            true);
+        bpmnServerDefinition.zeebeClientId = getString(CONF_ZEEBE_CLIENT_ID, serverMap, null, contextLog, true);
+        bpmnServerDefinition.zeebeClientSecret = getString(CONF_ZEEBE_SECRET, serverMap, null, contextLog, true);
+        bpmnServerDefinition.zeebeAudience = getString(CONF_ZEEBE_AUDIENCE, serverMap, ZEEBE_DEFAULT_AUDIENCE,
+            contextLog, true);
+        bpmnServerDefinition.authenticationUrl = getString(CONF_AUTHENTICATIONURL, serverMap,
+            "https://login.cloud.camunda.io/oauth/token", contextLog, false);

         bpmnServerDefinition.workerExecutionThreads = getInteger(CONF_WORKER_EXECUTION_THREADS, serverMap,
             DEFAULT_VALUE_EXECUTION_THREADS, contextLog);
-        bpmnServerDefinition.operateUserName = getString(CONF_OPERATE_USER_NAME, serverMap, null, contextLog);
-        bpmnServerDefinition.operateUserPassword = getString(CONF_OPERATE_USER_PASSWORD, serverMap, null, contextLog);
-        bpmnServerDefinition.operateUrl = getString(CONF_OPERATE_URL, serverMap, null, contextLog);
-        bpmnServerDefinition.taskListUrl = getString(CONF_TASK_LIST_URL, serverMap, null, contextLog);
-        if (bpmnServerDefinition.zeebeSaasRegion == null || bpmnServerDefinition.zeebeSaasClientSecret == null
-            || bpmnServerDefinition.zeebeSaasClusterId == null || bpmnServerDefinition.zeebeSaasClientId == null)
+        bpmnServerDefinition.operateUserName = getString(CONF_OPERATE_USER_NAME, serverMap, null, contextLog, false);
+        bpmnServerDefinition.operateUserPassword = getString(CONF_OPERATE_USER_PASSWORD, serverMap, null, contextLog,
+            false);
+        bpmnServerDefinition.operateUrl = getString(CONF_OPERATE_URL, serverMap, null, contextLog, false);
+        bpmnServerDefinition.taskListUrl = getString(CONF_TASK_LIST_URL, serverMap, null, contextLog, false);
+        bpmnServerDefinition.taskListClientId = getString(CONF_TASK_LIST_CLIENT_ID, serverMap, null, contextLog, false);
+        bpmnServerDefinition.taskListClientSecret = getString(CONF_TASK_LIST_CLIENT_SECRET, serverMap, null, contextLog,
+            false);
+
+        if (bpmnServerDefinition.zeebeSaasRegion == null || bpmnServerDefinition.zeebeClientSecret == null
+            || bpmnServerDefinition.zeebeSaasClusterId == null || bpmnServerDefinition.zeebeClientId == null)
           throw new AutomatorException(
              "Incorrect Definition - [region],[clusterId],[zeebeClientId],[zeebeClientSecret] expected for [camunda8saas] type");
       }
@@ -259,14 +308,14 @@ private BpmnServerDefinition decodeServerConnection(String connectionString, Str
     } else if (CamundaEngine.CAMUNDA_8_SAAS.equals(bpmnServerDefinition.serverType)) {
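+      // Token order of a camunda8saas connection string (illustrative, derived from
+      // the reads below; a missing trailing token simply leaves the field null):
+      //   region, clusterId, zeebeClientId, zeebeClientSecret, zeebeAudience,
+      //   operateClientId, operateClientSecret, taskListClientId, taskListClientSecret,
+      //   workerExecutionThreads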
       bpmnServerDefinition.zeebeSaasRegion = (st.hasMoreTokens() ? st.nextToken() : null);
       bpmnServerDefinition.zeebeSaasClusterId = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.zeebeSaasClientId = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.zeebeSaasClientSecret = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.zeebeSaasOAuthUrl = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.zeebeSaasAudience = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.operateUrl = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.operateUserName = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.operateUserPassword = (st.hasMoreTokens() ? st.nextToken() : null);
-      bpmnServerDefinition.taskListUrl = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.zeebeClientId = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.zeebeClientSecret = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.zeebeAudience = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.operateClientId = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.operateClientSecret = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.taskListClientId = (st.hasMoreTokens() ? st.nextToken() : null);
+      bpmnServerDefinition.taskListClientSecret = (st.hasMoreTokens() ? st.nextToken() : null);
+
       bpmnServerDefinition.workerExecutionThreads = (st.hasMoreTokens() ?
           parseInt(CONF_WORKER_EXECUTION_THREADS, st.nextToken(), DEFAULT_VALUE_EXECUTION_THREADS, contextLog) :
           null);
@@ -334,10 +383,10 @@ private List getFromServerConfiguration() {
       camunda8.name = configurationServersEngine.zeebeName;
       camunda8.zeebeSaasRegion = configurationServersEngine.zeebeSaasRegion;
       camunda8.zeebeSaasClusterId = configurationServersEngine.zeebeSaasClusterId;
-      camunda8.zeebeSaasClientId = configurationServersEngine.zeebeSaasClientId;
-      camunda8.zeebeSaasClientSecret = configurationServersEngine.zeebeSaasClientSecret;
-      camunda8.zeebeSaasOAuthUrl = configurationServersEngine.zeebeSaasOAuthUrl;
-      camunda8.zeebeSaasAudience = configurationServersEngine.zeebeSaasAudience;
+      camunda8.zeebeClientId = configurationServersEngine.zeebeSaasClientId;
+      camunda8.zeebeClientSecret = configurationServersEngine.zeebeSaasClientSecret;
+      camunda8.authenticationUrl = configurationServersEngine.zeebeSaasOAuthUrl;
+      camunda8.zeebeAudience = configurationServersEngine.zeebeSaasAudience;
       camunda8.operateUrl = configurationServersEngine.zeebeOperateUrl;
       camunda8.operateUserName = configurationServersEngine.zeebeOperateUserName;
       camunda8.operateUserPassword = configurationServersEngine.zeebeOperateUserPassword;
@@ -357,32 +406,62 @@ private List getFromServerConfiguration() {
   /*                                                                      */
   /* ******************************************************************** */

-  private String getString(String name, Map record, String defaultValue, String contextLog) {
+  private String getString(String name,
+                           Map recordData,
+                           String defaultValue,
+                           String contextLog,
+                           boolean isMandatory) {
     try {
-      if (!record.containsKey(name)) {
-        if (defaultValue == null)
-          logger.error(contextLog + "Variable [{}] not defined in {}", name, contextLog);
-        else
-          logger.info(contextLog + "Variable [{}] not defined in {}", name, contextLog);
+      if (!recordData.containsKey(name)) {
+        if (isMandatory) {
+          if (defaultValue == null)
+            logger.error("{} Variable [{}] not defined", contextLog, name);
+          else
+            logger.info("{} Variable [{}] not defined", contextLog, name);
+        }
+        return defaultValue;
+      }
+      return (String) recordData.get(name);
+    } catch (Exception e) {
+      logger.error("{} Variable [{}] bad definition: {}", contextLog, name, e.getMessage());
+      return defaultValue;
+    }
+  }
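+
+  // Illustration of the isMandatory flag (hypothetical values, not part of the
+  // original change): with serverMap lacking the key "operateUrl",
+  //   getString("operateUrl", serverMap, null, contextLog, false)
+  // returns null without logging, while the same call with isMandatory=true logs
+  // the missing variable before returning the default.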
+
+  private Boolean getBoolean(String name,
+                             Map recordData,
+                             Boolean defaultValue,
+                             String contextLog,
+                             boolean isMandatory) {
+    try {
+      if (!recordData.containsKey(name)) {
+        if (isMandatory) {
+          if (defaultValue == null)
+            logger.error("{} Variable [{}] not defined", contextLog, name);
+          else
+            logger.info("{} Variable [{}] not defined", contextLog, name);
+        }
         return defaultValue;
       }
-      return (String) record.get(name);
+      if (recordData.get(name) instanceof Boolean valueBoolean)
+        return valueBoolean;
+      return Boolean.valueOf(recordData.get(name).toString());
     } catch (Exception e) {
-      logger.error(contextLog + "Variable [{}] {} bad definition {}", name, contextLog, e.getMessage());
+      logger.error("{} Variable [{}] bad definition: {}", contextLog, name, e.getMessage());
       return defaultValue;
     }
   }

-  private Integer getInteger(String name, Map record, Integer defaultValue, String contextLog) {
+  private Integer getInteger(String name, Map recordData, Integer defaultValue, String contextLog) {
     try {
-      if (!record.containsKey(name)) {
+      if (!recordData.containsKey(name)) {
         if (defaultValue == null)
           logger.error("Variable [{}] not defined in {}", name, contextLog);
         else
           logger.info("Variable [{}] not defined in {}", name, contextLog);
         return defaultValue;
       }
-      return (Integer) record.get(name);
+      return (Integer) recordData.get(name);
     } catch (Exception e) {
       logger.error("Variable [{}] {} bad definition {}", name, contextLog, e.getMessage());
       return defaultValue;
@@ -434,25 +513,38 @@ public static class BpmnServerDefinition {
      * My Zeebe Address
      */
     public String zeebeGatewayAddress;
-    public String zeebeSecurityPlainText;
+    public Boolean zeebePlainText;

     /**
      * SaaS Zeebe
      */
     public String zeebeSaasRegion;
     public String zeebeSaasClusterId;
-    public String zeebeSaasClientId;
-    public String zeebeSaasClientSecret;
-    public String zeebeSaasOAuthUrl;
-    public String zeebeSaasAudience;
+    public String zeebeClientId;
+    public String zeebeClientSecret;
+    public String zeebeAudience;
+    public String zeebeTenantId = null;
+
+    public String identityUrl;

     /**
      * Connection to Operate
      */
     public String operateUserName;
     public String operateUserPassword;
     public String operateUrl;
+
+    // something like "http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token"
+    public String authenticationUrl;
+    public String operateClientId;
+    public String operateClientSecret;
+    public String operateAudience;
+
+    public String taskListUrl;
+    public String taskListUserName;
+    public String taskListUserPassword;
+    public String taskListClientId;
+    public String taskListClientSecret;
+    public String taskListKeycloakUrl;

     /**
      * Camunda 7
@@ -467,17 +559,35 @@ public static class BpmnServerDefinition {
     public Integer workerExecutionThreads = Integer.valueOf(DEFAULT_VALUE_EXECUTION_THREADS);
     public Integer workerMaxJobsActive = Integer.valueOf(DEFAULT_VALUE_MAX_JOBS_ACTIVE);

+    /**
+     * return true if the definition has a valid Operate connection
+     *
+     * @return true if Operate is required
+     */
+    public boolean isOperate() {
+      return !(operateUrl == null || operateUrl.isEmpty());
+    }
+
+    public boolean isTaskList() {
+      return !(taskListUrl == null || taskListUrl.isEmpty());
+    }
+
+    public boolean isAuthenticationUrl() {
+      return !(authenticationUrl == null || authenticationUrl.isEmpty());
+    }
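+
+    // Example (hypothetical values): operateUrl = "http://localhost:8081" with an
+    // empty taskListUrl gives isOperate() == true and isTaskList() == false, so
+    // only the Operate connection is attempted.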
url["+camunda7ServerUrl+"] userName["+camunda7UserName+"]"; + synthesis += " url[" + camunda7ServerUrl + "] userName[" + camunda7UserName + "]"; } - if (serverType.equals(CamundaEngine.CAMUNDA_8) ) { - synthesis+=" address["+zeebeGatewayAddress+"] workerThread["+workerExecutionThreads+"] MaxJobActive["+workerMaxJobsActive+"]"; + if (serverType.equals(CamundaEngine.CAMUNDA_8)) { + synthesis += " address[" + zeebeGatewayAddress + "] workerThread[" + workerExecutionThreads + "] MaxJobActive[" + + workerMaxJobsActive + "]"; } - if (serverType.equals(CamundaEngine.CAMUNDA_8_SAAS) ) { - synthesis+=" clientId["+zeebeSaasClientId+"] workerThread["+workerExecutionThreads+"] MaxJobActive["+workerMaxJobsActive+"]"; + if (serverType.equals(CamundaEngine.CAMUNDA_8_SAAS)) { + synthesis += " clientId[" + zeebeClientId + "] workerThread[" + workerExecutionThreads + "] MaxJobActive[" + + workerMaxJobsActive + "]"; } return synthesis; } diff --git a/src/main/java/org/camunda/automator/configuration/ConfigurationServersEngine.java b/src/main/java/org/camunda/automator/configuration/ConfigurationServersEngine.java index aeac642..7422d61 100644 --- a/src/main/java/org/camunda/automator/configuration/ConfigurationServersEngine.java +++ b/src/main/java/org/camunda/automator/configuration/ConfigurationServersEngine.java @@ -53,13 +53,15 @@ public class ConfigurationServersEngine { public String zeebeSaasClusterId; @Value("${automator.servers.camunda8Saas.clientId:''}") public String zeebeSaasClientId; + + @Value("${automator.servers.camunda8Saas.secret:''}") + public String zeebeSaasClientSecret; + @Value("${automator.servers.camunda8Saas.oAuthUrl:''}") public String zeebeSaasOAuthUrl; @Value("${automator.servers.camunda8Saas.audience:''}") public String zeebeSaasAudience; - @Value("${automator.servers.camunda8Saas.secret:''}") - public String zeebeSaasClientSecret; @Value("${automator.servers.camunda8Saas.operateUrl:''}") public String zeebeSaasOperateUrl; @Value("${automator.servers.camunda8Saas.operateUserName:''}") diff --git a/src/main/java/org/camunda/automator/definition/Scenario.java b/src/main/java/org/camunda/automator/definition/Scenario.java index 6daeb13..59bf4c5 100644 --- a/src/main/java/org/camunda/automator/definition/Scenario.java +++ b/src/main/java/org/camunda/automator/definition/Scenario.java @@ -67,7 +67,6 @@ public static Scenario createFromJson(String jsonContent) { return null; } scenario.afterUnSerialize(); - scenario.initialize(); return scenario; } @@ -118,9 +117,6 @@ public static Scenario createFromInputStream(InputStream scenarioInput, String o * Initialize the scenario and complete it */ private void initialize() { - for (int i = 0; i < flows.size(); i++) { - flows.get(i).setStepNumber(i); - } } /** diff --git a/src/main/java/org/camunda/automator/definition/ScenarioStep.java b/src/main/java/org/camunda/automator/definition/ScenarioStep.java index 36242d0..7f5a931 100644 --- a/src/main/java/org/camunda/automator/definition/ScenarioStep.java +++ b/src/main/java/org/camunda/automator/definition/ScenarioStep.java @@ -22,10 +22,15 @@ public class ScenarioStep { private final Map variablesOperation = Collections.emptyMap(); private final Long fixedBackOffDelay = 0L; private final MODEEXECUTION modeExecution = MODEEXECUTION.CLASSICAL; + private final Boolean streamEnabled = true; + /** + * Receive a step range in the scenario, which help to identify the step + */ + private final int stepNumber = -1; /** * In case of a Flow Step, the number of workers to execute this tasks */ - private 
Integer numberOfWorkers; + private Integer nbWorkers = Integer.valueOf(1); /** * if the step is used in a WarmingUp operation, it can decide this is the time to finish it * Expression is @@ -43,7 +48,6 @@ public class ScenarioStep { * to execute a service task in C8, topic is mandatory */ private String topic; - private final Boolean streamEnable = false; private Map variables = Collections.emptyMap(); private String userId; /** @@ -67,11 +71,6 @@ public class ScenarioStep { */ private String processId; - /** - * Receive a step range in the scenario, which help to identify the step - */ - private int stepNumber = -1; - public ScenarioStep(ScenarioExecution scnExecution) { this.scnExecution = scnExecution; } @@ -134,17 +133,10 @@ public String getTopic() { return topic; } - public boolean isStreamEnable() { - return streamEnable; + public boolean isStreamEnabled() { + return streamEnabled == null || streamEnabled.booleanValue(); } - public int getStepNumber() { - return stepNumber; - } - - public void setStepNumber(int stepNumber) { - this.stepNumber = stepNumber; - } /* ******************************************************************** */ /* */ /* getter */ @@ -214,12 +206,12 @@ public String getFrequency() { return frequency; } - public int getNumberOfWorkers() { - return numberOfWorkers == null || numberOfWorkers == 0 ? 1 : numberOfWorkers; + public int getNbWorkers() { + return nbWorkers == null || nbWorkers == 0 ? 1 : nbWorkers; } - public void setNumberOfWorkers(int nbWorkers) { - this.numberOfWorkers = nbWorkers; + public void setNbWorkers(int nbWorkers) { + this.nbWorkers = nbWorkers; } public String getProcessId() { @@ -227,7 +219,7 @@ public String getProcessId() { } public long getFixedBackOffDelay() { - return fixedBackOffDelay == null ? 0 : fixedBackOffDelay; + return fixedBackOffDelay == null ? 
0 : fixedBackOffDelay.longValue(); } protected void afterUnSerialize(ScenarioExecution scnExecution) { diff --git a/src/main/java/org/camunda/automator/engine/RunParameters.java b/src/main/java/org/camunda/automator/engine/RunParameters.java index 4ca18e7..4151bfe 100644 --- a/src/main/java/org/camunda/automator/engine/RunParameters.java +++ b/src/main/java/org/camunda/automator/engine/RunParameters.java @@ -47,6 +47,9 @@ public class RunParameters { private boolean warmingUp = true; + public RunParameters() { + } + public LOGLEVEL getLogLevel() { return logLevel; } diff --git a/src/main/java/org/camunda/automator/engine/RunResult.java b/src/main/java/org/camunda/automator/engine/RunResult.java index 8e4dad2..f78dfe8 100644 --- a/src/main/java/org/camunda/automator/engine/RunResult.java +++ b/src/main/java/org/camunda/automator/engine/RunResult.java @@ -360,8 +360,9 @@ public void add(RecordCreationPI record) { nbCreated += record.nbCreated; nbFailed += record.nbFailed; } + public String toString() { - return "Created["+nbCreated+"] Failed["+nbFailed+"]"; + return "Created[" + nbCreated + "] Failed[" + nbFailed + "]"; } } diff --git a/src/main/java/org/camunda/automator/engine/flow/CreateProcessInstanceThread.java b/src/main/java/org/camunda/automator/engine/flow/CreateProcessInstanceThread.java index a281cc0..d16a316 100644 --- a/src/main/java/org/camunda/automator/engine/flow/CreateProcessInstanceThread.java +++ b/src/main/java/org/camunda/automator/engine/flow/CreateProcessInstanceThread.java @@ -48,7 +48,7 @@ public CreateProcessInstanceThread(int executionBatchNumber, */ public void createProcessInstances(Duration durationToCreateProcessInstances) { - int numberOfThreads = scenarioStep.getNumberOfWorkers() == 0 ? 1 : scenarioStep.getNumberOfWorkers(); + int numberOfThreads = scenarioStep.getNbWorkers() == 0 ? 
1 : scenarioStep.getNbWorkers();

     ExecutorService executor = Executors.newFixedThreadPool(numberOfThreads);

     int totalNumberOfPi = 0;
@@ -77,9 +77,8 @@ public List getListProcessInstances() {
     return listStartProcess.stream().flatMap(t -> t.listProcessInstances.stream()).collect(Collectors.toList());
   }

-
   public int getNumberOfRunningThreads() {
-    return (int) listStartProcess.stream().filter(t->t.isRunning()).count();
+    return (int) listStartProcess.stream().filter(t -> t.isRunning()).count();
   }

   public int getTotalCreation() {
@@ -125,11 +124,11 @@ private class StartProcess implements Runnable {
     /**
      * @param executionBatchNumber
      * @param indexInBatch the component number, when multiple components were generated to handle the flow
-     * @param numberOfProcessInstanceToStart number of process instance to start by this object
+     * @param numberOfProcessInstanceToStart   number of process instance to start by this object
      * @param durationToCreateProcessInstances duration max allowed to create process instance
-     * @param scenarioStep step to use to create the process instance
-     * @param runScenario scenario to use
-     * @param runResult result object to save information
+     * @param scenarioStep                     step to use to create the process instance
+     * @param runScenario                      scenario to use
+     * @param runResult                        result object to save information
      */
     public StartProcess(int executionBatchNumber,
         int indexInBatch,
@@ -153,7 +152,7 @@ public StartProcess(int executionBatchNumber,
      */
     @Override
     public void run() {
-      isRunning=true;
+      isRunning = true;
       boolean alreadyLoggedError = false;
       isOverload = false;
       long begin = System.currentTimeMillis();
@@ -191,19 +190,17 @@ public void run() {
         // log only at the debug mode (thread per thread), in monitoring log only at batch level
         if (runScenario.getRunParameters().showLevelDebug()) {
           // take too long to create the required process instance, so stop now.
-          logger.info("batch_#{} {} Over the duration. Created {} when expected {} in {} ms",
-              executionBatchNumber,
-              scenarioStep.getId(),
-              nbCreation,
-              numberOfProcessInstanceToStart, currentTimeMillis - begin);
+          logger.info("batch_#{} {} Over the duration. Created {} when expected {} in {} ms", executionBatchNumber,
+              scenarioStep.getId(), nbCreation, numberOfProcessInstanceToStart, currentTimeMillis - begin);
         }
         isOverload = true;
         break;
       }
     }
-    isRunning=false;
+    isRunning = false;
   }
+
   public boolean isRunning() {
     return isRunning;
diff --git a/src/main/java/org/camunda/automator/engine/flow/RunObjectives.java b/src/main/java/org/camunda/automator/engine/flow/RunObjectives.java
index d8a0225..fb853a6 100644
--- a/src/main/java/org/camunda/automator/engine/flow/RunObjectives.java
+++ b/src/main/java/org/camunda/automator/engine/flow/RunObjectives.java
@@ -134,13 +134,12 @@ private ObjectiveResult checkObjectiveCreated(ScenarioFlowControl.Objective obje
     int percent = (int) (100.0 * objectiveResult.recordedSuccessValue / (objective.value == 0 ? 1 : objective.value));
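     // e.g. recordedSuccessValue = 450 against objective.value = 500 gives percent = 90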

-    objectiveResult.analysis +=
-        "Objective " + objective.getInformation() // informatin
-            + ": Goal[" + objective.value // objective
-            + "] Created(zeebeAPI)[" + processInstancesCreatedAPI // Value by the API, not really accurate
-            + "] Created(AutomatorRecord)[" + objectiveResult.recordedSuccessValue // value recorded by automator
-            + " (" + percent + " % )" // percent based on the recorded value
-            + " CreateFail(AutomatorRecord)[" + objectiveResult.recordedFailValue + "]";
+    objectiveResult.analysis += "Objective " + objective.getInformation() // information
+        + ": Goal[" + objective.value // objective
+        + "] Created(zeebeAPI)[" + processInstancesCreatedAPI // Value by the API, not really accurate
+        + "] Created(AutomatorRecord)[" + objectiveResult.recordedSuccessValue // value recorded by automator
+        + " (" + percent + " % )" // percent based on the recorded value
+        + " CreateFail(AutomatorRecord)[" + objectiveResult.recordedFailValue + "]";

     if (objectiveResult.recordedSuccessValue < objective.value) {
       objectiveResult.success = false;
diff --git a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowServiceTask.java b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowServiceTask.java
index 4163e31..af57571 100644
--- a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowServiceTask.java
+++ b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowServiceTask.java
@@ -57,6 +57,7 @@ public RunScenarioFlowServiceTask(TaskScheduler scheduler,
   public String getTopic() {
     return getScenarioStep().getTopic();
   }
+
   @Override
   public void execute() {
     registerWorker();
@@ -113,16 +114,14 @@ private void registerWorker() {
       durationSleep = durationSleep.plusSeconds(10);

     if (getRunScenario().getRunParameters().showLevelMonitoring()) {
-      logger.info("Start service TaskId[{}] Topic[{}] StreamEnable:{} DurationSleep[{} ms]",
-          getScenarioStep().getTaskId(),
-          getScenarioStep().getTopic(),
-          getScenarioStep().isStreamEnable(),
+      logger.info("Start service TaskId[{}] Topic[{}] StreamEnabled:{} DurationSleep[{} ms]",
+          getScenarioStep().getTaskId(), getScenarioStep().getTopic(), getScenarioStep().isStreamEnabled(),
           durationSleep.toMillis());
     }
     registeredTask = bpmnEngine.registerServiceTask(getId(), // workerId
         getScenarioStep().getTopic(), // topic
-        getScenarioStep().isStreamEnable(), // stream
+        getScenarioStep().isStreamEnabled(), // stream
         durationSleep, // lock time
         new SimpleDelayHandler(this),
         new FixedBackoffSupplier(getScenarioStep().getFixedBackOffDelay()));
   }
@@ -156,11 +155,11 @@ public SimpleDelayHandler(RunScenarioFlowServiceTask flowServiceTask) {
     public void execute(org.camunda.bpm.client.task.ExternalTask externalTask, ExternalTaskService externalTaskService) {
       switch (getScenarioStep().getModeExecution()) {
-        case CLASSICAL, WAIT -> manageWaitExecution(externalTask, externalTaskService, null, null,
-            durationSleep.toMillis());
+        case CLASSICAL, WAIT ->
+            manageWaitExecution(externalTask, externalTaskService, null, null, durationSleep.toMillis());
         case THREAD, ASYNCHRONOUS -> manageAsynchronousExecution(externalTask, externalTaskService, null, null);
-        case THREADTOKEN, ASYNCHRONOUSLIMITED -> manageAsynchronousLimitedExecution(externalTask, externalTaskService,
-            null, null);
+        case THREADTOKEN, ASYNCHRONOUSLIMITED ->
+            manageAsynchronousLimitedExecution(externalTask, externalTaskService, null, null);
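+        // Judging by the handler names (an inference, not stated in the change):
+        // CLASSICAL/WAIT complete the task inline after durationSleep,
+        // THREAD/ASYNCHRONOUS complete it from a separate thread, and
+        // THREADTOKEN/ASYNCHRONOUSLIMITED also cap concurrent completions.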
       }
     }
@@ -192,7 +191,7 @@ private void manageWaitExecution(org.camunda.bpm.client.task.ExternalTask extern
       variables = RunZeebeOperation.getVariablesStep(flowServiceTask.getRunScenario(), flowServiceTask.getScenarioStep(), 0);
-
+      /** This should be moved to the Camunda Engine implementation */
       /* C7 */
       if (externalTask != null) {
         currentVariables = externalTask.getAllVariables();
       }
@@ -211,12 +210,9 @@ private void manageWaitExecution(org.camunda.bpm.client.task.ExternalTask extern
         flowServiceTask.runResult.registerAddStepExecution();

       } catch (Exception e) {
-        logger.error(
-            "Error task[{}] PI[{}] : {}",
-            flowServiceTask.getId(),
+        logger.error("Error task[{}] PI[{}] : {}", flowServiceTask.getId(),
             (externalTask != null ? externalTask.getProcessDefinitionKey() : activatedJob.getProcessInstanceKey()),
-            e.getMessage()
-        );
+            e.getMessage());

         flowServiceTask.runResult.registerAddErrorStepExecution();
diff --git a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowStartEvent.java b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowStartEvent.java
index 709e51b..c1cbbe2 100644
--- a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowStartEvent.java
+++ b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlowStartEvent.java
@@ -21,6 +21,7 @@ public class RunScenarioFlowStartEvent extends RunScenarioFlowBasic {
   private final TaskScheduler scheduler;
   Logger logger = LoggerFactory.getLogger(RunScenarioFlowStartEvent.class);
+  StartEventRunnable startEventRunnable;
   private boolean stopping;
   private boolean isRunning;
   /**
@@ -41,8 +42,6 @@ public String getTopic() {
     return getScenarioStep().getTaskId();
   }

-  StartEventRunnable startEventRunnable;
-
   @Override
   public void execute() {
     stopping = false;
@@ -129,7 +128,7 @@ public void run() {

       if (nbOverloaded > 0)
         runResult.addError(scenarioStep,
-            "Overloaded:" + "" + nbOverloaded + " TotalCreation:" + totalCreation // total creation we see
+            "Overloaded:" + nbOverloaded + " TotalCreation:" + totalCreation // total creation we see
                 + " TheoreticalNumberExpected:" + (scenarioStep.getNumberOfExecutions() * executionBatchNumber) // expected
@@ -155,7 +154,6 @@ public void run() {
       createProcessInstanceThread = new CreateProcessInstanceThread(executionBatchNumber, scenarioStep, runScenario,
           runResult);

-
       // creates all process instances, return when finish OR when duration is reach
       createProcessInstanceThread.createProcessInstances(durationToCreateProcessInstances);

@@ -179,7 +177,6 @@ public void run() {

       }

-
       // report now
       if (runScenario.getRunParameters().showLevelMonitoring() || createProcessInstanceThread.isOverload()) {
         logger.info("Step #{}-{}" + " Create (real/scenario)[{}/{} {}]" // Overload marker
diff --git a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlows.java b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlows.java
index a896d27..6692cd3 100644
--- a/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlows.java
+++ b/src/main/java/org/camunda/automator/engine/flow/RunScenarioFlows.java
@@ -83,8 +83,6 @@ public void execute(RunResult runResult) {
     logger.info("ScenarioFlow: ------ TheEnd");
   }

-
-
   /**
    * Start execution
    *
@@ -157,12 +155,10 @@ private List startExecution(List lis
     return listFlows;
   }

-
   private Optional getFromList(List listTasks, String topic) {
     return listTasks.stream().filter(t -> t.getTopic().equals(topic)).findFirst();
   }

-
   /**
   * Wait for the end of the execution:
according to the time in the scenario, wait this time * @@ -188,7 +184,7 @@ private void waitEndExecution(RunObjectives runObjectives, Date startTestDate, L while (System.currentTimeMillis() < endTimeExpected) { long currentTime = System.currentTimeMillis(); - long sleepTime = Math.min(30 * 1000, endTimeExpected - currentTime); + long sleepTime = Math.min(30 * 1000L, endTimeExpected - currentTime); try { Thread.sleep(sleepTime); } catch (InterruptedException e) { @@ -269,7 +265,7 @@ private void checkObjectives(RunObjectives runObjectives, Date startTestDate, Da // Objectives ask Operate, which get the result with a delay. So, wait 1 mn logger.info("CollectingData... (sleep 30s)"); try { - Thread.sleep(1000 * 30); + Thread.sleep(30 * 1000L); } catch (InterruptedException e) { // do nothing } @@ -319,27 +315,24 @@ private void logRealTime(List listFlows, long timeToFinish case STARTEVENT -> "PI[" + runResultFlow.getRecordCreationPI() + "] delta[" + ( runResultFlow.getRecordCreationPI().get(flowBasic.getScenarioStep().getProcessId()).nbCreated - previousValue) + "]"; - case SERVICETASK -> "StepsExecuted[" + runResultFlow.getNumberOfSteps() + "] delta [" + ( - runResultFlow.getNumberOfSteps() - previousValue) + "] StepsErrors[" + runResultFlow.getNumberOfErrorSteps() - + "]"; - case USERTASK -> "StepsExecuted[" + runResultFlow.getNumberOfSteps() + "] delta [" + ( - runResultFlow.getNumberOfSteps() - previousValue) + "] StepsErrors[" + runResultFlow.getNumberOfErrorSteps() - + "]"; + case SERVICETASK -> + "StepsExecuted[" + runResultFlow.getNumberOfSteps() + "] delta [" + (runResultFlow.getNumberOfSteps() + - previousValue) + "] StepsErrors[" + runResultFlow.getNumberOfErrorSteps() + "]"; + case USERTASK -> + "StepsExecuted[" + runResultFlow.getNumberOfSteps() + "] delta [" + (runResultFlow.getNumberOfSteps() + - previousValue) + "] StepsErrors[" + runResultFlow.getNumberOfErrorSteps() + "]"; default -> "]"; }; logger.info(key); switch (scenarioStep.getType()) { - case STARTEVENT -> { - previousValueMap.put(flowBasic.getId(), - runResultFlow.getRecordCreationPI().get(flowBasic.getScenarioStep().getProcessId()).nbCreated); - } - case SERVICETASK -> { - previousValueMap.put(flowBasic.getId(), (long) runResultFlow.getNumberOfSteps()); - } - case USERTASK -> { - previousValueMap.put(flowBasic.getId(), (long) runResultFlow.getNumberOfSteps()); - } + case STARTEVENT -> previousValueMap.put(flowBasic.getId(), + runResultFlow.getRecordCreationPI().get(flowBasic.getScenarioStep().getProcessId()).nbCreated); + + case SERVICETASK -> previousValueMap.put(flowBasic.getId(), (long) runResultFlow.getNumberOfSteps()); + + case USERTASK -> previousValueMap.put(flowBasic.getId(), (long) runResultFlow.getNumberOfSteps()); + default -> { } } diff --git a/src/main/java/org/camunda/automator/engine/flow/RunScenarioWarmingUp.java b/src/main/java/org/camunda/automator/engine/flow/RunScenarioWarmingUp.java index a8b7112..f010767 100644 --- a/src/main/java/org/camunda/automator/engine/flow/RunScenarioWarmingUp.java +++ b/src/main/java/org/camunda/automator/engine/flow/RunScenarioWarmingUp.java @@ -72,7 +72,7 @@ public void warmingUp(RunResult runResult) { .filter(t -> t.getType().equals(ScenarioStep.Step.SERVICETASK)) .toList()); } - if (warmingUp.useUserTasks && runScenario.getRunParameters().isUserTask()) { + if (warmingUp.useUserTasks && runScenario.getRunParameters().isUserTask()) { listOperationWarmingUp.addAll(runScenario.getScenario() .getFlows() .stream() @@ -80,16 +80,13 @@ public void warmingUp(RunResult 
runResult) { .toList()); } - logger.info("WarmingUp: Start ---- {} operations (Scenario/Policy: serviceTask:{}/{} userTask: {}/{})", listOperationWarmingUp.size(), // size of operations warmingUp.useServiceTasks, // scenario allow service task? runScenario.getRunParameters().isServiceTask(), // pod can run service task? - warmingUp.useUserTasks, - runScenario.getRunParameters().isUserTask() // pod can run User Task? + warmingUp.useUserTasks, runScenario.getRunParameters().isUserTask() // pod can run User Task? ); - for (ScenarioStep scenarioStep : listOperationWarmingUp) { switch (scenarioStep.getType()) { case STARTEVENT -> { @@ -114,9 +111,8 @@ public void warmingUp(RunResult runResult) { userTask.execute(); listWarmingUpUserTask.add(userTask); } - default -> { - logger.info("WarmingUp: Unknown [{}]", scenarioStep.getType()); - } + default -> logger.info("WarmingUp: Unknown [{}]", scenarioStep.getType()); + } } @@ -177,6 +173,7 @@ public List getListWarmingUpTask() { return Stream.concat(listWarmingUpServiceTask.stream(), listWarmingUpUserTask.stream()) .collect(Collectors.toList()); } + /** * StartEventRunnable * Must be runnable because we will schedule it. @@ -185,7 +182,6 @@ class StartEventWarmingUpRunnable implements Runnable { private final TaskScheduler scheduler; private final ScenarioStep scenarioStep; - private final int index; private final RunScenario runScenario; private final RunResult runResult; public boolean stop = false; @@ -203,7 +199,6 @@ public StartEventWarmingUpRunnable(TaskScheduler scheduler, RunResult runResult) { this.scheduler = scheduler; this.scenarioStep = scenarioStep; - this.index = index; this.runScenario = runScenario; this.runResult = runResult; } diff --git a/src/main/java/org/camunda/automator/services/AutomatorStartup.java b/src/main/java/org/camunda/automator/services/AutomatorStartup.java index 1c240ec..b984c32 100644 --- a/src/main/java/org/camunda/automator/services/AutomatorStartup.java +++ b/src/main/java/org/camunda/automator/services/AutomatorStartup.java @@ -223,8 +223,7 @@ else if (scenarioObject instanceof Resource scenarioResource) { throw new AutomatorException( "Server [" + runParameters.getServerName() + "] does not exist in the list"); - if (runParameters.showLevelMonitoring()) - { + if (runParameters.showLevelMonitoring()) { logger.info("Run scenario with Server {}", serverDefinition.getSynthesis()); } bpmnEngine = automatorAPI.getBpmnEngine(serverDefinition, true); diff --git a/src/main/resources/application.yaml b/src/main/resources/application.yaml index 89d5dd7..067578c 100644 --- a/src/main/resources/application.yaml +++ b/src/main/resources/application.yaml @@ -6,9 +6,9 @@ automator: # give the server to run all tests at startup. 
The name must be registered in the list of server after serverName: - scenarioPath: ./src/main/resources/loadtest + scenarioPath: # list of scenario separate by ; - scenarioFileAtStartup: D:\pym\CamundaDrive\MyWork\Challenge\loadtest\SCN_BankOfAndora.json; + scenarioFileAtStartup: # one scenario resource - to be accessible in a Docker container via a configMap scenarioResourceAtStartup: @@ -18,7 +18,7 @@ automator: # string composed with DEPLOYPROCESS, WARMINGUP, CREATION, SERVICETASK, USERTASK # (ex: "CREATION|DEPLOYPROCESS|CREATION|SERVICETASK") policyExecution: DEPLOYPROCESS|WARMINGUP|CREATION|SERVICETASK|USERTASK - filterService2: simple-task + filterService2: check-identity deepTracking: false @@ -47,26 +47,66 @@ automator: operateUserName: "demo" operateUserPassword: "demo" operateUrl: "http://localhost:8081" + taskListUserName: "demo" + taskListUserPassword: "demo" taskListUrl: "http://localhost:8082" workerExecutionThreads: 10 workerMaxJobsActive: 10 + - type: "camunda8" + name: "Camunda8Lazuli" + description: "A Zeebe+Identity server" + zeebeGatewayAddress: "127.0.0.1:26500" + zeebeClientId: "zeebe" + zeebeClientSecret: "LHwdAq56bZ" + zeebeAudience: "zeebe" + zeebePlainText: true + authenticationUrl: "http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token" + + operateClientId: "operate" + operateClientSecret: "Ns0ZGTrm24" + operateUserName: "demo" + operateUserPassword: "demo" + operateUrl: "http://localhost:8081" + + taskListClientId: "tasklist" + taskListClientSecret: "DCjtjiIwmd" + taskListUserName: "demo" + taskListUserPassword: "demo" + taskListUrl: "http://localhost:8082" + taskListKeycloakUrl: "http://localhost:18080/auth/realms/camunda-platform" + + workerExecutionThreads: 10 + workerMaxJobsActive: 10 + + - type: "camunda8" + name: "Camunda8ZeebeOnly" + zeebeGatewayAddress: "127.0.0.1:26500" + zeebePlainText: true + workerExecutionThreads: 10 + workerMaxJobsActive: 10 + + - type: "camunda8saas" name: "Camunda8Grena" workerExecutionThreads: 10 workerMaxJobsActive: 10 - + region: "jfk-1" + clusterId: "b16d70cb-b654-4d76-a3a4-d4e438e4447c" + zeebeClientId: "nDyNLPuBqNrlQs4_3RsTDsFCgn~LkmJB" + zeebeClientSecret: "6HwNaOHVjHCUSjVmzm4J8zDtyohyxk7b~JF1PatZqnpDjujneQ62~dEh6M-j3APc" # Cluster 8.3.0 - region: "bru-2" - clusterId: "4b...e2" - clientId: "bs...6a" - secret: "-Ez...ZG" - oAuthUrl: "https://login.cloud.camunda.io/oauth/token" - audience: "zeebe.camunda.io" + authenticationUrl: "https://login.cloud.camunda.io/oauth/token" + zeebeAudience: "zeebe.camunda.io" + operateUrl: "https://bru-2.operate.camunda.io/4b..e2" - taskListUrl: "https://bru-2.tasklist.camunda.io/4b..e2" + operateClientId: SJRsNvQ3sS~LeLh.bYkkIZsRCKs-Y3jr + operateClientSecret: zyB5ihdg62L5afrOcPU.RR~O4poL97BoF5k8YUv.f3WBg9QqadYypp09ffIEXchW + taskListUrl: "https://bru-2.tasklist.camunda.io/4b..e2" + taskListClientId: "H5uyrOHGkG8C8S~FlbA3EWsWsyzXP8mr" + taskListClientSecret: ".7~Lhx0~dntq5hfRc0kbD_5iZLyWWIZ6ZZXbg.LG5snMYIDIaaCDtj8~r~dq.yxk" # This definition is very simple to use in the K8 definition, because one variable can be override servers: diff --git a/src/main/resources/banner.txt b/src/main/resources/banner.txt index 0621f88..c4f684f 100644 --- a/src/main/resources/banner.txt +++ b/src/main/resources/banner.txt @@ -5,5 +5,4 @@ __________ __ |____| |__| \____/ \___ >___ >____ >____ > (____ /____/ |__| \____/|__|_| (____ /__| \____/|__| \/ \/ \/ \/ \/ \/ \/ - (v1.4.0) - + (v1.5.2) diff --git a/src/main/resources/loadtest/C7SimpleTask.json 
b/src/main/resources/loadtest/C7SimpleTask.json index 5fc3c87..f3f65ed 100644 --- a/src/main/resources/loadtest/C7SimpleTask.json +++ b/src/main/resources/loadtest/C7SimpleTask.json @@ -25,13 +25,12 @@ { "label": "Ended (UserTask TheEnd) Verification", "processId": "SimpleTask", - "type" : "USERTASK", - "taskId" : "CheckTask", + "type": "USERTASK", + "taskId": "CheckTask", "value": 60000 } ] }, - "flows": [ { "type": "STARTEVENT", @@ -50,5 +49,5 @@ "waitingTime": "PT0S", "modeExecution": "ASYNCHRONOUS" } - ] - } \ No newline at end of file + ] +} \ No newline at end of file diff --git a/src/main/resources/loadtest/C7SimpleTaskDelegate.json b/src/main/resources/loadtest/C7SimpleTaskDelegate.json index 53920b4..8fb805a 100644 --- a/src/main/resources/loadtest/C7SimpleTaskDelegate.json +++ b/src/main/resources/loadtest/C7SimpleTaskDelegate.json @@ -25,13 +25,12 @@ { "label": "Ended (UserTask TheEnd) Verification", "processId": "SimpleTaskDelegate", - "type" : "USERTASK", - "taskId" : "CheckTask", + "type": "USERTASK", + "taskId": "CheckTask", "value": 60000 } ] }, - "flows": [ { "type": "STARTEVENT", @@ -44,5 +43,5 @@ "loopcrawl": "generaterandomlist(1000)" } } - ] - } \ No newline at end of file + ] +} \ No newline at end of file diff --git a/src/main/resources/loadtest/DiscoverySeedExtraction.json b/src/main/resources/loadtest/DiscoverySeedExtraction.json index c960779..aac6e99 100644 --- a/src/main/resources/loadtest/DiscoverySeedExtraction.json +++ b/src/main/resources/loadtest/DiscoverySeedExtraction.json @@ -24,8 +24,8 @@ "processIdSubprocess": "DiscoverySeedExtraction", "processIdCallactivity": "DiscoverySeedExtraction-ca", "processId": "DiscoverySeedExtraction-ca", - "type" : "CREATED", - "value" : 150, + "type": "CREATED", + "value": 150, "real": "Frequency: 5/20S Duration: 10MN : 150" }, { @@ -33,16 +33,16 @@ "processIdSubprocess": "DiscoverySeedExtraction", "processIdCallactivity": "DiscoverySeedExtraction-ca", "processId": "DiscoverySeedExtraction-ca", - "type" : "ENDED", - "value" : 0 + "type": "ENDED", + "value": 0 }, { "label": "Ended (UserTask TheEnd) SeedExtraction", "processIdSubprocess": "DiscoverySeedExtraction", "processIdCallactivity": "DiscoverySeedExtraction-ca", "processId": "DiscoverySeedExtraction-ca", - "type" : "USERTASK", - "taskId" : "Activity_DiscoverySeedExtraction_TheEnd", + "type": "USERTASK", + "taskId": "Activity_DiscoverySeedExtraction_TheEnd", "value": 150 }, { @@ -50,14 +50,14 @@ "processIdSubprocess": "DiscoverySeedExtraction", "processIdCallactivity": "DiscoverySeedExtraction-ca", "processId": "DiscoverySeedExtraction-ca", - "type" : "FLOWRATEUSERTASKMN", - "taskId" : "Activity_DiscoverySeedExtraction_TheEnd", + "type": "FLOWRATEUSERTASKMN", + "taskId": "Activity_DiscoverySeedExtraction_TheEnd", "standardDeviation": 10, "value": 15 } ] }, - "warmingUp" : { + "warmingUp": { "duration": "PT4M", "operations": [ { diff --git a/src/main/resources/loadtest/Verification.json b/src/main/resources/loadtest/Verification.json index 20683c3..f05d6ff 100644 --- a/src/main/resources/loadtest/Verification.json +++ b/src/main/resources/loadtest/Verification.json @@ -25,21 +25,21 @@ { "label": "Ended (UserTask TheEnd) Verification", "processId": "Verification", - "type" : "USERTASK", - "taskId" : "Activity_Verification_TheEnd", + "type": "USERTASK", + "taskId": "Activity_Verification_TheEnd", "value": 20820 }, { "label": "Flow per minutes", "processId": "Verification", - "type" : "FLOWRATEUSERTASKMN", - "taskId" : "Activity_Verification_TheEnd", + "type": 
"FLOWRATEUSERTASKMN", + "taskId": "Activity_Verification_TheEnd", "standardDeviation": 10, "value": 2082 } ] }, - "warmingUp" : { + "warmingUp": { "duration": "PT4M", "operations": [ { @@ -62,7 +62,7 @@ "frequency": "PT10S", "numberOfExecutions": "347", "nbWorkers": "1", - "label" : "347 /1 worker" + "label": "347 /1 worker" }, { "type": "SERVICETASK",