diff --git a/_benchmark/change-log.md b/_benchmark/change-log.md new file mode 100644 index 0000000000..89278cd91c --- /dev/null +++ b/_benchmark/change-log.md @@ -0,0 +1,25 @@ +--- +layout: default +title: Change log +nav_order: 150 +--- + +# Change log + +This page details the changes made between versions of OpenSearch Benchmark, starting with version 2.X. + +## 2.X + +The following benchmark components have been renamed between OpenSearch Benchmark 1.X and 2.X. + + +| 1.X | 2.X | +| :--- | :--- | +| execute-test | run | +| test-execution | test-run | +| test-procedure | scenario | +| load-worker-coordinator-hosts | worker-ips | +| results-publishing | reporting | +| provision_configs / provision_config_instance | cluster_config | + + diff --git a/_benchmark/quickstart.md b/_benchmark/quickstart.md index a6bcd59819..561d115ad7 100644 --- a/_benchmark/quickstart.md +++ b/_benchmark/quickstart.md @@ -76,7 +76,7 @@ If successful, OpenSearch returns the following response: ```bash $ opensearch-benchmark --help -usage: opensearch-benchmark [-h] [--version] {execute-test,list,info,create-workload,generate,compare,download,install,start,stop} ... +usage: opensearch-benchmark [-h] [--version] {run,list,info,create-workload,generate,compare,download,install,start,stop} ...
____ _____ __ ____ __ __ / __ \____ ___ ____ / ___/___ ____ ___________/ /_ / __ )___ ____ _____/ /_ ____ ___ ____ ______/ /__ @@ -92,13 +92,13 @@ optional arguments: --version show program's version number and exit subcommands: - {execute-test,list,info,create-workload,generate,compare,download,install,start,stop} - execute-test Run a benchmark + {run,list,info,create-workload,generate,compare,download,install,start,stop} + run Run a benchmark list List configuration options info Show info about a workload create-workload Create a Benchmark workload from existing data generate Generate artifacts - compare Compare two test_executions + compare Compare two test_runs download Downloads an artifact install Installs an OpenSearch node locally start Starts an OpenSearch node locally @@ -114,9 +114,9 @@ You can now run your first benchmark. The following benchmark uses the [percolat ### Understanding workload command flags -Benchmarks are run using the [`execute-test`]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) command with the following command flags: +Benchmarks are run using the [`run`]({{site.url}}{{site.baseurl}}/benchmark/commands/run/) command with the following command flags: -For additional `execute_test` command flags, see the [execute-test]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) reference. Some commonly used options are `--workload-params`, `--exclude-tasks`, and `--include-tasks`. +For additional `run` command flags, see the [run]({{site.url}}{{site.baseurl}}/benchmark/commands/run/) reference. Some commonly used options are `--workload-params`, `--exclude-tasks`, and `--include-tasks`. {: .tip} * `--pipeline=benchmark-only` : Informs OSB that users wants to provide their own OpenSearch cluster.
@@ -125,14 +125,14 @@ For additional `execute_test` command flags, see the [execute-test]({{site.url}} * `--client-options="basic_auth_user:'',basic_auth_password:''"`: The username and password for your OpenSearch cluster. * `--test-mode`: Allows a user to run the workload without running it for the entire duration. When this flag is present, Benchmark runs the first thousand operations of each task in the workload. This is only meant for sanity checks---the metrics produced are meaningless. -The `--distribution-version`, which indicates which OpenSearch version Benchmark will use when provisioning. When run, the `execute-test` command will parse the correct distribution version when it connects to the OpenSearch cluster. +The `--distribution-version` flag indicates which OpenSearch version Benchmark will use when provisioning. When invoked, the `run` command parses the correct distribution version when it connects to the OpenSearch cluster. ### Running the workload -To run the [percolator](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/percolator) workload with OpenSearch Benchmark, use the following `execute-test` command: +To run the [percolator](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/percolator) workload with OpenSearch Benchmark, use the following `run` command: ```bash -opensearch-benchmark execute-test --pipeline=benchmark-only --workload=percolator --target-host=https://localhost:9200 --client-options=basic_auth_user:admin,basic_auth_password:admin,verify_certs:false --test-mode +opensearch-benchmark run --pipeline=benchmark-only --workload=percolator --target-host=https://localhost:9200 --client-options=basic_auth_user:admin,basic_auth_password:admin,verify_certs:false --test-mode ``` {% include copy.html %} @@ -260,7 +260,7 @@ For more details about how the summary report is generated, see [Summary report] ## Running OpenSearch Benchmark on your own cluster -Now that you're
familiar with running OpenSearch Benchmark on a cluster, you can run OpenSearch Benchmark on your own cluster, using the same `execute-test` command, replacing the following settings. +Now that you're familiar with running OpenSearch Benchmark on a cluster, you can run OpenSearch Benchmark on your own cluster, using the same `run` command, replacing the following settings. * Replace `https://localhost:9200` with your target cluster endpoint. This could be a URI like `https://search.mydomain.com` or a `HOST:PORT` specification. * If the cluster is configured with basic authentication, replace the username and password in the command line with the appropriate credentials. @@ -271,7 +271,7 @@ Now that you're familiar with running OpenSearch Benchmark on a cluster, you can You can copy the following command template to use in your own terminal: ```bash -opensearch-benchmark execute-test --pipeline=benchmark-only --workload=percolator --target-host= --client-options=basic_auth_user:admin,basic_auth_password:admin +opensearch-benchmark run --pipeline=benchmark-only --workload=percolator --target-host= --client-options=basic_auth_user:admin,basic_auth_password:admin ``` {% include copy.html %} diff --git a/_benchmark/reference/commands/command-flags.md b/_benchmark/reference/commands/command-flags.md index 6520f80803..9fbf5cb27a 100644 --- a/_benchmark/reference/commands/command-flags.md +++ b/_benchmark/reference/commands/command-flags.md @@ -19,7 +19,7 @@ opensearch-benchmark -- Flags that accept comma-separated values, such `--telemetry`, can also accept a JSON array. This can be defined by passing a file path ending in `.json` or inline as a JSON string. -- Comma-seperated values: `opensearch-benchmark ... --test-procedure="ingest-only,search-aggregations"` +- Comma-separated values: `opensearch-benchmark ... --scenario="ingest-only,search-aggregations"` - JSON file: `opensearch-benchmark ...
--workload-params="params.json"` - JSON inline string: `opensearch-benchmark ... --telemetry='["node-stats", "recovery-stats"]'` @@ -54,13 +54,13 @@ Defines the workload to use based on the workload's name. You can find a list of Defines which variables to inject into the workload. Variables injected must be available in the workload. To see which parameters are valid in the official workloads, select the workload from [the workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads). -## test-procedure +## scenario -Defines the test procedures to use with each workload. You can find a list of test procedures that the workload supports by specifying the workload in the `info` command, for example, `opensearch-benchmark info --workload=`. To look up information on a specific test procedure, use the command `opensearch-benchmark info --workload= --test-procedure=`. +Defines the scenarios to use with each workload. You can find a list of scenarios that the workload supports by specifying the workload in the `info` command, for example, `opensearch-benchmark info --workload=`. To look up information on a specific scenario, use the command `opensearch-benchmark info --workload= --scenario=`. -## test-execution-id +## test-run-id Defines a unique ID for the test run. @@ -69,9 +69,9 @@ Defines a unique ID for the test run. ## include-tasks -Defines a comma-separated list of test procedure tasks to run. By default, all tasks listed in a test procedure array are run. +Defines a comma-separated list of scenario tasks to run. By default, all tasks listed in a scenario array are run. -Tests are executed in the order they are defined in `test-procedure`---not in the order they are defined in the command. +Tests are executed in the order they are defined in `scenario`---not in the order they are defined in the command. All task filters are case sensitive. @@ -79,19 +79,19 @@ All task filters are case sensitive. 
## exclude-tasks -Defines a comma-separated list of test procedure tasks not to run. +Defines a comma-separated list of scenario tasks not to run. ## baseline -The baseline TestExecution ID used to compare the contender TestExecution. +The baseline TestRun ID used to compare the contender TestRun. ## contender -The TestExecution ID for the contender being compared to the baseline. +The TestRun ID for the contender being compared to the baseline. ## results-format @@ -217,7 +217,7 @@ The major version of JDK to use. Defines a comma-separated list of clients to use. All options are passed to the OpenSearch Python client. Default is `timeout:60`. -## load-worker-coordinator-hosts +## worker-ips Defines a comma-separated list of hosts that coordinate loads. Default is `localhost`. diff --git a/_benchmark/reference/commands/compare.md b/_benchmark/reference/commands/compare.md index 35bafe0704..4fb215a0ea 100644 --- a/_benchmark/reference/commands/compare.md +++ b/_benchmark/reference/commands/compare.md @@ -12,11 +12,11 @@ redirect_from: # compare -The `compare` command helps you analyze the difference between two benchmark tests. This can help you analyze the performance impact of changes made from a previous test based on a specific Git revision. +The `compare` command helps you analyze the difference between two benchmark runs. This can help you evaluate the performance impact of changes made since a previous run based on a specific Git revision. ## Usage -You can compare two different workload tests using their `TestExecution IDs`. To find a list of tests run from a specific workload, use `opensearch-benchmark list test_executions`. You should receive an output similar to the following: +You can compare two different workload runs using their `TestRun IDs`. To find a list of tests run from a specific workload, use `opensearch-benchmark list test_runs`.
You should receive an output similar to the following: ``` @@ -26,11 +26,11 @@ You can compare two different workload tests using their `TestExecution IDs`. To / /_/ / /_/ / __/ / / /__/ / __/ /_/ / / / /__/ / / / / /_/ / __/ / / / /__/ / / / / / / / / /_/ / / / ,< \____/ .___/\___/_/ /_/____/\___/\__,_/_/ \___/_/ /_/ /_____/\___/_/ /_/\___/_/ /_/_/ /_/ /_/\__,_/_/ /_/|_| /_/ -Recent test-executions: +Recent test-runs: -Recent test_executions: +Recent test_runs: -TestExecution ID TestExecution Timestamp Workload Workload Parameters TestProcedure ProvisionConfigInstance User Tags workload Revision Provision Config Revision +TestRun ID TestRun Timestamp Workload Workload Parameters TestProcedure ProvisionConfigInstance User Tags workload Revision Provision Config Revision ------------------------------------ ------------------------- ---------- --------------------- ------------------- ------------------------- ----------- ------------------- --------------------------- 729291a0-ee87-44e5-9b75-cc6d50c89702 20230524T181718Z geonames append-no-conflicts 4gheap 30260cf f91c33d0-ec93-48e1-975e-37476a5c9fe5 20230524T170134Z geonames append-no-conflicts 4gheap 30260cf @@ -56,12 +56,12 @@ You should receive the following response comparing the final benchmark metrics /_/ Comparing baseline - TestExecution ID: 729291a0-ee87-44e5-9b75-cc6d50c89702 - TestExecution timestamp: 2023-05-24 18:17:18 + TestRun ID: 729291a0-ee87-44e5-9b75-cc6d50c89702 + TestRun timestamp: 2023-05-24 18:17:18 with contender - TestExecution ID: a33845cc-c2e5-4488-a2db-b0670741ff9b - TestExecution timestamp: 2023-05-23 21:31:45 + TestRun ID: a33845cc-c2e5-4488-a2db-b0670741ff9b + TestRun timestamp: 2023-05-23 21:31:45 ------------------------------------------------------ @@ -127,8 +127,8 @@ Query latency country_agg_cached (100.0 percentile) [ms] 3.42547 2.8681 You can use the following options to customize the results of your test comparison: -- `--baseline`: The baseline TestExecution ID used to 
compare the contender TestExecution. -- `--contender`: The TestExecution ID for the contender being compared to the baseline. +- `--baseline`: The baseline TestRun ID used to compare the contender TestRun. +- `--contender`: The TestRun ID for the contender being compared to the baseline. - `--results-format`: Defines the output format for the command line results, either `markdown` or `csv`. Default is `markdown`. - `--results-numbers-align`: Defines the column number alignment for when the `compare` command outputs results. Default is `right`. - `--results-file`: When provided a file path, writes the compare results to the file indicated in the path. diff --git a/_benchmark/reference/commands/index.md b/_benchmark/reference/commands/index.md index 12276d1713..9d04eb965f 100644 --- a/_benchmark/reference/commands/index.md +++ b/_benchmark/reference/commands/index.md @@ -9,11 +9,11 @@ redirect_from: /benchmark/commands/index/ # OpenSearch Benchmark command reference -This section provides a list of commands supported by OpenSearch Benchmark, including commonly used commands such as `execute-test` and `list`. +This section provides a list of commands supported by OpenSearch Benchmark, including commonly used commands such as `run` and `list`. 
- [compare]({{site.url}}{{site.baseurl}}/benchmark/commands/compare/) - [download]({{site.url}}{{site.baseurl}}/benchmark/commands/download/) -- [execute-test]({{site.url}}{{site.baseurl}}/benchmark/commands/execute-test/) +- [run]({{site.url}}{{site.baseurl}}/benchmark/commands/run/) - [info]({{site.url}}{{site.baseurl}}/benchmark/commands/info/) - [list]({{site.url}}{{site.baseurl}}/benchmark/commands/list/) diff --git a/_benchmark/reference/commands/info.md b/_benchmark/reference/commands/info.md index 3bfefabe99..7affcf7797 100644 --- a/_benchmark/reference/commands/info.md +++ b/_benchmark/reference/commands/info.md @@ -43,7 +43,7 @@ Showing details for workload [nyc_taxis]: TestProcedure [searchable-snapshot] =================================== -Measuring performance for Searchable Snapshot feature. Based on the default test procedure 'append-no-conflicts'. +Measuring performance for Searchable Snapshot feature. Based on the default scenario 'append-no-conflicts'. Schedule: ---------- @@ -158,6 +158,6 @@ You can use the following options with the `info` command: - `--workload-path`: Defines the path to a downloaded or custom workload. - `--workload-revision`: Defines a specific revision from the workload source tree that OpenSearch Benchmark should use. - `--workload`: Defines the workload to use based on the workload's name. You can find a list of preloaded workloads using `opensearch-benchmark list workloads`. -- `--test-procedure`: Defines a test procedure to use. You can find a list of test procedures using `opensearch-benchmark list test_procedures`. -- `--include-tasks`: Defines a comma-separated list of test procedure tasks to run. By default, all tasks listed in a test procedure array are run. -- `--exclude-tasks`: Defines a comma-separated list of test procedure tasks not to run. +- `--scenario`: Defines a scenario to use. You can find a list of scenarios using `opensearch-benchmark list scenarios`. 
+- `--include-tasks`: Defines a comma-separated list of scenario tasks to run. By default, all tasks listed in a scenario array are run. +- `--exclude-tasks`: Defines a comma-separated list of scenario tasks not to run. diff --git a/_benchmark/reference/commands/list.md b/_benchmark/reference/commands/list.md index 1c51cfa27e..c5716fc2ec 100644 --- a/_benchmark/reference/commands/list.md +++ b/_benchmark/reference/commands/list.md @@ -17,8 +17,8 @@ The `list` command lists the following elements used by OpenSearch Benchmark: - `telemetry`: Telemetry devices - `workloads`: Workloads - `pipelines`: Pipelines -- `test_executions`: Single run of a workload -- `provision_config_instances`: Provisioned configuration instances +- `test_runs`: Single run of a workload +- `cluster_configs`: Provisioned configuration instances - `opensearch-plugins`: OpenSearch plugins @@ -27,13 +27,13 @@ The `list` command lists the following elements used by OpenSearch Benchmark: The following example lists any workload test runs and detailed information about each test: ``` -`opensearch-benchmark list test_executions +opensearch-benchmark list test_runs ``` OpenSearch Benchmark returns information about each test.
``` -benchmark list test_executions +benchmark list test_runs ____ _____ __ ____ __ __ / __ \____ ___ ____ / ___/___ ____ ___________/ /_ / __ )___ ____ _____/ /_ ____ ___ ____ ______/ /__ @@ -43,9 +43,9 @@ benchmark list test_executions /_/ -Recent test_executions: +Recent test_runs: -TestExecution ID TestExecution Timestamp Workload Workload Parameters TestProcedure ProvisionConfigInstance User Tags workload Revision Provision Config Revision +TestRun ID TestRun Timestamp Workload Workload Parameters TestProcedure ProvisionConfigInstance User Tags workload Revision Provision Config Revision ------------------------------------ ------------------------- ---------- --------------------- ------------------- ------------------------- ----------- ------------------- --------------------------- 729291a0-ee87-44e5-9b75-cc6d50c89702 20230524T181718Z geonames append-no-conflicts 4gheap 30260cf f91c33d0-ec93-48e1-975e-37476a5c9fe5 20230524T170134Z geonames append-no-conflicts 4gheap 30260cf diff --git a/_benchmark/reference/commands/execute-test.md b/_benchmark/reference/commands/run.md similarity index 84% rename from _benchmark/reference/commands/execute-test.md rename to _benchmark/reference/commands/run.md index 82b677d900..3c9b97c1c4 100644 --- a/_benchmark/reference/commands/execute-test.md +++ b/_benchmark/reference/commands/run.md @@ -1,25 +1,26 @@ --- layout: default -title: execute-test +title: run nav_order: 65 parent: Command reference grand_parent: OpenSearch Benchmark Reference redirect_from: + - /benchmark/commands/run/ - /benchmark/commands/execute-test/ --- -# execute-test +# run -Whether you're using the included [OpenSearch Benchmark workloads](https://github.com/opensearch-project/opensearch-benchmark-workloads) or a [custom workload]({{site.url}}{{site.baseurl}}/benchmark/creating-custom-workloads/), use the `execute-test` command to gather data about the performance of your OpenSearch cluster according to the selected workload. 
+Whether you're using the included [OpenSearch Benchmark workloads](https://github.com/opensearch-project/opensearch-benchmark-workloads) or a [custom workload]({{site.url}}{{site.baseurl}}/benchmark/creating-custom-workloads/), use the `run` command to gather data about the performance of your OpenSearch cluster according to the selected workload. ## Usage -The following example executes a test using the `geonames` workload in test mode: +The following example performs a quick test using the `geonames` workload in test mode: ``` -opensearch-benchmark execute-test --workload=geonames --test-mode +opensearch-benchmark run --workload=geonames --test-mode ``` After the test runs, OpenSearch Benchmark responds with a summary of the benchmark metrics: @@ -86,11 +87,11 @@ After the test runs, OpenSearch Benchmark responds with a summary of the benchma ## Options -Use the following options to customize the `execute-test` command for your use case. Options in this section are categorized by their use case. +Use the following options to customize the `run` command for your use case. Options in this section are categorized by their use case. ## General settings -The following options shape how each test runs and how results appear: +The following options customize each test run and affect how run results appear: - `--test-mode`: Runs the given workload in test mode, which is useful when checking a workload for errors. - `--user-tag`: Defines user-specific key-value pairs to be used in metric record as meta information, for example, `intention:baseline-ticket-12345`.
@@ -102,11 +103,11 @@ The following options shape how each test runs and how results appear: ### Distributions -The following options set which version of OpenSearch and the OpenSearch plugins the benchmark test uses: +The following options set which version of OpenSearch and the OpenSearch plugins the benchmark run uses: - `--distribution-version`: Downloads the specified OpenSearch distribution based on version number. For a list of released OpenSearch versions, see [Version history](https://opensearch.org/docs/version-history/). - `--distribution-repository`: Defines the repository from where the OpenSearch distribution should be downloaded. Default is `release`. -- `--revision`: Defines the current source code revision to use for running a benchmark test. Default is `current`. +- `--revision`: Defines the source code revision to use for a benchmark run. Default is `current`. - `current`: Uses the source tree's current revision based on your OpenSearch distribution. - `latest`: Fetches the latest revision from the main branch of the source tree. - You can also use a timestamp or commit ID from the source tree. When using a timestamp, specify `@ts`, where "ts" is a valid ISO 8601 timestamp, for example, `@2013-07-27T10:37:00Z`. @@ -126,7 +127,7 @@ The following option relates to the target cluster of the benchmark. The following options help those who want to use multiple hosts to generate load to the benchmark cluster: -- `--load-worker-coordinator-hosts`: Defines a comma-separated list of hosts that coordinate loads. Default is `localhost`. +- `--worker-ips`: Defines a comma-separated list of hosts that coordinate loads. Default is `localhost`. - `--enable-worker-coordinator-profiling`: Enables an analysis of the performance of OpenSearch Benchmark's worker coordinator. Default is `false`.
### Provisioning @@ -142,21 +143,21 @@ The following options help customize how OpenSearch Benchmark provisions OpenSea ### Workload -The following options determine which workload is used to run the test: +The following options determine which workload is used during the test: - `--workload-repository`: Defines the repository from which OpenSearch Benchmark loads workloads. - `--workload-path`: Defines the path to a downloaded or custom workload. - `--workload-revision`: Defines a specific revision from the workload source tree that OpenSearch Benchmark should use. - `--workload`: Defines the workload to use based on the workload's name. You can find a list of preloaded workloads using `opensearch-benchmark list workloads`. -### Test procedures +### Scenarios -The following options define what test procedures the test uses and which operations are contained inside the procedure: +The following options define what scenarios the test uses and which operations are contained inside each scenario: -- `--test-execution-id`: Defines a unique ID for this test run. -Defines the test procedures to use with each workload. You can find a list of test procedures that the workload supports by specifying the workload in the `info` command, for example, `opensearch-benchmark info --workload=`. To look up information on a specific test procedure, use the command `opensearch-benchmark info --workload= --test-procedure=`. -- `--include-tasks`: Defines a comma-separated list of test procedure tasks to run. By default, all tasks listed in a test procedure array are run. -- `--exclude-tasks`: Defines a comma-separated list of test procedure tasks not to run. +- `--test-run-id`: Defines a unique ID for this run. +- `--scenario`: Defines the scenarios to use with each workload. You can find a list of scenarios that the workload supports by specifying the workload in the `info` command, for example, `opensearch-benchmark info --workload=`.
To look up information on a specific scenario, use the command `opensearch-benchmark info --workload= --scenario=`. +- `--include-tasks`: Defines a comma-separated list of scenario tasks to run. By default, all tasks listed in a scenario array are run. +- `--exclude-tasks`: Defines a comma-separated list of scenario tasks not to run. - `--enable-assertions`: Enables assertion checks for tasks. Default is `false`. ### Pipelines @@ -174,10 +175,10 @@ The following options enable telemetry devices on OpenSearch Benchmark: ### Errors -The following options set how OpenSearch Benchmark handles errors when running tests: +The following options set how OpenSearch Benchmark handles errors during runs: - `--on-error`: Controls how OpenSearch Benchmark responds to errors. Default is `continue`. - - `continue`: Continues to run the test despite the error. - - `abort`: Aborts the test when an error occurs. + - `continue`: Continues to run despite the error. + - `abort`: Aborts the run when an error occurs. - `--preserve-install`: Keeps the Benchmark candidate and its index. Default is `false`. - `--kill-running-processes`: When set to `true`, stops any OpenSearch Benchmark processes currently running and allows OpenSearch Benchmark to continue to run. Default is `false`. diff --git a/_benchmark/reference/metrics/index.md b/_benchmark/reference/metrics/index.md index 63e5a799e8..0243784b62 100644 --- a/_benchmark/reference/metrics/index.md +++ b/_benchmark/reference/metrics/index.md @@ -13,14 +13,14 @@ After a workload completes, OpenSearch Benchmark stores all metric records withi ## Storing metrics -You can specify whether metrics are stored in memory or in a metrics store while running the benchmark by setting the [`datastore.type`](https://opensearch.org/docs/latest/benchmark/configuring-benchmark/#results_publishing) parameter in your `benchmark.ini` file.
+You can specify whether metrics are stored in memory or in a metrics store while running the benchmark by setting the [`datastore.type`](https://opensearch.org/docs/latest/benchmark/configuring-benchmark/#reporting) parameter in your `benchmark.ini` file. ### In memory -If you want to store metrics in memory while running the benchmark, provide the following settings in the `results_publishing` section of `benchmark.ini`: +If you want to store metrics in memory while running the benchmark, provide the following settings in the `reporting` section of `benchmark.ini`: ```ini -[results_publishing] +[reporting] datastore.type = in-memory datastore.host = datastore.port = @@ -32,10 +32,10 @@ datastore.password = ### OpenSearch -If you want to store metrics in an external OpenSearch memory store while running the benchmark, provide the following settings in the `results_publishing` section of `benchmark.ini`: +If you want to store metrics in an external OpenSearch memory store while running the benchmark, provide the following settings in the `reporting` section of `benchmark.ini`: ```ini -[results_publishing] +[reporting] datastore.type = opensearch datastore.host = datastore.port = 443 @@ -52,7 +52,7 @@ After you run OpenSearch Benchmark configured to use OpenSearch as a data store, - `benchmark-metrics-YYYY-MM`: Holds granular metric and telemetry data. - `benchmark-results-YYYY-MM`: Holds data based on final results. -`benchmark-test-executions-YYYY-MM`: Holds data about `execution-ids`. +- `benchmark-test-runs-YYYY-MM`: Holds data about `test-run-ids`. You can visualize data inside these indexes in OpenSearch Dashboards. diff --git a/_benchmark/reference/metrics/metric-records.md b/_benchmark/reference/metrics/metric-records.md index 1809401783..e9882b5b05 100644 --- a/_benchmark/reference/metrics/metric-records.md +++ b/_benchmark/reference/metrics/metric-records.md @@ -21,11 +21,11 @@ OpenSearch Benchmark stores metrics in the `benchmark-metrics-*` indexes.
A new "_source": { "@timestamp": 1691702842821, "relative-time-ms": 65.90720731765032, - "test-execution-id": "8c43ee4c-cb34-494b-81b2-181be244f832", - "test-execution-timestamp": "20230810T212711Z", + "test-run-id": "8c43ee4c-cb34-494b-81b2-181be244f832", + "test-run-timestamp": "20230810T212711Z", "environment": "local", "workload": "geonames", - "test_procedure": "append-no-conflicts", + "scenario": "append-no-conflicts", "provision-config-instance": "external", "name": "service_time", "value": 607.8001195564866, @@ -49,7 +49,7 @@ OpenSearch Benchmark stores metrics in the `benchmark-metrics-*` indexes. A new "@timestamp": [ "2023-08-10T21:27:22.821Z" ], - "test-execution-timestamp": [ + "test-run-timestamp": [ "2023-08-10T21:27:11.000Z" ] }, @@ -82,13 +82,13 @@ The timestamp of when the sample was taken since the epoch, in milliseconds. For The relative time since the start of the benchmark, in milliseconds. This is useful for comparing time-series graphs across multiple tests. For example, you can compare the indexing throughput over time across multiple tests. -## test-execution-id +## test-run-id A UUID that changes on every invocation of the workload. It is intended to group all samples of a benchmarking run. -## test-execution-timestamp +## test-run-timestamp The timestamp of when the workload was invoked (always in UTC). @@ -100,10 +100,10 @@ The timestamp of when the workload was invoked (always in UTC). The `environment` describes the origin of a metric record. This is defined when initially [configuring]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/) OpenSearch Benchmark. You can use separate environments for different benchmarks but store the metric records in the same index. -## workload, test_procedure, provision-config-instance +## workload, scenario, provision-config-instance -The workload, test procedures, and configuration instances for which the metrics are produced. 
+The workload, scenarios, and configuration instances for which the metrics are produced. ## name, value, unit diff --git a/_benchmark/reference/workloads/operations.md b/_benchmark/reference/workloads/operations.md index ed6e6b8527..c8dad5f438 100644 --- a/_benchmark/reference/workloads/operations.md +++ b/_benchmark/reference/workloads/operations.md @@ -215,7 +215,7 @@ The `delete-index` operation returns the following metadata: ## cluster-health -The `cluster-health` operation runs the [Cluster Health API](api-reference/cluster-api/cluster-health/), which checks the cluster health status and returns the expected status according to the parameters set for `request-params`. If an unexpected cluster health status is returned, the operation reports a failure. You can use the `--on-error` option in the OpenSearch Benchmark `execute-test` command to control how OpenSearch Benchmark behaves when the health check fails. +The `cluster-health` operation runs the [Cluster Health API](api-reference/cluster-api/cluster-health/), which checks the cluster health status and returns the expected status according to the parameters set for `request-params`. If an unexpected cluster health status is returned, the operation reports a failure. You can use the `--on-error` option in the OpenSearch Benchmark `run` command to control how OpenSearch Benchmark behaves when the health check fails. 
### Usage diff --git a/_benchmark/reference/workloads/test-procedures.md b/_benchmark/reference/workloads/scenarios.md similarity index 89% rename from _benchmark/reference/workloads/test-procedures.md rename to _benchmark/reference/workloads/scenarios.md index 43099f0ab3..d1e6e15254 100644 --- a/_benchmark/reference/workloads/test-procedures.md +++ b/_benchmark/reference/workloads/scenarios.md @@ -1,25 +1,27 @@ --- layout: default -title: test_procedures +title: Scenarios parent: Workload reference grand_parent: OpenSearch Benchmark Reference nav_order: 110 +redirect_from: + - /benchmark/workloads/test-procedures/ --- -# test_procedures +# Scenarios -If your workload only defines one benchmarking scenario, specify the schedule at the top level. Use the `test-procedures` element to specify additional properties, such as a name or description. A test procedure is like a benchmarking scenario. If you have multiple test procedures, you can define a variety of challenges. +If your workload only defines one benchmarking scenario, specify the schedule at the top level. Use the `scenarios` element to specify additional properties, such as a name or description. If you define multiple scenarios, you can benchmark a variety of configurations within the same workload. -The following table lists test procedures for the benchmarking scenarios in this dataset. A test procedure can reference all operations that are defined in the operations section. +The following table lists the parameters available to the scenarios in this dataset. A scenario can reference all operations that are defined in the operations section. Parameter | Required | Type | Description :--- | :--- | :--- | :--- -`name` | Yes | String | The name of the test procedure. When naming the test procedure, do not use spaces; this ensures that the name can be easily entered on the command line. -`description` | No | String | Describes the test procedure in a human-readable format.
+`name` | Yes | String | The name of the scenario. When naming the scenario, do not use spaces; this ensures that the name can be easily entered on the command line. +`description` | No | String | Describes the scenario in a human-readable format. `user-info` | No | String | Outputs a message at the start of the test to notify you about important test-related information, for example, deprecations. -`default` | No | Boolean | When set to `true`, selects the default test procedure if you did not specify a test procedure on the command line. If the workload only defines one test procedure, it is implicitly selected as the default. Otherwise, you must define `"default": true` on exactly one challenge. +`default` | No | Boolean | When set to `true`, selects the default scenario if you did not specify a scenario on the command line. If the workload only defines one scenario, it is implicitly selected as the default. Otherwise, you must define `"default": true` on exactly one scenario. [`schedule`](#Schedule) | Yes | Array | Defines the order in which workload tasks are run. diff --git a/_benchmark/tutorials/sigv4.md b/_benchmark/tutorials/sigv4.md index f7ef38f948..49d9aa6eb0 100644 --- a/_benchmark/tutorials/sigv4.md +++ b/_benchmark/tutorials/sigv4.md @@ -34,10 +34,10 @@ OpenSearch Benchmark supports AWS Signature Version 4 authentication. To run Ben If you're testing against Amazon OpenSearch Serverless, set `OSB_SERVICE` to `aoss`. -3. Customize and run the following `execute-test` command with the ` --client-options=amazon_aws_log_in:environment` flag. This flag tells OpenSearch Benchmark the location of your exported credentials. +3. Customize and run the following `run` command with the `--client-options=amazon_aws_log_in:environment` flag. This flag tells OpenSearch Benchmark the location of your exported credentials.
```bash - opensearch-benchmark execute-test \ + opensearch-benchmark run \ --target-hosts= \ --pipeline=benchmark-only \ --workload=geonames \ diff --git a/_benchmark/user-guide/configuring-benchmark.md b/_benchmark/user-guide/configuring-benchmark.md index 2be467d587..9c51aba444 100644 --- a/_benchmark/user-guide/configuring-benchmark.md +++ b/_benchmark/user-guide/configuring-benchmark.md @@ -71,7 +71,7 @@ This section contains the settings that can be customized in the OpenSearch Benc | `local.dataset.cache` | String | The directory in which benchmark datasets are stored. Depending on the benchmarks that are run, this directory may contain hundreds of GB of data. Default path is `$HOME/.benchmark/benchmarks/data`. | -## results_publishing +## reporting This section defines how benchmark metrics are stored. @@ -109,7 +109,7 @@ You can use the following examples to set reporting values in your cluster. This example defines an unprotected metrics store in the local network: ``` -[results_publishing] +[reporting] datastore.type = opensearch datastore.host = 192.168.10.17 datastore.port = 9200 @@ -121,7 +121,7 @@ datastore.password = This example defines a secure connection to a metrics store in the local network with a self-signed certificate: ``` -[results_publishing] +[reporting] datastore.type = opensearch datastore.host = 192.168.10.22 datastore.port = 9200 diff --git a/_benchmark/user-guide/contributing-workloads.md b/_benchmark/user-guide/contributing-workloads.md index e60f60eaed..1141656c13 100644 --- a/_benchmark/user-guide/contributing-workloads.md +++ b/_benchmark/user-guide/contributing-workloads.md @@ -20,7 +20,7 @@ Provide a detailed `README.MD` file that includes the following: - The purpose of the workload. When creating a description for the workload, consider its specific use and how that use case differs from others in the [workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/).
- An example document from the dataset that helps users understand the data's structure. - The workload parameters that can be used to customize the workload. -- A list of default test procedures included in the workload as well as other test procedures that the workload can run. +- A list of default scenarios included in the workload as well as other scenarios that the workload can run. - An output sample produced by the workload after a test is run. - A copy of the open-source license that gives the user and OpenSearch Benchmark permission to use the dataset. @@ -33,7 +33,7 @@ The workload must include the following files: - `workload.json` - `index.json` - `files.txt` -- `test_procedures/default.json` +- `scenarios/default.json` - `operations/default.json` Both `default.json` file names can be customized to have a descriptive name. The workload can include an optional `workload.py` file to add more dynamic functionality. For more information about a file's contents, go to [Anatomy of a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload/). 
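Taken together, the files above give a contributed workload a directory layout like the following (the workload name `my-workload` is a placeholder; `workload.py` is optional):

```
my-workload/
├── README.md
├── workload.json
├── index.json
├── files.txt
├── workload.py
├── operations/
│   └── default.json
└── scenarios/
    └── default.json
```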
diff --git a/_benchmark/user-guide/creating-custom-workloads.md b/_benchmark/user-guide/creating-custom-workloads.md index ee0dca1ce9..58e87df39e 100644 --- a/_benchmark/user-guide/creating-custom-workloads.md +++ b/_benchmark/user-guide/creating-custom-workloads.md @@ -21,8 +21,8 @@ OpenSearch Benchmark (OSB) includes a set of [workloads](https://github.com/open - [Invoking your custom workload](#invoking-your-custom-workload) - [Advanced options](#advanced-options) - [Test mode](#test-mode) - - [Adding variance to test procedures](#adding-variance-to-test-procedures) - - [Separate operations and test procedures](#separate-operations-and-test-procedures) + - [Adding variance to scenarios](#adding-variance-to-scenarios) + - [Separate operations and scenarios](#separate-operations-and-scenarios) - [Next steps](#next-steps) ## Creating a workload from an existing cluster @@ -158,7 +158,7 @@ To build a workload with source files, create a directory for your workload and - `corpora`: Defines the corpora and the source file, including the: - `document-count`: The number of documents in `-documents.json`. To get an accurate number of documents, run `wc -l -documents.json`. - `uncompressed-bytes`: The number of bytes inside the index. To get an accurate number of bytes, run `stat -f %z -documents.json` on macOS or `stat -c %s -documents.json` on GNU/Linux. Alternatively, run `ls -lrt | grep -documents.json`. - - `schedule`: Defines the sequence of operations and available test procedures for the workload. + - `schedule`: Defines the sequence of operations and available scenarios for the workload. The following example `workload.json` file provides the entry point for the `movies` workload. The `indices` section creates an index called `movies`. The corpora section refers to the source file created in step one, `movie-documents.json`, and provides the document count and the amount of uncompressed bytes. 
Lastly, the schedule section defines a few operations the workload performs when invoked, including: @@ -260,7 +260,7 @@ opensearch-benchmark list workloads --workload-path= ## Invoking your custom workload -Use the `opensearch-benchmark execute-test` command to invoke your new workload and run a benchmark test against your OpenSearch cluster, as shown in the following example. Replace `--workload-path` with the path to your custom workload, `--target-host` with the `host:port` pairs for your cluster, and `--client-options` with any authorization options required to access the cluster. +Use the `opensearch-benchmark run` command to invoke your new workload and run a benchmark test against your OpenSearch cluster, as shown in the following example. Replace `--workload-path` with the path to your custom workload, `--target-host` with the `host:port` pairs for your cluster, and `--client-options` with any authorization options required to access the cluster. ``` -opensearch-benchmark execute_test \ +opensearch-benchmark run \ @@ -278,7 +278,7 @@ You can enhance your custom workload's functionality with the following advanced ### Test mode -If you want run the test in test mode to make sure your workload operates as intended, add the `--test-mode` option to the `execute-test` command. Test mode ingests only the first 1000 documents from each index provided and runs query operations against them. +If you want to run the test in test mode to make sure your workload operates as intended, add the `--test-mode` option to the `run` command. Test mode ingests only the first 1000 documents from each index provided and runs query operations against them.
To use test mode, create a `-documents-1k.json` file that contains the first 1000 documents from `-documents.json` using the following command: @@ -286,7 +286,7 @@ To use test mode, create a `-documents-1k.json` file that contains the fi head -n 1000 -documents.json > -documents-1k.json ``` -Then, run `opensearch-benchmark execute-test` with the option `--test-mode`. Test mode runs a quick version of the workload test. +Then, run `opensearch-benchmark run` with the option `--test-mode`. Test mode runs a quick version of the workload test. ``` -opensearch-benchmark execute_test \ +opensearch-benchmark run \ @@ -297,19 +297,19 @@ opensearch-benchmark execute_test \ --test-mode ``` -### Adding variance to test procedures +### Adding variance to scenarios -After using your custom workload several times, you might want to use the same workload but perform the workload's operations in a different order. Instead of creating a new workload or reorganizing the procedures directly, you can provide test procedures to vary workload operations. +After using your custom workload several times, you might want to use the same workload but perform the workload's operations in a different order. Instead of creating a new workload or reorganizing the scenarios directly, you can provide scenarios to vary workload operations. -To add variance to your workload operations, go to your `workload.json` file and replace the `schedule` section with a `test_procedures` array, as shown in the following example. Each item in the array contains the following: +To add variance to your workload operations, go to your `workload.json` file and replace the `schedule` section with a `scenarios` array, as shown in the following example. Each item in the array contains the following: -- `name`: The name of the test procedure. -- `default`: When set to `true`, OpenSearch Benchmark defaults to the test procedure specified as `default` in the workload if no other test procedures are specified.
-- `schedule`: All the operations the test procedure will run. +- `name`: The name of the scenario. +- `default`: When set to `true`, OpenSearch Benchmark defaults to the scenario specified as `default` in the workload if no other scenarios are specified. +- `schedule`: All the operations the scenario will run. ```json -"test_procedures": [ +"scenarios": [ { "name": "index-and-query", "default": true, @@ -367,11 +367,11 @@ To add variance to your workload operations, go to your `workload.json` file and } ``` -### Separate operations and test procedures +### Separate operations and scenarios -If you want to make your `workload.json` file more readable, you can separate your operations and test procedures into different directories and reference the path to each in `workload.json`. To separate operations and procedures, perform the following steps: +If you want to make your `workload.json` file more readable, you can separate your operations and scenarios into different directories and reference the path to each in `workload.json`. To separate operations and scenarios, perform the following steps: -1. Add all test procedures to a single file. You can give the file any name. Because the `movies` workload in the preceding contains and index task and queries, this step names the test procedures file `index-and-query.json`. +1. Add all scenarios to a single file. You can give the file any name. Because the `movies` workload in the preceding section contains an index task and queries, this step names the scenarios file `index-and-query.json`. 2. Add all operations to a file named `operations.json`. 3.
Reference the new files in `workload.json` by adding the following syntax, replacing `parts` with the relative path to each file, as shown in the following example: @@ -379,9 +379,9 @@ If you want to make your `workload.json` file more readable, you can separate yo "operations": [ {% raw %}{{ benchmark.collect(parts="operations/*.json") }}{% endraw %} ] - # Reference test procedure files in workload.json - "test_procedures": [ - {% raw %}{{ benchmark.collect(parts="test_procedures/*.json") }}{% endraw %} + # Reference scenario files in workload.json + "scenarios": [ + {% raw %}{{ benchmark.collect(parts="scenarios/*.json") }}{% endraw %} ] ``` diff --git a/_benchmark/user-guide/installing-benchmark.md b/_benchmark/user-guide/installing-benchmark.md index 8383cfb2f9..9d6cb29561 100644 --- a/_benchmark/user-guide/installing-benchmark.md +++ b/_benchmark/user-guide/installing-benchmark.md @@ -106,7 +106,7 @@ You can find the official Docker images for OpenSearch Benchmark on [Docker Hub] Some OpenSearch Benchmark functionality is unavailable when you run OpenSearch Benchmark in a Docker container. Specifically, the following restrictions apply: -- OpenSearch Benchmark cannot distribute load from multiple hosts, such as load worker coordinator hosts. +- OpenSearch Benchmark cannot distribute load from multiple hosts, such as those specified by `worker-ip`. - OpenSearch Benchmark cannot provision OpenSearch nodes and can only run tests on previously existing clusters. You can only invoke OpenSearch Benchmark commands using the `benchmark-only` pipeline. ### Pulling the Docker images @@ -146,7 +146,7 @@ Use the `-v` option to specify a local directory to mount and a directory in the The following example command creates a volume in a user's home directory, mounts the volume to the OpenSearch Benchmark container at `/opensearch-benchmark/.benchmark`, and then runs a test benchmark using the geonames workload.
Some client options are also specified: ```bash -docker run -v $HOME/benchmarks:/opensearch-benchmark/.benchmark opensearchproject/opensearch-benchmark execute-test --target-hosts https://198.51.100.25:9200 --pipeline benchmark-only --workload geonames --client-options basic_auth_user:admin,basic_auth_password:admin,verify_certs:false --test-mode +docker run -v $HOME/benchmarks:/opensearch-benchmark/.benchmark opensearchproject/opensearch-benchmark run --target-hosts https://198.51.100.25:9200 --pipeline benchmark-only --workload geonames --client-options basic_auth_user:admin,basic_auth_password:admin,verify_certs:false --test-mode ``` {% include copy.html %} @@ -157,7 +157,7 @@ See [Configuring OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/co OpenSearch Benchmark is compatible with JDK versions 17, 16, 15, 14, 13, 12, 11, and 8. {: .note} -If you installed OpenSearch with PyPi, you can also provision a new OpenSearch cluster by specifying a `distribution-version` in the `execute-test` command. +If you installed OpenSearch Benchmark with PyPI, you can also provision a new OpenSearch cluster by specifying a `distribution-version` in the `run` command. If you plan on having Benchmark provision a cluster, you'll need to inform Benchmark of the location of the `JAVA_HOME` path for the Benchmark cluster. To set the `JAVA_HOME` path and provision a cluster: @@ -165,10 +165,10 @@ If you plan on having Benchmark provision a cluster, you'll need to inform Bench 2. Set your corresponding JDK version environment variable by entering the path from the previous step. Enter `export JAVA17_HOME=`. -3. Run the `execute-test` command and indicate the distribution version of OpenSearch you want to use: +3.
Use the `run` command and indicate the distribution version of OpenSearch you want to use: ```bash - opensearch-benchmark execute-test --distribution-version=2.3.0 --workload=geonames --test-mode + opensearch-benchmark run --distribution-version=2.3.0 --workload=geonames --test-mode ``` ## Directory structure @@ -185,7 +185,7 @@ After running OpenSearch Benchmark for the first time, you can search through al │ ├── distributions │ │ ├── opensearch-1.0.0-linux-x64.tar.gz │ │ └── opensearch-2.3.0-linux-x64.tar.gz -│ ├── test_executions +│ ├── test_runs │ │ ├── 0279b13b-1e54-49c7-b1a7-cde0b303a797 │ │ └── 0279c542-a856-4e88-9cc8-04306378cd38 │ └── workloads @@ -199,7 +199,7 @@ After running OpenSearch Benchmark for the first time, you can search through al * `benchmark.ini`: Contains any adjustable configurations for tests. For information about how to configure OpenSearch Benchmark, see [Configuring OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/). * `data`: Contains all the data corpora and documents related to OpenSearch Benchmark's [official workloads](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/geonames). * `distributions`: Contains all the OpenSearch distributions downloaded from [OpenSearch.org](http://opensearch.org/) and used to provision clusters. -* `test_executions`: Contains all the test `execution_id`s from previous runs of OpenSearch Benchmark. +* `test_runs`: Contains all the test IDs (specified by `test_run_id`) from previous runs of OpenSearch Benchmark. * `workloads`: Contains all files related to workloads, except for the data corpora. * `logging.json`: Contains all of the configuration options related to how logging is performed within OpenSearch Benchmark. * `logs`: Contains all the logs from OpenSearch Benchmark runs. This can be helpful when you've encountered errors during runs. 
diff --git a/_benchmark/user-guide/running-workloads.md b/_benchmark/user-guide/running-workloads.md index 36108eb9c8..63570fb5cf 100644 --- a/_benchmark/user-guide/running-workloads.md +++ b/_benchmark/user-guide/running-workloads.md @@ -22,21 +22,21 @@ A list of all workloads supported by OpenSearch Benchmark appears. Review the li ## Step 2: Running the test -After you've selected the workload, you can invoke the workload using the `opensearch-benchmark execute-test` command. Replace `--target-host` with the `host:port` pairs for your cluster and `--client-options` with any authorization options required to access the cluster. The following example runs the `nyc_taxis` workload on a localhost for testing purposes. +After you've selected the workload, you can invoke the workload using the `opensearch-benchmark run` command. Replace `--target-host` with the `host:port` pairs for your cluster and `--client-options` with any authorization options required to access the cluster. The following example runs the `nyc_taxis` workload on a localhost for testing purposes. If you want to run a test on an external cluster, see [Running the workload on your own cluster](#running-a-workload-on-an-external-cluster). ```bash -opensearch-benchmark execute-test --pipeline=benchmark-only --workload=nyc_taxis --target-host=https://localhost:9200 --client-options=basic_auth_user:admin,basic_auth_password:admin,verify_certs:false +opensearch-benchmark run --pipeline=benchmark-only --workload=nyc_taxis --target-host=https://localhost:9200 --client-options=basic_auth_user:admin,basic_auth_password:admin,verify_certs:false ``` {% include copy.html %} -Results from the test appear in the directory set by the `--output-path` option in the `execute-test` command. +Results from the test appear in the directory set by the `--output-path` option in the `run` command. 
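For example, a sketch that combines the flags above with `--output-path` (the host, credentials, and output directory are placeholders for your own values):

```bash
opensearch-benchmark run \
  --pipeline=benchmark-only \
  --workload=nyc_taxis \
  --target-host=https://localhost:9200 \
  --client-options=basic_auth_user:admin,basic_auth_password:admin,verify_certs:false \
  --output-path=/tmp/osb-results
```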
### Test mode -If you want to run the test in test mode to make sure that your workload operates as intended, add the `--test-mode` option to the `execute-test` command. Test mode ingests only the first 1,000 documents from each index provided and runs query operations against them. +If you want to run the test in test mode to make sure that your workload operates as intended, add the `--test-mode` option to the `run` command. Test mode ingests only the first 1,000 documents from each index provided and runs query operations against them. ## Step 3: Validate the test @@ -163,6 +163,6 @@ Now that you're familiar with running OpenSearch Benchmark on a local cluster, y You can copy the following command template to use it in your own terminal: ```bash -opensearch-benchmark execute-test --pipeline=benchmark-only --workload=nyc_taxis --target-host= --client-options=basic_auth_user:admin,basic_auth_password:admin +opensearch-benchmark run --pipeline=benchmark-only --workload=nyc_taxis --target-host= --client-options=basic_auth_user:admin,basic_auth_password:admin ``` {% include copy.html %} diff --git a/_benchmark/user-guide/understanding-results.md b/_benchmark/user-guide/understanding-results.md index 5b8935a8c7..e1ebbd0c44 100644 --- a/_benchmark/user-guide/understanding-results.md +++ b/_benchmark/user-guide/understanding-results.md @@ -114,9 +114,9 @@ Metrics that are unique to the cluster begin at the `index` task line. The follo OpenSearch Benchmark results are stored in-memory or in external storage. -When stored in-memory, results can be found in the `/.benchmark/benchmarks/test_executions/` directory. Results are named in accordance with the `test_execution_id` of the most recent workload test. +When stored in-memory, results can be found in the `/.benchmark/benchmarks/test_runs/` directory. Results are named in accordance with the `test_run_id` of the most recent workload test. 
-While [running a test](https://opensearch.org/docs/latest/benchmark/reference/commands/execute-test/#general-settings), you can customize where the results are stored using any combination of the following command flags: +While [running a test](https://opensearch.org/docs/latest/benchmark/reference/commands/run/#general-settings), you can customize where the results are stored using any combination of the following command flags: * `--results-file`: When provided a file path, writes the summary report to the file indicated in the path. * `--results-format`: Defines the output format for the summary report results, either `markdown` or `csv`. Default is `markdown`. diff --git a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md index 3bf339e4d5..fc66eda985 100644 --- a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md +++ b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md @@ -13,8 +13,8 @@ All workloads contain the following files and directories: - [workload.json](#workloadjson): Contains all of the workload settings. - [index.json](#indexjson): Contains the document mappings and parameters as well as index settings. - [files.txt](#filestxt): Contains the data corpora file names. -- [_test-procedures](#_operations-and-_test-procedures): Most workloads contain only one default test procedure, which is configured in `default.json`. -- [_operations](#_operations-and-_test-procedures): Contains all of the operations used in test procedures. +- [_scenarios](#_operations-and-_scenarios): Most workloads contain only one default scenario, which is configured in `default.json`. +- [_operations](#_operations-and-_scenarios): Contains all of the operations used in scenarios. - workload.py: Adds more dynamic functionality to the test. 
## workload.json @@ -86,7 +86,7 @@ A workload usually includes the following elements: - [indices]({{site.url}}{{site.baseurl}}/benchmark/workloads/indices/): Defines the relevant indexes and index templates used for the workload. - [corpora]({{site.url}}{{site.baseurl}}/benchmark/workloads/corpora/): Defines all document corpora used for the workload. -- `schedule`: Defines operations and the order in which the operations run inline. Alternatively, you can use `operations` to group operations and the `test_procedures` parameter to specify the order of operations. +- `schedule`: Defines operations and the order in which the operations run inline. Alternatively, you can use `operations` to group operations and the `scenarios` parameter to specify the order of operations. - `operations`: **Optional**. Describes which operations are available for the workload and how they are parameterized. ### Indices @@ -268,11 +268,11 @@ When OpenSearch Benchmark creates an index for the workload, it uses the index s The `files.txt` file lists the files that store the workload data, which are typically stored in a zipped JSON file. -## _operations and _test-procedures +## _operations and _scenarios -To make the workload more human-readable, `_operations` and `_test-procedures` are separated into two directories. +To make the workload more human-readable, `_operations` and `_scenarios` are separated into two directories. -The `_operations` directory contains a `default.json` file that lists all of the supported operations that the test procedure can use. Some workloads, such as `nyc_taxis`, contain an additional `.json` file that lists feature-specific operations, such as `snapshot` operations. The following JSON example shows a list of operations from the `nyc_taxis` workload: +The `_operations` directory contains a `default.json` file that lists all of the supported operations that the scenario can use. 
Some workloads, such as `nyc_taxis`, contain an additional `.json` file that lists feature-specific operations, such as `snapshot` operations. The following JSON example shows a list of operations from the `nyc_taxis` workload: ```json { @@ -632,12 +632,12 @@ The `_operations` directory contains a `default.json` file that lists all of the } ``` -The `_test-procedures` directory contains a `default.json` file that sets the order of operations performed by the workload. Similar to the `_operations` directory, the `_test-procedures` directory can also contain feature-specific test procedures, such as `searchable_snapshots.json` for `nyc_taxis`. The following examples show the searchable snapshots test procedures for `nyc_taxis`: +The `_scenarios` directory contains a `default.json` file that sets the order of operations performed by the workload. Similar to the `_operations` directory, the `_scenarios` directory can also contain feature-specific scenarios, such as `searchable_snapshots.json` for `nyc_taxis`. The following examples show the searchable snapshots scenarios for `nyc_taxis`: ```json { "name": "searchable-snapshot", - "description": "Measuring performance for Searchable Snapshot feature. Based on the default test procedure 'append-no-conflicts'.", + "description": "Measuring performance for Searchable Snapshot feature. Based on the default scenario 'append-no-conflicts'.", "schedule": [ { "operation": "delete-index"