[performance] support triggering subset of journeys against Kibana PR in CI (#193175)

## Summary

It’s a common request from dev teams to run specific journeys on a PR to
compare performance metrics against the `main` branch. These requests
usually focus on a particular area, such as the Dashboard or Discover
app.

To streamline the process, this PR groups relevant journeys into
categories that can be triggered through an environment variable. For
example, setting `JOURNEYS_GROUP=dashboard` will execute only the three
dashboard-specific journeys, which are (usually) sufficient for
evaluating the performance impact of code changes within the Dashboard
app.

Current Process for Triggering Performance Builds:
- Create a new kibana-single-user-performance [build](https://buildkite.com/elastic/kibana-single-user-performance#new)
- Provide the following arguments:
  - Branch: `refs/pull/<PR_number>/head`
  - Under Options, set the environment variable: `JOURNEYS_GROUP=<group_name>`

Currently supported journey groups:
- kibanaStartAndLoad
- crud
- dashboard
- discover
- maps
- ml

[Build example](https://buildkite.com/elastic/kibana-single-user-performance/builds/14427)
Each group focuses on a specific set of journeys tied to its respective
area in Kibana, allowing for more targeted performance testing. Since
running a group takes ~5-10 minutes on the bare-metal worker, it should
not delay the regular (every 3 h) runs against the `main` branch.


Test locally with `node scripts/run_performance.js --group <group_name>`.
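
For example (a minimal sketch; the `dashboard` group name comes from the list above, and the install path is illustrative):

```bash
# Run only the dashboard journey group from a local Kibana checkout (from source)
node scripts/run_performance.js --group dashboard

# Or run the same group against a pre-built Kibana install
node scripts/run_performance.js --kibana-install-dir /path/to/kibana-build --group dashboard
```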
dmlemeshko authored Sep 18, 2024
1 parent 36bd641 commit f5975d2
Showing 3 changed files with 87 additions and 21 deletions.
9 changes: 7 additions & 2 deletions .buildkite/scripts/steps/functional/performance_playwright.sh
@@ -39,8 +39,13 @@ if [ "$BUILDKITE_PIPELINE_SLUG" == "kibana-performance-data-set-extraction" ]; t
   node scripts/run_performance.js --kibana-install-dir "$KIBANA_BUILD_LOCATION" --skip-warmup
 else
   # pipeline should use bare metal static worker
-  echo "--- Running performance tests"
-  node scripts/run_performance.js --kibana-install-dir "$KIBANA_BUILD_LOCATION"
+  if [[ -z "${JOURNEYS_GROUP+x}" ]]; then
+    echo "--- Running performance tests"
+    node scripts/run_performance.js --kibana-install-dir "$KIBANA_BUILD_LOCATION"
+  else
+    echo "--- Running performance tests: '$JOURNEYS_GROUP' group"
+    node scripts/run_performance.js --kibana-install-dir "$KIBANA_BUILD_LOCATION" --group "$JOURNEYS_GROUP"
+  fi
 fi
 
 echo "--- Upload journey step screenshots"
21 changes: 21 additions & 0 deletions dev_docs/tutorials/performance/adding_performance_journey.mdx
@@ -89,6 +89,27 @@ simulate real life internet connection. This means that all requests have a fixe
In order to keep track of performance metric stability, journeys are run on the main branch at a scheduled interval.
A bare-metal machine is used to produce results that are as stable and reproducible as possible.

#### Running a subset of journeys for a PR

Some code changes might affect Kibana performance, and it can be beneficial to run the relevant journeys against the PR
and compare performance metrics with those on the `main` branch.

In order to trigger the build for a Kibana PR, follow these steps:

- Create a new kibana-single-user-performance [build](https://buildkite.com/elastic/kibana-single-user-performance#new)
- Provide the following arguments:
- Branch: `refs/pull/<PR_number>/head`
- Under Options, set the environment variable: `JOURNEYS_GROUP=<group_name>`

Currently supported journey groups:

- kibanaStartAndLoad
- crud
- dashboard
- discover
- maps
- ml
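
Before opening a Buildkite build, the same group can be verified locally from a Kibana checkout. A minimal sketch, assuming the `dashboard` group listed above:

```bash
# Run only the dashboard journeys locally
node scripts/run_performance.js --group dashboard
```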

#### Machine specifications

All benchmarks are run on bare-metal machines with the [following specifications](https://www.hetzner.com/dedicated-rootserver/ex100):
78 changes: 59 additions & 19 deletions src/dev/performance/run_performance_cli.ts
@@ -35,6 +35,19 @@ interface TestRunProps extends EsRunProps {
  kibanaInstallDir: string | undefined;
}

interface JourneyTargetGroups {
  [key: string]: string[];
}

const journeyTargetGroups: JourneyTargetGroups = {
  kibanaStartAndLoad: ['login'],
  crud: ['tags_listing_page', 'dashboard_listing_page'],
  dashboard: ['ecommerce_dashboard', 'data_stress_test_lens', 'flight_dashboard'],
  discover: ['many_fields_discover', 'many_fields_discover_esql'],
  maps: ['ecommerce_dashboard_map_only'],
  ml: ['aiops_log_rate_analysis', 'many_fields_transform', 'tsdb_logs_data_visualizer'],
};

const readFilesRecursively = (dir: string, callback: Function) => {
  const files = fs.readdirSync(dir);
  files.forEach((file) => {
@@ -48,6 +61,44 @@ const readFilesRecursively = (dir: string, callback: Function) => {
  });
};

const getAllJourneys = (dir: string) => {
  const journeys: Journey[] = [];

  readFilesRecursively(dir, (filePath: string) =>
    journeys.push({
      name: path.parse(filePath).name,
      path: path.resolve(dir, filePath),
    })
  );

  return journeys;
};

const getJourneysToRun = ({ journeyPath, group }: { journeyPath?: string; group?: string }) => {
  if (group && typeof group === 'string') {
    if (!(group in journeyTargetGroups)) {
      throw createFlagError(`Group '${group}' is not defined, try again`);
    }

    const fileNames = journeyTargetGroups[group];
    const dir = path.resolve(REPO_ROOT, JOURNEY_BASE_PATH);

    return getAllJourneys(dir).filter((journey) => fileNames.includes(journey.name));
  }

  if (journeyPath && !fs.existsSync(journeyPath)) {
    throw createFlagError('--journey-path must be an existing path');
  }

  if (journeyPath && fs.statSync(journeyPath).isFile()) {
    return [{ name: path.parse(journeyPath).name, path: journeyPath }];
  } else {
    // default dir is x-pack/performance/journeys_e2e
    const dir = journeyPath ?? path.resolve(REPO_ROOT, JOURNEY_BASE_PATH);
    return getAllJourneys(dir);
  }
};

async function startEs(props: EsRunProps) {
  const { procRunner, log, logsDir } = props;
  await procRunner.run('es', {
@@ -115,29 +166,17 @@ run(
     const skipWarmup = flagsReader.boolean('skip-warmup');
     const kibanaInstallDir = flagsReader.path('kibana-install-dir');
     const journeyPath = flagsReader.path('journey-path');
+    const group = flagsReader.string('group');
 
-    if (kibanaInstallDir && !fs.existsSync(kibanaInstallDir)) {
-      throw createFlagError('--kibana-install-dir must be an existing directory');
+    if (group && journeyPath) {
+      throw createFlagError('--group and --journeyPath cannot be used simultaneously');
     }
 
-    if (journeyPath && !fs.existsSync(journeyPath)) {
-      throw createFlagError('--journey-path must be an existing path');
+    if (kibanaInstallDir && !fs.existsSync(kibanaInstallDir)) {
+      throw createFlagError('--kibana-install-dir must be an existing directory');
     }
 
-    const journeys: Journey[] = [];
-
-    if (journeyPath && fs.statSync(journeyPath).isFile()) {
-      journeys.push({ name: path.parse(journeyPath).name, path: journeyPath });
-    } else {
-      // default dir is x-pack/performance/journeys_e2e
-      const dir = journeyPath ?? path.resolve(REPO_ROOT, JOURNEY_BASE_PATH);
-      readFilesRecursively(dir, (filePath: string) =>
-        journeys.push({
-          name: path.parse(filePath).name,
-          path: path.resolve(dir, filePath),
-        })
-      );
-    }
+    const journeys = getJourneysToRun({ journeyPath, group });
 
     if (journeys.length === 0) {
       throw new Error('No journeys found');
@@ -191,13 +230,14 @@ run(
   },
   {
     flags: {
-      string: ['kibana-install-dir', 'journey-path'],
+      string: ['kibana-install-dir', 'journey-path', 'group'],
       boolean: ['skip-warmup'],
       help: `
       --kibana-install-dir=dir   Run Kibana from existing install directory instead of from source
       --journey-path=path        Define path to performance journey or directory with multiple journeys
                                  that should be executed. '${JOURNEY_BASE_PATH}' is run by default
       --skip-warmup              Journey will be executed without warmup (TEST phase only)
+      --group                    Run subset of journeys, defined in the specified group
       `,
     },
   }
