Merge pull request #263 from trz42/use_result_files_for_deployment
use result files for deployment
laraPPr authored Mar 27, 2024
2 parents 2d15e77 + 6beaa22 commit 9d924c5
Showing 8 changed files with 287 additions and 339 deletions.
71 changes: 18 additions & 53 deletions README.md
@@ -175,7 +175,9 @@ You can exit the virtual environment simply by running `deactivate`.

### <a name="step4.1"></a>Step 4.1: Installing tools to access S3 bucket

- The [`scripts/eessi-upload-to-staging`](https://github.com/EESSI/eessi-bot-software-layer/blob/main/scripts/eessi-upload-to-staging) script uploads a tarball and an associated metadata file to an S3 bucket.
+ The
+ [`scripts/eessi-upload-to-staging`](https://github.com/EESSI/eessi-bot-software-layer/blob/main/scripts/eessi-upload-to-staging)
+ script uploads an artefact and an associated metadata file to an S3 bucket.

It needs two tools for this:
* the `aws` command to actually upload the files;
@@ -444,14 +446,17 @@ information about the result of the command that was run (can be empty).

The `[deploycfg]` section defines settings for uploading built artefacts (tarballs).
```
- tarball_upload_script = PATH_TO_EESSI_BOT/scripts/eessi-upload-to-staging
+ artefact_upload_script = PATH_TO_EESSI_BOT/scripts/eessi-upload-to-staging
```
- `tarball_upload_script` provides the location for the script used for uploading built software packages to an S3 bucket.
+ `artefact_upload_script` provides the location for the script used for uploading built software packages to an S3 bucket.

```
endpoint_url = URL_TO_S3_SERVER
```
- `endpoint_url` provides an endpoint (URL) to a server hosting an S3 bucket. The server could be hosted by a commercial cloud provider like AWS or Azure, or running in a private environment, for example, using Minio. The bot uploads tarballs to the bucket which will be periodically scanned by the ingestion procedure at the Stratum 0 server.
+ `endpoint_url` provides an endpoint (URL) to a server hosting an S3 bucket. The
+ server could be hosted by a commercial cloud provider like AWS or Azure, or
+ running in a private environment, for example, using Minio. The bot uploads
+ artefacts to the bucket which will be periodically scanned by the ingestion procedure at the Stratum 0 server.


@@ -466,7 +471,7 @@
```ini
bucket_name = {
}
```

- `bucket_name` is the name of the bucket used for uploading of tarballs.
+ `bucket_name` is the name of the bucket used for uploading of artefacts.
The bucket must be available on the default server (`https://${bucket_name}.s3.amazonaws.com`), or the one provided via `endpoint_url`.

`bucket_name` can be specified as a string value to use the same bucket for all target repos, or it can be a mapping from target repo id to bucket name.
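
To make the two accepted forms concrete, here is a minimal sketch of reading such a `[deploycfg]` section with Python's `configparser`; the endpoint and bucket values below are invented for illustration, not taken from this PR.

```python
# Hypothetical sketch: reading [deploycfg]; all values are made-up examples.
import configparser
import json

cfg = configparser.ConfigParser()
cfg.read_string("""
[deploycfg]
artefact_upload_script = /path/to/eessi-bot/scripts/eessi-upload-to-staging
endpoint_url = https://minio.example.org:9000
bucket_name = { "eessi.io-2023.06": "software.eessi.io-2023.06" }
""")

raw = cfg["deploycfg"]["bucket_name"]
# bucket_name is either a plain string (one bucket for all target repos)
# or a JSON mapping from target repo id to bucket name
bucket = json.loads(raw) if raw.strip().startswith("{") else raw
print(bucket)  # {'eessi.io-2023.06': 'software.eessi.io-2023.06'}
```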
@@ -481,7 +486,7 @@ The `upload_policy` defines what policy is used for uploading built artefacts to the S3 bucket.
|`upload_policy` value|Policy|
|:--------|:--------------------------------|
|`all`|Upload all artefacts (multiple uploads of the same artefact possible).|
- |`latest`|For each build target (prefix in tarball name `eessi-VERSION-{software,init,compat}-OS-ARCH)` only upload the latest built artefact.|
+ |`latest`|For each build target (prefix in artefact name `eessi-VERSION-{software,init,compat}-OS-ARCH)` only upload the latest built artefact.|
|`once`|Upload any built artefact for the build target only once.|
|`none`|Do not upload any built artefacts.|
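
The decision logic implied by this table can be sketched as follows; this is an illustrative reading of the policy semantics, not the bot's actual implementation (which is not part of this diff).

```python
# Illustrative sketch of upload_policy semantics; not the bot's actual code.
def may_upload(policy, previously_uploaded):
    """previously_uploaded: artefacts already uploaded for this build target."""
    if policy == "none":
        return False  # never upload anything
    if policy == "all":
        return True   # upload every artefact, duplicates allowed
    if policy == "once":
        return not previously_uploaded  # only if nothing was uploaded yet
    if policy == "latest":
        return True   # upload, replacing older artefacts for this target
    raise ValueError(f"unknown upload_policy: {policy}")
```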

@@ -496,30 +501,30 @@ deployment), or a space delimited list of GitHub accounts.
no_deploy_permission_comment = Label `bot:deploy` has been set by user `{deploy_labeler}`, but this person does not have permission to trigger deployments
```
This defines a message that is added to the status table in a PR comment
- corresponding to a job whose tarball should have been uploaded (e.g., after
+ corresponding to a job whose artefact should have been uploaded (e.g., after
setting the `bot:deploy` label).
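
Like the other comment templates, this one presumably gets its placeholder filled in with Python's `str.format`; a small illustration with a made-up user name:

```python
# Illustration: filling the no_deploy_permission_comment template.
template = ("Label `bot:deploy` has been set by user `{deploy_labeler}`, but "
            "this person does not have permission to trigger deployments")
print(template.format(deploy_labeler="some-user"))
```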


```
metadata_prefix = LOCATION_WHERE_METADATA_FILE_GETS_DEPOSITED
- tarball_prefix = LOCATION_WHERE_TARBALL_GETS_DEPOSITED
+ artefact_prefix = LOCATION_WHERE_TARBALL_GETS_DEPOSITED
```

These two settings are used to define where (which directory) in the S3 bucket
- (see `bucket_name` above) the metadata file and the tarball will be stored. The
+ (see `bucket_name` above) the metadata file and the artefact will be stored. The
value `LOCATION...` can be a string value to always use the same 'prefix'
regardless of the target CVMFS repository, or can be a mapping of a target
repository id (see also `repo_target_map` below) to a prefix.

The prefix itself can use some (environment) variables that are set within
- the upload script (see `tarball_upload_script` above). Currently those are:
+ the upload script (see `artefact_upload_script` above). Currently those are:
* `'${github_repository}'` (which would be expanded to the full name of the GitHub
repository, e.g., `EESSI/software-layer`),
* `'${legacy_aws_path}'` (which expands to the legacy/old prefix being used for
- storing tarballs/metadata files, the old prefix is
+ storing artefacts/metadata files, the old prefix is
`EESSI_VERSION/TARBALL_TYPE/OS_TYPE/CPU_ARCHITECTURE/TIMESTAMP/`), _and_
* `'${pull_request_number}'` (which would be expanded to the number of the pull
- request from which the tarball originates).
+ request from which the artefact originates).
Note, it's important to single-quote (`'`) the variables as shown above, because
they will likely not be defined when the bot calls the upload script.
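
The upload script expands these variables itself (hence the single quotes); the effect can be mimicked in Python with `string.Template`, using made-up example values:

```python
# Sketch: expanding prefix variables the way the upload script would.
from string import Template

artefact_prefix = "new/${github_repository}/${pull_request_number}"
print(Template(artefact_prefix).substitute(
    github_repository="EESSI/software-layer",  # example value
    pull_request_number="42",                  # example value
))  # prints: new/EESSI/software-layer/42
```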

@@ -529,7 +534,7 @@ The list of supported variables can be shown by running
`scripts/eessi-upload-to-staging --list-variables`.
**Examples:**
```
metadata_prefix = {"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
- tarball_prefix = {
+ artefact_prefix = {
"eessi-pilot-2023.06": "",
"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"
}
```

@@ -656,46 +661,6 @@ running_job = job `{job_id}` is running
#### `[finished_job_comments]` section

The `[finished_job_comments]` section sets templates for messages about finished jobs.
- ```
- success = :grin: SUCCESS tarball `{tarball_name}` ({tarball_size} GiB) in job dir
- ```
- `success` specifies the message for a successful job that produced a tarball.
-
- ```
- failure = :cry: FAILURE
- ```
- `failure` specifies the message for a failed job.
-
- ```
- no_slurm_out = No slurm output `{slurm_out}` in job dir
- ```
- `no_slurm_out` specifies the message for missing Slurm output file.
-
- ```
- slurm_out = Found slurm output `{slurm_out}` in job dir
- ```
- `slurm_out` specifies the message for found Slurm output file.
-
- ```
- missing_modules = Slurm output lacks message "No missing modules!".
- ```
- `missing_modules` is used to signal the lack of a message that all modules were built.
-
- ```
- no_tarball_message = Slurm output lacks message about created tarball.
- ```
- `no_tarball_message` is used to signal the lack of a message about a created tarball.
-
- ```
- no_matching_tarball = No tarball matching `{tarball_pattern}` found in job dir.
- ```
- `no_matching_tarball` is used to signal a missing tarball.
-
- ```
- multiple_tarballs = Found {num_tarballs} tarballs in job dir - only 1 matching `{tarball_pattern}` expected.
- ```
- `multiple_tarballs` is used to report that multiple tarballs have been found.

```
job_result_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for details)_</summary><ul><li>Job results file `{filename}` does not exist in job directory, or parsing it failed.</li><li>No artefacts were found/reported.</li></ul></details>
```
22 changes: 7 additions & 15 deletions app.cfg.example
@@ -127,7 +127,7 @@ no_build_permission_comment = Label `bot:build` has been set by user `{build_lab

[deploycfg]
# script for uploading built software packages
- tarball_upload_script = PATH_TO_EESSI_BOT/scripts/eessi-upload-to-staging
+ artefact_upload_script = PATH_TO_EESSI_BOT/scripts/eessi-upload-to-staging

# URL to S3/minio bucket
# if attribute is set, bucket_base will be constructed as follows
@@ -160,11 +160,11 @@ upload_policy = once
# value can be a space delimited list of GH accounts
deploy_permission =

- # template for comment when user who set a label has no permission to trigger deploying tarballs
+ # template for comment when user who set a label has no permission to trigger deploying artefacts
no_deploy_permission_comment = Label `bot:deploy` has been set by user `{deploy_labeler}`, but this person does not have permission to trigger deployments

# settings for where (directory) in the S3 bucket to store the metadata file and
- # the tarball
+ # the artefact
# - Can be a string value to always use the same 'prefix' regardless of the target
# CVMFS repository, or can be a mapping of a target repository id (see also
# repo_target_map) to a prefix.
@@ -173,17 +173,17 @@ no_deploy_permission_comment = Label `bot:deploy` has been set by user `{deploy_
# * 'github_repository' (which would be expanded to the full name of the GitHub
# repository, e.g., 'EESSI/software-layer'),
# * 'legacy_aws_path' (which expands to the legacy/old prefix being used for
- # storing tarballs/metadata files) and
+ # storing artefacts/metadata files) and
# * 'pull_request_number' (which would be expanded to the number of the pull
- # request from which the tarball originates).
+ # request from which the artefact originates).
# - The list of supported variables can be shown by running
# `scripts/eessi-upload-to-staging --list-variables`.
# - Examples:
# metadata_prefix = {"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
- # tarball_prefix = {"eessi-pilot-2023.06": "", "eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
+ # artefact_prefix = {"eessi-pilot-2023.06": "", "eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
# If left empty, the old/legacy prefix is being used.
metadata_prefix =
- tarball_prefix =
+ artefact_prefix =


[architecturetargets]
@@ -247,14 +247,6 @@ running_job = job `{job_id}` is running


[finished_job_comments]
- success = :grin: SUCCESS tarball `{tarball_name}` ({tarball_size} GiB) in job dir
- failure = :cry: FAILURE
- no_slurm_out = No slurm output `{slurm_out}` in job dir
- slurm_out = Found slurm output `{slurm_out}` in job dir
- missing_modules = Slurm output lacks message "No missing modules!".
- no_tarball_message = Slurm output lacks message about created tarball.
- no_matching_tarball = No tarball matching `{tarball_pattern}` found in job dir.
- multiple_tarballs = Found {num_tarballs} tarballs in job dir - only 1 matching `{tarball_pattern}` expected.
job_result_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for detailed information)_</summary><ul><li>Job results file `{filename}` does not exist in job directory, or parsing it failed.</li><li>No artefacts were found/reported.</li></ul></details>
job_test_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for detailed information)_</summary><ul><li>Job test file `{filename}` does not exist in job directory, or parsing it failed.</li></ul></details>

76 changes: 19 additions & 57 deletions eessi_bot_job_manager.py
@@ -43,35 +43,23 @@

# Local application imports (anything from EESSI/eessi-bot-software-layer)
from connections import github
- from tools import config, run_cmd
+ from tools import config, job_metadata, run_cmd
from tools.args import job_manager_parse
- from tools.job_metadata import read_job_metadata_from_file, read_metadata_file
from tools.pr_comments import get_submitted_job_comment, update_comment


AWAITS_LAUNCH = "awaits_launch"
- FAILURE = "failure"
FINISHED_JOB_COMMENTS = "finished_job_comments"
JOB_RESULT_COMMENT_DESCRIPTION = "comment_description"
JOB_RESULT_UNKNOWN_FMT = "job_result_unknown_fmt"
+ JOB_TEST_COMMENT_DESCRIPTION = "comment_description"
+ JOB_TEST_UNKNOWN_FMT = "job_test_unknown_fmt"
- MISSING_MODULES = "missing_modules"
- MULTIPLE_TARBALLS = "multiple_tarballs"
NEW_JOB_COMMENTS = "new_job_comments"
- NO_MATCHING_TARBALL = "no_matching_tarball"
- NO_SLURM_OUT = "no_slurm_out"
- NO_TARBALL_MESSAGE = "no_tarball_message"
RUNNING_JOB = "running_job"
RUNNING_JOB_COMMENTS = "running_job_comments"
- SLURM_OUT = "slurm_out"
- SUCCESS = "success"

REQUIRED_CONFIG = {
- FINISHED_JOB_COMMENTS: [FAILURE, JOB_RESULT_UNKNOWN_FMT, MISSING_MODULES,
- MULTIPLE_TARBALLS, NO_MATCHING_TARBALL,
- NO_SLURM_OUT, NO_TARBALL_MESSAGE, SLURM_OUT,
- SUCCESS],
+ FINISHED_JOB_COMMENTS: [JOB_RESULT_UNKNOWN_FMT, JOB_TEST_UNKNOWN_FMT],
NEW_JOB_COMMENTS: [AWAITS_LAUNCH],
RUNNING_JOB_COMMENTS: [RUNNING_JOB]
}
@@ -254,42 +242,6 @@ def determine_finished_jobs(self, known_jobs, current_jobs):

return finished_jobs

- def read_job_result(self, job_result_file_path):
- """
- Read job result file and return the contents of the 'RESULT' section.
- Args:
- job_result_file_path (string): path to job result file
- Returns:
- (ConfigParser): instance of ConfigParser corresponding to the
- 'RESULT' section or None
- """
- # reuse function from module tools.job_metadata to read metadata file
- result = read_metadata_file(job_result_file_path, self.logfile)
- if result and "RESULT" in result:
- return result["RESULT"]
- else:
- return None
-
- def read_job_test(self, job_test_file_path):
- """
- Read job test file and return the contents of the 'TEST' section.
- Args:
- job_test_file_path (string): path to job test file
- Returns:
- (ConfigParser): instance of ConfigParser corresponding to the
- 'TEST' section or None
- """
- # reuse function from module tools.job_metadata to read metadata file
- test = read_metadata_file(job_test_file_path, self.logfile)
- if test and "TEST" in test:
- return test["TEST"]
- else:
- return None

def process_new_job(self, new_job):
"""
Process a new job by verifying that it is a bot job and if so
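
The two `read_job_*` helpers removed above are folded into a single, more general function in `tools.job_metadata`. Its implementation is not shown in this diff; a minimal sketch that is consistent with how it is called below, and with the section names used by the removed helpers, could look like this:

```python
# Hypothetical sketch of tools.job_metadata.get_section_from_file; the real
# implementation is not part of this diff, and the constant values are assumed.
import configparser
import os

JOB_PR_SECTION = "PR"          # assumed value
JOB_RESULT_SECTION = "RESULT"  # matches the removed read_job_result
JOB_TEST_SECTION = "TEST"      # matches the removed read_job_test

def get_section_from_file(file_path, section, logfile=None):
    """Read an INI-style job metadata file and return one section, or None."""
    if not os.path.isfile(file_path):
        return None
    metadata = configparser.ConfigParser()
    try:
        metadata.read(file_path)
    except configparser.Error:
        return None
    return metadata[section] if section in metadata else None
```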
@@ -335,7 +287,9 @@

# assuming that a bot job's working directory contains a metadata
# file, its existence is used to check if the job belongs to the bot
- metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)
+ metadata_pr = job_metadata.get_section_from_file(job_metadata_path,
+ job_metadata.JOB_PR_SECTION,
+ self.logfile)

if metadata_pr is None:
log(f"No metadata file found at {job_metadata_path} for job {job_id}, so skipping it",
@@ -431,7 +385,9 @@ def process_running_jobs(self, running_job):
job_metadata_path = os.path.join(job_dir, metadata_file)

# check if metadata file exist
- metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)
+ metadata_pr = job_metadata.get_section_from_file(job_metadata_path,
+ job_metadata.JOB_PR_SECTION,
+ self.logfile)
if metadata_pr is None:
raise Exception("Unable to find metadata file")

@@ -525,11 +481,13 @@ def process_finished_job(self, finished_job):
# check if _bot_jobJOBID.result exists
job_result_file = f"_bot_job{job_id}.result"
job_result_file_path = os.path.join(new_symlink, job_result_file)
- job_results = self.read_job_result(job_result_file_path)
+ job_results = job_metadata.get_section_from_file(job_result_file_path,
+ job_metadata.JOB_RESULT_SECTION,
+ self.logfile)

job_result_unknown_fmt = finished_job_comments_cfg[JOB_RESULT_UNKNOWN_FMT]
# set fallback comment_description in case no result file was found
- # (self.read_job_result returned None)
+ # (job_metadata.get_section_from_file returned None)
comment_description = job_result_unknown_fmt.format(filename=job_result_file)
if job_results:
# get preformatted comment_description or use previously set default for unknown
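
The job result file read here is an INI-style file whose `[RESULT]` section may carry a preformatted `comment_description`; the job test file handled below follows the same pattern with a `[TEST]` section. A hypothetical example of such a file (only `comment_description` is referenced by the code above; the other keys are illustrative):

```ini
# hypothetical contents of _bot_job12345.result
[RESULT]
comment_description = :grin: SUCCESS (see details above)
status = SUCCESS
artefact = eessi-2023.06-software-linux-x86_64.tar.gz
```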
@@ -552,11 +510,13 @@
# --> bot/test.sh and bot/check-test.sh scripts are run in job script used by bot for 'build' action
job_test_file = f"_bot_job{job_id}.test"
job_test_file_path = os.path.join(new_symlink, job_test_file)
- job_tests = self.read_job_test(job_test_file_path)
+ job_tests = job_metadata.get_section_from_file(job_test_file_path,
+ job_metadata.JOB_TEST_SECTION,
+ self.logfile)

job_test_unknown_fmt = finished_job_comments_cfg[JOB_TEST_UNKNOWN_FMT]
# set fallback comment_description in case no test file was found
- # (self.read_job_result returned None)
+ # (job_metadata.get_section_from_file returned None)
comment_description = job_test_unknown_fmt.format(filename=job_test_file)
if job_tests:
# get preformatted comment_description or use previously set default for unknown
@@ -576,7 +536,9 @@
# obtain id of PR comment to be updated (from file '_bot_jobID.metadata')
metadata_file = f"_bot_job{job_id}.metadata"
job_metadata_path = os.path.join(new_symlink, metadata_file)
- metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)
+ metadata_pr = job_metadata.get_section_from_file(job_metadata_path,
+ job_metadata.JOB_PR_SECTION,
+ self.logfile)
if metadata_pr is None:
raise Exception("Unable to find metadata file ... skip updating PR comment")
