feat(ISV-5130): add pipeline steps for SBOM upload #627
base: development
Conversation
Hi @jedinym. Thanks for your PR. I'm waiting for a konflux-ci member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from e03c642 to 96b9878
Commits (each signed off by Martin Jediny <[email protected]>):
* feat(ISV-5130): use SBOM filenames as Atlas IDs
* feat(ISV-5130): create task to collect Atlas params
* fix(ISV-5130): unify variable casing
* feat(ISV-5130): upload product SBOMs
* fix(ISV-5130): fix shellcheck and yamllint
* feat(ISV-5130): remove skipping option
* test(ISV-5130): use test image with fixed bug
* fix(ISV-513): do not decode secrets
* fix(ISV-5130): fixup bash script
* fix(ISV-5130): fix bombastic API call
* fix(ISV-5130): remove dead code
* fix(ISV-5130): use utils image with fixed script
pipelines/rh-advisories/README.md (outdated)
## Changes in 2.0.0
* Create and upload product-level SBOMs to the Atlas release instance
* Update and upload component-level SBOMs to the Atlas release instance
* The "atlas" field in the RPA specifying which instance to use is mandatory
This probably doesn't need to be here. The pipeline changelog should above all say what changed in the pipeline, which is that the three tasks were added, and then perhaps briefly say what they do.
atlasServer=$(jq -r '.atlas.server' "$DATA_FILE")
if [ "$atlasServer" = "null" ]; then
  echo "ERROR: The JSON file does not contain the 'atlas.server' field."
Would it make sense to have a default value for this? Or maybe have this whole thing optional? I'm just worried about the transition to enable this. If it's really gonna be mandatory, is everything ready now? Do all the RPAs have this defined now? Are the secrets ready on all clusters? Because some users may be using the development branch.
This was my worry as well, but I was guided by this discussion to implement it like this. Per rbean, the secrets are in the prod rh01, prod p01, and prod p02 clusters.
Yeah, ok, the secrets are there, but no RPAs have the required settings yet, right? So that should be done first, otherwise this will break users.
Sure, I am not so well versed in this, that's why I opened the discussion before. What do you think needs to be done now?
Now I noticed that there is `updateComponentSBOM` and it defaults to false. So ok, without any change to the RPA, this will not happen. But in that case `.atlas.server` should definitely not be required to be in the data file. If you don't want to have a default for the server, then the reading of the server (and throwing an error if missing) should only happen if `updateComponentSBOM` is true.

Also, I noticed that `updateComponentSBOM` is emitted as a result, but that result is not used for anything. So what is it for? The other two tasks should probably have a `when:` clause and be executed only if the `updateComponentSBOM` result is true, no?
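For reference, a `when:` guard on a task result would look roughly like this (task and result names are taken from this discussion; this is a sketch, not the PR's actual pipeline YAML):

```yaml
# Hypothetical pipeline task entry: run only when the collect-atlas-params
# task emitted updateComponentSBOM=true as a result.
- name: upload-product-sbom
  taskRef:
    name: upload-sbom-to-atlas
  when:
    - input: "$(tasks.collect-atlas-params.results.updateComponentSBOM)"
      operator: in
      values: ["true"]
```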
The smoothest solution probably is:
- The Atlas field in the RPA is fully optional.
- Only enable product-level SBOM creation and push when the Atlas field is present.
- Only enable component-level SBOM update when the Atlas field is present and the enabling flag is set to true.
What do you think?
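A minimal sketch of that conditional handling, assuming the same jq-based read the task uses (the demo data file here is a hypothetical stand-in for the workspace `data.json`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for the workspace data.json read by the task.
DATA_FILE=$(mktemp)
printf '{"atlas": {"server": "stage"}}' > "$DATA_FILE"

# jq's // operator yields the fallback when the field is absent or null,
# so a missing .atlas section no longer has to be a hard error.
atlasServer=$(jq -r '.atlas.server // "none"' "$DATA_FILE")

if [ "$atlasServer" = "none" ]; then
    echo "No Atlas config in RPA - skipping SBOM processing"
else
    echo "Using Atlas server: $atlasServer"
fi
rm -f "$DATA_FILE"
```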
The `updateComponentSBOM` result flag will be used with the #656 changes once that PR is merged.
I wasn't aware there were two different things, product-level SBOMs and component-level SBOMs. I think both of them should be treated equally, no? But it seems this PR only adds tasks to create and upload the product SBOM, not component-level SBOMs.
In the latest commit, I made the SBOM processing optional when the Atlas fields are not provided in the RPA.
Yeah, that looks good.
Yeah, it looks ok. But we don't have that other update task yet, so I will have to review again when that gets added.
Sorry if it was confusing. Merging #656 is taking kind of long, so I thought I would expedite the process here by first putting the product SBOMs up for review and then the component stuff. In hindsight I probably should have created two PRs.
No worries. One reason it takes longer is that we had our face-to-face meeting last week, so my responsiveness was limited.
@mmalina In the latest commit, I made the SBOM processing optional when the Atlas fields are not provided in the RPA.
pipelines/rh-advisories/README.md (outdated)
* Add create-product-sbom task to create product-level SBOMs.
* Add update-component-sbom task to update component-level SBOMs with release
  info.
* Add two upload-sbom-to-atlas tasks to upload the product and updated
What does this mean? In the task update, I only see one upload task being added: upload-product-sbom
OK, that's a very important piece of information.
Component-level SBOM upload will be added when #656 is merged.
The way I read this was that once that PR is merged, the updated component SBOMs will be added in a subsequent PR. In that case this is really a draft, definitely not ready to be merged.
Makes sense, I will update the description.
Can you say the task names instead of "two upload-sbom-to-atlas tasks"
The three entries above this one are a lot clearer to me
Let's continue in this thread: #627 (comment)
tasks/collect-atlas-params/README.md (outdated)
# collect-atlas-params

Tekton task that collects the Atlas server option from the data file. Based on
the value of the "atlasServer" field ("stage" or "production"), outputs values
I think this was changed to `atlas.server`.
Can you say "results" instead of "values"?
- name: "UPDATE_COMPONENT_SBOM"
  value: '$(params.updateComponentSBOM)'
script: |
  #!/usr/bin/env sh
It would be better to use bash here like you do in the task (we try to keep the tasks and their tests in sync with each other)
- name: data
  workspace: release-workspace
runAfter:
  - check-data-keys
We aren't passing `atlas` to the check-data-keys systems param, so I am not sure this task should rely on that one at all. Maybe another task is more fitting here?
pipelines/rh-advisories/README.md (outdated)
* Add create-product-sbom task to create product-level SBOMs.
* Add update-component-sbom task to update component-level SBOMs with release
  info.
* Add two upload-sbom-to-atlas tasks to upload the product and updated
Can you say the task names instead of "two upload-sbom-to-atlas tasks"
The three entries above this one are a lot clearer to me
- name: data
  workspace: release-workspace
runAfter:
  - check-data-keys
same comment about this line
output_path=$(workspaces.data.path)/${subdir_sbom_path}
mkdir -p "$(dirname "$output_path")"
cp "$tmp_sbom" "$output_path"
Any reason to not just determine `output_path` first and then use that for `--output-path` in `create_product_sbom`, and not call `mktemp` or the `cp`?
This is done this way because the filename in the output path depends on the contents of the SBOM (the name and version of the product), and I wanted to avoid the complexity of duplicating the data extraction from the data.json. I added a comment for clarity and used `mv` to make it more obvious.
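The pattern being described can be sketched like this, under the assumption that the destination filename is derived from the SBOM contents (the workspace path, the product JSON, and the derived name here are all hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical workspace dir standing in for $(workspaces.data.path).
workspace_dir=$(mktemp -d)

# The tool writes the SBOM to a temp path first, because the final
# filename depends on the SBOM contents (product name and version).
tmp_sbom=$(mktemp)
printf '{"name": "my-product", "version": "1.0"}' > "$tmp_sbom"

# Derive the destination filename from the SBOM itself.
product=$(jq -r '"\(.name)-\(.version)"' "$tmp_sbom")
output_path="$workspace_dir/sboms/${product}.json"

# mv (rather than cp) makes it obvious the temp file is consumed.
mkdir -p "$(dirname "$output_path")"
mv "$tmp_sbom" "$output_path"
echo "Wrote $output_path"
```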
tasks/upload-sbom-to-atlas/README.md (outdated)
| supportedCycloneDxVersion | If the SBOM uses a higher CycloneDX version, `syft convert` in the task will convert all SBOMs to this CycloneDX version before uploading them to Atlas. If the SBOM is already in this version or lower, it will be uploaded as is. | Yes | 1.4 |
| supportedSpdxVersion | If the SBOM uses a higher SPDX version, `syft convert` in the task will convert all SBOMs to this SPDX version before uploading them to Atlas. If the SBOM is already in this version or lower, it will be uploaded as is. | Yes | 2.3 |

## Changes in 0.3.0
There was never a 0.2.0. Why not just make this 0.2.0 with both changes in it and not falsely list what existed in 0.2.0 and call this 0.3.0?
Just an oversight, I didn't realize this was still a single PR.
@@ -4,20 +4,19 @@ kind: Task
 metadata:
   name: upload-sbom-to-atlas
   labels:
-    app.kubernetes.io/version: "0.1.0"
+    app.kubernetes.io/version: "0.3.0"
As mentioned in the README, I think 0.2.0 makes more sense. It is okay to have multiple changes in one new version
This PR extends the `rh-advisories` and `rh-push-to-registry-redhat-io` pipelines to create and upload product-level SBOMs. Component-level SBOM upload commits will be added to this PR when #656 is merged.

Product-level SBOMs have been tested in Konflux with the help of @scoheb.

A new Tekton task (`collect-atlas-params`) is added to generate Atlas parameters from the Atlas field in RPAs. If no Atlas-related data is provided in the RPA, the SBOM processing is skipped.
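For context, the optional Atlas section of an RPA's data might look like the following. The field layout is inferred from the `.atlas.server` read discussed above; the apiVersion and values are assumptions, not taken from the PR:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: ReleasePlanAdmission
spec:
  data:
    atlas:
      server: stage   # or "production"; omit the atlas section entirely to skip SBOM processing
```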