
feat(ISV-5130): add pipeline steps for SBOM upload #627

Draft
jedinym wants to merge 14 commits into development from the ISV-5130 branch
Conversation


@jedinym jedinym commented Oct 21, 2024

This PR extends the rh-advisories and rh-push-to-registry-redhat-io pipelines to create and upload product-level SBOMs. Component-level SBOM upload commits will be added to this PR when #656 is merged.

Product-level SBOMs have been tested in Konflux with the help of @scoheb.

A new Tekton task (collect-atlas-params) is added to generate Atlas parameters from the Atlas field in the RPA. If no Atlas-related data is provided in the RPA, the SBOM processing is skipped.
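The behaviour described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual task implementation; the result values, URLs, and function name are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the collect-atlas-params decision logic: map the
# .atlas.server value from the RPA data file to an Atlas endpoint, or
# signal that SBOM processing should be skipped when the field is absent.
# The server names and URLs below are illustrative assumptions only.
collect_atlas_params() {
  server="$1"   # value of .atlas.server, or empty/null if the field is absent
  case "$server" in
    "" | null)
      echo "skip"                               # no Atlas data: skip SBOM processing
      ;;
    stage)
      echo "https://atlas.stage.example.com"    # assumed stage URL
      ;;
    production)
      echo "https://atlas.example.com"          # assumed production URL
      ;;
    *)
      echo "error: unknown Atlas server '$server'" >&2
      return 1
      ;;
  esac
}
```

In the real task the raw value would come from something like `jq -r '.atlas.server' "$DATA_FILE"`.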


openshift-ci bot commented Oct 21, 2024

Hi @jedinym. Thanks for your PR.

I'm waiting for a konflux-ci member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jedinym jedinym force-pushed the ISV-5130 branch 2 times, most recently from e03c642 to 96b9878 on October 21, 2024 07:22
@jedinym jedinym changed the title WIP: Add pipeline steps for SBOM upload wip(ISV-5130): Add pipeline steps for SBOM upload Oct 21, 2024
Signed-off-by: Martin Jediny <[email protected]>

feat(ISV-5130): use SBOM filenames as Atlas IDs

Signed-off-by: Martin Jediny <[email protected]>

feat(ISV-5130): create task to collect Atlas params

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): unify variable casing

Signed-off-by: Martin Jediny <[email protected]>

feat(ISV-5130): upload product SBOMs

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): fix shellcheck and yamllint

Signed-off-by: Martin Jediny <[email protected]>

feat(ISV-5130): remove skipping option

Signed-off-by: Martin Jediny <[email protected]>

test(ISV-5130): use test image with fixed bug

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-513): do not decode secrets

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): fixup bash script

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): fix bombastic API call

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): remove dead code

Signed-off-by: Martin Jediny <[email protected]>

fix(ISV-5130): use utils image with fixed script

Signed-off-by: Martin Jediny <[email protected]>
@jedinym jedinym changed the title wip(ISV-5130): Add pipeline steps for SBOM upload feat(ISV-5130): Add pipeline steps for SBOM upload Nov 12, 2024
@jedinym jedinym changed the title feat(ISV-5130): Add pipeline steps for SBOM upload feat(ISV-5130): add pipeline steps for SBOM upload Nov 12, 2024
@jedinym jedinym marked this pull request as ready for review November 12, 2024 10:24
@jedinym jedinym requested a review from a team as a code owner November 12, 2024 10:24
## Changes in 2.0.0
* Create and upload product-level SBOMs to the Atlas release instance
* Update and upload component-level SBOMs to the Atlas release instance
* The "atlas" field in the RPA specifying which instance to use is mandatory
Contributor

This probably doesn't need to be here. The pipeline changelog should most of all say what changed in the pipeline - which is that the three tasks were added. And then perhaps briefly say what they do.


atlasServer=$(jq -r '.atlas.server' "$DATA_FILE")
if [ "$atlasServer" = "null" ]; then
    echo "ERROR: The JSON file does not contain the 'atlas.server' field."
    exit 1
fi
Contributor

Would it make sense to have a default value for this? Or maybe have this whole thing optional? I'm just worried about the transition to enable this. If it's really gonna be mandatory, is everything ready now? Do all the RPAs have this defined now? Are the secrets ready on all clusters? Because some users may be using the development branch.
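The default value this comment asks about could be implemented as a small fallback after the `jq` read. A sketch only; whether "stage" is the right default is exactly the open question in this thread.

```shell
#!/usr/bin/env bash
# Sketch: fall back to a default Atlas server when the RPA omits the field.
# Using "stage" as the default is an assumption for illustration, not a
# decision taken in this discussion.
default_atlas_server() {
  server="$1"   # raw value, e.g. from: jq -r '.atlas.server' "$DATA_FILE"
  if [ -z "$server" ] || [ "$server" = "null" ]; then
    server="stage"
  fi
  echo "$server"
}
```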

Contributor Author

This was my worry as well, but I was guided by this discussion to implement it like this. Per rbean, the secrets are in the prod rh01, prod p01, and prod p02 clusters.

Contributor

Yeah, ok, the secrets are there, but no RPAs have the required settings yet, right? So that should be done first, otherwise this will break users.

Contributor Author

Sure, I am not so well versed in this, that's why I opened the discussion before. What do you think needs to be done now?

Contributor

Now I noticed that there is updateComponentSBOM and it defaults to false. So ok, without any change to RPA, this will not happen. But in that case .atlas.server should definitely not be required to be in the data file. If you don't want to have a default for the server, then the reading of the server (and throwing an error if missing) should only happen if updateComponentSBOM is true.

Also, I noticed that updateComponentSBOM is emitted as a result, but that result is not used for anything. So what is it for? The other two tasks should probably have a when: clause and be executed only if the updateComponentSBOM result is true, no?
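The `when:` gating suggested here could look roughly like the following pipeline fragment. This is a sketch: the task name is taken from the README excerpt in this PR, and the task emitting the result is an assumption based on this thread.

```yaml
# Hypothetical pipeline fragment: run the component-SBOM task only when the
# updateComponentSBOM result is "true". Names are assumptions from this thread.
- name: update-component-sbom
  taskRef:
    name: update-component-sbom
  when:
    - input: "$(tasks.collect-atlas-params.results.updateComponentSBOM)"
      operator: in
      values: ["true"]
```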

Contributor Author
@jedinym commented Nov 14, 2024

The smoothest solution probably is:

  • The Atlas field in the RPA is fully optional.
  • Only enable product-level SBOM creation and push when the Atlas field is present.
  • Only enable component-level SBOM update when the Atlas field is present and the enabling flag is set to true.

What do you think?


The updateComponentSBOM result flag will be used with the #656 changes once that is merged.

Contributor

I wasn't aware there were two different things - product-level SBOM and component-level SBOMs.

I think both of them should be treated equally, no? But it seems this PR only adds tasks to create and upload the product SBOM, not component-level SBOMs.

> In the latest commit, I made the SBOM processing optional when the Atlas fields are not provided in the RPA.

Yeah, that looks good.

Contributor

Yeah, it looks ok. But we don't have that other update task yet, so I will have to review again when that gets added.

Contributor Author
@jedinym commented Nov 18, 2024

Sorry if it was confusing, merging #656 is taking kind of long, so I thought I would expedite the process here by first putting the product SBOMs up for review and then the component stuff. In hindsight I probably should have created two PRs.

Contributor

No worries. One reason it takes longer is that we had our face-to-face meeting last week, so my responsiveness was limited.

Contributor Author
@jedinym commented Nov 18, 2024

@mmalina In the latest commit, I made the SBOM processing optional when the Atlas fields are not provided in the RPA.

* Add create-product-sbom task to create product-level SBOMs.
* Add update-component-sbom task to update component-level SBOMs with release
info.
* Add two upload-sbom-to-atlas tasks to upload the product and updated
Contributor

What does this mean? In the task update, I only see one upload task being added: upload-product-sbom

Contributor Author

The README already accounts for the component changes that will be implemented when #656 is merged (as is written in the PR description above). I don't think there will be any difficulty with that, I just want to make sure no last-minute changes in #656 break this PR.

Contributor

OK, that's a very important piece of information.

> Component-level SBOM upload will be added when #656 is merged.

The way I read this was that once that PR is merged, the updated component sboms will be added in a subsequent PR. In that case this is really a draft - definitely not ready to be merged.

Contributor Author

Makes sense, I will update the description.

Collaborator

Can you say the task names instead of "two upload-sbom-to-atlas tasks"? The three entries above this one are a lot clearer to me.

Contributor
@mmalina commented Nov 18, 2024

> @mmalina In the latest commit, I made the SBOM processing optional when the Atlas fields are not provided in the RPA.

Let's continue in this thread: #627 (comment)

# collect-atlas-params

Tekton task that collects the Atlas server option from the data file. Based on
the value of the "atlasServer" field ("stage" or "production"), outputs values
Collaborator

I think this was changed to atlas.server

Collaborator

Can you say "results" instead of "values"?

- name: "UPDATE_COMPONENT_SBOM"
value: '$(params.updateComponentSBOM)'
script: |
#!/usr/bin/env sh
Collaborator

It would be better to use bash here like you do in the task (we try to keep the tasks and their tests in sync with each other)

- name: data
workspace: release-workspace
runAfter:
- check-data-keys
Collaborator

We aren't passing atlas to the check-data-keys systems param, so I am not sure this task should rely on that one at all. Maybe another task is more fitting here?


- name: data
workspace: release-workspace
runAfter:
- check-data-keys
Collaborator

same comment about this line


output_path=$(workspaces.data.path)/${subdir_sbom_path}
mkdir -p "$(dirname "$output_path")"
cp "$tmp_sbom" "$output_path"
Collaborator

Any reason to not just determine output_path first and then use that for --output-path in create_product_sbom and not call mktemp or the cp?

Contributor Author

This is done this way because the filename in the output path depends on the contents of the SBOM (the name and version of the product), and I wanted to avoid the complexity of duplicating the data extraction from data.json. I added a comment for clarity and used mv to make it more obvious.
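The temp-file-then-move pattern the author describes might look like the sketch below. This is an illustration only, not the task's actual code; the field extraction is deliberately simplified (a real task would use jq, as elsewhere in this PR).

```shell
#!/usr/bin/env bash
# Sketch of the pattern discussed above: the final filename depends on the
# SBOM's own product name and version, which are only known after the SBOM
# has been generated, so the file is written to a temp path first and then
# moved into place. Field extraction is simplified for illustration.
place_sbom() {
  tmp_sbom="$1"   # freshly generated SBOM (JSON), e.g. from mktemp
  out_dir="$2"    # workspace directory for SBOMs
  # Crude extraction of the first "name" and "version" fields.
  name=$(sed -n 's/.*"name": *"\([^"]*\)".*/\1/p' "$tmp_sbom" | head -n1)
  version=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' "$tmp_sbom" | head -n1)
  mkdir -p "$out_dir"
  # mv (rather than cp) makes it obvious the temp file is consumed.
  mv "$tmp_sbom" "$out_dir/${name}-${version}.json"
}
```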

| supportedCycloneDxVersion | If the SBOM uses a higher CycloneDX version, `syft convert` in the task will convert all SBOMs to this CycloneDX version before uploading them to Atlas. If the SBOM is already in this version or lower, it will be uploaded as is. | Yes | 1.4 |
| supportedSpdxVersion | If the SBOM uses a higher SPDX version, `syft convert` in the task will convert all SBOMs to this SPDX version before uploading them to Atlas. If the SBOM is already in this version or lower, it will be uploaded as is. | Yes | 2.3 |
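The "convert only if the SBOM's spec version is higher than the supported one" rule in the table above reduces to a version comparison. A sketch of that decision, independent of `syft` itself (assumes GNU `sort -V` for version ordering):

```shell
#!/usr/bin/env bash
# Sketch of the convert-or-not decision described in the README excerpt:
# convert only when the SBOM's spec version is strictly higher than the
# supported version; otherwise it is uploaded as is. Works for dotted
# numeric versions like "1.4" or "2.3". Requires GNU sort -V.
needs_conversion() {
  sbom_version="$1"
  supported_version="$2"
  # The lower of the two versions sorts first; if the supported version
  # sorts first and the versions differ, the SBOM version is higher.
  lowest=$(printf '%s\n%s\n' "$sbom_version" "$supported_version" | sort -V | head -n1)
  [ "$lowest" = "$supported_version" ] && [ "$sbom_version" != "$supported_version" ]
}
```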

## Changes in 0.3.0
Collaborator

There was never a 0.2.0. Why not just make this 0.2.0 with both changes in it, rather than falsely listing what existed in 0.2.0 and calling this 0.3.0?

Contributor Author

Just an oversight, I didn't realize this was still a single PR.

@@ -4,20 +4,19 @@ kind: Task
 metadata:
   name: upload-sbom-to-atlas
   labels:
-    app.kubernetes.io/version: "0.1.0"
+    app.kubernetes.io/version: "0.3.0"
Collaborator

As mentioned in the README, I think 0.2.0 makes more sense. It is okay to have multiple changes in one new version
