backend, frontend: implement project and build deletion in Pulp #3330
Conversation
Not yet ready, I am submitting a draft so you can see the progress.
Force-pushed from 13d22df to c4fefed
Force-pushed from c66881f to 4cad51e
Force-pushed from 0fd9686 to 200d0d1
@nikromen I rebased the PR from
Thanks! I think the PR is nearly finished. Just some things still don't work as expected so I listed them below.
Overall I think the newly added methods to the Pulp client should work.
backend/copr_backend/pulp.py

    response = requests.get(url, **self.request_params)
    return response.json()["results"][0]["pulp_href"]

def wait_for_finished_task(self, task):
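The snippet above indexes `results[0]` unconditionally; a minimal hedged sketch (hypothetical helper name, not the actual copr code) of guarding against an empty result set:

```python
def parse_pulp_href(response_json):
    """Return the pulp_href of the first matching result.

    Hypothetical helper: guards against an empty "results" list,
    which the inline results[0] access would turn into an IndexError.
    """
    results = response_json.get("results") or []
    if not results:
        raise LookupError("no matching Pulp resource found")
    return results[0]["pulp_href"]
```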
This worked, thanks! What if the task gets stuck on the Pulp side? Do they have some hard timeout? Or do we get stuck here forever?
To be honest, no idea whether a task can get stuck on the Pulp side. I think we would get stuck in this loop "forever", but "forever" should only last until Pulp gets unstuck.
Should I add some timeout on our side?
I'd say yes, since some rare errors may happen and then we could end up here again, see #3343. Let's just put there some unlikely-high timeout, e.g. 5 hours.
Done
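The agreed-on change can be sketched as a polling loop with a hard timeout. The client interface and the Pulp task state names below are assumptions (based on the Pulp 3 tasks API), not the exact code in this PR:

```python
import time

# Final states a Pulp 3 task can reach (assumption based on the Pulp 3 API)
FINAL_TASK_STATES = ("completed", "failed", "canceled", "skipped")
PULP_TASK_TIMEOUT = 5 * 60 * 60  # the unlikely-high 5 hour limit discussed above

def wait_for_finished_task(client, task_href, timeout=PULP_TASK_TIMEOUT,
                           poll_interval=5):
    """Poll a Pulp task until it reaches a final state or the timeout expires.

    `client.get_task(task_href)` is a hypothetical method returning the
    task as a dict with a "state" key.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = client.get_task(task_href)
        if task["state"] in FINAL_TASK_STATES:
            return task
        time.sleep(poll_interval)
    raise TimeoutError("Pulp task {0} did not finish within {1}s"
                       .format(task_href, timeout))
```

With a timeout in place, a task stuck on the Pulp side no longer blocks the backend worker forever; the raised exception can then surface as an action failure.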
# pylint: disable=too-many-locals
result = True
for chroot, subdirs in chroot_builddirs.items():
    if chroot == "srpm-builds":
Is this because repodata aren't created for srpm builds, so we don't need to delete anything for them?
I am wondering what will happen with logs. We delete them in the backend storage but not in Pulp - is that part of delete artifacts? Otherwise the logs will still remain.
this is because repodata aren't created for srpm build thus we don't need to delete anything for srpm builds?
(So far) we don't upload results of source builds to Pulp, so there is nothing to remove. I added a code comment so it's clear.
However, in that case, we probably need to remove them from the backend storage, right?
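The special case above can be made explicit in a short sketch of the deletion loop (the function and the `delete_chroot_dirs` callback are hypothetical stand-ins, not the actual copr code):

```python
def delete_build_from_pulp(chroot_builddirs, delete_chroot_dirs):
    """Delete build results from Pulp, skipping source builds.

    Hypothetical sketch: `delete_chroot_dirs(chroot, subdirs)` stands in
    for the actual per-chroot Pulp deletion and returns True on success.
    """
    result = True
    deleted = []
    for chroot, subdirs in chroot_builddirs.items():
        if chroot == "srpm-builds":
            # Results of source builds are not uploaded to Pulp (so far),
            # so there is nothing to remove for them
            continue
        if delete_chroot_dirs(chroot, subdirs):
            deleted.append(chroot)
        else:
            result = False
    return result, deleted
```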
I've got srpm stored inside the pulp instance:
lftp localhost:/pulp/content/nikromen/test-pulp-3/fedora-39-x86_64/Packages/p> ls
drwxr-xr-x -- ..
-rw-r--r-- -- python-dside-2.3.3-1.fc39.src.rpm
-rw-r--r-- -- python3-dside-2.3.3-1.fc39.noarch.rpm
lftp localhost:/pulp/content/nikromen/test-pulp-3/fedora-39-x86_64/Packages/p>
but it is somehow deleted with the build deletion - probably the resources map also covers the srpms.
However, in that case, we probably need to remove them from the backend storage, right?
I only checked the existence of the project inside Pulp - after the meeting I'll review the backend storage for remnants.
after meeting I'll review backend storage if there are some remnants
Yes, there are remnants from the deleted builds and deleted projects - so we need to temporarily call the delete methods from backend storage inside Pulp storage to remove what's on the backend.
Done
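The temporary fallback described above can be sketched like this; the class and method names are assumptions mirroring the discussion, not the exact copr storage classes:

```python
class BackendStorage:
    """Stands in for the existing on-disk results storage."""
    def __init__(self):
        self.deleted = []

    def delete_builds(self, dirname, chroot_builddirs):
        # remove leftover logs/srpms from the backend filesystem
        self.deleted.append((dirname, tuple(sorted(chroot_builddirs))))


class PulpStorage(BackendStorage):
    """Deletes from Pulp, then temporarily also from backend storage,
    because deleted builds/projects still leave remnants on disk."""
    def __init__(self):
        super().__init__()
        self.pulp_deleted = []

    def delete_builds(self, dirname, chroot_builddirs):
        self.pulp_deleted.append(dirname)  # stands in for the Pulp API calls
        # Temporary: clean up what the Pulp deletion leaves on the backend
        super().delete_builds(dirname, chroot_builddirs)
```

Inheriting from the backend storage keeps the fallback a one-line `super()` call that can be dropped once Pulp holds everything, including logs.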
Thank you very much for the review @nikromen
Force-pushed from 200d0d1 to 071f742
Thank you for the fixes! Now I can confirm that deletion of projects/packages works! Packages/projects inside the Pulp instance are created and deleted as they should be. I just found one more caveat (unrelated to this PR) - I can't install the packages via copr-cli :/ Could you please confirm that you were also unable to install https://copr.stg.fedoraproject.org/coprs/nikromen/test-pulp-3/ ? If so, I'd create a separate issue, since this is unrelated to deletion.
You mean
ahaaa :D nice nice
Force-pushed from 071f742 to 9eb9c43
Fix fedora-copr#3507. The issue was introduced in PR fedora-copr#3330. We previously did this on the backend: devel = uses_devel_repo(self.front_url, ownername, projectname). It looks like an unnecessary request, so I am sending the attribute as part of the action data.
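That follow-up fix replaces the backend's extra frontend request with a flag shipped in the action data; a hedged sketch of the frontend side (field names are assumptions, not the exact copr action payload):

```python
def build_delete_action_data(ownername, projectname, uses_devel_repo):
    """Frontend side: include the devel flag directly in the action data
    so the backend no longer needs the extra
    uses_devel_repo(front_url, ownername, projectname) request.

    Hypothetical payload shape for illustration only.
    """
    return {
        "ownername": ownername,
        "projectname": projectname,
        "devel": uses_devel_repo,
    }
```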
Fix #3318
Fix #3319