how to re-sync database and s3 contents #21093
This is rather unusual, but I think this might have happened if, for example, the GC could not delete the files.
Did you run the GC? What was the outcome?
Howdy Vadim. First, thank you for the very quick response. Re GC: our GC is scheduled to run daily but cannot complete within 24 hours, maybe not even within 48. We are working out how the database can tell us how long a single GC run takes. As of yesterday, we had 30 GC jobs pending, plus 139 pending EXECUTION_SWEEP jobs. Since our operation is 24x7, we cannot stop the service, say, on the weekends. We significantly upsized our K8s pods on Tuesday, which brought tremendous improvement for teams performing push/pull/scan functions on large images, but no observable relief for GC. Looking at the K8s pods, we could not tell whether GC or EXECUTION_SWEEP jobs were actually running, so we stopped the queues and let those jobs reschedule themselves.
I believe GC may be impacted by the disconnect between the S3 bucket and the database. It would save us a lot of time if you could map out for us where the S3 keys for blobs, layers, manifests, and artifacts are recorded in the database. I know that's a lot to ask, but our AWS costs are drawing a lot of attention because the bucket size proportionately impacts our AWS backup, CloudWatch, and events costs. We've discussed creating the tool and contributing it to goharbor.
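As a starting point for that mapping, here is a minimal sketch. It assumes Harbor uses the upstream Docker Distribution storage layout on S3, where a blob with digest `sha256:<hex>` is stored under `docker/registry/v2/blobs/sha256/<first two hex chars>/<hex>/data`, and that the database's `blob` table carries a `digest` column; verify both against your Harbor version before relying on it.

```python
# Sketch: translate a Harbor blob digest (as stored in the database's
# blob.digest column) into the S3 object key the registry writes.
# Assumption: the upstream Docker Distribution storage layout is in use.

def digest_to_s3_key(digest: str, root: str = "") -> str:
    """Map 'sha256:<hex>' to 'docker/registry/v2/blobs/sha256/<hex[:2]>/<hex>/data'."""
    algo, _, hex_part = digest.partition(":")
    prefix = f"{root}/" if root else ""  # optional bucket root prefix
    return f"{prefix}docker/registry/v2/blobs/{algo}/{hex_part[:2]}/{hex_part}/data"
```

With this in hand, every digest from `SELECT digest FROM blob` can be turned into an expected S3 key and checked against the actual bucket contents.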
Our leadership has asked about the possibility of a collaborative session with your SME(s) on this subject. If that’s possible, please let me know. Email is the best communications conduit for the moment. Any info or direction you can provide is greatly appreciated.
Thanks.
Michael David
Lockheed Martin – Enterprise IT
Orlando, Fl
Office: 407-306-1392
From: Vadim Bauer
Subject: Re: [goharbor/harbor] how to re-sync database and s3 contents (Issue #21093)
1. No such tool exists; in my opinion, it would have to be created so that it iterates over Harbor (the database) and S3, finding layers, blobs, and manifests that are in S3 but not in Harbor.
2. I am not sure; we had some functionality storing data in S3, but you would see it in the bucket, at the top level next to "docker".
Did you run the GC?
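The reconciliation tool described in point 1 could be sketched roughly as follows: collect the blob keys present in S3, collect the digests the database knows about, and report S3 objects with no matching row. The registry key layout and the `blob.digest` source are assumptions to verify against your deployment; in practice the key list would come from a boto3 `list_objects_v2` paginator and the digest list from `SELECT digest FROM blob`.

```python
# Sketch of an S3/database reconciliation pass: flag registry blob objects
# in S3 whose digest has no record in Harbor's database.
import re

# Assumes the Docker Distribution blob layout: .../blobs/<algo>/<xx>/<hex>/data
BLOB_KEY_RE = re.compile(
    r"docker/registry/v2/blobs/(?P<algo>[a-z0-9]+)/[a-f0-9]{2}/(?P<hex>[a-f0-9]+)/data$"
)

def digest_from_key(key: str):
    """Extract 'sha256:<hex>' from a registry blob key, or None if it isn't one."""
    m = BLOB_KEY_RE.search(key)
    return f"{m.group('algo')}:{m.group('hex')}" if m else None

def find_orphans(s3_keys, db_digests):
    """Return S3 blob keys whose digest has no row in the database."""
    known = set(db_digests)
    return [k for k in s3_keys
            if (d := digest_from_key(k)) is not None and d not in known]
```

Anything this flags should be treated as a candidate only; deleting flagged objects is only safe once GC is known to be healthy, since in-flight uploads briefly exist in S3 before their database rows do.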
Hi team: we believe our PostgreSQL database artifact contents are out of sync with what's actually in the S3 bucket. Our S3 bucket size is 143 TB. From our users' standpoint, the Harbor UI, Swagger, and docker push/pull are all functioning as expected. But there's no way we have 143 TB of active projects/repos/artifacts.
Thanks.