Fixes 4822: rename domains with cs- prefix #865
base: main
Conversation
For existing repo snapshots, after running the rename-domains job I see the old domains when fetching the repos, so the URLs point to a 404. I also see the old domains in the config.repos from existing repo snapshots. I'm guessing it's expected that we have to trigger new snapshots before we'll see the new domains in these places?
```diff
@@ -32,8 +41,13 @@ func (dDao domainDaoImpl) FetchOrCreateDomain(ctx context.Context, orgId string)
 }

 func (dDao domainDaoImpl) Create(ctx context.Context, orgId string) (string, error) {
 	name := fmt.Sprintf("cs-%v", random.New().String(10, random.Hex))
```
Didn't notice this in the previous PR, but old custom domains (prefixed with cs-) have a length of 11 and new ones have a length of 13. My guess is that they were originally set to the same length as an org id? Just wanted to point this out; do you think it would cause any issues?
I'm assuming it shouldn't matter, since the new Red Hat domain is also a different length 😄
Yeah, I think it was just chosen to be small, but not too small. The actual size doesn't matter too much.
Everything works as described on the client 🎉 just a couple of questions above and a nitpick. Overall, nice job!!
Hrmmmm, this sounds like something didn't go as planned; I'll try to reproduce.
Ah, I'm also seeing this error from the delete-repository-snapshots task when deleting those existing custom repos:

To reproduce these things:
```go
func SetupGormTableOrFail(db *gorm.DB) {
	err := db.SetupJoinTable(models.Template{}, "RepositoryConfigurations", models.TemplateRepositoryConfiguration{})
	if err != nil {
		log.Logger.Fatal().Err(err).Msg("error setting up join table for templates_repository_configurations")
	}
}
```
Was this causing that bug? Will soft delete still work without it?
When I made the change here: 3544d27#diff-6daee329a68afa6ef518701e49dba065bb5372d88bec90794533ff924bf576d5R27 to link templates to TemplateRepositoryConfiguration directly, it complained. My guess is this might have been why it wasn't working before (it was trying to link through that table to RepositoryConfigurations directly).
What sort of behavior were you seeing? Just the template_repository_configuration entry being deleted entirely instead of soft deleted?
I suspect the change to the model may have made it okay to be removed. I noticed there was a comment above this function that said "we need this for soft delete to work", so I wanted to make sure you caught that.
You can see in the gorm docs, the models are arranged like what we previously had: https://gorm.io/docs/many_to_many.html#Customize-JoinTable
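For context, the arrangement described in those gorm docs looks roughly like this. This is an illustrative sketch only; the field names and tags are assumptions modeled on the snippet above, not the actual models in this repo:

```go
// Illustrative only: a many2many relation with a customized join table,
// per https://gorm.io/docs/many_to_many.html#Customize-JoinTable
type Template struct {
	UUID                     string                    `gorm:"primaryKey"`
	RepositoryConfigurations []RepositoryConfiguration `gorm:"many2many:templates_repository_configurations"`
}

type TemplateRepositoryConfiguration struct {
	TemplateUUID         string         `gorm:"primaryKey"`
	RepositoryConfigUUID string         `gorm:"primaryKey"`
	DeletedAt            gorm.DeletedAt // enables soft delete on the join rows
}

// SetupJoinTable tells gorm to use the custom struct (which carries
// DeletedAt) instead of an auto-generated join table:
//   err := db.SetupJoinTable(&Template{}, "RepositoryConfigurations", &TemplateRepositoryConfiguration{})
```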
I did a manual test by creating a template with 2 repos, then ran the API server without any workers. I then updated the template to only have 1 repo. Finally I checked the db:
```
content=# select template_uuid, deleted_at from templates_repository_configurations;
            template_uuid             |          deleted_at
--------------------------------------+-------------------------------
 a00baafb-1d3e-49b0-b3e9-4aa21df87716 | 2024-11-08 19:16:08.869313+00
 a00baafb-1d3e-49b0-b3e9-4aa21df87716 |
```
Finally, I started the server again with the workers and re-ran the query:
```
content=# select template_uuid, deleted_at from templates_repository_configurations;
            template_uuid             | deleted_at
--------------------------------------+------------
 a00baafb-1d3e-49b0-b3e9-4aa21df87716 |
```
So I think it's good!
I did discover that I wasn't updating repository_path in the snapshots table, so I added that as part of 1e5b426 and added an integration test.
Repository paths look good now! After running rename-domains, I'm still seeing the delete-repository-snapshots task fail on deleteRpmDistribution when deleting a repo created previously, though:

Also, when fetching a snapshot task in the UI for existing repos (which uses /admin/tasks/:uuid):

I think the pulp hrefs in our db need to be updated to the new domain too? I see the hrefs using the new domain in pulp:

but not in our db:
Hi @jlsherrill, please rebase.
Summary
Changes the template model to no longer contain RepositoryConfigurations directly, but instead TemplateRepositoryConfigurations. This was due to an observed bug with gorm: strangely, the bug was not always reproducible, but querying Templates with the default scope resulted in RepositoryConfigurations being included that had been removed from the template.
This moves all the jobs to one command ./cmd/jobs/main.go that takes the name of the job as an argument.
Deletes the repair_latest and cleanup_versions jobs
Adds a new job 'rename_domains' that renames the red hat domain to 'cs-redhat' if it's not already that, and renames every other domain to cs-PREVIOUS_NAME if it doesn't already start with 'cs-'.
The rename process involves:
Step 5 is done last so that if the process errors at any point, it can be re-run from the start for that domain simply by re-running the job.
Testing steps
start the server and let the RHEL repos snapshot
3. Create at least 1 custom repo with snapshotting enabled
4. Create a content template, assign rhel9 and custom repo(s)
5. follow https://github.com/content-services/content-sources-backend/blob/main/docs/register_client.md to register a client
6. confirm that the client can install content from the redhat repos and custom repos
7. rename the domains:
```
go run cmd/jobs/main.go rename_domains
```
8. On the client run
```
subscription-manager refresh && subscription-manager repo-override
```
9. verify that you can still install content from the red hat and custom repos
10. confirm that the template urls now point to cs-DOMAIN & cs-redhat
11. confirm that the repo configs in the webUI for snapshots still work
12. If you still see the old domains in the redhat.repo file, then try re-registering