
Decommission Koji setup #2006

Closed
ekohl opened this issue Dec 21, 2023 · 13 comments

@ekohl
Member

ekohl commented Dec 21, 2023

With the move to COPR (#1795) the Koji setup is no longer needed. Today we still build Foreman 3.7 & 3.8, but after those are EOL the infrastructure can be decommissioned.

@ehelms
Member

ehelms commented Dec 21, 2023

If I do the math correctly, that means with the 3.11 release we can take on this work.

@evgeni
Member

evgeni commented Dec 21, 2023

I think that's correct.

Technically we also have to consider Katello 4.10 and whatever Pulpcore was before 3.39, but luckily all those align together.

@ekohl ekohl mentioned this issue May 2, 2024
@ekohl
Member Author

ekohl commented May 2, 2024

When we've removed the VMs, we should make sure koji.katello.org is also removed.
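
A quick way to verify that once DNS has been cleaned up (any resolver will do; nothing project-specific assumed here):

# should print nothing once koji.katello.org no longer resolves (NXDOMAIN)
dig +short koji.katello.org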

@ekohl
Member Author

ekohl commented May 6, 2024

Expanding on my previous comment, I think it's a good time to consider what the full plan is. First of all, can we start turning off koji itself?

http://koji.katello.org/koji shows the last build was a month and a half ago, so that's good. Then the question is whether we need anything from http://koji.katello.org/releases/.

Currently we have:

/dev/xvda1      8.0G  3.5G  4.6G  44% /
/dev/nvme0n1p2  505G  274G  206G  58% /mnt/tmp
/dev/xvdx1     1008G  939G   19G  99% /mnt/koji

So that's not easy to just dump somewhere. Another thing to note is that /mnt/tmp is ephemeral and will be deleted. Quoting /mnt/tmp/README:

This volume is ephemeral, stopping the VM immediately deletes it.

It only contains temporary and work files, but we keep backups at
/mnt/koji/backups/ephemeral (which is AWS EBS volume). The backup
script is at:

/etc/cron.weekly/koji-backup

To restore do this:

duplicity restore file:///mnt/koji/backups/ephemeral /mnt/tmp --force --no-encryption

The backup skips filenames with RPM extension to keep the backup
clean and small - the most important are directories and permissions.
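
Based on that README, the weekly script presumably boils down to a single duplicity run along these lines (a sketch only; the real /etc/cron.weekly/koji-backup and its exact exclude pattern may differ):

#!/bin/sh
# Hypothetical reconstruction of /etc/cron.weekly/koji-backup:
# back up the ephemeral /mnt/tmp onto the EBS-backed backup volume,
# skipping *.rpm files so that mostly directories, permissions and
# small work files end up in the backup.
duplicity --no-encryption --exclude '**.rpm' /mnt/tmp file:///mnt/koji/backups/ephemeral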

Perhaps we can copy over the backups:

# du -sh /mnt/koji/backups/
209G	/mnt/koji/backups/

But it should be noted that those have file timestamps going back to 2017, so it's probably sufficient to take just the latest ones.

As for storage to place it: we have 650 GB of unallocated storage at OSUOSL.

@evgeni
Member

evgeni commented May 6, 2024

> Expanding on my previous comment, I think it's a good time to consider what the full plan is. First of all, can we start turning off koji itself?

I think so, yes.

> http://koji.katello.org/koji shows the last build was a month and a half ago, so that's good.

That correlates with theforeman/jenkins-jobs#436, which dropped support for building there ;)

> Then the question is whether we need anything from http://koji.katello.org/releases/.

I'd argue we have everything we need on yum.theforeman.org?

> Currently we have: […] Another thing to note is that /mnt/tmp is ephemeral and will be deleted. […]
> Perhaps we can copy over the backups: […] it's probably sufficient to take the latest ones. As for storage to place it: we have 650 GB of unallocated storage at OSUOSL.

If we only take things from 2024, it's 13G.
But given it does not include the RPMs, I don't see much value in the backups outside the existing install?
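
(For reference, a figure like that can be obtained by summing only the files modified in 2024; the exact command used is an assumption:)

# sum only backup files touched in 2024 (assumed method behind the 13G figure)
find /mnt/koji/backups -type f -newermt 2024-01-01 -print0 | du -ch --files0-from=- | tail -n1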

@ekohl
Member Author

ekohl commented May 6, 2024

> But given it does not include the RPMs, I don't see much value in the backups outside the existing install?

I was leaning the same way. So a concrete plan:

  • 2024-05-06 Announce plan on Discourse
  • 2024-05-13 turn machine(s) off
  • 2024-06-01 delete machine(s)
    • Remove AWS machine(s) (IIRC @pcreech has access)
    • Remove DNS records like koji.katello.org (IIRC managed by Red Hat IT)
    • Remove from Foreman (multiple people have access)

@evgeni evgeni moved this to In progress in Infrastructure Jun 10, 2024
@evgeni
Member

evgeni commented Jun 10, 2024

@pcreech shut down the machine last Friday.

@evgeni
Member

evgeni commented Jun 27, 2024

@pcreech you can go ahead and delete it

TODO:

  • DNS
  • any readmes/docs referring to koji need to be cleaned up (see the grep sketch below)
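
A quick way to find leftover references (a sketch; the repository checkouts listed are just examples):

# search local docs/infra checkouts for lingering koji references
grep -rni 'koji' foreman-infra/ foreman-documentation/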

@pcreech
Member

pcreech commented Aug 16, 2024

@evgeni Deleted. Koji is gone.

@evgeni
Member

evgeni commented Aug 16, 2024

Awesome.

Do you know who I need to pester about katello.org DNS?

@evgeni
Member

evgeni commented Oct 24, 2024

@ekohl please file a ticket with RH IT

@ekohl
Member Author

ekohl commented Nov 4, 2024

Ticket has been filed and resolved. The DNS record is gone and with that I consider this resolved.

@ekohl ekohl closed this as completed Nov 4, 2024