
internal/recover: Write the global DB patch on all members #208

Closed

Conversation

MggMuggins
Contributor

Follow-up on the test failures in #207. This is almost surely the same race condition I was seeing in LXD.

TL;DR: we're selecting on the member's new address before core_cluster_members has been updated to include the new addresses. I haven't traced through this as finely as I did for LXD, but as over there, fixing it "correctly" is likely to be non-trivial/breaking. Since the query is idempotent, creating the patch file on all nodes is the straightforward fix.

A fix for the same issue as in LXD: canonical/lxd#13754 (comment)

Signed-off-by: Wesley Hershberger <[email protected]>
@MggMuggins MggMuggins requested a review from masnax July 29, 2024 22:56
@MggMuggins MggMuggins changed the title internal/recover: Write the global DB patch on all nodes internal/recover: Write the global DB patch on all members Jul 29, 2024
@masnax
Contributor

masnax commented Jul 29, 2024

Looking at that query, I think we could pass the daemon name to the DB struct and use it as the query filter to avoid dealing with the address entirely.

masnax added a commit that referenced this pull request Jul 30, 2024
@MggMuggins MggMuggins closed this Jul 30, 2024
@MggMuggins MggMuggins deleted the recover-patch-db-on-all-members branch July 30, 2024 20:25