[Bug]: RDS instances not syncing due to wrong AZ in spec #1379
Comments
I found a workaround that will unblock me, though it doesn't explain why things got into this state to begin with. If I delete the Example with kubectl: …
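The original comment is truncated above; a plausible shape of such a delete-and-recreate workaround is sketched below. The resource name (exampledb) and manifest file are assumptions, and the deletionPolicy patch is a precaution so that deleting the managed resource does not destroy the actual database:

```shell
# Hypothetical sketch; resource name is an assumption, not from the original comment.
# Orphan the external resource first, so deleting the MR does NOT delete the RDS instance in AWS.
kubectl patch instance.rds.aws.upbound.io exampledb \
  --type merge -p '{"spec":{"deletionPolicy":"Orphan"}}'
kubectl delete instance.rds.aws.upbound.io exampledb

# Re-applying the manifest re-imports the instance, and late initialization
# repopulates spec.forProvider.availabilityZone from the currently observed AZ.
kubectl apply -f exampledb.yml
```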
The process of copying fields from the observed state into the spec is Crossplane's late initialization, which is how the AZ gets written into spec.forProvider in the first place.

Version 0.40 is really too old for bug reports to be meaningful. Since then, we've completely changed the way we reconcile resources (no longer forking terraform processes for each managed resource), which made a huge improvement in the provider's compute resource requirements, and we've upgraded to terraform provider aws version 5.x, to name just some of the major changes. I would encourage you to upgrade to a newer version of the provider, and I expect this issue would likely not recur. If it does, I'd be happy to look at a bug report with more specific steps to reproduce. By the way, it looks like the fork you linked is missing the backport of a bug fix for a regression introduced in v0.40.0 for the …
I also had this issue. The reason it happens: since you enabled multiAz: true, if the instance in the initial AZ has any issue, AWS fails it over to another AZ and this error starts to appear (this is pretty common). I have the same problem with other fields, like engineVersion, because AWS performs minor upgrades and Crossplane starts reporting a sync error. It would be really cool if we could have a way to specify that certain fields should be ignored. initProvider kind of does this, but we need to provide an initial value, which in most cases doesn't make sense; see the sketch below.
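For reference, a minimal sketch of the initProvider pattern the commenter describes, with hypothetical names and values (having to invent an initial value here is exactly the complaint):

```yaml
apiVersion: rds.aws.upbound.io/v1beta1
kind: Instance
metadata:
  name: exampledb              # hypothetical name
spec:
  initProvider:
    # Used only when the resource is first created; later drift in this field
    # is not treated as a diff. The catch: some initial value must be supplied.
    availabilityZone: eu-west-1a
  forProvider:
    region: eu-west-1
    multiAz: true
```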
This also relates to another issue I have, #1370: basically I just want to import an existing ReplicationGroup and tell Crossplane to ignore the authTokenSecretRef field, but (AFAIK) that is not possible because Crossplane always tries to sync it.
We are facing the same issue. |
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days, it is now marked as stale.
Exactly what I stumbled upon too. During OS maintenance it switches over to the secondary AZ, and suddenly the resource is no longer in sync.
Is there an existing issue for this?
Affected Resource(s)
instances.rds.aws.upbound.io
Resource MRs required to reproduce the bug
exampledb.yml.txt
mysqlinstanceandservice.yml.txt
xmysqlinstance.yml.txt
xmysqlinstanceandservice.yml.txt
Steps to Reproduce
I don't have a way to reproduce this outside our environment, but here's an approximation:
1. Create an RDS instance with multiAz set to true. Do not specify an availability zone. (A sketch of such a manifest follows this list.)
2. Check the instance (instances.rds.aws.upbound.io) to see in which AZ it landed (status.atProvider.availabilityZone).
3. Check the instance (instances.rds.aws.upbound.io) to see which AZ ends up in the spec (spec.forProvider.availabilityZone).
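For illustration, a minimal Instance manifest along these lines. Every concrete value below (name, region, engine, credentials secret) is a placeholder, not something taken from the attached files:

```yaml
apiVersion: rds.aws.upbound.io/v1beta1
kind: Instance
metadata:
  name: exampledb
spec:
  forProvider:
    region: eu-west-1
    engine: mysql
    engineVersion: "8.0"
    instanceClass: db.t3.medium
    allocatedStorage: 20
    username: admin
    passwordSecretRef:
      namespace: crossplane-system
      name: exampledb-password
      key: password
    multiAz: true              # note: no availabilityZone is specified
    skipFinalSnapshot: true
```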
What happened?
Expected: RDS instance created successfully, and the instance managed resource stays synced. The AZ in which the instance was created matches the AZ that appears in the spec.

Actual behavior: RDS instance created successfully, but at some point Synced on the Instance becomes False, because a different availability zone appeared in the spec, which causes replacement. Replacement is blocked (thankfully) due to "prevent_destroy": true. Any unrelated changes we want to make are blocked by this.
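One way to see the mismatch described above (the resource name is hypothetical):

```shell
# Compare the AZ Crossplane wants (spec) with the AZ the instance is actually in (status).
kubectl get instance.rds.aws.upbound.io exampledb \
  -o jsonpath='{.spec.forProvider.availabilityZone}{" vs "}{.status.atProvider.availabilityZone}{"\n"}'

# The Synced=False condition and its message are visible here.
kubectl describe instance.rds.aws.upbound.io exampledb
```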
Relevant Error Output Snippet
Crossplane Version
1.14.9
Provider Version
0.40.102
Kubernetes Version
v1.28.9-eks-036c24b
Kubernetes Distribution
EKS
Additional Info
Updates to the CA certificate (from rds-ca-2019 to rds-ca-rsa2048-g1) were not being applied.
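For context, such a CA update would be expressed on the managed resource roughly as below; caCertIdentifier follows the provider's Instance schema, and the rest of the manifest is elided:

```yaml
spec:
  forProvider:
    caCertIdentifier: rds-ca-rsa2048-g1   # previously rds-ca-2019
```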