db_import with keyauth_credentials fails on re-run #12288
fixes Kong#12288. The generated upsert DAO function uses the primary key as its "ON CONFLICT" match. That is not always the right conflict target: a field may carry a unique flag, or may also be workspaceable. PostgreSQL requires the conflict index to be specified explicitly. I tried autodetecting the correct behavior, but it failed on other models; changing the primary key did not work either. I added a way to specify the conflict index; it defaults to the primary key.
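A minimal sketch of the failure mode, using Python's stdlib sqlite3 as a stand-in for PostgreSQL (SQLite's UPSERT follows the same conflict-target rule). The table and values are simplified placeholders, not Kong's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE keyauth_credentials (
        id  TEXT PRIMARY KEY,   -- entity id, regenerated on each import
        key TEXT UNIQUE         -- the API key, stable across imports
    )
""")
conn.execute("INSERT INTO keyauth_credentials VALUES ('uuid-1', 'very-secret-key')")

# A re-import generates a fresh id, so ON CONFLICT(id) never matches and the
# plain insert then trips the UNIQUE constraint on "key" -- the reported error.
try:
    conn.execute("""
        INSERT INTO keyauth_credentials (id, key)
        VALUES ('uuid-2', 'very-secret-key')
        ON CONFLICT(id) DO UPDATE SET key = excluded.key
    """)
except sqlite3.IntegrityError as e:
    print("re-run fails:", e)

# Targeting the unique column instead makes the re-import idempotent:
# the existing row is matched and the statement becomes a no-op update.
conn.execute("""
    INSERT INTO keyauth_credentials (id, key)
    VALUES ('uuid-2', 'very-secret-key')
    ON CONFLICT(key) DO UPDATE SET key = excluded.key
""")
print(conn.execute("SELECT id, key FROM keyauth_credentials").fetchall())
```

The second statement leaves the original row ('uuid-1', 'very-secret-key') in place, which is why the PR makes the conflict index configurable rather than hard-wired to the primary key.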
@ahalay Thank you for reporting this issue. As you have seen in PR #12597, the underlying issue is difficult to fix. We think of …
Thanks for the info @hanshuebner. Unfortunately this is something we'd really like to avoid, as we haven't found a more convenient way than …
Given that a complete solution is not in sight, would it be possible to perform a check on the database to see whether it needs to receive its initial payload, using …
Yes, it is possible, but that deprives us of the ability to update the value of the keyauth credential whose uniqueness the function complains about.
We have exactly the same problem. We also run db_import after the automated migration steps in order to overwrite the Admin API route and the admin credentials (this avoids locking users out when the route, user, or API key is accidentally removed). I just wanted to upgrade from 3.3.0 to 3.6.1 and had the same issues. Using decK would not be a solution, as we would have a chicken-and-egg problem when something is accidentally removed. So this is a kind of bypass ensuring that we always have access to our production system: we just need to reboot the container and have access again. We have no other command-line access to change anything, as that is managed by our oc4 provider.
We'll see how we can address this issue. (KAG-4044)
We have discussed this internally once more and also reviewed #12597. It is unfortunately not straightforward to completely fix the issue, given that PostgreSQL only allows one conflict target per `INSERT ... ON CONFLICT` statement.

Would you be able to specify the IDs of your entities in your YAML files?
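If pinning IDs works for your setup, a declarative file might look like the sketch below. The consumer name, key, and UUIDs are placeholders, not taken from the reporter's config; with stable `id`s, repeated `db_import` runs resolve to the same rows instead of colliding on the unique `key`:

```yaml
_format_version: "3.0"
consumers:
  - username: admin                               # placeholder consumer
    id: 0f9a4c2e-1b3d-4e5f-8a6b-7c8d9e0f1a2b      # pinned, placeholder UUID
    keyauth_credentials:
      - key: very-secret-key
        id: 1a2b3c4d-5e6f-4a8b-9c0d-1e2f3a4b5c6d  # pinned, placeholder UUID
```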
Yes, I think this option should work. As far as I understand, if an existing record has a different id, it can be safely updated in the database itself before …
Closing, as this is a non-issue and the reporter seems to have a workaround.
Is there an existing issue for this?

Kong version (`$ kong version`)

Kong 3.4.0 (Kong 3.3.1 is not affected)
Current Behavior

When I try to run `db_import` a second time with the same config file, it fails with an error. (I use it for initial configuration and run it in a k8s init container before further configuration via decK.)

Errors:

kong container:

Error: Failed importing: [postgres] UNIQUE violation detected on '{key="very-secret-key"}'

postgresql container:
Expected Behavior

Kong version 3.3 allows you to run `db_import` multiple times, updating existing database records.

Steps To Reproduce

Fails on a second run of `kong config db_import kong.yaml`.

kong.yaml:
Anything else?
No response