Fix nullable deleted column in API Keys table #15956
Conversation
Doesn't this potentially make old (and thus supposedly inactive) API keys valid? Can we just fix those external scripts and call it a day?
See also #14489 for context
Would you also need to add
Edit: I initially wrote
If I remember correctly, with #14489 only the latest API key that is not deleted will be the valid one, so I guess the only problem will be with a new API key that is `deleted=NULL`.
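As an aside, here is a minimal, self-contained illustration (assumed schema and query, not Galaxy's actual lookup code) of why a nullable `deleted` flag is awkward: under SQL's three-valued logic a NULL value matches neither `deleted = TRUE` nor `deleted = FALSE`, so such a key is silently skipped by a plain equality filter.

```python
import sqlalchemy as sa

metadata = sa.MetaData()
api_keys = sa.Table(
    "api_keys",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("key", sa.String),
    sa.Column("deleted", sa.Boolean),  # nullable, mirroring the pre-PR schema
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(
        api_keys.insert(),
        [
            {"key": "legacy-from-script", "deleted": None},
            {"key": "current", "deleted": False},
        ],
    )
    active = conn.execute(
        sa.select(api_keys.c.key).where(api_keys.c.deleted == sa.false())
    ).scalars().all()
    # Only "current" is returned; the deleted=NULL row matches neither TRUE nor FALSE.
    print(active)
```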
I'm completely fine with that, ping @hexylena in case she has some opinions.
Apparently, this is the default when you don't explicitly set the `nullable` parameter.
I guess @nsoranzo meant that it needs to be changed to `nullable=False`, no?
Indeed, sorry!
Right... so if an admin set `deleted=False` on the latest API key, your old key (which is now the latest non-deleted one) becomes active... it just seems like there is too much room for error here. It may not work that way, but the fact that we can't talk about this without tripping up makes me think we should migrate. Or not do anything and fix the script.
What happens when you declare a column as non-nullable in the migration but you have NULL values in the database? If that only blocks inserting new NULL values, then that would also be a valid option.
The migration will fail, because the existing NULL values violate the new NOT NULL constraint.
Agreed, I will change the existing `deleted=NULL` rows to `deleted=True` as suggested; it feels much safer. In the worst case the user will need to recreate the API key, because the one that may have been in use was discarded in the migration.
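A minimal sketch of that data fix, assuming a plain SQL UPDATE at the start of the revision's upgrade (the exact statement in the PR may differ; a fuller revision sketch follows the batch-mode suggestion below):

```python
from alembic import op


def upgrade():
    # Treat legacy NULL rows as deleted so the NOT NULL constraint added right
    # after this can be applied without failing on existing data.
    op.execute("UPDATE api_keys SET deleted = true WHERE deleted IS NULL")
    # ... then alter the column to be non-nullable.
```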
This probably impacts only EU. If it's easier, then let's fix this on dev and not introduce a migration in a stable branch.
(force-pushed from 53f7db6 to 0a4d214)
I mean, yeah, we can; we are doing that separately. That said, this column allowed NULLs, which semantically do not make sense, and we should fix that underlying issue as well at some point; how that happens (23.0 vs dev) is all fine for me. But now that it 'fails closed', this looks better to me.
Looks good to me. @jdavcs, can you also take a look?
Both alter statements (upgrade and downgrade) should be wrapped in a batch operations context manager:

    with op.batch_alter_table(table_name) as batch_op:
        batch_op.alter_column(column_name, existing_type=sa.Boolean(), nullable=True, server_default=None)

Note that we do not need to pass
The reason for that is that SQLite has very limited support for ALTER statements. The batch mode uses the "move and copy" approach. The rest looks fine.
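To make that suggestion concrete, here is a sketch of how the whole revision could look with both directions in batch mode. The `down_revision` placeholder, the backfill statement, and the `sa.false()` server default are illustrative assumptions, not the code merged in this PR:

```python
"""Make api_keys.deleted non-nullable (illustrative sketch, not the merged revision)."""
import sqlalchemy as sa
from alembic import op

# Revision identifiers; the parent revision is a placeholder here.
revision = "b855b714e8b8"
down_revision = None

table_name = "api_keys"
column_name = "deleted"


def upgrade():
    # Backfill legacy NULLs first so the NOT NULL constraint can be applied.
    op.execute("UPDATE api_keys SET deleted = true WHERE deleted IS NULL")
    # Batch mode lets SQLite apply the change via its "move and copy" strategy.
    with op.batch_alter_table(table_name) as batch_op:
        batch_op.alter_column(
            column_name,
            existing_type=sa.Boolean(),
            nullable=False,
            server_default=sa.false(),
        )


def downgrade():
    with op.batch_alter_table(table_name) as batch_op:
        batch_op.alter_column(
            column_name,
            existing_type=sa.Boolean(),
            nullable=True,
            server_default=None,
        )
```

The raw SQL string is just the shortest way to show the backfill intent; the same update could be written with `sa.update()` against a lightweight table object.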
Also, not a dealbreaker, but would it be possible to merge this after #15811? Otherwise, it would have to be edited after that PR is merged. With #15811, this revision script would look like this:

    with transaction():
        op.execute(...)
        alter_column(...)

In addition, this revision would be tested (that PR adds a test to the CI so that all revisions go through a downgrade > upgrade test sequence on both PostgreSQL and SQLite).
This seems like another reason to target dev instead of 23.0?
Yes, I will retarget dev and update the PR with John's suggestions as soon as #15811 is merged. I will keep it in draft for now and rename the title :)
(force-pushed from 0a4d214 to 56598e4)
Rebased and retargeted to `dev`.
@davelopez my suggestion is to do the same as we do with all other DDL statements: have a helper like this. That way they are all routed through the base
@davelopez I've added the utility in #16009. With that, instead of
This makes sure the `deleted` column in the `api_keys` table has a valid boolean value from now on and sets any existing NULL value as deleted (`True`).
(force-pushed from 56598e4 to 5066f23)
Test failures:
This makes sure the default value is set when inserting new rows with or without using SQLAlchemy.

Co-authored-by: Nicola Soranzo <[email protected]>
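For context, a small runnable sketch (assumed table layout, not the Galaxy schema) of what this buys: with `server_default` the DEFAULT lives in the table DDL, so rows inserted with raw SQL, bypassing SQLAlchemy's client-side `default=False`, still get a value.

```python
import sqlalchemy as sa

metadata = sa.MetaData()
api_keys = sa.Table(
    "api_keys",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("key", sa.String),
    sa.Column("deleted", sa.Boolean, default=False, server_default=sa.false(), nullable=False),
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    # An "external script"-style insert that never goes through SQLAlchemy's
    # client-side defaults and never mentions the deleted column.
    conn.execute(sa.text("INSERT INTO api_keys (key) VALUES ('abc123')"))
    print(conn.execute(sa.select(api_keys.c.key, api_keys.c.deleted)).all())
    # -> [('abc123', False)]  thanks to the database-side DEFAULT
```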
(force-pushed from 0028fb8 to 65984d7)
Thank you, @davelopez and everyone for such a thorough review!
This definitely needs to go into the release announcement, since it'll invalidate old API keys.
By default, the `deleted` column is nullable even if it has a default value of `False`. There is a chance that some external scripts that create API keys for users directly in the database might not set the `deleted` value explicitly, causing possible inconsistencies, with multiple API keys for the user not being explicitly `deleted=False`. This change prevents those cases by forcing a non-null value.
We could also try to handle possible NULL values in the API logic; I can change the approach if that is more desirable than altering the database schema at this point.
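A before/after sketch of the column definition the description refers to; the exact keyword arguments in the Galaxy model may differ, so treat this as illustrative:

```python
import sqlalchemy as sa

# Before: nullable by default; rows inserted with raw SQL can end up with deleted = NULL.
deleted = sa.Column("deleted", sa.Boolean, default=False)

# After: NULL is rejected outright, and a database-side DEFAULT covers inserts
# that bypass SQLAlchemy entirely.
deleted = sa.Column("deleted", sa.Boolean, default=False, server_default=sa.false(), nullable=False)
```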