feat: pgbelt now supports non-public schemas! (#398)
* feat: adding schema key to model

* feat: pglogical role permissions now cover non-public schemas (grant and revoke)

fix: oops wrong var name

* feat: dump commands now target named schema based on model

* doc: update requirements for owner user

* doc: update comment

* feat: support named schema in analyze_pkey command and all uses of that

* feat: saving in-progress work for precheck and integration testing

fix: comment update and update due to rebase

* refactor: precheck code into separate functions for easier readability

* feat: precheck updated to flag schema mismatching (for non-public schema support)

* feat: revamp precheck to check both SRC and DST DBs, and check for owner CREATE permissions in the target DB

fix: missed some stuff

* fix: status command needs to throw error when no source DBs tables are detected due to size detection code

* doc: update docs with schema feature (and remove config.md, very outdated)

* fix: ignore flake8 for preflight data structure comments

* fix: schema on src and dst must be the same according to pglogical plugin

* fix: schema word is reserved in Pydantic

fix: missed some spots

* fix: various fixes from integration testing

* doc: update line about schemas in quickstart reqs
vjeeva authored Feb 27, 2024
1 parent a61ee80 commit 1f1fa87
Showing 14 changed files with 559 additions and 217 deletions.
2 changes: 2 additions & 0 deletions .flake8
@@ -5,3 +5,5 @@ exclude = docs/* tests/* *__init__.py
# B008 is false flagging for asyncpg stuff that's valid.
# RST201 complaining about something not even true.
ignore = E501, W503, D, DAR, B008, RST201, S608
per-file-ignores =
pgbelt/cmd/preflight.py: RST203, RST301, RST401
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -82,7 +82,7 @@ services:
TEST_PG_DST_PORT: 5432
TEST_PG_DST_ROOT_USERNAME: postgres
TEST_PG_DST_ROOT_PASSWORD: postgres
command: bash -c "cd /pgbelt-volume/ && poetry run python3 tests/integration/conftest.py && pip3 install -e . && bash"
command: bash -c "cd /pgbelt-volume/ && poetry run python3 tests/integration/conftest.py --non-public-schema && pip3 install -e . && bash"
depends_on:
db-src:
condition: service_healthy
93 changes: 0 additions & 93 deletions docs/config.md

This file was deleted.

5 changes: 3 additions & 2 deletions docs/quickstart.md
@@ -80,6 +80,7 @@ Fill in config.json with the required info (marked in `<>`), referring to this example:
},
"tables": [],
"sequences": []
// Optional key: "schema_name": "<someschema>". If the key isn't specified, the default will be "public". Schema name must be the same in source and destination DBs.
}
```
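Putting the new optional key together with the keys shown above, a config.json fragment for a non-public schema could look like the sketch below. The schema name `myschema` is a placeholder, and the other required fields from the example above are elided:

```json
{
  "tables": [],
  "sequences": [],
  "schema_name": "myschema"
}
```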

@@ -103,8 +104,8 @@ Both your source and target database must satisfy the following requirements:

- Be running postgreSQL version 9.6 or greater.
- Each database must be accessible from the other on the network.
- All data to be migrated must be in the public schema.
- All data to be migrated must be owned by a single login user.
- All data to be migrated must be owned by a single login user, and that user must have CREATE permissions to create objects.
- All targeted data must live in the same schema in both the source and destination DBs.
- There must be a postgres superuser with a login in the database.
- Have the following parameters:
- `max_replication_slots` >= 2 (at least 2 for use by this tool, add more if other tools are using slots as well)
4 changes: 3 additions & 1 deletion pgbelt/cmd/convenience.py
@@ -62,7 +62,9 @@ async def _check_pkeys(
conf: DbupgradeConfig, logger: Logger
) -> tuple[list[str], list[str]]:
async with create_pool(conf.src.root_uri, min_size=1) as pool:
pkey_tables, no_pkey_tables, _ = await analyze_table_pkeys(pool, logger)
pkey_tables, no_pkey_tables, _ = await analyze_table_pkeys(
pool, conf.schema_name, logger
)
return pkey_tables, no_pkey_tables


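The change above threads `conf.schema_name` into `analyze_table_pkeys` instead of implicitly targeting `public`. A minimal sketch of the kind of schema-filtered primary-key lookup this enables is below; the query, function name, and signature are illustrative assumptions, not pgbelt's actual implementation:

```python
# Illustrative sketch only: a schema-aware primary-key lookup of the
# kind analyze_table_pkeys performs. Not pgbelt's actual code.
PKEY_QUERY = """
SELECT c.relname
FROM pg_index i
JOIN pg_class c ON c.oid = i.indrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE i.indisprimary
  AND n.nspname = $1  -- schema is a parameter rather than hardcoded 'public'
"""


async def tables_with_pkeys(pool, schema_name: str) -> list[str]:
    # pool is assumed to be an asyncpg.Pool; $1 is asyncpg-style binding.
    async with pool.acquire() as conn:
        rows = await conn.fetch(PKEY_QUERY, schema_name)
    return [row["relname"] for row in rows]
```

Passing the schema as a bind parameter keeps one code path for both public and non-public schemas, which is why the commit routes `conf.schema_name` through every caller of `analyze_table_pkeys`.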