Merge pull request #2827 from EnterpriseDB/docs/mtk/updating-code-blocks
MTK: code block changes
drothery-edb authored Jul 7, 2022
2 parents 5e8382c + 71c4498 commit 4711d29
Showing 2 changed files with 61 additions and 35 deletions.
90 changes: 58 additions & 32 deletions product_docs/docs/migration_toolkit/55/08_mtk_command_options.mdx
@@ -37,27 +37,39 @@ If you specify the `-offlineMigration` option in the command line, Migration Too

To perform an offline migration of both schema and data, specify the `‑offlineMigration` keyword, followed by the schema name:

```shell
$ ./runMTK.sh -offlineMigration <schema_name>
```

Each database object definition is saved in a separate file with a name derived from the schema name and object type in your home folder. To specify an alternative file destination, include a directory name after the `‑offlineMigration` option:

```shell
$ ./runMTK.sh -offlineMigration <file_dest> <schema_name>
```

To perform an offline migration of schema objects only (creating empty tables), specify the `‑schemaOnly` keyword in addition to the `‑offlineMigration` keyword when invoking Migration Toolkit:

```shell
$ ./runMTK.sh -offlineMigration -schemaOnly <schema_name>
```

To perform an offline migration of data only, omitting any schema object definitions, specify the `‑dataOnly` keyword and the `‑offlineMigration` keyword when invoking Migration Toolkit:

```shell
$ ./runMTK.sh -offlineMigration -dataOnly <schema_name>
```

By default, data is written in COPY format. To write the data in a plain SQL format, include the `‑safeMode` keyword:

```shell
$ ./runMTK.sh -offlineMigration -dataOnly -safeMode <schema_name>
```

By default, when you perform an offline migration that contains table data, a separate file is created for each table. To create a single file that contains the data from multiple tables, specify the `‑singleDataFile` keyword:

```shell
$ ./runMTK.sh -offlineMigration -dataOnly -singleDataFile -safeMode <schema_name>
```

!!! Note
The `-singleDataFile` option is available only when migrating data in a plain SQL format. You must include the `-safeMode` keyword if you include the `‑singleDataFile` option.
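Putting the options above together, a single-file, plain-SQL data export to a custom destination might look like the following sketch. The schema name `HR` and the directory `/tmp/mtk_out` are examples, not values from this document:

```shell
# Hypothetical invocation: write plain-SQL data for all tables in the HR
# schema to a single file under /tmp/mtk_out (both names are examples).
$ ./runMTK.sh -offlineMigration /tmp/mtk_out -dataOnly -singleDataFile -safeMode HR
```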
@@ -70,19 +82,27 @@ You can use the edb-psql or psql command line to execute the scripts generated d

1. Use the `createdb` command to create the acctg database, into which you'll restore the migrated database objects:

```shell
$ createdb -U enterprisedb acctg
```

2. Connect to the new database with edb-psql:

```shell
$ edb-psql -U enterprisedb acctg
```

3. Use the `\i` meta-command to invoke the migration script that creates the object definitions:

```sql
acctg=# \i ./mtk_hr_ddl.sql
```

4. If the `-offlineMigration` command included the `‑singleDataFile` keyword, the `mtk_hr_data.sql` script contains the commands required to populate all of the tables in the new target database. Populate the database with the command:

```sql
acctg=# \i ./mtk_hr_data.sql
```
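The four steps above can be sketched as one non-interactive sequence, using `psql`'s `-f` option in place of the `\i` meta-command. This assumes the scripts were generated in the current directory with the `mtk_hr_*.sql` names shown above:

```shell
# Hypothetical end-to-end restore of an offline migration.
$ createdb -U enterprisedb acctg
$ edb-psql -U enterprisedb -f ./mtk_hr_ddl.sql acctg
$ edb-psql -U enterprisedb -f ./mtk_hr_data.sql acctg
```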

<div id="import_options" class="registered_link"></div>

@@ -399,14 +419,14 @@ Table `tab3` is also loaded in two chunks, `tab3_chunk1` and `tab3_chunk2`. As t

Here is the list of tables and table chunks created:

```text
tab1_chunk1 (25,000 rows)
tab1_chunk2 (25,000 rows)
tab2 (9500 rows)
tab3_chunk1 (25,001 rows)
tab3_chunk2 (25,000 rows)
tab4 (9999 rows)
```

A thread pool with four threads (`T1`, `T2`, `T3`, `T4`) is created.

@@ -454,7 +474,7 @@ T3 → tab3 (50,001 rows)
T4 → tab4 (9999 rows)
```

### System resource recommendations and constraints

Choose the number of threads based on the CPU and RAM resources available on the host system running MTK. Because parallel migration imposes CPU, I/O, and disk overhead on the source database, increasing the parallel thread count beyond a certain threshold degrades the performance of MTK.
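As a sketch of how you might cap resource usage, assuming MTK's parallel-loading options `-loaderCount` (number of parallel threads) and `-parallelLoadRowLimit` (maximum rows per table chunk), a four-thread migration of an example `HR` schema could be invoked as:

```shell
# Hypothetical example: migrate HR with at most four parallel loader
# threads, splitting large tables into chunks of up to 25,000 rows.
$ ./runMTK.sh -loaderCount 4 -parallelLoadRowLimit 25000 HR
```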

@@ -520,7 +540,9 @@ The `dblink_ora` module provides EDB Postgres Advanced Server-to-Oracle connecti

This example uses the `dblink_ora` `COPY API` to migrate all tables from the `HR` schema:

```shell
$ ./runMTK.sh -copyViaDBLinkOra -allTables HR
```

The target EDB Postgres Advanced Server database must have `dblink_ora` installed and configured. For information about `dblink_ora`, see [Database Compatibility for Oracle Developers](../../epas/11/epas_compat_ora_dev_guide/06_dblink_ora).
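Before running the migration, you can sanity-check that the `dblink_ora` functions are present on the target server by querying the system catalog. This is a sketch; the database name `edb` and user `enterprisedb` are example values:

```shell
# Hypothetical check: list dblink_ora functions installed on the target
# (database and role names are examples).
$ edb-psql -U enterprisedb -d edb -c "SELECT proname FROM pg_proc WHERE proname LIKE 'dblink_ora%'"
```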

@@ -530,13 +552,17 @@ Choose this option to migrate Oracle database links. The password information fo

To migrate all database links using edb as the dummy password for the connected user:

```shell
$ ./runMTK.sh -allDBLinks HR
```

You can alternatively specify the password for each of the database links through a comma-separated list of name=value pairs with no intervening space characters. Specify the link name on the left side of the pair and the password value on the right side.

To migrate all database links with the actual passwords specified on the command line:

```shell
$ ./runMTK.sh -allDBLinks LINK_NAME1=abc,LINK_NAME2=xyz HR
```

Migration Toolkit migrates only the database link types that are currently supported by EnterpriseDB. This includes fixed user links of public and private type.

@@ -570,7 +596,7 @@ The uppercase naming convention is preserved for tables, views, sequences, proce

The default behavior of the Migration Toolkit without the `-useOraCase` option is that database object names are extracted from Oracle without enclosing quotation marks unless the database object was explicitly created in Oracle with enclosing quotation marks. The following is a portion of a table command generated by the Migration Toolkit with the `-offlineMigration` option:

```sql
CREATE TABLE DEPT (
DEPTNO NUMBER(2) NOT NULL,
DNAME VARCHAR2(14),
@@ -582,7 +608,7 @@ ALTER TABLE DEPT ADD CONSTRAINT DEPT_DNAME_UQ UNIQUE (DNAME);

When you then migrate this table and create it in EDB Postgres Advanced Server, all unquoted object names are converted to lowercase letters, so the table appears in EDB Postgres Advanced Server as follows:

```sql
Table "edb.dept"
Column | Type | Modifiers
--------+-----------------------+-----------
@@ -596,15 +622,15 @@ Indexes:

If your EDB Postgres Advanced Server applications reference the migrated database objects using quoted uppercase identifiers, the applications fail because the database object names are now lowercase.

```sql
usepostcase=# SELECT * FROM "DEPT";
ERROR: relation "DEPT" does not exist
LINE 1: SELECT * FROM "DEPT";
```

If your application uses quoted uppercase identifiers, perform the migration with the `-useOraCase` option. The DDL encloses all database object names in quotes:

```sql
CREATE TABLE "DEPT" (
"DEPTNO" NUMBER(2) NOT NULL,
"DNAME" VARCHAR2(14),
@@ -616,7 +642,7 @@ ALTER TABLE "DEPT" ADD CONSTRAINT "DEPT_DNAME_UQ" UNIQUE ("DNAME");

When you then migrate this table and create it in EDB Postgres Advanced Server, all object names are maintained in uppercase letters, so the table appears in EDB Postgres Advanced Server as follows:

```sql
Table "EDB.DEPT"
Column | Type | Modifiers
--------+-----------------------+-----------
@@ -630,7 +656,7 @@ Indexes:

Applications can then access the object using quoted uppercase names.

```sql
useoracase=# SELECT * FROM "DEPT";
DEPTNO | DNAME | LOC
--------+------------+----------
@@ -701,7 +727,7 @@ This example performs an Oracle to EDB Postgres Advanced Server migration.

The following is the content of the `toolkit.properties` file.

```ini
SRC_DB_URL=jdbc:oracle:thin:@192.168.2.6:1521:xe
SRC_DB_USER=edb
SRC_DB_PASSWORD=password
@@ -713,9 +739,9 @@ TARGET_DB_PASSWORD=password

The following command invokes Migration Toolkit:

```shell
$ ./runMTK.sh EDB
__OUTPUT__
Running EnterpriseDB Migration Toolkit (Build 48.0.0) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.2.6:1521:xe
@@ -51,7 +51,7 @@ EDB Postgres Advanced Server doesn't currently support the enum data type but wi

The following code sample includes a simple example of a check constraint that restricts the value of a column to one of three dept types: `sales`, `admin`, or `technical`.

```sql
CREATE TABLE emp (
emp_id INT NOT NULL PRIMARY KEY,
dept VARCHAR(255) NOT NULL,
@@ -62,15 +62,15 @@ CREATE TABLE emp (

If you test the check constraint by entering a valid dept type, the `INSERT` statement executes without error:

```sql
test=# INSERT INTO emp VALUES (7324, 'sales');

INSERT 0 1
```

If you try to insert a value not included in the constraint (`support`), EDB Postgres Advanced Server returns an error:

```sql
test=# INSERT INTO emp VALUES (7325, 'support');

ERROR: new row for relation "emp" violates check constraint "emp_dept_check"
