Addressed Dee Dee's comments
moiznalwalla committed Feb 2, 2022
1 parent 5835d0b commit 9a71cc9
Showing 3 changed files with 18 additions and 18 deletions.
@@ -14,5 +14,5 @@ New features, enhancements, bug fixes, and other changes in Migration Portal 3.5
| Enhancement | Migration Portal now uses the metadata information in a SQL dump file to assess the extracted schemas. You no longer have to download the DDL extractor script to extract DDLs for your schemas. |
| Feature| Updated repair handler to remove OVERFLOW syntax from CREATE TABLE syntax to make it compatible with EDB Postgres Advanced Server syntax. (ERH-2011: ORGANIZATION_INDEX_COMPRESS).|
| Feature| Added a repair handler to remove the NAME clause and associated label name from SET TRANSACTION statements. (ERH-3002: SET_TRANSACTION).|
- | Enhancement | Improved the User Interface for a better user experience. |
+ | Enhancement | Improved the user interface for a better user experience. |

@@ -13,7 +13,7 @@ You can use the Oracle’s Data Pump utility to extract metadata from your Oracl


!!! Note
- Migration Portal might become unresponsive for very large SQL files, depending on your system’s and browser’s resource availability. Migration Portal supports SQL files upto 1 GB. If your SQL file size exceeds 1 GB, extract fewer schemas at a time to reduce the SQL file size.
+ Migration Portal might become unresponsive for very large SQL files, depending on your system’s and browser’s resource availability. Migration Portal supports SQL files up to 1 GB. If your SQL file size exceeds 1 GB, extract fewer schemas at a time to reduce the SQL file size.

## Extracting schemas to an SQL dump file

@@ -22,7 +22,7 @@ Note that Migration Portal only requires the metadata in the SQL dump file to as

#### Prerequisites

- * If you plan on exporting schemas that are not your own, ensure that you are assigned the *DATAPUMP_IMP_FULL_DATABASE* role. Otherwise, you can only export your own schema.
+ * If you plan on exporting schemas that are not your own, ensure that you are assigned the `DATAPUMP_IMP_FULL_DATABASE` role. Otherwise, you can only export your own schema.

* Ensure that you have sufficient tablespace quota to create objects in the tablespace.

@@ -48,7 +48,7 @@ Perform either of the following procedures:

`SQL> grant read,write on directory DMPDIR to <Username>;`

- 3. Before running the `expdp` command, create a file with a `.par` extension (for example: parameters.par) on your server. Add attributes and values to the file as shown below:
+ 3. Before running the `expdp` command, create a file with a `.par` extension (for example, parameters.par) on your server. Add attributes and values to the file, as follows:

```
$ cat parameters.par
@@ -63,15 +63,15 @@ The attributes and values in the above command specify the following options:
+ `INCLUDE=` specifies which database object types to include in the extraction. Append a comma-separated list of the [Supported object types](#supported-object-types) to only extract database object types that are supported by Migration Portal.
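The full `parameters.par` listing is collapsed in this diff, so as a hedged sketch only: a metadata-only parameter file along these lines could be written from the shell. The `CONTENT=METADATA_ONLY` value and the object-type list are illustrative assumptions, not values taken from the collapsed file; trim the `INCLUDE` list to the object types Migration Portal supports.

```shell
# Hypothetical sketch: write a Data Pump parameter file for a
# metadata-only export. CONTENT and the INCLUDE list are
# illustrative assumptions, not copied from the collapsed diff.
cat > parameters.par <<'EOF'
CONTENT=METADATA_ONLY
INCLUDE=TABLE,VIEW,SEQUENCE,PROCEDURE,FUNCTION,PACKAGE,TRIGGER,SYNONYM,INDEX
EOF
cat parameters.par
```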


- 4. From the command line, run the export command to generate a **db.dump** file. For example, to extract metadata information for `<Schema_1>`, `<Schema_2>`, `<Schema_3>`, and so on, run the following command:
+ 4. From the command line, run the export command to generate a **db.dump** file. For example, to extract metadata information for `<Schema_1>`, `<Schema_2>`, `<Schema_3>`, and so on, run:

```
$ expdp <Username>@<ConnectIdentifier> DIRECTORY=DMPDIR SCHEMAS=<Schema_1>,<Schema_2>,<Schema_3> DUMPFILE=db.dump parfile=parameters.par
```

5. To generate a SQL file from the dump file, run the import command.

- For example, to generate **YourSchemas.SQL** file from the db.dump file, enter the following command:
+ For example, to generate **YourSchemas.SQL** file from the db.dump file, enter:

```
$ impdp <Username>@<ConnectIdentifier> DIRECTORY=<directory_name> TRANSFORM=OID:n SQLFILE=YourSchemas.sql DUMPFILE=db.dump
@@ -80,7 +80,7 @@ The attributes and values in the above command specify the following options:
### To extract all schemas in a database

!!! Note
- Do not perform the following procedure from a user account that belongs to the excluded schemas list, see [Unsupported schemas](#unsupported-schemas). The impdp command will fail if the user account running the command is in the excluded list of schemas.
+ Do not perform the following procedure from a user account that belongs to the excluded schemas list, see [Unsupported schemas](#unsupported-schemas). The impdp command fails if the user account running the command is in the excluded list of schemas.


1. In SQL*Plus, create a directory object that points to a directory on your server file system. For example:
@@ -97,7 +97,7 @@ The attributes and values in the above command specify the following options:

`SQL> grant read,write on directory DMPDIR to <Username>;`

- 3. Before running the expdp command, create a file with a `.par` extension (for example: parameters1.par) on your server. Add attributes and values to the file as shown below:
+ 3. Before running the expdp command, create a file with a `.par` extension (for example, parameters1.par) on your server. Add attributes and values to the file, as follows:

```
$ cat parameters1.par
@@ -121,7 +121,7 @@ The attributes and values in the above command specify the following options:
$ expdp <Username>@<ConnectIdentifier> DIRECTORY=DMPDIR DUMPFILE=db.dump parfile=parameters1.par
```

- 5. Before running the impdp command, create a parameter file with a `.par` extension (for example: parameters2.par) on your server. Add attributes and values to the file as shown below:
+ 5. Before running the impdp command, create a parameter file with a `.par` extension (for example, parameters2.par) on your server. Add attributes and values to the file, as follows:

```
$ cat parameters2.par
@@ -139,14 +139,14 @@ The attributes and values in the above command specify the following options:

6. To generate a SQL file from the dump file, run the import command.

- For example, to generate **YourSchemas.SQL** file from the db.dump file, enter the following command:
+ For example, to generate **YourSchemas.SQL** file from the db.dump file, enter:

```
$ impdp <Username>@<ConnectIdentifier> DIRECTORY=<directory_name> TRANSFORM=OID:n SQLFILE=YourSchemas.sql DUMPFILE=db.dump
```
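The export and import steps above can be sketched as a small dry-run wrapper. Since `expdp` and `impdp` require an Oracle client and a running database, this sketch echoes and logs each command instead of executing it; the `scott@ORCLPDB1` connect string is a placeholder for `<Username>@<ConnectIdentifier>`, not a value from the original document.

```shell
# Hypothetical dry-run of the Data Pump export/import sequence above.
# Each command is echoed and logged rather than executed, since
# expdp/impdp require an Oracle client; CONN is a placeholder.
CONN='scott@ORCLPDB1'   # stands in for <Username>@<ConnectIdentifier>
run() { echo "would run: $*"; echo "$*" >> datapump_cmds.log; }

run expdp "$CONN" DIRECTORY=DMPDIR DUMPFILE=db.dump parfile=parameters1.par
run impdp "$CONN" DIRECTORY=DMPDIR TRANSFORM=OID:n SQLFILE=YourSchemas.sql DUMPFILE=db.dump
```

To execute for real, change the `run` function body to `"$@"` so the logged commands are actually invoked.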


- ## Supported schemas and objects
+ ## Schemas and objects support

See the following sections for supported and unsupported schemas and objects.

@@ -40,7 +40,7 @@ You can migrate schemas to an existing on-premises EDB Postgres Advanced Server
4. Select **Windows**.


- 5. To import the schemas, run the following command:
+ 5. To import the schemas, run:

- On CLI:

Expand All @@ -64,7 +64,7 @@ The converted schemas migrate to the target server.

You can migrate schemas to an existing on-premises EDB Postgres Advanced Server on Linux.

- 1. Select `Existing on-premises EDB Postgres Advanced Server`.
+ 1. Select **Existing on-premises EDB Postgres Advanced Server**.


2. Select one or more schemas to migrate to EDB Postgres Advanced Server.
@@ -119,7 +119,7 @@ Migrate schemas to a new on-premises EDB Postgres Advanced Server on Windows.
6. To download the assessed schemas, select **Download SQL file**.


- 7. You can import schemas by running the following command:
+ 7. You can import schemas by running:

- On CLI

@@ -146,21 +146,21 @@ Migrate schemas to an on-premises EDB Postgres Advanced Server on Linux.

2. Select one or more schemas to migrate on EDB Postgres Advanced Server.

- 3. For the operating system, select the Linux.
+ 3. For the operating system, select **Linux**.


4. You can select one of the following options to install the EDB Postgres Advanced Server:

- Repository
- More options

- 5. For information on the installation procedure, select `EDB Postgres Advanced Server Installation Guide` for Linux.
+ 5. For information on the installation procedure, select **EDB Postgres Advanced Server Installation Guide** for Linux.


6. To download the assessed schemas, select **Download SQL file**.


- 7. To import the schemas, run the following command:
+ 7. To import the schemas, run:

```text
sudo su - enterprisedb
@@ -190,7 +190,7 @@ Migrate schemas on EDB Postgres Advanced Server to the cloud.

4. To launch a new cluster, select **Go to BigAnimal**.

- Or, if you have an existing cluster running, select `Next`.
+ Or, if you have an existing cluster running, select **Next**.

!!! Note
See the [Big Animal](https://www.enterprisedb.com/edb-cloud) page for more information.
