From e960e75cc31adfc3ffb92dedf60b4fe0f5b5fb2a Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 6 Sep 2024 20:01:27 +0530 Subject: [PATCH 01/67] JDBC - Advance Queuing update Addresses [EC-3136](https://enterprisedb.atlassian.net/browse/EC-3136) --- .../42.5.4.2/05a_using_advanced_queueing.mdx | 24 +++++++++++++++---- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx index cdd831b1873..3474640ff46 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx @@ -2,7 +2,7 @@ title: "Using advanced queueing" --- -!!! Tip "New Feature " +!!! Tip "New Feature" Advanced queueing is available in JDBC 42.3.2.1 and later. EDB Postgres Advanced Server advanced queueing provides message queueing and message processing for the EDB Postgres Advanced Server database. User-defined messages are stored in a queue, and a collection of queues is stored in a queue table. You must first create a queue table before creating a queue that depends on it. @@ -17,7 +17,13 @@ For more information about using EDB Postgres Advanced Server's advanced queuein ## Server-side setup -To use advanced queueing functionality on your JMS-based Java application, first create a user-defined type, queue table, and queue. Then start the queue on the database server. You can use either EDB-PSQL or EDB-JDBC JMS API in the Java application. +To use advanced queueing functionality on your JMS-based Java application perform following steps in EDB-PSQL or EDB-JDBC: + +1. Create a user-defined message type, which may be one of the standard JMS message types. However, EDB JDBC also supports any user-defined types. These types will be covered in detail in the upcoming sections. +2. Create a queue table specifying the payload type. This type will typically be the one created in step 1. +3. Create a queue using the queue table created in the previous step. +4. Start the queue on the database server. +5. You have the option to use either EDB-PSQL or EDB-JDBC JMS API in your Java application. ### Using EDB-PSQL @@ -28,7 +34,7 @@ Invoke EDB-PSQL and connect to the EDB Postgres Advanced Server host database. U To specify a RAW data type, create a user-defined type. This example creates a user-defined type named as `mytype`. ```sql -CREATE TYPE mytype AS (code int, project TEXT); +CREATE OR REPLACE TYPE mytype AS (code INT, project TEXT, manager VARCHAR(10)); ``` **Create the queue table** @@ -48,7 +54,10 @@ END; This example creates a queue named `MSG_QUEUE` in the table `MSG_QUEUE_TABLE`. ```sql -EXEC DBMS_AQADM.CREATE_QUEUE ( queue_name => 'MSG_QUEUE', queue_table => 'MSG_QUEUE_TABLE', comment => 'This queue contains pending messages.'); +EXEC DBMS_AQADM.CREATE_QUEUE + (queue_name => 'MSG_QUEUE', + queue_table => 'MSG_QUEUE_TABLE', + comment => 'This queue contains pending messages.'); ``` **Start the queue** @@ -59,8 +68,13 @@ Once the queue is created, invoke the following SPL code at the command line to EXEC DBMS_AQADM.START_QUEUE(queue_name => 'MSG_QUEUE'); commit; ``` + ### Using EDB-JDBC JMS API +!!!note "Tip" +The following sequence of steps is required only if you want to create message types, queue table and queue programmatically. 
If the message types, queue table, and queue are created using EDB-PSQL then you can use the standard JMS API. +!!! + The following JMS API calls perform the same steps performed using EDB-PSQL to: - Connect to the EDB Postgres Advanced Server database - Create the user-defined type @@ -74,7 +88,7 @@ conn = (EDBJmsQueueConnection) edbJmsFact.createQueueConnection(); session = (EDBJmsQueueSession) conn.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE); -String sql = "CREATE TYPE mytype AS (code int, project TEXT);"; +String sql = "CREATE OR REPLACE TYPE mytype AS (code int, project TEXT);"; UDTType udtType = new UDTType(conn.getConn(), sql, "mytype"); Operation operation = new UDTTypeOperation(udtType); operation.execute(); From 502ec6efa633522212791d058700e72a2fa65132 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 4 Sep 2024 14:54:39 +0200 Subject: [PATCH 02/67] Added note on non-ascii characters not being supported --- .../postgresql/installing/windows.mdx | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx index 9f53a855a7e..68b951b0835 100644 --- a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx +++ b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx @@ -27,10 +27,12 @@ Rather than use the EDB installer, you can also obtain a prebuilt installation p ## Installing PostgreSQL -To perform an installation using the graphical installation wizard, you need superuser or administrator privileges. +To perform an installation using the graphical installation wizard, you need superuser or administrator privileges. -!!! Note - If you're using the graphical installation wizard to perform a system upgrade, the installer preserves the configuration options specified during the previous installation. +!!!note Notes + - Multi-byte, non-ascii characters are not supported in user or machine names. + - If you're using the graphical installation wizard to perform a system ***upgrade***, the installer preserves the configuration options specified during the previous installation. +!!! 1. To start the installation wizard, assume sufficient privileges, and double-click the installer icon. If prompted, provide a password. (In some versions of Windows, to invoke the installer with administrator privileges, you must select **Run as Administrator** from the installer icon's context menu.) From 42fbf930a981b010e904ba59f1404ea4a245000a Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 9 Sep 2024 10:50:34 +0200 Subject: [PATCH 03/67] changed wording to reflect that EDB doesn't support all non-ascii characters --- .../supported-open-source/postgresql/installing/windows.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx index 68b951b0835..b02c54af85f 100644 --- a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx +++ b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx @@ -30,7 +30,7 @@ Rather than use the EDB installer, you can also obtain a prebuilt installation p To perform an installation using the graphical installation wizard, you need superuser or administrator privileges. !!!note Notes - - Multi-byte, non-ascii characters are not supported in user or machine names. 
+ - EDB doesn't support all non-ASCII, multi-byte characters in user or machine names. Use ASCII characters only to avoid installation errors related to unsupported path names. - If you're using the graphical installation wizard to perform a system ***upgrade***, the installer preserves the configuration options specified during the previous installation. !!! From 2559801c578bd0312cccad841052b0ce52b8ae91 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 9 Sep 2024 16:33:28 +0530 Subject: [PATCH 04/67] Added more content --- .../42.5.4.2/05a_using_advanced_queueing.mdx | 269 +++++++++++++++++- 1 file changed, 268 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx index 3474640ff46..d1c14d9c5ed 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx @@ -105,7 +105,274 @@ queue.setEdbQueueTbl(queueTable); queue.start(); ``` -## Client-side example +## Setting up JMS application + +After creating the queue table and queue for the message types and starting the queue, you can follow these steps to set up your JMS Application: + +1. Create a [Connection Factory](#connection-factory). +1. Create a [Connection](#connection) using the connection factory. +1. Create a [Session](#session) using the connection. +1. Get the Queue from the session. +1. Create a message producer using the session and queue. + 1. Send messages. +1. Create a message consumer using the session and queue. + 1. Receive messages. + + +### Connection factory + +The Connection Factory is used to create connections. EDBJmsConnectionFactory is an implementation of ConnectionFactory and QueueConnectionFactory, used to create Connection and QueueConnection. A connection factory can be created using one of the constructors of the EDBJmsConnectionFactory class. All three constructors can be used to create either a ConnectionFactory or QueueConnectionFactory. + +```java +//Constructor with connection related properties. +public EDBJmsConnectionFactory(String host, int port, String database, + String username, String password); +//Constructor with connection string, user name and password. +public EDBJmsConnectionFactory(String connectionString, + String username, String password); +//Constructor with SQL Connection. +public EDBJmsConnectionFactory(java.sql.Connection connection); +``` + +This example shows how to create a ConnectionFactory using an existing `java.sql.Connection`: + +```java +javax.jms.ConnectionFactory connFactory = new EDBJmsConnectionFactory(connection); +``` + +This example shows how to create a QueueConnectionFactory using a connection string, username, and password: + +```java +javax.jms.QueueConnectionFactory connFactory = new EDBJmsConnectionFactory + ("jdbc:edb//localhost:5444/edb", "enterprisedb", "edb"); +``` + +### Connection + +A Connection is a client's active connection that can be created from the ConnectionFactory and used to create sessions. EDBJmsConnection is an implementation of Connection, while EDBJmsQueueConnection is an implementation of QueueConnection and extends EDBJmsConnection. A Connection can be created using ConnectionFactory, while QueueConnection can be created from QueueConnectionFactory. 
+ +This example shows how to create a Connection and a QueueConnection: + +```java +//Connection from ConnectionFactory. Assuming connFactory is ConnectionFactory. +javax.jms.Connection connection = connFactory.createConnection(); + +////Connection from QueueConnectionFactory. Assuming connFactory is QueueConnectionFactory. +javax.jms.QueueConnection queueConnection = connFactory.createQueueConnection(); +``` + +A connection must be started in order for the consumer to receive messages. On the other hand, a producer can send messages without starting the connection. To start a connection, use the following code: + +```java +queueConnection.start(); +``` + +A connection can be stopped at any time to cease receiving messages, and can be restarted when needed. However, a closed connection cannot be restarted. + +To stop and close the connection, use the following code: + +```java +queueConnection.stop(); +queueConnection.close(); +``` + +### Session + +The Session in EDBJms is used for creating producers and consumers, and for sending and receiving messages. EDBJmsSession implements the basic Session functionality, while EDBJmsQueueSession extends EDBJmsSession and implements QueueSession. A session can be created from a Connection. + +This example shows how to create a Session and a QueueSession: + +```java +// Session +javax.jms.Session session = connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE); +// QueueSession +javax.jms.QueueSession session = queueConnection.createQueueSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE); +``` + +A Session or QueueSession is also used to create queues. It's important to note that in this context, "creating a queue" does not refer to physically creating the queue. As discussed earlier, the queue needs to be created and started as part of the server-side setup. In this context, creating a queue means getting the queue, related queue table, and payload type that have already been created. + +This example shows how to create a queue: + +```java +javax.jms.Queue queue = session.createQueue("MSG_QUEUE"); +``` + +### Message producer + +A message producer is responsible for creating and sending messages. It is created using a session and queue. EDBJmsMessageProducer is an implementation of MessageProducer, but in most cases, you will be using the standard MessageProducer. + +This example shows how to create a message producer, create a message, and send it. Creating messages of different types will be discussed in the following sections. + +```java +javax.jms.MessageProducer messageProducer = session.createProducer(queue); + +javax.jms.Message msg = session.createMessage(); +msg.setStringProperty("myprop1", "test value 1"); + +messageProducer.send(msg); +``` + +### Message consumer + +A Message consumer is used to receive messages. It is created using a session and a queue. EDBJmsMessageConsumer is an implementation of MessageConsumer, but you will most often use the standard MessageConsumer. 
+ +This example shows how to create a message consumer and receive a message: + +```java +javax.jms.MessageConsumer messageConsumer = session.createConsumer(queue); + +javax.jms.Message message = messageConsumer.receive(); +``` + +### Message acknowledgement + +Acknowledgement of messages is controlled by the two arguments to the createSession() and createQueueSession() methods: + +```java +EDBJmsConnection.createSession(boolean transacted, int acknowledgeMode) + +EDBJmsQueueConnection.createQueueSession(boolean transacted, int acknowledgeMode) +``` + +If the first argument is true, it indicates that the session mode is transacted, and the second argument is ignored. However, if the first argument is false, then the second argument comes into play, and the client can specify different acknowledgment modes. These acknowledgment modes include, +- Session.AUTO_ACKNOWLEDGE +- Session.CLIENT_ACKNOWLEDGE +- Session.DUPS_OK_ACKNOWLEDGE + +The following sections describe different modes of acknowledgement: + +### Transacted session + +In transacted sessions, messages are both sent and received during a transaction. These messages are acknowledged by making an explicit call to commit(). If rollback() is called, all received messages will be marked as not acknowledged. + +A transacted session always has an active transaction. When a client calls the commit() or rollback() method, the current transaction is either committed or rolled back, and a new transaction is started. + +This example explains how the transacted session works: + +```java + MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); + + //Send a message in transacted session and commit it. + + //Send message + TextMessage msg1 = session.createTextMessage(); + String messageText1 = "Hello 1"; + msg1.setText(messageText1); + messageProducer.send(msg1); + + //Commit the transaction. + session.commit(); + + //Now we have one message in the queue. + + //Next, we want to send and receive in the same transaction. + + MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); + + //Send a Message in transaction. + TextMessage msg2 = session.createTextMessage(); + String messageText2 = "Hello 2"; + msg2.setText(messageText2); + messageProducer.send(msg2); + + //Receive message in the same transaction. There should be 1 message available. + Message message1 = messageConsumer.receive(); + TextMessage txtMsg1 = (TextMessage) message1; + + //Send another Message in transaction. + TextMessage msg3 = session.createTextMessage(); + String messageText3 = "Hello 3"; + msg3.setText(messageText3); + messageProducer.send(msg3); + + //Commit the transaction. + //This should remove the one message we sent initially and received above and send 2 messages. + session.commit(); + + //2 messages are in the queue so we can receive these 2 messages. + + //Receive 1 + Message message2 = messageConsumer.receive(); + TextMessage txtMsg2 = (TextMessage) message2; + + //Receive 2 + Message message3 = messageConsumer.receive(); + TextMessage txtMsg3 = (TextMessage) message3; + + //Commit the transaction. This will consume the two messages. + session.commit(); + + //Receive should fail now as there should be no messages available. + Message message4 = messageConsumer.receive(); + //message4 will be null here. +``` + +#### AUTO_ACKNOWLEDGE mode + +If the first argument to createSession() or createQueueSession() is false and the second argument is Session.AUTO_ACKNOWLEDGE, the messages are automatically acknowledged. 
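
As a minimal sketch, assuming a started queue connection and a queue named `MSG_QUEUE` whose payload type supports JMS text messages, an auto-acknowledge session needs no explicit commit() or acknowledge() call:

```java
// Non-transacted session with automatic acknowledgement.
javax.jms.Session autoAckSession = queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
javax.jms.Queue queue = autoAckSession.createQueue("MSG_QUEUE");

// Send a text message.
javax.jms.MessageProducer producer = autoAckSession.createProducer(queue);
javax.jms.TextMessage msg = autoAckSession.createTextMessage();
msg.setText("Hello auto-ack");
producer.send(msg);

// The message is acknowledged automatically when it's received.
javax.jms.MessageConsumer consumer = autoAckSession.createConsumer(queue);
javax.jms.TextMessage received = (javax.jms.TextMessage) consumer.receive();
System.out.println(received.getText());
```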
+ +#### DUPS_OK_ACKNOWLEDGE mode + +This mode instructs the session to lazily acknowledge the message, and it is okay if some messages are redelivered. However, in EDB JMS, this option is implemented the same way as Session.AUTO_ACKNOWLEDGE, where messages will be acknowledged automatically. + +#### CLIENT_ACKNOWLEDGE mode + +If the first argument to createSession() or createQueueSession() is false and the second argument is Session.CLIENT_ACKNOWLEDGE, the messages are acknowledged when the client acknowledges the message by calling the acknowledge() method on a message. Acknowledging happens at the session level, and acknowledging one message will cause all the received messages to be acknowledged. + +For example, if we send 5 messages and then receive the 5 messages, acknowledging the 5th message will cause all 5 messages to be acknowledged. + +```java + MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); + + //Send 5 messages + for(int i=1; i<=5; i++) { + TextMessage msg = session.createTextMessage(); + String messageText = "Hello " + i; + msg.setText(messageText); + messageProducer.send(msg); + } + + MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); + + //Receive 4 + for(int i=1; i<=4; i++) { + Message message = messageConsumer.receive(); + TextMessage txtMsg = (TextMessage) message; + } + + //Receive the 5th message + Message message5 = messageConsumer.receive(); + TextMessage txtMsg5 = (TextMessage) message5; + + //Now acknowledge it and all the messages will be acknowledged. + txtMsg5.acknowledge(); + + //Try to receive again. This should return null as there is no message available. + Message messageAgain = messageConsumer.receive(); +``` + +### Message types + +EDB-JDBC JMS API supports the following message types and can be used in a standard way: + +| Message type | JMS type | +|------------------------|-------------------------| +| aq$_jms_message | javax.jms.Message | +| aq$_jms_text_message | javax.jms.TextMessage | +| aq$_jms_bytes_message | javax.jms.BytesMessage | +| aq$_jms_object_message | javax.jms.ObjectMessage | + +#### Message properties + +#### TextMessage + +#### BytesMessage + +#### ObjectMessage + +### Message + +#### Non-standard message After you create a user-defined type followed by queue table and queue, start the queue. Then, you can enqueue or dequeue a message using EDB-JDBC driver's JMS API. 
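
For illustration only, a minimal enqueue and dequeue sketch using the JMS API, assuming the session from the previous sections and a queue whose payload type matches the message being sent, might look like this:

```java
// Assumes `session` is an open EDB JMS session and MSG_QUEUE was created and started on the server.
javax.jms.Queue queue = session.createQueue("MSG_QUEUE");

// Enqueue a message.
javax.jms.MessageProducer producer = session.createProducer(queue);
javax.jms.Message msg = session.createMessage();
msg.setStringProperty("myprop1", "test value 1");
producer.send(msg);

// Dequeue the message.
javax.jms.MessageConsumer consumer = session.createConsumer(queue);
javax.jms.Message received = consumer.receive();
System.out.println(received.getStringProperty("myprop1"));
```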
From 61c6f25822f930f433b63c52a72d84e395e3b5e8 Mon Sep 17 00:00:00 2001 From: Simon Notley <43099400+sonotley@users.noreply.github.com> Date: Mon, 9 Sep 2024 15:42:59 +0100 Subject: [PATCH 05/67] TPA 23.34.1 rel notes --- product_docs/docs/tpa/23/rel_notes/index.mdx | 2 ++ .../docs/tpa/23/rel_notes/tpa_23.34.1_rel_notes.mdx | 12 ++++++++++++ 2 files changed, 14 insertions(+) create mode 100644 product_docs/docs/tpa/23/rel_notes/tpa_23.34.1_rel_notes.mdx diff --git a/product_docs/docs/tpa/23/rel_notes/index.mdx b/product_docs/docs/tpa/23/rel_notes/index.mdx index 086be0dc0a5..70411fab830 100644 --- a/product_docs/docs/tpa/23/rel_notes/index.mdx +++ b/product_docs/docs/tpa/23/rel_notes/index.mdx @@ -2,6 +2,7 @@ title: Trusted Postgres Architect release notes navTitle: "Release notes" navigation: + - tpa_23.34.1_rel_notes - tpa_23.34_rel_notes - tpa_23.33_rel_notes - tpa_23.32_rel_notes @@ -32,6 +33,7 @@ The Trusted Postgres Architect documentation describes the latest version of Tru | Version | Release date | | ---------------------------- | ------------ | +| [23.35](tpa_23.34.1_rel_notes) | 09 Sep 2024 | | [23.34](tpa_23.34_rel_notes) | 22 Aug 2024 | | [23.33](tpa_23.33_rel_notes) | 24 Jun 2024 | | [23.32](tpa_23.32_rel_notes) | 15 May 2024 | diff --git a/product_docs/docs/tpa/23/rel_notes/tpa_23.34.1_rel_notes.mdx b/product_docs/docs/tpa/23/rel_notes/tpa_23.34.1_rel_notes.mdx new file mode 100644 index 00000000000..0d8e21cbce7 --- /dev/null +++ b/product_docs/docs/tpa/23/rel_notes/tpa_23.34.1_rel_notes.mdx @@ -0,0 +1,12 @@ +--- +title: Trusted Postgres Architect 23.34.1 release notes +navTitle: "Version 23.34.1" +--- + +Released: 9 September 2024 + +Trusted Postgres Architect 23.34.1 is a bug fix release which resolves the following issues: + +| Type | Description | +|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Bug Fix | Fixed an issue whereby running deploy after a switchover fails for nodes with `efm-witness` role. The `upstream-primary` for EFM nodes is determined using the facts gathered from Postgres. This previously failed for nodes with `efm-witness` roles since they do not have Postgres. The task to determine upstream-primary is now run only on nodes with `primary` or `replica` roles. 
| \ No newline at end of file From 2e885cda7c270c359b5ada61a9bb4572a9bcc9bd Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 9 Sep 2024 20:10:26 +0100 Subject: [PATCH 06/67] TPA patch for 23.24.1 --- product_docs/docs/tpa/23/rel_notes/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/tpa/23/rel_notes/index.mdx b/product_docs/docs/tpa/23/rel_notes/index.mdx index 70411fab830..9f2811615ed 100644 --- a/product_docs/docs/tpa/23/rel_notes/index.mdx +++ b/product_docs/docs/tpa/23/rel_notes/index.mdx @@ -33,7 +33,7 @@ The Trusted Postgres Architect documentation describes the latest version of Tru | Version | Release date | | ---------------------------- | ------------ | -| [23.35](tpa_23.34.1_rel_notes) | 09 Sep 2024 | +| [23.34.1](tpa_23.34.1_rel_notes) | 09 Sep 2024 | | [23.34](tpa_23.34_rel_notes) | 22 Aug 2024 | | [23.33](tpa_23.33_rel_notes) | 24 Jun 2024 | | [23.32](tpa_23.32_rel_notes) | 15 May 2024 | From 76dddcae9b740a306cb5917c02fe66bccae1b8e1 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Fri, 6 Sep 2024 14:59:57 +0200 Subject: [PATCH 07/67] TDE: specify AES implemmentation is done with OpenSSL --- product_docs/docs/tde/15/index.mdx | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/tde/15/index.mdx b/product_docs/docs/tde/15/index.mdx index 924e9721ce9..3184f92c96d 100644 --- a/product_docs/docs/tde/15/index.mdx +++ b/product_docs/docs/tde/15/index.mdx @@ -75,7 +75,9 @@ Data encryption and decryption is managed by the database and doesn't require ap EDB Postgres Advanced Server and EDB Postgres Extended Server provide hooks to key management that's external to the database. These hooks allow for simple passphrase encrypt/decrypt or integration with enterprise key management solutions. See [Securing the data encryption key](./key_stores) for more information. -### How does TDE encrypt data? +### How does TDE encrypt data? + +EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. By default, it uses the OpenSSL libraries available on the operating system machine that initializes the database cluster. Starting with version 16, EDB TDE introduces the option to choose between AES-128 and AES-256 encryption algorithms during the initialization of the Postgres cluster. The choice between AES-128 and AES-256 hinges on balancing performance and security requirements. AES-128 is commonly advised for environments where performance efficiency and lower power consumption are pivotal, making it suitable for most applications. Conversely, AES-256 is recommended for scenarios demanding the highest level of security, often driven by regulatory mandates. From 2dc288dd2ad5df1026d856fa9b44fdbe4e24bb8a Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Fri, 6 Sep 2024 15:11:40 +0200 Subject: [PATCH 08/67] duplication of OS and machine, adapted wording --- product_docs/docs/tde/15/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/tde/15/index.mdx b/product_docs/docs/tde/15/index.mdx index 3184f92c96d..86b9138ee03 100644 --- a/product_docs/docs/tde/15/index.mdx +++ b/product_docs/docs/tde/15/index.mdx @@ -77,7 +77,7 @@ EDB Postgres Advanced Server and EDB Postgres Extended Server provide hooks to k ### How does TDE encrypt data? -EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. 
By default, it uses the OpenSSL libraries available on the operating system machine that initializes the database cluster. +EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. By default, it uses the OpenSSL libraries available on the machine that initializes the database cluster. Starting with version 16, EDB TDE introduces the option to choose between AES-128 and AES-256 encryption algorithms during the initialization of the Postgres cluster. The choice between AES-128 and AES-256 hinges on balancing performance and security requirements. AES-128 is commonly advised for environments where performance efficiency and lower power consumption are pivotal, making it suitable for most applications. Conversely, AES-256 is recommended for scenarios demanding the highest level of security, often driven by regulatory mandates. From a294ca82702bbfcf6efb4eedd2657f720d44d989 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Fri, 6 Sep 2024 16:20:12 +0200 Subject: [PATCH 09/67] Added more implicit info after discussing with Matt W. --- product_docs/docs/tde/15/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/tde/15/index.mdx b/product_docs/docs/tde/15/index.mdx index 86b9138ee03..d82aad06fe9 100644 --- a/product_docs/docs/tde/15/index.mdx +++ b/product_docs/docs/tde/15/index.mdx @@ -77,7 +77,7 @@ EDB Postgres Advanced Server and EDB Postgres Extended Server provide hooks to k ### How does TDE encrypt data? -EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. By default, it uses the OpenSSL libraries available on the machine that initializes the database cluster. +EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. In Windows systems, TDE uses [OpenSSL 3](https://docs.openssl.org/3.0/). In Linux systems, TDE uses the OpenSSL version installed in the host operating system. To check the installed version, run `openssl version`. For more details, refer to the [OpenSSL documentation](https://docs.openssl.org/master/). Starting with version 16, EDB TDE introduces the option to choose between AES-128 and AES-256 encryption algorithms during the initialization of the Postgres cluster. The choice between AES-128 and AES-256 hinges on balancing performance and security requirements. AES-128 is commonly advised for environments where performance efficiency and lower power consumption are pivotal, making it suitable for most applications. Conversely, AES-256 is recommended for scenarios demanding the highest level of security, often driven by regulatory mandates. From 7bd2713f4b6f8ecb136a2452c585cf625ee3c127 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Tue, 10 Sep 2024 08:48:46 +0200 Subject: [PATCH 10/67] Implemented feedback from Adam --- product_docs/docs/tde/15/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/tde/15/index.mdx b/product_docs/docs/tde/15/index.mdx index d82aad06fe9..cb02e5338b9 100644 --- a/product_docs/docs/tde/15/index.mdx +++ b/product_docs/docs/tde/15/index.mdx @@ -77,7 +77,7 @@ EDB Postgres Advanced Server and EDB Postgres Extended Server provide hooks to k ### How does TDE encrypt data? -EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. In Windows systems, TDE uses [OpenSSL 3](https://docs.openssl.org/3.0/). 
In Linux systems, TDE uses the OpenSSL version installed in the host operating system. To check the installed version, run `openssl version`. For more details, refer to the [OpenSSL documentation](https://docs.openssl.org/master/). +EDB TDE uses [OpenSSL](https://openssl-library.org/) to encrypt data files with the AES encryption algorithm. In Windows systems, TDE uses [OpenSSL 3](https://docs.openssl.org/3.0/). In Linux systems, TDE uses the OpenSSL version installed in the host operating system. To check the installed version, run `openssl version`. For more information, see the [OpenSSL documentation](https://docs.openssl.org/master/). If you're using a custom build not provided by the OpenSSL community, consult your vendor's documentation. Starting with version 16, EDB TDE introduces the option to choose between AES-128 and AES-256 encryption algorithms during the initialization of the Postgres cluster. The choice between AES-128 and AES-256 hinges on balancing performance and security requirements. AES-128 is commonly advised for environments where performance efficiency and lower power consumption are pivotal, making it suitable for most applications. Conversely, AES-256 is recommended for scenarios demanding the highest level of security, often driven by regulatory mandates. From 4d385c7e17f0ab0989b2838ee0c6aa28a641d088 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 10 Sep 2024 10:26:16 +0100 Subject: [PATCH 11/67] Fixed references to bdr.raft_local_election_timeout in the docs. Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/reference/index.json | 2 +- product_docs/docs/pgd/5/reference/index.mdx | 2 +- product_docs/docs/pgd/5/reference/pgd-settings.mdx | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/index.json b/product_docs/docs/pgd/5/reference/index.json index cfe997f8729..39afee609a0 100644 --- a/product_docs/docs/pgd/5/reference/index.json +++ b/product_docs/docs/pgd/5/reference/index.json @@ -160,7 +160,7 @@ "bdrglobal_keepalives_count": "/pgd/latest/reference/pgd-settings#bdrglobal_keepalives_count", "bdrglobal_tcp_user_timeout": "/pgd/latest/reference/pgd-settings#bdrglobal_tcp_user_timeout", "bdrraft_global_election_timeout": "/pgd/latest/reference/pgd-settings#bdrraft_global_election_timeout", - "bdrraft_local_election_timeout": "/pgd/latest/reference/pgd-settings#bdrraft_local_election_timeout", + "bdrraft_group_election_timeout": "/pgd/latest/reference/pgd-settings#bdrraft_group_election_timeout", "bdrraft_response_timeout": "/pgd/latest/reference/pgd-settings#bdrraft_response_timeout", "bdrraft_keep_min_entries": "/pgd/latest/reference/pgd-settings#bdrraft_keep_min_entries", "bdrraft_log_min_apply_duration": "/pgd/latest/reference/pgd-settings#bdrraft_log_min_apply_duration", diff --git a/product_docs/docs/pgd/5/reference/index.mdx b/product_docs/docs/pgd/5/reference/index.mdx index b27cceec47e..2b57f037fb7 100644 --- a/product_docs/docs/pgd/5/reference/index.mdx +++ b/product_docs/docs/pgd/5/reference/index.mdx @@ -228,7 +228,7 @@ The reference section is a definitive listing of all functions, views, and comma * [`bdr.global_tcp_user_timeout`](pgd-settings#bdrglobal_tcp_user_timeout) ### [Internal settings - Raft timeouts](pgd-settings#internal-settings---raft-timeouts) * [`bdr.raft_global_election_timeout`](pgd-settings#bdrraft_global_election_timeout) - * [`bdr.raft_local_election_timeout`](pgd-settings#bdrraft_local_election_timeout) + * 
[`bdr.raft_group_election_timeout`](pgd-settings#bdrraft_group_election_timeout) * [`bdr.raft_response_timeout`](pgd-settings#bdrraft_response_timeout) ### [Internal settings - Other Raft values](pgd-settings#internal-settings---other-raft-values) * [`bdr.raft_keep_min_entries`](pgd-settings#bdrraft_keep_min_entries) diff --git a/product_docs/docs/pgd/5/reference/pgd-settings.mdx b/product_docs/docs/pgd/5/reference/pgd-settings.mdx index a553cec5bc1..ac7d3d6a45e 100644 --- a/product_docs/docs/pgd/5/reference/pgd-settings.mdx +++ b/product_docs/docs/pgd/5/reference/pgd-settings.mdx @@ -579,7 +579,7 @@ To account for network failures, the Raft consensus protocol implements timeouts for elections and requests. This value is used when a request is being sent to the global (top-level) group. The default is 6 seconds (6s). -### `bdr.raft_local_election_timeout` +### `bdr.raft_group_election_timeout` To account for network failures, the Raft consensus protocol implements timeouts for elections and requests. This value is used when a request is @@ -589,7 +589,7 @@ being sent to the sub-group. The default is 3 seconds (3s). For responses, the settings of [`bdr.raft_global_election_timeout`](#bdrraft_global_election_timeout) and -[`bdr.raft_local_election_timeout`](#bdrraft_local_election_timeout) are used +[`bdr.raft_group_election_timeout`](#bdrraft_group_election_timeout) are used as appropriate. You can override this behavior by setting this variable. The setting of `bdr.raft_response_timeout` must be less than either of the election timeout values. Set this variable to -1 to disable the override. From c39737459517e6947b7cbe3f9e3a2d48f550e3c3 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 17:45:15 +0530 Subject: [PATCH 12/67] Added remaining content and release notes --- .../42.5.4.2/01_jdbc_rel_notes/index.mdx | 2 + .../jdbc_42.7.3.1_rel_notes.mdx | 34 ++ .../42.5.4.2/05a_using_advanced_queueing.mdx | 559 ++++++++++++------ 3 files changed, 401 insertions(+), 194 deletions(-) create mode 100644 product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx index 9d5a51f7013..5b204e63be2 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx @@ -1,6 +1,7 @@ --- title: "Release notes" navigation: +- jdbc_42.7.3.1_rel_notes - jdbc_42.5.4.2_rel_notes - jdbc_42.5.4.1_rel_notes - jdbc_42.5.1.2_rel_notes @@ -14,6 +15,7 @@ These release notes describe what's new in each release. 
When a minor or patch r | Version | Release Date | | ---------------------------------------- | ------------ | +| [42.7.3.1](jdbc_42.7.3.1_rel_notes) | 10 Sep 2024 | | [42.5.4.2](jdbc_42.5.4.2_rel_notes) | 26 Feb 2024 | | [42.5.4.1](jdbc_42.5.4.1_rel_notes) | 16 Mar 2023 | | [42.5.1.2](jdbc_42.5.1.2_rel_notes) | 14 Feb 2023 | diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx new file mode 100644 index 00000000000..042c98014de --- /dev/null +++ b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -0,0 +1,34 @@ +--- +title: "EDB JDBC Connector 42.7.3.1 release notes" +navTitle: Version 42.7.3.1 +--- + +Released: 10 Sep 2024 + +The EDB JDBC connector provides connectivity between a Java application and an EDB Postgres Advanced Server database. + +New features, enhancements, bug fixes, and other changes in the EDB JDBC Connector 42.7.3.1 include: + +| Type | Description | Addresses | +|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| +| Upstream Merge | Merged with the upstream community driver version 42.7.3. See the community [JDBC documentation](https://jdbc.postgresql.org/changelogs/2024-03-14-42.7.3-release/) for details. | | +| Enhancement | Improved the parsing issue with the large SQL statements. | | +| Enhancement | JMS Enhancements

- The EDB JMS API now conforms to the JMS standard. All supported JMS classes related to Factory, Connection, Session, Producer, Consumer, and Message types can now be used in a standard way.

- DefaultMessageListenerContainer can now be used to continuously pull messages from an EDB JMS queue.

- Transacted sessions are implemented. | |
| Enhancement | Fixed null pointer exception in case of timeout or end-of-fetch during message dequeue. | #37882 |
| Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 |
| Enhancement | JMS message types, such as message, text message, bytes message, and object message, are now supported. | #37884 |
| Enhancement | EDBJmsConnectionFactory now has an alternative constructor that takes SQL Connection as a parameter. | #38465 |
| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().

- EDBJmsSession now implements the critical close() method.

- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
- aq$_jms_message
- aq$_jms_text_message
- aq$_jms_bytes_message
- aq$_jms_object_message

- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | + + + + + + + + + + + + + diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx index d1c14d9c5ed..ee59b27be4c 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx @@ -362,289 +362,460 @@ EDB-JDBC JMS API supports the following message types and can be used in a stand | aq$_jms_bytes_message | javax.jms.BytesMessage | | aq$_jms_object_message | javax.jms.ObjectMessage | +Please note that the corresponding payload types (user-defined types) are not pre-defined and must be created by the user before configuring the queue table. This is discussed in the upcoming sections. + +You can specify schema-qualified user-defined types, but the property types and message types must be in the same schema. + #### Message properties + +All of the above-mentioned message types support setting and getting message properties. Before creating the actual message type, you must create the corresponding user-defined type for message properties. + +This example shows how to create the user-defined type for message properties: + +```sql +CREATE OR REPLACE TYPE AQ$_JMS_USERPROPERTY +AS object +( + NAME VARCHAR2(100), + VALUE VARCHAR2(2000) +); +``` + +All primitive types of message properties are supported. #### TextMessage +Text messages can be sent using the TextMessage interface. EDBTextMessageImpl is an implementation of TextMessage, but for most cases, you will be using the standard TextMessage. + +Before using the text message, it is necessary to create a user-defined type for it. This example shows how to create the user-defined type for TextMessage: + +```sql +CREATE OR REPLACE TYPE AQ$_JMS_TEXT_MESSAGE AS object(PROPERTIES AQ$_JMS_USERPROPERTY[], STRING_VALUE VARCHAR2(4000)); +``` + +Once the user-defined type is created, you can then proceed to create the queue table using this type: + +```sql +EXEC DBMS_AQADM.CREATE_QUEUE_TABLE (queue_table => 'MSG_QUEUE_TABLE', queue_payload_type => 'AQ$_JMS_TEXT_MESSAGE', comment => 'Message queue table'); +``` + +After setting up the queue table, you can send and receive TextMessages using the standard procedure outlined in the Java code snippet: + +```java +MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); +// Create text message +TextMessage msg = session.createTextMessage(); +String messageText = "Hello there!"; +msg.setText(messageText); +msg.setStringProperty("myprop1", "test value 1"); +// Send message +messageProducer.send(msg); + +MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); +// Receive Message +Message message = messageConsumer.receive(); +TextMessage txtMsg = (TextMessage) message; +System.out.println(txtMsg.getText()); +System.out.println(txtMsg.getStringProperty("myprop1")); +``` + #### BytesMessage +The BytesMessage is used to send a stream of bytes. EDBBytesMessageImpl is an implementation of BytesMessage, but in most cases, you will use the standard BytesMessage. Before using the bytes message, a user-defined type must be created. 
+ +This example shows how to create the user-defined type for BytesMessage: + +```sql +CREATE OR REPLACE TYPE AQ$_JMS_BYTES_MESSAGE AS OBJECT (PROPERTIES AQ$_JMS_USERPROPERTY[], RAW_VALUE CLOB); +``` + +Now, BytesMessage can be sent and received in the standard way. + +This example shows how to create and use a BytesMessage in Java: + +```java +MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); +BytesMessage msg = session.createBytesMessage(); +String messageText = "Hello there!"; +msg.writeBytes(messageText.getBytes()); +messageProducer.send(msg); + +MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); +Message message = messageConsumer.receive(); +BytesMessage byteMsg = (BytesMessage) message; +byteMsg.reset(); +byte[] bytes = new byte[(int) byteMsg.getBodyLength()]; +byteMsg.readBytes(bytes); +System.out.println(new String(bytes)); +``` + #### ObjectMessage -### Message +An ObjectMessage is used to send a serializable object as a message. EDBObjectMessageImpl is an implementation of ObjectMessage, but the standard ObjectMessage is most commonly used. -#### Non-standard message +Before using the ObjectMessage, it is necessary to create the user-defined type for the object message. -After you create a user-defined type followed by queue table and queue, start the queue. Then, you can enqueue or dequeue a message using EDB-JDBC driver's JMS API. +This example shows how to create the user-defined type for ObjectMessage: -Create a Java project and add the `edb-jdbc18.jar` from the `edb-jdbc` installation directory to its libraries. +```sql +CREATE OR REPLACE TYPE AQ$_JMS_OBJECT_MESSAGE AS object(PROPERTIES AQ$_JMS_USERPROPERTY[], OBJECT_VALUE CLOB); +``` -Create a Java Bean corresponding to the type you created. +For example we have the following serializable Java class: ```java -package mypackage; +import java.io.Serializable; -import java.util.ArrayList; -import com.edb.aq.UDTType; +public class Emp implements Serializable { + private int id; + private String name; + private String role; -public class MyType extends UDTType { + // Getter and setter methods + public int getId() { + return id; + } - private int code; - private String project; + public void setId(int id) { + this.id = id; + } - public MyType() {} + public String getName() { + return name; + } - /** - * @return the code - */ - public int getCode() { - return code; + public void setName(String name) { + this.name = name; } + public String getRole() { + return role; + } + + public void setRole(String role) { + this.role = role; + } +} +``` + +This example shows how to use ObjectMessage to send a message containing an object of this class: + +```java +MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); + +// Create object message +ObjectMessage msg = session.createObjectMessage(); +Emp emp = new Emp(); +emp.setId(1); +emp.setName("Joe"); +emp.setRole("Manager"); +msg.setObject(emp); + +// Send message +messageProducer.send(msg); + +MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); + +// Receive Message +Message message = messageConsumer.receive(); +ObjectMessage objMsg = (ObjectMessage) message; +Emp empBack = (Emp) objMsg.getObject(); +System.out.println("ID: " + empBack.getId()); +System.out.println("Name: " + empBack.getName()); +System.out.println("Role: " + empBack.getRole()); +``` + + +### Message + +A Message can be used to send a message with only properties and no body. 
EDBMessageImpl is an implementation of a Message, but you will most often use the standard Message. +Before using a message, it is required to create a user-defined type. + +This example shows how to create the user-defined type for Message: + +```sql +CREATE OR REPLACE TYPE AQ$_JMS_MESSAGE AS object(PROPERTIES AQ$_JMS_USERPROPERTY[]); +``` + +This example shows how to send a message that contains only properties and no body: + +```java +MessageProducer messageProducer = (MessageProducer) session.createProducer(queue); +// Create message. +Message msg = session.createMessage(); +msg.setStringProperty("myprop1", "test value 1"); +msg.setStringProperty("myprop2", "test value 2"); +msg.setStringProperty("myprop3", "test value 3"); +// Send message +messageProducer.send(msg); +MessageConsumer messageConsumer = (MessageConsumer) session.createConsumer(queue); +// Receive Message +message = messageConsumer.receive(); +System.out.println("myprop1: " + message.getStringProperty("myprop1")); +System.out.println("myprop2: " + message.getStringProperty("myprop2")); +System.out.println("myprop3: " + message.getStringProperty("myprop3")); +``` + +#### Non-standard message + +EDB-JDBC JMS allows users to send and receive non-standard messages that are fully controlled by the API user. These messages do not support the setting and getting of properties. The process involves creating a user-defined type and setting it as the payload for the queue table. + +This example shows how to create a Java Bean corresponding to the type you created: + +```java +package mypackage; +import com.edb.jms.common.CompareValue; +import java.util.ArrayList; +public class MyType extends com.edb.aq.UDTType { + private Integer code; + private String project; + private String manager; + public MyType() { + } /** * @param code the code to set */ - public void setCode(int code) { - this.code = code; + @CompareValue(0) + public void setCode(Integer code) { + this.code = code; } - /** - * @return the project + * @return the code */ - public String getProject() { - return project; + public Integer getCode() { + return code; } - /** * @param project the project to set */ + @CompareValue(1) public void setProject(String project) { - this.project = project; + this.project = project; } /** - * Override this method and call getter methods in the same order as in CREATE TYPE statement. - * CREATE TYPE mytype AS (code int, project TEXT); - * @return + * @return the project */ - @Override - public Object[] getParamValues() { - ArrayList params = new ArrayList(); - params.add(getCode()); - params.add(getProject()); - return params.toArray(); //To change body of generated methods, choose Tools | Templates. + public String getProject() { + return project; } + @CompareValue(2) + public void setManager(String manager) { + this.manager = manager; + } + public String getManager() { + return manager; + } + public String valueOf() { + StringBuilder sql = new StringBuilder("CREATE TYPE "); + sql.append(getName() + " "); + sql.append("AS ("); + sql.append("code int, "); + sql.append("project TEXT);"); + return sql.toString(); + } + /** + * Override this method and call getter methods in the same order as in CREATE TYPE statement. + * CREATE OR REPLACE TYPE mytype AS object (code int, project text, manager varchar(10)) + * @return object array containing parameters. 
+ */ + @Override + public Object[] getParamValues() { + ArrayList params = new ArrayList<>(); + params.add(getCode()); + params.add(getProject()); + params.add(getManager()); + return params.toArray(); + } } ``` -### Enqueue and dequeue a message +!!!note +- To create a user-defined class, it must extend the com.edb.aq.operations.UDTType class and override the getParamValues() method. In this method, you should add the attribute values to an ArrayList in the same order as they appear in the CREATE TYPE SQL statement in the database. +- Additionally, make sure to use the annotation @CompareValue(0) with setter methods, as this specifies the order of methods when using the reflection API to reconstruct the object after dequeuing the message from the queue. -To enqueue and dequeue a message: +Failure to meet these requirements may result in errors. +!!! -1. Create a JMS connection factory and create a queue connection. -2. Create a queue session. +This example shows how to send an object of this class as a message: ```java -edbJmsFact = new EDBJmsConnectionFactory("localhost", 5445, "edb", "edb", "edb"); - -conn = (EDBJmsQueueConnection) edbJmsFact.createQueueConnection(); +messageProducer = (EDBJmsMessageProducer) session.createProducer(queue); + MyType udtType1 = new MyType(); + udtType1.setProject("Test Project"); + udtType1.setManager("Joe"); + udtType1.setCode(321); + udtType1.setName("mytype"); //type name used in "CREATE TYPE" + messageProducer.send(udtType1); +``` -session = (EDBJmsQueueSession) conn.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE); +This example shows how to receive this object as a message: -queue = new EDBJmsQueue("MSG_QUEUE"); -``` +```java +messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue); -### Enqueue a message +Message message = messageConsumer.receive(); -To enqueue a message: +MyType myt = (MyType) message; +System.out.println("Code: "+ myt.getCode()); +System.out.println("Project: "+ myt.getProject()); +System.out.println("Manager: "+ myt.getManager()); +``` -1. Create `EDBJmsMessageProducer` from the session. -2. Create the enqueue message. -3. Call the `EDBJmsMessageProducer.send` method. +#### InnermostCustom.java: ```java -messageProducer = (EDBJmsMessageProducer) session.createProducer(queue); +package mypackage; -MyType udtType1 = new MyType(); -udtType1.setProject("Test Omega"); -udtType1.setCode(321); +import com.edb.aq.UDTType; +import com.edb.jms.common.CompareValue; -udtType1.setName("mytype"); +import java.util.ArrayList; -messageProducer.send(udtType1); -``` +public class InnermostCustom extends UDTType { -### Dequeue a message + public InnermostCustom() { + } -To dequeue a message: + private String testing_field_1; -1. Create `EDBJmsMessageConsumer` from the session. -2. Call the `EDBJmsMessageConsumer.Receive` method. + public String getTesting_field_1() { + return testing_field_1; + } -```java -messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue); - -queue.setDequeue_mode(DequeueMode.BROWSE); -queue.setTypeName("mytype"); - -Message message = messageConsumer.receive(); + @CompareValue(0) + public void setTesting_field_1(String testing_field_1) { + this.testing_field_1 = testing_field_1; + } + @Override + public Object[] getParamValues(){ + ArrayList params = new ArrayList(); + params.add(getTesting_field_1()); + return params.toArray(); + } +} ``` -## A complete enqueue and dequeue program - -This example shows enqueue and dequeue. 
User-defined type, queue table, and queue are created using EDB-PSQL, and the queue is started. +#### InnerCustom.java ```java package mypackage; -import com.edb.aq.DequeueMode; -import com.edb.aq.operations.*; -import com.edb.jms.client.EDBJmsQueueConnection; -import com.edb.jms.client.EDBJmsConnectionFactory; -import com.edb.jms.client.EDBJmsMessageConsumer; -import com.edb.jms.client.EDBJmsMessageProducer; -import com.edb.jms.client.EDBJmsQueue; -import com.edb.jms.client.EDBJmsQueueSession; -import com.edb.jms.client.EDBQueueTable; -import java.sql.Connection; -import java.sql.DriverManager; -import javax.jms.JMSException; -import javax.jms.Message; -import javax.jms.Session; +import com.edb.aq.UDTType; +import com.edb.jms.common.CompareValue; -public class JMSClient { +import java.util.ArrayList; - public static void main(String args[]) throws JMSException { +public class InnerCustom extends UDTType { - EDBJmsConnectionFactory edbJmsFact = null; - EDBJmsQueueConnection conn = null; - EDBJmsQueueSession session = null; - EDBQueueTable queueTable = null; - EDBJmsQueue queue = null; - EDBJmsMessageProducer messageProducer = null; - EDBJmsMessageConsumer messageConsumer = null; + public InnerCustom() { + } - try { + private String testing_field_1; + private InnermostCustom innermostCustom; - edbJmsFact = new EDBJmsConnectionFactory("localhost", 5444, "edb", "edb", "edb"); + public String getTesting_field_1() { + return testing_field_1; + } - conn = (EDBJmsQueueConnection) edbJmsFact.createQueueConnection(); + @CompareValue(0) + public void setTesting_field_1(String testing_field_1) { + this.testing_field_1 = testing_field_1; + } - session = (EDBJmsQueueSession) conn.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE); + public InnermostCustom getInnermostCustom() { + return innermostCustom; + } + + @CompareValue(1) + public void setInnermostCustom(InnermostCustom innermostCustom) { + this.innermostCustom = innermostCustom; + } + @Override + public Object[] getParamValues(){ + ArrayList params = new ArrayList(); + params.add(getTesting_field_1()); + params.add(getInnermostCustom()); + return params.toArray(); + } +} +``` - queue = (EDBJmsQueue) session.createQueue("MSG_QUEUE"); - - messageProducer = (EDBJmsMessageProducer) session.createProducer(queue); +#### CustomType.java - MyType udtType1 = new MyType(); - udtType1.setProject("Test Omega"); - udtType1.setCode(321); +```java +package mypackage; - udtType1.setName("mytype"); +import com.edb.aq.UDTType; +import com.edb.jms.common.CompareValue; - messageProducer.send(udtType1); +import java.util.ArrayList; - messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue); - - queue.setDequeue_mode(DequeueMode.BROWSE); - queue.setTypeName("mytype"); - - Message message = messageConsumer.receive(); - System.out.println("Received: " + message); - - message = messageConsumer.receive(); - - System.out.println("Received: " + message); - } catch (JMSException jmsEx) { - System.out.println(jmsEx.getMessage()); - } finally { - if(conn != null) { - conn.close(); - } - } - } -} -``` +public class CustomType extends UDTType { -This example shows enqueue, dequeue, and creating the user-defined type, queue table, and queue. It also starts the queue. 
+ private String testing_field; + private InnerCustom innerCustom; -```java -package mypackage; + public String getTesting_field() { + return testing_field; + } + + @CompareValue(0) + public void setTesting_field(String testing_field) { + this.testing_field = testing_field; + } -import com.edb.aq.DequeueMode; -import com.edb.aq.operations.*; -import com.edb.jms.client.EDBJmsQueueConnection; -import com.edb.jms.client.EDBJmsConnectionFactory; -import com.edb.jms.client.EDBJmsMessageConsumer; -import com.edb.jms.client.EDBJmsMessageProducer; -import com.edb.jms.client.EDBJmsQueue; -import com.edb.jms.client.EDBJmsQueueSession; -import com.edb.jms.client.EDBQueueTable; -import java.sql.Connection; -import java.sql.DriverManager; -import javax.jms.JMSException; -import javax.jms.Message; -import javax.jms.Session; + public InnerCustom getInnerCustom() { + return innerCustom; + } -public class JMSClient { + @CompareValue(1) + public void setInnerCustom(InnerCustom innerCustom) { + this.innerCustom = innerCustom; + } - public static void main(String args[]) throws JMSException { + public CustomType() { - EDBJmsConnectionFactory edbJmsFact = null; - EDBJmsQueueConnection conn = null; - EDBJmsQueueSession session = null; - EDBQueueTable queueTable = null; - EDBJmsQueue queue = null; - EDBJmsMessageProducer messageProducer = null; - EDBJmsMessageConsumer messageConsumer = null; + } - try { + public Object[] getParamValues(){ + ArrayList params = new ArrayList(); + params.add(getTesting_field()); + params.add(getInnerCustom()); + return params.toArray(); + } +} +``` - edbJmsFact = new EDBJmsConnectionFactory("localhost", 5444, "edb", "edb", "edb"); +This example shows how to read such nested types: - conn = (EDBJmsQueueConnection) edbJmsFact.createQueueConnection(); +```java + EDBJmsMessageProducer messageProducer = (EDBJmsMessageProducer) session.createProducer(queue_1); - session = (EDBJmsQueueSession) conn.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE); + InnermostCustom innermostCustom = new InnermostCustom(); + innermostCustom.setTesting_field_1("Innermost set"); + innermostCustom.setName("innermostCustom"); - String sql = "CREATE TYPE mytype AS (code int, project TEXT);"; - UDTType udtType = new UDTType(conn.getConn(), sql, "mytype"); - Operation operation = new UDTTypeOperation(udtType); - operation.execute(); + InnerCustom innerCustom = new InnerCustom(); + innerCustom.setTesting_field_1("Inner set"); + innerCustom.setInnermostCustom(innermostCustom); + innerCustom.setName("innercustom"); - queueTable = session.createQueueTable(conn.getConn(), "MSG_QUEUE_TABLE", "mytype", "Message queue table"); - Queue queue1 = new Queue(conn.getConn(), "MSG_QUEUE", "MSG_QUEUE_TABLE", "Message Queue"); - operation = new QueueOperation(queue1); - operation.execute(); - queue = (EDBJmsQueue) session.createQueue("MSG_QUEUE"); - queue.setEdbQueueTbl(queueTable); - queue.start(); + CustomType customType = new CustomType(); + customType.setTesting_field("EDB"); + customType.setInnerCustom(innerCustom); + customType.setName("custom_type"); - messageProducer = (EDBJmsMessageProducer) session.createProducer(queue); + messageProducer.send(customType); - MyType udtType1 = new MyType(); - udtType1.setProject("Test Omega"); - udtType1.setCode(321); + EDBJmsMessageConsumer messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue_1); - udtType1.setName("mytype"); + Message message = messageConsumer.receive(); - messageProducer.send(udtType1); + CustomType myType = (CustomType) message; + InnerCustom 
innerCustom_1 = myType.getInnerCustom(); + InnermostCustom innermostCustom1 = innerCustom_1.getInnermostCustom(); - messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue); - - queue.setDequeue_mode(DequeueMode.BROWSE); - queue.setTypeName("mytype"); - - Message message = messageConsumer.receive(); - System.out.println("Received: " + message); - - message = messageConsumer.receive(); - - System.out.println("Received: " + message); - } catch (JMSException jmsEx) { - System.out.println(jmsEx.getMessage()); - } finally { - if(conn != null) { - conn.close(); - } - } - } -} -``` \ No newline at end of file + System.out.println("Outer type test field: " + myType.getTesting_field()); + System.out.println("Inner type test field: " + innerCustom_1.getTesting_field_1()); + System.out.println("Most Inner type test field: " + innermostCustom1.getTesting_field_1()); +``` From 33ce4534e625fc11923226230158b6e9a9c7c6ab Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 18:07:32 +0530 Subject: [PATCH 13/67] Minor edit --- .../42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx index 042c98014de..2a15887baaf 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -12,7 +12,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect | Type | Description | Addresses | |---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| | Upstream Merge | Merged with the upstream community driver version 42.7.3. See the community [JDBC documentation](https://jdbc.postgresql.org/changelogs/2024-03-14-42.7.3-release/) for details. | | -| Enhancement | Improved the parsing issue with the large SQL statements. | | +| Enhancement | Improved the parsing issue with the large SQL statements (for MTK/SQL Plus). | | | Enhancement | JMS Enhancements

- The EDB JMS API now conforms to the JMS standard. All supported JMS classes related to Factory, Connection, Session, Producer, Consumer, and Message types can now be used in a standard way.

- DefaultMessageListenerContainer can now be used to continuously pull messages from an EDB JMS queue (see the listener sketch below).

-Transacted Sessions are implemented. | | | Enhancement | Fixed null pointer exception in case of timeout or end-of-fetch during message dequeue. | #37882 | | Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 | From 57583fb839345c0efacd415a40beb47a73f36252 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <dj.walker-morgan@enterprisedb.com> Date: Tue, 10 Sep 2024 13:45:24 +0100 Subject: [PATCH 14/67] Rework Signed-off-by: Dj Walker-Morgan <dj.walker-morgan@enterprisedb.com> --- product_docs/docs/pgd/5/data_migration/edbloader.mdx | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pgd/5/data_migration/edbloader.mdx b/product_docs/docs/pgd/5/data_migration/edbloader.mdx index 1e72f22e83e..a4a5f7b0f96 100644 --- a/product_docs/docs/pgd/5/data_migration/edbloader.mdx +++ b/product_docs/docs/pgd/5/data_migration/edbloader.mdx @@ -2,6 +2,7 @@ title: EDB*Loader and PGD navTitle: EDB*Loader description: EDB*Loader is a high-speed data loading utility for EDB Postgres Advanced Server. +deepToC: true --- [EDB\*Loader](/epas/latest/database_administration/02_edb_loader/) is a high-speed data loading utility for EDB Postgres Advanced Server. It provides an interface compatible with Oracle databases, allowing you to load data into EDB Postgres Advanced Server. It's designed to load large volumes of data into EDB Postgres Advanced Server quickly and efficiently. @@ -14,6 +15,12 @@ As EDB\*Loader is a utility for EDB Postgres Advanced Server, it's available for ### Replication and EDB\*Loader -As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment. But, unlike EDB Postgres Advanced Server with physical replication, it isn't possible to use the [direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) to load data into the replica nodes. Only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading because the direct path load method skips use of the WAL, upon which logical replication relies. +As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment, but you will not be able to use the direct path load method. This is because the [direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) skips use of the WAL, upon which logical replication relies. That means that only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading, and no replication is done to the other nodes. + +To work around this limitation, you can run EDB\*Loader's direct path load method independently on each node. This can be performed either on one node at a time or in parallel on all nodes, depending on the use case. + +!!! Warning +When using the direct path load method on multiple nodes, it's important to ensure there are no other writes happening to the table concurrently, as this can result in inconsistencies. +!!! + -With PGD it's possible to run the direct load path method to each node. This can be performed on one node at a time or in parallel to all nodes, depending on the use case. When doing this, it's important to ensure there are no other writes happening to the table concurrently as this can result in inconsistencies.
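The DefaultMessageListenerContainer support called out in the JMS enhancements above can be wired to the EDB JMS connection factory. The following is a minimal sketch, not part of the original patch set: it assumes Spring JMS (a javax.jms-based release such as Spring 5) is on the classpath, that EDBJmsConnectionFactory is accepted wherever a standard javax.jms.ConnectionFactory is expected, and that the MSG_QUEUE queue and connection settings from the queueing examples exist. The class name QueueListenerSketch is hypothetical.

```java
import javax.jms.Message;
import javax.jms.MessageListener;

import org.springframework.jms.listener.DefaultMessageListenerContainer;

import com.edb.jms.client.EDBJmsConnectionFactory;

public class QueueListenerSketch {

    public static void main(String[] args) throws Exception {
        // Connection settings mirror the earlier examples; adjust for your environment.
        EDBJmsConnectionFactory factory =
            new EDBJmsConnectionFactory("localhost", 5444, "edb", "edb", "edb");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        // Assumption: the EDB factory can be used as a standard javax.jms.ConnectionFactory.
        container.setConnectionFactory(factory);
        container.setDestinationName("MSG_QUEUE");   // queue created in the server-side setup
        container.setSessionTransacted(true);        // redeliver the message if the listener fails
        container.setMessageListener((MessageListener) (Message message) ->
                System.out.println("Received: " + message));

        container.afterPropertiesSet();              // initialize manually when not inside a Spring context
        container.start();                           // begins pulling messages from the queue
    }
}
```

In a Spring application context, the container would be declared as a bean and its lifecycle managed automatically; when run standalone as above, call stop() and shutdown() before the application exits.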
From 7c127d8ba6256ff4906c506f31897a47d24b3786 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 18:23:43 +0530 Subject: [PATCH 15/67] Fixed heading levels --- .../42.5.4.2/05a_using_advanced_queueing.mdx | 33 ++++++++++--------- 1 file changed, 17 insertions(+), 16 deletions(-) diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx index ee59b27be4c..a26f7f7dd30 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx +++ b/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx @@ -1,5 +1,6 @@ --- title: "Using advanced queueing" +indexdepth: 3 --- !!! Tip "New Feature" @@ -113,9 +114,9 @@ After creating the queue table and queue for the message types and starting the 1. Create a [Connection](#connection) using the connection factory. 1. Create a [Session](#session) using the connection. 1. Get the Queue from the session. -1. Create a message producer using the session and queue. +1. Create a [message producer](#message-producer) using the session and queue. 1. Send messages. -1. Create a message consumer using the session and queue. +1. Create a [message consumer](#message-consumer) using the session and queue. 1. Receive messages. @@ -241,7 +242,7 @@ If the first argument is true, it indicates that the session mode is transacted, The following sections describe different modes of acknowledgement: -### Transacted session +## Transacted session In transacted sessions, messages are both sent and received during a transaction. These messages are acknowledged by making an explicit call to commit(). If rollback() is called, all received messages will be marked as not acknowledged. @@ -307,15 +308,15 @@ This example explains how the transacted session works: //message4 will be null here. ``` -#### AUTO_ACKNOWLEDGE mode +### AUTO_ACKNOWLEDGE mode If the first argument to createSession() or createQueueSession() is false and the second argument is Session.AUTO_ACKNOWLEDGE, the messages are automatically acknowledged. -#### DUPS_OK_ACKNOWLEDGE mode +### DUPS_OK_ACKNOWLEDGE mode This mode instructs the session to lazily acknowledge the message, and it is okay if some messages are redelivered. However, in EDB JMS, this option is implemented the same way as Session.AUTO_ACKNOWLEDGE, where messages will be acknowledged automatically. -#### CLIENT_ACKNOWLEDGE mode +### CLIENT_ACKNOWLEDGE mode If the first argument to createSession() or createQueueSession() is false and the second argument is Session.CLIENT_ACKNOWLEDGE, the messages are acknowledged when the client acknowledges the message by calling the acknowledge() method on a message. Acknowledging happens at the session level, and acknowledging one message will cause all the received messages to be acknowledged. @@ -351,7 +352,7 @@ For example, if we send 5 messages and then receive the 5 messages, acknowledgin Message messageAgain = messageConsumer.receive(); ``` -### Message types +## Message types EDB-JDBC JMS API supports the following message types and can be used in a standard way: @@ -366,7 +367,7 @@ Please note that the corresponding payload types (user-defined types) are not pr You can specify schema-qualified user-defined types, but the property types and message types must be in the same schema. 
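Before moving on to message properties, here is a short sketch tying together the session and acknowledgement modes described above. It is illustrative only: it assumes the imports, connection factory, and MSG_QUEUE queue used in the surrounding examples, and it omits exception handling.

```java
EDBJmsConnectionFactory edbJmsFact = new EDBJmsConnectionFactory("localhost", 5444, "edb", "edb", "edb");
EDBJmsQueueConnection conn = (EDBJmsQueueConnection) edbJmsFact.createQueueConnection();

// false + Session.CLIENT_ACKNOWLEDGE: nothing is acknowledged until the client calls acknowledge().
EDBJmsQueueSession session =
    (EDBJmsQueueSession) conn.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
EDBJmsQueue queue = (EDBJmsQueue) session.createQueue("MSG_QUEUE");

EDBJmsMessageConsumer messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue);

Message message = messageConsumer.receive();
// Acknowledging any one message marks every message received in this session as acknowledged.
message.acknowledge();

conn.close();
```

Because acknowledgement is session-scoped, calling acknowledge() once after a batch of receives is enough to acknowledge them all.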
-#### Message properties +## Message properties All of the above-mentioned message types support setting and getting message properties. Before creating the actual message type, you must create the corresponding user-defined type for message properties. @@ -383,7 +384,7 @@ AS object All primitive types of message properties are supported. -#### TextMessage +### TextMessage Text messages can be sent using the TextMessage interface. EDBTextMessageImpl is an implementation of TextMessage, but for most cases, you will be using the standard TextMessage. @@ -419,7 +420,7 @@ System.out.println(txtMsg.getText()); System.out.println(txtMsg.getStringProperty("myprop1")); ``` -#### BytesMessage +### BytesMessage The BytesMessage is used to send a stream of bytes. EDBBytesMessageImpl is an implementation of BytesMessage, but in most cases, you will use the standard BytesMessage. Before using the bytes message, a user-defined type must be created. @@ -449,7 +450,7 @@ byteMsg.readBytes(bytes); System.out.println(new String(bytes)); ``` -#### ObjectMessage +### ObjectMessage An ObjectMessage is used to send a serializable object as a message. EDBObjectMessageImpl is an implementation of ObjectMessage, but the standard ObjectMessage is most commonly used. @@ -526,7 +527,7 @@ System.out.println("Role: " + empBack.getRole()); ``` -### Message +## Message A Message can be used to send a message with only properties and no body. EDBMessageImpl is an implementation of a Message, but you will most often use the standard Message. Before using a message, it is required to create a user-defined type. @@ -556,7 +557,7 @@ System.out.println("myprop2: " + message.getStringProperty("myprop2")); System.out.println("myprop3: " + message.getStringProperty("myprop3")); ``` -#### Non-standard message +### Non-standard message EDB-JDBC JMS allows users to send and receive non-standard messages that are fully controlled by the API user. These messages do not support the setting and getting of properties. The process involves creating a user-defined type and setting it as the payload for the queue table. 
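As a quick, hedged illustration of that flow (complementing the full example in this section rather than replacing it), the following sketch sends and receives a non-standard message using the mytype payload from the server-side setup. The messageProducer, messageConsumer, and queue objects are assumed to be created as in the earlier examples, and MyType is assumed to be the payload class that corresponds to the mytype user-defined type.

```java
// Illustrative only: MyType maps to the mytype user-defined type on the server.
MyType payload = new MyType();
payload.setCode(321);
payload.setProject("Test Omega");
payload.setName("mytype");            // must match the user-defined type name

messageProducer.send(payload);

queue.setTypeName("mytype");          // tells the consumer which user-defined type to map the payload to
Message message = messageConsumer.receive();

MyType received = (MyType) message;   // the received message is cast directly to the payload class
System.out.println("Project: " + received.getProject());
```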
@@ -661,7 +662,7 @@ System.out.println("Project: "+ myt.getProject()); System.out.println("Manager: "+ myt.getManager()); ``` -#### InnermostCustom.java: +## InnermostCustom.java: ```java package mypackage; @@ -695,7 +696,7 @@ public class InnermostCustom extends UDTType { } ``` -#### InnerCustom.java +## InnerCustom.java ```java package mypackage; @@ -740,7 +741,7 @@ public class InnerCustom extends UDTType { } ``` -#### CustomType.java +## CustomType.java ```java package mypackage; From ba23aadd25c40351167bb77bd9fcbc6e38b8af31 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 18:30:28 +0530 Subject: [PATCH 16/67] Updated to latest version --- .../01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx | 0 .../01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/index.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx | 0 .../01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/02_requirements_overview.mdx | 0 .../03_advanced_server_jdbc_connector_overview.mdx | 0 .../01_loading_the_advanced_server_jdbc_connector.mdx | 0 .../01_additional_connection_properties.mdx | 0 .../02_preferring_synchronous_secondary_database_servers.mdx | 0 .../02_connecting_to_the_database/index.mdx | 0 .../03_executing_sql_statements_through_statement_objects.mdx | 0 .../04_retrieving_results_from_a_resultset_object.mdx | 0 .../05_freeing_resources.mdx | 0 .../06_handling_errors.mdx | 0 .../index.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/05a_using_advanced_queueing.mdx | 1 + .../06_executing_sql_commands_with_executeUpdate().mdx | 0 .../07_adding_a_graphical_interface_to_a_java_program.mdx | 0 .../01_reducing_client-side_resource_requirements.mdx | 0 .../02_using_preparedstatements_to_send_sql_commands.mdx | 0 .../03_executing_stored_procedures.mdx | 0 .../04_using_ref_cursors_with_java.mdx | 0 .../05_using_bytea_data_with_java.mdx | 0 .../06_using_object_types_and_collections_with_java.mdx | 0 ...07_asynchronous_notification_handling_with_noticelistener.mdx | 0 .../08_advanced_jdbc_connector_functionality/index.mdx | 0 .../01_using_ssl/01_configuring_the_server.mdx | 0 .../01_using_ssl/02_configuring_the_client.mdx | 0 .../01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx | 0 .../04_using_certificate_authentication_without_a_password.mdx | 0 .../09_security_and_encryption/01_using_ssl/index.mdx | 0 .../09_security_and_encryption/02_scram_compatibility.mdx | 0 .../03_support_for_gssapi_encrypted_connection.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/index.mdx | 0 .../10_advanced_server_jdbc_connector_logging.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/11_reference_jdbc_data_types.mdx | 0 .../images/core_classes_and_interfaces.png | 0 .../{42.5.4.2 => 42.7.3.1}/images/drivermanager_drivers.png | 0 .../{42.5.4.2 => 42.7.3.1}/images/jdbc_class_relationships.png | 0 .../{42.5.4.2 => 42.7.3.1}/images/the_showemployees_window.png | 0 .../docs/jdbc_connector/{42.5.4.2 => 
42.7.3.1}/index.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/configuring_for_java.mdx | 0 .../jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/index.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/linux_arm64/index.mdx | 0 .../installing/linux_arm64/jdbc_debian_12.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/index.mdx | 0 .../installing/linux_ppc64le/jdbc_rhel_8.mdx | 0 .../installing/linux_ppc64le/jdbc_rhel_9.mdx | 0 .../installing/linux_ppc64le/jdbc_sles_12.mdx | 0 .../installing/linux_ppc64le/jdbc_sles_15.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/index.mdx | 0 .../installing/linux_x86_64/jdbc_debian_11.mdx | 0 .../installing/linux_x86_64/jdbc_debian_12.mdx | 0 .../installing/linux_x86_64/jdbc_other_linux_8.mdx | 0 .../installing/linux_x86_64/jdbc_other_linux_9.mdx | 0 .../installing/linux_x86_64/jdbc_rhel_8.mdx | 0 .../installing/linux_x86_64/jdbc_rhel_9.mdx | 0 .../installing/linux_x86_64/jdbc_sles_12.mdx | 0 .../installing/linux_x86_64/jdbc_sles_15.mdx | 0 .../installing/linux_x86_64/jdbc_ubuntu_20.mdx | 0 .../installing/linux_x86_64/jdbc_ubuntu_22.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/upgrading.mdx | 0 .../{42.5.4.2 => 42.7.3.1}/installing/using_maven.mdx | 0 .../jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/windows.mdx | 0 74 files changed, 1 insertion(+) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/02_requirements_overview.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/03_advanced_server_jdbc_connector_overview.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 
42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/05a_using_advanced_queueing.mdx (99%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/06_executing_sql_commands_with_executeUpdate().mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/07_adding_a_graphical_interface_to_a_java_program.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/08_advanced_jdbc_connector_functionality/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 
42.7.3.1}/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/01_using_ssl/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/02_scram_compatibility.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/09_security_and_encryption/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/10_advanced_server_jdbc_connector_logging.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/11_reference_jdbc_data_types.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/images/core_classes_and_interfaces.png (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/images/drivermanager_drivers.png (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/images/jdbc_class_relationships.png (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/images/the_showemployees_window.png (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/configuring_for_java.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_arm64/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_arm64/jdbc_debian_12.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/jdbc_rhel_8.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/jdbc_rhel_9.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/jdbc_sles_12.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_ppc64le/jdbc_sles_15.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/index.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_debian_11.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_debian_12.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_other_linux_8.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_other_linux_9.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_rhel_8.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_rhel_9.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_sles_12.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_sles_15.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_ubuntu_20.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/linux_x86_64/jdbc_ubuntu_22.mdx (100%) rename 
product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/upgrading.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/using_maven.mdx (100%) rename product_docs/docs/jdbc_connector/{42.5.4.2 => 42.7.3.1}/installing/windows.mdx (100%) diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/08_jdbc_42.3.3.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/14_jdbc_42.2.12.3_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/16_jdbc_42.2.9.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/index.mdx rename to 
product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.0.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.1.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.1.2_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.4.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.5.4.2_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/02_requirements_overview.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/02_requirements_overview.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/02_requirements_overview.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/02_requirements_overview.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/03_advanced_server_jdbc_connector_overview.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/03_advanced_server_jdbc_connector_overview.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/03_advanced_server_jdbc_connector_overview.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/03_advanced_server_jdbc_connector_overview.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx 
b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx diff --git 
a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/05a_using_advanced_queueing.mdx similarity index 99% rename from product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/05a_using_advanced_queueing.mdx index a26f7f7dd30..a151fbf5467 100644 --- a/product_docs/docs/jdbc_connector/42.5.4.2/05a_using_advanced_queueing.mdx +++ b/product_docs/docs/jdbc_connector/42.7.3.1/05a_using_advanced_queueing.mdx @@ -1,5 +1,6 @@ --- title: "Using advanced queueing" +deepToC: true indexdepth: 3 --- diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/06_executing_sql_commands_with_executeUpdate().mdx b/product_docs/docs/jdbc_connector/42.7.3.1/06_executing_sql_commands_with_executeUpdate().mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/06_executing_sql_commands_with_executeUpdate().mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/06_executing_sql_commands_with_executeUpdate().mdx diff --git 
a/product_docs/docs/jdbc_connector/42.5.4.2/07_adding_a_graphical_interface_to_a_java_program.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/07_adding_a_graphical_interface_to_a_java_program.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/07_adding_a_graphical_interface_to_a_java_program.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/07_adding_a_graphical_interface_to_a_java_program.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx similarity index 100% rename from 
product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/08_advanced_jdbc_connector_functionality/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/08_advanced_jdbc_connector_functionality/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx diff --git 
a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/01_using_ssl/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/01_using_ssl/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/02_scram_compatibility.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/02_scram_compatibility.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/02_scram_compatibility.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/02_scram_compatibility.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/09_security_and_encryption/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/09_security_and_encryption/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/10_advanced_server_jdbc_connector_logging.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/10_advanced_server_jdbc_connector_logging.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/10_advanced_server_jdbc_connector_logging.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/10_advanced_server_jdbc_connector_logging.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/11_reference_jdbc_data_types.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/11_reference_jdbc_data_types.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/11_reference_jdbc_data_types.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/11_reference_jdbc_data_types.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/images/core_classes_and_interfaces.png b/product_docs/docs/jdbc_connector/42.7.3.1/images/core_classes_and_interfaces.png similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/images/core_classes_and_interfaces.png rename to product_docs/docs/jdbc_connector/42.7.3.1/images/core_classes_and_interfaces.png diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/images/drivermanager_drivers.png b/product_docs/docs/jdbc_connector/42.7.3.1/images/drivermanager_drivers.png similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/images/drivermanager_drivers.png rename to product_docs/docs/jdbc_connector/42.7.3.1/images/drivermanager_drivers.png diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/images/jdbc_class_relationships.png b/product_docs/docs/jdbc_connector/42.7.3.1/images/jdbc_class_relationships.png similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/images/jdbc_class_relationships.png 
rename to product_docs/docs/jdbc_connector/42.7.3.1/images/jdbc_class_relationships.png diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/images/the_showemployees_window.png b/product_docs/docs/jdbc_connector/42.7.3.1/images/the_showemployees_window.png similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/images/the_showemployees_window.png rename to product_docs/docs/jdbc_connector/42.7.3.1/images/the_showemployees_window.png diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/configuring_for_java.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/configuring_for_java.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/configuring_for_java.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/configuring_for_java.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_arm64/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_arm64/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_arm64/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_arm64/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_arm64/jdbc_debian_12.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_arm64/jdbc_debian_12.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_arm64/jdbc_debian_12.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_arm64/jdbc_debian_12.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_rhel_8.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_rhel_8.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_rhel_8.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_rhel_8.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_rhel_9.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_rhel_9.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_rhel_9.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_rhel_9.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_sles_12.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_sles_12.mdx similarity index 100% rename from 
product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_sles_12.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_sles_12.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_sles_15.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_sles_15.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_ppc64le/jdbc_sles_15.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_ppc64le/jdbc_sles_15.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/index.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/index.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/index.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/index.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_debian_11.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_debian_11.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_debian_11.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_debian_11.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_debian_12.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_debian_12.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_debian_12.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_debian_12.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_other_linux_8.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_other_linux_8.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_other_linux_8.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_other_linux_8.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_other_linux_9.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_other_linux_9.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_other_linux_9.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_other_linux_9.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_rhel_8.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_rhel_8.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_rhel_8.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_rhel_8.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_rhel_9.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_rhel_9.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_rhel_9.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_rhel_9.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_sles_12.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_sles_12.mdx similarity index 100% rename from 
product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_sles_12.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_sles_12.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_sles_15.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_sles_15.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_sles_15.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_sles_15.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_ubuntu_20.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_ubuntu_20.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_ubuntu_20.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_ubuntu_20.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_ubuntu_22.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_ubuntu_22.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/linux_x86_64/jdbc_ubuntu_22.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/linux_x86_64/jdbc_ubuntu_22.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/upgrading.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/upgrading.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/upgrading.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/upgrading.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/using_maven.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/using_maven.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/using_maven.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/using_maven.mdx diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/installing/windows.mdx similarity index 100% rename from product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx rename to product_docs/docs/jdbc_connector/42.7.3.1/installing/windows.mdx From df3881e21dcc0b6f1ace1a41749459b1eff6f347 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 18:49:13 +0530 Subject: [PATCH 17/67] fixed blank spaces --- .../42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx index 2a15887baaf..4ed995cbe8b 100644 --- a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx +++ b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -18,7 +18,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect | Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 | | Enhancement | JMS message types, such as message, text message, bytes message, and object message, are now supported. 
| #37884 | | Enhancement | EDBJmsConnectionFactory now has an alternative constructor that takes SQL Connection as a parameter. | #38465 | -| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().

- EDBJmsSession now implements the critical close() method.

- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:



- aq$_jms_message



- aq$_jms_text_message



- aq$_jms_bytes_message



- aq$_jms_object_message

- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | +| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().
- EDBJmsSession now implements the critical close() method.
- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
- aq$_jms_message
- aq$_jms_text_message
- aq$_jms_bytes_message
- aq$_jms_object_message
- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | From c1df12fcd48a838711624e3ca18ebca9654ef162 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 18:56:38 +0530 Subject: [PATCH 18/67] updated bullets --- .../42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx index 4ed995cbe8b..5401f43c14d 100644 --- a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx +++ b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -18,7 +18,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect | Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 | | Enhancement | JMS message types, such as message, text message, bytes message, and object message, are now supported. | #37884 | | Enhancement | EDBJmsConnectionFactory now has an alternative constructor that takes SQL Connection as a parameter. | #38465 | -| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().
- EDBJmsSession now implements the critical close() method.
- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
- aq$_jms_message
- aq$_jms_text_message
- aq$_jms_bytes_message
- aq$_jms_object_message
- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | +| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().
- EDBJmsSession now implements the critical close() method.
- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
1. aq$_jms_message
1. aq$_jms_text_message
1. aq$_jms_bytes_message
1. aq$_jms_object_message
- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | From fb4217ad45d6675691e20c64ff1cc1d4ff27ec36 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 19:00:38 +0530 Subject: [PATCH 19/67] fixed more blank spaces --- .../42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx index 5401f43c14d..c58133fe82a 100644 --- a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx +++ b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -13,7 +13,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect |---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| | Upstream Merge | Merged with the upstream community driver version 42.7.3. See the community [JDBC documentation](https://jdbc.postgresql.org/changelogs/2024-03-14-42.7.3-release/) for details. | | | Enhancement | Improved the parsing issue with the large SQL statements (for MTK/SQL Plus). | | -| Enhancement | JMS Enhancements

- The EDB JMS API has been made according to the JMS standard. All supported JMS classes related to Factory, Connection, Session, Producer, Consumer and Message types can now be used in a standard way.

- DefaultMessageListenerContainer can now be used to continuously pull messages from EDB JMS Queue.

-Transacted Sessions are implemented. | | +| Enhancement | JMS Enhancements
- The EDB JMS API has been made according to the JMS standard. All supported JMS classes related to Factory, Connection, Session, Producer, Consumer and Message types can now be used in a standard way.
- DefaultMessageListenerContainer can now be used to continuously pull messages from EDB JMS Queue.
-Transacted Sessions are implemented. | | | Enhancement | Fixed null pointer exception in case of timeout or end-of-fetch during message dequeue. | #37882 | | Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 | | Enhancement | JMS message types, such as message, text message, bytes message, and object message, are now supported. | #37884 | From 5430ff606471c163f1ba89eb0b4fbb61e40eee72 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 10 Sep 2024 19:01:28 +0530 Subject: [PATCH 20/67] Added numbers --- .../42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx index c58133fe82a..1600f0c8917 100644 --- a/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx +++ b/product_docs/docs/jdbc_connector/42.7.3.1/01_jdbc_rel_notes/jdbc_42.7.3.1_rel_notes.mdx @@ -18,7 +18,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect | Enhancement | EDB JMS API now supports the basic Apache Camel Route concept as a source and destination. | #37882 | | Enhancement | JMS message types, such as message, text message, bytes message, and object message, are now supported. | #37884 | | Enhancement | EDBJmsConnectionFactory now has an alternative constructor that takes SQL Connection as a parameter. | #38465 | -| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().
- EDBJmsSession now implements the critical close() method.
- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
1. aq$_jms_message
1. aq$_jms_text_message
1. aq$_jms_bytes_message
1. aq$_jms_object_message
- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | +| Enhancement | - EDBJmsConnection now implements the critical lifecycle methods start() and stop().
- EDBJmsSession now implements the critical close() method.
- EDBJmsSession.createQueue now returns a valid queue instance.
- EDB JMS message types are now aligned with the JMS standard. The following message types are now supported:
1. aq$_jms_message
2. aq$_jms_text_message
3. aq$_jms_bytes_message
4. aq$_jms_object_message
- All message types now support set*Property() and get*Property() for setting and getting properties of JMS supported types. | #38542 | From fa53f85930e1d37c031551f4044c69feac48f7cb Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 10 Sep 2024 14:47:33 +0100 Subject: [PATCH 21/67] more changes Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/data_migration/edbloader.mdx | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/pgd/5/data_migration/edbloader.mdx b/product_docs/docs/pgd/5/data_migration/edbloader.mdx index a4a5f7b0f96..6433b1cd6c3 100644 --- a/product_docs/docs/pgd/5/data_migration/edbloader.mdx +++ b/product_docs/docs/pgd/5/data_migration/edbloader.mdx @@ -15,12 +15,9 @@ As EDB\*Loader is a utility for EDB Postgres Advanced Server, it's available for ### Replication and EDB\*Loader -As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment, but you will not be able to use the direct load path method. This is because the[direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) skips use of the WAL, upon which logical replication relies. That means that only the node connected, to by EDB\*Loader gets the data that EDB\*Loader is loading and no replication is done to the other nodes. +As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment. You cannot use the direct load path method because the[direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) skips use of the WAL, upon which all replication relies. That means that only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading and no data replicates to the other nodes. -To work around this limitation, you can run EDB\*loader's direct load path method independently on each node. This can be performed either on one node at a time or in parallel to all nodes, depending on the use case. +With PGD, you can make use of EDB\*loader's direct load path method by running it independently on each node. You can perform this either on one node at a time or in parallel to all nodes, depending on the use case. When using the direct path load method on multiple nodes, it's important to ensure there are no other writes happening to the table concurrently as this can result in inconsistencies. -!!! Warning -When using the direct path load method on multiple nodes, it's important to ensure there are no other writes happening to the table concurrently as this can result in inconsistencies. -!!! From e4876aadee1ee40db08fc28e057de8ed87e6161a Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 10 Sep 2024 07:02:15 -0700 Subject: [PATCH 22/67] Fix typo (missing space) --- product_docs/docs/pgd/5/data_migration/edbloader.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/data_migration/edbloader.mdx b/product_docs/docs/pgd/5/data_migration/edbloader.mdx index 6433b1cd6c3..9b6b5d6cf96 100644 --- a/product_docs/docs/pgd/5/data_migration/edbloader.mdx +++ b/product_docs/docs/pgd/5/data_migration/edbloader.mdx @@ -15,7 +15,7 @@ As EDB\*Loader is a utility for EDB Postgres Advanced Server, it's available for ### Replication and EDB\*Loader -As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment. 
You cannot use the direct load path method because the[direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) skips use of the WAL, upon which all replication relies. That means that only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading and no data replicates to the other nodes. +As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment. You cannot use the direct load path method because the [direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) skips use of the WAL, upon which all replication relies. That means that only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading and no data replicates to the other nodes. With PGD, you can make use of EDB\*loader's direct load path method by running it independently on each node. You can perform this either on one node at a time or in parallel to all nodes, depending on the use case. When using the direct path load method on multiple nodes, it's important to ensure there are no other writes happening to the table concurrently as this can result in inconsistencies. From 244099e5299d14f250a75ba0a93f72d0b55691ef Mon Sep 17 00:00:00 2001 From: Ian Barwick Date: Wed, 11 Sep 2024 15:37:51 +0900 Subject: [PATCH 23/67] PGD: fix references to deprecated GUC "bdr.raft_election_timeout" From 5.0 this has been replaced by "bdr.raft_global_election_timeout". "bdr.raft_election_timeout" still works as an alias, but it's preferable to avoid using it. DOCS-1024. --- product_docs/docs/pgd/5/monitoring/sql.mdx | 14 +++++++------- product_docs/docs/pgd/5/reference/functions.mdx | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/pgd/5/monitoring/sql.mdx b/product_docs/docs/pgd/5/monitoring/sql.mdx index 908d4220850..28bc0a7f8bb 100644 --- a/product_docs/docs/pgd/5/monitoring/sql.mdx +++ b/product_docs/docs/pgd/5/monitoring/sql.mdx @@ -573,10 +573,10 @@ From time to time, Raft consensus starts a new election to define a new `RAFT_LEADER`. During an election, there might be an intermediary situation where there's no `RAFT_LEADER`, and some of the nodes consider themselves as `RAFT_CANDIDATE`. The whole election can't take longer -than `bdr.raft_election_timeout` (by default it's set to 6 seconds). If +than `bdr.raft_global_election_timeout` (by default it's set to 6 seconds). If the query above returns an in-election situation, then wait for -`bdr.raft_election_timeout`, and run the query again. If after -`bdr.raft_election_timeout` has passed and some the listed conditions are +`bdr.raft_global_election_timeout`, and run the query again. If after +`bdr.raft_global_election_timeout` has passed and some the listed conditions are still not met, then Raft consensus isn't working. Raft consensus might not be working correctly on only a single node. @@ -587,15 +587,15 @@ make sure that: - All PGD nodes are accessible to each other through both regular and replication connections (check file `pg_hba.conf`). - PGD versions are the same on all nodes. -- `bdr.raft_election_timeout` is the same on all nodes. +- `bdr.raft_global_election_timeout` is the same on all nodes. In some cases, especially if nodes are geographically distant from each other or network latency is high, the default value of -`bdr.raft_election_timeout` (6 seconds) might not be enough. If Raft +`bdr.raft_global_election_timeout` (6 seconds) might not be enough. 
If Raft consensus is still not working even after making sure everything is -correct, consider increasing `bdr.raft_election_timeout` to 30 +correct, consider increasing `bdr.raft_global_election_timeout` to 30 seconds on all nodes. For PGD 3.6.11 and later, setting -`bdr.raft_election_timeout` requires only a server reload. +`bdr.raft_global_election_timeout` requires only a server reload. Given how Raft consensus affects cluster operational tasks, and also as Raft consensus is directly responsible for advancing the group slot, diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx index ab2c5e84173..45d46542e59 100644 --- a/product_docs/docs/pgd/5/reference/functions.mdx +++ b/product_docs/docs/pgd/5/reference/functions.mdx @@ -245,7 +245,7 @@ with full voting rights. If `wait_for_completion` is false, the request is served on a best-effort basis. If the node can't become a leader in the -`bdr.raft_election_timeout` period, then some other capable node +`bdr.raft_global_lection_timeout` period, then some other capable node becomes the leader again. Also, the leadership can change over the period of time per Raft protocol. A `true` return result indicates only that the request was submitted successfully. From e986298514f5923eb853595440de8d3be85b990c Mon Sep 17 00:00:00 2001 From: Ian Barwick Date: Thu, 12 Sep 2024 15:12:58 +0900 Subject: [PATCH 24/67] PGD: improve description of "transaction_id" pseudo-GUC This is only queryable as a reported GUC via libpq's PQparameterStatus() function, and is never accessible at SQL level. Rewrite the description to clarify this, linking to a helpful code example while we're at it. Also rework the introductory section to make it clearer that one of the GUC-like objects about to be described is not query-able via SQL. DOCS-1028. --- product_docs/docs/pgd/5/reference/functions.mdx | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx index 45d46542e59..a9b3146f47e 100644 --- a/product_docs/docs/pgd/5/reference/functions.mdx +++ b/product_docs/docs/pgd/5/reference/functions.mdx @@ -46,9 +46,9 @@ Returns the current subscription statistics. ## System and progress information parameters -PGD exposes some parameters that you can query using `SHOW` in psql -or using `PQparameterStatus` (or equivalent) from a client -application. +PGD exposes some parameters that you can query directly in SQL using e.g. +`SHOW` or the `current_setting()` function, and/or using `PQparameterStatus` +(or equivalent) from a client application. ### `bdr.local_node_id` @@ -68,8 +68,10 @@ becomes remotely visible. ### `transaction_id` -As soon as Postgres assigns a transaction id, if CAMO is enabled, this parameter is -updated to show the transaction id just assigned. +If a CAMO transaction is in progress, `transaction_id` will be updated to show +the assigned transaction id. Note that this parameter can only be queried +using `PQparameterStatus` or equivalent. See section [Application use](../durability/camo#application-use) +for a usage example. 
### `bdr.is_node_connected` From 15cac204b4f3e88aeee7fa2726f85de24b0470f9 Mon Sep 17 00:00:00 2001 From: kunliuedb <95676424+kunliuedb@users.noreply.github.com> Date: Thu, 12 Sep 2024 16:02:20 +0800 Subject: [PATCH 25/67] Update quick_start.mdx --- advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx index a9e1a8d2bc4..df985fb9289 100644 --- a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx +++ b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx @@ -110,7 +110,7 @@ flavors in the installation when you connect. as Delta Tables. Every cluster comes pre-loaded to point to a storage bucket with benchmarking data inside (TPC-H, TPC-DS, Clickbench) at scale factors 1 and 10. -* Only AWS is supported at the moment. Bring Your OWn Account (BYOA) is not supported. +* Only AWS is supported at the moment. Bring Your Own Account (BYOA) is not supported. * You can deploy a cluster in any region that is activated in your EDB Postgres AI Account. Each region has a bucket with a copy of the benchmarking data, and so when you launch a cluster, it will use the From fd216480dfb5fcb587bf086829c42aadcf1a5efb Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 27 Aug 2024 12:47:24 +0100 Subject: [PATCH 26/67] Rebuild with test DMS content and fix right hand TOCs Signed-off-by: Dj Walker-Morgan --- .../getting_started/apply_constraints.mdx | 21 ++ .../getting_started/config_reader.mdx | 160 ++++++++++ .../getting_started/create_database.mdx | 23 ++ .../getting_started/create_migration.mdx | 24 ++ .../getting_started/index.mdx | 28 ++ .../getting_started/installing/index.mdx | 31 ++ .../installing/linux_x86_64/dms_debian_11.mdx | 34 +++ .../installing/linux_x86_64/dms_debian_12.mdx | 34 +++ .../linux_x86_64/dms_other_linux_9.mdx | 34 +++ .../installing/linux_x86_64/dms_rhel_8.mdx | 34 +++ .../installing/linux_x86_64/dms_rhel_9.mdx | 34 +++ .../installing/linux_x86_64/dms_sles_12.mdx | 34 +++ .../installing/linux_x86_64/dms_sles_15.mdx | 34 +++ .../installing/linux_x86_64/dms_ubuntu_20.mdx | 34 +++ .../installing/linux_x86_64/dms_ubuntu_22.mdx | 34 +++ .../installing/linux_x86_64/index.mdx | 48 +++ .../getting_started/mark_completed.mdx | 11 + .../getting_started/prepare_schema.mdx | 66 +++++ .../getting_started/preparing_db/index.mdx | 11 + .../preparing_oracle_source_databases.mdx | 279 ++++++++++++++++++ .../preparing_postgres_source_databases.mdx | 130 ++++++++ .../getting_started/remove_software.mdx | 5 + .../getting_started/verify_migration.mdx | 7 + .../data-migration-service/index.mdx | 29 ++ .../data-migration-service/known_issues.mdx | 6 + .../data-migration-service/limitations.mdx | 24 ++ .../rel_notes/index.mdx | 18 ++ .../rel_notes/rel_notes_2.0.0_preview.mdx | 14 + .../supported_versions.mdx | 36 +++ .../data-migration-service/terminology.mdx | 27 ++ .../data-migration-service/upgrading.mdx | 16 + .../edb-postgres-ai/migration-etl/index.mdx | 15 + .../migration-etl/migration-and-ai.mdx | 17 ++ src/components/table-of-contents.js | 2 + src/templates/doc.js | 10 +- src/templates/learn-doc.js | 63 +++- 36 files changed, 1418 insertions(+), 9 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx create 
mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx create mode 100644 
advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/migration-and-ai.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx new file mode 100644 index 00000000000..e158db04ba8 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx @@ -0,0 +1,21 @@ +--- +title: "Applying constraints" +--- + +At the beginning of your data migration journey with EDB Data Migration Service (EDB DMS), you [prepared and imported the schema](prepare_schema) of your source database. Now, re-apply the constraints that were excluded from the schema and data migration. + +## `PRIMARY KEY` and `UNIQUE` constraints + +For `PRIMARY KEY` and `UNIQUE` constraints, you have already created the tables and constraints in the target Postgres database. This allowed EDB DMS to map them to the source objects and migrate data successfully. You don't need to do anything else. + +The same applies to `NOT NULL` constraints if you included them in your schema import. + +## `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints + +You can now re-apply the `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints you excluded during the [schema preparation and import](prepare_schema). For example, you can use ALTER statements. + +## Ensuring data integrity + +Rows in tables that do not have `PRIMARY KEY` or `UNIQUE` constraints were migrated with at-least-once delivery; therefore, it is possible that these rows are duplicated. + +Deduplication can be performed as part of the [verification](verify_migration). \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx new file mode 100644 index 00000000000..21a42923e74 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx @@ -0,0 +1,160 @@ +--- +title: "Configuring and running the EDB DMS Reader" +deepToC: true +--- + +## Getting credentials + +1. Access the [EDB Postgres AI® Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials. + +1. Select the project where you created the database cluster. + +1. Within your project, select **Migrate** > **Credentials**. + +1. Unzip the credentials folder and copy it to the host where the reader is installed. + +## Configuring the reader + +Set the following environment variables in `/opt/cdcreader/run-cdcreader.sh` with the right values: + +```shell +### set the following environment variables: + +############################################## +# Data Migration Service Cloud Configuration # +############################################## + +# This ID is used to identify the cdcreader. 
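+# For example (hypothetical value), any unique name of up to 255 characters works here,
+# for instance: export DBZ_ID=oracle-sales-reader-01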
+#export DBZ_ID= + +# Now we only support aws +#export CLOUD_PROVIDER= + +# No need to change about this field +#export RW_SERVICE_HOST=https://transporter-rw-service.biganimal.com + +# You need to create migration credentials in EDB postgresAI platform and set these fields with the path of credential files +#export TLS_PRIVATE_KEY_PATH=$MY_CREDENTIALS_PATH/client-key.pem +#export TLS_CERTIFICATE_PATH=$MY_CREDENTIALS_PATH/client-cert.pem +#export TLS_CA_PATH=$MY_CREDENTIALS_PATH/int.crt +#export APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION=$MY_CREDENTIALS_PATH/client.keystore.p12 +#export APICURIOREQUEST_TRUSTSTORE_LOCATION=$MY_CREDENTIALS_PATH/int.truststore.p12 +#export KAFKASECURITY_CLIENT_KEYSTORE_LOCATION=$MY_CREDENTIALS_PATH/client.keystore.p12 +#export KAFKASECURITY_TRUSTSTORE_LOCATION=$MY_CREDENTIALS_PATH/int.truststore.p12 + +################################################## +# Data Migration Service Source DB Configuration # +################################################## + +# A sample configuration to create a single postgres database connection: +#export DBZ_DATABASES_0__TYPE=POSTGRES +#export DBZ_DATABASES_0__HOSTNAME=localhost +#export DBZ_DATABASES_0__PORT=5432 +#export DBZ_DATABASES_0__CATALOG=source +#export DBZ_DATABASES_0__USERNAME=postgres +#export DBZ_DATABASES_0__PASSWORD=password + +# You can increase the index to config more database for the reader +#export DBZ_DATABASES_1__TYPE=ORACLE +#export DBZ_DATABASES_1__HOSTNAME=localhost +#export DBZ_DATABASES_1__PORT=1521 +#export DBZ_DATABASES_1__CATALOG=ORCLCDB/ORCLPDB1 +#export DBZ_DATABASES_1__USERNAME=oracle +#export DBZ_DATABASES_1__PASSWORD=password + +########################################## +# Optional Parameters Below # +########################################## + +# Configure logging +# Generic loglevel +#export QUARKUS_LOG_LEVEL=DEBUG +# Loglevel for a single package +#export QUARKUS_LOG_CATEGORY__COM_ENTERPRISEDB__LEVEL=DEBUG +``` + +## Parameters + +### DBZ_ID + +This is the name you assign to identify a source. This name will later appear as a _source_ in the **Migrate** > **Sources** section of the EDB Postgres AI Console. + +Consider the following ID guidelines: + +- The maximum character length for the ID is 255 characters. +- You can use lowercase and uppercase characters, numbers, underscores(_) and hyphens(-) for the ID. Other special characters are not supported. +- The ID must be unique. The source instances cannot have the same ID. + +### RW_SERVICE_HOST + +Specifies the URL of the service that will host the migration. `transporter-rw-service` is always https://transporter-rw-service.biganimal.com. + +### TLS_PRIVATE_KEY_PATH + +Directory path to the `client-key.pem` private key you downloaded from the EDB Postgres AI Console. +The Reader's HTTP client uses it to perform mTLS authentication with the `transporter-rw-service`. + +### TLS_CERTIFICATE_PATH + +Directory path to the X509 `client-cert.pem` certificate you downloaded from the EDB Postgres AI Console. +The Reader's HTTP client uses it to perform mTLS authentication with the `transporter-rw-service`. + +### TLS_CA_PATH + +Directory path to the `int.cert` Certificate Authority you downloaded from the EDB Postgres AI Console. +It signs the certificate configured in TLS_CERTIFICATE_PATH. + +### APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION + +Directory path to the `client-keystore.p12` keystore location file you downloaded from the EDB Postgres AI Console. 
+It is created from the private key and certificate configured in TLS_PRIVATE_KEY_PATH and TLS_CERTIFICATE_PATH. +The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`. + +### APICURIOREQUEST_TRUSTSTORE_LOCATION +Created from the Certificate Authority configured in TLS_CA_PATH. +The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`. + +### DBZ_DATABASES +This is the list of source databases you want the reader to connect to. You can configure multiple databases for one reader. +You need to increase the index manually in your configuration. + +For example: + +`DBZ_DATABASES_0__TYPE` is the type of the first source database. + +`DBZ_DATABASES_1__TYPE` is the type of the second source database. + +#### DBZ_DATABASES_0__TYPE +Source database type. Currently, ORACLE and POSTGRES are supported. + +#### DBZ_DATABASES_0__HOSTNAME +Source database hostname + +#### DBZ_DATABASES_0__PORT +Source database port + +#### DBZ_DATABASES_0__CATALOG +Source database catalog + +#### DBZ_DATABASES_0__USERNAME +Source database username + +#### DBZ_DATABASES_0__PASSWORD +Source database password + +Once the reader is running, the CDC source appears in the EDB Postgres AI Console. You can select this source for any [migration](create_migration). + + +## Running the EDB DMS Reader + +1. Start the migration: + + ```shell + cd /opt/cdcreader + ./run-cdcreader.sh + ``` + +1. Go to the [EDB Postgres AI Console](https://portal.biganimal.com), and verify that a source with the `DBZ_ID` name is displayed in **Migrate** > **Sources**. + + + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx new file mode 100644 index 00000000000..f45e9466dad --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx @@ -0,0 +1,23 @@ +--- +title: "Creating a database cluster" +--- + +You can use an existing EDB Postgres® AI cluster or create a new cluster for the target of the database migration. + +To use an existing cluster as a target for the migration, ensure that the tables you migrate and the load generated on the target don't interfere with existing workloads. + +1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials. + +1. Select the project where you want to create the database cluster. + + See [Creating a project](/biganimal/latest/administering_cluster/projects/#creating-a-project) if you want to create one. + +1. Within your project, select **Create New** and **Database Cluster** to create an instance that will serve as the target for the EDB Data Migration Service (EDB DMS). + + See [Creating a cluster](/biganimal/release/getting_started/creating_a_cluster/) for detailed instructions on how to create a single-node or a primary/standby high availability cluster. + + See [Creating a distributed high-availability cluster](/biganimal/latest/getting_started/creating_a_cluster/creating_a_dha_cluster/) for detailed instructions on how to create a distributed high availability cluster. + +1. In the **Clusters** page, select your cluster, and use the **Quick Connect** option to access your instance from your terminal. + +1. Create a new empty database. For an example, see [Create a new database](/biganimal/latest/free_trial/quickstart/#create-a-new-database). 
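+
+   For example, a minimal sketch of this last step using `psql` (the connection string and the database name `target_db` are illustrative placeholders, not values supplied by the service):
+
+   ```shell
+   # connect with the string shown by Quick Connect, then create an empty target database
+   psql "<quick-connect-connection-string>" -c "CREATE DATABASE target_db;"
+   ```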
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx new file mode 100644 index 00000000000..3c5911256c7 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx @@ -0,0 +1,24 @@ +--- +title: "Creating a migration" +--- + +After you use the EDB DMS Reader to read the source database, create a new migration in the EDB Postgres® AI Console. +This establishes a sync between the source database and a target cluster in the EDB Postgres AI Console. + +1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials. + +1. Select the project where you created the database cluster. + +1. Within your project, select **Migrate** > **Migrations**. + +1. In the **Migrations** page, select **Create New Migration** > **To Managed Postgres**. + +1. In the **Create Migration** page, assign a **Name** to the migration. + +1. Select the **Source** of the migration. The ID for the EDB DMS Reader is listed in the drop-down menu. + +1. Under **Destination**, select a target cluster for the migration and enter the name of the database where you want the migration to copy data and select **Next**. + +1. Select the tables and columns to migrate. Modify the table and column names if needed. + +1. Select **Create Migration**. \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx new file mode 100644 index 00000000000..672699c5abc --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx @@ -0,0 +1,28 @@ +--- +title: "Getting started" +description: Understand how to create a migration from planning to execution. +navigation: + - create_database + - prepare_schema + - preparing_db + - installing + - config_reader + - create_migration + - mark_completed + - apply_constraints + - verify_migration + - remove_software +--- + +Setting up an EDB Data Migration consists of a number of steps. + +1. [Create a target database cluster](create_database) in the EDB Postgres® AI Console. +1. [Prepare the schema](prepare_schema) with Migration Portal and migrate the schema to the target database. +1. [Prepare your source Oracle or Postgres database](preparing_db) with `sqlplus` or `psql`. +1. [Install the EDB DMS Reader](installing) on your machine with your terminal. +1. [Configure the EDB DMS Reader](config_reader) on your machine with your terminal. +1. [Create a new migration](create_migration) in the EDB Postgres AI Console. +1. [Mark the Migration as completed](mark_completed) in the EDB Postgres AI Console. +1. [Apply constraints](apply_constraints) to the new database. +1. [Verify the migration completed successfully](verify_migration) with LiveCompare. +1. [Remove customer software](remove_software). 
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx new file mode 100644 index 00000000000..a22ead4771b --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx @@ -0,0 +1,31 @@ +--- +navTitle: Installing EDB DMS Reader +title: Installing EDB DMS Reader on Linux + +navigation: + - linux_x86_64 +--- + +Select a link to access the applicable installation instructions. + +## Linux [x86-64 (amd64)](linux_x86_64) + +### Red Hat Enterprise Linux (RHEL) and derivatives + +- [RHEL 9](linux_x86_64/dms_rhel_9), [RHEL 8](linux_x86_64/dms_rhel_8) + +- [Oracle Linux (OL) 9](linux_x86_64/dms_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/dms_rhel_8) + +- [Rocky Linux 9](linux_x86_64/dms_other_linux_9) + +- [AlmaLinux 9](linux_x86_64/dms_other_linux_9) + +### SUSE Linux Enterprise (SLES) + +- [SLES 15](linux_x86_64/dms_sles_15), [SLES 12](linux_x86_64/dms_sles_12) + +### Debian and derivatives + +- [Ubuntu 22.04](linux_x86_64/dms_ubuntu_22), [Ubuntu 20.04](linux_x86_64/dms_ubuntu_20) + +- [Debian 12](linux_x86_64/dms_debian_12), [Debian 11](linux_x86_64/dms_debian_11) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx new file mode 100644 index 00000000000..cd7be7a8559 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx @@ -0,0 +1,34 @@ +--- +navTitle: Debian 11 +title: Installing the EDB DMS Reader on Debian 11 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo apt-get install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx new file mode 100644 index 00000000000..abd9510a40d --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx @@ -0,0 +1,34 @@ +--- +navTitle: Debian 12 +title: Installing the EDB DMS Reader on Debian 12 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. 
+ + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo apt-get install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx new file mode 100644 index 00000000000..5c50ed81e9d --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx @@ -0,0 +1,34 @@ +--- +navTitle: AlmaLinux 9 or Rocky Linux 9 +title: Installing the EDB DMS Reader on AlmaLinux 9 or Rocky Linux 9 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `dnf repolist | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo dnf install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx new file mode 100644 index 00000000000..4f79f23ff6c --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx @@ -0,0 +1,34 @@ +--- +navTitle: RHEL 8 or OL 8 +title: Installing the EDB DMS Reader on RHEL 8 or OL 8 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `dnf repolist | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. 
+ +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo dnf install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx new file mode 100644 index 00000000000..64eebbc208f --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx @@ -0,0 +1,34 @@ +--- +navTitle: RHEL 9 or OL 9 +title: Installing the EDB DMS Reader on RHEL 9 or OL 9 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `dnf repolist | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo dnf install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx new file mode 100644 index 00000000000..83a2ed46e98 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx @@ -0,0 +1,34 @@ +--- +navTitle: SLES 12 +title: Installing the EDB DMS Reader on SLES 12 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `zypper lr -E | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo zypper install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx new file mode 100644 index 00000000000..fcd764b1b8c --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx @@ -0,0 +1,34 @@ +--- +navTitle: SLES 15 +title: Installing the EDB DMS Reader on SLES 15 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. 
+ + To determine if your repository exists, enter: + + `zypper lr -E | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo zypper install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx new file mode 100644 index 00000000000..88f3e855262 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx @@ -0,0 +1,34 @@ +--- +navTitle: Ubuntu 20.04 +title: Installing the EDB DMS Reader on Ubuntu 20.04 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. + +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo apt-get install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx new file mode 100644 index 00000000000..d38dd5cdd9f --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx @@ -0,0 +1,34 @@ +--- +navTitle: Ubuntu 22.04 +title: Installing the EDB DMS Reader on Ubuntu 22.04 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repo. + + 1. Select the platform and software that you want to download. 
+ +## Install the package + +Install the EDB DMS Reader (packaged as `cdcreader`): + +```shell +sudo apt-get install cdcreader +``` diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx new file mode 100644 index 00000000000..55bb9cc91e4 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx @@ -0,0 +1,48 @@ +--- +title: "Installing EDB DMS Reader on Linux x86 (amd64)" +navTitle: "On Linux x86" + +navigation: + - dms_rhel_9 + - dms_rhel_8 + - dms_other_linux_9 + - dms_sles_15 + - dms_sles_12 + - dms_ubuntu_22 + - dms_ubuntu_20 + - dms_ubuntu_18 + - dms_debian_12 + - dms_debian_11 +--- + +For operating system-specific install instructions, including accessing the repo, see: + +### Red Hat Enterprise Linux (RHEL) and derivatives + +- [RHEL 9](dms_rhel_9) + +- [RHEL 8](dms_rhel_8) + +- [Oracle Linux (OL) 9](dms_rhel_9) + +- [Oracle Linux (OL) 8](dms_rhel_8) + +- [Rocky Linux 9](dms_other_linux_9) + +- [AlmaLinux 9](dms_other_linux_9) + +### SUSE Linux Enterprise (SLES) + +- [SLES 15](dms_sles_15) + +- [SLES 12](dms_sles_12) + +### Debian and derivatives + +- [Ubuntu 22.04](dms_ubuntu_22) + +- [Ubuntu 20.04](dms_ubuntu_20) + +- [Debian 12](dms_debian_12) + +- [Debian 11](dms_debian_11) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx new file mode 100644 index 00000000000..a2460428bae --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx @@ -0,0 +1,11 @@ +--- +title: "Mark migration as completed" +--- + +You have taken a snapshot of the source database and imported it to the EDB Postgres® AI Console. + +To ensure that the target cluster is up-to-date with the source cluster and allow the EDB DMS Reader to be able to stream the latest updates on the source database to the cluster mark the migration as completed. + +1. In the EDB Postgres AI Console, in your project, select **Migrate** > **Migrations**. + +1. Select the **Mark as completed** button. \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx new file mode 100644 index 00000000000..01fddc64c41 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx @@ -0,0 +1,66 @@ +--- +title: "Preparing and importing the schema" +deepToC: true +--- + +Before you use EDB Data Migration Service (EDB DMS) to configure a data migration, you must prepare and import your schema to the target database. + +Some of your schema's constraints must be included before the data migration takes place, whereas others must be applied [after the data migration is completed](apply_constraints). This ensures you can migrate without performance degradation. + +## Schema integrity and performance considerations + +The presence of target database constraints, triggers, and WAL logging can impact the data migration performance. When possible, EDB recommends a two-step import of schema constraints. 
+ +### `PRIMARY KEY` and `UNIQUE` constraints + +`PRIMARY KEY` and `UNIQUE` constraints are leveraged by EDB DMS to provide an exactly-once delivery when migrating data to the target database. Therefore, `PRIMARY KEY` and `UNIQUE` constraints should be included in the schema import that you perform before the data migration begins. Other types of constraints should be excluded from the schema import. + +For rows in tables that do not have `PRIMARY KEY` or `UNIQUE` constraints it is only possible to achieve at-least-once delivery. Deduplication can be performed during the [data migration verification](verify_migration). + +!!!note + `NOT NULL` constraints don't represent a significant performance impact for destination servers and can also be included in the schema import. +!!! + +### `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints + +EDB DMS is able to apply change events in parallel against destination database clusters. However, migrating some constraint types can negatively affect the performance of the migration. These type of constraints lead to unnecessary CPU and memory utilization in the context of an in-flight data migration from a consistent and referentially integral source database. + +EDB recommends [applying the following constraints](apply_constraints) on the target database after you have signilized the end of the CDC stream by [marking the migration as completed](mark_completed) in the Console. + +`FOREIGN KEY` / `REFERENCES` + +`CHECK` + +`CASCADE` + +`EXCLUDE` + +## Preparing and importing your schema + +### Prerequisite + +You created a schema in the target database. + +### Prepare your schema + +#### Oracle to EDB Postgres Advanced Server migrations + +Use [EDB Migration Portal](/migration_portal/latest/03_mp_using_portal/03_mp_quick_start/) to assess Oracle database sources for schema compatibility before starting the data migration process. + +EDB Migration Portal offers the ability to separate constraints from other destination DDL with the [offline migration option](/migration_portal/latest/04_mp_migrating_database/03_mp_schema_migration/#offline-migration). + +Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints from the DDL before importing the schema to the target database. + +#### Other migrations + +For data migrations to and from Postgres EDB recommends using [EDB Migration Toolkit](/migration_toolkit/latest/) to manage the schema. MTK's [offline migration](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options) capability provides an easy way to extract a database's schema and separate constraints. + +Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints from the DDL before importing the schema to the target database. + +Tools such as `pg_dump` and `pg_restore` are another valid route for migrating DDL. + +### Import your schema to the target database + +After you have prepared the DDL, and excluded `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints, connect to the target database and import the SQL-formatted DDL file. + +You can use [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/index.html), [psql](https://www.postgresql.org/docs/7.0/app-psql.htm) or a different tool to perform the import. 
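
For example, a minimal `psql` invocation for this import might look like the following sketch. The connection values and the file name are placeholders, and `--single-transaction` is an optional choice that rolls the whole import back if any statement fails.

```shell
# Illustrative import of the constraint-free DDL prepared above.
# Replace the connection placeholders and the file name with your own values.
psql -h <target-host> -p 5432 -U <target-user> -d <target-db> \
     --single-transaction -f schema_without_constraints.sql
```
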
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/index.mdx new file mode 100644 index 00000000000..12166b19bfc --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/index.mdx @@ -0,0 +1,11 @@ +--- +title: "Preparing databases" + +navigation: + - preparing_oracle_source_databases + - preparing_postgres_source_databases +--- + +To prepare source databases, see either: +- [Preparing Oracle source databases](preparing_oracle_source_databases) +- [Preparing Postgres source databases](preparing_postgres_source_databases) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx new file mode 100644 index 00000000000..1afa506f874 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx @@ -0,0 +1,279 @@ +--- +title: "Preparing Oracle source databases" +--- + + + +Configuring Oracle for EDB Data Migration Services (EDB DMS) requires `sysdba` privileges. + +Configure an Oracle source database to: +- Enable archive log mode. +- Enable supplemental logging for the database and table columns of interest. +- Ensure adequate redo log space is available. +- Create a user with limited privileges to carry out the data migration. + +Execute SQL statements with `sqlplus` or a similar client. + +This command propmpts you for the password for ``: + +```shell +sqlplus @:/ as sysdba +``` + +Where: + + - `` is the Oracle DB hostname. + - `` is the Oracle DB port. + - `` is the Oracle System ID for the DB or CDB/PDB combination. + - `` is an Oracle DB username with sysdba privileges. + +## Oracle configuration + +To perform Oracle configuration: + +1. [Enable archive log mode](#enable-archive-log-mode). +1. [Enable database supplemental logging](#enable-database-supplemental-logging). +1. [Enable supplemental logging for table columns](#enable-supplemental-logging-for-table-columns). +1. [Verify redo logs for adequate count and size](#verify-redo-logs-for-adequate-count-and-size). +1. [Create a user with limited privileges for data migration](#create-a-user-with-limited-privileges-for-data-migration). +1. [Grant `SELECT` on source tables](#grant-select-on-source-tables). +1. [Validate configuration](#validate-configuration). + +### Enable archive log mode + +Oracle databases can operate in `ARCHIVELOG` or `NOARCHIVELOG` mode. In `ARCHIVELOG` mode, filled redo logs are archived rather than put back into log rotation to be overwritten. This mode is needed for the change data capture (CDC) process to use LogMiner and produce a complete history of changes after an initial consistent snapshot. + +To see the database mode: + +```sql +archive log list; +``` + +The returned content indicates the database mode: + +```sql +Database log mode Archive Mode +...or +Database log mode No Archive Mode +``` + +If `ARCHIVELOG` mode is enabled, confirm with your DBA that the size of your recovery file destination is appropriate for your workload. + +When enabling archive log mode, you need to enable a fast recovery area. 
For more information on enabling an Oracle fast recovery area, see [Enabling the Fast Recovery Area](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/configuring-rman-client-basic.html#GUID-233338E2-3EE6-4248-A2B6-16A7899DB14F) in the Oracle documentation. + +To enable archive logging: + + +```sql +ORACLE_SID= sqlplus /nolog + +CONNECT /_PWD AS SYSDBA +alter system set db_recovery_file_dest_size = ; +alter system set db_recovery_file_dest = '' scope=spfile; +shutdown immediate +startup mount +alter database archivelog; +alter database open; +archive log list; +exit; +``` + +Where: + - `` is the Oracle DB system ID. + - `` is the name of a user with sysdba privileges. + - `` is the password for ``. + - `` is the size allowed for the recovery behavior, for example, `100G` for 100 gigabytes. + - `` is the file system path for an Oracle fast recovery area. This path can be a directory, file system, or Oracle Automatic Storage Management. Consult your DBA for guidance. + +The `archive log list` output shows the database is now in archive log mode. + +### Enable database supplemental logging + +Supplemental logging refers to the capture of additional information in Oracle redo logs, such as "before" state. This extra redo log information is needed for some log-based applications, such as EDB DMS, to capture change events. See [Supplemental Logging](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html#GUID-D857AF96-AC24-4CA1-B620-8EA3DF30D72E) in the Oracle documentation for more information. + +You can enable supplemental logging at the database and table level. The following command enables minimal supplemental logging required for LogMiner to function at the database level: + +```sql +ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; +``` + +### Enable supplemental logging for table columns + +For every table you want to migrate, you must enable supplemental logging. To do +so for all columns in a table, apply the following statement: + +```sql +ALTER TABLE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; +``` + +Where `
` is the identifier for the table to migrate. + +Use `ALTER` with all table columns you want to migrate. + +### Verify redo logs for adequate count and size + +The migration process involves two phases. The first is a consistent snapshot. The second is continuous streaming of database changes. This stream of database changes is powered by LogMiner and the Oracle DB redo logs. + +Database changes have a limited lifetime on the redo logs before the change is no longer present in the log history. This lifetime depends on the size of the redo logs, the number of redo logs, and the change throughput to the database. Also, undersized logs cause frequent log switching and affect migration performance. + +To examine the state of the database redo logs: + +```sql +SELECT GROUP#, TYPE, MEMBER FROM V_$LOGFILE; + + GROUP# TYPE MEMBER +---------- ------- -------------------------------------------------- + 1 ONLINE /opt/oracle/oradata/ORCLCDB/redo03.log + 2 ONLINE /opt/oracle/oradata/ORCLCDB/redo01.log + 3 ONLINE /opt/oracle/oradata/ORCLCDB/redo04.log + +SELECT GROUP#, ARCHIVED, BYTES/1024/1024 MB, STATUS FROM V_$LOG; + + GROUP# ARC MB STATUS +---------- --- --- ---------------- + 1 YES 2000 INACTIVE + 3 YES 2000 INACTIVE + 3 NO 2000 CURRENT +``` + +This example uses three log groups of size 2000MB. Each group has one file member. This might be too +small for many production databases. You can safely adjust the redo logs with synchronous commands such as the following: + +```sql + ALTER DATABASE ADD LOGFILE GROUP 4 ('/opt/oracle/oradata/ORCLCDB/redo04.log') SIZE 8G; + ALTER DATABASE ADD LOGFILE GROUP 5 ('/opt/oracle/oradata/ORCLCDB/redo05.log') SIZE 8G; + ALTER DATABASE ADD LOGFILE GROUP 6 ('/opt/oracle/oradata/ORCLCDB/redo06.log') SIZE 8G; + ALTER DATABASE ADD LOGFILE GROUP 7 ('/opt/oracle/oradata/ORCLCDB/redo07.log') SIZE 8G; + ALTER SYSTEM ARCHIVE LOG CURRENT; + ALTER SYSTEM CHECKPOINT; + ALTER SYSTEM ARCHIVE LOG CURRENT; + ALTER SYSTEM CHECKPOINT; + ALTER SYSTEM ARCHIVE LOG CURRENT; + ALTER SYSTEM CHECKPOINT; + ALTER DATABASE DROP LOGFILE GROUP 1; + ALTER DATABASE DROP LOGFILE GROUP 2; + ALTER DATABASE DROP LOGFILE GROUP 3; +``` + +These commands result in four new 8GB log groups. Each group has a single log file. + +Consult your DBA for appropriate production sizing. + +### Create a user with limited privileges for data migration + +#### Tablespace preparation + +Provide a database user with adequate roles to carry out the CDC process. + +Then, we recommend creating a tablespace for the CDC user. For container databases, you need to create a pluggable database as well. + +This example creates a tablespace and datafiles for CDC migration. Your database settings might vary, but a common configuration with `SMALLFILE` tablespaces and an 8kB database block size results in a maximum of 32GB of storage avaiable per `MAXSIZE` tablespace datafile. Therefore, you might need to add multiple `AUTOEXTEND` datafiles when this limit might be exceeded. 
+ +```sql +-- Create the tablespace, or in the case of a CDB/PDB, create the CDB tablespace +CREATE TABLESPACE DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; + +-- For CDB/PDB deployments we must specify at least one tablespace datafile for the PDB +CREATE TABLESPACE DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCPDB1/logminer_tbs_1.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; +-- Additional data files can be added with as follows +ALTER TABLESPACE DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs_2.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; +``` + +Where: + - `` is the tablespace name for the CDC migration user to use. + +#### User creation and access grants + +With the tablespace files in place, you can create a user with appropriate access grants for CDC migration. + +For a CDB/PDB database setup, note the tablespace default and quota: + +```sql + CREATE USER IDENTIFIED BY + DEFAULT TABLESPACE + QUOTA UNLIMITED ON + CONTAINER=ALL; + GRANT CREATE SESSION TO CONTAINER=ALL; + GRANT SET CONTAINER TO CONTAINER=ALL; + GRANT SELECT ON V_$DATABASE to CONTAINER=ALL; + GRANT FLASHBACK ANY TABLE TO CONTAINER=ALL; + GRANT SELECT ANY TABLE TO CONTAINER=ALL; + GRANT SELECT_CATALOG_ROLE TO CONTAINER=ALL; + GRANT EXECUTE_CATALOG_ROLE TO CONTAINER=ALL; + GRANT SELECT ANY TRANSACTION TO CONTAINER=ALL; + GRANT SELECT ANY DICTIONARY TO CONTAINER=ALL; + GRANT LOGMINING TO CONTAINER=ALL; + GRANT CREATE TABLE TO CONTAINER=ALL; + GRANT LOCK ANY TABLE TO CONTAINER=ALL; + GRANT CREATE SEQUENCE TO CONTAINER=ALL; + GRANT EXECUTE ON DBMS_LOGMNR TO CONTAINER=ALL; + GRANT EXECUTE ON DBMS_LOGMNR_D TO CONTAINER=ALL; + GRANT SELECT ON V_$LOGMNR_LOGS TO CONTAINER=ALL; + GRANT SELECT ON V_$LOGMNR_CONTENTS TO CONTAINER=ALL; + GRANT SELECT ON V_$LOGFILE TO CONTAINER=ALL; + GRANT SELECT ON V_$ARCHIVED_LOG TO CONTAINER=ALL; + GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO CONTAINER=ALL; + GRANT SELECT ON V_$TRANSACTION TO CONTAINER=ALL; +``` + +For a non-CDB database: + +```sql + CREATE USER IDENTIFIED BY + DEFAULT TABLESPACE + QUOTA UNLIMITED ON ; + GRANT CREATE SESSION TO ; + GRANT SELECT ON V_$DATABASE to ; + GRANT FLASHBACK ANY TABLE TO ; + GRANT SELECT ANY TABLE TO ; + GRANT SELECT_CATALOG_ROLE TO ; + GRANT EXECUTE_CATALOG_ROLE TO ; + GRANT SELECT ANY TRANSACTION TO ; + GRANT SELECT ANY DICTIONARY TO ; + GRANT LOGMINING TO ; + GRANT CREATE TABLE TO ; + GRANT LOCK ANY TABLE TO ; + GRANT CREATE SEQUENCE TO ; + GRANT EXECUTE ON DBMS_LOGMNR TO ; + GRANT EXECUTE ON DBMS_LOGMNR_D TO ; + GRANT SELECT ON V_$LOGMNR_LOGS TO ; + GRANT SELECT ON V_$LOGMNR_CONTENTS TO ; + GRANT SELECT ON V_$LOGFILE TO ; + GRANT SELECT ON V_$ARCHIVED_LOG TO ; + GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO ; + GRANT SELECT ON V_$TRANSACTION TO ; +``` + +Where: + - `` is the name of the user to create for CDC migration table access. + - `` is the password for the migration user. + - `` is the tablespace for ``. + +### Grant `SELECT` on source tables + +The new `` needs `SELECT` access to source tables. Oracle doesn't support +granting access to an entire schema, so you need to do this for each table. + +```sql +GRANT SELECT ON TO +``` + +Where: + - `` is the name of the user to create for CDC migration table access. + - `` is the name of an individual table to migrate. + +### Validate configuration + +The EDB DMS Reader installation (packaged as `cdcreader`) comes with a helper script that validates the Oracle configuration and helps you identify any issues. 
After you configure the database, we recommend running the script to ensure all checks pass. + +Run the script without arguments to print the usage: + +```shell +/opt/cdcreader/oracleConfigValidation.sh +``` + +## More information + +Your database is ready for CDC migration. + +For more information, see the [Debezium Oracle Connector](https://debezium.io/documentation/reference/2.2/index.html) documentation. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx new file mode 100644 index 00000000000..9b25a4e7a3c --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx @@ -0,0 +1,130 @@ +--- +title: "Preparing Postgres source databases" +--- + + + +Configuring Postgres for EDB Data Migration Service (EDB DMS) requires administrative privileges. Create a change data capture (CDC) migration role with limited privileges for data migration. + +Execute SQL statements with psql or a similar client. + +To connect to the source database using `psql`: + +```shell +psql -h -p -U -d +``` + +Where: + - `` is the name of the Postgres database to connect to. + - `` is the Postgres database host. + - `` is the Postgres database port. + - `` is an administrative user who can create and grant roles, alter ownership of tables to migrate, and create a replication slot. + +This command prompts you for the password associated with ``. + +## Postgres configuration + +To perform Postgres configuration: +1. [Verify Postgres configuration](#verify-postgres-configuration). +1. [Create new roles and grant acccess for CDC migration](#create-new-roles-and-grant-acccess-for-cdc-migration). +1. [Grant `SELECT` on source tables to the CDC migration role](#grant-select-on-source-tables-to-the-cdc-migration-role). +1. [Create a logical replication slot](#create-a-logical-replication-slot) + +### Verify Postgres configuration + +Verify or set these configuration entries for Postgres. + +1. Ensure `wal_level` is configured as `logical`. + + The CDC migration process leverages Postgres logical decoding. Setting `wal_level` to `logical` enables logical decoding of the Postgres write-ahead log (WAL). + +2. Ensure `max_wal_senders` is configured appropriately. + + If EDB Data Migration Service migration is the first streaming client for your database, set `max_wal_senders` to at least `1`. Other streaming clients might be present. Consult your DBA for the appropriate value for streaming client connectivity. + +3. Ensure `max_replication_slots` is configured appropriately. + + `max_replication_slots` must be at least `1` for the CDC migration process. This value can be higher if your organization uses Postgres replication. + + See the [Postgres replication documentation](https://www.postgresql.org/docs/current/runtime-config-replication.html) for more information. + +4. Ensure `max_wal_size` is configured for adequate WAL LSN lifetime. + + Set the `max_wal_size` value large enough that production traffic is generating mostly timed checkpoints and not requested checkpoints based on WAL size. + + The streaming migration process also requires changes to be available in the WAL until they can be streamed to durable message storage in the cloud infrastructure of EDB DMS. 
Setting `max_wal_size` too small can affect performance. It can also interfere with the migration process by allowing Postgres LSNs to be dropped from the WAL before they can be streamed. + + For more information, see this [EDB blog post on tuning `max_wal_size`](https://www.enterprisedb.com/blog/tuning-maxwalsize-postgresql) and the [Postgres WAL documentation](https://www.postgresql.org/docs/current/wal-configuration.html). + +#### Config validation script + +The EDB DMS Reader installation (packaged as `cdcreader`) comes with a helper script that validates the Postgres configuration and helps you identify any issues. After you configure the database, we recommend running the script and ensuring all checks passed. + +Run the script without arguments to print the usage: + +```shell +/opt/cdcreader/postgresConfigValidation.sh +``` + +### Create new roles and grant acccess for CDC migration + +First, create a new role for CDC migration with `LOGIN` and `REPLICATION` abilities granted: + +```sql +CREATE ROLE WITH REPLICATION LOGIN PASSWORD ; +``` + +`` needs to own the source tables to autocreate Postgres publications. Because the source tables are already owned by another role, you create a role/user that can act as the new owner and grant the specified replication group role to both the current table owner and to ``: + +```sql +CREATE ROLE ; +GRANT TO ; +GRANT TO ; +ALTER TABLE OWNER TO +``` + +Where: + + - `` is the name of the Postgres role or user to use for CDC migration database access. + - `` is the original production owner of the table. + - `` is the name of a role used to own the source tables to migrate for publication autocreation. + +### Grant `SELECT` on source tables to the CDC migration role + +The new `` needs `SELECT` access to source tables. You can grant access across a schema +or for each table. + +For an entire schema's tables, use this command: + +```sql +ALTER DEFAULT PRIVILEGES IN SCHEMA GRANT SELECT ON TABLES to +``` + +For each table, use: + +```sql +GRANT SELECT ON TO +``` + +Where: + - `` is the database schema name for the tables to migrate. + - `` is the name of the Postgres role or user to use for CDC migration database access. + - `` is the name of a table to migrate. + +### Create a logical replication slot + +The CDC migration process for Postgres sources leverages logical decoding and the publication/subscription mechanism. To use Postgres as a source, you need to create a replication slot for your CDC migration role: + +```sql +PERFORM pg_create_logical_replication_slot('', 'pgoutput'); +``` + +Where: + - `` is the name of the Postgres role or user to use for CDC migration database access. + - `pgoutput` is the logical decoding plugin supplied by Postgres that EDB DMS uses. + +## More information + +Your database is ready for CDC migration. + +For more information, see the [Debezium Postgres Connector](https://debezium.io/documentation/reference/stable/connectors/postgresql.html) documentation. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx new file mode 100644 index 00000000000..d6a88cefa9b --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx @@ -0,0 +1,5 @@ +--- +title: "Removing customer software" +--- + +After verifying that the migration is up and running, you can remove the customer software. 
\ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx new file mode 100644 index 00000000000..4a967416082 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx @@ -0,0 +1,7 @@ +--- +title: "Verifying the migration" +--- + +Verify that the migration was successful by comparing the source and target databases. + +To do it, you can use [LiveCompare](/livecompare/latest/). \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx new file mode 100644 index 00000000000..e052633a884 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx @@ -0,0 +1,29 @@ +--- +title: EDB Data Migration Service +indexCards: simple +deepToC: true +directoryDefaults: + description: "EDB Data Migration Service is a PG AI integrated migration solution that enables secure, fault-tolerant, and performant migrations to EDB Postgres AI Cloud Service." +navigation: + - "#Concepts" + - terminology + - "#Planning" + - supported_versions + - limitations + - "#Get started" + - getting_started + - "#Upgrading" + - upgrading + - "#Reference" + - rel_notes + - known_issues + +--- + +## EDB Postgres® AI migrations powered by EDB Data Migration Service + +EDB Data Migration Service (DMS) offers a secure and fault-tolerant way to migrate database data to the EDB Postgres AI platform. Using change data capture or CDC and event streaming, source database row changes are replicated to the migration destination. You can select a subset of your schemas' tables to migrate including support for schema, table, and column name remapping. + +EDB Data Migration Service is built on Apache Kafka and the open-source Debezium CDC platform. + +To migrate self-managed database sources you must download and configure the EDB DMS Reader. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx new file mode 100644 index 00000000000..43548b9ae90 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx @@ -0,0 +1,6 @@ +--- +title: "Known issues" +description: Review the currently known issues. +--- + +There are currently no known issues for EDB Data Migration Service. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx new file mode 100644 index 00000000000..fd0928f465c --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx @@ -0,0 +1,24 @@ +--- +title: "Limitations" +description: Revise any unsupported Oracle data types and features. +--- + +A limited number of Oracle data types and features aren't supported by EDB Data Migration Service (EDB DMS). + +See the [Debezium documentation](https://debezium.io/documentation/reference/2.2/connectors/oracle.html#oracle-data-type-mappings) for detailed comments on supported data types. 
+ +Unsupported Oracle data types include: + +- BFILE +- LONG +- LONG RAW +- RAW +- UROWID +- User-defined types (REF, Varrays, Nested Tables) +- ANY +- XML +- Spatial + +EDB DMS supports replicating Oracle tables that contain BLOB, CLOB, or NCLOB columns only if these also have the `PRIMARY KEY` constraint. If the tables don't have the `PRIMARY KEY` constraint, the streaming replication will only support INSERT operations. + +`BINARY_FLOAT` and `BINARY_DOUBLE` types in Oracle that might contain `Nan`, `+INF`, and `-INF` values are not supported by EDB DMS. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx new file mode 100644 index 00000000000..ce6ea9aeac3 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx @@ -0,0 +1,18 @@ +--- +title: "EDB Data Migration Service Release notes" +navTitle: "Release notes" +description: Learn about new features and functions. +navigation: +- rel_notes_2.0.0_preview +--- + +The EDB Data Migration Service documentation describes the latest version of the EDB +DMS Reader 2, including minor releases and patches. The release notes +provide information on what was new in each release. For new functionality +introduced in a minor or patch release, the content also indicates the release +that introduced the feature. + +| Release Date | Data Migration | +|--------------|------------------------------------------| +| 2023 Jun 9 | [2.0.0_preview](rel_notes_2.0.0_preview) | + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx new file mode 100644 index 00000000000..6b15b93e8eb --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx @@ -0,0 +1,14 @@ +--- +title: "Release notes for EDB Data Migration Service version 2.0.0_preview" +navTitle: "Version 2.0.0_preview" +--- + +EDB Data Migration Service (EDB DMS) version 2.0.0_preview is a new major version of EDB Data Migration Service. + +The highlights of this release include: + +* General availability of EDB DMS migration capabilities. + +| Type | Description | +|-------------|------------------------------------------------------------------------------------------| +| Enhancement | Parallel table snapshots are available with Debezium 2.2.0. | diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx new file mode 100644 index 00000000000..90492fa53ab --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx @@ -0,0 +1,36 @@ +--- +title: "Product compatibility" +description: Verify that your Oracle or Postgres database version is compatible with EDB Data Migration Service. +--- + + + +## Database versions + +The following database versions are supported. 
+ +| Database version | Supported | +|------------------|-----------| +| Postgres 11 - 16 | Y | +| EPAS 11 - 16 | Y | +| Oracle 11g | Y | +| Oracle 12c | Y | +| Oracle 18c | Y | +| Oracle 19c | Y | +| Oracle 21c | Y | + +### Oracle + +The EDB Data Migration Service (EDB DMS) stack requires the Oracle database to have archive log mode enabled and supplemental logging data enabled at the table and database level. For details, see [Preparing Oracle source databases](../2/getting_started/preparing_db/preparing_oracle_source_databases). + +Container databases (CDB/PDB) and non-CDB sources are supported. + +### Postgres/EDB Postgres Advanced Server + +Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](../2/getting_started/preparing_db/preparing_postgres_source_databases). + +When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](../2/getting_started/preparing_db). + +## Operating systems + +The EDB DMS Reader can run on Linux. For details, see [Installing EDB DMS Reader](getting_started/installing). diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx new file mode 100644 index 00000000000..d35a7d9f21b --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx @@ -0,0 +1,27 @@ +--- +title: "Terminology" +description: Learn some basic concepts associated with EDB Data Migration Service. +--- + +This terminology is important to understand EDB Data Migration Service (DMS) functionality. + +## Analytics Sync + +EDB Postgres® AI Analytics Sync is a type of replication/migration supported by EDB Data Migration Service. EDB Postgres Advanced Server and Postgres source database snapshots are transformed into Delta Lake format in CSP Object Storage. This object storage is exposed as Storage Locations in the EDB Postgres AI Console. + +## Apache Kafka + +Apache Kafka is an open-source, distributed-event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. + +## Change Data Capture (CDC) + +CDC is a set of software design patterns used to determine and track changes in data sets by calculating *deltas*. EDB Data Migration Service uses CDC to stream changes from a database cluster to another. + +## Debezium + +Debezium is a Java-based, open-source platform for CDC. Debezium is supported by the Red Hat community. + +## EDB DMS Reader + +The EDB Data Migration Service Reader, packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and produce Kafka messages containing the change events. + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx new file mode 100644 index 00000000000..74ac0751558 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx @@ -0,0 +1,16 @@ +--- +title: "Upgrading" +description: Learn how to upgrade the EDB DMS Reader to a more recent version. +--- + +EDB recommends upgrading the EDB DMS Reader when it is not performing a streaming migration. 
However, you can also temporarily stop a migration to perform an upgrade. +The EDB DMS Reader components are designed to terminate and restart gracefully. + +To upgrade the software: + +1. If the EDB DMS reader is currently running, stop the process. + +1. Install and start a new version of the EDB DMS Reader. + +1. Continue the migration by restarting the EDB DMS Reader with the updated software. + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx new file mode 100644 index 00000000000..d28269fc219 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx @@ -0,0 +1,15 @@ +--- +title: EDB Postgres AI Migration and ETL +navTitle: Migration +indexCards: simple +iconName: Migration +description: About the migration and ETL tools that feed the EDB Postgres AI platform. +navigation: + - data-migration-service + - migration-and-ai +--- + +Moving your data to Postgres is a challenge that EDB Postgres AI is built to solve. The EDB Postgres AI platform includes a set of tools that help you migrate your data to Postgres and keep it up-to-date. These tools include the EDB Data Migration Service and the EDB Postgres AI Migration and ETL tools. + + + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/migration-and-ai.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/migration-and-ai.mdx new file mode 100644 index 00000000000..f8655c3373f --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/migration-and-ai.mdx @@ -0,0 +1,17 @@ +--- +title: EDB Postgres AI Tools - Migration and AI +navTitle: Migration and AI +description: The migration offering of EDB Postgres AI Tools includes an innovative migration copilot. +--- + +EDB Postgres® AI Tools Migration Portal offers an [AI copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to assist users who are migrating their databases to EDB Postgres. +The AI copilot is an AI-driven chatbot tool that helps users with the migration process. +The AI copilot is designed to help users with the following tasks: + +- **General migration assistance**: The AI copilot can help users with general migration questions. + For example, users can request information about available tools, and source and target database compatibility. +- **Migration planning**: The AI copilot can help users plan their migration, and obtain an overview of the end-to-end migration paths. +- **Migration assessment**: The AI copilot can help users assess their migration readiness. + For example, if there are compatibility issues between source and target databases, the AI Copilot can suggest compatible query alternatives. + +The AI copilot is designed to be user-friendly and easy to use. Users can interact with the AI copilot using natural language and improve the quality of answers with [good prompting](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). The AI copilot is also designed to be context-aware, so it can provide users with relevant information based on the context of the conversation. \ No newline at end of file diff --git a/src/components/table-of-contents.js b/src/components/table-of-contents.js index 04da5be2d6d..ab56b1e8aa9 100644 --- a/src/components/table-of-contents.js +++ b/src/components/table-of-contents.js @@ -7,6 +7,8 @@ const TableOfContents = ({ toc, deepToC }) => { const scrollRestoration = useScrollRestoration("header-navigation-sidebar"); const hash = useLocation().hash; + console.log(toc); + return (
    { } }); if (nextSection) sections.push(nextSection); - + console.error(sections); return sections; }; @@ -185,13 +185,13 @@ const DocTemplate = ({ data, pageContext }) => { const sections = depth === 2 ? buildSections(navTree) : null; // newtoc will be passed as the toc - this will blend the existing toc with the new sections - const newtoc = { items: [] }; + var newtoc = []; if (tableOfContents.items) { - newtoc.items.push(...tableOfContents.items); + newtoc.push(...tableOfContents.items); if (sections) { sections.forEach((section) => { section.slug = "section-" + slugger.slug(section.title); - newtoc.items.push({ + newtoc.push({ url: "#" + section.slug, title: section.title, }); @@ -290,7 +290,7 @@ const DocTemplate = ({ data, pageContext }) => { {showToc && (
- + )} diff --git a/src/templates/learn-doc.js b/src/templates/learn-doc.js index 33a197a71cf..2d01bfcbd9e 100644 --- a/src/templates/learn-doc.js +++ b/src/templates/learn-doc.js @@ -15,6 +15,7 @@ import { Tiles, TileModes, } from "../components"; +import GithubSlugger from "github-slugger"; export const query = graphql` query ($nodeId: String!) { @@ -72,8 +73,45 @@ const FeedbackButton = ({ githubIssuesLink }) => ( ); +const buildSections = (navTree, path) => { + const sections = []; + let nextSection; + + // Ok, now we have to figure out where we are in this tree + // We need to find the current node in the tree + + const findCurrentNode = (root, path) => { + if (root.path === path) return root; + for (let node of root.items) { + const result = findCurrentNode(node, path); + if (result) return result; + } + }; + + const currentNode = findCurrentNode(navTree, path); + + currentNode.items.forEach((navEntry) => { + if (navEntry.path) { + if (!nextSection) return; + nextSection.guides.push(navEntry); + } else { + // new section + if (nextSection) sections.push(nextSection); + nextSection = { + title: navEntry.title, + guides: [], + }; + } + }); + if (nextSection) sections.push(nextSection); + + return sections; +}; + const LearnDocTemplate = ({ data, pageContext }) => { + const slugger = new GithubSlugger(); const { mdx, edbGit: gitData } = data; + const { fields, tableOfContents } = data.mdx; const { frontmatter, pagePath, productVersions, navTree, prevNext } = pageContext; const navRoot = findDescendent(navTree, (n) => n.path === pagePath); @@ -95,6 +133,7 @@ const LearnDocTemplate = ({ data, pageContext }) => { isIndexPage: isPathAnIndexPage(mdx.fileAbsolutePath), productVersions, }; + const { path, depth } = fields; const showToc = !!mdx.tableOfContents.items && !frontmatter.hideToC; const showInteractiveBadge = @@ -102,6 +141,25 @@ const LearnDocTemplate = ({ data, pageContext }) => { ? frontmatter.showInteractiveBadge : !!katacodaPanel; + const sections = buildSections(navTree, path); + + // newtoc will be passed as the toc - this will blend the existing toc with the new sections + var newtoc = []; + if (tableOfContents.items) { + newtoc.push(...tableOfContents.items); + if (sections) { + sections.forEach((section) => { + section.slug = "section-" + slugger.slug(section.title); + newtoc.push({ + url: "#" + section.slug, + title: section.title, + }); + }); + } + } + + console.log(newtoc); + let cardTileMode = indexCards; if (!cardTileMode) { if (navRoot.depth === 2) cardTileMode = TileModes.Full; @@ -167,10 +225,7 @@ const LearnDocTemplate = ({ data, pageContext }) => { {showToc && ( - + )} From edb4f5777876ac3ce5fd32feba09d70e44fca6c4 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 27 Aug 2024 12:52:31 +0100 Subject: [PATCH 27/67] Remove /2/ in supported versions Signed-off-by: Dj Walker-Morgan --- .../data-migration-service/supported_versions.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx index 90492fa53ab..c3236598504 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx @@ -21,15 +21,15 @@ The following database versions are supported. 
### Oracle -The EDB Data Migration Service (EDB DMS) stack requires the Oracle database to have archive log mode enabled and supplemental logging data enabled at the table and database level. For details, see [Preparing Oracle source databases](../2/getting_started/preparing_db/preparing_oracle_source_databases). +The EDB Data Migration Service (EDB DMS) stack requires the Oracle database to have archive log mode enabled and supplemental logging data enabled at the table and database level. For details, see [Preparing Oracle source databases](../getting_started/preparing_db/preparing_oracle_source_databases). Container databases (CDB/PDB) and non-CDB sources are supported. ### Postgres/EDB Postgres Advanced Server -Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](../2/getting_started/preparing_db/preparing_postgres_source_databases). +Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](../getting_started/preparing_db/preparing_postgres_source_databases). -When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](../2/getting_started/preparing_db). +When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](../getting_started/preparing_db). ## Operating systems From 2846ec1bc4c7d0a119d910aa6cb8bc6570cfc847 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 27 Aug 2024 13:02:15 +0100 Subject: [PATCH 28/67] Refix links in supported versions Signed-off-by: Dj Walker-Morgan --- .../data-migration-service/supported_versions.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx index c3236598504..1c5e69a97d8 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx @@ -21,15 +21,15 @@ The following database versions are supported. ### Oracle -The EDB Data Migration Service (EDB DMS) stack requires the Oracle database to have archive log mode enabled and supplemental logging data enabled at the table and database level. For details, see [Preparing Oracle source databases](../getting_started/preparing_db/preparing_oracle_source_databases). +The EDB Data Migration Service (EDB DMS) stack requires the Oracle database to have archive log mode enabled and supplemental logging data enabled at the table and database level. For details, see [Preparing Oracle source databases](getting_started/preparing_db/preparing_oracle_source_databases). Container databases (CDB/PDB) and non-CDB sources are supported. ### Postgres/EDB Postgres Advanced Server -Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](../getting_started/preparing_db/preparing_postgres_source_databases). +Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. 
For details, see [Preparing Postgres source databases](getting_started/preparing_db/preparing_postgres_source_databases). -When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](../getting_started/preparing_db). +When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](getting_started/preparing_db). ## Operating systems From 745ba0834e8ef78992848d83c8d5a5312f7d7096 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Tue, 27 Aug 2024 18:57:04 +0200 Subject: [PATCH 29/67] Squashed commit of the following: commit 44c3bf6e6c254ebc44c7274f56cf2a6c6bdcceb2 Author: gvasquezvargas Date: Tue Aug 27 18:45:59 2024 +0200 Moved dms documentation to advocacy_docs and reorganized to match DJs frontpage PR commit 41b93f1292e03229e2d2e2e2dad68f2698985ffc Author: gvasquezvargas Date: Tue Aug 27 17:50:10 2024 +0200 Revert install template to remove automation as install topics have been added manually by dev team commit 5a7baea20e0a9e310f22c42225a97522e8d929f3 Merge: e40b51712 f83251d78 Author: gvasquezvargas Date: Tue Aug 27 17:44:00 2024 +0200 Merge remote-tracking branch 'origin/develop' into docs/transporter/ba-preview commit f83251d7883e8824f859390e264d985d95595ab1 Merge: 7f161a3f1 0f9f44335 Author: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue Aug 27 14:59:47 2024 +0530 Merge pull request #5978 from EnterpriseDB/content/docs/epas/release_notes_fix EPAS - 12 to 15 release notes fix commit 7f161a3f1dcdbe7299976867ff0ba86a0b288050 Merge: 8ef2ef700 3272c0418 Author: Josh Heyer Date: Mon Aug 26 09:37:37 2024 -0700 Merge pull request #5989 from EnterpriseDB/content/docs/epas/docs-961 commit 8ef2ef700c8e88832ac61676fd4f076b3c24a993 Merge: c0ff531f2 3f426e51c Author: Josh Heyer Date: Mon Aug 26 09:37:12 2024 -0700 Merge pull request #5988 from EnterpriseDB/automatic_docs_update/repo_EnterpriseDB/cloud-native-postgres/ref_refs/tags/v1.24.0 commit 3f426e51cf27aa07858b484f7825caf191c2bdd3 Author: Josh Heyer Date: Mon Aug 26 16:32:11 2024 +0000 Release notes commit 16cbc3b0552fa9a7c2c4e206dc496cdb4cb5ec2f Author: Josh Heyer Date: Mon Aug 26 16:24:44 2024 +0000 reconcile local changes commit 3272c0418faeba07ce28657de5d48886fbb06fc0 Author: gvasquezvargas Date: Mon Aug 26 18:12:23 2024 +0200 Added page to index commit 3985106a1de2bbf93ff9a4aca0819406cb727d8f Author: julienmarcbrown Date: Mon Aug 26 09:01:01 2024 -0700 Update release date commit 6ebe1d4ce8ccfecb5cbf20e84a1900e3af987b45 Author: julienmarcbrown Date: Mon Aug 26 08:50:47 2024 -0700 Add Release Note For EPAS 15.8.1 commit acc796b457b305a4e42c959efd7387971dacb50d Author: cnp-autobot <85171364+cnp-autobot@users.noreply.github.com> Date: Mon Aug 26 15:43:51 2024 +0000 [create-pull-request] automated change commit e40b5171253b4680ce9650a9b41e738a21b09233 Author: gvasquezvargas Date: Mon Aug 26 10:48:49 2024 +0200 Full name of product on the first mention, and abbreviation for subsequent mentions commit c0ff531f257f43d8de2ab648549850f052966cc9 Merge: f4dcffef8 e582765e3 Author: Betsy Gitelman Date: Fri Aug 23 13:57:00 2024 -0400 Merge pull request #5964 from EnterpriseDB/docs/pgd_reedit_13 Reedit of PGD doc - group 13 commit e582765e32e27ae6e3727e7ca544bc9703952926 Merge: abd704bc9 f4dcffef8 Author: Betsy Gitelman Date: Fri Aug 23 11:33:56 2024 -0400 Merge branch 'develop' into docs/pgd_reedit_13 commit 
abd704bc9fde3b1eb82fbd8298d71361f9fc0b56 Merge: 9aeb7f4ae 74c98113f Author: Betsy Gitelman Date: Fri Aug 23 11:16:46 2024 -0400 Merge branch 'docs/pgd_reedit_13' of https://github.com/EnterpriseDB/docs into docs/pgd_reedit_13 commit f4dcffef8a95ac9f1d8679661b984b4e3352f9ec Merge: a86cbe513 dfd51dddd Author: Betsy Gitelman Date: Fri Aug 23 11:05:56 2024 -0400 Merge pull request #5907 from EnterpriseDB/docs/edits_to_pgd_group12 PGD re-edit group 12 - routing commit 0f9f44335a1a98bf787af7e1b76c098c4bfad205 Author: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri Aug 23 10:10:59 2024 +0530 EPAS 12 to 15 release notes fix Edited the support ticket numbers commit dfd51dddd463047f731054362ea99afb279cff58 Author: Betsy Gitelman Date: Tue Aug 20 11:25:19 2024 -0400 Update proxy.mdx per DJ's review commit 94e8ff90f97b6e76052b2974ac9b18f2868f06a9 Author: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon Aug 5 11:29:07 2024 +0100 Update product_docs/docs/pgd/5/routing/proxy.mdx commit 4829b8913e111dcd698135ce5e67f49613a2b5bb Author: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon Aug 5 11:17:24 2024 +0100 Update proxy.mdx reworked for clearer connection behavior commit 2cfd57ac496df7eb7ff46ca590521ad9632503fa Author: Betsy Gitelman Date: Fri Aug 2 15:37:23 2024 -0400 Apply suggestions from code review commit 2ec9427b82680e388927808a37b24e4c990626c2 Author: Betsy Gitelman Date: Fri Aug 2 15:29:47 2024 -0400 PGD re-edit group 12 - routing commit ec2b0308dc00d94a30af9f7fd6dbccb6b584eea9 Author: Betsy Gitelman Date: Thu Aug 1 16:33:18 2024 -0400 PGD re-edit group 12 - routing commit 74c98113f30d477ff60e67d764a65937106166ed Author: Betsy Gitelman Date: Tue Aug 20 14:14:09 2024 -0400 Update product_docs/docs/pgd/5/security/roles.mdx commit bf1524186eef037c2d67e5bb3ba37bddcd80a4a7 Author: Betsy Gitelman Date: Tue Aug 20 14:05:54 2024 -0400 Update product_docs/docs/pgd/5/upgrades/tpa_overview.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> commit 0d3065b6c52bfb9e12c0025f5a0a69a381627dd0 Author: Betsy Gitelman Date: Tue Aug 20 14:04:55 2024 -0400 Update product_docs/docs/pgd/5/security/roles.mdx commit 33ee03b11a13d9edfd08fb445572bd3204e4ab7a Author: Betsy Gitelman Date: Tue Aug 20 14:03:54 2024 -0400 Update product_docs/docs/pgd/5/security/roles.mdx commit 1cbcd0bcfeb3d90218c905c58f37ef8ba901a6bd Author: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Tue Aug 20 18:49:40 2024 +0100 Update product_docs/docs/pgd/5/security/roles.mdx commit d5176b8252269d7b9d0094fa1eb043f1a7c1b8ab Author: Betsy Gitelman Date: Tue Aug 20 13:37:06 2024 -0400 Reedit of PGD - group 13 commit 8d17c317d8f5bb88533fe05eea8399cf4587cbef Author: Betsy Gitelman Date: Thu Aug 15 16:50:54 2024 -0400 Re-edit of PGD group 13 commit 2da9136da8cef2e65feca9115044da212888974f Author: Javier Perozo Date: Wed Aug 21 09:45:48 2024 +0200 EDB Transporter - Removing support for ubuntu 18 commit 9aeb7f4ae8f1868794167cb2f640a565867cb886 Author: Betsy Gitelman Date: Tue Aug 20 13:37:06 2024 -0400 Reedit of PGD - group 13 commit c47754f4c20c3fd8a2f29b4cf1f4e436f241c7ac Merge: 080d3fb7f e300bb83a Author: gvasquezvargas Date: Tue Aug 20 14:32:11 2024 +0200 Merge pull request #5957 from EnterpriseDB/docs/dms/folder_structure Changed folder names to reflect product name and version commit e300bb83acf5da4cbcb5485d77e552f183bf92da Author: gvasquezvargas Date: Tue Aug 20 08:41:59 2024 +0200 Adapted source files to fix the deployment commit 
035921546df6ebdac4effcf1df32f08be5eabe26 Author: Dj Walker-Morgan Date: Mon Aug 19 15:33:08 2024 +0100 Enable building and front page presence Signed-off-by: Dj Walker-Morgan commit c218818b819f28cd94ae1750aa0ec22d65ad7e0d Author: gvasquezvargas Date: Mon Aug 19 15:51:27 2024 +0200 Changed folder names to reflect product name and version commit 080d3fb7fa9b1606bf6cdb7ab3cbb11ef22ba035 Author: gvasquezvargas Date: Mon Aug 19 15:31:21 2024 +0200 Changed last instances of Transporter where mention is not in-code commit f25abfeb830ca31c4c727a233b0878e6d7c36184 Merge: 7d5f32e2c d2b8e53f5 Author: Javier Perozo Date: Mon Aug 19 09:40:55 2024 +0200 Merge pull request #5917 from EnterpriseDB/ET-480 feat(ET-480): Add documents for cdcreader installation commit 5660b5c1510f4e33e9e25b7b5f8769ec1b80aee9 Author: Betsy Gitelman Date: Thu Aug 15 16:50:54 2024 -0400 Re-edit of PGD group 13 commit d2b8e53f5f9905c59e022e2cd107ab3d66a33f47 Author: gvasquezvargas Date: Thu Aug 15 12:26:30 2024 +0200 Renamed files, replaced old naming with EDB Data Migration Service and EDB DMS Reader commit 7d5f32e2cba22ff7eb1362dc5beb1300b7bfd87a Merge: ac06c143e 71c5ef81f Author: gvasquezvargas Date: Thu Aug 15 12:10:17 2024 +0200 Merge pull request #5922 from EnterpriseDB/DMS/schema_feedback Improvements for schema migration commit ac06c143e2f12185c2eef5b2a0f673448f147b87 Merge: 24b7cad8b b49c5819e Author: gvasquezvargas Date: Thu Aug 15 12:09:54 2024 +0200 Merge pull request #5934 from EnterpriseDB/DMS/general-edits Data Migration Service: general edits commit b49c5819e305fa558150ad5b99e309c8dfcc6b26 Author: gvasquezvargas Date: Thu Aug 15 12:05:54 2024 +0200 Renamed Transporter to EDB Data Migration Service and Reader to EDB DMS Reader commit 90ecfc414091dae4715c672fb7dbc726a1f39c82 Author: Javier Perozo Date: Wed Aug 14 12:08:14 2024 +0400 typo commit 66469e04a3ae5afcc01635b1a600d205b5fcb901 Merge: 64b7c1745 24b7cad8b Author: Tian Lu Date: Wed Aug 14 00:44:09 2024 +0200 Merge branch 'docs/transporter/ba-preview' into ET-480 commit 64b7c1745c6632375e3f0c2e95fb5e739d2eeb6b Author: TianLu Date: Wed Aug 14 00:43:30 2024 +0200 fix(ET-480): remove deprecated OS versions and add some newer versions for cdcreader commit 25e7e95f5dd0bee375a01d6ef0c11579a681b6f5 Author: TianLu Date: Wed Aug 14 00:16:32 2024 +0200 fix(ET-480): add name "EDB Data Migration Reade" commit e50654d14b2d587607bbc5b52330d7117af26a9e Author: gvasquezvargas Date: Tue Aug 13 10:14:17 2024 +0200 Renaming and further edits commit 73f4868ad3ed3a74a00aed75659e2ef659390d8b Author: gvasquezvargas Date: Mon Aug 12 18:22:29 2024 +0200 General edits commit 71c5ef81fbe85c23214298646316fe5c4b511897 Author: gvasquezvargas Date: Mon Aug 12 17:50:16 2024 +0200 formatting, editing, rephrasing commit 05bd258745d1738bb728333aff2785a5655b2f44 Author: gvasquezvargas Date: Thu Aug 8 09:40:00 2024 +0200 Improvements for schema migration commit 24b7cad8b46a2934f4a1ee8ced949b50b69de998 Author: Javier Perozo Date: Mon Aug 12 10:22:10 2024 +0200 EDB Transporter - fixing URL path commit 87cb1162abe77a3a6e280b61b207be9d67f4d4eb Author: Javier Perozo Date: Mon Aug 12 10:18:02 2024 +0200 EDB Transporter - fixing URL path commit ab8340298e73d63dd87f18662f66a43fd94de336 Merge: ad763400d 891df8494 Author: Javier Perozo Date: Mon Aug 12 10:11:33 2024 +0200 Merge branch 'develop' into docs/transporter/ba-preview commit 4e8afc1f8ceb95f834383eda1b776876e5877d31 Author: Tian Lu Date: Thu Aug 8 14:41:51 2024 +0200 Update 
product_docs/docs/transporter/ba_preview/getting_started/installing/index.mdx Co-authored-by: gvasquezvargas commit 09e49f4eb728ec3ef9c480ea212a85dce7a1f120 Author: TianLu Date: Thu Aug 8 11:14:34 2024 +0200 fix(ET-480): fix doc reference commit ad763400d311cca161dc8f00007f5ddb1cf2918c Merge: fa0e8c668 d4972a0dd Author: gvasquezvargas Date: Thu Aug 8 09:04:36 2024 +0200 Merge pull request #5912 from EnterpriseDB/transporter/reader_suggestions Transporter/reader suggestions commit d4972a0dd08c169efca504ae06224fb7189ba4a0 Author: gvasquezvargas Date: Wed Aug 7 11:40:42 2024 +0200 Altering title to better reflect instructions commit b9a2b492edb16ee15c3850608507ca431ad9450f Author: TianLu Date: Wed Aug 7 11:12:19 2024 +0200 chore(ET-480): remove writer installation scripts commit 1acd0f0125448ce1c403fcfdda815e68ede9e0f9 Author: Yidian <1141312295@qq.com> Date: Wed Aug 7 16:13:36 2024 +0800 Update product_docs/docs/transporter/ba_preview/getting_started/config_reader.mdx LGTM Co-authored-by: gvasquezvargas commit b8b34f09012801ac00120bdb33071170db8d1738 Author: Yidian <1141312295@qq.com> Date: Wed Aug 7 16:05:49 2024 +0800 Update product_docs/docs/transporter/ba_preview/getting_started/config_reader.mdx Yes, this name will appear in the UI. So this one is ok Co-authored-by: gvasquezvargas commit 79a25a3b9e13aa0598eb2a31e904fba5c6183ac2 Author: TianLu Date: Tue Aug 6 21:49:33 2024 +0200 feat(ET-480): implement transporter/ba_preview/getting_started/installing/linux_x86_64 commit 896dc282859357193a509780cecd758beae7a01c Author: TianLu Date: Tue Aug 6 21:42:07 2024 +0200 feat(ET-480): initialize transporter/ba_preview/getting_started/installing commit 1bc6e4d439711c077e9e35f7c3ce6c63248cec56 Author: gvasquezvargas Date: Mon Jul 29 18:13:16 2024 +0200 Further edits to the reader instructions commit 8f067fa4c332ffa903502d21d0d8ac7276c038c8 Author: gvasquezvargas Date: Mon Jul 29 16:06:22 2024 +0200 Transporter: suggestions for reader walk-though commit fa0e8c668fa7c9144177557ae2417490105017fe Merge: 867bca0b0 0e9d94cf5 Author: gvasquezvargas Date: Mon Jul 29 10:33:20 2024 +0200 Merge branch 'develop' into docs/transporter/ba-preview commit 867bca0b0e7e724e167a62c682c490d1f733659f Author: Javier Perozo Date: Wed Jul 17 10:26:17 2024 +0200 EDB Transporter - temp to fix some refs in ui commit 324a2b36bbb2234b790afab4dba45160a5dc4100 Merge: 3bdadfbab c55496cc7 Author: gvasquezvargas Date: Tue Jul 16 14:12:23 2024 +0200 Merge branch 'develop' into docs/transporter/ba-preview commit 3bdadfbabb0fae782ee15136a247e019ac4a636c Author: Yidian Sun Date: Mon Jul 15 17:50:03 2024 +0800 EDB Transporter - add constraints of DBZ_ID commit e564fa0497e7c14681337118525f6503bc3b180e Author: Javier Perozo Date: Fri Jul 12 11:54:40 2024 +0200 EDB Transporter - updating config_reader.mdx commit aa680ccb590be60354d691fec3ac36bda9a782e4 Author: Javier Perozo Date: Fri Jul 12 10:33:28 2024 +0200 EDB Transporter - updating config_reader.mdx commit 85ccdcbf5f0b7801e35b90e6cea702dfccacf691 Author: Javier Perozo Date: Fri Jul 12 10:30:37 2024 +0200 EDB Transporter - updating preparing_oracle_source_databases.mdx commit 18f56347b82efd9f215dd43ee04963542ff580aa Author: Javier Perozo Date: Fri Jul 12 10:29:55 2024 +0200 EDB Transporter - updating preparing_oracle_source_databases.mdx commit fdc0fc1e8a92df79df75f2cc8b38511b53b94473 Author: Javier Perozo Date: Fri Jul 12 10:14:37 2024 +0200 EDB Transporter - updating supported_versions.mdx commit 141c0e1b85756cfae20e1ad2d574ec5859df1e7b Author: Javier Perozo Date: Fri Jul 12 09:54:38 
2024 +0200 EDB Transporter - updating supported_versions.mdx commit 73a11180e51581f6194a55445f68f6bc06da0bf8 Author: Javier Perozo Date: Fri Jul 12 09:51:36 2024 +0200 EDB Transporter - updating config_reader commit cb24d9f16d46925568910911284f7ac1bb2c18ac Merge: cd637d051 a0592f31c Author: gvasquezvargas Date: Thu Jul 11 11:00:37 2024 +0200 Merge pull request #5814 from EnterpriseDB/docs/transporter/suggestions Suggestions for the Transporter documentation commit a0592f31c34dd9b34356e202dc8aff7dd73c834a Author: gvasquezvargas Date: Thu Jul 11 10:59:49 2024 +0200 style fix commit f8de35f90a6b9c7c134bc5acf69128feafdbbef0 Merge: 24527190c cd637d051 Author: gvasquezvargas Date: Mon Jul 8 11:35:33 2024 +0200 Preview branch 'docs/transporter/ba-preview' was updated, carrying the update over into docs/transporter/suggestions. commit cd637d051650dbd41e159cee7b6b563cdd41bba3 Author: Yidian Sun Date: Tue Jul 2 14:43:40 2024 +0800 EDB Transporter - updating format commit 24527190cf59dac9e4195d3a22cc0f91b15ed945 Merge: 356cb8b30 430ab242c Author: gvasquezvargas Date: Mon Jul 1 11:27:58 2024 +0200 Rebased and resolved conflicts commit 356cb8b302ad62df2fd1dd2ee454cbf4f5fb77b8 Author: gvasquezvargas Date: Fri Jun 28 12:01:12 2024 +0200 Added ToC to landing page commit 2880206c81307d08f51f9501ec37debde2166a07 Author: gvasquezvargas Date: Fri Jun 28 11:30:17 2024 +0200 light edits commit 430ab242c5724146bb874c3fbe30f8a70672ac00 Author: Yidian Sun Date: Fri Jun 28 17:17:54 2024 +0800 EDB Transporter - updating content add parameters explanation commit 4c938358a2f42dc18eba4a38ebd379932a93fbd6 Author: gvasquezvargas Date: Fri Jun 28 10:44:06 2024 +0200 added topic tiles to landing page commit 5f6c87a252ad01e690d209133e906a5f905597c3 Author: gvasquezvargas Date: Thu Jun 27 18:10:17 2024 +0200 cleanup commit 97c8cc1cb624e607bc302888bd062ea158f0c52a Merge: d817fa8d7 cd83ce245 Author: gvasquezvargas Date: Thu Jun 27 16:56:33 2024 +0200 Merge branch 'docs/transporter/ba-preview' into docs/transporter/suggestions commit cd83ce2453b9b3108074c69cdcc7b9ea6863c22e Author: Javier Perozo Date: Fri Jun 21 13:30:18 2024 +0200 EDB Transporter - updating content commit 189b9d46afe18b421aa0ed40c5c394df004b3f14 Author: Javier Perozo Date: Fri Jun 21 13:07:10 2024 +0200 EDB Transporter - updating content commit 5ecc5dac437f292d85c690fabe76cfd208cbbfa5 Author: Javier Perozo Date: Fri Jun 21 10:58:29 2024 +0200 EDB Transporter - updating content commit fa4771dae28fa49b5f9a2761359d9e50ee1b1cd4 Author: Javier Perozo Date: Fri Jun 21 10:51:42 2024 +0200 EDB Transporter - updating content commit e7a6fbbd9e921a649586b8d9e963ccfc5d632237 Author: Javier Perozo Date: Tue Jun 11 17:08:04 2024 +0200 EDB Transporter - updating content commit ac0f5987cc983aa0b3a56966152675bc9724df89 Author: Javier Perozo Date: Tue Jun 11 16:59:55 2024 +0200 EDB Transporter - updating content commit e085398f6878183593f0d579ccc1107c5c0e4543 Author: Javier Perozo Date: Tue Jun 11 15:59:28 2024 +0200 EDB Transporter - renaming version commit efa6d71983099e195daf620fb1202b46a0197c8e Author: drothery-edb Date: Thu Mar 9 14:42:27 2023 -0500 Transporter: alpha docs Applied Betsy's edits to the install templates Update preparing_postgres_source_databases.mdx Update preparing_oracle_source_databases.mdx Re-edited content Updated to bump packages Signed-off-by: Dj Walker-Morgan Icon updates Signed-off-by: Dj Walker-Morgan Added Transporter icon Signed-off-by: Dj Walker-Morgan EDB Transporter - Removing unused files removing from PR, incorrectly added generated files 
for SLES fixed reader install file name for SLES generated file for ppc index page added fix to ppc index page changed version from 2 to 2_preview generated install files for 2_preview Removed 2, alpha and preview folders Added SLES for PowerPC generated files for SLS another sles edit, missed on previous commit fixes erroneous update on earlier commit template changes for SLES platforms generated files for Transporter removed rhel for powerpc mostly completed system for EDB Transporter more tentative edits stumbling baby steps baby steps EDB Transporter preview docs adding product to the required files hide version EDB Transporter - Typos in preparing oracle source fixed vars in postgres file EDB Transporter - Revert "change dollar signs" on oracle table names Changed dollar signs to standard variable braces EDB Transporter - Code blocks type Update upgrading.mdx Removed comment Update terminology.mdx Removed hyphen and removed comment. Update preparing_postgres_source_databases.mdx Removed comment Fixed links and other formatting issues First pass at editing transporter preview EDB Transporter - Fix link EDB Transporter - Getting started EDB Transporter preview docs Add docs for transporter db config validation scripts Add basic text about EDB Transporter adding product to the required files hide version Transporter: alpha docs Add basic text about EDB Transporter adding product to the required files hide version Transporter: alpha docs commit d817fa8d718b97fdfe9ddb942c21408495892068 Author: gvasquezvargas Date: Thu Jun 27 16:30:09 2024 +0200 Break down topics for reader to follow up more easily commit fd13042420ab6358733fa37e3f4bbfa0b2baaab4 Author: gvasquezvargas Date: Thu Jun 27 12:17:54 2024 +0200 Updated left nav bar commit c0e3e91b83131bc02c85c892531d155149c9fe54 Author: gvasquezvargas Date: Thu Jun 27 08:11:00 2024 +0200 Suggestions for the Transporter preview commit c8a8c15df640886e1999483591d608bd7ecb7e67 Author: gvasquezvargas Date: Wed Jun 26 16:36:21 2024 +0200 started adding UI steps and reorganizing the getting started overview commit e81f7881ec6aafa0931c3571a31e9c01cd560c63 Author: Javier Perozo Date: Fri Jun 21 13:30:18 2024 +0200 EDB Transporter - updating content commit 26d2c417242f8fef957e33f30b7017cd15b2dc76 Author: Javier Perozo Date: Fri Jun 21 13:07:10 2024 +0200 EDB Transporter - updating content commit 368d01cf53010db778407cd03136c32a02bf5e0c Author: Javier Perozo Date: Fri Jun 21 10:58:29 2024 +0200 EDB Transporter - updating content commit 818162e8800458496fb567465a3b82861e01eec9 Author: Javier Perozo Date: Fri Jun 21 10:51:42 2024 +0200 EDB Transporter - updating content commit d27173f89ebb2440a020cf8a7efa1f7bdf4b41b2 Author: Javier Perozo Date: Tue Jun 11 17:08:04 2024 +0200 EDB Transporter - updating content commit 94ef814f8448ca685b3240359142cdd83dae1cff Author: Javier Perozo Date: Tue Jun 11 16:59:55 2024 +0200 EDB Transporter - updating content commit f0f861c8475f541dd4c7cfde1c12580f777c96a6 Author: Javier Perozo Date: Tue Jun 11 15:59:28 2024 +0200 EDB Transporter - renaming version commit 665272c251e84869f4eb216cb593880143773ac6 Author: drothery-edb Date: Thu Mar 9 14:42:27 2023 -0500 Transporter: alpha docs Applied Betsy's edits to the install templates Update preparing_postgres_source_databases.mdx Update preparing_oracle_source_databases.mdx Re-edited content Updated to bump packages Signed-off-by: Dj Walker-Morgan Icon updates Signed-off-by: Dj Walker-Morgan Added Transporter icon Signed-off-by: Dj Walker-Morgan EDB Transporter - Removing unused 
files removing from PR, incorrectly added generated files for SLES fixed reader install file name for SLES generated file for ppc index page added fix to ppc index page changed version from 2 to 2_preview generated install files for 2_preview Removed 2, alpha and preview folders Added SLES for PowerPC generated files for SLS another sles edit, missed on previous commit fixes erroneous update on earlier commit template changes for SLES platforms generated files for Transporter removed rhel for powerpc mostly completed system for EDB Transporter more tentative edits stumbling baby steps baby steps EDB Transporter preview docs adding product to the required files hide version EDB Transporter - Typos in preparing oracle source fixed vars in postgres file EDB Transporter - Revert "change dollar signs" on oracle table names Changed dollar signs to standard variable braces EDB Transporter - Code blocks type Update upgrading.mdx Removed comment Update terminology.mdx Removed hyphen and removed comment. Update preparing_postgres_source_databases.mdx Removed comment Fixed links and other formatting issues First pass at editing transporter preview EDB Transporter - Fix link EDB Transporter - Getting started EDB Transporter preview docs Add docs for transporter db config validation scripts Add basic text about EDB Transporter adding product to the required files hide version Transporter: alpha docs Add basic text about EDB Transporter adding product to the required files hide version Transporter: alpha docs --- .../getting_started/installing/index.mdx | 1 - .../data-migration-service/index.mdx | 2 ++ .../templates/platformBase/base.njk | 2 +- .../almalinux-8-or-rocky-linux-8.njk | 5 ++++ .../products/edb-transporter/base.njk | 24 +++++++++++++++++++ .../products/edb-transporter/centos-7.njk | 5 ++++ .../products/edb-transporter/debian-10.njk | 2 ++ .../products/edb-transporter/debian-11.njk | 2 ++ .../products/edb-transporter/debian-9.njk | 2 ++ .../products/edb-transporter/debian.njk | 4 ++++ .../products/edb-transporter/index.njk | 9 +++++++ .../edb-transporter/ppc64le_index.njk | 7 ++++++ .../edb-transporter/rhel-7-or-ol-7.njk | 5 ++++ .../edb-transporter/rhel-8-or-ol-8.njk | 5 ++++ .../products/edb-transporter/sles-12.njk | 6 +++++ .../edb-transporter/sles-12_ppc64le.njk | 5 ++++ .../products/edb-transporter/sles-15.njk | 5 ++++ .../edb-transporter/sles-15_ppc64le.njk | 4 ++++ .../products/edb-transporter/ubuntu-18.04.njk | 2 ++ .../products/edb-transporter/ubuntu-20.04.njk | 2 ++ .../products/edb-transporter/ubuntu-22.04.njk | 2 ++ .../products/edb-transporter/ubuntu.njk | 1 + .../products/edb-transporter/x86_64_index.njk | 7 ++++++ src/constants/products.js | 4 ++++ 24 files changed, 111 insertions(+), 2 deletions(-) create mode 100644 install_template/templates/products/edb-transporter/almalinux-8-or-rocky-linux-8.njk create mode 100644 install_template/templates/products/edb-transporter/base.njk create mode 100644 install_template/templates/products/edb-transporter/centos-7.njk create mode 100644 install_template/templates/products/edb-transporter/debian-10.njk create mode 100644 install_template/templates/products/edb-transporter/debian-11.njk create mode 100644 install_template/templates/products/edb-transporter/debian-9.njk create mode 100644 install_template/templates/products/edb-transporter/debian.njk create mode 100644 install_template/templates/products/edb-transporter/index.njk create mode 100644 install_template/templates/products/edb-transporter/ppc64le_index.njk create mode 100644 
install_template/templates/products/edb-transporter/rhel-7-or-ol-7.njk create mode 100644 install_template/templates/products/edb-transporter/rhel-8-or-ol-8.njk create mode 100644 install_template/templates/products/edb-transporter/sles-12.njk create mode 100644 install_template/templates/products/edb-transporter/sles-12_ppc64le.njk create mode 100644 install_template/templates/products/edb-transporter/sles-15.njk create mode 100644 install_template/templates/products/edb-transporter/sles-15_ppc64le.njk create mode 100644 install_template/templates/products/edb-transporter/ubuntu-18.04.njk create mode 100644 install_template/templates/products/edb-transporter/ubuntu-20.04.njk create mode 100644 install_template/templates/products/edb-transporter/ubuntu-22.04.njk create mode 100644 install_template/templates/products/edb-transporter/ubuntu.njk create mode 100644 install_template/templates/products/edb-transporter/x86_64_index.njk diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx index a22ead4771b..a1677666242 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx @@ -1,7 +1,6 @@ --- navTitle: Installing EDB DMS Reader title: Installing EDB DMS Reader on Linux - navigation: - linux_x86_64 --- diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx index e052633a884..891c33ed71d 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx @@ -4,6 +4,8 @@ indexCards: simple deepToC: true directoryDefaults: description: "EDB Data Migration Service is a PG AI integrated migration solution that enables secure, fault-tolerant, and performant migrations to EDB Postgres AI Cloud Service." + product: "data migration service" + iconName: EdbTransporter navigation: - "#Concepts" - terminology diff --git a/install_template/templates/platformBase/base.njk b/install_template/templates/platformBase/base.njk index 15bfe645bd4..fea01437a43 100644 --- a/install_template/templates/platformBase/base.njk +++ b/install_template/templates/platformBase/base.njk @@ -19,7 +19,7 @@ Before you begin the installation process: Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. 
- To determine if your repository exists, enter this command: + To determine if your repository exists, enter: {%- filter indent(2) -%} {% block repocheck %} {# Any changes to this block should be replicated for Debian, Ubuntu, and SLES #} diff --git a/install_template/templates/products/edb-transporter/almalinux-8-or-rocky-linux-8.njk b/install_template/templates/products/edb-transporter/almalinux-8-or-rocky-linux-8.njk new file mode 100644 index 00000000000..7607989c9ea --- /dev/null +++ b/install_template/templates/products/edb-transporter/almalinux-8-or-rocky-linux-8.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "almalinux-8-or-rocky-linux-8" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-transporter/base.njk b/install_template/templates/products/edb-transporter/base.njk new file mode 100644 index 00000000000..03559ba8926 --- /dev/null +++ b/install_template/templates/products/edb-transporter/base.njk @@ -0,0 +1,24 @@ +{% extends "platformBase/" + platformBaseTemplate + '.njk' %} +{% set packageName = packageName or 'cdcreader=-1.4-1.4766136665.17.1.jammy' %} +{% set writerPackageName = writerPackageName or 'cdcwriter=1.3-1.4766201953.4.1.jammy' %} +{% import "platformBase/_deploymentConstants.njk" as deploy %} +{% block frontmatter %} +{# + If you modify deployment path here, please first copy the old expression + and add it to the list under "redirects:" below - this ensures we don't + break any existing links. +#} +deployPath: transporter/{{ product.version }}/installing/linux_{{platform.arch}}/transporter_{{deploy.map_platform[platform.name]}}.mdx +{% endblock frontmatter %} + +{% block installCommand %} +Install CDCReader: +```shell +sudo {{packageManager}} install {{ packageName }} +``` + +Install CDCWriter: +```shell +sudo {{packageManager}} install {{ writerPackageName }} +``` +{% endblock installCommand %} diff --git a/install_template/templates/products/edb-transporter/centos-7.njk b/install_template/templates/products/edb-transporter/centos-7.njk new file mode 100644 index 00000000000..77738c07ed0 --- /dev/null +++ b/install_template/templates/products/edb-transporter/centos-7.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "centos-7" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-transporter/debian-10.njk b/install_template/templates/products/edb-transporter/debian-10.njk new file mode 100644 index 00000000000..16149ca1714 --- /dev/null +++ b/install_template/templates/products/edb-transporter/debian-10.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/debian.njk" %} +{% set platformBaseTemplate = "debian-10" %} diff --git a/install_template/templates/products/edb-transporter/debian-11.njk b/install_template/templates/products/edb-transporter/debian-11.njk new file mode 100644 index 00000000000..cc965a07174 --- /dev/null +++ b/install_template/templates/products/edb-transporter/debian-11.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/debian.njk" %} +{% 
set platformBaseTemplate = "debian-11" %} diff --git a/install_template/templates/products/edb-transporter/debian-9.njk b/install_template/templates/products/edb-transporter/debian-9.njk new file mode 100644 index 00000000000..2d835f84d33 --- /dev/null +++ b/install_template/templates/products/edb-transporter/debian-9.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/debian.njk" %} +{% set platformBaseTemplate = "debian-9" %} diff --git a/install_template/templates/products/edb-transporter/debian.njk b/install_template/templates/products/edb-transporter/debian.njk new file mode 100644 index 00000000000..7ba0d685ae9 --- /dev/null +++ b/install_template/templates/products/edb-transporter/debian.njk @@ -0,0 +1,4 @@ +{% extends "products/edb-transporter/base.njk" %} +{% block debian_ubuntu %}This section steps you through getting started with your cluster including logging in, ensuring the installation was successful, connecting to your cluster, and creating the user password. + +```shell{% endblock debian_ubuntu %} \ No newline at end of file diff --git a/install_template/templates/products/edb-transporter/index.njk b/install_template/templates/products/edb-transporter/index.njk new file mode 100644 index 00000000000..6253ffa4f3d --- /dev/null +++ b/install_template/templates/products/edb-transporter/index.njk @@ -0,0 +1,9 @@ +{% extends "platformBase/index.njk" %} +{% set productShortname="transporter" %} + +{% block frontmatter %} +{{ super() }} +{% endblock frontmatter %} +{% block navigation %} +- linux_x86_64 +{% endblock navigation %} diff --git a/install_template/templates/products/edb-transporter/ppc64le_index.njk b/install_template/templates/products/edb-transporter/ppc64le_index.njk new file mode 100644 index 00000000000..11bad6298dc --- /dev/null +++ b/install_template/templates/products/edb-transporter/ppc64le_index.njk @@ -0,0 +1,7 @@ + +{% extends "platformBase/ppc64le_index.njk" %} +{% set productShortname="transporter" %} + +{% block frontmatter %} +{{super()}} +{% endblock frontmatter %} diff --git a/install_template/templates/products/edb-transporter/rhel-7-or-ol-7.njk b/install_template/templates/products/edb-transporter/rhel-7-or-ol-7.njk new file mode 100644 index 00000000000..8530e82b892 --- /dev/null +++ b/install_template/templates/products/edb-transporter/rhel-7-or-ol-7.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "rhel-7-or-ol-7" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-transporter/rhel-8-or-ol-8.njk b/install_template/templates/products/edb-transporter/rhel-8-or-ol-8.njk new file mode 100644 index 00000000000..fd1a0a7b8a6 --- /dev/null +++ b/install_template/templates/products/edb-transporter/rhel-8-or-ol-8.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "rhel-8-or-ol-8" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-transporter/sles-12.njk b/install_template/templates/products/edb-transporter/sles-12.njk new file mode 100644 index 
00000000000..935c60bab3b --- /dev/null +++ b/install_template/templates/products/edb-transporter/sles-12.njk @@ -0,0 +1,6 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "sles-12" %} +{% set packageManager = "zypper" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} + diff --git a/install_template/templates/products/edb-transporter/sles-12_ppc64le.njk b/install_template/templates/products/edb-transporter/sles-12_ppc64le.njk new file mode 100644 index 00000000000..6f87859e6a3 --- /dev/null +++ b/install_template/templates/products/edb-transporter/sles-12_ppc64le.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/sles-12.njk" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} + + diff --git a/install_template/templates/products/edb-transporter/sles-15.njk b/install_template/templates/products/edb-transporter/sles-15.njk new file mode 100644 index 00000000000..57cb9bb981b --- /dev/null +++ b/install_template/templates/products/edb-transporter/sles-15.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-transporter/base.njk" %} +{% set platformBaseTemplate = "sles-15" %} +{% set packageManager = "zypper" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/edb-transporter/sles-15_ppc64le.njk b/install_template/templates/products/edb-transporter/sles-15_ppc64le.njk new file mode 100644 index 00000000000..6e7e15e15cb --- /dev/null +++ b/install_template/templates/products/edb-transporter/sles-15_ppc64le.njk @@ -0,0 +1,4 @@ +{% extends "products/edb-transporter/sles-15.njk" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} + diff --git a/install_template/templates/products/edb-transporter/ubuntu-18.04.njk b/install_template/templates/products/edb-transporter/ubuntu-18.04.njk new file mode 100644 index 00000000000..c61108b9231 --- /dev/null +++ b/install_template/templates/products/edb-transporter/ubuntu-18.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-18.04" %} diff --git a/install_template/templates/products/edb-transporter/ubuntu-20.04.njk b/install_template/templates/products/edb-transporter/ubuntu-20.04.njk new file mode 100644 index 00000000000..caa472b2a47 --- /dev/null +++ b/install_template/templates/products/edb-transporter/ubuntu-20.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-20.04" %} diff --git a/install_template/templates/products/edb-transporter/ubuntu-22.04.njk b/install_template/templates/products/edb-transporter/ubuntu-22.04.njk new file mode 100644 index 00000000000..78e2378ee34 --- /dev/null +++ b/install_template/templates/products/edb-transporter/ubuntu-22.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-transporter/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-22.04" %} diff --git a/install_template/templates/products/edb-transporter/ubuntu.njk b/install_template/templates/products/edb-transporter/ubuntu.njk new file mode 100644 index 00000000000..e968ddc2fd8 --- /dev/null +++ 
b/install_template/templates/products/edb-transporter/ubuntu.njk @@ -0,0 +1 @@ +{% extends "products/edb-transporter/base.njk" %} diff --git a/install_template/templates/products/edb-transporter/x86_64_index.njk b/install_template/templates/products/edb-transporter/x86_64_index.njk new file mode 100644 index 00000000000..dff6034cccb --- /dev/null +++ b/install_template/templates/products/edb-transporter/x86_64_index.njk @@ -0,0 +1,7 @@ + +{% extends "platformBase/x86_64_index.njk" %} +{% set productShortname="transporter" %} + +{% block frontmatter %} +{{ super() }} +{% endblock frontmatter %} diff --git a/src/constants/products.js b/src/constants/products.js index a37c7e3a7cc..3e7e12f9f40 100644 --- a/src/constants/products.js +++ b/src/constants/products.js @@ -102,4 +102,8 @@ export const products = { name: "EDB Postgres AI", iconName: IconNames.EDB_POSTGRES_AI_LOOP_BLACK, }, + "data migration service": { + name: "EDB Data Migration Service", + iconName: IconNames.EDB_TRANSPORTER, + }, }; From 188a16963729cb5439622b8871631c82c2b3f3f0 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 28 Aug 2024 15:49:29 +0200 Subject: [PATCH 30/67] Last pass of edits after testing --- .../getting_started/apply_constraints.mdx | 2 +- .../getting_started/config_reader.mdx | 79 +++++++++++-------- .../getting_started/create_database.mdx | 4 +- .../getting_started/create_migration.mdx | 8 +- .../getting_started/index.mdx | 8 +- .../getting_started/mark_completed.mdx | 8 +- .../getting_started/prepare_schema.mdx | 20 ++--- .../preparing_oracle_source_databases.mdx | 4 + .../preparing_postgres_source_databases.mdx | 8 +- .../getting_started/verify_migration.mdx | 2 +- 10 files changed, 82 insertions(+), 61 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx index e158db04ba8..eaa736e971c 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx @@ -2,7 +2,7 @@ title: "Applying constraints" --- -At the beginning of your data migration journey with EDB Data Migration Service (EDB DMS), you [prepared and imported the schema](prepare_schema) of your source database. Now, re-apply the constraints that were excluded from the schema and data migration. +At the beginning of your data migration journey with EDB Data Migration Service (EDB DMS), you [prepared and imported the schema](prepare_schema) of your source database. Now, connect to the target database and re-apply the constraints that were excluded from the schema and data migration. ## `PRIMARY KEY` and `UNIQUE` constraints diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx index 21a42923e74..179477f3326 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx @@ -11,11 +11,15 @@ deepToC: true 1. Within your project, select **Migrate** > **Credentials**. +1. Select **Create Migration Credential** > **Download Credential**. + 1. 
Unzip the credentials folder and copy it to the host where the reader is installed. ## Configuring the reader -Set the following environment variables in `/opt/cdcreader/run-cdcreader.sh` with the right values: +1. Open the EDB DMS reader located in `/opt/cdcreader/run-cdcreader.sh` and ensure you have write permissions. + +1. Set the variables according to your environment and uncomment the edited lines. See [parameters](#parameters) for further guidance. ```shell ### set the following environment variables: @@ -75,7 +79,7 @@ Set the following environment variables in `/opt/cdcreader/run-cdcreader.sh` wit ## Parameters -### DBZ_ID +### `DBZ_ID` This is the name you assign to identify a source. This name will later appear as a _source_ in the **Migrate** > **Sources** section of the EDB Postgres AI Console. @@ -85,64 +89,70 @@ Consider the following ID guidelines: - You can use lowercase and uppercase characters, numbers, underscores(_) and hyphens(-) for the ID. Other special characters are not supported. - The ID must be unique. The source instances cannot have the same ID. -### RW_SERVICE_HOST +### `RW_SERVICE_HOST` Specifies the URL of the service that will host the migration. `transporter-rw-service` is always https://transporter-rw-service.biganimal.com. -### TLS_PRIVATE_KEY_PATH +### `TLS_PRIVATE_KEY_PATH` Directory path to the `client-key.pem` private key you downloaded from the EDB Postgres AI Console. -The Reader's HTTP client uses it to perform mTLS authentication with the `transporter-rw-service`. -### TLS_CERTIFICATE_PATH +The HTTP client of the EDB DMS Reader uses it to perform mTLS authentication with the `transporter-rw-service`. + +### `TLS_CERTIFICATE_PATH` Directory path to the X509 `client-cert.pem` certificate you downloaded from the EDB Postgres AI Console. -The Reader's HTTP client uses it to perform mTLS authentication with the `transporter-rw-service`. -### TLS_CA_PATH +The HTTP client of the EDB DMS Reader uses it to perform mTLS authentication with the `transporter-rw-service`. + +### `TLS_CA_PATH` -Directory path to the `int.cert` Certificate Authority you downloaded from the EDB Postgres AI Console. -It signs the certificate configured in TLS_CERTIFICATE_PATH. +Directory path to the `int.cert` Certificate Authority you downloaded from the EDB Postgres AI Console. -### APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION +It signs the certificate configured in `TLS_CERTIFICATE_PATH`. + +### `APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION` Directory path to the `client-keystore.p12` keystore location file you downloaded from the EDB Postgres AI Console. -It is created from the private key and certifiate configured in TLS_PRIVATE_KEY_PATH and TLS_CERTIFICATE_PATH. +It is created from the private key and certificate configured in `TLS_PRIVATE_KEY_PATH` and `TLS_CERTIFICATE_PATH`. + The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`. -### APICURIOREQUEST_TRUSTSTORE_LOCATION -Created from the Certificate Authority configured in TLS_CA_PATH -Apicurio client use it to mTLS with transporter-rw-service +### `APICURIOREQUEST_TRUSTSTORE_LOCATION` + +Created from the Certificate Authority configured in `TLS_CA_PATH`. + +The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`. + +### `DBZ_DATABASES` + +This is a list of source database information you require for the EDB DMS Reader be able to read the correct source database information for the migration. 
+ +You can configure the EDB DMS Reader to migrate multiple databases. The `DBZ_DATABASES_0__TYPE` section delimits the information for the first database. You can use `DBZ_DATABASES_1__TYPE` to provide data for a second database. Add more sections to the EDB DMS Reader (`DBZ_DATABASES_2__TYPE`, `DBZ_DATABASES_3__TYPE`) by increasing the index manully. + +#### `DBZ_DATABASES_0__TYPE` -### DBZ_DATABASES -This is a source databases list you want to reader to connect. You can configure multiple database for one reader. -You need to increase the index manully in you configuration. +This is the source database type. EDB DMS reader supports `ORACLE` and `POSTGRES`. -For example: +#### `DBZ_DATABASES_0__HOSTNAME` -`DBZ_DATABASES_0__TYPE` is the type of the first source database. +The hostname of the source database. -`DBZ_DATABASES_1__TYPE` is the type of the second source database. +#### `DBZ_DATABASES_0__PORT` -#### DBZ_DATABASES_0__TYPE -Source database type, support ORACLE and POSTGRES currently +The port of the source database. -#### DBZ_DATABASES_0__HOSTNAME -Source database hostname +#### `DBZ_DATABASES_0__CATALOG` -#### DBZ_DATABASES_0__PORT -Source database port +The database name in the source database server. -#### DBZ_DATABASES_0__CATALOG -Source database catalog +#### `DBZ_DATABASES_0__USERNAME` -#### DBZ_DATABASES_0__USERNAME -Source database username +The database username of the source database. -#### DBZ_DATABASES_0__PASSWORD -Source database password +#### `DBZ_DATABASES_0__PASSWORD` -Once the reader finishes running, the cdc source will appear in the EDB Postgres AI Console. You can select this source for any [migration](create_migration). +The password for the database username of the source database. ## Running the EDB DMS Reader @@ -156,5 +166,6 @@ Once the reader finishes running, the cdc source will appear in the EDB Postgres 1. Go to the [EDB Postgres AI Console](https://portal.biganimal.com), and verify that a source with the `DBZ_ID` name is displayed in **Migrate** > **Sources**. +You can select this source for your [migration](create_migration). diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx index f45e9466dad..2f36e9cfc1e 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx @@ -18,6 +18,6 @@ To use an existing cluster as a target for the migration, ensure the tables you See [Creating a distributed high-availability cluster](/biganimal/latest/getting_started/creating_a_cluster/creating_a_dha_cluster/) for detailed instructions on how to create a distributed high availibility cluster. -1. In **Clusters** page, select your cluster, and use the **Quick Connect** option to access your instance from your terminal. +1. In the **Clusters** page, select your cluster, and use the **Quick Connect** option to access your instance from your terminal. -1. Create a new empty database. For an example, see [Create a new database](/biganimal/latest/free_trial/quickstart/#create-a-new-database). +1. Create a new empty database that you will use as a target for the migration. Alternatively, you can use the default database `edb_admin`. 
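To illustrate the reader configuration parameters documented in `config_reader.mdx` above, the exported variables in `/opt/cdcreader/run-cdcreader.sh` for a single Postgres source might look like the following sketch. The variable names and the `RW_SERVICE_HOST` URL come from the documentation; the host name, database details, credentials, and credential file paths are placeholder assumptions.

```shell
# Hypothetical excerpt from /opt/cdcreader/run-cdcreader.sh (all values are placeholders)
export DBZ_ID="acctg_sales_01"                                   # unique name shown under Migrate > Sources
export RW_SERVICE_HOST="https://transporter-rw-service.biganimal.com"

# Paths assume the downloaded credentials were unzipped to /opt/cdcreader/credentials
export TLS_PRIVATE_KEY_PATH="/opt/cdcreader/credentials/client-key.pem"
export TLS_CERTIFICATE_PATH="/opt/cdcreader/credentials/client-cert.pem"
export TLS_CA_PATH="/opt/cdcreader/credentials/int.cert"
export APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION="/opt/cdcreader/credentials/client-keystore.p12"
export APICURIOREQUEST_TRUSTSTORE_LOCATION="/opt/cdcreader/credentials/truststore.p12"   # assumed file name

# First (index 0) source database; add DBZ_DATABASES_1__*, DBZ_DATABASES_2__*, ... for more sources
export DBZ_DATABASES_0__TYPE="POSTGRES"
export DBZ_DATABASES_0__HOSTNAME="source-db.example.com"
export DBZ_DATABASES_0__PORT="5432"
export DBZ_DATABASES_0__CATALOG="sales"
export DBZ_DATABASES_0__USERNAME="cdc_migrator"
export DBZ_DATABASES_0__PASSWORD="********"
```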
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx index 3c5911256c7..10de35aa32f 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_migration.mdx @@ -21,4 +21,10 @@ This establishes a sync between the source database and a target cluster in the 1. Select the tables and columns to migrate. Modify the table and column names if needed. -1. Select **Create Migration**. \ No newline at end of file +1. Select **Create Migration**. + +The EDB Postgres AI Console now displays a new migration. The EDB DMS Reader is constantly streaming data when the migration displays the **Running** state. Changes to data are replicated from the source to the target database as long as the migration is running. + +!!!note + The EDB DMS Reader streams data changes. It does not stream changes in the DDL objects. +!!! diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx index 672699c5abc..1791b4c2032 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx @@ -4,8 +4,8 @@ description: Understand how to create a migration from planning to execution. navigation: - create_database - prepare_schema - - preparing_db - installing + - preparing_db - config_reader - create_migration - mark_completed @@ -14,12 +14,14 @@ navigation: - remove_software --- -Setting up an EDB Data Migration consists of a number of steps. +Creating a migration to EDB Postgres AI involves a number of steps related to the preparation of your source and target clusters, the preparation of schemas and constrains, the installation and configuration of the EDB DMS reader, an data streaming to your new database cluster. + +This outline displays the steps in chronological order: 1. [Create a target database cluster](create_database) in the EDB Postgres® AI Console. 1. [Prepare the schema](prepare_schema) with Migration Portal and migrate the schema to the target database. -1. [Prepare your source Oracle or Postgres database](preparing_db) with `sqlplus` or `psql`. 1. [Install the EDB DMS Reader](installing) on your machine with your terminal. +1. [Prepare your source Oracle or Postgres database](preparing_db) with `sqlplus` or `psql`. 1. [Configure the EDB DMS Reader](config_reader) on your machine with your terminal. 1. [Create a new migration](create_migration) in the EDB Postgres AI Console. 1. [Mark the Migration as completed](mark_completed) in the EDB Postgres AI Console. 
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx index a2460428bae..4e7385cd8e1 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx @@ -2,10 +2,8 @@ title: "Mark migration as completed" --- -You have taken a snapshot of the source database and imported it to the EDB Postgres® AI Console. +The EDB DMS Reader will continue to stream any data updates performed on the source database to the target database until you mark the migration as completed. Check that your data is present in the target database before you mark it as completed. -To ensure that the target cluster is up-to-date with the source cluster and allow the EDB DMS Reader to be able to stream the latest updates on the source database to the cluster mark the migration as completed. +1. In the [EDB Postgres AI Console](https://portal.biganimal.com), in your project, select **Migrate** > **Migrations**. -1. In the EDB Postgres AI Console, in your project, select **Migrate** > **Migrations**. - -1. Select the **Mark as completed** button. \ No newline at end of file +1. Select the **check icon** next to your migration to mark the migration as completed. This stops the streaming procedure. \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx index 01fddc64c41..e8fbb525096 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx @@ -35,15 +35,9 @@ EDB recommends [applying the following constraints](apply_constraints) on the ta `EXCLUDE` -## Preparing and importing your schema +## Preparing your schema -### Prerequisite - -You created a schema in the target database. - -### Prepare your schema - -#### Oracle to EDB Postgres Advanced Server migrations +### Oracle to EDB Postgres Advanced Server migrations Use [EDB Migration Portal](/migration_portal/latest/03_mp_using_portal/03_mp_quick_start/) to assess Oracle database sources for schema compatibility before starting the data migration process. @@ -51,16 +45,18 @@ EDB Migration Portal offers the ability to separate constraints from other desti Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints from the DDL before importing the schema to the target database. -#### Other migrations +### Other migrations -For data migrations to and from Postgres EDB recommends using [EDB Migration Toolkit](/migration_toolkit/latest/) to manage the schema. MTK's [offline migration](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options) capability provides an easy way to extract a database's schema and separate constraints. +For data migrations to and from Postgres, EDB recommends using [EDB Migration Toolkit](/migration_toolkit/latest/) to manage the schema. 
MTK's [offline migration](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options) capability provides an easy way to extract a database's schema and separate constraints. Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints from the DDL before importing the schema to the target database. Tools such as `pg_dump` and `pg_restore` are another valid route for migrating DDL. -### Import your schema to the target database +## Importing your schema to the target database After you have prepared the DDL, and excluded `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints, connect to the target database and import the SQL-formatted DDL file. -You can use [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/index.html), [psql](https://www.postgresql.org/docs/7.0/app-psql.htm) or a different tool to perform the import. +Importing the schema to the target database before the migration takes place is important as EDB DMS only migrates data. The target database must have the schemas in place for the migration to populate them with data. + +You can use different methods to import the schemas. You can manually create them in the target database, you can perform a [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) (for example, if you used [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) to obtain the schemas), you can employ [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/index.html), [psql](https://www.postgresql.org/docs/7.0/app-psql.htm) or the tool of your preference. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx index 1afa506f874..07f11d07c98 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx @@ -272,6 +272,10 @@ Run the script without arguments to print the usage: /opt/cdcreader/oracleConfigValidation.sh ``` +## SSL configuration + +Ensure you configure your source database server to accept SSL connections to allow the EDB DMS reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. + ## More information Your database is ready for CDC migration. 
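As a concrete illustration of the "Importing your schema to the target database" step described above, loading a constraint-free DDL file with psql might look like the following sketch; the file name, connection details, and target database name are assumptions.

```shell
# Hypothetical example: import the prepared DDL (FOREIGN KEY, REFERENCES, CHECK, CASCADE,
# and EXCLUDE constraints already removed) into the target database
psql "host=target-cluster.example.com port=5432 dbname=migration_target user=edb_admin sslmode=require" \
  -f schema_without_constraints.sql
```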
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx index 9b25a4e7a3c..605656cc80f 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx @@ -71,7 +71,7 @@ Run the script without arguments to print the usage: First, create a new role for CDC migration with `LOGIN` and `REPLICATION` abilities granted: ```sql -CREATE ROLE WITH REPLICATION LOGIN PASSWORD ; +CREATE ROLE WITH REPLICATION LOGIN PASSWORD ''; ``` `` needs to own the source tables to autocreate Postgres publications. Because the source tables are already owned by another role, you create a role/user that can act as the new owner and grant the specified replication group role to both the current table owner and to ``: @@ -116,13 +116,17 @@ Where: The CDC migration process for Postgres sources leverages logical decoding and the publication/subscription mechanism. To use Postgres as a source, you need to create a replication slot for your CDC migration role: ```sql -PERFORM pg_create_logical_replication_slot('', 'pgoutput'); +SELECT pg_create_logical_replication_slot('', 'pgoutput'); ``` Where: - `` is the name of the Postgres role or user to use for CDC migration database access. - `pgoutput` is the logical decoding plugin supplied by Postgres that EDB DMS uses. +## SSL configuration + +Ensure you configure your source database server to [accept SSL connections](https://www.postgresql.org/docs/current/ssl-tcp.html) to allow the EDB DMS reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. + ## More information Your database is ready for CDC migration. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx index 4a967416082..0ce42a7ab42 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx @@ -4,4 +4,4 @@ title: "Verifying the migration" Verify that the migration was successful by comparing the source and target databases. -To do it, you can use [LiveCompare](/livecompare/latest/). \ No newline at end of file +You can use [LiveCompare](/livecompare/latest/). 
\ No newline at end of file From 47b14b85f74174190a8a1626eaf1a675661e2a6b Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Thu, 29 Aug 2024 15:50:37 +0200 Subject: [PATCH 31/67] Removed installation/removal steps as they are empty as discussed with Kyle --- .../data-migration-service/getting_started/index.mdx | 2 -- .../getting_started/remove_software.mdx | 5 ----- 2 files changed, 7 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx index 1791b4c2032..9dad5110217 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx @@ -11,7 +11,6 @@ navigation: - mark_completed - apply_constraints - verify_migration - - remove_software --- Creating a migration to EDB Postgres AI involves a number of steps related to the preparation of your source and target clusters, the preparation of schemas and constrains, the installation and configuration of the EDB DMS reader, an data streaming to your new database cluster. @@ -27,4 +26,3 @@ This outline displays the steps in chronological order: 1. [Mark the Migration as completed](mark_completed) in the EDB Postgres AI Console. 1. [Apply constraints](apply_constraints) to the new database. 1. [Verify the migration completed successfully](verify_migration) with LiveCompare. -1. [Remove customer software](remove_software). diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx deleted file mode 100644 index d6a88cefa9b..00000000000 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Removing customer software" ---- - -After verifying that the migration is up and running, you can remove the customer software. \ No newline at end of file From b6ee58e5462e3058ce70a8733101c51be8e7d767 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 2 Sep 2024 17:26:21 +0200 Subject: [PATCH 32/67] Fixed step-by-step descriptions in index page for migration workflow --- .../getting_started/index.mdx | 29 ++++++++++++------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx index 9dad5110217..e2e39e1c7a9 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx @@ -1,6 +1,7 @@ --- title: "Getting started" description: Understand how to create a migration from planning to execution. 
+indexCards: none navigation: - create_database - prepare_schema @@ -13,16 +14,22 @@ navigation: - verify_migration --- -Creating a migration to EDB Postgres AI involves a number of steps related to the preparation of your source and target clusters, the preparation of schemas and constrains, the installation and configuration of the EDB DMS reader, an data streaming to your new database cluster. +Creating a migration from an Oracle or Postgres database to EDB Postgres AI involves several steps. -This outline displays the steps in chronological order: +1. **[Create a database cluster](create_database)**: In the EDB Postgres® AI Console, ensure you have created a database cluster. Connect to the cluster and create a database that will serve as a target for the migration. -1. [Create a target database cluster](create_database) in the EDB Postgres® AI Console. -1. [Prepare the schema](prepare_schema) with Migration Portal and migrate the schema to the target database. -1. [Install the EDB DMS Reader](installing) on your machine with your terminal. -1. [Prepare your source Oracle or Postgres database](preparing_db) with `sqlplus` or `psql`. -1. [Configure the EDB DMS Reader](config_reader) on your machine with your terminal. -1. [Create a new migration](create_migration) in the EDB Postgres AI Console. -1. [Mark the Migration as completed](mark_completed) in the EDB Postgres AI Console. -1. [Apply constraints](apply_constraints) to the new database. -1. [Verify the migration completed successfully](verify_migration) with LiveCompare. +1. **[Prepare the schema](prepare_schema)**: In your source machine, prepare the source database by exporting it and excluding unsupported constraints. Then, import the adapted schema to the target database. + +1. **[Install the EDB DMS Reader](installing)**: In your source machine, install the EDB DMS Reader from the EDB repository. + +1. **[Prepare your source Oracle or Postgres database](preparing_db)**: In your source machine, prepare the source database by altering settings and creating users that are required for the migration. Ensure your source database can accept SSL connections. + +1. **[Configure the EDB DMS Reader](config_reader)**: In the EDB Postgres AI Console, download dedicated migration credentials. In your source machine, configure the EDB DMS Reader by exporting environment variables that allow the Reader to connect to the source. Execute the Reader. + +1. **[Create a new migration](create_migration)**: In the EDB Postgres AI Console, create a new migration by selecting the source generated by the Reader in the Console, and selecting the target database you created for this purpose. + +1. **[Mark the Migration as completed](mark_completed)**: In the EDB Postgres AI Console, mark the migration as completed to stop the streaming process. + +1. **[Reapply any excluded constraints](apply_constraints)**: Apply the constraints you excluded from the schema migration in the new database. + +1. **[Verify the migration completed successfully](verify_migration)**: Use LiveCompare to ensure the target database has the same data as the source database. 
From 90700ff6297e2ab68ee6c9a10bf5c34b641529ab Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 2 Sep 2024 18:11:50 +0200 Subject: [PATCH 33/67] Minor edits to getting started section --- .../getting_started/apply_constraints.mdx | 4 ++-- .../getting_started/create_database.mdx | 4 ++-- .../getting_started/prepare_schema.mdx | 10 +++++----- .../preparing_db/preparing_oracle_source_databases.mdx | 2 +- .../preparing_postgres_source_databases.mdx | 8 ++++---- .../getting_started/verify_migration.mdx | 2 +- 6 files changed, 15 insertions(+), 15 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx index eaa736e971c..0d2ebfbbd4f 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/apply_constraints.mdx @@ -6,13 +6,13 @@ At the beginning of your data migration journey with EDB Data Migration Service (EDB DMS), you [prepared and imported the schema](prepare_schema) of your source database. Now, connect to the target database and re-apply the constraints that were excluded from the schema and data migration. ## `PRIMARY KEY` and `UNIQUE` constraints -For `PRIMARY KEY` and `UNIQUE` constraints, you have already created the tables and constraints in the target Postgres database. This allowed EDB DMS to map them to the source objects and migrate data sucessfuly. You don't need do to anything else. +For `PRIMARY KEY` and `UNIQUE` constraints, you have already created the tables and constraints in the target Postgres database. This allowed EDB DMS to map them to the source objects and migrate data successfully. You don't need to do anything else. The same applies to `NOT NULL` constraints if you included them in your schema import. ## `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints -You can now re-apply the `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints you excluded during the [schema preparation and import](prepare_schema). For example, you can use ALTER statements. +You can now re-apply the `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints you excluded during the [schema preparation and import](prepare_schema). ## Ensuring data integrity diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx index 2f36e9cfc1e..a6051b61b28 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx @@ -4,9 +4,9 @@ title: "Creating a database cluster" You can use an existing EDB Postgres® AI cluster or create a new cluster for the target of the database migration. -To use an existing cluster as a target for the migration, ensure the tables you migrate and the load generated on target doesn't interfere with existing workloads. +To use an existing cluster as a target for the migration, ensure the tables you migrate and the load generated on target don't interfere with existing workloads. -1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials. +1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Cloud Service credentials. 1.

Select the project where you want to create the database cluster. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx index e8fbb525096..07c7156743b 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx @@ -25,7 +25,7 @@ For rows in tables that do not have `PRIMARY KEY` or `UNIQUE` constraints it is EDB DMS is able to apply change events in parallel against destination database clusters. However, migrating some constraint types can negatively affect the performance of the migration. These type of constraints lead to unnecessary CPU and memory utilization in the context of an in-flight data migration from a consistent and referentially integral source database. -EDB recommends [applying the following constraints](apply_constraints) on the target database after you have signilized the end of the CDC stream by [marking the migration as completed](mark_completed) in the Console. +EDB recommends [applying the following constraints](apply_constraints) on the target database after you have signalized the end of the CDC stream by [marking the migration as completed](mark_completed) in the Console. `FOREIGN KEY` / `REFERENCES` @@ -35,7 +35,9 @@ EDB recommends [applying the following constraints](apply_constraints) on the ta `EXCLUDE` -## Preparing your schema +## Preparing your schema + +You can use several different methods to prepare your schema, exclude unsupported constraints and import it to the target database. This section provides guidelines of how to prepare your Oracle or Postgres source database with Migration Portal or Migration Toolkit, but other tools like `pg_dump` and `pg_restore` are also valid routes for migrating DDL. ### Oracle to EDB Postgres Advanced Server migrations @@ -47,12 +49,10 @@ Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` ### Other migrations -For data migrations to and from Postgres, EDB recommends using [EDB Migration Toolkit](/migration_toolkit/latest/) to manage the schema. MTK's [offline migration](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options) capability provides an easy way to extract a database's schema and separate constraints. +For data migrations from Postgres, EDB recommends using [EDB Migration Toolkit](/migration_toolkit/latest/) to manage the schema. MTK's [offline migration](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options) capability provides an easy way to extract a database's schema and separate constraints. Ensure you exclude `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints from the DDL before importing the schema to the target database. -Tools such as `pg_dump` and `pg_restore` are another valid route for migrating DDL. - ## Importing your schema to the target database After you have prepared the DDL, and excluded `FOREIGN KEY`, `REFERENCES`, `CHECK`, `CASCADE` and `EXCLUDE` constraints, connect to the target database and import the SQL-formatted DDL file. 
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx index 07f11d07c98..5578c8c8e9c 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_oracle_source_databases.mdx @@ -274,7 +274,7 @@ Run the script without arguments to print the usage: ## SSL configuration -Ensure you configure your source database server to accept SSL connections to allow the EDB DMS reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. +Ensure you configure your source database server to accept SSL connections to allow the EDB DMS Reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. ## More information diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx index 605656cc80f..eed41fceb58 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/preparing_db/preparing_postgres_source_databases.mdx @@ -15,16 +15,16 @@ psql -h -p -U -d ``` Where: - - `` is the name of the Postgres database to connect to. + - `` is the name of the Postgres database source to connect to. - `` is the Postgres database host. - `` is the Postgres database port. - `` is an administrative user who can create and grant roles, alter ownership of tables to migrate, and create a replication slot. This command prompts you for the password associated with ``. -## Postgres configuration +## Postgres database configuration -To perform Postgres configuration: +To prepare the source Postgres database configuration: 1. [Verify Postgres configuration](#verify-postgres-configuration). 1. [Create new roles and grant acccess for CDC migration](#create-new-roles-and-grant-acccess-for-cdc-migration). 1. [Grant `SELECT` on source tables to the CDC migration role](#grant-select-on-source-tables-to-the-cdc-migration-role). @@ -125,7 +125,7 @@ Where: ## SSL configuration -Ensure you configure your source database server to [accept SSL connections](https://www.postgresql.org/docs/current/ssl-tcp.html) to allow the EDB DMS reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. +Ensure you configure your source database server to [accept SSL connections](https://www.postgresql.org/docs/current/ssl-tcp.html) to allow the EDB DMS Reader to connect to it. You must create a server certificate and a server private key, for example, with OpenSSL, to enable this configuration. 
## More information

diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx
index 0ce42a7ab42..d7f8502e458 100644
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx
+++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx
@@ -2,6 +2,6 @@
title: "Verifying the migration"
---

-Verify that the migration was successful by comparing the source and target databases.
+Compare the source and target databases to verify that all the data was migrated.

You can use [LiveCompare](/livecompare/latest/).
\ No newline at end of file

From d1db7ebced598c34b319ebe65cf37b0d6d0ed9ab Mon Sep 17 00:00:00 2001
From: gvasquezvargas
Date: Tue, 3 Sep 2024 11:35:48 +0200
Subject: [PATCH 34/67] Minor edits remaining sections

---
 .../migration-etl/data-migration-service/index.mdx        | 4 ++--
 .../data-migration-service/supported_versions.mdx         | 2 --
 .../migration-etl/data-migration-service/terminology.mdx  | 2 +-
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
index 891c33ed71d..4c0e5d6c527 100644
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
@@ -22,10 +22,10 @@ navigation:

---

-## EDB Postgres® AI migrations powered by EDB Data Migration Service
+**EDB Postgres® AI migrations powered by EDB Data Migration Service**

EDB Data Migration Service (DMS) offers a secure and fault-tolerant way to migrate database data to the EDB Postgres AI platform. Using change data capture or CDC and event streaming, source database row changes are replicated to the migration destination. You can select a subset of your schemas' tables to migrate including support for schema, table, and column name remapping. EDB Data Migration Service is built on Apache Kafka and the open-source Debezium CDC platform.

-To migrate self-managed database sources you must download and configure the EDB DMS Reader.
+To migrate self-managed database sources, see [Getting started](./getting_started/).

diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx
index 1c5e69a97d8..1d19773831d 100644
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx
+++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx
@@ -29,8 +29,6 @@ Container databases (CDB/PDB) and non-CDB sources are supported.

Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](getting_started/preparing_db/preparing_postgres_source_databases).

-When used as a target, EDB DMS requires a database role or user that can write to the appropriate target tables. For details, see [Preparing target databases](getting_started/preparing_db).
-
## Operating systems

The EDB DMS Reader can run on Linux. For details, see [Installing EDB DMS Reader](getting_started/installing).
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx index d35a7d9f21b..cb7a7fa52d3 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx @@ -23,5 +23,5 @@ Debezium is a Java-based, open-source platform for CDC. Debezium is supported by ## EDB DMS Reader -The EDB Data Migration Service Reader, packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and produce Kafka messages containing the change events. +The [EDB Data Migration Service Reader](./getting_started/installing/), packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and produce Kafka messages containing the change events. From 7e69d516d48e6da3f9379c89fa74d3c99fed724b Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 4 Sep 2024 09:43:12 +0200 Subject: [PATCH 35/67] Removed upgrade pages, as no upgrades are available now --- .../data-migration-service/index.mdx | 2 -- .../data-migration-service/upgrading.mdx | 16 ---------------- 2 files changed, 18 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx index 4c0e5d6c527..a3b5d6b19aa 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx @@ -14,8 +14,6 @@ navigation: - limitations - "#Get started" - getting_started - - "#Upgrading" - - upgrading - "#Reference" - rel_notes - known_issues diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx deleted file mode 100644 index 74ac0751558..00000000000 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: "Upgrading" -description: Learn how to upgrade the EDB DMS Reader to a more recent version. ---- - -EDB recommends upgrading the EDB DMS Reader when it is not performing a streaming migration. However, you can also temporarily stop a migration to perform an upgrade. -The EDB DMS Reader components are designed to terminate and restart gracefully. - -To upgrade the software: - -1. If the EDB DMS reader is currently running, stop the process. - -1. Install and start a new version of the EDB DMS Reader. - -1. Continue the migration by restarting the EDB DMS Reader with the updated software. 
- From 8854ff9baf4e65d1c75a0fb1ed74ce67e62b34a7 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Tue, 10 Sep 2024 17:34:18 +0200 Subject: [PATCH 36/67] Clarification on page --- .../getting_started/mark_completed.mdx | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx index 4e7385cd8e1..170dad42a25 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx @@ -2,7 +2,11 @@ title: "Mark migration as completed" --- -The EDB DMS Reader will continue to stream any data updates performed on the source database to the target database until you mark the migration as completed. Check that your data is present in the target database before you mark it as completed. +The EDB DMS Reader will continue to stream any data updates performed on the source database to the target database until you mark the migration as completed. + +It is important that you carefully select the best time to stop the data migration stream. Ensure no new inserts are executed in the source and verify that the last known inserts have been synchronized in the target. + +After checking that your data is present in the target database you can stop the stream by marking the migration as completed. 1. In the [EDB Postgres AI Console](https://portal.biganimal.com), in your project, select **Migrate** > **Migrations**. From 3027b677c20c1696bc9d9a5bbaca135ac5f3b71f Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 11 Sep 2024 09:43:19 +0200 Subject: [PATCH 37/67] Merge pull request #6043 from EnterpriseDB/dms/migration_stop DMS: migration stop From 3cb3526c499339d2e7ff7baa813ed9b7ae366c7a Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 11 Sep 2024 09:54:24 +0200 Subject: [PATCH 38/67] DMS: added purl for UI link --- .../data-migration-service/getting_started/config_reader.mdx | 2 ++ 1 file changed, 2 insertions(+) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx index 179477f3326..db3a3149fa0 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/config_reader.mdx @@ -1,6 +1,8 @@ --- title: "Configuring and running the EDB DMS Reader" deepToC: true +redirects: + - /purl/dms/configure_source --- ## Getting credentials From 150ce104c546edfafb77f0aca6a0c4e9d7473d0e Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Wed, 11 Sep 2024 11:20:33 +0200 Subject: [PATCH 39/67] Merge pull request #6046 from EnterpriseDB/DMS/add_purl DMS: added purl for UI link From 10c84c0976e901b42ebf353cebe2bd3bb706a579 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Thu, 12 Sep 2024 09:16:37 +0200 Subject: [PATCH 40/67] Implementing feedback from Matt p.1 --- .../getting_started/create_database.mdx | 2 +- .../data-migration-service/getting_started/index.mdx | 4 ++-- .../getting_started/prepare_schema.mdx | 8 +++++++- 3 files changed, 10 insertions(+), 4 deletions(-) diff --git 
a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx index a6051b61b28..b6f0a3e49ad 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/create_database.mdx @@ -12,7 +12,7 @@ To use an existing cluster as a target for the migration, ensure the tables you See [Creating a project](/biganimal/latest/administering_cluster/projects/#creating-a-project) if you want to create one. -1. Within your project, select **Create New** and **Database Cluser** to create an instance that will serve as target for the EDB Data Migration Service (EDB DMS). +1. Within your project, select **Create New** and **Database Cluster** to create an instance that will serve as target for the EDB Data Migration Service (EDB DMS). See [Creating a cluster](/biganimal/release/getting_started/creating_a_cluster/) for detailed instructions on how to create a single-node or a primary/standby high availability cluster. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx index e2e39e1c7a9..1c26f1e7d47 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/index.mdx @@ -16,9 +16,9 @@ navigation: Creating a migration from an Oracle or Postgres database to EDB Postgres AI involves several steps. -1. **[Create a database cluster](create_database)**: In the EDB Postgres® AI Console, ensure you have created a database cluster. Connect to the cluster and create a database that will serve as a target for the migration. +1. **[Create a target Postgres database cluster](create_database)**: In the EDB Postgres® AI Console, ensure you have created a database cluster. Connect to the cluster and create a database that will serve as a target for the migration. -1. **[Prepare the schema](prepare_schema)**: In your source machine, prepare the source database by exporting it and excluding unsupported constraints. Then, import the adapted schema to the target database. +1. **[Prepare the source database schema](prepare_schema)**: In your source machine, prepare the source database by exporting it and excluding unsupported constraints. Then, import the adapted schema to the target database. 1. **[Install the EDB DMS Reader](installing)**: In your source machine, install the EDB DMS Reader from the EDB repository. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx index 07c7156743b..ce37e8ce492 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/prepare_schema.mdx @@ -59,4 +59,10 @@ After you have prepared the DDL, and excluded `FOREIGN KEY`, `REFERENCES`, `CHEC Importing the schema to the target database before the migration takes place is important as EDB DMS only migrates data. 
The target database must have the schemas in place for the migration to populate them with data. -You can use different methods to import the schemas. You can manually create them in the target database, you can perform a [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) (for example, if you used [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) to obtain the schemas), you can employ [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/index.html), [psql](https://www.postgresql.org/docs/7.0/app-psql.htm) or the tool of your preference. +You can use different methods to import the schemas. + +If you used Migration Portal to export an Oracle source schema, continue using it to [import the schema in offline or online mode](/migration_portal/latest/04_mp_migrating_database/03_mp_schema_migration/#mp_schema_migration). + +If you used Migration Toolkit to export a Postgres source in offline mode, execute the generated [offline migration script to start the import](/migration_toolkit/latest/07_invoking_mtk/08_mtk_command_options/#offline-migration-options). + +Other alternatives to import Postgres schemas include manually creating the schemas in the target database, performing a [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) (for example, if you used [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) to obtain the schemas), or employing [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/index.html), [psql](https://www.postgresql.org/docs/7.0/app-psql.htm) or the tool of your preference. From 4ca76a42e8bac1df4f5d57828dae881ffadb95e6 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Thu, 12 Sep 2024 14:04:14 +0200 Subject: [PATCH 41/67] Removed outdated release notes --- .../data-migration-service/index.mdx | 1 - .../data-migration-service/rel_notes/index.mdx | 18 ------------------ .../rel_notes/rel_notes_2.0.0_preview.mdx | 14 -------------- 3 files changed, 33 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx delete mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx index a3b5d6b19aa..e58205bfcb0 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx @@ -15,7 +15,6 @@ navigation: - "#Get started" - getting_started - "#Reference" - - rel_notes - known_issues --- diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx deleted file mode 100644 index ce6ea9aeac3..00000000000 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: "EDB Data Migration Service Release notes" -navTitle: "Release notes" -description: Learn about new features and functions. -navigation: -- rel_notes_2.0.0_preview ---- - -The EDB Data Migration Service documentation describes the latest version of the EDB -DMS Reader 2, including minor releases and patches. The release notes -provide information on what was new in each release. 
For new functionality -introduced in a minor or patch release, the content also indicates the release -that introduced the feature. - -| Release Date | Data Migration | -|--------------|------------------------------------------| -| 2023 Jun 9 | [2.0.0_preview](rel_notes_2.0.0_preview) | - diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx deleted file mode 100644 index 6b15b93e8eb..00000000000 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Release notes for EDB Data Migration Service version 2.0.0_preview" -navTitle: "Version 2.0.0_preview" ---- - -EDB Data Migration Service (EDB DMS) version 2.0.0_preview is a new major version of EDB Data Migration Service. - -The highlights of this release include: - -* General availability of EDB DMS migration capabilities. - -| Type | Description | -|-------------|------------------------------------------------------------------------------------------| -| Enhancement | Parallel table snapshots are available with Debezium 2.2.0. | From 2f7ebda96fa1fec03c291817c6cf51fc896fb4ef Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Thu, 12 Sep 2024 14:07:16 +0200 Subject: [PATCH 42/67] Reorganized terminology section --- .../data-migration-service/terminology.mdx | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx index cb7a7fa52d3..c172dec34c4 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx @@ -9,19 +9,19 @@ This terminology is important to understand EDB Data Migration Service (DMS) fun EDB Postgres® AI Analytics Sync is a type of replication/migration supported by EDB Data Migration Service. EDB Postgres Advanced Server and Postgres source database snapshots are transformed into Delta Lake format in CSP Object Storage. This object storage is exposed as Storage Locations in the EDB Postgres AI Console. -## Apache Kafka - -Apache Kafka is an open-source, distributed-event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. - ## Change Data Capture (CDC) CDC is a set of software design patterns used to determine and track changes in data sets by calculating *deltas*. EDB Data Migration Service uses CDC to stream changes from a database cluster to another. -## Debezium - -Debezium is a Java-based, open-source platform for CDC. Debezium is supported by the Red Hat community. - ## EDB DMS Reader The [EDB Data Migration Service Reader](./getting_started/installing/), packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and produce Kafka messages containing the change events. +### Apache Kafka + +Apache Kafka is an open-source, distributed-event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. + +### Debezium + +Debezium is a Java-based, open-source platform for CDC. 
Debezium is supported by the Red Hat community. + From c7743eb3d43d67f9408bf627fbf0df4aa9150cfe Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Thu, 12 Sep 2024 14:24:26 +0200 Subject: [PATCH 43/67] Added supported target databases on compatibility page --- .../supported_versions.mdx | 16 ++++++++++++++-- .../data-migration-service/terminology.mdx | 6 +++--- 2 files changed, 17 insertions(+), 5 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx index 1d19773831d..3d20367dfc9 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/supported_versions.mdx @@ -5,7 +5,7 @@ description: Verify that your Oracle or Postgres database version is compatible -## Database versions +## Supported source databases The following database versions are supported. @@ -29,6 +29,18 @@ Container databases (CDB/PDB) and non-CDB sources are supported. Postgres and EDB Postgres Advanced Server sources require a database role or user that can manage replications. For details, see [Preparing Postgres source databases](getting_started/preparing_db/preparing_postgres_source_databases). -## Operating systems +## Supported target databases + +Data Migration Service supports migrating to EDB Postgres AI Cloud Services. The target Postgres database cluster must meet the following requirements: + +- The cluster was deployed with the [EDB Hosted Cloud Service](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/edb_hosted_cloud_service/). + +- AWS is the provider for the EDB Hosted Cloud Service cluster. + +!!!note + Other Cloud Service providers and [Your Cloud Account](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/your_cloud_account/) deployments are currently not supported. +!!! + +## Supported operating systems The EDB DMS Reader can run on Linux. For details, see [Installing EDB DMS Reader](getting_started/installing). diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx index c172dec34c4..2dd2ffb7c45 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/terminology.mdx @@ -15,13 +15,13 @@ CDC is a set of software design patterns used to determine and track changes in ## EDB DMS Reader -The [EDB Data Migration Service Reader](./getting_started/installing/), packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and produce Kafka messages containing the change events. +The [EDB Data Migration Service Reader](./getting_started/installing/), packaged as `cdcreader`, uses Debezium to perform CDC operations on the source database and Kafka to produce messages containing the change events. ### Apache Kafka -Apache Kafka is an open-source, distributed-event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. 
+[Apache Kafka](https://kafka.apache.org/) is an open-source, distributed-event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. EDB Data Migration Service uses Kafka to manage data streaming from the source to the target database. ### Debezium -Debezium is a Java-based, open-source platform for CDC. Debezium is supported by the Red Hat community. +[Debezium](https://debezium.io/) is a Java-based, open-source platform for CDC that is supported by the Red Hat community. The EDB DMS Reader uses Debezium to perform reading operations and capture data changes on the source database. From 2c70ef0d0c50bb5c72166863403b1cbbe95e721d Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 13 Aug 2024 12:02:52 +0100 Subject: [PATCH 44/67] DevGuide POC Signed-off-by: Dj Walker-Morgan --- advocacy_docs/dev-guides/deploy/docker.mdx | 174 ++++++++++++++++++ advocacy_docs/dev-guides/deploy/index.mdx | 7 + advocacy_docs/dev-guides/developing/index.mdx | 6 + advocacy_docs/dev-guides/index.mdx | 32 ++++ advocacy_docs/dev-guides/working/index.mdx | 6 + src/pages/index.js | 14 +- 6 files changed, 236 insertions(+), 3 deletions(-) create mode 100644 advocacy_docs/dev-guides/deploy/docker.mdx create mode 100644 advocacy_docs/dev-guides/deploy/index.mdx create mode 100644 advocacy_docs/dev-guides/developing/index.mdx create mode 100644 advocacy_docs/dev-guides/index.mdx create mode 100644 advocacy_docs/dev-guides/working/index.mdx diff --git a/advocacy_docs/dev-guides/deploy/docker.mdx b/advocacy_docs/dev-guides/deploy/docker.mdx new file mode 100644 index 00000000000..9c00e50bc1f --- /dev/null +++ b/advocacy_docs/dev-guides/deploy/docker.mdx @@ -0,0 +1,174 @@ +--- +title: Installing PostgreSQL in a Docker container on your local machine +navTitle: Installing PostgreSQL in a Docker container +description: Learn how to install PostgreSQL in a Docker container on your local machine for development purposes. +deepToC: true +--- + +## Prerequisites + +* Docker-compatible OS (macOS, Windows, Linux) + +Using Docker for your local PostgreSQL development environment streamlines setup, ensures consistency, and simplifies management. It provides a flexible, isolated, and portable solution that can adapt to various development needs and workflows. + +## Preparing Docker + +### Install Docker: + +* macOS: Download and install Docker Desktop from Docker’s official website. +* Windows: Download and install Docker Desktop from Docker’s official website. Ensure WSL 2 is enabled if using Windows 10 or later. +* Linux: Install Docker using your distribution’s package manager. For example, on Ubuntu: + +``` + sudo apt update + sudo apt install docker.io + sudo systemctl start docker + sudo systemctl enable docker + sudo usermod -ag docker $USER + newgrp docker +``` + +Make sure Docker is installed on your machine or download and install it from [Docker’s official website](https://www.docker.com/products/docker-desktop/). 
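A quick way to confirm that Docker is installed and the daemon is running before continuing — both commands below are standard Docker checks and assume nothing beyond a working installation:

```
docker --version
docker run --rm hello-world
```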
### Pull the PostgreSQL Docker image:

Open a terminal or command prompt and run the following command to pull the latest PostgreSQL image from Docker Hub:

```
docker pull postgres
```

## Running and accessing the container’s PostgreSQL database

### Run the PostgreSQL Container:

Run a new container with the PostgreSQL image using the following command:

```
docker run --name my_postgres -e POSTGRES_PASSWORD=mysecretpassword -v my_pgdata:/var/lib/postgresql/data -p 5432:5432 -d postgres
```

#### `--name my_postgres -d postgres`

The `--name` flag tells Docker to create a new container named `my_postgres`, while the `-d` flag runs it detached, in the background. The final argument, `postgres`, is the image to run, which we pulled previously. Note that if we had not pulled it, this command would automatically pull the PostgreSQL image.

#### `-e POSTGRES_PASSWORD=mysecretpassword`

The `-e` flag sets an environment variable `POSTGRES_PASSWORD` to `mysecretpassword`. This is used as the password for the default `postgres` user. You should use a different password.

#### `-v my_pgdata:/var/lib/postgresql/data`

Volumes are used to persist data in Docker containers. This flag mounts a volume named `my_pgdata` to persist data. The data is stored in the `/var/lib/postgresql/data` directory within the container.

#### `-p 5432:5432`

The `-p` flag maps the container’s port 5432 to the host machine’s port 5432. Port 5432 is Postgres's default port for communications. By using this flag, it allows you to access the PostgreSQL database from your host machine.

### Verify the container is running:

To verify that the container is running, use the following command:

```
docker ps
```

This command lists all running containers. You should see the `my_postgres` container listed.

You now have a persistent, locally accessible Postgres database running in a Docker container.
Let's start using it.

### Access PostgreSQL:

To access the PostgreSQL database, without any additional tools, you can use the following command to open a PostgreSQL prompt:

```
docker exec -it my_postgres psql -U postgres
```

This logs into the Docker container and runs the `psql` command as the `postgres` user from there.

TBD: Installing the psql client on your local machine.

### Using a PostgreSQL client

The `psql` command is a powerful tool for interacting with PostgreSQL databases. You should install it on your local machine to interact with the PostgreSQL database running in the Docker container.

#### macOS:

You can install the PostgreSQL client using Homebrew:

```
brew install libpq
```

#### Windows:

Download the PostgreSQL client from the [official website](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads).

#### Linux:

Use your distribution’s package manager to install the PostgreSQL client. For example, on Ubuntu:

```
sudo apt-get install postgresql-client
```

#### Connecting other apps

You can also connect other applications to the PostgreSQL database running in the Docker container. You need to provide the following connection details:

* Host: `localhost`
* Port: `5432`
* Username: `postgres`
* Password: (whatever you set it to)
* Database: `postgres`

Or use the connection string:

  ```
  postgresql://postgres:mysecretpassword@localhost:5432/postgres
  ```

### Verifying data persistence

1. Create a table and insert data.
Access the PostgreSQL instance and run the following SQL commands to create a table with columns and insert some data:

   ```sql
   CREATE TABLE employees (
       id SERIAL PRIMARY KEY,
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100),
       hire_date DATE
   );
   INSERT INTO employees (first_name, last_name, email, hire_date) VALUES
   ('John', 'Doe','john.doe@example.com', '2020-01-15'),
   ('Jane', 'Smith', 'jane.smith@example.com', '2019-03-22');
   ```

2. Stop and completely remove the container.

   ```
   docker stop my_postgres
   docker rm my_postgres
   ```

3. Recreate the container with the same volume.

   ```
   docker run --name my_postgres -e POSTGRES_PASSWORD=mysecretpassword -v my_pgdata:/var/lib/postgresql/data -p 5432:5432 -d postgres
   ```

4. Verify data persistence.

   Access the PostgreSQL instance and check if the data still exists:

   ```sql
   SELECT * FROM employees;
   ```

   If everything worked as expected, you should see the `employees` table with the data previously loaded still present.


## Conclusion

By following these steps, you have set up a robust local development environment for PostgreSQL using Docker. This setup ensures data persistence and provides a flexible, isolated, and consistent environment for all of your development needs.
\ No newline at end of file
diff --git a/advocacy_docs/dev-guides/deploy/index.mdx b/advocacy_docs/dev-guides/deploy/index.mdx
new file mode 100644
index 00000000000..05a9a4c7c53
--- /dev/null
+++ b/advocacy_docs/dev-guides/deploy/index.mdx
@@ -0,0 +1,7 @@
+---
+title: Deploying Postgres for developers
+navTitle: Deploying for developers
+description: How to deploy Postgres for developers.
+---
+
+
diff --git a/advocacy_docs/dev-guides/developing/index.mdx b/advocacy_docs/dev-guides/developing/index.mdx
new file mode 100644
index 00000000000..dad61bff78c
--- /dev/null
+++ b/advocacy_docs/dev-guides/developing/index.mdx
@@ -0,0 +1,6 @@
+---
+title: Developing applications with Postgres
+navTitle: Developing applications
+description: How to develop applications that use Postgres databases.
+---
+
diff --git a/advocacy_docs/dev-guides/index.mdx b/advocacy_docs/dev-guides/index.mdx
new file mode 100644
index 00000000000..ada2093022f
--- /dev/null
+++ b/advocacy_docs/dev-guides/index.mdx
@@ -0,0 +1,32 @@
+---
+title: The EDB Postgres AI Developer Guides
+navTitle: Developer Guides
+description: The EDB Postgres AI Developer Guides provide information on how to use the EDB Postgres AI platform to build and develop Postgres and AI applications.
+deepToC: true
+directoryDefaults:
+  iconName: "CodeWriting"
+  indexCards: simple
+  prevNext: true
+navigation:
+- deploy
+- working
+- developing
+---
+
The EDB Postgres AI Developer Guides are all about providing you, the developer, with the information you need to accelerate your development efforts using the EDB Postgres AI platform. The guides cover a wide range of topics, from setting up your development environment to deploying Postgres and AI applications.
+ +## Deploying Postgres Locally for developers + +* [Deploying Postgres Using Docker Locally](deploying/deploying-postgres-docker-locally) + +## Working with Postgres + +* [PSQL for busy developers](working/psql-for-busy-developers) + +## Developing Postgres Applications + +* [Developing Postgres Applications with Python](developing/developing-postgres-applications-with-python) + + + + diff --git a/advocacy_docs/dev-guides/working/index.mdx b/advocacy_docs/dev-guides/working/index.mdx new file mode 100644 index 00000000000..c6d12076c68 --- /dev/null +++ b/advocacy_docs/dev-guides/working/index.mdx @@ -0,0 +1,6 @@ +--- +title: Working with Postgres tools - guides for developers +navTitle: Working with tools +description: How to work with a range of Postgres tools, with a focus on developers and debugging. +--- + diff --git a/src/pages/index.js b/src/pages/index.js index 2af0b21d798..0668d697809 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -78,8 +78,9 @@ const BannerWideCard = ({ iconName, headingText, to, children }) => ( ); -const BannerWideCardLink = ({ to, className, children }) => ( +const BannerWideCardLink = ({ to, className, iconName, children }) => ( { Downloads and Repositories + + Developer Guides +
From 5bef349d4d4bd0fba64b29adfa826214d1c6094b Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 13 Aug 2024 15:33:39 +0100 Subject: [PATCH 45/67] placeholder developer content Signed-off-by: Dj Walker-Morgan --- advocacy_docs/dev-guides/deploy/docker.mdx | 2 +- ...ping-postgres-applications-with-python.mdx | 6 ++ advocacy_docs/dev-guides/index.mdx | 2 +- .../working/psql-for-busy-developers.mdx | 65 +++++++++++++++++++ 4 files changed, 73 insertions(+), 2 deletions(-) create mode 100644 advocacy_docs/dev-guides/developing/developing-postgres-applications-with-python.mdx create mode 100644 advocacy_docs/dev-guides/working/psql-for-busy-developers.mdx diff --git a/advocacy_docs/dev-guides/deploy/docker.mdx b/advocacy_docs/dev-guides/deploy/docker.mdx index 9c00e50bc1f..9fd4863001e 100644 --- a/advocacy_docs/dev-guides/deploy/docker.mdx +++ b/advocacy_docs/dev-guides/deploy/docker.mdx @@ -58,7 +58,7 @@ The `-e` flag sets an environment variable `POSTGRES_PASSWORD` to `mysecretpassw #### `-v my_pgdata:/var/lib/postgresql/data` -Volumes are used to persist data in Docker containers. This flag mounts a volume named `my_pgdata` to persist data. The data is stored in the `/var/lib/postgresql/data` directory within the container. +Docker uses volumes to persist data in Docker containers. This flag mounts a volume named `my_pgdata` to persist data. The data in this case is whatever what Postgres writes to the `/var/lib/postgresql/data` directory within the container. These writes are persisted outside the container in a docker volume; the command `docker volume inspect my_pgdata` will show you information about that volume. #### `-p 5432:5432` diff --git a/advocacy_docs/dev-guides/developing/developing-postgres-applications-with-python.mdx b/advocacy_docs/dev-guides/developing/developing-postgres-applications-with-python.mdx new file mode 100644 index 00000000000..f64ed0740c5 --- /dev/null +++ b/advocacy_docs/dev-guides/developing/developing-postgres-applications-with-python.mdx @@ -0,0 +1,6 @@ +--- +title: Developing Postgres applications with Python +navTitle: Developing with Python +description: How to develop applications that use Postgres databases with Python. +--- + diff --git a/advocacy_docs/dev-guides/index.mdx b/advocacy_docs/dev-guides/index.mdx index ada2093022f..92525bad505 100644 --- a/advocacy_docs/dev-guides/index.mdx +++ b/advocacy_docs/dev-guides/index.mdx @@ -17,7 +17,7 @@ The EDB Postgres AI Developer Guides are all about providing you, the developer, ## Deploying Postgres Locally for developers -* [Deploying Postgres Using Docker Locally](deploying/deploying-postgres-docker-locally) +* [Deploying Postgres Using Docker Locally](deploying/docker) ## Working with Postgres diff --git a/advocacy_docs/dev-guides/working/psql-for-busy-developers.mdx b/advocacy_docs/dev-guides/working/psql-for-busy-developers.mdx new file mode 100644 index 00000000000..82460ccdf82 --- /dev/null +++ b/advocacy_docs/dev-guides/working/psql-for-busy-developers.mdx @@ -0,0 +1,65 @@ +--- +title: PSQL for busy developers +navTitle: PSQL for developers +description: How to use PSQL for common developer tasks +--- + +The PSQL command line tool is essential in a developer's toolkit as it provides full command-line access to PostgreSQL databases. + +## Getting psql installed + +Unless you've just installed Postgres natively on your machine, you'll need to install the psql client. 
+ +(Install guides) + +## Connecting to a database + +### Connection strings + +psql can connect to a database using a connection string. The connection string is a single string that contains all the information needed to connect to a database. + +!!! tip +Always wrap your connection string in single quotes to avoid any special characters being interpreted by the shell. +!!! + +### PGPASSWORD environment variable + +Best practice is not to have your password in your connection string or in your command history. Instead, you can use the `PGPASSWORD` environment variable to store your Postgres password. This is simple, but not very secure because some Unix systems allow other users to see the environment variables of other users. + +### .pgpass file + +Creating a .pgpass file is a more secure way to store your password. The .pgpass file is a plain text file that contains the connection information for your databases. The file should be stored in your home directory and should be readable only by you. The file should have the following format: + +``` +hostname:port:database:username:password +``` + +Hostname, port, database and username can all be set to wildcards to match any value. For example, `*:*:*:postgres:password` would match any database on any host for the user `postgres`. + +Read more about the .pgpass file in the [Postgres documentation](https://www.postgresql.org/docs/current/libpq-pgpass.html). + +## The PSQL command line + +You can enter SQL commands directly into the PSQL command line. Be sure to end each command with a semicolon, otherwise PSQL doesn't execute the command. + +You can use tab-completion in many situations to help you complete commands and table names. Pressing tab at the start of a line will show you a list of available SQL commands. + +PSQL also has a number of built-in commands that can help you manage your databases and tables. + + +| Command | Description | +|-----------------|----------------------------------------------------------| +| `\l` | List all databases | +| `\c` | Connect to a database | +| `\d` | List tables, sequences and views in the current database | +| `\d table_name` | Describe a table | +| `\watch seconds` | Re-run a query every `seconds` | +| `\q` | Quit PSQL | + +`\d` is a very useful command that shows a range of different information when followed by another character. Think of it as `d` for display. For example, `\dt` shows all the tables in the current database, `\dv` shows all the views, and `\ds` shows all the sequences. + +`\watch` is useful when you want to repeat running a query at regular intervals. For example, you could use `\watch 5` to run a query every 5 seconds. The query that will be re-run is the last query you entered. 
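As a worked illustration of the connection advice above, the following sketch adds a `.pgpass` entry and then connects with a quoted connection string. The host `db.example.com`, database `appdb`, user `appuser`, and password are placeholder values, not anything defined in this guide:

```
# Add a .pgpass entry (hostname:port:database:username:password) and restrict its permissions;
# libpq ignores the file unless only you can read it.
echo 'db.example.com:5432:appdb:appuser:s3cretpassword' >> ~/.pgpass
chmod 600 ~/.pgpass

# Connect with a single-quoted connection string; psql reads the password from ~/.pgpass.
psql 'postgresql://appuser@db.example.com:5432/appdb'
```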
+ + + + From e1cf33e5d3149a22f6151420fb1571fd410760fd Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Tue, 13 Aug 2024 15:49:12 +0100 Subject: [PATCH 46/67] Fix typo --- advocacy_docs/dev-guides/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/dev-guides/index.mdx b/advocacy_docs/dev-guides/index.mdx index 92525bad505..b66e32089ba 100644 --- a/advocacy_docs/dev-guides/index.mdx +++ b/advocacy_docs/dev-guides/index.mdx @@ -17,7 +17,7 @@ The EDB Postgres AI Developer Guides are all about providing you, the developer, ## Deploying Postgres Locally for developers -* [Deploying Postgres Using Docker Locally](deploying/docker) +* [Deploying Postgres Using Docker Locally](deploy/docker) ## Working with Postgres From dfd4f5ec46779d9e80022fced22d94d503792647 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 14 Aug 2024 14:18:49 +0100 Subject: [PATCH 47/67] Add singular icons for indexy links Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 0668d697809..149c9f94e94 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -85,6 +85,13 @@ const BannerWideCardLink = ({ to, className, iconName, children }) => ( className={`col-12 col-md-4 py-2 px-5 text-center ${className}`} style={{ minwidth: "14em" }} > + + {children} ); @@ -169,13 +176,22 @@ const Page = () => { headingText="EDB Postgres AI" > - + Overview and Concepts - + Guide and Getting Started - + Latest Release News @@ -279,14 +295,14 @@ const Page = () => { Downloads and Repositories Developer Guides From 690fc721315a0eacfe9da5e677d93f192e753099 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 15 Aug 2024 10:52:33 +0100 Subject: [PATCH 48/67] Testing redirects as a page transfer mechanism Signed-off-by: Dj Walker-Morgan --- .../edb-postgres-ai/migration-etl/pgd.mdx | 6 +++++ product_docs/docs/pgd/5/index.mdx | 1 + src/pages/index.js | 24 +++++++++++++------ 3 files changed, 24 insertions(+), 7 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx new file mode 100644 index 00000000000..632584dd934 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx @@ -0,0 +1,6 @@ +--- +title: PGD test +navTitle: PGD test +description: PGD test +refresh: /pgd/latest/ +--- diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx index 6a56a9e6b82..094a5326b4d 100644 --- a/product_docs/docs/pgd/5/index.mdx +++ b/product_docs/docs/pgd/5/index.mdx @@ -4,6 +4,7 @@ indexCards: simple redirects: - /pgd/5/compatibility_matrix - /pgd/latest/bdr + - /edb-postgres-ai/migration-etl/pgd navigation: - rel_notes - known_issues diff --git a/src/pages/index.js b/src/pages/index.js index 149c9f94e94..878877cf348 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -82,7 +82,7 @@ const BannerWideCardLink = ({ to, className, iconName, children }) => ( { Use the Tech Preview + + Test to PGD Migration and AI + + + Management @@ -293,19 +302,20 @@ const Page = () => { Downloads and Repositories - Developer Guides - + */}
From 21b9b96a0dde1078284227d4eb7357c8bf3a998e Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 15 Aug 2024 11:13:26 +0100 Subject: [PATCH 49/67] Test 2 Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx index 094a5326b4d..45e8fcc003d 100644 --- a/product_docs/docs/pgd/5/index.mdx +++ b/product_docs/docs/pgd/5/index.mdx @@ -4,7 +4,7 @@ indexCards: simple redirects: - /pgd/5/compatibility_matrix - /pgd/latest/bdr - - /edb-postgres-ai/migration-etl/pgd + - /edb-postgres-ai/migration-etl/pgd/ navigation: - rel_notes - known_issues From cfd18a70321af473364cc03d71479fb5152e2fcf Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 15 Aug 2024 11:13:49 +0100 Subject: [PATCH 50/67] Test 2 Part 2 Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/pages/index.js b/src/pages/index.js index 878877cf348..91736d03e3b 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -280,7 +280,9 @@ const Page = () => { headingText="Migration and ETL" to="/edb-postgres-ai/migration-etl" > - Test to PGD + + Test to PGD + Migration and AI From 7fc4a092bc2376a0773cdf2f084c507245102ec3 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 19 Aug 2024 17:48:18 +0100 Subject: [PATCH 51/67] Make database links through links to versioned content Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 343 +++++++++++++++++++++++++++------------------ 1 file changed, 203 insertions(+), 140 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 91736d03e3b..254ccc6a38b 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -70,6 +70,29 @@ const BannerSubCard = ({ iconName, headingText, to, children }) => (
); +const BannerWideSubCard = ({ iconName, headingText, to, children }) => ( +
+
+
+
+ + +

+ {headingText} +

+ +
+
{children}
+
+
+
+); + const BannerWideCard = ({ iconName, headingText, to, children }) => (
@@ -96,6 +119,16 @@ const BannerWideCardLink = ({ to, className, iconName, children }) => ( ); +const BannerWideLink = ({ to, className, children }) => ( + + {children} + +); + const BannerCardLink = ({ to, className, children }) => ( {children} @@ -178,7 +211,7 @@ const Page = () => { Overview and Concepts @@ -190,7 +223,7 @@ const Page = () => { Latest Release News @@ -232,14 +265,14 @@ const Page = () => { headingText="Databases" to="/edb-postgres-ai/databases" > - + EDB Postgres Advanced Server - + EDB Postgres Extended Server - - EDB Postgres Distributed + + EDB Postgres Distributed (PGD) @@ -280,45 +313,46 @@ const Page = () => { headingText="Migration and ETL" to="/edb-postgres-ai/migration-etl" > - - Test to PGD + + Data Migration Service - Migration and AI + Migration Portal with AI Copilot + - - + Management - - + + Backup and Recovery - - - + + - - - Downloads and Repositories - + + + Downloads and Repositories + - {/* Developer Guides */} - + +
@@ -531,123 +565,152 @@ const Page = () => { Trusted Postgres Architect +
- + + Connectors + + JDBC + .NET + OCL + ODBC + + + Connection Poolers + + PgBouncer + pgPool-II + + + Foreign Data Wrappers + + + Hadoop + + + Mongo + + + MySQL + + + + + + Backup + + - - Connectors - - JDBC - .NET - OCL - ODBC - - - Connection Poolers - - PgBouncer - pgPool-II - - - Foreign Data Wrappers - - - Hadoop - - - Mongo - - - MySQL - - - - + - - Backup - - - Cohesity DataProtect for PostgreSQL - - - Commvault Backup & Recovery - - - Repostor Data Protector for PostgresSQL - - - Kasten by Veeam for Kasten K10 - - - Veritas NetBackup for PostgreSQL - - - - Data Movement - - - Precisely Connect CDC - - - - Developer Tools - - - DBeaver PRO - - - Liquibase Pro - - - Quest Toad Edge - - - SIB Visions VisionX - - - - Security - - - Hashicorp Vault - - - Hashicorp Vault Transit Secrets Engine - - - Imperva Data Security Fabric - - - Thales CipherTrust Manager - - - Thales CipherTrust Transparent Encryption - - - - Other - - - Chemaxon JChem PostgreSQL Cartridge - - - Esri ArcGIS Pro and Esri ArcGIS Enterprise - - HPE - - Nutanix AHV - - - Pure Storage FlashArray - - -
- + Commvault Backup & Recovery + + + Repostor Data Protector for PostgresSQL + + + Kasten by Veeam for Kasten K10 + + + Veritas NetBackup for PostgreSQL + + + + Data Movement + + + Precisely Connect CDC + + + + Developer Tools + + + DBeaver PRO + + + Liquibase Pro + + + Quest Toad Edge + + + SIB Visions VisionX + +
+ + Security + + + Hashicorp Vault + + + Hashicorp Vault Transit Secrets Engine + + + Imperva Data Security Fabric + + + Thales CipherTrust Manager + + + Thales CipherTrust Transparent Encryption + +
+ + Other + + + Chemaxon JChem PostgreSQL Cartridge + + + Esri ArcGIS Pro and Esri ArcGIS Enterprise + + HPE + + Nutanix AHV + + + Pure Storage FlashArray + +
From 1913a583e2c92f267bdeea0aea6b9693b89d3aaa Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 23 Aug 2024 13:53:26 +0100 Subject: [PATCH 52/67] WIP reorg Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 226 ++++++++++++++++++--------------------------- 1 file changed, 92 insertions(+), 134 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 254ccc6a38b..25369fdbe15 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -316,7 +316,7 @@ const Page = () => { Data Migration Service - + Migration Portal with AI Copilot @@ -327,61 +327,112 @@ const Page = () => { headingText="Platforms and Tools" to="/edb-postgres-ai/tools" > - - Management + + + Kubernetes + + + EDB Postgres Distributed for Kubernetes + + + + EDB Postgres for Kubernetes + + + + CloudNativePG + + + + + Management and Monitoring + + + + Postgres Enterprise Manager + + + pgAdmin + + EDB*Plus + Lasso + + LiveCompare + + + Postgres Workload Report + + + + + Security + + + Transparent Data Encryption + + + EDB LDAP Sync + + + + + High Availability + + + Replication Manager (repmgr) + + + Patroni - - Backup and Recovery + + Slony (Deprecated) + + + pglogical 2 + + Failover Manager + + + + Migration + + + + Migration Handbook + + + Migration Toolkit + + + Replication Server Downloads and Repositories - {/* - Developer Guides - */} + + Developer Guides +
- - - EDB Postgres Advanced Server - - - PostGIS - - - - EDB Postgres Extended Server - - - - PostgreSQL - - - - Security - - - - Transparent Data Encryption - - - EDB LDAP Sync - - + {/* Extensions and Tools @@ -441,79 +492,7 @@ const Page = () => { Language Pack - - - - - EDB Postgres Distributed (PGD) - - Failover Manager - - Replication Manager (repmgr) - - - Patroni - - - Slony (Deprecated) - - - pglogical 2 - - - - - - Migration Handbook - - - Migration Portal - - - Migration Toolkit - - - Replication Server - - - - - - EDB BigAnimal - - - Quick Start - - - Oracle SQL Compatibility - - Demo - - - - - - - EDB Postgres Distributed for Kubernetes - - - - EDB Postgres for Kubernetes - - - - CloudNativePG - - + */} { pgBackRest - - - - Postgres Enterprise Manager - - - pgAdmin - - EDB*Plus - Lasso - - LiveCompare - - - Postgres Workload Report - - - Trusted Postgres Architect From e2f8ac3a93eca1b54e14587397916c4681ac8428 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 23 Aug 2024 15:31:55 +0100 Subject: [PATCH 53/67] More reorg - pass 1 complete Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 223 +++++++++++++++++++++++---------------------- 1 file changed, 113 insertions(+), 110 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 25369fdbe15..f1a61e44914 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -327,7 +327,7 @@ const Page = () => { headingText="Platforms and Tools" to="/edb-postgres-ai/tools" > - + Kubernetes @@ -374,6 +374,15 @@ const Page = () => { EDB LDAP Sync + + + Automation + + + + Trusted Postgres Architect + + { pglogical 2 + Failover Manager + + + Backup and Recovery + + + + Barman + + + pgBackRest + + Migration @@ -412,150 +438,127 @@ const Page = () => { - - - Downloads and Repositories - - - - Developer Guides - - - - -
- {/* - - Extensions and Tools + + + Extensions - + Supported Postgres extensions - + + + PostGIS - + EDB Advanced Storage Pack - + - + EDB Postgres Tuner - + - + EDB Query Advisor - + - + EDB Wait States - + - + PG Squeeze - + - + wal2json - + - + system_stats - + - + EDB Job Scheduler - + - + PG Failover Slots - + - + + Tools + + + EDB SPL Check - + - + EDB SQL Patch - + - + alteruser - + - + Language Pack - - */} + + - - - Barman - + + + Downloads and Repositories + - - Single Server Streaming - - Demo - - - - pgBackRest - - - - - Trusted Postgres Architect - - -
+ Developer Guides + + - - - Connectors - - JDBC - .NET - OCL - ODBC + + + Connectors + + JDBC + .NET + OCL + ODBC - - Connection Poolers - - PgBouncer - pgPool-II + + Connection Poolers + + PgBouncer + pgPool-II - - Foreign Data Wrappers - - - Hadoop - - - Mongo - - - MySQL - - + + Foreign Data Wrappers + + + Hadoop + + + Mongo + + + MySQL + + + Date: Fri, 23 Aug 2024 17:23:44 +0000 Subject: [PATCH 54/67] responsive homepage --- package-lock.json | 12 +++--------- package.json | 1 - src/components/index.js | 2 -- src/components/layout.js | 6 ++++-- src/components/search-navigation-links.js | 7 +++++-- src/components/search-navigation.js | 22 ++++++++++++++++------ src/components/search/index.js | 6 ++++-- src/components/text-balancer.js | 12 ------------ src/html.js | 1 + src/pages/index.js | 1 + 10 files changed, 34 insertions(+), 36 deletions(-) delete mode 100644 src/components/text-balancer.js diff --git a/package-lock.json b/package-lock.json index 3d59fc782a9..316baade14f 100644 --- a/package-lock.json +++ b/package-lock.json @@ -14,7 +14,6 @@ "@mdx-js/react": "^1.6.22", "@raae/gatsby-plugin-fathom": "^0.1.0", "algoliasearch": "^4.23.3", - "balance-text": "^3.3.1", "bl": "5.0.0", "bootstrap": "^5.3.3", "gatsby": "^4.25.9", @@ -5665,11 +5664,6 @@ "url": "https://github.com/sponsors/wooorm" } }, - "node_modules/balance-text": { - "version": "3.3.1", - "resolved": "https://registry.npmjs.org/balance-text/-/balance-text-3.3.1.tgz", - "integrity": "sha512-tpnHvo1w0rJ5rbu+jZKf7NLKKg6XZ6eAwREP/9jEDJ+ZTBi6jQFqn/UGARL3/oqD8SgQbyTwBXBjhKDdTgoPRw==" - }, "node_modules/balanced-match": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", @@ -6130,9 +6124,9 @@ } }, "node_modules/caniuse-lite": { - "version": "1.0.30001478", - "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001478.tgz", - "integrity": "sha512-gMhDyXGItTHipJj2ApIvR+iVB5hd0KP3svMWWXDvZOmjzJJassGLMfxRkQCSYgGd2gtdL/ReeiyvMSFD1Ss6Mw==", + "version": "1.0.30001651", + "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001651.tgz", + "integrity": "sha512-9Cf+Xv1jJNe1xPZLGuUXLNkE1BoDkqRqYyFJ9TDYSqhduqA4hu4oR9HluGoWYQC/aj8WHjsGVV+bwkh0+tegRg==", "funding": [ { "type": "opencollective", diff --git a/package.json b/package.json index 143a0429fdd..76ef608a768 100644 --- a/package.json +++ b/package.json @@ -42,7 +42,6 @@ "@mdx-js/react": "^1.6.22", "@raae/gatsby-plugin-fathom": "^0.1.0", "algoliasearch": "^4.23.3", - "balance-text": "^3.3.1", "bl": "5.0.0", "bootstrap": "^5.3.3", "gatsby": "^4.25.9", diff --git a/src/components/index.js b/src/components/index.js index 6717442862a..c42a9e30b77 100644 --- a/src/components/index.js +++ b/src/components/index.js @@ -25,7 +25,6 @@ import SearchNavigation from "./search-navigation"; import SideNavigation from "./side-navigation"; import StubCards from "./stub-cards"; import TableOfContents from "./table-of-contents"; -import TextBalancer from "./text-balancer"; import Tiles, { TileModes } from "./tiles.js"; import TimedBanner from "./timed-banner"; import TopBar from "./top-bar"; @@ -61,7 +60,6 @@ export { SideNavigation, StubCards, TableOfContents, - TextBalancer, Tiles, TileModes, TimedBanner, diff --git a/src/components/layout.js b/src/components/layout.js index 19b642f6f0a..c39446b5e30 100644 --- a/src/components/layout.js +++ b/src/components/layout.js @@ -12,7 +12,6 @@ import { Link, StubCards, IconList, - TextBalancer, } from "../components"; import { MDXProvider } from "@mdx-js/react"; 
import Icon from "../components/icon/"; @@ -164,6 +163,10 @@ const Layout = ({ {meta.description && ( )} + @@ -171,7 +174,6 @@ const Layout = ({ {children} - ); }; diff --git a/src/components/search-navigation-links.js b/src/components/search-navigation-links.js index 8b61f27b011..463903ea85b 100644 --- a/src/components/search-navigation-links.js +++ b/src/components/search-navigation-links.js @@ -1,9 +1,12 @@ import React from "react"; import { Link } from "./"; -const SearchNavigationLinks = () => ( +const SearchNavigationLinks = (props) => ( <> - + Advanced Search diff --git a/src/components/search-navigation.js b/src/components/search-navigation.js index d20261473b7..dca43727612 100644 --- a/src/components/search-navigation.js +++ b/src/components/search-navigation.js @@ -10,10 +10,12 @@ const LogoLink = () => ( ); -const DocsLink = () => ( +const DocsLink = ({ className }) => ( /docs @@ -27,18 +29,26 @@ const SearchNavigation = ({ logo = false, }) => { return ( - + {logo ? ( <> {/* Homepage */} - + ) : ( <> )} - - + + ); }; diff --git a/src/components/search/index.js b/src/components/search/index.js index eaabccaf1e7..8fb70e93bec 100644 --- a/src/components/search/index.js +++ b/src/components/search/index.js @@ -227,7 +227,7 @@ const Search = ({ searchProduct, onSearchProductChange }) => { ); }; -const SearchBar = ({ searchProduct, searchVersion }) => { +const SearchBar = ({ searchProduct, searchVersion, className }) => { const [currentProduct, setCurrentProduct] = useState(searchProduct); const { algoliaIndex } = useSiteMetadata(); @@ -250,7 +250,9 @@ const SearchBar = ({ searchProduct, searchVersion }) => { // use SSR provider just to trigger static rendering of search form. Speeds this up a LOT return ( -
+
{ - useEffect(() => { - balanceText(); - balanceText.updateWatched(); - }); - return null; -}; - -export default TextBalancer; diff --git a/src/html.js b/src/html.js index c03cd594725..a8da97d8bbd 100644 --- a/src/html.js +++ b/src/html.js @@ -13,6 +13,7 @@ export default function HTML(props) { {props.headComponents} diff --git a/src/pages/index.js b/src/pages/index.js index f1a61e44914..cd701c39cf2 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -155,6 +155,7 @@ const Page = () => { pageMeta={{ description: "EDB supercharges Postgres with products, services, and support to help you control database risk, manage costs, and scale efficiently.", + minDeviceWidth: 320, }} background="white" > From 276720bb72c4bea856730dd51ade89f3f7056b8a Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Fri, 23 Aug 2024 17:28:28 +0000 Subject: [PATCH 55/67] Fully remove Masonry --- package-lock.json | 43 ------------------------------------------- package.json | 1 - src/pages/index.js | 40 +--------------------------------------- 3 files changed, 1 insertion(+), 83 deletions(-) diff --git a/package-lock.json b/package-lock.json index 316baade14f..2af06ad76f7 100644 --- a/package-lock.json +++ b/package-lock.json @@ -44,7 +44,6 @@ "hast-util-to-string": "^1.0.4", "is-absolute-url": "^4.0.1", "markdown-to-jsx": "^7.4.7", - "masonry-layout": "^4.2.2", "mdast-util-to-string": "^2.0.0", "prismjs": "^1.29.0", "react": "^18.3.1", @@ -7552,11 +7551,6 @@ "node": ">=6" } }, - "node_modules/desandro-matches-selector": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/desandro-matches-selector/-/desandro-matches-selector-2.0.2.tgz", - "integrity": "sha512-+1q0nXhdzg1IpIJdMKalUwvvskeKnYyEe3shPRwedNcWtnhEKT3ZxvFjzywHDeGcKViIxTCAoOYQWP1qD7VNyg==" - }, "node_modules/destroy": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", @@ -8822,11 +8816,6 @@ "node": ">= 0.6" } }, - "node_modules/ev-emitter": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/ev-emitter/-/ev-emitter-1.1.1.tgz", - "integrity": "sha512-ipiDYhdQSCZ4hSbX4rMW+XzNKMD1prg/sTvoVmSLkuQ1MVlwjJQQA+sW8tMYR3BLUr9KjodFV4pvzunvRhd33Q==" - }, "node_modules/eval": { "version": "0.1.8", "resolved": "https://registry.npmjs.org/eval/-/eval-0.1.8.tgz", @@ -9353,14 +9342,6 @@ "url": "https://github.com/sponsors/sindresorhus" } }, - "node_modules/fizzy-ui-utils": { - "version": "2.0.7", - "resolved": "https://registry.npmjs.org/fizzy-ui-utils/-/fizzy-ui-utils-2.0.7.tgz", - "integrity": "sha512-CZXDVXQ1If3/r8s0T+v+qVeMshhfcuq0rqIFgJnrtd+Bu8GmDmqMjntjUePypVtjHXKJ6V4sw9zeyox34n9aCg==", - "dependencies": { - "desandro-matches-selector": "^2.0.0" - } - }, "node_modules/flat-cache": { "version": "3.0.4", "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-3.0.4.tgz", @@ -11617,11 +11598,6 @@ "node": ">=4" } }, - "node_modules/get-size": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/get-size/-/get-size-2.0.3.tgz", - "integrity": "sha512-lXNzT/h/dTjTxRbm9BXb+SGxxzkm97h/PCIKtlN/CBCxxmkkIVV21udumMS93MuVTDX583gqc94v3RjuHmI+2Q==" - }, "node_modules/get-stream": { "version": "6.0.1", "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-6.0.1.tgz", @@ -14554,15 +14530,6 @@ "react": ">= 0.14.0" } }, - "node_modules/masonry-layout": { - "version": "4.2.2", - "resolved": "https://registry.npmjs.org/masonry-layout/-/masonry-layout-4.2.2.tgz", - "integrity": "sha512-iGtAlrpHNyxaR19CvKC3npnEcAwszXoyJiI8ARV2ePi7fmYhIud25MHK8Zx4P0LCC4d3TNO9+rFa1KoK1OEOaA==", - 
"dependencies": { - "get-size": "^2.0.2", - "outlayer": "^2.1.0" - } - }, "node_modules/md5-file": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/md5-file/-/md5-file-5.0.0.tgz", @@ -15829,16 +15796,6 @@ "node": ">=0.10.0" } }, - "node_modules/outlayer": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/outlayer/-/outlayer-2.1.1.tgz", - "integrity": "sha512-+GplXsCQ3VrbGujAeHEzP9SXsBmJxzn/YdDSQZL0xqBmAWBmortu2Y9Gwdp9J0bgDQ8/YNIPMoBM13nTwZfAhw==", - "dependencies": { - "ev-emitter": "^1.0.0", - "fizzy-ui-utils": "^2.0.0", - "get-size": "^2.0.2" - } - }, "node_modules/p-cancelable": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/p-cancelable/-/p-cancelable-2.1.1.tgz", diff --git a/package.json b/package.json index 76ef608a768..401456e5d9a 100644 --- a/package.json +++ b/package.json @@ -72,7 +72,6 @@ "hast-util-to-string": "^1.0.4", "is-absolute-url": "^4.0.1", "markdown-to-jsx": "^7.4.7", - "masonry-layout": "^4.2.2", "mdast-util-to-string": "^2.0.0", "prismjs": "^1.29.0", "react": "^18.3.1", diff --git a/src/pages/index.js b/src/pages/index.js index cd701c39cf2..6cda70c04f3 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -1,33 +1,9 @@ -import React, { useLayoutEffect, useRef } from "react"; +import React from "react"; import { Container } from "react-bootstrap"; import Icon, { iconNames } from "../components/icon/"; import { Footer, IndexSubNav, Layout, Link, MainContent } from "../components"; import { updates } from "../constants/updates"; -const isBrowser = typeof window !== "undefined"; -const Masonry = isBrowser ? window.Masonry || require("masonry-layout") : null; - -const IndexCard = ({ iconName, headingText, children }) => ( -
-    [IndexCard body: an Icon, the {headingText} heading, and a {children} list; JSX markup not preserved]
-);
@@ -135,21 +111,7 @@ const BannerCardLink = ({ to, className, children }) => ( ); -const IndexCardLink = ({ to, className, children }) => ( -
-    [IndexCardLink body: a list-item link to {to} rendering {children}; JSX markup not preserved]
  • -); - const Page = () => { - const layout = useRef(null); - useLayoutEffect(() => { - layout.current = layout.current || new Masonry("*[data-masonry]"); - return () => layout.current?.destroy(); - }, []); - return ( Date: Tue, 27 Aug 2024 12:47:24 +0100 Subject: [PATCH 56/67] Rebuild with test DMS content and fix right hand TOCs Signed-off-by: Dj Walker-Morgan --- .../getting_started/mark_completed.mdx | 2 +- .../getting_started/remove_software.mdx | 5 +++++ .../getting_started/verify_migration.mdx | 2 +- .../data-migration-service/rel_notes/index.mdx | 18 ++++++++++++++++++ .../rel_notes/rel_notes_2.0.0_preview.mdx | 14 ++++++++++++++ .../data-migration-service/upgrading.mdx | 16 ++++++++++++++++ .../edb-postgres-ai/migration-etl/pgd.mdx | 6 ------ src/pages/index.js | 4 ++-- 8 files changed, 57 insertions(+), 10 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx create mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx delete mode 100644 advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx index 170dad42a25..d65aa7c2d05 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/mark_completed.mdx @@ -10,4 +10,4 @@ After checking that your data is present in the target database you can stop the 1. In the [EDB Postgres AI Console](https://portal.biganimal.com), in your project, select **Migrate** > **Migrations**. -1. Select the **check icon** next to your migration to mark the migration as completed. This stops the streaming procedure. \ No newline at end of file +1. Select the **check icon** next to your migration to mark the migration as completed. This stops the streaming procedure. diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx new file mode 100644 index 00000000000..d6a88cefa9b --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/remove_software.mdx @@ -0,0 +1,5 @@ +--- +title: "Removing customer software" +--- + +After verifying that the migration is up and running, you can remove the customer software. \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx index d7f8502e458..08b9b2deb54 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/verify_migration.mdx @@ -4,4 +4,4 @@ title: "Verifying the migration" Compare the source and target databases to verify that the all data was migrated. 
-You can use [LiveCompare](/livecompare/latest/). \ No newline at end of file +You can use [LiveCompare](/livecompare/latest/). diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx new file mode 100644 index 00000000000..ce6ea9aeac3 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/index.mdx @@ -0,0 +1,18 @@ +--- +title: "EDB Data Migration Service Release notes" +navTitle: "Release notes" +description: Learn about new features and functions. +navigation: +- rel_notes_2.0.0_preview +--- + +The EDB Data Migration Service documentation describes the latest version of the EDB +DMS Reader 2, including minor releases and patches. The release notes +provide information on what was new in each release. For new functionality +introduced in a minor or patch release, the content also indicates the release +that introduced the feature. + +| Release Date | Data Migration | +|--------------|------------------------------------------| +| 2023 Jun 9 | [2.0.0_preview](rel_notes_2.0.0_preview) | + diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx new file mode 100644 index 00000000000..6b15b93e8eb --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/rel_notes/rel_notes_2.0.0_preview.mdx @@ -0,0 +1,14 @@ +--- +title: "Release notes for EDB Data Migration Service version 2.0.0_preview" +navTitle: "Version 2.0.0_preview" +--- + +EDB Data Migration Service (EDB DMS) version 2.0.0_preview is a new major version of EDB Data Migration Service. + +The highlights of this release include: + +* General availability of EDB DMS migration capabilities. + +| Type | Description | +|-------------|------------------------------------------------------------------------------------------| +| Enhancement | Parallel table snapshots are available with Debezium 2.2.0. | diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx new file mode 100644 index 00000000000..74ac0751558 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/upgrading.mdx @@ -0,0 +1,16 @@ +--- +title: "Upgrading" +description: Learn how to upgrade the EDB DMS Reader to a more recent version. +--- + +EDB recommends upgrading the EDB DMS Reader when it is not performing a streaming migration. However, you can also temporarily stop a migration to perform an upgrade. +The EDB DMS Reader components are designed to terminate and restart gracefully. + +To upgrade the software: + +1. If the EDB DMS reader is currently running, stop the process. + +1. Install and start a new version of the EDB DMS Reader. + +1. Continue the migration by restarting the EDB DMS Reader with the updated software. 
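An illustrative sketch of that upgrade sequence on a RHEL-family host, reusing the reader package name that the install pages later in this series use; the exact version string and the way the reader process is stopped and restarted depend on your installation and are assumptions here:

```shell
# 1. Stop the EDB DMS Reader if it is currently streaming
#    (interrupt the foreground process or stop it however you normally launch it).

# 2. Pull the newer reader package from the EDB repository.
#    The package name matches the install instructions in this series; the version
#    resolved by the package manager is whatever is current in the repository.
sudo dnf upgrade cdcreader

# 3. Start the EDB DMS Reader again; the migration continues with the upgraded software.
```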
+ diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx deleted file mode 100644 index 632584dd934..00000000000 --- a/advocacy_docs/edb-postgres-ai/migration-etl/pgd.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: PGD test -navTitle: PGD test -description: PGD test -refresh: /pgd/latest/ ---- diff --git a/src/pages/index.js b/src/pages/index.js index 6cda70c04f3..c9c6a321039 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -276,13 +276,13 @@ const Page = () => { headingText="Migration and ETL" to="/edb-postgres-ai/migration-etl" > - + Data Migration Service Migration Portal with AI Copilot - +   Date: Tue, 27 Aug 2024 13:49:49 +0100 Subject: [PATCH 57/67] Remove sticky debug Signed-off-by: Dj Walker-Morgan --- src/templates/doc.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/templates/doc.js b/src/templates/doc.js index bdacc48e808..29906fdf260 100644 --- a/src/templates/doc.js +++ b/src/templates/doc.js @@ -77,7 +77,7 @@ const buildSections = (navTree) => { } }); if (nextSection) sections.push(nextSection); - console.error(sections); + return sections; }; From b44b9f9add76626c150eedf7fe6fd1548e0e4a04 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 29 Aug 2024 11:27:08 +0100 Subject: [PATCH 58/67] Page tuneup for front page - dividers are now functions Signed-off-by: Dj Walker-Morgan --- .../edb-postgres-ai/migration-etl/index.mdx | 4 +- src/pages/index.js | 191 +++++++++--------- 2 files changed, 96 insertions(+), 99 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx index d28269fc219..8f4d83dd50c 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/index.mdx @@ -9,7 +9,9 @@ navigation: - migration-and-ai --- -Moving your data to Postgres is a challenge that EDB Postgres AI is built to solve. The EDB Postgres AI platform includes a set of tools that help you migrate your data to Postgres and keep it up-to-date. These tools include the EDB Data Migration Service and the EDB Postgres AI Migration and ETL tools. +Moving your data to Postgres is a challenge that EDB Postgres AI is built to solve. The EDB Postgres AI platform includes a set of tools that help you migrate your data to Postgres and keep it up-to-date. + +These tools include the EDB Data Migration Service and the EDB Postgres AI Migration and ETL tools. diff --git a/src/pages/index.js b/src/pages/index.js index c9c6a321039..a8fed3d263b 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -6,7 +6,7 @@ import { updates } from "../constants/updates"; const BannerCard = ({ iconName, headingText, children }) => (
    -
    +
    ( const BannerSubCard = ({ iconName, headingText, to, children }) => (
    -
    +
    -
    +
    ( const BannerWideSubCard = ({ iconName, headingText, to, children }) => (
    -
    +
    -
    +
    ( const BannerWideCard = ({ iconName, headingText, to, children }) => (
    -
    +
    {children}
    @@ -111,6 +111,24 @@ const BannerCardLink = ({ to, className, children }) => ( ); +const BannerIconDivider = ({ iconName, headingText }) => ( + + + +   + {headingText} + + +); + +const BannerDivider = ({ headingText }) => ( + + + {headingText} + + +); + const Page = () => { return ( { Migration Portal with AI Copilot -   + Replication Server + + + Downloads and Repositories + + + + Developer Guides + + + - - - Kubernetes - + + EDB Postgres Distributed for Kubernetes @@ -306,10 +343,10 @@ const Page = () => { CloudNativePG - - - Management and Monitoring - + Postgres Enterprise Manager @@ -326,10 +363,11 @@ const Page = () => { Postgres Workload Report - - - Security - + + Transparent Data Encryption @@ -337,23 +375,20 @@ const Page = () => { EDB LDAP Sync - - - Automation - + Trusted Postgres Architect - - - High Availability - + + Replication Manager (repmgr) @@ -369,14 +404,10 @@ const Page = () => { Failover Manager - - - Backup and Recovery - + Barman @@ -385,29 +416,27 @@ const Page = () => { pgBackRest - - - Migration - + - + {/* Migration Handbook - + */} Migration Toolkit - + {/* Replication Server - + */} - - Extensions - + Supported Postgres extensions @@ -451,14 +480,12 @@ const Page = () => { PG Failover Slots - - Tools - - EDB SPL Check + + EDB SQL Patch @@ -472,45 +499,23 @@ const Page = () => { - - - Downloads and Repositories - - - - Developer Guides - - - - - Connectors - + + JDBC .NET OCL ODBC - - Connection Poolers - + + PgBouncer pgPool-II - - Foreign Data Wrappers - + Hadoop @@ -527,9 +532,7 @@ const Page = () => { iconName={iconNames.HANDSHAKE} headingText="Third Party Integrations" > - - Backup - + { Veritas NetBackup for PostgreSQL - - Data Movement - + { Precisely Connect CDC - - Developer Tools - + DBeaver PRO @@ -586,10 +585,9 @@ const Page = () => { > SIB Visions VisionX -
    - - Security - + + + Hashicorp Vault @@ -617,10 +615,7 @@ const Page = () => { > Thales CipherTrust Transparent Encryption -
    - - Other - + Chemaxon JChem PostgreSQL Cartridge From d787cdffd097d093305cedf9ea9610505eb47663 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 3 Sep 2024 11:49:22 -0400 Subject: [PATCH 59/67] first steps to setting up templatized install instructions for Data Migration Service --- install_template/config.yaml | 29 +++++++++++++++++++ .../almalinux-8-or-rocky-linux-8.njk | 5 ++++ .../products/data-migration-service/base.njk | 25 ++++++++++++++++ .../data-migration-service/debian-10.njk | 2 ++ .../data-migration-service/debian-11.njk | 2 ++ .../data-migration-service/debian.njk | 4 +++ .../products/data-migration-service/index.njk | 9 ++++++ .../data-migration-service/ppc64le_index.njk | 7 +++++ .../data-migration-service/rhel-8-or-ol-8.njk | 5 ++++ .../data-migration-service/sles-12.njk | 6 ++++ .../data-migration-service/sles-15.njk | 5 ++++ .../data-migration-service/ubuntu-18.04.njk | 2 ++ .../data-migration-service/ubuntu-20.04.njk | 2 ++ .../data-migration-service/ubuntu-22.04.njk | 2 ++ .../data-migration-service/ubuntu.njk | 1 + .../data-migration-service/x86_64_index.njk | 7 +++++ 16 files changed, 113 insertions(+) create mode 100644 install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk create mode 100644 install_template/templates/products/data-migration-service/base.njk create mode 100644 install_template/templates/products/data-migration-service/debian-10.njk create mode 100644 install_template/templates/products/data-migration-service/debian-11.njk create mode 100644 install_template/templates/products/data-migration-service/debian.njk create mode 100644 install_template/templates/products/data-migration-service/index.njk create mode 100644 install_template/templates/products/data-migration-service/ppc64le_index.njk create mode 100644 install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk create mode 100644 install_template/templates/products/data-migration-service/sles-12.njk create mode 100644 install_template/templates/products/data-migration-service/sles-15.njk create mode 100644 install_template/templates/products/data-migration-service/ubuntu-18.04.njk create mode 100644 install_template/templates/products/data-migration-service/ubuntu-20.04.njk create mode 100644 install_template/templates/products/data-migration-service/ubuntu-22.04.njk create mode 100644 install_template/templates/products/data-migration-service/ubuntu.njk create mode 100644 install_template/templates/products/data-migration-service/x86_64_index.njk diff --git a/install_template/config.yaml b/install_template/config.yaml index 187d2eb3245..8d4a18a0b46 100644 --- a/install_template/config.yaml +++ b/install_template/config.yaml @@ -1,4 +1,33 @@ products: + - name: Data Migration Service + platforms: + - name: RHEL 8 or OL 8 + arch: x86_64 + supported versions: [2.0] + - name: AlmaLinux 9 or Rocky Linux 9 + arch: x86_64 + supported versions: [2.0] + - name: RHEL 9 or OL 9 + arch: x86_64 + supported versions: [2.0] + - name: Debian 11 + arch: x86_64 + supported versions: [2.0] + - name: Debian 12 + arch: x86_64 + supported versions: [2.0] + - name: Ubuntu 20.04 + arch: x86_64 + supported versions: [2.0] + - name: Ubuntu 22.04 + arch: x86_64 + supported versions: [2.0] + - name: SLES 15 + arch: x86_64 + supported versions: [2.0] + - name: SLES 12 + arch: x86_64 + supported versions: [2.0] - name: EDB JDBC Connector platforms: - name: RHEL 8 diff --git 
a/install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk b/install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk new file mode 100644 index 00000000000..85caebe4da4 --- /dev/null +++ b/install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk @@ -0,0 +1,5 @@ +{% extends "products/data-migration-service/base.njk" %} +{% set platformBaseTemplate = "almalinux-8-or-rocky-linux-8" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/data-migration-service/base.njk b/install_template/templates/products/data-migration-service/base.njk new file mode 100644 index 00000000000..5924b36d3f2 --- /dev/null +++ b/install_template/templates/products/data-migration-service/base.njk @@ -0,0 +1,25 @@ +{% extends "platformBase/" + platformBaseTemplate + '.njk' %} +{% set packageName = packageName or 'cdcreader=-1.4-1.4766136665.17.1.jammy' %} +{% set writerPackageName = writerPackageName or 'cdcwriter=1.3-1.4766201953.4.1.jammy' %} +{% import "platformBase/_deploymentConstants.njk" as deploy %} +{% block frontmatter %} +{# + If you modify deployment path here, please first copy the old expression + and add it to the list under "redirects:" below - this ensures we don't + break any existing links. +#} +deployPath: edb-postgres-ai/migration-etl/data-migration-service/{{ product.version }}/installing/linux_{{platform.arch}}/data-migration-service_{{deploy.map_platform[platform.name]}}.mdx +{% endblock frontmatter %} + +{% block installCommand %} +Install CDCReader: +```shell +sudo {{packageManager}} install {{ packageName }} +``` + +Install CDCWriter: +```shell +sudo {{packageManager}} install {{ writerPackageName }} +``` +{% endblock installCommand %} + diff --git a/install_template/templates/products/data-migration-service/debian-10.njk b/install_template/templates/products/data-migration-service/debian-10.njk new file mode 100644 index 00000000000..61391348f43 --- /dev/null +++ b/install_template/templates/products/data-migration-service/debian-10.njk @@ -0,0 +1,2 @@ +{% extends "products/data-migration-service/debian.njk" %} +{% set platformBaseTemplate = "debian-10" %} diff --git a/install_template/templates/products/data-migration-service/debian-11.njk b/install_template/templates/products/data-migration-service/debian-11.njk new file mode 100644 index 00000000000..093d9ddf240 --- /dev/null +++ b/install_template/templates/products/data-migration-service/debian-11.njk @@ -0,0 +1,2 @@ +{% extends "products/data-migration-service/debian.njk" %} +{% set platformBaseTemplate = "debian-11" %} diff --git a/install_template/templates/products/data-migration-service/debian.njk b/install_template/templates/products/data-migration-service/debian.njk new file mode 100644 index 00000000000..913797e667c --- /dev/null +++ b/install_template/templates/products/data-migration-service/debian.njk @@ -0,0 +1,4 @@ +{% extends "products/data-migration-service/base.njk" %} +{% block debian_ubuntu %}This section steps you through getting started with your cluster including logging in, ensuring the installation was successful, connecting to your cluster, and creating the user password. 
+ +```shell{% endblock debian_ubuntu %} \ No newline at end of file diff --git a/install_template/templates/products/data-migration-service/index.njk b/install_template/templates/products/data-migration-service/index.njk new file mode 100644 index 00000000000..d724f2e7b74 --- /dev/null +++ b/install_template/templates/products/data-migration-service/index.njk @@ -0,0 +1,9 @@ +{% extends "platformBase/index.njk" %} +{% set productShortname="data-migration-service" %} + +{% block frontmatter %} +deployPath: edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version }}/installing/index.mdx +{% endblock frontmatter %} +{% block navigation %} +- linux_x86_64 +{% endblock navigation %} diff --git a/install_template/templates/products/data-migration-service/ppc64le_index.njk b/install_template/templates/products/data-migration-service/ppc64le_index.njk new file mode 100644 index 00000000000..11bad6298dc --- /dev/null +++ b/install_template/templates/products/data-migration-service/ppc64le_index.njk @@ -0,0 +1,7 @@ + +{% extends "platformBase/ppc64le_index.njk" %} +{% set productShortname="transporter" %} + +{% block frontmatter %} +{{super()}} +{% endblock frontmatter %} diff --git a/install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk b/install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk new file mode 100644 index 00000000000..5c86b5dfa0d --- /dev/null +++ b/install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk @@ -0,0 +1,5 @@ +{% extends "products/data-migration-service/base.njk" %} +{% set platformBaseTemplate = "rhel-8-or-ol-8" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/data-migration-service/sles-12.njk b/install_template/templates/products/data-migration-service/sles-12.njk new file mode 100644 index 00000000000..17133292bf2 --- /dev/null +++ b/install_template/templates/products/data-migration-service/sles-12.njk @@ -0,0 +1,6 @@ +{% extends "products/data-migration-service/base.njk" %} +{% set platformBaseTemplate = "sles-12" %} +{% set packageManager = "zypper" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} + diff --git a/install_template/templates/products/data-migration-service/sles-15.njk b/install_template/templates/products/data-migration-service/sles-15.njk new file mode 100644 index 00000000000..0e94954e533 --- /dev/null +++ b/install_template/templates/products/data-migration-service/sles-15.njk @@ -0,0 +1,5 @@ +{% extends "products/data-migration-service/base.njk" %} +{% set platformBaseTemplate = "sles-15" %} +{% set packageManager = "zypper" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/data-migration-service/ubuntu-18.04.njk b/install_template/templates/products/data-migration-service/ubuntu-18.04.njk new file mode 100644 index 00000000000..7b9992b6113 --- /dev/null +++ b/install_template/templates/products/data-migration-service/ubuntu-18.04.njk @@ -0,0 +1,2 @@ +{% extends "products/data-migration-service/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-18.04" %} 
diff --git a/install_template/templates/products/data-migration-service/ubuntu-20.04.njk b/install_template/templates/products/data-migration-service/ubuntu-20.04.njk new file mode 100644 index 00000000000..15df88848be --- /dev/null +++ b/install_template/templates/products/data-migration-service/ubuntu-20.04.njk @@ -0,0 +1,2 @@ +{% extends "products/data-migration-service/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-20.04" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu-22.04.njk b/install_template/templates/products/data-migration-service/ubuntu-22.04.njk new file mode 100644 index 00000000000..10bd2d5a432 --- /dev/null +++ b/install_template/templates/products/data-migration-service/ubuntu-22.04.njk @@ -0,0 +1,2 @@ +{% extends "products/data-migration-service/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-22.04" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu.njk b/install_template/templates/products/data-migration-service/ubuntu.njk new file mode 100644 index 00000000000..e7c49e0c676 --- /dev/null +++ b/install_template/templates/products/data-migration-service/ubuntu.njk @@ -0,0 +1 @@ +{% extends "products/data-migration-service/base.njk" %} diff --git a/install_template/templates/products/data-migration-service/x86_64_index.njk b/install_template/templates/products/data-migration-service/x86_64_index.njk new file mode 100644 index 00000000000..a00dcc01f45 --- /dev/null +++ b/install_template/templates/products/data-migration-service/x86_64_index.njk @@ -0,0 +1,7 @@ + +{% extends "platformBase/x86_64_index.njk" %} +{% set productShortname="data-migration-service" %} + +{% block frontmatter %} +deployPath: edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version }}/installing/linux_x86_64/index.mdx +{% endblock frontmatter %} From 37bab63dcd4b6adc99664d1a338c29d5d17daeda Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 4 Sep 2024 08:53:06 -0400 Subject: [PATCH 60/67] Fixed name and short name --- install_template/config.yaml | 2 +- .../data-migration-service/debian-10.njk | 2 - .../data-migration-service/debian-11.njk | 2 - .../data-migration-service/ubuntu-18.04.njk | 2 - .../data-migration-service/ubuntu-20.04.njk | 2 - .../data-migration-service/ubuntu-22.04.njk | 2 - .../data-migration-service/ubuntu.njk | 1 - .../almalinux-8-or-rocky-linux-8.njk | 2 +- .../base.njk | 2 +- .../debian-10.njk | 2 + .../debian-11.njk | 2 + .../debian.njk | 2 +- .../index.njk | 2 +- .../ppc64le_index.njk | 2 +- .../rhel-8-or-ol-8.njk | 2 +- .../sles-12.njk | 2 +- .../sles-15.njk | 2 +- .../ubuntu-18.04.njk | 2 + .../ubuntu-20.04.njk | 2 + .../ubuntu-22.04.njk | 2 + .../ubuntu.njk | 1 + .../x86_64_index.njk | 2 +- .../edb-dms-reader/2/installing/index.mdx | 31 +++++++++++ .../linux_x86_64/edb-dms-reader_debian_11.mdx | 41 ++++++++++++++ .../linux_x86_64/edb-dms-reader_rhel_8.mdx | 41 ++++++++++++++ .../linux_x86_64/edb-dms-reader_sles_12.mdx | 52 ++++++++++++++++++ .../linux_x86_64/edb-dms-reader_sles_15.mdx | 53 +++++++++++++++++++ .../linux_x86_64/edb-dms-reader_ubuntu_20.mdx | 41 ++++++++++++++ .../linux_x86_64/edb-dms-reader_ubuntu_22.mdx | 41 ++++++++++++++ .../2/installing/linux_x86_64/index.mdx | 47 ++++++++++++++++ 30 files changed, 368 insertions(+), 21 deletions(-) delete mode 100644 install_template/templates/products/data-migration-service/debian-10.njk delete mode 100644 install_template/templates/products/data-migration-service/debian-11.njk 
delete mode 100644 install_template/templates/products/data-migration-service/ubuntu-18.04.njk delete mode 100644 install_template/templates/products/data-migration-service/ubuntu-20.04.njk delete mode 100644 install_template/templates/products/data-migration-service/ubuntu-22.04.njk delete mode 100644 install_template/templates/products/data-migration-service/ubuntu.njk rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/almalinux-8-or-rocky-linux-8.njk (80%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/base.njk (80%) create mode 100644 install_template/templates/products/edb-data-migration-service-reader/debian-10.njk create mode 100644 install_template/templates/products/edb-data-migration-service-reader/debian-11.njk rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/debian.njk (78%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/index.njk (83%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/ppc64le_index.njk (71%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/rhel-8-or-ol-8.njk (79%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/sles-12.njk (77%) rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/sles-15.njk (77%) create mode 100644 install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk create mode 100644 install_template/templates/products/edb-data-migration-service-reader/ubuntu-20.04.njk create mode 100644 install_template/templates/products/edb-data-migration-service-reader/ubuntu-22.04.njk create mode 100644 install_template/templates/products/edb-data-migration-service-reader/ubuntu.njk rename install_template/templates/products/{data-migration-service => edb-data-migration-service-reader}/x86_64_index.njk (80%) create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx create mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx diff --git a/install_template/config.yaml b/install_template/config.yaml index 8d4a18a0b46..a8db7a9af67 100644 --- a/install_template/config.yaml +++ b/install_template/config.yaml @@ -1,5 +1,5 @@ products: - - name: Data Migration Service + - name: EDB Data Migration Service Reader platforms: - name: RHEL 8 or OL 8 arch: x86_64 diff --git 
a/install_template/templates/products/data-migration-service/debian-10.njk b/install_template/templates/products/data-migration-service/debian-10.njk deleted file mode 100644 index 61391348f43..00000000000 --- a/install_template/templates/products/data-migration-service/debian-10.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/data-migration-service/debian.njk" %} -{% set platformBaseTemplate = "debian-10" %} diff --git a/install_template/templates/products/data-migration-service/debian-11.njk b/install_template/templates/products/data-migration-service/debian-11.njk deleted file mode 100644 index 093d9ddf240..00000000000 --- a/install_template/templates/products/data-migration-service/debian-11.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/data-migration-service/debian.njk" %} -{% set platformBaseTemplate = "debian-11" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu-18.04.njk b/install_template/templates/products/data-migration-service/ubuntu-18.04.njk deleted file mode 100644 index 7b9992b6113..00000000000 --- a/install_template/templates/products/data-migration-service/ubuntu-18.04.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/data-migration-service/ubuntu.njk" %} -{% set platformBaseTemplate = "ubuntu-18.04" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu-20.04.njk b/install_template/templates/products/data-migration-service/ubuntu-20.04.njk deleted file mode 100644 index 15df88848be..00000000000 --- a/install_template/templates/products/data-migration-service/ubuntu-20.04.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/data-migration-service/ubuntu.njk" %} -{% set platformBaseTemplate = "ubuntu-20.04" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu-22.04.njk b/install_template/templates/products/data-migration-service/ubuntu-22.04.njk deleted file mode 100644 index 10bd2d5a432..00000000000 --- a/install_template/templates/products/data-migration-service/ubuntu-22.04.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/data-migration-service/ubuntu.njk" %} -{% set platformBaseTemplate = "ubuntu-22.04" %} diff --git a/install_template/templates/products/data-migration-service/ubuntu.njk b/install_template/templates/products/data-migration-service/ubuntu.njk deleted file mode 100644 index e7c49e0c676..00000000000 --- a/install_template/templates/products/data-migration-service/ubuntu.njk +++ /dev/null @@ -1 +0,0 @@ -{% extends "products/data-migration-service/base.njk" %} diff --git a/install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk b/install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk similarity index 80% rename from install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk rename to install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk index 85caebe4da4..15a11c98baa 100644 --- a/install_template/templates/products/data-migration-service/almalinux-8-or-rocky-linux-8.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk @@ -1,4 +1,4 @@ -{% extends "products/data-migration-service/base.njk" %} +{% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "almalinux-8-or-rocky-linux-8" %} {% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} {% set 
writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/data-migration-service/base.njk b/install_template/templates/products/edb-data-migration-service-reader/base.njk similarity index 80% rename from install_template/templates/products/data-migration-service/base.njk rename to install_template/templates/products/edb-data-migration-service-reader/base.njk index 5924b36d3f2..96a7b21cdba 100644 --- a/install_template/templates/products/data-migration-service/base.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/base.njk @@ -8,7 +8,7 @@ and add it to the list under "redirects:" below - this ensures we don't break any existing links. #} -deployPath: edb-postgres-ai/migration-etl/data-migration-service/{{ product.version }}/installing/linux_{{platform.arch}}/data-migration-service_{{deploy.map_platform[platform.name]}}.mdx +deployPath: edb-postgres-ai/migration-etl/edb-dms-reader/{{ product.version }}/installing/linux_{{platform.arch}}/edb-dms-reader_{{deploy.map_platform[platform.name]}}.mdx {% endblock frontmatter %} {% block installCommand %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/debian-10.njk b/install_template/templates/products/edb-data-migration-service-reader/debian-10.njk new file mode 100644 index 00000000000..31a098f2ff9 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/debian-10.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-data-migration-service-reader/debian.njk" %} +{% set platformBaseTemplate = "debian-10" %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/debian-11.njk b/install_template/templates/products/edb-data-migration-service-reader/debian-11.njk new file mode 100644 index 00000000000..e784520deb0 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/debian-11.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-data-migration-service-reader/debian.njk" %} +{% set platformBaseTemplate = "debian-11" %} diff --git a/install_template/templates/products/data-migration-service/debian.njk b/install_template/templates/products/edb-data-migration-service-reader/debian.njk similarity index 78% rename from install_template/templates/products/data-migration-service/debian.njk rename to install_template/templates/products/edb-data-migration-service-reader/debian.njk index 913797e667c..1d5f1b15665 100644 --- a/install_template/templates/products/data-migration-service/debian.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/debian.njk @@ -1,4 +1,4 @@ -{% extends "products/data-migration-service/base.njk" %} +{% extends "products/edb-data-migration-service-reader/base.njk" %} {% block debian_ubuntu %}This section steps you through getting started with your cluster including logging in, ensuring the installation was successful, connecting to your cluster, and creating the user password. 
```shell{% endblock debian_ubuntu %} \ No newline at end of file diff --git a/install_template/templates/products/data-migration-service/index.njk b/install_template/templates/products/edb-data-migration-service-reader/index.njk similarity index 83% rename from install_template/templates/products/data-migration-service/index.njk rename to install_template/templates/products/edb-data-migration-service-reader/index.njk index d724f2e7b74..431b6c0d4b0 100644 --- a/install_template/templates/products/data-migration-service/index.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/index.njk @@ -1,5 +1,5 @@ {% extends "platformBase/index.njk" %} -{% set productShortname="data-migration-service" %} +{% set productShortname="edb-dms-reader" %} {% block frontmatter %} deployPath: edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version }}/installing/index.mdx diff --git a/install_template/templates/products/data-migration-service/ppc64le_index.njk b/install_template/templates/products/edb-data-migration-service-reader/ppc64le_index.njk similarity index 71% rename from install_template/templates/products/data-migration-service/ppc64le_index.njk rename to install_template/templates/products/edb-data-migration-service-reader/ppc64le_index.njk index 11bad6298dc..b470fcbbb41 100644 --- a/install_template/templates/products/data-migration-service/ppc64le_index.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/ppc64le_index.njk @@ -1,6 +1,6 @@ {% extends "platformBase/ppc64le_index.njk" %} -{% set productShortname="transporter" %} +{% set productShortname="edb-dms-reader" %} {% block frontmatter %} {{super()}} diff --git a/install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk b/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk similarity index 79% rename from install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk rename to install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk index 5c86b5dfa0d..595dfcb657b 100644 --- a/install_template/templates/products/data-migration-service/rhel-8-or-ol-8.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk @@ -1,4 +1,4 @@ -{% extends "products/data-migration-service/base.njk" %} +{% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "rhel-8-or-ol-8" %} {% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} {% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/data-migration-service/sles-12.njk b/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk similarity index 77% rename from install_template/templates/products/data-migration-service/sles-12.njk rename to install_template/templates/products/edb-data-migration-service-reader/sles-12.njk index 17133292bf2..2b50dd803ca 100644 --- a/install_template/templates/products/data-migration-service/sles-12.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk @@ -1,4 +1,4 @@ -{% extends "products/data-migration-service/base.njk" %} +{% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "sles-12" %} {% set packageManager = "zypper" %} {% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} diff --git 
a/install_template/templates/products/data-migration-service/sles-15.njk b/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk similarity index 77% rename from install_template/templates/products/data-migration-service/sles-15.njk rename to install_template/templates/products/edb-data-migration-service-reader/sles-15.njk index 0e94954e533..344d65be864 100644 --- a/install_template/templates/products/data-migration-service/sles-15.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk @@ -1,4 +1,4 @@ -{% extends "products/data-migration-service/base.njk" %} +{% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "sles-15" %} {% set packageManager = "zypper" %} {% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk new file mode 100644 index 00000000000..34c6da74150 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-data-migration-service-reader/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-18.04" %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/ubuntu-20.04.njk b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-20.04.njk new file mode 100644 index 00000000000..bf8a7d59776 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-20.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-data-migration-service-reader/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-20.04" %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/ubuntu-22.04.njk b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-22.04.njk new file mode 100644 index 00000000000..292cef9d884 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-22.04.njk @@ -0,0 +1,2 @@ +{% extends "products/edb-data-migration-service-reader/ubuntu.njk" %} +{% set platformBaseTemplate = "ubuntu-22.04" %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/ubuntu.njk b/install_template/templates/products/edb-data-migration-service-reader/ubuntu.njk new file mode 100644 index 00000000000..a3809155a2a --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/ubuntu.njk @@ -0,0 +1 @@ +{% extends "products/edb-data-migration-service-reader/base.njk" %} diff --git a/install_template/templates/products/data-migration-service/x86_64_index.njk b/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk similarity index 80% rename from install_template/templates/products/data-migration-service/x86_64_index.njk rename to install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk index a00dcc01f45..9cfbcbe6e5b 100644 --- a/install_template/templates/products/data-migration-service/x86_64_index.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk @@ -1,6 +1,6 @@ {% extends "platformBase/x86_64_index.njk" %} -{% set productShortname="data-migration-service" %} +{% set productShortname="edb-dms-reader" %} {% block frontmatter %} deployPath: 
edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version }}/installing/linux_x86_64/index.mdx diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx new file mode 100644 index 00000000000..10fff3f06bb --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx @@ -0,0 +1,31 @@ +--- +navTitle: Installing +title: Installing EDB Data Migration Service Reader on Linux + +navigation: + - linux_x86_64 +--- + +Select a link to access the applicable installation instructions: + +## Linux [x86-64 (amd64)](linux_x86_64) + +### Red Hat Enterprise Linux (RHEL) and derivatives + +- [RHEL 9](linux_x86_64/edb-dms-reader_rhel_9), [RHEL 8](linux_x86_64/edb-dms-reader_rhel_8) + +- [Oracle Linux (OL) 9](linux_x86_64/edb-dms-reader_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/edb-dms-reader_rhel_8) + +- [Rocky Linux 9](linux_x86_64/edb-dms-reader_other_linux_9) + +- [AlmaLinux 9](linux_x86_64/edb-dms-reader_other_linux_9) + +### SUSE Linux Enterprise (SLES) + +- [SLES 15](linux_x86_64/edb-dms-reader_sles_15), [SLES 12](linux_x86_64/edb-dms-reader_sles_12) + +### Debian and derivatives + +- [Ubuntu 22.04](linux_x86_64/edb-dms-reader_ubuntu_22), [Ubuntu 20.04](linux_x86_64/edb-dms-reader_ubuntu_20) + +- [Debian 12](linux_x86_64/edb-dms-reader_debian_12), [Debian 11](linux_x86_64/edb-dms-reader_debian_11) diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx new file mode 100644 index 00000000000..1b6e6334f1c --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx @@ -0,0 +1,41 @@ +--- +navTitle: Debian 11 +title: Installing EDB Data Migration Service Reader on Debian 11 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. 
+ +## Install the package + +Install CDCReader: + +```shell +sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy +``` + +Install CDCWriter: + +```shell +sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx new file mode 100644 index 00000000000..c0fa080b083 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx @@ -0,0 +1,41 @@ +--- +navTitle: RHEL 8 or OL 8 +title: Installing EDB Data Migration Service Reader on RHEL 8 or OL 8 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `dnf repolist | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. + +## Install the package + +Install CDCReader: + +```shell +sudo dnf install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 +``` + +Install CDCWriter: + +```shell +sudo dnf install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx new file mode 100644 index 00000000000..abf7c83e615 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx @@ -0,0 +1,52 @@ +--- +navTitle: SLES 12 +title: Installing EDB Data Migration Service Reader on SLES 12 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `zypper lr -E | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. 
+ +- Activate the required SUSE module: + ```shell + sudo SUSEConnect -p PackageHub/12.5/x86_64 + sudo SUSEConnect -p sle-sdk/12.5/x86_64 + + ``` +- Refresh the metadata: + ```shell + sudo zypper refresh + ``` + +## Install the package + +Install CDCReader: + +```shell +sudo zypper install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 +``` + +Install CDCWriter: + +```shell +sudo zypper install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx new file mode 100644 index 00000000000..81d6e550278 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx @@ -0,0 +1,53 @@ +--- +navTitle: SLES 15 +title: Installing EDB Data Migration Service Reader on SLES 15 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `zypper lr -E | grep enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. + +- Activate the required SUSE module: + + ```shell + sudo SUSEConnect -p PackageHub/15.4/x86_64 + + ``` + +- Refresh the metadata: + ```shell + sudo zypper refresh + ``` + +## Install the package + +Install CDCReader: + +```shell +sudo zypper install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 +``` + +Install CDCWriter: + +```shell +sudo zypper install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx new file mode 100644 index 00000000000..826f5761471 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx @@ -0,0 +1,41 @@ +--- +navTitle: Ubuntu 20.04 +title: Installing EDB Data Migration Service Reader on Ubuntu 20.04 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. 
+ +## Install the package + +Install CDCReader: + +```shell +sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy +``` + +Install CDCWriter: + +```shell +sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx new file mode 100644 index 00000000000..6629b861967 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx @@ -0,0 +1,41 @@ +--- +navTitle: Ubuntu 22.04 +title: Installing EDB Data Migration Service Reader on Ubuntu 22.04 x86_64 +--- + +## Prerequisites + +Before you begin the installation process: + +- Set up the EDB repository. + + Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. + + To determine if your repository exists, enter: + + `apt-cache search enterprisedb` + + If no output is generated, the repository isn't installed. + + To set up the EDB repository: + + 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). + + 1. Select the button that provides access to the EDB repository. + 1. Select the platform and software that you want to download. + + 1. Follow the instructions for setting up the EDB repository. + +## Install the package + +Install CDCReader: + +```shell +sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy +``` + +Install CDCWriter: + +```shell +sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy +``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx new file mode 100644 index 00000000000..093ba5ec8b2 --- /dev/null +++ b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx @@ -0,0 +1,47 @@ +--- +title: "Installing EDB Data Migration Service Reader on Linux x86 (amd64)" +navTitle: "On Linux x86" + +navigation: + - edb-dms-reader_rhel_9 + - edb-dms-reader_rhel_8 + - edb-dms-reader_other_linux_9 + - edb-dms-reader_sles_15 + - edb-dms-reader_sles_12 + - edb-dms-reader_ubuntu_22 + - edb-dms-reader_ubuntu_20 + - edb-dms-reader_debian_12 + - edb-dms-reader_debian_11 +--- + +Operating system-specific install instructions are described in the corresponding documentation: + +### Red Hat Enterprise Linux (RHEL) and derivatives + +- [RHEL 9](edb-dms-reader_rhel_9) + +- [RHEL 8](edb-dms-reader_rhel_8) + +- [Oracle Linux (OL) 9](edb-dms-reader_rhel_9) + +- [Oracle Linux (OL) 8](edb-dms-reader_rhel_8) + +- [Rocky Linux 9](edb-dms-reader_other_linux_9) + +- [AlmaLinux 9](edb-dms-reader_other_linux_9) + +### SUSE Linux Enterprise (SLES) + +- [SLES 15](edb-dms-reader_sles_15) + +- [SLES 12](edb-dms-reader_sles_12) + +### Debian and derivatives + +- [Ubuntu 22.04](edb-dms-reader_ubuntu_22) + +- [Ubuntu 20.04](edb-dms-reader_ubuntu_20) + +- [Debian 12](edb-dms-reader_debian_12) + +- [Debian 11](edb-dms-reader_debian_11) From b9d8ba49ed0fd33f926539eedc170abd2baf743d Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 4 Sep 2024 09:30:00 -0400 Subject: [PATCH 61/67] added and deleted versions --- ...or-rocky-linux-8.njk => almalinux-9-or-rocky-linux-9.njk} | 2 +- 
.../{debian-10.njk => debian-12.njk} | 2 +- .../edb-data-migration-service-reader/rhel-9-or-ol-9.njk | 5 +++++ .../edb-data-migration-service-reader/ubuntu-18.04.njk | 2 -- 4 files changed, 7 insertions(+), 4 deletions(-) rename install_template/templates/products/edb-data-migration-service-reader/{almalinux-8-or-rocky-linux-8.njk => almalinux-9-or-rocky-linux-9.njk} (81%) rename install_template/templates/products/edb-data-migration-service-reader/{debian-10.njk => debian-12.njk} (60%) create mode 100644 install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk delete mode 100644 install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk diff --git a/install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk b/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk similarity index 81% rename from install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk rename to install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk index 15a11c98baa..7e034eefd10 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/almalinux-8-or-rocky-linux-8.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk @@ -1,5 +1,5 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} -{% set platformBaseTemplate = "almalinux-8-or-rocky-linux-8" %} +{% set platformBaseTemplate = "almalinux-9-or-rocky-linux-9" %} {% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} {% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} {% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-data-migration-service-reader/debian-10.njk b/install_template/templates/products/edb-data-migration-service-reader/debian-12.njk similarity index 60% rename from install_template/templates/products/edb-data-migration-service-reader/debian-10.njk rename to install_template/templates/products/edb-data-migration-service-reader/debian-12.njk index 31a098f2ff9..616c5e4a751 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/debian-10.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/debian-12.njk @@ -1,2 +1,2 @@ {% extends "products/edb-data-migration-service-reader/debian.njk" %} -{% set platformBaseTemplate = "debian-10" %} +{% set platformBaseTemplate = "debian-12" %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk b/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk new file mode 100644 index 00000000000..5d1054ce2b1 --- /dev/null +++ b/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk @@ -0,0 +1,5 @@ +{% extends "products/edb-data-migration-service-reader/base.njk" %} +{% set platformBaseTemplate = "rhel-9-or-ol-9" %} +{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} +{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} +{% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk 
b/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk deleted file mode 100644 index 34c6da74150..00000000000 --- a/install_template/templates/products/edb-data-migration-service-reader/ubuntu-18.04.njk +++ /dev/null @@ -1,2 +0,0 @@ -{% extends "products/edb-data-migration-service-reader/ubuntu.njk" %} -{% set platformBaseTemplate = "ubuntu-18.04" %} From eb675e7add4304d76771001155dce107e0183ab4 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 4 Sep 2024 09:34:24 -0400 Subject: [PATCH 62/67] Removing generated files that were erroneously added --- .../edb-dms-reader/2/installing/index.mdx | 31 ----------- .../linux_x86_64/edb-dms-reader_debian_11.mdx | 41 -------------- .../linux_x86_64/edb-dms-reader_rhel_8.mdx | 41 -------------- .../linux_x86_64/edb-dms-reader_sles_12.mdx | 52 ------------------ .../linux_x86_64/edb-dms-reader_sles_15.mdx | 53 ------------------- .../linux_x86_64/edb-dms-reader_ubuntu_20.mdx | 41 -------------- .../linux_x86_64/edb-dms-reader_ubuntu_22.mdx | 41 -------------- .../2/installing/linux_x86_64/index.mdx | 47 ---------------- 8 files changed, 347 deletions(-) delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx delete mode 100644 product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx deleted file mode 100644 index 10fff3f06bb..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/index.mdx +++ /dev/null @@ -1,31 +0,0 @@ ---- -navTitle: Installing -title: Installing EDB Data Migration Service Reader on Linux - -navigation: - - linux_x86_64 ---- - -Select a link to access the applicable installation instructions: - -## Linux [x86-64 (amd64)](linux_x86_64) - -### Red Hat Enterprise Linux (RHEL) and derivatives - -- [RHEL 9](linux_x86_64/edb-dms-reader_rhel_9), [RHEL 8](linux_x86_64/edb-dms-reader_rhel_8) - -- [Oracle Linux (OL) 9](linux_x86_64/edb-dms-reader_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/edb-dms-reader_rhel_8) - -- [Rocky Linux 9](linux_x86_64/edb-dms-reader_other_linux_9) - -- [AlmaLinux 9](linux_x86_64/edb-dms-reader_other_linux_9) - -### SUSE Linux Enterprise (SLES) - -- [SLES 15](linux_x86_64/edb-dms-reader_sles_15), [SLES 12](linux_x86_64/edb-dms-reader_sles_12) - -### Debian and derivatives - -- [Ubuntu 22.04](linux_x86_64/edb-dms-reader_ubuntu_22), [Ubuntu 20.04](linux_x86_64/edb-dms-reader_ubuntu_20) - -- [Debian 
12](linux_x86_64/edb-dms-reader_debian_12), [Debian 11](linux_x86_64/edb-dms-reader_debian_11) diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx deleted file mode 100644 index 1b6e6334f1c..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_debian_11.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -navTitle: Debian 11 -title: Installing EDB Data Migration Service Reader on Debian 11 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `apt-cache search enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. - -## Install the package - -Install CDCReader: - -```shell -sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy -``` - -Install CDCWriter: - -```shell -sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx deleted file mode 100644 index c0fa080b083..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -navTitle: RHEL 8 or OL 8 -title: Installing EDB Data Migration Service Reader on RHEL 8 or OL 8 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `dnf repolist | grep enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. 
- -## Install the package - -Install CDCReader: - -```shell -sudo dnf install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 -``` - -Install CDCWriter: - -```shell -sudo dnf install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx deleted file mode 100644 index abf7c83e615..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_12.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -navTitle: SLES 12 -title: Installing EDB Data Migration Service Reader on SLES 12 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `zypper lr -E | grep enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. - -- Activate the required SUSE module: - ```shell - sudo SUSEConnect -p PackageHub/12.5/x86_64 - sudo SUSEConnect -p sle-sdk/12.5/x86_64 - - ``` -- Refresh the metadata: - ```shell - sudo zypper refresh - ``` - -## Install the package - -Install CDCReader: - -```shell -sudo zypper install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 -``` - -Install CDCWriter: - -```shell -sudo zypper install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx deleted file mode 100644 index 81d6e550278..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_sles_15.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -navTitle: SLES 15 -title: Installing EDB Data Migration Service Reader on SLES 15 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `zypper lr -E | grep enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. 
- -- Activate the required SUSE module: - - ```shell - sudo SUSEConnect -p PackageHub/15.4/x86_64 - - ``` - -- Refresh the metadata: - ```shell - sudo zypper refresh - ``` - -## Install the package - -Install CDCReader: - -```shell -sudo zypper install cdcreader-1.4-1.4766136665.17.1.el8.x86_64 -``` - -Install CDCWriter: - -```shell -sudo zypper install cdcwriter-1.3-1.4766201953.4.1.el8.x86_64 -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx deleted file mode 100644 index 826f5761471..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -navTitle: Ubuntu 20.04 -title: Installing EDB Data Migration Service Reader on Ubuntu 20.04 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `apt-cache search enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. - -## Install the package - -Install CDCReader: - -```shell -sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy -``` - -Install CDCWriter: - -```shell -sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx deleted file mode 100644 index 6629b861967..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -navTitle: Ubuntu 22.04 -title: Installing EDB Data Migration Service Reader on Ubuntu 22.04 x86_64 ---- - -## Prerequisites - -Before you begin the installation process: - -- Set up the EDB repository. - - Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step. - - To determine if your repository exists, enter: - - `apt-cache search enterprisedb` - - If no output is generated, the repository isn't installed. - - To set up the EDB repository: - - 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - - 1. Select the button that provides access to the EDB repository. - 1. Select the platform and software that you want to download. - - 1. Follow the instructions for setting up the EDB repository. 
- -## Install the package - -Install CDCReader: - -```shell -sudo apt-get install cdcreader=-1.4-1.4766136665.17.1.jammy -``` - -Install CDCWriter: - -```shell -sudo apt-get install cdcwriter=1.3-1.4766201953.4.1.jammy -``` diff --git a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx b/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx deleted file mode 100644 index 093ba5ec8b2..00000000000 --- a/product_docs/docs/edb-postgres-ai/migration-etl/edb-dms-reader/2/installing/linux_x86_64/index.mdx +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "Installing EDB Data Migration Service Reader on Linux x86 (amd64)" -navTitle: "On Linux x86" - -navigation: - - edb-dms-reader_rhel_9 - - edb-dms-reader_rhel_8 - - edb-dms-reader_other_linux_9 - - edb-dms-reader_sles_15 - - edb-dms-reader_sles_12 - - edb-dms-reader_ubuntu_22 - - edb-dms-reader_ubuntu_20 - - edb-dms-reader_debian_12 - - edb-dms-reader_debian_11 ---- - -Operating system-specific install instructions are described in the corresponding documentation: - -### Red Hat Enterprise Linux (RHEL) and derivatives - -- [RHEL 9](edb-dms-reader_rhel_9) - -- [RHEL 8](edb-dms-reader_rhel_8) - -- [Oracle Linux (OL) 9](edb-dms-reader_rhel_9) - -- [Oracle Linux (OL) 8](edb-dms-reader_rhel_8) - -- [Rocky Linux 9](edb-dms-reader_other_linux_9) - -- [AlmaLinux 9](edb-dms-reader_other_linux_9) - -### SUSE Linux Enterprise (SLES) - -- [SLES 15](edb-dms-reader_sles_15) - -- [SLES 12](edb-dms-reader_sles_12) - -### Debian and derivatives - -- [Ubuntu 22.04](edb-dms-reader_ubuntu_22) - -- [Ubuntu 20.04](edb-dms-reader_ubuntu_20) - -- [Debian 12](edb-dms-reader_debian_12) - -- [Debian 11](edb-dms-reader_debian_11) From 241b401e642e2b3056566e38ee8e8d3fadbbf53f Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 4 Sep 2024 11:09:04 -0400 Subject: [PATCH 63/67] template revisions for RHEL9 --- install_template/templates/platformBase/base.njk | 2 +- .../products/edb-data-migration-service-reader/base.njk | 9 ++------- .../edb-data-migration-service-reader/rhel-9-or-ol-9.njk | 2 -- 3 files changed, 3 insertions(+), 10 deletions(-) diff --git a/install_template/templates/platformBase/base.njk b/install_template/templates/platformBase/base.njk index fea01437a43..90acfc8f0d6 100644 --- a/install_template/templates/platformBase/base.njk +++ b/install_template/templates/platformBase/base.njk @@ -34,7 +34,7 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). 1. Select the button that provides access to the EDB repository. - + 1. Select the platform and software that you want to download. 1. Follow the instructions for setting up the EDB repository. 
diff --git a/install_template/templates/products/edb-data-migration-service-reader/base.njk b/install_template/templates/products/edb-data-migration-service-reader/base.njk index 96a7b21cdba..72adfacb1c7 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/base.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/base.njk @@ -1,5 +1,5 @@ {% extends "platformBase/" + platformBaseTemplate + '.njk' %} -{% set packageName = packageName or 'cdcreader=-1.4-1.4766136665.17.1.jammy' %} +{% set packageName = packageName or 'cdcreader' %} {% set writerPackageName = writerPackageName or 'cdcwriter=1.3-1.4766201953.4.1.jammy' %} {% import "platformBase/_deploymentConstants.njk" as deploy %} {% block frontmatter %} @@ -12,14 +12,9 @@ deployPath: edb-postgres-ai/migration-etl/edb-dms-reader/{{ product.version }}/i {% endblock frontmatter %} {% block installCommand %} -Install CDCReader: +Install the EDB DMS Reader (packaged as `cdcreader`): ```shell sudo {{packageManager}} install {{ packageName }} ``` - -Install CDCWriter: -```shell -sudo {{packageManager}} install {{ writerPackageName }} -``` {% endblock installCommand %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk b/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk index 5d1054ce2b1..0169b3a04ac 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/rhel-9-or-ol-9.njk @@ -1,5 +1,3 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "rhel-9-or-ol-9" %} -{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} -{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} {% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file From d1492a0ee0340415a26b8e97a0d988df307b1cd0 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 11 Sep 2024 07:42:20 -0400 Subject: [PATCH 64/67] Updated template for RHEL 8 and Alma Linux 9 --- .../almalinux-9-or-rocky-linux-9.njk | 2 -- .../edb-data-migration-service-reader/rhel-8-or-ol-8.njk | 2 -- 2 files changed, 4 deletions(-) diff --git a/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk b/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk index 7e034eefd10..307f4a30f84 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/almalinux-9-or-rocky-linux-9.njk @@ -1,5 +1,3 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "almalinux-9-or-rocky-linux-9" %} -{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} -{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} {% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file diff --git a/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk b/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk index 595dfcb657b..e2139a29cb5 100644 --- 
a/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/rhel-8-or-ol-8.njk @@ -1,5 +1,3 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "rhel-8-or-ol-8" %} -{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} -{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} {% block prerequisites %}{% endblock prerequisites %} \ No newline at end of file From 6af2a0dd0bbe858db7726fab8d4b43dae0c06226 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 11 Sep 2024 07:53:38 -0400 Subject: [PATCH 65/67] Revised deployment path for generated files --- .../products/edb-data-migration-service-reader/base.njk | 4 ++-- .../products/edb-data-migration-service-reader/index.njk | 4 ++-- .../edb-data-migration-service-reader/x86_64_index.njk | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/install_template/templates/products/edb-data-migration-service-reader/base.njk b/install_template/templates/products/edb-data-migration-service-reader/base.njk index 72adfacb1c7..28b790b2c4d 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/base.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/base.njk @@ -8,7 +8,8 @@ and add it to the list under "redirects:" below - this ensures we don't break any existing links. #} -deployPath: edb-postgres-ai/migration-etl/edb-dms-reader/{{ product.version }}/installing/linux_{{platform.arch}}/edb-dms-reader_{{deploy.map_platform[platform.name]}}.mdx +deployPath: advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_{{platform.arch}}/edb-dms-reader_{{deploy.map_platform[platform.name]}}.mdx + {% endblock frontmatter %} {% block installCommand %} @@ -17,4 +18,3 @@ Install the EDB DMS Reader (packaged as `cdcreader`): sudo {{packageManager}} install {{ packageName }} ``` {% endblock installCommand %} - diff --git a/install_template/templates/products/edb-data-migration-service-reader/index.njk b/install_template/templates/products/edb-data-migration-service-reader/index.njk index 431b6c0d4b0..379d81c6e28 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/index.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/index.njk @@ -2,8 +2,8 @@ {% set productShortname="edb-dms-reader" %} {% block frontmatter %} -deployPath: edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version }}/installing/index.mdx +deployPath: advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx {% endblock frontmatter %} {% block navigation %} - linux_x86_64 -{% endblock navigation %} +{% endblock navigation %} \ No newline at end of file diff --git a/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk b/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk index 9cfbcbe6e5b..ebd1b6b5856 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/x86_64_index.njk @@ -3,5 +3,5 @@ {% set productShortname="edb-dms-reader" %} {% block frontmatter %} -deployPath: edb-postgres-ai/migration-etl/{{productShortname}}/{{ product.version 
}}/installing/linux_x86_64/index.mdx +deployPath: advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx {% endblock frontmatter %} From 78ee53e4e3083786e02ce8e2af88cc1ccf5737cb Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 11 Sep 2024 08:46:18 -0400 Subject: [PATCH 66/67] Updated templates for SLES 15 and 12 --- .../products/edb-data-migration-service-reader/sles-12.njk | 2 -- .../products/edb-data-migration-service-reader/sles-15.njk | 2 -- 2 files changed, 4 deletions(-) diff --git a/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk b/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk index 2b50dd803ca..13b02b935f1 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/sles-12.njk @@ -1,6 +1,4 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "sles-12" %} {% set packageManager = "zypper" %} -{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} -{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} diff --git a/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk b/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk index 344d65be864..135cbdce581 100644 --- a/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk +++ b/install_template/templates/products/edb-data-migration-service-reader/sles-15.njk @@ -1,5 +1,3 @@ {% extends "products/edb-data-migration-service-reader/base.njk" %} {% set platformBaseTemplate = "sles-15" %} {% set packageManager = "zypper" %} -{% set packageName %}cdcreader-1.4-1.4766136665.17.1.el8.x86_64{% endset %} -{% set writerPackageName %}cdcwriter-1.3-1.4766201953.4.1.el8.x86_64{% endset %} From 09c7755fc46614a66c1e20075bcd32bf29020b19 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Thu, 12 Sep 2024 12:14:18 -0400 Subject: [PATCH 67/67] Generated install files from templates and removed older install files --- .../getting_started/installing/index.mdx | 21 +++++---- ...an_11.mdx => edb-dms-reader_debian_11.mdx} | 6 ++- ...an_12.mdx => edb-dms-reader_debian_12.mdx} | 6 ++- ...9.mdx => edb-dms-reader_other_linux_9.mdx} | 7 ++- ...s_rhel_8.mdx => edb-dms-reader_rhel_8.mdx} | 6 ++- ...s_rhel_9.mdx => edb-dms-reader_rhel_9.mdx} | 6 ++- ...sles_12.mdx => edb-dms-reader_sles_12.mdx} | 17 ++++++- ...sles_15.mdx => edb-dms-reader_sles_15.mdx} | 18 ++++++- ...tu_20.mdx => edb-dms-reader_ubuntu_20.mdx} | 6 ++- ...tu_22.mdx => edb-dms-reader_ubuntu_22.mdx} | 6 ++- .../installing/linux_x86_64/index.mdx | 47 +++++++++---------- 11 files changed, 94 insertions(+), 52 deletions(-) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_debian_11.mdx => edb-dms-reader_debian_11.mdx} (77%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_debian_12.mdx => edb-dms-reader_debian_12.mdx} (77%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_other_linux_9.mdx => edb-dms-reader_other_linux_9.mdx} (76%) rename 
advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_rhel_8.mdx => edb-dms-reader_rhel_8.mdx} (77%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_rhel_9.mdx => edb-dms-reader_rhel_9.mdx} (77%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_sles_12.mdx => edb-dms-reader_sles_12.mdx} (63%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_sles_15.mdx => edb-dms-reader_sles_15.mdx} (65%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_ubuntu_20.mdx => edb-dms-reader_ubuntu_20.mdx} (77%) rename advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/{dms_ubuntu_22.mdx => edb-dms-reader_ubuntu_22.mdx} (77%) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx index a1677666242..10fff3f06bb 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/index.mdx @@ -1,30 +1,31 @@ --- -navTitle: Installing EDB DMS Reader -title: Installing EDB DMS Reader on Linux +navTitle: Installing +title: Installing EDB Data Migration Service Reader on Linux + navigation: - linux_x86_64 --- -Select a link to access the applicable installation instructions. +Select a link to access the applicable installation instructions: ## Linux [x86-64 (amd64)](linux_x86_64) ### Red Hat Enterprise Linux (RHEL) and derivatives -- [RHEL 9](linux_x86_64/dms_rhel_9), [RHEL 8](linux_x86_64/dms_rhel_8) +- [RHEL 9](linux_x86_64/edb-dms-reader_rhel_9), [RHEL 8](linux_x86_64/edb-dms-reader_rhel_8) -- [Oracle Linux (OL) 9](linux_x86_64/dms_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/dms_rhel_8) +- [Oracle Linux (OL) 9](linux_x86_64/edb-dms-reader_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/edb-dms-reader_rhel_8) -- [Rocky Linux 9](linux_x86_64/dms_other_linux_9) +- [Rocky Linux 9](linux_x86_64/edb-dms-reader_other_linux_9) -- [AlmaLinux 9](linux_x86_64/dms_other_linux_9) +- [AlmaLinux 9](linux_x86_64/edb-dms-reader_other_linux_9) ### SUSE Linux Enterprise (SLES) -- [SLES 15](linux_x86_64/dms_sles_15), [SLES 12](linux_x86_64/dms_sles_12) +- [SLES 15](linux_x86_64/edb-dms-reader_sles_15), [SLES 12](linux_x86_64/edb-dms-reader_sles_12) ### Debian and derivatives -- [Ubuntu 22.04](linux_x86_64/dms_ubuntu_22), [Ubuntu 20.04](linux_x86_64/dms_ubuntu_20) +- [Ubuntu 22.04](linux_x86_64/edb-dms-reader_ubuntu_22), [Ubuntu 20.04](linux_x86_64/edb-dms-reader_ubuntu_20) -- [Debian 12](linux_x86_64/dms_debian_12), [Debian 11](linux_x86_64/dms_debian_11) +- [Debian 12](linux_x86_64/edb-dms-reader_debian_12), [Debian 11](linux_x86_64/edb-dms-reader_debian_11) diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_11.mdx similarity index 77% rename from 
advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_11.mdx index cd7be7a8559..d7bbe807362 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_11.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_11.mdx @@ -1,6 +1,6 @@ --- navTitle: Debian 11 -title: Installing the EDB DMS Reader on Debian 11 x86_64 +title: Installing EDB Data Migration Service Reader on Debian 11 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_12.mdx similarity index 77% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_12.mdx index abd9510a40d..720bc8a0d5f 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_debian_12.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_debian_12.mdx @@ -1,6 +1,6 @@ --- navTitle: Debian 12 -title: Installing the EDB DMS Reader on Debian 12 x86_64 +title: Installing EDB Data Migration Service Reader on Debian 12 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. 
+ ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_other_linux_9.mdx similarity index 76% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_other_linux_9.mdx index 5c50ed81e9d..33d181bcd0a 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_other_linux_9.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_other_linux_9.mdx @@ -1,6 +1,7 @@ --- navTitle: AlmaLinux 9 or Rocky Linux 9 -title: Installing the EDB DMS Reader on AlmaLinux 9 or Rocky Linux 9 x86_64 +title: Installing EDB Data Migration Service Reader on AlmaLinux 9 or Rocky + Linux 9 x86_64 --- ## Prerequisites @@ -21,10 +22,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx similarity index 77% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx index 4f79f23ff6c..9659bed9b50 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_8.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_8.mdx @@ -1,6 +1,6 @@ --- navTitle: RHEL 8 or OL 8 -title: Installing the EDB DMS Reader on RHEL 8 or OL 8 x86_64 +title: Installing EDB Data Migration Service Reader on RHEL 8 or OL 8 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. 
+ ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_9.mdx similarity index 77% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_9.mdx index 64eebbc208f..3dd455fa0bc 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_rhel_9.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_rhel_9.mdx @@ -1,6 +1,6 @@ --- navTitle: RHEL 9 or OL 9 -title: Installing the EDB DMS Reader on RHEL 9 or OL 9 x86_64 +title: Installing EDB Data Migration Service Reader on RHEL 9 or OL 9 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_12.mdx similarity index 63% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_12.mdx index 83a2ed46e98..10ab3f530b6 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_12.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_12.mdx @@ -1,6 +1,6 @@ --- navTitle: SLES 12 -title: Installing the EDB DMS Reader on SLES 12 x86_64 +title: Installing EDB Data Migration Service Reader on SLES 12 x86_64 --- ## Prerequisites @@ -21,10 +21,23 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. 
+ +- Activate the required SUSE module: + ```shell + sudo SUSEConnect -p PackageHub/12.5/x86_64 + sudo SUSEConnect -p sle-sdk/12.5/x86_64 + + ``` +- Refresh the metadata: + ```shell + sudo zypper refresh + ``` + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_15.mdx similarity index 65% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_15.mdx index fcd764b1b8c..3c449009a16 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_sles_15.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_sles_15.mdx @@ -1,6 +1,6 @@ --- navTitle: SLES 15 -title: Installing the EDB DMS Reader on SLES 15 x86_64 +title: Installing EDB Data Migration Service Reader on SLES 15 x86_64 --- ## Prerequisites @@ -21,10 +21,24 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. + +- Activate the required SUSE module: + + ```shell + sudo SUSEConnect -p PackageHub/15.4/x86_64 + + ``` + +- Refresh the metadata: + ```shell + sudo zypper refresh + ``` + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx similarity index 77% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx index 88f3e855262..a3a9d623090 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_20.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_20.mdx @@ -1,6 +1,6 @@ --- navTitle: Ubuntu 20.04 -title: Installing the EDB DMS Reader on Ubuntu 20.04 x86_64 +title: Installing EDB Data Migration Service Reader on Ubuntu 20.04 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. 
+ ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx similarity index 77% rename from advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx rename to advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx index d38dd5cdd9f..e1be13644a5 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/dms_ubuntu_22.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/edb-dms-reader_ubuntu_22.mdx @@ -1,6 +1,6 @@ --- navTitle: Ubuntu 22.04 -title: Installing the EDB DMS Reader on Ubuntu 22.04 x86_64 +title: Installing EDB Data Migration Service Reader on Ubuntu 22.04 x86_64 --- ## Prerequisites @@ -21,10 +21,12 @@ Before you begin the installation process: 1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads). - 1. Select the button that provides access to the EDB repo. + 1. Select the button that provides access to the EDB repository. 1. Select the platform and software that you want to download. + 1. Follow the instructions for setting up the EDB repository. + ## Install the package Install the EDB DMS Reader (packaged as `cdcreader`): diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx index 55bb9cc91e4..093ba5ec8b2 100644 --- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx +++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/getting_started/installing/linux_x86_64/index.mdx @@ -1,48 +1,47 @@ --- -title: "Installing EDB DMS Reader on Linux x86 (amd64)" +title: "Installing EDB Data Migration Service Reader on Linux x86 (amd64)" navTitle: "On Linux x86" navigation: - - dms_rhel_9 - - dms_rhel_8 - - dms_other_linux_9 - - dms_sles_15 - - dms_sles_12 - - dms_ubuntu_22 - - dms_ubuntu_20 - - dms_ubuntu_18 - - dms_debian_12 - - dms_debian_11 + - edb-dms-reader_rhel_9 + - edb-dms-reader_rhel_8 + - edb-dms-reader_other_linux_9 + - edb-dms-reader_sles_15 + - edb-dms-reader_sles_12 + - edb-dms-reader_ubuntu_22 + - edb-dms-reader_ubuntu_20 + - edb-dms-reader_debian_12 + - edb-dms-reader_debian_11 --- -For operating system-specific install instructions, including accessing the repo, see: +Operating system-specific install instructions are described in the corresponding documentation: ### Red Hat Enterprise Linux (RHEL) and derivatives -- [RHEL 9](dms_rhel_9) +- [RHEL 9](edb-dms-reader_rhel_9) -- [RHEL 8](dms_rhel_8) +- [RHEL 8](edb-dms-reader_rhel_8) -- [Oracle Linux (OL) 9](dms_rhel_9) +- [Oracle Linux (OL) 9](edb-dms-reader_rhel_9) -- [Oracle Linux (OL) 8](dms_rhel_8) +- [Oracle Linux (OL) 8](edb-dms-reader_rhel_8) -- [Rocky Linux 9](dms_other_linux_9) +- [Rocky Linux 9](edb-dms-reader_other_linux_9) -- [AlmaLinux 9](dms_other_linux_9) +- [AlmaLinux 9](edb-dms-reader_other_linux_9) ### SUSE Linux Enterprise (SLES) -- [SLES 
15](dms_sles_15) +- [SLES 15](edb-dms-reader_sles_15) -- [SLES 12](dms_sles_12) +- [SLES 12](edb-dms-reader_sles_12) ### Debian and derivatives -- [Ubuntu 22.04](dms_ubuntu_22) +- [Ubuntu 22.04](edb-dms-reader_ubuntu_22) -- [Ubuntu 20.04](dms_ubuntu_20) +- [Ubuntu 20.04](edb-dms-reader_ubuntu_20) -- [Debian 12](dms_debian_12) +- [Debian 12](edb-dms-reader_debian_12) -- [Debian 11](dms_debian_11) +- [Debian 11](edb-dms-reader_debian_11)
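
After installing from any of the platform-specific pages above, it can be worth confirming that the package landed as expected before configuring the reader. The snippet below is an illustrative sanity check only — it is not part of the generated documentation. It assumes the `cdcreader` package name used in the install commands above (the same queries apply to `cdcwriter` where the earlier revisions of these pages installed it); the reported version strings will vary by release and platform.

```shell
# Debian/Ubuntu: confirm the package is installed and which version was pinned
dpkg -s cdcreader | grep -E '^(Package|Version|Status)'

# RHEL, OL, AlmaLinux, Rocky Linux: query the installed RPM
rpm -q cdcreader

# SLES: show version and repository details for the installed package
zypper info cdcreader
```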