diff --git a/product_docs/docs/pgd/5/durability/camo.mdx b/product_docs/docs/pgd/5/durability/camo.mdx
index b0ae9247fe7..aecb6d8be93 100644
--- a/product_docs/docs/pgd/5/durability/camo.mdx
+++ b/product_docs/docs/pgd/5/durability/camo.mdx
@@ -157,6 +157,20 @@ provide some detail.
 This example considers a setup with two PGD nodes that are CAMO
 partners of each other.
+```sql
+-- create a CAMO commit scope for a group over
+-- a definite pair of nodes
+SELECT bdr.add_commit_scope(
+    commit_scope_name := 'example_scope',
+    origin_node_group := 'left_dc',
+    rule := 'ALL (left_dc) CAMO DEGRADE ON (timeout=500ms) TO ASYNC'
+);
+```
+For this CAMO commit scope to be legal, the number of nodes in the group
+must be exactly 2. Using ALL or ANY 2 on a group consisting of more than
+two nodes is an error because the group expression doesn't resolve
+to a definite pair of nodes.
+
 #### With asynchronous mode
 
 If asynchronous mode is allowed, there's no single point of failure. When one
diff --git a/product_docs/docs/pgd/5/durability/commit-scopes.mdx b/product_docs/docs/pgd/5/durability/commit-scopes.mdx
index a65414a95c5..53a5d83b670 100644
--- a/product_docs/docs/pgd/5/durability/commit-scopes.mdx
+++ b/product_docs/docs/pgd/5/durability/commit-scopes.mdx
@@ -79,13 +79,13 @@ SELECT bdr.create_node_group(
 SELECT bdr.add_commit_scope(
     commit_scope_name := 'example_scope',
     origin_node_group := 'left_dc',
-    rule := 'ALL (left_dc) AND ANY 1 (right_dc)',
+    rule := 'ALL (left_dc) GROUP COMMIT AND ANY 1 (right_dc) GROUP COMMIT',
     wait_for_ready := true
 );
 SELECT bdr.add_commit_scope(
     commit_scope_name := 'example_scope',
     origin_node_group := 'right_dc',
-    rule := 'ANY 1 (left_dc) AND ALL (right_dc)',
+    rule := 'ANY 1 (left_dc) GROUP COMMIT AND ALL (right_dc) GROUP COMMIT',
     wait_for_ready := true
 );
 ```
diff --git a/product_docs/docs/pgd/5/durability/lag-control.mdx b/product_docs/docs/pgd/5/durability/lag-control.mdx
index a5e95400a8d..b0eb0f0d2dc 100644
--- a/product_docs/docs/pgd/5/durability/lag-control.mdx
+++ b/product_docs/docs/pgd/5/durability/lag-control.mdx
@@ -62,18 +62,18 @@ Lag control is specified within a commit scope, which allows consistent and coo
 Using the sample node groups from the [Commit Scope](commit-scopes) chapter, this example shows lag control rules for two datacenters.
 
```sql --- create a Lag control commit scope with individual rules +-- create a Lag Control commit scope with individual rules -- for each sub-group SELECT bdr.add_commit_scope( commit_scope_name := 'example_scope', origin_node_group := 'left_dc', - rule := 'ALL (left_dc) AND ANY 1 (right_dc) LAG CONTROL (max_commit_delay=500ms, max_lag_time=30s)', + rule := 'ALL (left_dc) LAG CONTROL (max_commit_delay=500ms, max_lag_time=30s) AND ANY 1 (right_dc) LAG CONTROL (max_commit_delay=500ms, max_lag_time=30s)', wait_for_ready := true ); SELECT bdr.add_commit_scope( commit_scope_name := 'example_scope', origin_node_group := 'right_dc', - rule := 'ANY 1 (left_dc) AND ALL (right_dc) LAG CONTROL (max_commit_delay=0.250ms, max_lag_size=100MB)', + rule := 'ANY 1 (left_dc) LAG CONTROL (max_commit_delay=0.250ms, max_lag_size=100MB) AND ALL (right_dc) LAG CONTROL (max_commit_delay=0.250ms, max_lag_size=100MB)', wait_for_ready := true ); ``` diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx index 3bdbac5e538..259825280e5 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx @@ -29,7 +29,7 @@ The highlights of this release include: | BDR | 5.0.0 | Feature | Postgres 15 compatibility
EDB Postgres Distributed 5 is compatible with Postgres 12 to 15.
| | BDR | 5.0.0 | Feature | Improved Cluster Event Management
The `bdr.worker_errors` and `bdr.state_journal_details` views were replaced by the unified `bdr.event_summary` view, which also includes changes in the Raft role for the local node. In the future, additional events may be added to it.
| | BDR | 5.0.0 | Change | Improved transaction tracking performance
Transaction tracking now uses shared memory instead of the `bdr.internal_node_pre_commit` catalog, which considerably improves performance because it doesn't incur additional I/O.
| -| BDR | 5.0.0 | Feature | Support non-default replication sets with Decoding Worker
Allows Decoding Worker feature to be used in clusters using non-default replication sets like assymetric replication setup.
| +| BDR | 5.0.0 | Feature | Support non-default replication sets with Decoding Worker
Allows the Decoding Worker feature to be used in clusters using non-default replication sets, such as an asymmetric replication setup.
| | BDR | 5.0.0 | Feature | Add support for HASH partitioning in Autopartition
Extend autopartition/autoscale to support HASH partitioning. Many of the things that are required for RANGE partitioning aren't needed for HASH partitioning. For example, we expect to create all HASH partitions in one go (at least for the current work; we may change this later). We don't expect HASH partitions to be moved to a different tablespace or dropped, so data retention policies don't apply to HASH partitioning.
| | BDR | 5.0.0 | Feature | Add a new benchmarking utility `pgd_bench`
The utility supports benchmarking CAMO transactions and, in future releases, will be used for benchmarking PGD-specific workloads.
| | BDR | 5.0.0 | Change | Separate Task Management from Autopartition
In this release, the autopartition work queue mechanism has been moved to a separate module called the Task Manager (taskmgr). The task manager is responsible for creating new tasks and executing the ones created by the local node or the task manager leader node. The autopartition worker is therefore renamed to the taskmgr worker process in this release.
In older PGD releases, the Raft leader was responsible for creating new work items. But that creates a problem, because a witness node can become the Raft leader without having a full view of the cluster objects. This release introduces the concept of a Task Manager Leader node. The node is selected automatically by PGD, but for upgraded clusters, it's important to set the `node_kind` properly for all nodes in the cluster. The user is expected to do this manually after upgrading to the latest PGD version by calling the bdr.alter_node_kind() SQL function for each node.
| diff --git a/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx b/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx index d7269280073..65e1908bd36 100644 --- a/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx +++ b/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx @@ -33,7 +33,7 @@ Double-check the created image: $ docker image ls tpa/tpaexec REPOSITORY TAG IMAGE ID CREATED SIZE tpa/tpaexec latest e145cf8276fb 8 seconds ago 1.73GB -$ docker run --rm tpa/tpaexec tpaexec info +$ docker run --platform=linux/amd64 --rm tpa/tpaexec tpaexec info # TPAexec v20.11-59-g85a62fe3 (branch: master) tpaexec=/usr/local/bin/tpaexec TPA_DIR=/opt/EDB/TPA @@ -46,7 +46,7 @@ Create a TPA container and make your cluster configuration directories available inside the container: ```bash -$ docker run --rm -v ~/clusters:/clusters \ +$ docker run --platform=linux/amd64 --rm -v ~/clusters:/clusters \ -it tpa/tpaexec:latest ``` @@ -57,7 +57,7 @@ If you want to provision Docker containers using TPA, you must also allow the container to access the Docker control socket on the host: ``` -$ docker run --rm -v ~/clusters:/clusters \ +$ docker run --platform=linux/amd64 --rm -v ~/clusters:/clusters \ -v /var/run/docker.sock:/var/run/docker.sock \ -it tpa/tpaexec:latest ``` diff --git a/product_docs/docs/tpa/23/rel_notes/index.mdx b/product_docs/docs/tpa/23/rel_notes/index.mdx index bad4889cd55..2a5d8b278d6 100644 --- a/product_docs/docs/tpa/23/rel_notes/index.mdx +++ b/product_docs/docs/tpa/23/rel_notes/index.mdx @@ -1,10 +1,15 @@ --- -title: "Release notes" +title: Trusted Postgres Architect release notes +navTitle: "Release notes" +navigation: + - tpa_23.13_rel_notes + - tpa_23.12_rel_notes --- The Trusted Postgres Architect documentation describes the latest version of Trusted Postgres Architect 23. | Version | Release date | | ------- | ------------ | +| [23.13](tpa_23.13_rel_notes) | 22 Feb 2023 | | [23.12](tpa_23.12_rel_notes) | 21 Feb 2023 | diff --git a/product_docs/docs/tpa/23/rel_notes/tpa_23.13_rel_notes.mdx b/product_docs/docs/tpa/23/rel_notes/tpa_23.13_rel_notes.mdx new file mode 100644 index 00000000000..841a1271a69 --- /dev/null +++ b/product_docs/docs/tpa/23/rel_notes/tpa_23.13_rel_notes.mdx @@ -0,0 +1,13 @@ +--- +title: Trusted Postgres Architect 23.13 release notes +navTitle: "Version 23.13" +--- + + +New features, enhancements, bug fixes, and other changes in Trusted Postgres Architect 23.13 include the following: + +| Type | Description | +| ---- |------------ | +| Bug fix | Don't enable old EDB repo with PGD-Always-ON and `--epas`. | +| Bug fix | Fix error with PGD-Always-ON and `--postgres-version 15`. | + diff --git a/src/pages/index.js b/src/pages/index.js index b6f60930df5..13bb1c87e5f 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -47,7 +47,9 @@ const Page = () => (- EDB BigAnimal lets you run Oracle SQL queries in the cloud via - EDB Postgres Advanced Server. Watch the video, or load up psql - and follow along. + EDB Postgres Advanced Server 15.2.0, which is built on + open-source PostgreSQL 15.2, includes compatibility with + Oracle and enhanced security, administration, and performance + features.
-            Watch demo
+            Find out more →
@@ -87,7 +85,7 @@ const Page = () => (
- Want to see what it takes to get the EDB Postgres for
- Kubernetes Operator up and running? Try in the browser now, no
- downloads required.
+ • EDB Advanced Storage Pack
+ • EDB Postgres Tuner
• EDB LDAP Sync
-            Try it now
+            Find out more →
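
The PGD hunks above create the `example_scope` commit scope but don't show it being used. The following sketch is not part of the patch: it assumes a PGD 5 node where `example_scope` and the `left_dc` group exist as defined in the hunks above, and `camo_test` is a hypothetical table used only for illustration.

```sql
-- Usage sketch (assumes 'example_scope' and 'left_dc' from the hunks above;
-- 'camo_test' is a hypothetical table).
BEGIN;
-- Apply the commit scope to this transaction only.
SET LOCAL bdr.commit_scope = 'example_scope';
INSERT INTO camo_test (value) VALUES (42);
-- COMMIT now waits according to the scope's rule before returning.
COMMIT;

-- Alternatively, make the scope the default for a whole node group:
SELECT bdr.alter_node_group_option(
    node_group_name := 'left_dc',
    config_key := 'default_commit_scope',
    config_value := 'example_scope'
);
```

`SET LOCAL` confines the durability rule to the one transaction, while the group option applies it to all transactions originating in `left_dc`.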