diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx
index 57e7b2c823d..d0a1f329103 100644
--- a/product_docs/docs/pgd/5/scaling.mdx
+++ b/product_docs/docs/pgd/5/scaling.mdx
@@ -4,12 +4,14 @@ redirects:
- ../bdr/scaling
---
-AutoPartition allows tables to grow easily to large sizes by automatic
+AutoPartition allows tables to be split into several partitions, either
+by RANGE or HASH, and lets them grow easily to large sizes by automatic
partitioning management. This capability uses features of PGD
such as low-conflict locking of creating and dropping partitions.
You can create new partitions regularly and then drop them when the
-data retention period expires.
+data retention period expires. Creating and dropping partitions in this way
+applies only to RANGE partitions. All HASH partitions are created in advance
+and can't be dropped.
PGD management is primarily accomplished by functions that can be called by SQL.
All functions in PGD are exposed in the `bdr` schema. Unless you put it into
@@ -18,7 +20,7 @@ your search_path, you need to schema-qualify the name of each function.
## Auto creation of partitions
`bdr.autopartition()` creates or alters the definition of automatic
-range partitioning for a table. If no definition exists, it's created.
+partitioning for a table. If no definition exists, it's created.
Otherwise, later executions will alter the definition.
`bdr.autopartition()` doesn't lock the actual table. It changes the
@@ -28,9 +30,35 @@ definition of when and how new partition maintenance actions take place.
attached or detached/dropped without locking the rest of the table
(when the underlying Postgres version supports it).
-An ERROR is raised if the table isn't RANGE partitioned or a multi-column
+An ERROR is raised if the table isn't RANGE or HASH partitioned or a multi-column
partition key is used.
+By default, AutoPartition manages partitions globally. In other words, when a
+partition is created on one node, the same partition is also created on all
+other nodes in the cluster. So all partitions are consistent and guaranteed to
+be available. AutoPartition uses Raft to achieve this. You can change this
+behavior by passing `managed_locally` as `true`. In that case, all partitions
+are managed locally on each node. This is useful when the partitioned table
+isn't replicated, in which case it might not be necessary or even desirable to
+have all partitions on all nodes. For example, the built-in
+`bdr.conflict_history` table isn't replicated and is managed by AutoPartition
+locally. Each node creates partitions for this table locally and drops them
+once they're old enough.
+
+You can't later change tables marked as `managed_locally` to be managed
+globally and vice versa.
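+
+As a minimal sketch of locally managed partitioning (the `measurement` table
+and the one-month increment are illustrative assumptions, not part of the PGD
+documentation):
+
+```sql
+-- Hypothetical table: its partitions are created and dropped on each node
+-- independently instead of being coordinated through Raft.
+SELECT bdr.autopartition('measurement',
+                         partition_increment := '1 month',
+                         managed_locally := true);
+```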
+
+Activities are performed only when the entry is marked `enabled = on`.
+
+You aren't expected to manually create or drop partitions for tables
+managed by AutoPartition. Doing so can make the AutoPartition metadata
+inconsistent and might cause it to fail.
+
+`bdr.autopartition()` takes different actions based on whether the
+parent table is partitioned by RANGE or HASH.
+
+### RANGE partitioned tables
+
A new partition is added for every `partition_increment` range of values, with
lower and upper bound `partition_increment` apart. For tables with a partition
key of type `timestamp` or `date`, the `partition_increment` must be a valid
@@ -86,30 +114,18 @@ bound of the partition. The partition is either migrated to the secondary
tablespace or dropped if either of the given period expires, relative to the
upper bound.
-By default, AutoPartition manages partitions globally. In other words, when a
-partition is created on one node, the same partition is also created on all
-other nodes in the cluster. So all partitions are consistent and guaranteed to
-be available. For this, AutoPartition makes use of Raft. You can change this behavior
-by passing `managed_locally` as `true`. In that case, all partitions
-are managed locally on each node. This is useful for the case when the
-partitioned table isn't a replicated table and hence it might not be necessary
-or even desirable to have all partitions on all nodes. For example, the
-built-in `bdr.conflict_history` table isn't a replicated table and is
-managed by AutoPartition locally. Each node creates partitions for this table
-locally and drops them once they are old enough.
-
-You can't later change tables marked as `managed_locally` to be managed
-globally and vice versa.
+### HASH partitioned tables
-Activities are performed only when the entry is marked `enabled = on`.
-
-You aren't expected to manually create or drop partitions for tables
-managed by AutoPartition. Doing so can make the AutoPartition metadata
-inconsistent and might cause it to fail.
+When the underlying table is HASH partitioned, the number of partitions
+given by the `hash_partitions_total` parameter is created in advance.
+Dynamic partition creation doesn't apply to HASH partitioned tables, and
+neither does the data retention strategy: all desired partitions are
+created up front and are never dropped.
### Configure AutoPartition
-The `bdr.autopartition` function configures automatic partitioning of a table.
+The `bdr.autopartition` function configures automatic RANGE partitioning of a table.
#### Synopsis
@@ -166,28 +182,23 @@ bdr.autopartition('Orders', '1000000000',
);
```
-### Create one AutoPartition
-
-Use `bdr.autopartition_create_partition()` to create a standalone AutoPartition
-on the parent table.
-
#### Synopsis
```sql
-bdr.autopartition_create_partition(relname regclass,
- partname name,
- lowerb text,
- upperb text,
- nodes oid[]);
+bdr.autopartition(relation regclass,
+ hash_partitions_total integer DEFAULT 24,
+ managed_locally boolean DEFAULT false,
+ enabled boolean DEFAULT on);
```
#### Parameters
-- `relname` — Name or Oid of the parent table to attach to.
-- `partname` — Name of the new AutoPartition.
-- `lowerb` — The lower bound of the partition.
-- `upperb` — The upper bound of the partition.
-- `nodes` — List of nodes that the new partition resides on.
+- `relation` — Name or Oid of a table.
+- `hash_partitions_total` — Total number of hash partitions to create.
+  Defaults to 24.
+- `managed_locally` — If true, then the partitions are managed locally.
+- `enabled` — Allows activity to be disabled or paused and later resumed or reenabled.
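+
+As a minimal sketch following the synopsis above (the `samples` table and the
+partition count of 8 are illustrative assumptions):
+
+```sql
+-- Hypothetical HASH partitioned table.
+CREATE TABLE samples (
+    device_id integer NOT NULL,
+    reading   numeric
+) PARTITION BY HASH (device_id);
+
+-- Create all eight hash partitions up front under AutoPartition management.
+SELECT bdr.autopartition('samples', hash_partitions_total := 8);
+```
+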
### Stopping automatic creation of partitions
@@ -203,29 +214,13 @@ bdr.drop_autopartition(relation regclass);
- `relation` — Name or Oid of a table.
-### Drop one AutoPartition
-
-Use `bdr.autopartition_drop_partition` once a PGD AutoPartition table has been
-made, as this function can specify single partitions to drop. If the partitioned
-table was successfully dropped, the function returns `true`.
-
-#### Synopsis
-
-```sql
-bdr.autopartition_drop_partition(relname regclass)
-```
-
-#### Parameters
-
-- `relname` — The name of the partitioned table to drop.
-
-### Notes
-
-This places a DDL lock on the parent table, before using DROP TABLE on the
-chosen partition table.
-
### Wait for partition creation
+Partition creation is an asynchronous process, irrespective of whether RANGE
+or HASH partitioning is used. AutoPartition provides functions to wait for
+a partition to be created, either locally or on all nodes.
+
Use `bdr.autopartition_wait_for_partitions()` to wait for the creation of
partitions on the local node. The function takes the partitioned table name and
a partition key column value and waits until the partition that holds that
@@ -248,7 +243,8 @@ bdr.autopartition_wait_for_partitions(relation regclass, upperbound text);
#### Parameters
- `relation` — Name or Oid of a table.
-- `upperbound` — Partition key column value.
+- `upperbound` — Partition key column value. Ignored for HASH
+  partitioned tables.
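+
+A minimal sketch, reusing the hypothetical `measurement` table from the earlier
+example (the upper bound value is illustrative):
+
+```sql
+-- Block until the partition that would hold this partition key value
+-- exists on the local node.
+SELECT bdr.autopartition_wait_for_partitions('measurement', '2024-01-01');
+```
+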
#### Synopsis
@@ -259,7 +255,8 @@ bdr.autopartition_wait_for_partitions_on_all_nodes(relation regclass, upperbound
#### Parameters
- `relation` — Name or Oid of a table.
-- `upperbound` — Partition key column value.
+- `upperbound` — Partition key column value. Ignored for HASH
+ partitioned tables.
### Find partition
@@ -304,3 +301,57 @@ bdr.autopartition_disable(relname regclass);
- `relname` — Name of the relation to disable AutoPartitioning.
+### Create one AutoPartition
+
+AutoPartition uses an internal function
+`bdr.autopartition_create_partition()` to create a standalone
+AutoPartition on the parent table.
+
+#### Synopsis
+
+```sql
+bdr.autopartition_create_partition(relname regclass,
+ partname name,
+ lowerb text,
+ upperb text,
+ nodes oid[]);
+```
+
+#### Parameters
+
+- `relname` — Name or Oid of the parent table to attach to.
+- `partname` — Name of the new AutoPartition.
+- `lowerb` — The lower bound of the partition.
+- `upperb` — The upper bound of the partition.
+- `nodes` — List of nodes that the new partition resides on.
+ This parameter is internal to PGD and reserved for future use.
+
+### Notes
+
+This is an internal function used by AutoPartition for partition
+management. We recommend that you don't use the function directly.
+
+### Drop one AutoPartition
+
+AutoPartition uses an internal function
+`bdr.autopartition_drop_partition` to drop a partition that's no longer
+required under the data retention policy. If the partition
+was successfully dropped, the function returns `true`.
+
+#### Synopsis
+
+```sql
+bdr.autopartition_drop_partition(relname regclass)
+```
+
+#### Parameters
+
+- `relname` — The name of the partitioned table to drop.
+
+### Notes
+
+This function places a DDL lock on the parent table before using DROP TABLE on
+the chosen partition table. It's an internal function used by AutoPartition for
+partition management. We recommend that you don't use the function directly.
diff --git a/product_docs/docs/tpa/23/architecture-M1.mdx b/product_docs/docs/tpa/23/architecture-M1.mdx
index 849f6798bcd..6de81d72f15 100644
--- a/product_docs/docs/tpa/23/architecture-M1.mdx
+++ b/product_docs/docs/tpa/23/architecture-M1.mdx
@@ -68,15 +68,11 @@ More detail on the options is provided in the following section.
#### Additional Options
-| Parameter | Description | Behaviour if omitted |
-| ------------------------- | -------------------------------------------------------- | --------------------- |
-| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
-| `--num-cascaded-replicas` | The number of cascaded replicas from the first replica. | Defaults to 1. |
-| `--enable-efm` | Configure EDB Enterprise Failover Manager as the cluster | TPA will select EFM |
-| | failover manager. | as the failover |
-| | | manager for EPAS, and |
-| | | repmgr for all other |
-| | | flavours. |
+| Parameter | Description | Behaviour if omitted |
+| ------------------------- | -------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
+| `--num-cascaded-replicas` | The number of cascaded replicas from the first replica. | Defaults to 1. |
+| `--enable-efm`            | Configure Failover Manager as the cluster failover manager.                | TPA will select EFM as the failover manager for EPAS, and repmgr for all other flavours.              |
diff --git a/scripts/source/tpaexec.js b/scripts/source/tpaexec.js
index 61e6562940e..4d2e89e9530 100644
--- a/scripts/source/tpaexec.js
+++ b/scripts/source/tpaexec.js
@@ -206,6 +206,7 @@ function transformer() {
// ignore placeholder
if (node.value.match(/^
/)) return;
if (node.value.trim())
console.warn(
`${file.path}:${node.position.start.line}:${node.position.start.column} Stripping HTML content:\n\t ` +
diff --git a/src/pages/index.js b/src/pages/index.js
index dfd0ded98bb..33125750cd3 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -93,9 +93,9 @@ const Page = () => (