PolarDB for PostgreSQL supports elastic cross-node parallel query (ePQ), which uses the compute capacity of multiple nodes in the cluster to execute a single query in parallel across nodes. ePQ can parallelize many physical operators across nodes, including sequential scans and index scans. For the sequential scan operator, ePQ offers two scan modes: adaptive scan mode and non-adaptive scan mode.
Non-adaptive scan mode is the default mode of the ePQ sequential scan (Sequential Scan) operator. Every PX Worker that takes part in a parallel query is assigned a unique Worker ID during execution. In non-adaptive scan mode, the Disk Unit IDs of the table on physical storage are partitioned by Worker ID, so each PX Worker scans an even share of the table's storage units on shared storage, and the results of all PX Workers are merged into the complete result set.
In non-adaptive scan mode, the scan units are divided evenly among the PX Workers. If some read-only nodes are short on compute resources, the scan can become skewed: a parallel query issued by the user may take a long time to finish because it is held back by the nodes that cannot complete their share of the scan.
The adaptive scan mode provided by ePQ solves this problem. Instead of binding each PX Worker to a fixed set of Disk Unit IDs, it uses a request-response model: through a dedicated RPC mechanism between the QC process and the PX Worker processes, the QC process tells each PX Worker which scan task to execute next, eliminating computation skew.
In non-adaptive mode, when the QC process launches a parallel query, it assigns a fixed Worker ID to each PX Worker process. Each PX Worker applies a modulo over the storage units based on its Worker ID and scans only the specific Disk Units that belong to it.
In adaptive mode, when the QC process launches a parallel query, it also starts an adaptive scan thread that receives and handles request messages from the PX Worker processes. The adaptive scan thread tracks the progress of the current scan task and, based on each PX Worker's progress, dispatches the Disk Unit IDs that remain to be scanned. For the last Disk Unit, the adaptive scan thread wakes up idle PX Workers to speed up scanning the final block of work.
Because the adaptive scan thread and the PX Worker processes exchange little data and do so infrequently, they reuse the existing libpq connections between the QC process and the PX Worker processes for these messages. The adaptive scan thread uses poll to synchronously poll the PX Workers' requests and responses when needed.
When executing the sequential scan operator, a PX Worker process first sends an inquiry request to the QC process, passing its scan state (the fields listed in the request message table below) to the adaptive scan thread on the QC side.
After receiving an inquiry request, the adaptive scan thread either creates a new scan task or updates the progress of the existing one.
To reduce the number of network round trips caused by these requests, ePQ uses a variable task granularity: when a lot of scan work remains, a PX Worker claims a larger batch of physical blocks per request; when little work remains, the batch size shrinks accordingly. This balances network overhead against load balancing.
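The following is a minimal sketch of this variable-granularity idea, not PolarDB's actual implementation; the function name, the thresholds, and the divisor are all invented for illustration:

```c
#include <stdint.h>

/*
 * Hypothetical helper: how many physical blocks should a PX Worker claim
 * in one request?  When plenty of work remains, hand out large chunks to
 * save request/response round trips; near the end, hand out small chunks
 * so the tail of the scan can be balanced across idle workers.
 */
static uint32_t
adaptive_chunk_size(uint64_t remaining_blocks, uint32_t n_workers)
{
    if (n_workers == 0 || remaining_blocks == 0)
        return 0;

    /* Aim for each worker to make several more requests before the end. */
    uint64_t chunk = remaining_blocks / ((uint64_t) n_workers * 4);

    if (chunk > 4096)                 /* cap the per-request batch */
        chunk = 4096;
    if (chunk < 16)                   /* floor: avoid one request per block */
        chunk = 16;
    if (chunk > remaining_blocks)
        chunk = remaining_blocks;
    return (uint32_t) chunk;
}
```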
PX Worker request message: sent over libpq as an 'S' protocol message, with the fields encoded as a key-value string (a sketch of building such a message follows the reply table below).
Field | Description |
---|---|
task_id | Scan task ID |
direction | Scan direction |
page_count | Total number of physical blocks to scan |
scan_start | Starting physical block number of the scan |
current_page | Physical block number currently being scanned |
scan_round | Number of scan rounds performed |
Reply message from the adaptive scan thread:
Field | Description |
---|---|
success | Whether the request succeeded |
page_start | Starting physical block number in the reply |
page_end | Ending physical block number in the reply |
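As a rough sketch of how such a request might be serialized (the key names follow the table above; the real ePQ wire format may differ):

```c
#include <stdio.h>

/* Hypothetical packing of a PX Worker request into a key-value string. */
static int
build_adps_request(char *buf, size_t buflen,
                   int task_id, int direction,
                   unsigned page_count, unsigned scan_start,
                   unsigned current_page, unsigned scan_round)
{
    return snprintf(buf, buflen,
                    "task_id=%d,direction=%d,page_count=%u,"
                    "scan_start=%u,current_page=%u,scan_round=%u",
                    task_id, direction, page_count,
                    scan_start, current_page, scan_round);
}
```

On the QC side, the adaptive scan thread would parse the string back into fields, update the task's progress, and answer with a `success`/`page_start`/`page_end` triple as in the reply table.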
Create a test table:
postgres=# CREATE TABLE t(id INT);
+CREATE TABLE
+postgres=# INSERT INTO t VALUES(generate_series(1,100));
+INSERT 0 100
+
Enable ePQ parallel query and set the per-node degree of parallelism to 3. `EXPLAIN` shows that the plan comes from the PX optimizer. Since two read-only nodes take part in the test, the plan shows an overall degree of parallelism of 6.
postgres=# SET polar_enable_px = 1;
+SET
+postgres=# SET polar_px_dop_per_node = 3;
+SET
+postgres=# SHOW polar_px_enable_adps;
+ polar_px_enable_adps
+----------------------
+ off
+(1 row)
+
+postgres=# EXPLAIN SELECT * FROM t;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on t (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
+postgres=# SELECT COUNT(*) FROM t;
+ count
+-------
+ 100
+(1 row)
+
After turning on the adaptive scan switch, `EXPLAIN ANALYZE` shows the physical blocks scanned by each PX Worker process.
postgres=# SET polar_enable_px = 1;
+SET
+postgres=# SET polar_px_dop_per_node = 3;
+SET
+postgres=# SET polar_px_enable_adps = 1;
+SET
+postgres=# SHOW polar_px_enable_adps;
+ polar_px_enable_adps
+----------------------
+ on
+(1 row)
+
+postgres=# SET polar_px_enable_adps_explain_analyze = 1;
+SET
+postgres=# SHOW polar_px_enable_adps_explain_analyze;
+ polar_px_enable_adps_explain_analyze
+--------------------------------------
+ on
+(1 row)
+
+postgres=# EXPLAIN ANALYZE SELECT * FROM t;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4) (actual time=0.968..0.982 rows=100 loops=1)
+ -> Partial Seq Scan on t (cost=0.00..431.00 rows=1 width=4) (actual time=0.380..0.435 rows=100 loops=1)
+ Dynamic Pages Per Worker: [1]
+ Planning Time: 5.571 ms
+ Optimizer: PolarDB PX Optimizer
+ (slice0) Executor memory: 23K bytes.
+ (slice1) Executor memory: 14K bytes avg x 6 workers, 14K bytes max (seg0).
+ Execution Time: 9.047 ms
+(8 rows)
+
+postgres=# SELECT COUNT(*) FROM t;
+ count
+-------
+ 100
+(1 row)
+
In its optimizer, PostgreSQL produces, for a given query tree, the physical plan tree with the best estimated execution efficiency. Efficiency is measured through cost estimation: for example, estimating the number of tuples a query returns and their width yields the I/O cost, and the physical operations to be performed yield an estimate of the CPU cost. The optimizer obtains the key statistics used during cost estimation from the system catalog `pg_statistic`, and the statistics in `pg_statistic` are in turn computed by automatic or manual `ANALYZE` operations (including those run as part of `VACUUM`). `ANALYZE` scans the table data, analyzes it column by column, and writes statistics such as each column's data distribution, most common values, and their frequencies into the system catalog.
This article looks at how the `ANALYZE` operation is implemented, from the perspective of the source code. The source used is the latest stable release at the time of writing, PostgreSQL 14.
First, we should understand what the analyze operation produces, so let's look at which columns `pg_statistic` has and what each of them means. Every row of this system catalog holds the statistics for one column of some other table.
postgres=# \d+ pg_statistic
+ Table "pg_catalog.pg_statistic"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+-------------+----------+-----------+----------+---------+----------+--------------+-------------
+ starelid | oid | | not null | | plain | |
+ staattnum | smallint | | not null | | plain | |
+ stainherit | boolean | | not null | | plain | |
+ stanullfrac | real | | not null | | plain | |
+ stawidth | integer | | not null | | plain | |
+ stadistinct | real | | not null | | plain | |
+ stakind1 | smallint | | not null | | plain | |
+ stakind2 | smallint | | not null | | plain | |
+ stakind3 | smallint | | not null | | plain | |
+ stakind4 | smallint | | not null | | plain | |
+ stakind5 | smallint | | not null | | plain | |
+ staop1 | oid | | not null | | plain | |
+ staop2 | oid | | not null | | plain | |
+ staop3 | oid | | not null | | plain | |
+ staop4 | oid | | not null | | plain | |
+ staop5 | oid | | not null | | plain | |
+ stanumbers1 | real[] | | | | extended | |
+ stanumbers2 | real[] | | | | extended | |
+ stanumbers3 | real[] | | | | extended | |
+ stanumbers4 | real[] | | | | extended | |
+ stanumbers5 | real[] | | | | extended | |
+ stavalues1 | anyarray | | | | extended | |
+ stavalues2 | anyarray | | | | extended | |
+ stavalues3 | anyarray | | | | extended | |
+ stavalues4 | anyarray | | | | extended | |
+ stavalues5 | anyarray | | | | extended | |
+Indexes:
+ "pg_statistic_relid_att_inh_index" UNIQUE, btree (starelid, staattnum, stainherit)
+
/* ----------------
+ * pg_statistic definition. cpp turns this into
+ * typedef struct FormData_pg_statistic
+ * ----------------
+ */
+CATALOG(pg_statistic,2619,StatisticRelationId)
+{
+ /* These fields form the unique key for the entry: */
+ Oid starelid BKI_LOOKUP(pg_class); /* relation containing
+ * attribute */
+ int16 staattnum; /* attribute (column) stats are for */
+ bool stainherit; /* true if inheritance children are included */
+
+ /* the fraction of the column's entries that are NULL: */
+ float4 stanullfrac;
+
+ /*
+ * stawidth is the average width in bytes of non-null entries. For
+ * fixed-width datatypes this is of course the same as the typlen, but for
+ * var-width types it is more useful. Note that this is the average width
+ * of the data as actually stored, post-TOASTing (eg, for a
+ * moved-out-of-line value, only the size of the pointer object is
+ * counted). This is the appropriate definition for the primary use of
+ * the statistic, which is to estimate sizes of in-memory hash tables of
+ * tuples.
+ */
+ int32 stawidth;
+
+ /* ----------------
+ * stadistinct indicates the (approximate) number of distinct non-null
+ * data values in the column. The interpretation is:
+ * 0 unknown or not computed
+ * > 0 actual number of distinct values
+ * < 0 negative of multiplier for number of rows
+ * The special negative case allows us to cope with columns that are
+ * unique (stadistinct = -1) or nearly so (for example, a column in which
+ * non-null values appear about twice on the average could be represented
+ * by stadistinct = -0.5 if there are no nulls, or -0.4 if 20% of the
+ * column is nulls). Because the number-of-rows statistic in pg_class may
+ * be updated more frequently than pg_statistic is, it's important to be
+ * able to describe such situations as a multiple of the number of rows,
+ * rather than a fixed number of distinct values. But in other cases a
+ * fixed number is correct (eg, a boolean column).
+ * ----------------
+ */
+ float4 stadistinct;
+
+ /* ----------------
+ * To allow keeping statistics on different kinds of datatypes,
+ * we do not hard-wire any particular meaning for the remaining
+ * statistical fields. Instead, we provide several "slots" in which
+ * statistical data can be placed. Each slot includes:
+ * kind integer code identifying kind of data (see below)
+ * op OID of associated operator, if needed
+ * coll OID of relevant collation, or 0 if none
+ * numbers float4 array (for statistical values)
+ * values anyarray (for representations of data values)
+ * The ID, operator, and collation fields are never NULL; they are zeroes
+ * in an unused slot. The numbers and values fields are NULL in an
+ * unused slot, and might also be NULL in a used slot if the slot kind
+ * has no need for one or the other.
+ * ----------------
+ */
+
+ int16 stakind1;
+ int16 stakind2;
+ int16 stakind3;
+ int16 stakind4;
+ int16 stakind5;
+
+ Oid staop1 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop2 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop3 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop4 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop5 BKI_LOOKUP_OPT(pg_operator);
+
+ Oid stacoll1 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll2 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll3 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll4 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll5 BKI_LOOKUP_OPT(pg_collation);
+
+#ifdef CATALOG_VARLEN /* variable-length fields start here */
+ float4 stanumbers1[1];
+ float4 stanumbers2[1];
+ float4 stanumbers3[1];
+ float4 stanumbers4[1];
+ float4 stanumbers5[1];
+
+ /*
+ * Values in these arrays are values of the column's data type, or of some
+ * related type such as an array element type. We presently have to cheat
+ * quite a bit to allow polymorphic arrays of this kind, but perhaps
+ * someday it'll be a less bogus facility.
+ */
+ anyarray stavalues1;
+ anyarray stavalues2;
+ anyarray stavalues3;
+ anyarray stavalues4;
+ anyarray stavalues5;
+#endif
+} FormData_pg_statistic;
+
Whether viewed from the database command line or from the kernel C code, the statistics content is the same. All attributes start with `sta`. Specifically:

- `starelid` is the table or index the column belongs to
- `staattnum` is the number of the column in that table or index that this row of statistics describes
- `stainherit` indicates whether the statistics include inheritance children
- `stanullfrac` is the fraction of the column's entries that are NULL
- `stawidth` is the average width of the column's non-null entries
- `stadistinct` is the number of distinct non-null values in the column:
  - `0` means unknown or not computed
  - `> 0` is the actual number of distinct values
  - `< 0` is the negative of a multiplier for the number of rows

Because the statistics that can be computed differ slightly across data types, for the remaining fields PostgreSQL reserves several statistics slots. The current kernel reserves five of them:
#define STATISTIC_NUM_SLOTS 5
+
Each specific kind of statistic can occupy one slot, and what goes into the slot is entirely up to the definition of that kind of statistic. Each slot consists of the following fields (where `N` is the slot number, ranging from `1` to `5`):

- `stakindN`: an integer code identifying the kind of statistic
- `staopN`: the OID of the operator used to compute or interpret the statistic
- `stacollN`: the OID of the relevant collation
- `stanumbersN`: an array of floats
- `stavaluesN`: an array of arbitrary values

The kernel reserves the kind codes `1` through `99` for PostgreSQL core statistics; the allocation of the remaining code ranges is described in the kernel comment below:
/*
+ * The present allocation of "kind" codes is:
+ *
+ * 1-99: reserved for assignment by the core PostgreSQL project
+ * (values in this range will be documented in this file)
+ * 100-199: reserved for assignment by the PostGIS project
+ * (values to be documented in PostGIS documentation)
+ * 200-299: reserved for assignment by the ESRI ST_Geometry project
+ * (values to be documented in ESRI ST_Geometry documentation)
+ * 300-9999: reserved for future public assignments
+ *
+ * For private use you may choose a "kind" code at random in the range
+ * 10000-30000. However, for code that is to be widely disseminated it is
+ * better to obtain a publicly defined "kind" code by request from the
+ * PostgreSQL Global Development Group.
+ */
+
The kernel code currently defines 7 core statistic kinds, numbered `1` through `7`. Let's look at how each of these 7 kinds uses the slots described above.
/*
+ * In a "most common values" slot, staop is the OID of the "=" operator
+ * used to decide whether values are the same or not, and stacoll is the
+ * collation used (same as column's collation). stavalues contains
+ * the K most common non-null values appearing in the column, and stanumbers
+ * contains their frequencies (fractions of total row count). The values
+ * shall be ordered in decreasing frequency. Note that since the arrays are
+ * variable-size, K may be chosen by the statistics collector. Values should
+ * not appear in MCV unless they have been observed to occur more than once;
+ * a unique column will have no MCV slot.
+ */
+#define STATISTIC_KIND_MCV 1
+
For a column's most common values (MCV), `staop` stores the `=` operator used to decide whether a value matches one of the most common values. `stavalues` holds the K most common non-null values of the column, and `stanumbers` holds the frequency of each of those K values.
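To see why this is useful, here is a rough sketch (not the actual planner code, and with datum comparison simplified to plain integers) of how an MCV list can be used to estimate the selectivity of an equality predicate `col = const`: if the constant appears in the MCV list, its recorded frequency is the selectivity; otherwise the remaining probability mass is spread over the non-MCV distinct values.

```c
/*
 * Hypothetical MCV-based selectivity estimate for "col = const".
 * mcv_freqs[] are fractions of all rows, as stored in stanumbers;
 * comparing ints stands in for calling the "=" operator from staop.
 */
static double
mcv_eq_selectivity(const double *mcv_freqs, const int *mcv_vals, int n_mcv,
                   int constval, double nullfrac, double ndistinct)
{
    double summcv = 0.0;

    for (int i = 0; i < n_mcv; i++)
    {
        summcv += mcv_freqs[i];
        if (mcv_vals[i] == constval)    /* stand-in for the "=" operator */
            return mcv_freqs[i];
    }

    /* Not an MCV: divide the leftover mass among the remaining values. */
    double others = ndistinct - n_mcv;
    if (others < 1.0)
        others = 1.0;
    return (1.0 - summcv - nullfrac) / others;
}
```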
/*
+ * A "histogram" slot describes the distribution of scalar data. staop is
+ * the OID of the "<" operator that describes the sort ordering, and stacoll
+ * is the relevant collation. (In theory more than one histogram could appear,
+ * if a datatype has more than one useful sort operator or we care about more
+ * than one collation. Currently the collation will always be that of the
+ * underlying column.) stavalues contains M (>=2) non-null values that
+ * divide the non-null column data values into M-1 bins of approximately equal
+ * population. The first stavalues item is the MIN and the last is the MAX.
+ * stanumbers is not used and should be NULL. IMPORTANT POINT: if an MCV
+ * slot is also provided, then the histogram describes the data distribution
+ * *after removing the values listed in MCV* (thus, it's a "compressed
+ * histogram" in the technical parlance). This allows a more accurate
+ * representation of the distribution of a column with some very-common
+ * values. In a column with only a few distinct values, it's possible that
+ * the MCV list describes the entire data population; in this case the
+ * histogram reduces to empty and should be omitted.
+ */
+#define STATISTIC_KIND_HISTOGRAM 2
+
Describes the distribution histogram of a (scalar) column. `staop` stores the `<` operator that defines the sort order. `stavalues` contains M non-null values that divide the column's non-null values into M - 1 bins of approximately equal population. If the column also has an MCV slot, the histogram excludes the MCV values, which gives a more accurate picture of the remaining distribution.
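As a rough illustration (again, not the planner's actual code), an equal-depth histogram lets us estimate the selectivity of `col < const` by locating the bin that contains the constant and interpolating linearly inside it:

```c
/*
 * Hypothetical selectivity estimate for "col < const" from an
 * equal-depth histogram: bounds[] holds the M boundary values from
 * stavalues, so there are M-1 bins, each holding ~1/(M-1) of the rows.
 */
static double
histogram_lt_selectivity(const double *bounds, int m, double constval)
{
    if (constval <= bounds[0])
        return 0.0;
    if (constval >= bounds[m - 1])
        return 1.0;

    for (int i = 1; i < m; i++)
    {
        if (constval < bounds[i])
        {
            double binfrac = (constval - bounds[i - 1]) /
                             (bounds[i] - bounds[i - 1]);
            /* i-1 full bins below, plus a fraction of the current bin */
            return ((i - 1) + binfrac) / (double) (m - 1);
        }
    }
    return 1.0;                 /* not reached */
}
```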
/*
+ * A "correlation" slot describes the correlation between the physical order
+ * of table tuples and the ordering of data values of this column, as seen
+ * by the "<" operator identified by staop with the collation identified by
+ * stacoll. (As with the histogram, more than one entry could theoretically
+ * appear.) stavalues is not used and should be NULL. stanumbers contains
+ * a single entry, the correlation coefficient between the sequence of data
+ * values and the sequence of their actual tuple positions. The coefficient
+ * ranges from +1 to -1.
+ */
+#define STATISTIC_KIND_CORRELATION 3
+
`stanumbers` holds a single entry: the correlation coefficient between the sequence of data values and the sequence of their actual tuple positions.
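For intuition, such a coefficient can be computed from the sample as an ordinary Pearson correlation between the logical rank of each sampled value and its physical position; the sketch below is generic, not the kernel's implementation:

```c
#include <math.h>

/*
 * Pearson correlation between two sequences x[] and y[] of length n,
 * e.g. x = logical rank of each sampled value, y = its physical position.
 * Returns a value in [-1, +1]; +1 means the table is perfectly ordered
 * on this column, -1 means it is perfectly reverse-ordered.
 */
static double
pearson_correlation(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;

    for (int i = 0; i < n; i++)
    {
        sx += x[i];
        sy += y[i];
        sxx += x[i] * x[i];
        syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / n;
    double varx = sxx - sx * sx / n;
    double vary = syy - sy * sy / n;
    if (varx <= 0 || vary <= 0)
        return 0.0;
    return cov / sqrt(varx * vary);
}
```

The planner uses this value to judge how sequential or random the I/O of an index scan on this column is likely to be.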
/*
+ * A "most common elements" slot is similar to a "most common values" slot,
+ * except that it stores the most common non-null *elements* of the column
+ * values. This is useful when the column datatype is an array or some other
+ * type with identifiable elements (for instance, tsvector). staop contains
+ * the equality operator appropriate to the element type, and stacoll
+ * contains the collation to use with it. stavalues contains
+ * the most common element values, and stanumbers their frequencies. Unlike
+ * MCV slots, frequencies are measured as the fraction of non-null rows the
+ * element value appears in, not the frequency of all rows. Also unlike
+ * MCV slots, the values are sorted into the element type's default order
+ * (to support binary search for a particular value). Since this puts the
+ * minimum and maximum frequencies at unpredictable spots in stanumbers,
+ * there are two extra members of stanumbers, holding copies of the minimum
+ * and maximum frequencies. Optionally, there can be a third extra member,
+ * which holds the frequency of null elements (expressed in the same terms:
+ * the fraction of non-null rows that contain at least one null element). If
+ * this member is omitted, the column is presumed to contain no null elements.
+ *
+ * Note: in current usage for tsvector columns, the stavalues elements are of
+ * type text, even though their representation within tsvector is not
+ * exactly text.
+ */
+#define STATISTIC_KIND_MCELEM 4
+
Similar to MCV, but it stores the column's most common elements, mainly for arrays and other types with identifiable elements. As with MCV, `staop` holds the equality operator used to determine how frequently an element appears. Unlike MCV, the frequencies here use the number of non-null rows as the denominator rather than all rows. Also, the common elements are sorted in the element type's default order so that a particular value can be located by binary search.
/*
+ * A "distinct elements count histogram" slot describes the distribution of
+ * the number of distinct element values present in each row of an array-type
+ * column. Only non-null rows are considered, and only non-null elements.
+ * staop contains the equality operator appropriate to the element type,
+ * and stacoll contains the collation to use with it.
+ * stavalues is not used and should be NULL. The last member of stanumbers is
+ * the average count of distinct element values over all non-null rows. The
+ * preceding M (>=2) members form a histogram that divides the population of
+ * distinct-elements counts into M-1 bins of approximately equal population.
+ * The first of these is the minimum observed count, and the last the maximum.
+ */
+#define STATISTIC_KIND_DECHIST 5
+
Describes the distribution of the number of distinct element values per row of an array-type column. The first M entries of the `stanumbers` array are boundary values of a histogram that divides the distinct-element counts of all non-null rows into M - 1 bins of roughly equal population, followed by one more entry: the average count of distinct elements over all non-null rows. This statistic is used when estimating selectivity.
/*
+ * A "length histogram" slot describes the distribution of range lengths in
+ * rows of a range-type column. stanumbers contains a single entry, the
+ * fraction of empty ranges. stavalues is a histogram of non-empty lengths, in
+ * a format similar to STATISTIC_KIND_HISTOGRAM: it contains M (>=2) range
+ * values that divide the column data values into M-1 bins of approximately
+ * equal population. The lengths are stored as float8s, as measured by the
+ * range type's subdiff function. Only non-null rows are considered.
+ */
+#define STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM 6
+
The length histogram describes the distribution of range lengths in a range-type column. `stanumbers` holds a single entry, the fraction of empty ranges, while `stavalues` holds an M-value histogram of the non-empty lengths, in a format similar to the regular distribution histogram.
/*
+ * A "bounds histogram" slot is similar to STATISTIC_KIND_HISTOGRAM, but for
+ * a range-type column. stavalues contains M (>=2) range values that divide
+ * the column data values into M-1 bins of approximately equal population.
+ * Unlike a regular scalar histogram, this is actually two histograms combined
+ * into a single array, with the lower bounds of each value forming a
+ * histogram of lower bounds, and the upper bounds a histogram of upper
+ * bounds. Only non-NULL, non-empty ranges are included.
+ */
+#define STATISTIC_KIND_BOUNDS_HISTOGRAM 7
+
The bounds histogram is also used for range-type columns and is similar to the regular distribution histogram: `stavalues` stores M range values that divide the column's values into M - 1 bins of approximately equal population. Only non-null, non-empty ranges are considered.
Now that we know what information `pg_statistic` ultimately needs to store, let's see how the kernel collects and computes it, starting from the executor code. For a utility command such as `ANALYZE`, the executor routes each kind of statement to the function that implements it through the switch-case in `standard_ProcessUtility()`.
/*
+ * standard_ProcessUtility itself deals only with utility commands for
+ * which we do not provide event trigger support. Commands that do have
+ * such support are passed down to ProcessUtilitySlow, which contains the
+ * necessary infrastructure for such triggers.
+ *
+ * This division is not just for performance: it's critical that the
+ * event trigger code not be invoked when doing START TRANSACTION for
+ * example, because we might need to refresh the event trigger cache,
+ * which requires being in a valid transaction.
+ */
+void
+standard_ProcessUtility(PlannedStmt *pstmt,
+ const char *queryString,
+ bool readOnlyTree,
+ ProcessUtilityContext context,
+ ParamListInfo params,
+ QueryEnvironment *queryEnv,
+ DestReceiver *dest,
+ QueryCompletion *qc)
+{
+ // ...
+
+ switch (nodeTag(parsetree))
+ {
+ // ...
+
+ case T_VacuumStmt:
+ ExecVacuum(pstate, (VacuumStmt *) parsetree, isTopLevel);
+ break;
+
+ // ...
+ }
+
+ // ...
+}
+
`ANALYZE` shares its entry point with `VACUUM`: processing continues in the `ExecVacuum()` function.
/*
+ * Primary entry point for manual VACUUM and ANALYZE commands
+ *
+ * This is mainly a preparation wrapper for the real operations that will
+ * happen in vacuum().
+ */
+void
+ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
+{
+ // ...
+
+ /* Now go through the common routine */
+ vacuum(vacstmt->rels, ¶ms, NULL, isTopLevel);
+}
+
After parsing a pile of options, control enters the `vacuum()` function. Here the kernel first determines which tables need to be analyzed, because `ANALYZE` can be invoked against the whole database, a list of specific tables, or specific columns of a table. Once the target tables are known, each of them is passed in turn to the `analyze_rel()` function:
if (params->options & VACOPT_ANALYZE)
+{
+ // ...
+
+ analyze_rel(vrel->oid, vrel->relation, params,
+ vrel->va_cols, in_outer_xact, vac_strategy);
+
+ // ...
+}
+
Inside `analyze_rel()`, the kernel takes a `ShareUpdateExclusiveLock` on the table about to be analyzed, which prevents two `ANALYZE` operations from running on it concurrently. It then decides how to proceed based on the kind of relation (for example, analyzing an FDW foreign table should simply invoke the ANALYZE support provided by the FDW routine). The table is then passed on to the `do_analyze_rel()` function.
/*
+ * analyze_rel() -- analyze one relation
+ *
+ * relid identifies the relation to analyze. If relation is supplied, use
+ * the name therein for reporting any failure to open/lock the rel; do not
+ * use it once we've successfully opened the rel, since it might be stale.
+ */
+void
+analyze_rel(Oid relid, RangeVar *relation,
+ VacuumParams *params, List *va_cols, bool in_outer_xact,
+ BufferAccessStrategy bstrategy)
+{
+ // ...
+
+ /*
+ * Do the normal non-recursive ANALYZE. We can skip this for partitioned
+ * tables, which don't contain any rows.
+ */
+ if (onerel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ do_analyze_rel(onerel, params, va_cols, acquirefunc,
+ relpages, false, in_outer_xact, elevel);
+
+ // ...
+}
+
Inside `do_analyze_rel()`, the kernel further narrows down which columns of the table to analyze: the user may have asked for only a few specific columns, since frequently accessed columns are the ones most worth analyzing. The kernel also opens all indexes of the table to see whether any of them have columns that can be analyzed.
To compute statistics for each column, we obviously need to read the column data from disk and process it. This raises a key question: how many rows should be scanned? In theory, analyzing as much data as possible, ideally all of it, gives the most accurate statistics; but for a very large table we cannot fit all the data in memory, and the time the analysis would block for is unacceptable. So the user can bound the number of rows to sample, trading runtime cost against the accuracy of the statistics:
/*
+ * Determine how many rows we need to sample, using the worst case from
+ * all analyzable columns. We use a lower bound of 100 rows to avoid
+ * possible overflow in Vitter's algorithm. (Note: that will also be the
+ * target in the corner case where there are no analyzable columns.)
+ */
+targrows = 100;
+for (i = 0; i < attr_cnt; i++)
+{
+ if (targrows < vacattrstats[i]->minrows)
+ targrows = vacattrstats[i]->minrows;
+}
+for (ind = 0; ind < nindexes; ind++)
+{
+ AnlIndexData *thisdata = &indexdata[ind];
+
+ for (i = 0; i < thisdata->attr_cnt; i++)
+ {
+ if (targrows < thisdata->vacattrstats[i]->minrows)
+ targrows = thisdata->vacattrstats[i]->minrows;
+ }
+}
+
+/*
+ * Look at extended statistics objects too, as those may define custom
+ * statistics target. So we may need to sample more rows and then build
+ * the statistics with enough detail.
+ */
+minrows = ComputeExtStatisticsRows(onerel, attr_cnt, vacattrstats);
+
+if (targrows < minrows)
+ targrows = minrows;
+
After determining how many rows to sample, the kernel allocates a tuple array of that length and starts acquiring sample rows through the `acquirefunc` function pointer:
/*
+ * Acquire the sample rows
+ */
+rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+ inh ? PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :
+ PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);
+if (inh)
+ numrows = acquire_inherited_sample_rows(onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+else
+ numrows = (*acquirefunc) (onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+
This function pointer refers to the `acquire_sample_rows()` function set up in `analyze_rel()`. It samples the table in two stages:

- Stage one randomly selects up to `targrows` physical blocks of the table (or all blocks, if there are fewer than that).
- Stage two scans those blocks and uses Vitter's reservoir sampling algorithm to draw a random sample of up to `targrows` rows from them.

The two stages run simultaneously. Once sampling finishes, the sampled tuples have been placed into the tuple array. The array is then quick-sorted by tuple position, and the sampled data is used to estimate the numbers of live and dead tuples in the whole table:
/*
+ * acquire_sample_rows -- acquire a random sample of rows from the table
+ *
+ * Selected rows are returned in the caller-allocated array rows[], which
+ * must have at least targrows entries.
+ * The actual number of rows selected is returned as the function result.
+ * We also estimate the total numbers of live and dead rows in the table,
+ * and return them into *totalrows and *totaldeadrows, respectively.
+ *
+ * The returned list of tuples is in order by physical position in the table.
+ * (We will rely on this later to derive correlation estimates.)
+ *
+ * As of May 2004 we use a new two-stage method: Stage one selects up
+ * to targrows random blocks (or all blocks, if there aren't so many).
+ * Stage two scans these blocks and uses the Vitter algorithm to create
+ * a random sample of targrows rows (or less, if there are less in the
+ * sample of blocks). The two stages are executed simultaneously: each
+ * block is processed as soon as stage one returns its number and while
+ * the rows are read stage two controls which ones are to be inserted
+ * into the sample.
+ *
+ * Although every row has an equal chance of ending up in the final
+ * sample, this sampling method is not perfect: not every possible
+ * sample has an equal chance of being selected. For large relations
+ * the number of different blocks represented by the sample tends to be
+ * too small. We can live with that for now. Improvements are welcome.
+ *
+ * An important property of this sampling method is that because we do
+ * look at a statistically unbiased set of blocks, we should get
+ * unbiased estimates of the average numbers of live and dead rows per
+ * block. The previous sampling method put too much credence in the row
+ * density near the start of the table.
+ */
+static int
+acquire_sample_rows(Relation onerel, int elevel,
+ HeapTuple *rows, int targrows,
+ double *totalrows, double *totaldeadrows)
+{
+ // ...
+
+ /* Outer loop over blocks to sample */
+ while (BlockSampler_HasMore(&bs))
+ {
+ bool block_accepted;
+ BlockNumber targblock = BlockSampler_Next(&bs);
+ // ...
+ }
+
+ // ...
+
+ /*
+ * If we didn't find as many tuples as we wanted then we're done. No sort
+ * is needed, since they're already in order.
+ *
+ * Otherwise we need to sort the collected tuples by position
+ * (itempointer). It's not worth worrying about corner cases where the
+ * tuples are already sorted.
+ */
+ if (numrows == targrows)
+ qsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);
+
+ /*
+ * Estimate total numbers of live and dead rows in relation, extrapolating
+ * on the assumption that the average tuple density in pages we didn't
+ * scan is the same as in the pages we did scan. Since what we scanned is
+ * a random sample of the pages in the relation, this should be a good
+ * assumption.
+ */
+ if (bs.m > 0)
+ {
+ *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);
+ *totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+ }
+ else
+ {
+ *totalrows = 0.0;
+ *totaldeadrows = 0.0;
+ }
+
+ // ...
+}
+
Back in `do_analyze_rel()`: once rows have been sampled, statistics are computed for each column to be analyzed and then written into the `pg_statistic` system catalog:
/*
+ * Compute the statistics. Temporary results during the calculations for
+ * each column are stored in a child context. The calc routines are
+ * responsible to make sure that whatever they store into the VacAttrStats
+ * structure is allocated in anl_context.
+ */
+if (numrows > 0)
+{
+ // ...
+
+ for (i = 0; i < attr_cnt; i++)
+ {
+ VacAttrStats *stats = vacattrstats[i];
+ AttributeOpts *aopt;
+
+ stats->rows = rows;
+ stats->tupDesc = onerel->rd_att;
+ stats->compute_stats(stats,
+ std_fetch_func,
+ numrows,
+ totalrows);
+
+ // ...
+ }
+
+ // ...
+
+ /*
+ * Emit the completed stats rows into pg_statistic, replacing any
+ * previous statistics for the target columns. (If there are stats in
+ * pg_statistic for columns we didn't process, we leave them alone.)
+ */
+ update_attstats(RelationGetRelid(onerel), inh,
+ attr_cnt, vacattrstats);
+
+ // ...
+}
+
Obviously, the computation function that the `compute_stats` pointer refers to differs for columns of different types, so let's look at where this function pointer is assigned:
/*
+ * std_typanalyze -- the default type-specific typanalyze function
+ */
+bool
+std_typanalyze(VacAttrStats *stats)
+{
+ // ...
+
+ /*
+ * Determine which standard statistics algorithm to use
+ */
+ if (OidIsValid(eqopr) && OidIsValid(ltopr))
+ {
+ /* Seems to be a scalar datatype */
+ stats->compute_stats = compute_scalar_stats;
+ /*--------------------
+ * The following choice of minrows is based on the paper
+ * "Random sampling for histogram construction: how much is enough?"
+ * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in
+ * Proceedings of ACM SIGMOD International Conference on Management
+ * of Data, 1998, Pages 436-447. Their Corollary 1 to Theorem 5
+ * says that for table size n, histogram size k, maximum relative
+ * error in bin size f, and error probability gamma, the minimum
+ * random sample size is
+ * r = 4 * k * ln(2*n/gamma) / f^2
+ * Taking f = 0.5, gamma = 0.01, n = 10^6 rows, we obtain
+ * r = 305.82 * k
+ * Note that because of the log function, the dependence on n is
+ * quite weak; even at n = 10^12, a 300*k sample gives <= 0.66
+ * bin size error with probability 0.99. So there's no real need to
+ * scale for n, which is a good thing because we don't necessarily
+ * know it at this point.
+ *--------------------
+ */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else if (OidIsValid(eqopr))
+ {
+ /* We can still recognize distinct values */
+ stats->compute_stats = compute_distinct_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else
+ {
+ /* Can't do much but the trivial stuff */
+ stats->compute_stats = compute_trivial_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+
+ // ...
+}
+
This conditional can be read as:

- If the column's data type supports both `=` (`eqopr`, the equals operator) and `<` (`ltopr`, the less-than operator), the column is a sortable scalar type and can be analyzed with `compute_scalar_stats()`.
- If only the `=` operator is available, distinct-value statistics can still be computed with `compute_distinct_stats()`.
- If neither is available, only some simple analysis can be done with `compute_trivial_stats()`.

We can take a brief look at what each of these three functions does, but I am not going to dig into the internal logic of each one. The ideas are based on some very old statistics papers, so old that the letters on the scanned PDFs are barely legible. The code itself is not particularly readable either, since it mostly implements the formulas from those papers; without reading the papers, it is hard to understand what the variables and formulas mean.
If a column's data type supports neither the equality operator nor the comparison operator, only some basic analysis is possible, such as the fraction of non-null rows and the average datum width. These can easily be obtained by looping over the array of sampled tuples (a simplified illustration follows the function signature below).
/*
+ * compute_trivial_stats() -- compute very basic column statistics
+ *
+ * We use this when we cannot find a hash "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows and the average datum width.
+ */
+static void
+compute_trivial_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
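A simplified, self-contained illustration of this kind of pass over the sample (not the actual kernel function, which also has to deal with TOAST, the fetch interface, and memory contexts) might look like this:

```c
#include <stddef.h>

/* A sampled value: whether it was NULL and, if not, its width in bytes. */
typedef struct SampleDatum
{
    int    isnull;
    size_t width;
} SampleDatum;

/*
 * Compute the NULL fraction and the average width of non-null values
 * over an array of sampled column values.
 */
static void
trivial_stats(const SampleDatum *sample, int samplerows,
              double *nullfrac, double *avgwidth)
{
    int    nonnull = 0;
    double totalwidth = 0.0;

    for (int i = 0; i < samplerows; i++)
    {
        if (sample[i].isnull)
            continue;
        nonnull++;
        totalwidth += (double) sample[i].width;
    }
    *nullfrac = (samplerows > 0) ?
        (double) (samplerows - nonnull) / samplerows : 0.0;
    *avgwidth = (nonnull > 0) ? totalwidth / nonnull : 0.0;
}
```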
If a column supports only the equality operator, we can only tell what a value is, not how it compares with other values, so we cannot analyze how values are distributed over a range, only how frequently they occur. The statistics computed by this function therefore include the fraction of non-null rows, the average width, the most common values, and the (estimated) number of distinct values (a simplified sketch of the most-common-values bookkeeping follows the function signature below).
/*
+ * compute_distinct_stats() -- compute column statistics including ndistinct
+ *
+ * We use this when we can find only an "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the
+ * most common values, and the (estimated) number of distinct values.
+ *
+ * The most common values are determined by brute force: we keep a list
+ * of previously seen values, ordered by number of times seen, as we scan
+ * the samples. A newly seen value is inserted just after the last
+ * multiply-seen value, causing the bottommost (oldest) singly-seen value
+ * to drop off the list. The accuracy of this method, and also its cost,
+ * depend mainly on the length of the list we are willing to keep.
+ */
+static void
+compute_distinct_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
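The brute-force bookkeeping described in the comment can be illustrated with a small fixed-size tracker (a sketch with invented names, not the kernel's data structures): each sampled value is looked up in the track list; on a hit its count is bumped and it bubbles up, on a miss it is inserted just after the last multiply-seen value, pushing the oldest singly-seen value off the end.

```c
#include <string.h>

#define TRACK_MAX 32            /* arbitrary list length for the sketch */

typedef struct TrackItem
{
    int value;
    int count;
} TrackItem;

/* Record one sampled value in a count-ordered track list. */
static void
track_value(TrackItem *track, int *ntrack, int value)
{
    int i;

    for (i = 0; i < *ntrack; i++)
    {
        if (track[i].value == value)
        {
            track[i].count++;
            /* Bubble the entry up while its count exceeds the one above. */
            while (i > 0 && track[i].count > track[i - 1].count)
            {
                TrackItem tmp = track[i - 1];
                track[i - 1] = track[i];
                track[i] = tmp;
                i--;
            }
            return;
        }
    }

    /* Not seen before: insert just after the last multiply-seen value. */
    int pos = 0;
    while (pos < *ntrack && track[pos].count > 1)
        pos++;
    if (pos >= TRACK_MAX)
        return;                 /* list is full of multiply-seen values */
    if (*ntrack < TRACK_MAX)
        (*ntrack)++;
    /* Shift the singly-seen tail down, dropping the oldest if necessary. */
    memmove(&track[pos + 1], &track[pos],
            (*ntrack - pos - 1) * sizeof(TrackItem));
    track[pos].value = value;
    track[pos].count = 1;
}
```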
If a column's data type supports both the equality operator and the comparison operator, the most thorough analysis can be performed. The analysis targets include the fraction of non-null rows, the average width, the most common values, the (estimated) number of distinct values, the distribution histogram, and the correlation between physical and logical order (a sketch of the distinct-value estimate follows the function signature below).
/*
+ * compute_scalar_stats() -- compute column statistics
+ *
+ * We use this when we can find "=" and "<" operators for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the most
+ * common values, the (estimated) number of distinct values, the
+ * distribution histogram, and the correlation of physical to logical order.
+ */
+static void
+compute_scalar_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
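One of the more interesting pieces of `compute_scalar_stats()` (and `compute_distinct_stats()`) is extrapolating the number of distinct values in the whole table from the sample, for which the kernel uses the Haas-Stokes estimator. A sketch of that formula, with invented parameter names, is shown below: `n` is the number of sampled rows, `N` the estimated total rows, `d` the number of distinct values seen in the sample, and `f1` the number of values seen exactly once.

```c
/*
 * Haas-Stokes ("Duj1") estimator for the number of distinct values in a
 * table, given statistics of a random sample.  Sketch only; the kernel
 * additionally clamps the result and converts it to the negative-multiplier
 * form of stadistinct where appropriate.
 */
static double
estimate_ndistinct(double n,   /* rows in the sample */
                   double N,   /* estimated rows in the table */
                   double d,   /* distinct values seen in the sample */
                   double f1)  /* values seen exactly once in the sample */
{
    if (f1 <= 0)
        return d;               /* every value repeated: assume we saw them all */

    double denom = n - f1 + f1 * n / N;
    double estimate = n * d / denom;

    if (estimate > N)
        estimate = N;           /* cannot exceed the table size */
    return estimate;
}
```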
Starting from the statistics the PostgreSQL optimizer needs, this article has walked through the overall execution flow of the `ANALYZE` command. For brevity, the walkthrough does not cover the various corner cases and their handling.
PostgreSQL 在优化器中为一个查询树输出一个执行效率最高的物理计划树。其中,执行效率高低的衡量是通过代价估算实现的。比如通过估算查询返回元组的条数,和元组的宽度,就可以计算出 I/O 开销;也可以根据将要执行的物理操作估算出可能需要消耗的 CPU 代价。优化器通过系统表 pg_statistic
获得这些在代价估算过程需要使用到的关键统计信息,而 pg_statistic
系统表中的统计信息又是通过自动或手动的 ANALYZE
操作(或 VACUUM
)计算得到的。ANALYZE
将会扫描表中的数据并按列进行分析,将得到的诸如每列的数据分布、最常见值、频率等统计信息写入系统表。
本文从源码的角度分析一下 ANALYZE
操作的实现机制。源码使用目前 PostgreSQL 最新的稳定版本 PostgreSQL 14。
首先,我们应当搞明白分析操作的输出是什么。所以我们可以看一看 pg_statistic
中有哪些列,每个列的含义是什么。这个系统表中的每一行表示其它数据表中 每一列的统计信息。
postgres=# \\d+ pg_statistic
+ Table "pg_catalog.pg_statistic"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+-------------+----------+-----------+----------+---------+----------+--------------+-------------
+ starelid | oid | | not null | | plain | |
+ staattnum | smallint | | not null | | plain | |
+ stainherit | boolean | | not null | | plain | |
+ stanullfrac | real | | not null | | plain | |
+ stawidth | integer | | not null | | plain | |
+ stadistinct | real | | not null | | plain | |
+ stakind1 | smallint | | not null | | plain | |
+ stakind2 | smallint | | not null | | plain | |
+ stakind3 | smallint | | not null | | plain | |
+ stakind4 | smallint | | not null | | plain | |
+ stakind5 | smallint | | not null | | plain | |
+ staop1 | oid | | not null | | plain | |
+ staop2 | oid | | not null | | plain | |
+ staop3 | oid | | not null | | plain | |
+ staop4 | oid | | not null | | plain | |
+ staop5 | oid | | not null | | plain | |
+ stanumbers1 | real[] | | | | extended | |
+ stanumbers2 | real[] | | | | extended | |
+ stanumbers3 | real[] | | | | extended | |
+ stanumbers4 | real[] | | | | extended | |
+ stanumbers5 | real[] | | | | extended | |
+ stavalues1 | anyarray | | | | extended | |
+ stavalues2 | anyarray | | | | extended | |
+ stavalues3 | anyarray | | | | extended | |
+ stavalues4 | anyarray | | | | extended | |
+ stavalues5 | anyarray | | | | extended | |
+Indexes:
+ "pg_statistic_relid_att_inh_index" UNIQUE, btree (starelid, staattnum, stainherit)
+
/* ----------------
+ * pg_statistic definition. cpp turns this into
+ * typedef struct FormData_pg_statistic
+ * ----------------
+ */
+CATALOG(pg_statistic,2619,StatisticRelationId)
+{
+ /* These fields form the unique key for the entry: */
+ Oid starelid BKI_LOOKUP(pg_class); /* relation containing
+ * attribute */
+ int16 staattnum; /* attribute (column) stats are for */
+ bool stainherit; /* true if inheritance children are included */
+
+ /* the fraction of the column's entries that are NULL: */
+ float4 stanullfrac;
+
+ /*
+ * stawidth is the average width in bytes of non-null entries. For
+ * fixed-width datatypes this is of course the same as the typlen, but for
+ * var-width types it is more useful. Note that this is the average width
+ * of the data as actually stored, post-TOASTing (eg, for a
+ * moved-out-of-line value, only the size of the pointer object is
+ * counted). This is the appropriate definition for the primary use of
+ * the statistic, which is to estimate sizes of in-memory hash tables of
+ * tuples.
+ */
+ int32 stawidth;
+
+ /* ----------------
+ * stadistinct indicates the (approximate) number of distinct non-null
+ * data values in the column. The interpretation is:
+ * 0 unknown or not computed
+ * > 0 actual number of distinct values
+ * < 0 negative of multiplier for number of rows
+ * The special negative case allows us to cope with columns that are
+ * unique (stadistinct = -1) or nearly so (for example, a column in which
+ * non-null values appear about twice on the average could be represented
+ * by stadistinct = -0.5 if there are no nulls, or -0.4 if 20% of the
+ * column is nulls). Because the number-of-rows statistic in pg_class may
+ * be updated more frequently than pg_statistic is, it's important to be
+ * able to describe such situations as a multiple of the number of rows,
+ * rather than a fixed number of distinct values. But in other cases a
+ * fixed number is correct (eg, a boolean column).
+ * ----------------
+ */
+ float4 stadistinct;
+
+ /* ----------------
+ * To allow keeping statistics on different kinds of datatypes,
+ * we do not hard-wire any particular meaning for the remaining
+ * statistical fields. Instead, we provide several "slots" in which
+ * statistical data can be placed. Each slot includes:
+ * kind integer code identifying kind of data (see below)
+ * op OID of associated operator, if needed
+ * coll OID of relevant collation, or 0 if none
+ * numbers float4 array (for statistical values)
+ * values anyarray (for representations of data values)
+ * The ID, operator, and collation fields are never NULL; they are zeroes
+ * in an unused slot. The numbers and values fields are NULL in an
+ * unused slot, and might also be NULL in a used slot if the slot kind
+ * has no need for one or the other.
+ * ----------------
+ */
+
+ int16 stakind1;
+ int16 stakind2;
+ int16 stakind3;
+ int16 stakind4;
+ int16 stakind5;
+
+ Oid staop1 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop2 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop3 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop4 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop5 BKI_LOOKUP_OPT(pg_operator);
+
+ Oid stacoll1 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll2 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll3 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll4 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll5 BKI_LOOKUP_OPT(pg_collation);
+
+#ifdef CATALOG_VARLEN /* variable-length fields start here */
+ float4 stanumbers1[1];
+ float4 stanumbers2[1];
+ float4 stanumbers3[1];
+ float4 stanumbers4[1];
+ float4 stanumbers5[1];
+
+ /*
+ * Values in these arrays are values of the column's data type, or of some
+ * related type such as an array element type. We presently have to cheat
+ * quite a bit to allow polymorphic arrays of this kind, but perhaps
+ * someday it'll be a less bogus facility.
+ */
+ anyarray stavalues1;
+ anyarray stavalues2;
+ anyarray stavalues3;
+ anyarray stavalues4;
+ anyarray stavalues5;
+#endif
+} FormData_pg_statistic;
+
从数据库命令行的角度和内核 C 代码的角度来看,统计信息的内容都是一致的。所有的属性都以 sta
开头。其中:
starelid
表示当前列所属的表或索引staattnum
表示本行统计信息属于上述表或索引中的第几列stainherit
表示统计信息是否包含子列stanullfrac
表示该列中值为 NULL 的行数比例stawidth
表示该列非空值的平均宽度stadistinct
表示列中非空值的唯一值数量 0
表示未知或未计算> 0
表示唯一值的实际数量< 0
表示 negative of multiplier for number of rows由于不同数据类型所能够被计算的统计信息可能会有一些细微的差别,在接下来的部分中,PostgreSQL 预留了一些存放统计信息的 槽(slots)。目前的内核里暂时预留了五个槽:
#define STATISTIC_NUM_SLOTS 5
+
每一种特定的统计信息可以使用一个槽,具体在槽里放什么完全由这种统计信息的定义自由决定。每一个槽的可用空间包含这么几个部分(其中的 N
表示槽的编号,取值为 1
到 5
):
stakindN
:标识这种统计信息的整数编号staopN
:用于计算或使用统计信息的运算符 OIDstacollN
:排序规则 OIDstanumbersN
:浮点数数组stavaluesN
:任意值数组PostgreSQL 内核中规定,统计信息的编号 1
至 99
被保留给 PostgreSQL 核心统计信息使用,其它部分的编号安排如内核注释所示:
/*
+ * The present allocation of "kind" codes is:
+ *
+ * 1-99: reserved for assignment by the core PostgreSQL project
+ * (values in this range will be documented in this file)
+ * 100-199: reserved for assignment by the PostGIS project
+ * (values to be documented in PostGIS documentation)
+ * 200-299: reserved for assignment by the ESRI ST_Geometry project
+ * (values to be documented in ESRI ST_Geometry documentation)
+ * 300-9999: reserved for future public assignments
+ *
+ * For private use you may choose a "kind" code at random in the range
+ * 10000-30000. However, for code that is to be widely disseminated it is
+ * better to obtain a publicly defined "kind" code by request from the
+ * PostgreSQL Global Development Group.
+ */
+
目前可以在内核代码中看到的 PostgreSQL 核心统计信息有 7 个,编号分别从 1
到 7
。我们可以看看这 7 种统计信息分别如何使用上述的槽。
/*
+ * In a "most common values" slot, staop is the OID of the "=" operator
+ * used to decide whether values are the same or not, and stacoll is the
+ * collation used (same as column's collation). stavalues contains
+ * the K most common non-null values appearing in the column, and stanumbers
+ * contains their frequencies (fractions of total row count). The values
+ * shall be ordered in decreasing frequency. Note that since the arrays are
+ * variable-size, K may be chosen by the statistics collector. Values should
+ * not appear in MCV unless they have been observed to occur more than once;
+ * a unique column will have no MCV slot.
+ */
+#define STATISTIC_KIND_MCV 1
+
对于一个列中的 最常见值,在 staop
中保存 =
运算符来决定一个值是否等于一个最常见值。在 stavalues
中保存了该列中最常见的 K 个非空值,stanumbers
中分别保存了这 K 个值出现的频率。
/*
+ * A "histogram" slot describes the distribution of scalar data. staop is
+ * the OID of the "<" operator that describes the sort ordering, and stacoll
+ * is the relevant collation. (In theory more than one histogram could appear,
+ * if a datatype has more than one useful sort operator or we care about more
+ * than one collation. Currently the collation will always be that of the
+ * underlying column.) stavalues contains M (>=2) non-null values that
+ * divide the non-null column data values into M-1 bins of approximately equal
+ * population. The first stavalues item is the MIN and the last is the MAX.
+ * stanumbers is not used and should be NULL. IMPORTANT POINT: if an MCV
+ * slot is also provided, then the histogram describes the data distribution
+ * *after removing the values listed in MCV* (thus, it's a "compressed
+ * histogram" in the technical parlance). This allows a more accurate
+ * representation of the distribution of a column with some very-common
+ * values. In a column with only a few distinct values, it's possible that
+ * the MCV list describes the entire data population; in this case the
+ * histogram reduces to empty and should be omitted.
+ */
+#define STATISTIC_KIND_HISTOGRAM 2
+
表示一个(数值)列的数据分布直方图。staop
保存 <
运算符用于决定数据分布的排序顺序。stavalues
包含了能够将该列的非空值划分到 M - 1 个容量接近的桶中的 M 个非空值。如果该列中已经有了 MCV 的槽,那么数据分布直方图中将不包含 MCV 中的值,以获得更精确的数据分布。
/*
+ * A "correlation" slot describes the correlation between the physical order
+ * of table tuples and the ordering of data values of this column, as seen
+ * by the "<" operator identified by staop with the collation identified by
+ * stacoll. (As with the histogram, more than one entry could theoretically
+ * appear.) stavalues is not used and should be NULL. stanumbers contains
+ * a single entry, the correlation coefficient between the sequence of data
+ * values and the sequence of their actual tuple positions. The coefficient
+ * ranges from +1 to -1.
+ */
+#define STATISTIC_KIND_CORRELATION 3
+
在 stanumbers
中保存数据值和它们的实际元组位置的相关系数。
/*
+ * A "most common elements" slot is similar to a "most common values" slot,
+ * except that it stores the most common non-null *elements* of the column
+ * values. This is useful when the column datatype is an array or some other
+ * type with identifiable elements (for instance, tsvector). staop contains
+ * the equality operator appropriate to the element type, and stacoll
+ * contains the collation to use with it. stavalues contains
+ * the most common element values, and stanumbers their frequencies. Unlike
+ * MCV slots, frequencies are measured as the fraction of non-null rows the
+ * element value appears in, not the frequency of all rows. Also unlike
+ * MCV slots, the values are sorted into the element type's default order
+ * (to support binary search for a particular value). Since this puts the
+ * minimum and maximum frequencies at unpredictable spots in stanumbers,
+ * there are two extra members of stanumbers, holding copies of the minimum
+ * and maximum frequencies. Optionally, there can be a third extra member,
+ * which holds the frequency of null elements (expressed in the same terms:
+ * the fraction of non-null rows that contain at least one null element). If
+ * this member is omitted, the column is presumed to contain no null elements.
+ *
+ * Note: in current usage for tsvector columns, the stavalues elements are of
+ * type text, even though their representation within tsvector is not
+ * exactly text.
+ */
+#define STATISTIC_KIND_MCELEM 4
+
与 MCV 类似,但是保存的是列中的 最常见元素,主要用于数组等类型。同样,在 staop
中保存了等值运算符用于判断元素出现的频率高低。但与 MCV 不同的是这里的频率计算的分母是非空的行,而不是所有的行。另外,所有的常见元素使用元素对应数据类型的默认顺序进行排序,以便二分查找。
/*
+ * A "distinct elements count histogram" slot describes the distribution of
+ * the number of distinct element values present in each row of an array-type
+ * column. Only non-null rows are considered, and only non-null elements.
+ * staop contains the equality operator appropriate to the element type,
+ * and stacoll contains the collation to use with it.
+ * stavalues is not used and should be NULL. The last member of stanumbers is
+ * the average count of distinct element values over all non-null rows. The
+ * preceding M (>=2) members form a histogram that divides the population of
+ * distinct-elements counts into M-1 bins of approximately equal population.
+ * The first of these is the minimum observed count, and the last the maximum.
+ */
+#define STATISTIC_KIND_DECHIST 5
+
表示列中出现所有数值的频率分布直方图。stanumbers
数组的前 M 个元素是将列中所有唯一值的出现次数大致均分到 M - 1 个桶中的边界值。后续跟上一个所有唯一值的平均出现次数。这个统计信息应该会被用于计算 选择率。
/*
+ * A "length histogram" slot describes the distribution of range lengths in
+ * rows of a range-type column. stanumbers contains a single entry, the
+ * fraction of empty ranges. stavalues is a histogram of non-empty lengths, in
+ * a format similar to STATISTIC_KIND_HISTOGRAM: it contains M (>=2) range
+ * values that divide the column data values into M-1 bins of approximately
+ * equal population. The lengths are stored as float8s, as measured by the
+ * range type's subdiff function. Only non-null rows are considered.
+ */
+#define STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM 6
+
长度直方图描述了一个范围类型的列的范围长度分布。同样也是一个长度为 M 的直方图,保存在 stanumbers
中。
/*
+ * A "bounds histogram" slot is similar to STATISTIC_KIND_HISTOGRAM, but for
+ * a range-type column. stavalues contains M (>=2) range values that divide
+ * the column data values into M-1 bins of approximately equal population.
+ * Unlike a regular scalar histogram, this is actually two histograms combined
+ * into a single array, with the lower bounds of each value forming a
+ * histogram of lower bounds, and the upper bounds a histogram of upper
+ * bounds. Only non-NULL, non-empty ranges are included.
+ */
+#define STATISTIC_KIND_BOUNDS_HISTOGRAM 7
+
边界直方图同样也被用于范围类型,与数据分布直方图类似。stavalues
中保存了使该列数值大致均分到 M - 1 个桶中的 M 个范围边界值。只考虑非空行。
知道 pg_statistic
最终需要保存哪些信息以后,再来看看内核如何收集和计算这些信息。让我们进入 PostgreSQL 内核的执行器代码中。对于 ANALYZE
这种工具性质的指令,执行器代码通过 standard_ProcessUtility()
函数中的 switch case 将每一种指令路由到实现相应功能的函数中。
/*
+ * standard_ProcessUtility itself deals only with utility commands for
+ * which we do not provide event trigger support. Commands that do have
+ * such support are passed down to ProcessUtilitySlow, which contains the
+ * necessary infrastructure for such triggers.
+ *
+ * This division is not just for performance: it's critical that the
+ * event trigger code not be invoked when doing START TRANSACTION for
+ * example, because we might need to refresh the event trigger cache,
+ * which requires being in a valid transaction.
+ */
+void
+standard_ProcessUtility(PlannedStmt *pstmt,
+ const char *queryString,
+ bool readOnlyTree,
+ ProcessUtilityContext context,
+ ParamListInfo params,
+ QueryEnvironment *queryEnv,
+ DestReceiver *dest,
+ QueryCompletion *qc)
+{
+ // ...
+
+ switch (nodeTag(parsetree))
+ {
+ // ...
+
+ case T_VacuumStmt:
+ ExecVacuum(pstate, (VacuumStmt *) parsetree, isTopLevel);
+ break;
+
+ // ...
+ }
+
+ // ...
+}
+
ANALYZE
的处理逻辑入口和 VACUUM
一致,进入 ExecVacuum()
函数。
/*
+ * Primary entry point for manual VACUUM and ANALYZE commands
+ *
+ * This is mainly a preparation wrapper for the real operations that will
+ * happen in vacuum().
+ */
+void
+ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
+{
+ // ...
+
+ /* Now go through the common routine */
+ vacuum(vacstmt->rels, ¶ms, NULL, isTopLevel);
+}
+
在 parse 了一大堆 option 之后,进入了 vacuum()
函数。在这里,内核代码将会首先明确一下要分析哪些表。因为 ANALYZE
命令在使用上可以:
在明确要分析哪些表以后,依次将每一个表传入 analyze_rel()
函数:
if (params->options & VACOPT_ANALYZE)
+{
+ // ...
+
+ analyze_rel(vrel->oid, vrel->relation, params,
+ vrel->va_cols, in_outer_xact, vac_strategy);
+
+ // ...
+}
+
进入 analyze_rel()
函数以后,内核代码将会对将要被分析的表加 ShareUpdateExclusiveLock
锁,以防止两个并发进行的 ANALYZE
。然后根据待分析表的类型来决定具体的处理方式(比如分析一个 FDW 外表就应该直接调用 FDW routine 中提供的 ANALYZE 功能了)。接下来,将这个表传入 do_analyze_rel()
函数中。
/*
+ * analyze_rel() -- analyze one relation
+ *
+ * relid identifies the relation to analyze. If relation is supplied, use
+ * the name therein for reporting any failure to open/lock the rel; do not
+ * use it once we've successfully opened the rel, since it might be stale.
+ */
+void
+analyze_rel(Oid relid, RangeVar *relation,
+ VacuumParams *params, List *va_cols, bool in_outer_xact,
+ BufferAccessStrategy bstrategy)
+{
+ // ...
+
+ /*
+ * Do the normal non-recursive ANALYZE. We can skip this for partitioned
+ * tables, which don't contain any rows.
+ */
+ if (onerel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ do_analyze_rel(onerel, params, va_cols, acquirefunc,
+ relpages, false, in_outer_xact, elevel);
+
+ // ...
+}
+
进入 do_analyze_rel()
函数后,内核代码将进一步明确要分析一个表中的哪些列:用户可能指定只分析表中的某几个列——被频繁访问的列才更有被分析的价值。然后还要打开待分析表的所有索引,看看是否有可以被分析的列。
为了得到每一列的统计信息,显然我们需要把每一列的数据从磁盘上读起来再去做计算。这里就有一个比较关键的问题了:到底扫描多少行数据呢?理论上,分析尽可能多的数据,最好是全部的数据,肯定能够得到最精确的统计数据;但是对一张很大的表来说,我们没有办法在内存中放下所有的数据,并且分析的阻塞时间也是不可接受的。所以用户可以指定要采样的最大行数,从而在运行开销和统计信息准确性上达成一个妥协:
/*
+ * Determine how many rows we need to sample, using the worst case from
+ * all analyzable columns. We use a lower bound of 100 rows to avoid
+ * possible overflow in Vitter's algorithm. (Note: that will also be the
+ * target in the corner case where there are no analyzable columns.)
+ */
+targrows = 100;
+for (i = 0; i < attr_cnt; i++)
+{
+ if (targrows < vacattrstats[i]->minrows)
+ targrows = vacattrstats[i]->minrows;
+}
+for (ind = 0; ind < nindexes; ind++)
+{
+ AnlIndexData *thisdata = &indexdata[ind];
+
+ for (i = 0; i < thisdata->attr_cnt; i++)
+ {
+ if (targrows < thisdata->vacattrstats[i]->minrows)
+ targrows = thisdata->vacattrstats[i]->minrows;
+ }
+}
+
+/*
+ * Look at extended statistics objects too, as those may define custom
+ * statistics target. So we may need to sample more rows and then build
+ * the statistics with enough detail.
+ */
+minrows = ComputeExtStatisticsRows(onerel, attr_cnt, vacattrstats);
+
+if (targrows < minrows)
+ targrows = minrows;
+
在确定需要采样多少行数据后,内核代码分配了一块相应长度的元组数组,然后开始使用 acquirefunc
函数指针采样数据:
/*
+ * Acquire the sample rows
+ */
+rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+ inh ? PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :
+ PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);
+if (inh)
+ numrows = acquire_inherited_sample_rows(onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+else
+ numrows = (*acquirefunc) (onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+
这个函数指针指向的是 analyze_rel()
函数中设置好的 acquire_sample_rows()
函数。该函数使用两阶段模式对表中的数据进行采样:
两阶段同时进行。在采样完成后,被采样到的元组应该已经被放置在元组数组中了。对这个元组数组按照元组的位置进行快速排序,并使用这些采样到的数据估算整个表中的存活元组与死元组的个数:
/*
+ * acquire_sample_rows -- acquire a random sample of rows from the table
+ *
+ * Selected rows are returned in the caller-allocated array rows[], which
+ * must have at least targrows entries.
+ * The actual number of rows selected is returned as the function result.
+ * We also estimate the total numbers of live and dead rows in the table,
+ * and return them into *totalrows and *totaldeadrows, respectively.
+ *
+ * The returned list of tuples is in order by physical position in the table.
+ * (We will rely on this later to derive correlation estimates.)
+ *
+ * As of May 2004 we use a new two-stage method: Stage one selects up
+ * to targrows random blocks (or all blocks, if there aren't so many).
+ * Stage two scans these blocks and uses the Vitter algorithm to create
+ * a random sample of targrows rows (or less, if there are less in the
+ * sample of blocks). The two stages are executed simultaneously: each
+ * block is processed as soon as stage one returns its number and while
+ * the rows are read stage two controls which ones are to be inserted
+ * into the sample.
+ *
+ * Although every row has an equal chance of ending up in the final
+ * sample, this sampling method is not perfect: not every possible
+ * sample has an equal chance of being selected. For large relations
+ * the number of different blocks represented by the sample tends to be
+ * too small. We can live with that for now. Improvements are welcome.
+ *
+ * An important property of this sampling method is that because we do
+ * look at a statistically unbiased set of blocks, we should get
+ * unbiased estimates of the average numbers of live and dead rows per
+ * block. The previous sampling method put too much credence in the row
+ * density near the start of the table.
+ */
+static int
+acquire_sample_rows(Relation onerel, int elevel,
+ HeapTuple *rows, int targrows,
+ double *totalrows, double *totaldeadrows)
+{
+ // ...
+
+ /* Outer loop over blocks to sample */
+ while (BlockSampler_HasMore(&bs))
+ {
+ bool block_accepted;
+ BlockNumber targblock = BlockSampler_Next(&bs);
+ // ...
+ }
+
+ // ...
+
+ /*
+ * If we didn't find as many tuples as we wanted then we're done. No sort
+ * is needed, since they're already in order.
+ *
+ * Otherwise we need to sort the collected tuples by position
+ * (itempointer). It's not worth worrying about corner cases where the
+ * tuples are already sorted.
+ */
+ if (numrows == targrows)
+ qsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);
+
+ /*
+ * Estimate total numbers of live and dead rows in relation, extrapolating
+ * on the assumption that the average tuple density in pages we didn't
+ * scan is the same as in the pages we did scan. Since what we scanned is
+ * a random sample of the pages in the relation, this should be a good
+ * assumption.
+ */
+ if (bs.m > 0)
+ {
+ *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);
+ *totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+ }
+ else
+ {
+ *totalrows = 0.0;
+ *totaldeadrows = 0.0;
+ }
+
+ // ...
+}
+
回到 do_analyze_rel()
函数。采样到数据以后,对于要分析的每一个列,分别计算统计数据,然后更新 pg_statistic
系统表:
/*
+ * Compute the statistics. Temporary results during the calculations for
+ * each column are stored in a child context. The calc routines are
+ * responsible to make sure that whatever they store into the VacAttrStats
+ * structure is allocated in anl_context.
+ */
+if (numrows > 0)
+{
+ // ...
+
+ for (i = 0; i < attr_cnt; i++)
+ {
+ VacAttrStats *stats = vacattrstats[i];
+ AttributeOpts *aopt;
+
+ stats->rows = rows;
+ stats->tupDesc = onerel->rd_att;
+ stats->compute_stats(stats,
+ std_fetch_func,
+ numrows,
+ totalrows);
+
+ // ...
+ }
+
+ // ...
+
+ /*
+ * Emit the completed stats rows into pg_statistic, replacing any
+ * previous statistics for the target columns. (If there are stats in
+ * pg_statistic for columns we didn't process, we leave them alone.)
+ */
+ update_attstats(RelationGetRelid(onerel), inh,
+ attr_cnt, vacattrstats);
+
+ // ...
+}
+
显然,对于不同类型的列,其 compute_stats
函数指针指向的计算函数肯定不太一样。所以我们不妨看看给这个函数指针赋值的地方:
/*
+ * std_typanalyze -- the default type-specific typanalyze function
+ */
+bool
+std_typanalyze(VacAttrStats *stats)
+{
+ // ...
+
+ /*
+ * Determine which standard statistics algorithm to use
+ */
+ if (OidIsValid(eqopr) && OidIsValid(ltopr))
+ {
+ /* Seems to be a scalar datatype */
+ stats->compute_stats = compute_scalar_stats;
+ /*--------------------
+ * The following choice of minrows is based on the paper
+ * "Random sampling for histogram construction: how much is enough?"
+ * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in
+ * Proceedings of ACM SIGMOD International Conference on Management
+ * of Data, 1998, Pages 436-447. Their Corollary 1 to Theorem 5
+ * says that for table size n, histogram size k, maximum relative
+ * error in bin size f, and error probability gamma, the minimum
+ * random sample size is
+ * r = 4 * k * ln(2*n/gamma) / f^2
+ * Taking f = 0.5, gamma = 0.01, n = 10^6 rows, we obtain
+ * r = 305.82 * k
+ * Note that because of the log function, the dependence on n is
+ * quite weak; even at n = 10^12, a 300*k sample gives <= 0.66
+ * bin size error with probability 0.99. So there's no real need to
+ * scale for n, which is a good thing because we don't necessarily
+ * know it at this point.
+ *--------------------
+ */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else if (OidIsValid(eqopr))
+ {
+ /* We can still recognize distinct values */
+ stats->compute_stats = compute_distinct_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else
+ {
+ /* Can't do much but the trivial stuff */
+ stats->compute_stats = compute_trivial_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+
+ // ...
+}
+
这个条件判断语句可以被解读为:
=
(eqopr
:equals operator)和 <
(ltopr
:less than operator),那么这个列应该是一个数值类型,可以使用 compute_scalar_stats()
函数进行分析=
运算符,那么依旧还可以使用 compute_distinct_stats
进行唯一值的统计分析compute_trivial_stats
进行一些简单的分析我们可以分别看看这三个分析函数里做了啥,但我不准备深入每一个分析函数解读其中的逻辑了。因为其中的思想基于一些很古早的统计学论文,古早到连 PDF 上的字母都快看不清了。在代码上没有特别大的可读性,因为基本是参照论文中的公式实现的,不看论文根本没法理解变量和公式的含义。
如果某个列的数据类型不支持等值运算符和比较运算符,那么就只能进行一些简单的分析,比如:
这些可以通过对采样后的元组数组进行循环遍历后轻松得到。
/*
+ * compute_trivial_stats() -- compute very basic column statistics
+ *
+ * We use this when we cannot find a hash "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows and the average datum width.
+ */
+static void
+compute_trivial_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
如果某个列只支持等值运算符,也就是说我们只能知道一个数值 是什么,但不能和其它数值比大小。所以无法分析数值在大小范围上的分布,只能分析数值在出现频率上的分布。所以该函数分析的统计数据包含:
/*
+ * compute_distinct_stats() -- compute column statistics including ndistinct
+ *
+ * We use this when we can find only an "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the
+ * most common values, and the (estimated) number of distinct values.
+ *
+ * The most common values are determined by brute force: we keep a list
+ * of previously seen values, ordered by number of times seen, as we scan
+ * the samples. A newly seen value is inserted just after the last
+ * multiply-seen value, causing the bottommost (oldest) singly-seen value
+ * to drop off the list. The accuracy of this method, and also its cost,
+ * depend mainly on the length of the list we are willing to keep.
+ */
+static void
+compute_distinct_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
如果一个列的数据类型同时支持等值运算符和比较运算符,那么可以进行最详尽的分析。分析目标包含:非空行的比例、列的平均宽度、最常见值(MCV)、(估算的)唯一值个数、数值分布直方图,以及物理存储顺序与逻辑顺序的相关性。
/*
+ * compute_scalar_stats() -- compute column statistics
+ *
+ * We use this when we can find "=" and "<" operators for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the most
+ * common values, the (estimated) number of distinct values, the
+ * distribution histogram, and the correlation of physical to logical order.
+ *
+ * The desired stats can be determined fairly easily after sorting the
+ * data values into order.
+ */
+static void
+compute_scalar_stats(VacAttrStatsP stats,
+                     AnalyzeAttrFetchFunc fetchfunc,
+                     int samplerows,
+                     double totalrows)
+{}
+
本文以 PostgreSQL 优化器需要的统计信息为切入点,分析了 ANALYZE 命令的大致执行流程。出于简洁性,流程分析没有覆盖各种 corner case 及相关的处理逻辑。
很多 PolarDB PG 的用户都有 TP (Transactional Processing) 和 AP (Analytical Processing) 共用的需求。他们期望数据库在白天处理高并发的 TP 请求,在夜间 TP 流量下降、机器负载空闲时进行 AP 的报表分析。但是即使这样,依然没有最大化利用空闲机器的资源。原先的 PolarDB PG 数据库在处理复杂的 AP 查询时会遇到两大挑战:单条 SQL 只能在单个计算节点上执行,无法利用其他节点的 CPU、内存等计算资源,也无法通过增加计算节点来加速大查询;同时,单个计算节点也无法充分发挥共享存储池的大 I/O 带宽优势。
为了解决用户实际使用中的痛点,PolarDB 实现了 HTAP 特性。当前业界 HTAP 的解决方案主要有以下三种:
基于 PolarDB 的存储计算分离架构,我们研发了分布式 MPP 执行引擎,提供了跨机并行执行、弹性计算弹性扩展的保证,使得 PolarDB 初步具备了 HTAP 的能力:
PolarDB HTAP 的核心是分布式 MPP 执行引擎,是典型的火山模型引擎。A、B 两张表先做 join 再做聚合输出,这也是 PostgreSQL 单机执行引擎的执行流程。
在传统的 MPP 执行引擎中,数据被打散到不同的节点上,不同节点上的数据可能具有不同的分布属性,比如哈希分布、随机分布、复制分布等。传统的 MPP 执行引擎会针对不同表的数据分布特点,在执行计划中插入算子来保证上层算子对数据的分布属性无感知。
不同的是,PolarDB 是共享存储架构,存储上的数据可以被所有计算节点全量访问。如果使用传统的 MPP 执行引擎,每个计算节点 Worker 都会扫描全量数据,从而得到重复的数据;同时,也没有起到扫描时分治加速的效果,并不能称得上是真正意义上的 MPP 引擎。
因此,在 PolarDB 分布式 MPP 执行引擎中,我们借鉴了火山模型论文中的思想,对所有扫描算子进行并发处理,引入了 PxScan 算子来屏蔽共享存储。PxScan 算子将 shared-storage 的数据映射为 shared-nothing 的数据,通过 Worker 之间的协调,将目标表划分为多个虚拟分区数据块,每个 Worker 扫描各自的虚拟分区数据块,从而实现了跨机分布式并行扫描。
PxScan 算子扫描出来的数据会通过 Shuffle 算子来重分布。重分布后的数据在每个 Worker 上如同单机执行一样,按照火山模型来执行。
传统 MPP 只能在指定节点发起 MPP 查询,因此每个节点上都只能有单个 Worker 扫描一张表。为了支持云原生下 serverless 弹性扩展的需求,我们引入了分布式事务一致性保证。
任意选择一个节点作为 Coordinator 节点,它的 ReadLSN 会作为约定的 LSN,从所有 MPP 节点的快照版本号中选择最小的版本号作为全局约定的快照版本号。通过 LSN 的回放等待和 Global Snapshot 同步机制,确保在任何一个节点发起 MPP 查询时,数据和快照均能达到一致可用的状态。
为了实现 serverless 的弹性扩展,我们从共享存储的特点出发,将 Coordinator 节点全链路上各个模块需要的外部依赖全部放至共享存储上。各个 Worker 节点运行时需要的参数也会通过控制链路从 Coordinator 节点同步过来,从而使 Coordinator 节点和 Worker 节点全链路 无状态化 (Stateless)。
基于以上两点设计,PolarDB 的弹性扩展具备了以下几大优势:
倾斜是传统 MPP 固有的问题,其根本原因主要是数据分布倾斜和数据计算倾斜:
倾斜会导致传统 MPP 在执行时出现木桶效应,执行完成时间受制于执行最慢的子任务。
PolarDB 设计并实现了 自适应扫描机制。如上图所示,采用 Coordinator 节点来协调 Worker 节点的工作模式。在扫描数据时,Coordinator 节点会在内存中创建一个任务管理器,根据扫描任务对 Worker 节点进行调度。Coordinator 节点内部分为两个线程:
扫描进度较快的 Worker 能够扫描多个数据块,实现能者多劳。比如上图中 RO1 与 RO3 的 Worker 各自扫描了 4 个数据块, RO2 由于计算倾斜可以扫描更多数据块,因此它最终扫描了 6 个数据块。
PolarDB HTAP 的自适应扫描机制还充分考虑了 PostgreSQL 的 Buffer Pool 亲和性,保证每个 Worker 尽可能扫描固定的数据块,从而最大化命中 Buffer Pool 的概率,降低 I/O 开销。
我们使用 256 GB 内存的 16 个 PolarDB PG 实例作为 RO 节点,搭建了 1 TB 的 TPC-H 环境进行对比测试。相较于单机并行,分布式 MPP 并行充分利用了所有 RO 节点的计算资源和底层共享存储的 I/O 带宽,从根本上解决了前文提及的 HTAP 诸多挑战。在 TPC-H 的 22 条 SQL 中,有 3 条 SQL 加速了 60 多倍,19 条 SQL 加速了 10 多倍,平均加速 23 倍。
此外,我们也测试了弹性扩展计算资源带来的性能变化。将 CPU 总核数从 16 核增加到 128 核,TPC-H 的整体性能呈线性提升,每条 SQL 的执行速度也呈线性提升,这也验证了 PolarDB HTAP serverless 弹性扩展的特点。
在测试中发现,当 CPU 的总核数增加到 256 核时,性能提升不再明显。原因是此时 PolarDB 共享存储的 I/O 带宽已经打满,成为了瓶颈。
我们将 PolarDB 的分布式 MPP 执行引擎与传统数据库的 MPP 执行引擎进行了对比,同样使用了 256 GB 内存的 16 个节点。
在 1 TB 的 TPC-H 数据上,当保持与传统 MPP 数据库相同单机并行度的情况下(多机单进程),PolarDB 的性能是传统 MPP 数据库的 90%。其中最本质的原因是传统 MPP 数据库的数据默认是哈希分布的,当两张表的 join key 是各自的分布键时,可以不用 shuffle 直接进行本地的 Wise Join。而 PolarDB 的底层是共享存储池,PxScan 算子并行扫描出来的数据等价于随机分布,必须进行 shuffle 重分布以后才能像传统 MPP 数据库一样进行后续的处理。因此,TPC-H 涉及到表连接时,PolarDB 相比传统 MPP 数据库多了一次网络 shuffle 的开销。
PolarDB 分布式 MPP 执行引擎能够进行弹性扩展,数据无需重分布。因此,在有限的 16 台机器上执行 MPP 时,PolarDB 还可以继续扩展单机并行度,充分利用每台机器的资源:当 PolarDB 的单机并行度为 8 时,它的性能是传统 MPP 数据库的 5-6 倍;当 PolarDB 的单机并行度呈线性增加时,PolarDB 的总体性能也呈线性增加。只需要修改配置参数,就可以即时生效。
经过持续迭代的研发,目前 PolarDB HTAP 在 Parallel Query 上支持的功能特性主要有五大部分:
基于 PolarDB 读写分离架构和 HTAP serverless 弹性扩展的设计, PolarDB Parallel DML 支持一写多读、多写多读两种特性。
不同的特性适用不同的场景,用户可以根据自己的业务特点来选择不同的 PDML 功能特性。
PolarDB 分布式 MPP 执行引擎,不仅可以用于只读查询和 DML,还可以用于 索引构建加速。OLTP 业务中有大量的索引,而 B-Tree 索引创建的过程大约有 80% 的时间消耗在排序和构建索引页上,20% 消耗在写入索引页上。如下图所示,PolarDB 利用 RO 节点对数据进行分布式 MPP 加速排序,采用流水化的技术来构建索引页,同时使用批量写入技术来提升索引页的写入速度。
在目前索引构建加速这一特性中,PolarDB 已经对 B-Tree 索引的普通创建以及 B-Tree 索引的在线创建 (Concurrently) 两种功能进行了支持。
PolarDB HTAP 适用于日常业务中的 轻分析类业务,例如:对账业务,报表业务。
PolarDB PG 引擎默认不开启 MPP 功能。若您需要使用此功能,请使用如下参数:
polar_enable_px:指定是否开启 MPP 功能。默认为 OFF,即不开启。
polar_px_max_workers_number:设置单个节点上的最大 MPP Worker 进程数,默认为 30。该参数限制了单个节点上的最大并行度,节点上所有会话的 MPP Worker 进程数不能超过该参数大小。
polar_px_dop_per_node:设置当前会话并行查询的并行度,默认为 1,推荐值为当前 CPU 总核数。若设置该参数为 N,则一个会话在每个节点上将会启用 N 个 MPP Worker 进程,用于处理当前的 MPP 逻辑。
polar_px_nodes:指定参与 MPP 的只读节点。默认为空,表示所有只读节点都参与。可配置为指定节点参与 MPP,多个节点间以逗号分隔。
px_workers:指定 MPP 是否对特定表生效。默认不生效。MPP 功能比较消耗集群计算节点的资源,因此只有对设置了 px_workers 的表才使用该功能。例如:ALTER TABLE t1 SET(px_workers=1) 表示 t1 表允许 MPP;ALTER TABLE t1 SET(px_workers=-1) 表示 t1 表禁止 MPP;ALTER TABLE t1 SET(px_workers=0) 表示 t1 表忽略 MPP(默认状态)。
本示例以简单的单表查询操作,来描述 MPP 的功能是否有效。
-- 创建 test 表并插入基础数据。
+CREATE TABLE test(id int);
+INSERT INTO test SELECT generate_series(1,1000000);
+
+-- 默认情况下 MPP 功能不开启,单表查询执行计划为 PG 原生的 Seq Scan
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+--------------------------------------------------------
+ Seq Scan on test (cost=0.00..35.50 rows=2550 width=4)
+(1 row)
+
开启并使用 MPP 功能:
-- 对 test 表启用 MPP 功能
+ALTER TABLE test SET (px_workers=1);
+
+-- 开启 MPP 功能
+SET polar_enable_px = on;
+
+EXPLAIN SELECT * FROM test;
+
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..431.00 rows=1 width=4)
+ -> Seq Scan on test (scan partial) (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
配置参与 MPP 的计算节点范围:
-- 查询当前所有只读节点的名称
+CREATE EXTENSION polar_monitor;
+
+SELECT name,host,port FROM polar_cluster_info WHERE px_node='t';
+ name | host | port
+-------+-----------+------
+ node1 | 127.0.0.1 | 5433
+ node2 | 127.0.0.1 | 5434
+(2 rows)
+
+-- 当前集群有 2 个只读节点,名称分别为:node1,node2
+
+-- 指定 node1 只读节点参与 MPP
+SET polar_px_nodes = 'node1';
+
+-- 查询参与并行查询的节点
+SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+ node1
+(1 row)
+
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 1:1 (slice1; segments: 1) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
当前 MPP 对分区表支持的功能如下所示:
--分区表 MPP 功能默认关闭,需要先开启 MPP 功能
+SET polar_enable_px = ON;
+
+-- 执行以下语句,开启分区表 MPP 功能
+SET polar_px_enable_partition = true;
+
+-- 执行以下语句,开启多级分区表 MPP 功能
+SET polar_px_optimizer_multilevel_partitioning = true;
+
当前仅支持对 B-Tree 索引的构建,且暂不支持 INCLUDE
等索引构建语法,暂不支持表达式等索引列类型。
如果需要使用 MPP 功能加速创建索引,请使用如下参数:
polar_px_dop_per_node:指定通过 MPP 加速构建索引的并行度。默认为 1。
polar_px_enable_replay_wait:当使用 MPP 加速索引构建时,当前会话内无需手动开启该参数,该参数将自动生效,以保证最近更新的数据表项可以被创建到索引中,保证索引表的完整性。索引创建完成后,该参数将会被重置为数据库默认值。
polar_px_enable_btbuild:是否开启使用 MPP 加速创建索引。取值为 OFF 时不开启(默认),取值为 ON 时开启。
polar_bt_write_page_buffer_size:指定索引构建过程中的写 I/O 策略。该参数默认值为 0(不开启),单位为块,最大值可设置为 8192,推荐设置为 4096。开启该参数后,索引构建过程中会使用一个 polar_bt_write_page_buffer_size 大小的 buffer,对于需要写盘的索引页,会通过该 buffer 进行 I/O 合并再统一写盘,避免频繁调度 I/O 带来的性能开销。该参数可额外提升约 20% 的索引创建性能。
-- 开启使用 MPP 加速创建索引功能。
+SET polar_px_enable_btbuild = on;
+
+-- 使用如下语法创建索引
+CREATE INDEX t ON test(id) WITH(px_build = ON);
+
+-- 查询表结构
+\d test
+ Table "public.test"
+ Column | Type | Collation | Nullable | Default
+--------+---------+-----------+----------+---------
+ id | integer | | |
+ id2 | integer | | |
+Indexes:
+ "t" btree (id) WITH (px_build=finish)
+
随着用户业务数据量越来越大,业务越来越复杂,传统数据库系统面临巨大挑战,如:
针对上述传统数据库的问题,阿里云研发了 PolarDB 云原生数据库。采用了自主研发的计算集群和存储集群分离的架构。具备如下优势:
下面会从两个方面来解读 PolarDB 的架构,分别是:存储计算分离架构、HTAP 架构。
PolarDB 是存储计算分离的设计,存储集群和计算集群可以分别独立扩展:
基于 Shared-Storage 之后,主节点和多个只读节点共享同一份存储数据,主节点的刷脏就不能再沿用传统的方式了,否则:
对于第一个问题,我们需要有页面多版本能力;对于第二个问题,我们需要主库控制脏页的刷脏速度。
读写分离后,单个计算节点无法发挥出存储侧大 IO 带宽的优势,也无法通过增加计算资源来加速大的查询。我们研发了基于 Shared-Storage 的 MPP 分布式并行执行,来加速 OLTP 场景下的 OLAP 查询。PolarDB 支持对同一套 OLTP 场景的数据使用如下两种计算引擎:
在使用相同的硬件资源时性能达到了传统 MPP 数据库的 90%,同时具备了 SQL 级别的弹性:在计算能力不足时,可随时增加参与 OLAP 分析查询的 CPU,而数据无需重分布。
基于 Shared-Storage 之后,数据库由传统的 share nothing,转变成了 shared storage 架构。需要解决如下问题:
首先来看下基于 Shared-Storage 的 PolarDB 的架构原理。
传统 share nothing 的数据库,主节点和只读节点都有自己的内存和存储,只需要从主节点复制 WAL 日志到只读节点,并在只读节点上依次回放日志即可,这也是复制状态机的基本原理。
前面讲到过存储计算分离后,Shared-Storage 上读取到的页面是一致的,内存状态是通过从 Shared-Storage 上读取最新的 WAL 并回放得来,如下图:
上述流程中,只读节点中基于日志回放出来的页面会被淘汰掉,此后需要再次从存储上读取页面,会出现读取的页面是之前的老页面,称为“过去页面”。如下图:
只读节点在任意时刻读取页面时,需要找到对应的 Base 页面和对应起点的日志,依次回放。如下图:
通过上述分析,需要维护每个 Page 到日志的“倒排”索引,而只读节点的内存是有限的,因此这个 Page 到日志的索引需要持久化,PolarDB 设计了一个可持久化的索引结构 - LogIndex。LogIndex 本质是一个可持久化的 hash 数据结构。
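"读取页面时基于 LogIndex 按需回放"的思路,大致可以用下面这段仅作示意的草图来表达(其中 PageTag、WalRecord、logindex_lookup、wal_read、wal_redo 等类型与接口名均为假设,并非 PolarDB 的实际实现):

#include <stdint.h>

typedef struct PageTag   PageTag;      /* 标识一个数据页(表文件号 + 块号) */
typedef struct WalRecord WalRecord;    /* 一条 WAL 日志 */

typedef struct LsnListEntry
{
    uint64_t             lsn;          /* 修改过该页面的一条 WAL 日志的 LSN */
    struct LsnListEntry *next;         /* 链表按 LSN 递增排列 */
} LsnListEntry;

extern LsnListEntry *logindex_lookup(const PageTag *tag, uint64_t base_lsn, uint64_t target_lsn);
extern WalRecord    *wal_read(uint64_t lsn);
extern void          wal_redo(WalRecord *rec, const PageTag *tag, char *page);

/* 只读节点读取页面:先取 Base 页面,再按 LogIndex 中记录的 LSN 依次回放 */
static void
replay_page_on_read(const PageTag *tag, char *base_page,
                    uint64_t base_lsn, uint64_t target_lsn)
{
    /* 1. 在 LogIndex(可持久化 hash)中查出 (base_lsn, target_lsn] 区间内修改过该页的日志 */
    LsnListEntry *entry = logindex_lookup(tag, base_lsn, target_lsn);

    /* 2. 按 LSN 从小到大依次把日志应用到 Base 页面上,得到目标版本 */
    for (; entry != NULL; entry = entry->next)
    {
        WalRecord *rec = wal_read(entry->lsn);   /* 从共享存储读取 WAL 原文 */
        wal_redo(rec, tag, base_page);           /* 回放这条日志 */
    }
}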
通过 LogIndex 解决了刷脏依赖“过去页面”的问题,也使得只读节点的回放转变成了 Lazy 的回放:只需要回放日志的 meta 信息即可。
在存储计算分离后,刷脏依赖还存在“未来页面”的问题。如下图所示:
“未来页面”的原因是主节点刷脏的速度超过了任一只读节点的回放速度(虽然只读节点的 Lazy 回放已经很快了)。因此,解法就是对主节点的刷脏进度做控制:不能超过最慢的只读节点的回放位点。如下图所示:
如下图所示:
可以看到,整个链路是很长的,只读节点延迟高,影响用户业务读写分离负载均衡。
因为底层是 Shared-Storage,只读节点可直接从 Shared-Storage 上读取所需要的 WAL 数据。因此主节点只把 WAL 日志的元数据(去掉 Payload)复制到只读节点,这样网络传输量小,减少关键路径上的 IO。如下图所示:
通过上述优化,能显著减少主节点和只读节点间的网络传输量。从下图可以看到网络传输量减少了 98%。
在传统 DB 中日志回放的过程中会读取大量的 Page 并逐个日志 Apply,然后落盘。该流程在用户读 IO 的关键路径上,借助存储计算分离可以做到:如果只读节点上 Page 不在 BufferPool 中,不产生任何 IO,仅仅记录 LogIndex 即可。
可以将回放进程中的如下 IO 操作 offload 到 session 进程中:
如下图所示,在只读节点上的回放进程中,在 Apply 一条 WAL 的 meta 时:
通过上述优化,能显著减少回放的延迟,比 AWS Aurora 快 30 倍。
在主节点执行 DDL 时,比如:drop table,需要在所有节点上都对表上排他锁,这样能保证表文件不会在只读节点上读取时被主节点删除掉了(因为文件在 Shared-Storage 上只有一份)。在所有只读节点上对表上排他锁是通过 WAL 复制到所有的只读节点,只读节点回放 DDL 锁来完成。而回放进程在回放 DDL 锁时,对表上锁可能会阻塞很久,因此可以通过把 DDL 锁也 offload 到其他进程上来优化回放进程的关键路径。
通过上述优化,能够使回放进程一直处于平滑的状态,不会因为等待 DDL 锁而阻塞回放的关键路径。
上述 3 个优化之后,极大的降低了复制延迟,能够带来如下优势:
数据库 OOM、Crash 等场景恢复时间长,本质上是日志回放慢,在共享存储 Direct-IO 模型下问题更加突出。
前面讲到过通过 LogIndex 我们在只读节点上做到了 Lazy 的回放,那么在主节点重启后的 recovery 过程中,本质也是在回放日志,那么我们可以借助 Lazy 回放来加速 recovery 的过程:
优化之后(回放 500MB 日志量):
上述方案优化了在 recovery 的重启速度,但是在重启之后,session 进程通过读取 WAL 日志来回放想要的 page。表现就是在 recovery 之后会有短暂的响应慢的问题。优化的办法为在数据库重启时 BufferPool 并不销毁,如下图所示:crash 和 restart 期间 BufferPool 不销毁。
内核中的共享内存分成 2 部分:
而 BufferPool 中并不是所有的 Page 都是可以复用的,比如:在重启前,某进程对 Page 上 X 锁,随后 crash 了,该 X 锁就没有进程来释放了。因此,在 crash 和 restart 之后需要把所有的 BufferPool 遍历一遍,剔除掉不能被复用的 Page。另外,BufferPool 的回收依赖 k8s。该优化之后,使得重启前后性能平稳。
PolarDB 读写分离后,由于底层是存储池,理论上 IO 吞吐是无限大的。而大查询只能在单个计算节点上执行,单个计算节点的 CPU/MEM/IO 是有限的,因此单个计算节点无法发挥出存储侧的大 IO 带宽的优势,也无法通过增加计算资源来加速大的查询。我们研发了基于 Shared-Storage 的 MPP 分布式并行执行,来加速在 OLTP 场景下 OLAP 查询。
PolarDB 底层存储在不同节点上是共享的,因此不能直接像传统 MPP 一样去扫描表。我们在原来单机执行引擎上支持了 MPP 分布式并行执行,同时对 Shared-Storage 进行了优化。 基于 Shared-Storage 的 MPP 是业界首创,它的原理是:
如图所示:
基于社区的 GPORCA 优化器扩展了能感知共享存储特性的 Transformation Rules。使得能够探索共享存储下特有的 Plan 空间,比如:对于一个表在 PolarDB 中既可以全量的扫描,也可以分区域扫描,这个是和传统 MPP 的本质区别。图中,上面灰色部分是 PolarDB 内核与 GPORCA 优化器的适配部分。下半部分是 ORCA 内核,灰色模块是我们在 ORCA 内核中对共享存储特性所做的扩展。
PolarDB 中有 4 类算子需要并行化,下面介绍一个具有代表性的 Seqscan 的算子的并行化。为了最大限度的利用存储的大 IO 带宽,在顺序扫描时,按照 4MB 为单位做逻辑切分,将 IO 尽量打散到不同的盘上,达到所有的盘同时提供读服务的效果。这样做还有一个优势,就是每个只读节点只扫描部分表文件,那么最终能缓存的表大小是所有只读节点的 BufferPool 总和。
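举个例子粗略算一下:一张 1 GB 的表按 4 MB 切分后共有 256 个逻辑单元;如果由 16 个只读节点分摊扫描,每个节点平均只负责约 16 个单元、64 MB 的数据,同时整张表可以被 16 个节点的 BufferPool 合力缓存下来。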
下面的图表中:
倾斜是传统 MPP 固有的问题:
以上两点会导致分布执行时存在长尾进程。
需要注意的是:尽管是动态分配,尽量维护 buffer 的亲和性;另外,每个算子的上下文存储在 worker 的私有内存中,Coordinator 不存储具体表的信息;
下面表格中,当出现大对象时,静态切分出现数据倾斜,而动态扫描仍然能够线性提升。
那我们利用数据共享的特点,还可以支持云原生下极致弹性的要求:把 Coordinator 全链路上各个模块所需要的外部依赖存在共享存储上,同时 worker 全链路上需要的运行时参数通过控制链路从 Coordinator 同步过来,使 Coordinator 和 worker 无状态化。
因此:
多个计算节点间的数据一致性通过等待回放和 Global Snapshot 机制来完成:等待回放保证所有 Worker 能看到所需要的数据版本,而 Global Snapshot 保证了选出一个统一的版本。
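下面用一段仅作示意的草图概括这一协商过程(其中的类型与接口名均为假设,并非 PolarDB 的实际实现):Coordinator 以自身的 ReadLSN 作为全局约定的 LSN,在所有参与节点的快照版本号中取最小值作为全局快照;各 Worker 在本节点回放位点追上约定 LSN 之前等待,然后再基于全局快照执行。

#include <stdint.h>

extern uint64_t poll_replayed_lsn(void);   /* 假设的接口:查询本节点当前回放位点 */

typedef struct MppConsistencyState
{
    uint64_t read_lsn;        /* Coordinator 的 ReadLSN,作为全局约定的 LSN */
    uint64_t global_snapshot; /* 所有节点快照版本号中的最小值 */
} MppConsistencyState;

static MppConsistencyState
negotiate_consistency(const uint64_t *node_snapshot_csn, int nnodes,
                      uint64_t coordinator_read_lsn)
{
    MppConsistencyState state;
    int i;

    state.read_lsn = coordinator_read_lsn;
    state.global_snapshot = node_snapshot_csn[0];
    for (i = 1; i < nnodes; i++)
    {
        if (node_snapshot_csn[i] < state.global_snapshot)
            state.global_snapshot = node_snapshot_csn[i];
    }
    return state;
}

/* 每个 Worker 在执行前需等待本节点回放位点追上约定的 LSN */
static void
wait_for_replay(uint64_t agreed_lsn)
{
    while (poll_replayed_lsn() < agreed_lsn)
        ;   /* 实际实现中应是带休眠/事件通知的等待,这里从简 */
}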
我们使用 1TB 的 TPC-H 进行了测试,首先对比了 PolarDB 新的分布式并行和单机并行的性能:有 3 个 SQL 提速 60 倍,19 个 SQL 提速 10 倍以上;
另外,我们使用分布式执行引擎测试了增加 CPU 核数时的性能变化:可以看到,从 16 核增加到 128 核的过程中,整体性能呈线性提升;单看 22 条 SQL,随着 CPU 核数增加,每条 SQL 的性能也基本呈线性提升。
与传统 MPP 数据库相比,同样使用 16 个节点,PolarDB 的性能是传统 MPP 数据库的 90%。
前面讲到,PolarDB 的分布式执行引擎做到了弹性扩展,且数据不需要重分布。当 dop = 8 时,性能是传统 MPP 数据库的 5.6 倍。
OLTP 业务中会创建大量的索引。经分析,建索引过程中约 80% 的时间消耗在排序和构建索引页上,20% 消耗在写入索引页上。通过使用分布式并行来加速排序过程,同时流水化、批量写入索引页。
上述优化能够使得创建索引有 4~5 倍的提升。
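用 Amdahl 定律粗略估算一下(笔者的推算,非原文内容):若只并行化占 80% 的排序与构建索引页部分,即使并行度趋于无穷,整体加速比的上限也只有 1/0.2 = 5 倍;这与实测的 4~5 倍提升相吻合,也解释了为什么还需要对剩下 20% 的写索引页环节做流水化与批量写入。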
PolarDB 是多模数据库,支持时空数据。时空场景的负载是计算密集型和 IO 密集型的,可以借助分布式并行执行来加速。我们针对共享存储开发了扫描共享 RTREE 索引的功能。
本文从架构层面分析了 PolarDB 的技术要点:
后续文章将具体讨论更多的技术细节,比如:如何基于 Shared-Storage 的查询优化器,LogIndex 如何做到高性能,如何闪回到任意时间点,如何在 Shared-Storage 上支持 MPP,如何和 X-Paxos 结合构建高可用等等,敬请期待。
架构原理","slug":"htap-架构原理","link":"#htap-架构原理","children":[]},{"level":3,"title":"分布式优化器","slug":"分布式优化器","link":"#分布式优化器","children":[]},{"level":3,"title":"算子并行化","slug":"算子并行化","link":"#算子并行化","children":[]},{"level":3,"title":"消除数据倾斜问题","slug":"消除数据倾斜问题","link":"#消除数据倾斜问题","children":[]},{"level":3,"title":"SQL 级别弹性扩展","slug":"sql-级别弹性扩展","link":"#sql-级别弹性扩展","children":[]},{"level":3,"title":"事务一致性","slug":"事务一致性","link":"#事务一致性","children":[]},{"level":3,"title":"TPC-H 性能:加速比","slug":"tpc-h-性能-加速比","link":"#tpc-h-性能-加速比","children":[]},{"level":3,"title":"TPC-H 性能:和传统 MPP 数据库的对比","slug":"tpc-h-性能-和传统-mpp-数据库的对比","link":"#tpc-h-性能-和传统-mpp-数据库的对比","children":[]},{"level":3,"title":"分布式执行加速索引创建","slug":"分布式执行加速索引创建","link":"#分布式执行加速索引创建","children":[]},{"level":3,"title":"分布式并行执行加速多模:时空数据库","slug":"分布式并行执行加速多模-时空数据库","link":"#分布式并行执行加速多模-时空数据库","children":[]}]},{"level":2,"title":"总结","slug":"总结","link":"#总结","children":[]}],"git":{"updatedTime":1688442053000},"filePathRelative":"zh/theory/arch-overview.md"}');export{na as comp,ha as data}; diff --git a/assets/arch-overview.html-HH6zgOS9.js b/assets/arch-overview.html-HH6zgOS9.js new file mode 100644 index 00000000000..cce53845128 --- /dev/null +++ b/assets/arch-overview.html-HH6zgOS9.js @@ -0,0 +1 @@ +import{_ as l,a as d,b as h,c as p}from"./9_future_pages-BcUohDxW.js";import{_ as c,r,o as u,c as m,d as a,a as e,w as o,e as g,b as s}from"./app-CWFDhr_k.js";const f="/PolarDB-for-PostgreSQL/assets/2_compute-storage_separation_architecture-DeGBA65n.png",y="/PolarDB-for-PostgreSQL/assets/3_HTAP_architecture-I4sKPnSm.png",b="/PolarDB-for-PostgreSQL/assets/4_principles_of_shared_storage-Hlsdix_w.png",P="/PolarDB-for-PostgreSQL/assets/5_In-memory_page_synchronization-CKftf6Kn.png",_="/PolarDB-for-PostgreSQL/assets/8_solution_to_outdated_pages_LogIndex-DbCiVXvT.png",v="/PolarDB-for-PostgreSQL/assets/10_solutions_to_future_pages-BGGzTBE5.png",w="/PolarDB-for-PostgreSQL/assets/11_issues_of_conventional_streaming_replication-DwTUzOjN.png",T="/PolarDB-for-PostgreSQL/assets/12_Replicate_only_metadata_of_WAL_records-ClVIxSwL.png",L="/PolarDB-for-PostgreSQL/assets/13_optimization1_result-BQy09yeW.png",D="/PolarDB-for-PostgreSQL/assets/14_optimize_log_apply_of_WAL_records-7YKqeruN.png",A="/PolarDB-for-PostgreSQL/assets/15_optimization2_result-CkrrZGbX.png",x="/PolarDB-for-PostgreSQL/assets/16_optimize_log_apply_of_DDL_locks-Cs7jhUrC.png",B="/PolarDB-for-PostgreSQL/assets/17_optimization3_result-BiMnb371.png",S="/PolarDB-for-PostgreSQL/assets/18_recovery_optimization_background-CImW1jwn.png",k="/PolarDB-for-PostgreSQL/assets/19_lazy_recovery-CQLH1lNR.png",W="/PolarDB-for-PostgreSQL/assets/20_recovery_optimization_result-eJpF7n23.png",I="/PolarDB-for-PostgreSQL/assets/21_Persistent_BufferPool-D7Oo-1Tf.png",z="/PolarDB-for-PostgreSQL/assets/22_buffer_pool_structure-Bynk1Re4.png",C="/PolarDB-for-PostgreSQL/assets/23_persistent_buffer_pool_result-B6IpwyKL.png",O="/PolarDB-for-PostgreSQL/assets/24_principles_of_HTAP-DRIqraol.png",q="/PolarDB-for-PostgreSQL/assets/25_distributed_optimizer-33WN7tag.png",Q="/PolarDB-for-PostgreSQL/assets/26_parallelism_of_operators-CevrM8T0.png",H="/PolarDB-for-PostgreSQL/assets/27_parallelism_of_operators_result-B8WUMc35.png",M="/PolarDB-for-PostgreSQL/assets/28_data_skew-Bc-lMd2x.png",R="/PolarDB-for-PostgreSQL/assets/29_Solve_data_skew_result-FMKHOO7m.png",G="/PolarDB-for-PostgreSQL/assets/30_SQL_statement-level_scalability-4b9wO_OC.png",N="/PolarDB-for-PostgreSQL/assets/31_schedule_workloads-fsM4XP26.png",E="/PolarDB-for-Pos
Overview
PolarDB for PostgreSQL (hereafter simplified as PolarDB) is a stable, reliable, scalable, highly available, and secure enterprise-grade database service that is independently developed by Alibaba Cloud to help you increase security compliance and cost-effectiveness. PolarDB is 100% compatible with PostgreSQL. It runs in a proprietary compute-storage separation architecture of Alibaba Cloud to support the horizontal scaling of the storage and computing capabilities.
PolarDB can process a mix of online transaction processing (OLTP) workloads and online analytical processing (OLAP) workloads in parallel. PolarDB also provides a wide range of innovative multi-model database capabilities to help you process, analyze, and search for diversified data, such as spatio-temporal, GIS, image, vector, and graph data.
PolarDB supports various deployment architectures. For example, PolarDB supports compute-storage separation, three-node X-Paxos clusters, and local SSDs.
If you are using a conventional database system and the complexity of your workloads continues to increase, you may face the following challenges as the amount of your business data grows:
To help you resolve the issues that occur in conventional database systems, Alibaba Cloud provides PolarDB. PolarDB runs in a proprietary compute-storage separation architecture of Alibaba Cloud. This architecture has the following benefits:
PolarDB is integrated with various technologies and innovations. This document describes the following two aspects of the PolarDB architecture in sequence: compute-storage separation and hybrid transactional/analytical processing (HTAP). You can find and read the content of your interest with ease.
This section explains the following two aspects of the PolarDB architecture: compute-storage separation and HTAP.
PolarDB supports compute-storage separation. Each PolarDB cluster consists of a computing cluster and a storage cluster. You can flexibly scale out the computing cluster or the storage cluster based on your business requirements.
After the shared-storage architecture is used in PolarDB, the primary node and the read-only nodes share the same physical storage. If the primary node still uses the method that is used in conventional database systems to flush write-ahead logging (WAL) records, the following issues may occur.
To resolve the first issue, PolarDB must support multiple versions for each page. To resolve the second issue, PolarDB must control the speed at which the primary node flushes WAL records.
When read/write splitting is enabled, each individual compute node cannot fully utilize the high I/O throughput that is provided by the shared storage. In addition, you cannot accelerate large queries by adding computing resources. To resolve these issues, PolarDB uses the shared storage-based MPP architecture to accelerate OLAP queries in OLTP scenarios.
PolarDB supports a complete suite of data types that are used in OLTP scenarios. PolarDB also supports two computing engines, which can process these types of data:
When the same hardware resources are used, PolarDB delivers performance that is 90% of the performance delivered by traditional MPP database. PolarDB also provides SQL statement-level scalability. If the computing power of your PolarDB cluster is insufficient, you can allocate more CPU resources to OLAP queries without the need to rearrange data.
The following sections provide more details about compute-storage separation and HTAP.
Compute-storage separation enables the compute nodes of your PolarDB cluster to share the same physical storage. Shared storage brings the following challenges:
The following basic principles of shared storage apply to PolarDB:
In a conventional database system, the primary instance and read-only instances each are allocated independent memory resources and storage resources. The primary instance replicates WAL records to the read-only instances, and the read-only instances read and apply the WAL records. These basic principles also apply to replication state machines.
In a PolarDB cluster, the primary node replicates WAL records to the shared storage. The read-only nodes read and apply the most recent WAL records from the shared storage to ensure that the pages in the memory of the read-only nodes are synchronous with the pages in the memory of the primary node.
In the workflow shown in the preceding figure, the new page that the read-only nodes obtain by applying WAL records is removed from the buffer pools of the read-only nodes. When you query the page on the read-only nodes, the read-only nodes read the page from the shared storage. As a result, only the previous version of the page is returned. This previous version is called an outdated page. The following figure shows more details.
When you query a page on the read-only nodes at a specific point in time, the read-only nodes need to read the base version of the page and the WAL records up to that point in time. Then, the read-only nodes need to apply the WAL records one by one in sequence. The following figure shows more details.
PolarDB needs to maintain an inverted index that stores the mapping from each page to the WAL records of the page. However, the memory capacity of each read-only node is limited. Therefore, these inverted indexes must be persistently stored. To meet this requirement, PolarDB provides LogIndex. LogIndex is an index structure, which is used to persistently store hash data.
LogIndex helps prevent outdated pages and enable the read-only nodes to run in lazy log apply mode. In the lazy log apply mode, the read-only nodes apply only the metadata of the WAL records for dirty pages.
The read-only nodes may return future pages, whose versions are later than the versions that are recorded on the read-only nodes. The following figure shows more details.
The read-only nodes apply WAL records at high speeds in lazy apply mode. However, the speeds may still be lower than the speed at which the primary node flushes WAL records. If the primary node flushes WAL records faster than the read-only nodes apply WAL records, future pages are returned. To prevent future pages, PolarDB must ensure that the speed at which the primary node flushes WAL records does not exceed the speeds at which the read-only nodes apply WAL records. The following figure shows more details.
The full path is long, and the latency on the read-only nodes is high. This may cause an imbalance between the read loads and write loads over the read/write splitting link.
The read-only nodes can read WAL records from the shared storage. Therefore, the primary node can remove the payloads of WAL records and send only the metadata of WAL records to the read-only nodes. This alleviates the pressure on network transmission and reduces the I/O loads on critical paths. The following figure shows more details.
This optimization method significantly reduces the amount of data that needs to be transmitted between the primary node and the read-only nodes. The amount of data that needs to be transmitted decreases by 98%, as shown in the following figure.
Conventional database systems need to read a large number of pages, apply WAL records to these pages one by one, and then flush the updated pages to the disk. To reduce the read I/O loads on critical paths, PolarDB supports compute-storage separation. If the page that you query on the read-only nodes cannot be hit in the buffer pools of the read-only nodes, no I/O loads are generated and only LogIndex records are recorded.
The following I/O operations that are performed by log apply processes can be offloaded to session processes:
In the example shown in the following figure, when the log apply process of a read-only node applies the metadata of a WAL record of a page:
This optimization method significantly reduces the log apply latency and increases the log apply speed by 30 times compared with Amazon Aurora.
When the primary node runs a DDL operation such as DROP TABLE to modify a table, the primary node acquires an exclusive DDL lock on the table. The exclusive DDL lock is replicated to the read-only nodes along with WAL records. The read-only nodes apply the WAL records to acquire the exclusive DDL lock on the table. This ensures that the table cannot be deleted by the primary node when a read-only node is reading the table. Only one copy of the table is stored in the shared storage.
When the applying process of a read-only node applies the exclusive DDL lock, the read-only node may require a long period of time to acquire the exclusive DDL lock on the table. You can optimize the critical path of the log apply process by offloading the task of acquiring the exclusive DDL lock to other processes.
This optimization method ensures that the critical path of the log apply process of a read-only node is not blocked even if the log apply process needs to wait for the release of an exclusive DDL lock.
The three optimization methods in combination significantly reduce replication latency and have the following benefits:
If the read-only nodes apply WAL records at low speeds, your PolarDB cluster may require a long period of time to recover from exceptions such as out of memory (OOM) errors and unexpected crashes. When the direct I/O model is used for the shared storage, the severity of this issue increases.
The preceding sections explain how LogIndex enables the read-only nodes to apply WAL records in lazy log apply mode. In general, the recovery process of the primary node after a restart is the same as the process in which the read-only nodes apply WAL records. In this sense, the lazy log apply mode can also be used to accelerate the recovery of the primary node.
The example in the following figure shows how the optimized recovery method significantly reduces the time that is required to apply 500 MB of WAL records.
After the primary node recovers, a session process may need to apply the pages that the session process reads. When a session process is applying pages, the primary node responds at low speeds for a short period of time. To resolve this issue, PolarDB does not delete pages from the buffer pool of the primary node if the primary node restarts or unexpectedly crashes.
The shared memory of the database engine consists of the following two parts:
Not all pages in the buffer pool of the primary node can be reused. For example, if a process acquires an exclusive lock on a page before the primary node restarts and then unexpectedly crashes, no other processes can release the exclusive lock on the page. Therefore, after the primary node unexpectedly crashes or restarts, it needs to traverse all pages in its buffer pool to identify and remove the pages that cannot be reused. In addition, the recycling of buffer pools depends on Kubernetes.
This optimized buffer pool mechanism ensures the stable performance of your PolarDB cluster before and after a restart.
The shared storage of PolarDB is organized as a storage pool. When read/write splitting is enabled, the theoretical I/O throughput that is supported by the shared storage is infinite. However, large queries can be run only on individual compute nodes, and the CPU, memory, and I/O specifications of a single compute node are limited. Therefore, a single compute node cannot fully utilize the high I/O throughput that is supported by the shared storage or accelerate large queries by acquiring more computing resources. To resolve these issues, PolarDB uses the shared storage-based MPP architecture to accelerate OLAP queries in OLTP scenarios.
In a PolarDB cluster, the physical storage is shared among all compute nodes. Therefore, you cannot use the method of scanning tables in conventional MPP databases to scan tables in PolarDB clusters. PolarDB supports MPP on standalone execution engines and provides optimized shared storage. This shared storage-based MPP architecture is the first architecture of its kind in the industry. We recommend that you familiarize yourself with following basic principles of this architecture before you use PolarDB:
The preceding figure shows an example.
The GPORCA optimizer is extended to provide a set of transformation rules that can recognize shared storage. This enables PolarDB to explore plan search spaces that are unique to shared storage. For example, PolarDB can scan a table as a whole or as different virtual partitions. This is a major difference between shared storage-based MPP and conventional MPP.
The modules in gray in the upper part of the following figure are modules of the database engine. These modules enable the database engine of PolarDB to adapt to the GPORCA optimizer.
The modules in the lower part of the following figure comprise the GPORCA optimizer. Among these modules, the modules in gray are extended modules, which enable the GPORCA optimizer to communicate with the shared storage of PolarDB.
Four types of operators in PolarDB require parallelism. This section describes how to enable parallelism for operators that are used to run sequential scans. To fully utilize the I/O throughput that is supported by the shared storage, PolarDB splits each table into logical units during a sequential scan. Each unit contains 4 MB of data. This way, PolarDB can distribute I/O loads to different disks, and the disks can simultaneously scan data to accelerate the sequential scan. In addition, each read-only node needs to scan only part of each table rather than the entire table. As a result, the total size of tables that can be cached equals the combined size of the buffer pools of all read-only nodes.
Parallelism has the following benefits, as shown in the following figure:
Data skew is a common issue in conventional MPP:
Although a scan task is dynamically distributed, we recommend that you maintain the affinity of buffers at your best. In addition, the context of each operator is stored in the private memory of the worker threads. The coordinator node does not store the information about specific tables.
In the example shown in the following table, PolarDB uses static sharding to shard large objects. During the static sharding process, data skew occurs, but the performance of dynamic scanning can still linearly increase.
Data sharing helps deliver ultimate scalability in cloud-native environments. The full path of the coordinator node involves various modules, and PolarDB can store the external dependencies of these modules to the shared storage. In addition, the full path of a worker thread involves a number of operational parameters, and PolarDB can synchronize these parameters from the coordinator node over the control path. This way, the coordinator node and the worker thread are stateless.
The following conclusions are made based on the preceding analysis:
The log apply wait mechanism and the global snapshot mechanism are used to ensure data consistency among multiple compute nodes. The log apply wait mechanism ensures that all worker threads can obtain the most recent version of each page. The global snapshot mechanism ensures that a unified version of each page can be selected.
A total of 1 TB of data is used for TPC-H testing. First, run 22 SQL statements in a PolarDB cluster and in a conventional database system. The PolarDB cluster supports distributed parallelism, and the conventional database system supports standalone parallelism. The test result shows that the PolarDB cluster executes three SQL statements at speeds that are 60 times higher and 19 statements at speeds that are 10 times higher than the conventional database system.
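Averaged over all 22 TPC-H queries, this works out to an overall speedup of roughly 23 times.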
Then, run a TPC-H test by using a distributed execution engine. The test result shows that the speed at which each of the 22 SQL statements runs linearly increases as the number of cores increases from 16 to 128.
When 16 nodes are configured, PolarDB delivers performance that is 90% of the performance delivered by MPP-based database.
As mentioned earlier, the distributed execution engine of PolarDB supports scalability, and data in PolarDB does not need to be redistributed. When the degree of parallelism (DOP) is 8, PolarDB delivers performance that is 5.6 times the performance delivered by MPP-based database.
A large number of indexes are created in OLTP scenarios. The workloads that you run to create these indexes are divided into two parts: 80% of the workloads are run to sort and create index pages, and 20% of the workloads are run to write index pages. Distributed execution accelerates the process of sorting indexes and supports the batch writing of index pages.
Distributed execution accelerates the creation of indexes by four to five times.
PolarDB is a multi-model database service that supports spatio-temporal data. PolarDB runs CPU-bound workloads and I/O-bound workloads. These workloads can be accelerated by distributed execution. The shared storage of PolarDB supports scans on shared R-tree indexes.
This document describes the crucial technologies that are used in the PolarDB architecture:
More technical details about PolarDB will be discussed in other documents. For example, how the shared storage-based query optimizer runs, how LogIndex achieves high performance, how PolarDB flashes your data back to a specific point in time, how MPP can be implemented in the shared storage, and how PolarDB works with X-Paxos to ensure high availability.
Document","slug":"a-guide-to-this-document","link":"#a-guide-to-this-document","children":[{"level":3,"title":"Compute-Storage Separation","slug":"compute-storage-separation","link":"#compute-storage-separation","children":[]},{"level":3,"title":"HTAP","slug":"htap","link":"#htap","children":[]}]},{"level":2,"title":"PolarDB: Compute-Storage Separation","slug":"polardb-compute-storage-separation","link":"#polardb-compute-storage-separation","children":[{"level":3,"title":"Challenges of Shared Storage","slug":"challenges-of-shared-storage","link":"#challenges-of-shared-storage","children":[]},{"level":3,"title":"Basic Principles of Shared Storage","slug":"basic-principles-of-shared-storage","link":"#basic-principles-of-shared-storage","children":[]},{"level":3,"title":"Data Consistency","slug":"data-consistency","link":"#data-consistency","children":[]},{"level":3,"title":"Low-latency Replication","slug":"low-latency-replication","link":"#low-latency-replication","children":[]},{"level":3,"title":"Recovery Optimization","slug":"recovery-optimization","link":"#recovery-optimization","children":[]}]},{"level":2,"title":"PolarDB HTAP","slug":"polardb-htap","link":"#polardb-htap","children":[{"level":3,"title":"Basic Principles of HTAP","slug":"basic-principles-of-htap","link":"#basic-principles-of-htap","children":[]},{"level":3,"title":"Distributed Optimizer","slug":"distributed-optimizer","link":"#distributed-optimizer","children":[]},{"level":3,"title":"Parallelism of Operators","slug":"parallelism-of-operators","link":"#parallelism-of-operators","children":[]},{"level":3,"title":"Solve the Issue of Data Skew","slug":"solve-the-issue-of-data-skew","link":"#solve-the-issue-of-data-skew","children":[]}]},{"level":2,"title":"SQL Statement-level Scalability","slug":"sql-statement-level-scalability","link":"#sql-statement-level-scalability","children":[{"level":3,"title":"Transactional Consistency","slug":"transactional-consistency","link":"#transactional-consistency","children":[]},{"level":3,"title":"TPC-H Performance: Speedup","slug":"tpc-h-performance-speedup","link":"#tpc-h-performance-speedup","children":[]},{"level":3,"title":"TPC-H Performance: Comparison with Traditional MPP Database","slug":"tpc-h-performance-comparison-with-traditional-mpp-database","link":"#tpc-h-performance-comparison-with-traditional-mpp-database","children":[]},{"level":3,"title":"Index Creation Accelerated by Distributed Execution","slug":"index-creation-accelerated-by-distributed-execution","link":"#index-creation-accelerated-by-distributed-execution","children":[]},{"level":3,"title":"Multi-model Spatio-temporal Database Accelerated by Distributed, Parallel Execution","slug":"multi-model-spatio-temporal-database-accelerated-by-distributed-parallel-execution","link":"#multi-model-spatio-temporal-database-accelerated-by-distributed-parallel-execution","children":[]}]},{"level":2,"title":"Summary","slug":"summary","link":"#summary","children":[]}],"git":{"updatedTime":1688442053000},"filePathRelative":"theory/arch-overview.md"}');export{he as comp,pe as data}; diff --git a/assets/avail-online-promote.html-T9ctesGj.js b/assets/avail-online-promote.html-T9ctesGj.js new file mode 100644 index 00000000000..ab017b21a78 --- /dev/null +++ b/assets/avail-online-promote.html-T9ctesGj.js @@ -0,0 +1,2 @@ +import{_ as d,r as a,o as c,c as p,d as l,a as e,w as t,e as g,b as n}from"./app-CWFDhr_k.js";const 
m="/PolarDB-for-PostgreSQL/assets/online_promote_postmaster-C4ViJDEx.png",u="/PolarDB-for-PostgreSQL/assets/online_promote_startup-DirLTg8T.png",_="/PolarDB-for-PostgreSQL/assets/online_promote_logindex_bgw-D8AmDDQh.png",h={},P=e("h1",{id:"只读节点-online-promote",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#只读节点-online-promote"},[e("span",null,"只读节点 Online Promote")])],-1),L={class:"table-of-contents"},O=g(`PolarDB 是基于共享存储的一写多读架构,与传统数据库的主备架构有所不同:
传统数据库支持 Standby 节点升级为主库节点的 Promote 操作,在不重启的情况下,提升备库节点为主库节点,继续提供读写服务,保证集群高可用的同时,也有效降低了实例的恢复时间 RTO。
PolarDB 同样需要只读备库节点提升为主库节点的 Promote 能力,鉴于只读节点与传统数据库 Standby 节点的不同,PolarDB 提出了一种一写多读架构下只读节点的 OnlinePromote 机制。
使用 pg_ctl
工具对 Replica 节点执行 Promote 操作:
pg_ctl promote -D [datadir]
+
PolarDB 使用和传统数据库一致的备库节点 Promote 方法,触发条件如下:
pg_ctl
工具的 Promote 命令,pg_ctl
工具会向 Postmaster 进程发送信号,接收到信号的 Postmaster 进程再通知其他进程执行相应的操作,完成整个 Promote 操作。recovery.conf
中定义 trigger file 的路径,其他组件通过生成 trigger file 来触发。相比于传统数据库 Standby 节点的 Promote 操作,PolarDB Replica 节点的 OnlinePromote 操作需要多考虑以下几个问题:
SIGTERM
信号给当前所有 Backend 进程。 SIGUSR2
信号给 Startup 进程,通知其结束回放并处理 OnlinePromote 操作;SIGUSR2
信号给 Polar Worker 辅助进程,通知其停止对于部分 LogIndex 数据的解析,因为这部分 LogIndex 数据只对于正常运行期间的 Replica 节点有用处。SIGUSR2
信号给 LogIndex BGW (Background Ground Worker) 后台回放进程,通知其处理 OnlinePromote 操作。POLAR_BG_WAITING_RESET
状态;POLAR_BG_ONLINE_PROMOTE
,至此实例可以对外提供读写服务。LogIndex BGW 进程有自己的状态机,在其生命周期内,一直按照该状态机运行,具体每个状态机的操作内容如下:
POLAR_BG_WAITING_RESET
:LogIndex BGW 进程状态重置,通知其他进程状态机发生变化;POLAR_BG_ONLINE_PROMOTE
:读取 LogIndex 数据,组织并分发回放任务,利用并行回放进程组回放 WAL 日志,该状态的进程需要回放完所有的 LogIndex 数据才会进行状态切换,最后推进后台回放进程的回放位点;POLAR_BG_REDO_NOT_START
:表示回放任务结束;POLAR_BG_RO_BUF_REPLAYING
:Replica 节点正常运行时,进程处于该状态,读取 LogIndex 数据,按照 WAL 日志的顺序回放一定量的 WAL 日志,每回放一轮,便会推进后台回放进程的回放位点;POLAR_BG_PARALLEL_REPLAYING
:LogIndex BGW 进程每次读取一定量的 LogIndex 数据,组织并分发回放任务,利用并行回放进程组回放 WAL 日志,每回放一轮,便会推进后台回放进程的回放位点。LogIndex BGW 进程接收到 Postmaster 的 SIGUSR2
信号后,执行 OnlinePromote 操作的流程如下:
POLAR_BG_WAITING_RESET
;POLAR_BG_ONLINE_PROMOTE
状态; MarkBufferDirty
标记该页面为脏页,等待刷脏;POLAR_BG_REDO_NOT_START
。每个脏页都带有一个 Oldest LSN,该 LSN 在 FlushList 里是有序的,目的是通过这个 LSN 来确定一致性位点。
Replica 节点在 OnlinePromote 过程后,由于同时存在着回放和新的页面写入,如果像主库节点一样,直接将当前的 WAL 日志插入位点设为 Buffer 的 Oldest LSN,可能会导致:比它小的 Buffer 还未落盘,但新的一致性位点已经被设置。
所以 Replica 节点在 OnlinePromote 过程中需要面对两个问题:
PolarDB 在 Replica 节点 OnlinePromote 的过程中,将上述两类情况产生的脏页的 Oldest LSN 都设置为 LogIndex BGW 进程推进的回放位点。只有当标记为相同 Oldest LSN 的 Buffer 都落盘了,才将一致性位点向前推进。
diff --git a/assets/avail-parallel-replay.html-D096vyF0.js b/assets/avail-parallel-replay.html-D096vyF0.js
new file mode 100644
index 00000000000..ae5e293d555
--- /dev/null
+++ b/assets/avail-parallel-replay.html-D096vyF0.js
@@ -0,0 +1,2 @@
WAL 日志并行回放
鉴于 WAL 日志回放在 PolarDB 集群的高可用中起到至关重要的作用,将并行回放 WAL 日志的思想用到常规的日志回放路径上,是一种很好的优化思路。
并行回放 WAL 日志至少可以在以下三个场景下发挥优势:
一条 WAL 日志可能修改多个数据块 Block,因此可以使用如下定义来表示 WAL 日志的回放过程:
- 假设第 i 条 WAL 日志的 LSN 为 $LSN_i$,其修改了 m 个数据块,则定义第 i 条 WAL 日志修改的数据块列表为 $Block_i = [Block_{i,0}, Block_{i,1}, ..., Block_{i,m}]$;
- 定义最小的回放子任务为 $Task_{i,j} = \{LSN_i \rightarrow Block_{i,j}\}$,表示在数据块 $Block_{i,j}$ 上回放第 i 条 WAL 日志;
- 因此,一条修改了 k 个 Block 的 WAL 日志就可以表示成 k 个回放子任务的集合:$TASK_{i,*} = [Task_{i,0}, Task_{i,1}, ..., Task_{i,k}]$;
- 进而,多条 WAL 日志就可以表示成一系列回放子任务的集合:$TASK_{*,*} = [Task_{0,*}, Task_{1,*}, ..., Task_{N,*}]$。

在日志回放子任务集合 $Task_{*,*}$ 中,每个子任务的执行,有时并不依赖于前序子任务的执行结果。假设回放子任务集合如下:$TASK_{*,*} = [Task_{0,*}, Task_{1,*}, Task_{2,*}]$,其中:

- $Task_{0,*} = [Task_{0,0}, Task_{0,1}, Task_{0,2}]$
- $Task_{1,*} = [Task_{1,0}, Task_{1,1}]$
- $Task_{2,*} = [Task_{2,0}]$

并且 $Block_{0,0} = Block_{1,0}$、$Block_{0,1} = Block_{1,1}$、$Block_{0,2} = Block_{2,0}$,则可以并行回放的子任务集合有三个:$[Task_{0,0}, Task_{1,0}]$、$[Task_{0,1}, Task_{1,1}]$、$[Task_{0,2}, Task_{2,0}]$。

综上所述,在整个 WAL 日志所表示的回放子任务集合中,存在很多子任务序列可以并行执行,而且不会影响最终回放结果的一致性。PolarDB 借助这种思想,提出了一种并行任务执行框架,并成功运用到了 WAL 日志回放的过程中。
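在进入该框架的具体实现之前,下面用一段可独立编译运行的 C 示例代码示意上述定义:同一数据块上的子任务必须按 LSN 顺序串行回放,不同数据块上的子任务可以并行回放。示例中的结构体名、LSN 取值(100、200、300)与块编号均为为说明而假设的数据,并非实际实现:

```c
#include <stdio.h>

/* 一个回放子任务 Task_{i,j} = {LSN_i -> Block_{i,j}} 的简化表示 */
typedef struct ReplayTask {
    unsigned long lsn;    /* 第 i 条 WAL 日志的 LSN */
    int           block;  /* 该子任务作用的数据块编号 */
} ReplayTask;

int main(void)
{
    /* 对应正文示例:LSN_0 修改 Block 0/1/2,LSN_1 修改 Block 0/1,LSN_2 修改 Block 2 */
    ReplayTask tasks[] = {
        {100, 0}, {100, 1}, {100, 2},
        {200, 0}, {200, 1},
        {300, 2},
    };
    int ntasks = (int) (sizeof(tasks) / sizeof(tasks[0]));

    /* 按数据块分组:每组内部按 LSN 顺序串行回放,组与组之间可以并行回放 */
    for (int block = 0; block < 3; block++) {
        printf("Block %d 上需要串行回放的子任务:", block);
        for (int i = 0; i < ntasks; i++)
            if (tasks[i].block == block)
                printf(" LSN=%lu", tasks[i].lsn);
        printf("\n");
    }
    return 0;
}
```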
将一段共享内存根据并发进程数目进行等分,每一段作为一个环形队列,分配给一个进程。通过配置参数设定每个环形队列的深度:
环形队列的内容由 Task Node 组成,每个 Task Node 处于以下五种状态之一:Idle、Running、Hold、Finished、Removed。
Idle
:表示该 Task Node 未分配任务;Running
:表示该 Task Node 已经分配任务,正在等待进程执行,或已经在执行;Hold
:表示该 Task Node 有前向依赖的任务,需要等待依赖的任务执行完再执行;Finished
:表示进程组中的进程已经执行完该任务;Removed
:当 Dispatcher 进程发现一个任务的状态已经为 Finished
,那么该任务所有的前置依赖任务也都应该为 Finished
状态,Removed
状态表示 Dispatcher 进程已经将该任务以及该任务所有前置任务都从管理结构体中删除;可以通过该机制保证 Dispatcher 进程按顺序处理有依赖关系的任务执行结果。上述状态机的状态转移过程中,黑色线标识的状态转移过程在 Dispatcher 进程中完成,橙色线标识的状态转移过程在并行回放进程组中完成。
Dispatcher 进程有三个关键数据结构:Task HashMap、Task Running Queue 以及 Task Idle Nodes。
Hold
,需等待前置任务先执行。Idle
状态的 Task Node;Dispatcher 调度策略如下:
Idle
的 Task Node 来调度任务执行;目的是让任务尽量平均分配到不同的进程进行执行。该并行执行针对的是相同类型的任务,它们具有相同的 Task Node 数据结构;在进程组初始化时配置 SchedContext
,指定负责执行具体任务的函数指针:
TaskStartup
表示进程执行任务前需要进行的初始化动作TaskHandler
根据传入的 Task Node,负责执行具体的任务TaskCleanup
表示执行进程退出前需要执行的回收动作进程组中的进程从环形队列中获取一个 Task Node,如果 Task Node 当前的状态是 Hold
,则将该 Task Node 插入到 Hold List 的尾部;如果 Task Node 的状态为 Running,则调用 TaskHandler 执行;如果 TaskHandler 执行失败,则为该 Task Node 设置重新执行前需要等待的调度次数(默认为 3),并将该 Task Node 插入到 Hold List 的头部。
进程优先从 Hold List 头部搜索,获取可执行的 Task;如果 Task 状态为 Running,且等待调用次数为 0,则执行该 Task;如果 Task 状态为 Running,但等待调用次数大于 0,则将等待调用次数减去 1。
根据 LogIndex 章节介绍,LogIndex 数据中记录了 WAL 日志和其修改的数据块之间的对应关系,而且 LogIndex 数据支持使用 LSN 进行检索,鉴于此,PolarDB 数据库在 Standby 节点持续回放 WAL 日志过程中,引入了上述并行任务执行框架,并结合 LogIndex 数据将 WAL 日志的回放任务并行化,提高了 Standby 节点数据同步的速度。
在 Standby 节点的 postgresql.conf
中添加以下参数开启功能:
polar_enable_parallel_replay_standby_mode = ON
+
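该参数通常需要重启 Standby 节点后才会生效(请以参数的实际说明为准)。重启完成后,可以参考如下 SQL 确认该功能已经开启:
SHOW polar_enable_parallel_replay_standby_mode;
+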
PostgreSQL 的备份流程可以总结为以下几步:
backup_label
文件,其中包含基础备份的起始点位置CHECKPOINT
backup_label
文件备份 PostgreSQL 数据库最简便方法是使用 pg_basebackup
工具。
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其数据目录分为以下两类:
由于本地数据目录中的目录和文件不涉及数据库的核心数据,因此在备份数据库时,备份本地数据目录是可选的。可以仅备份共享存储上的数据目录,然后使用 initdb
重新生成新的本地存储目录。但是计算节点的本地配置文件需要被手动备份,如 postgresql.conf
、pg_hba.conf
等文件。
通过以下 SQL 命令可以查看节点的本地数据目录:
postgres=# SHOW data_directory;
+ data_directory
+------------------------
+ /home/postgres/primary
+(1 row)
+
本地数据目录类似于 PostgreSQL 的数据目录,大多数目录和文件都是通过 initdb
生成的。随着数据库服务的运行,本地数据目录中会产生更多的本地文件,如临时文件、缓存文件、配置文件、日志文件等。其结构如下:
$ tree ./ -L 1
+./
+├── base
+├── current_logfiles
+├── global
+├── pg_commit_ts
+├── pg_csnlog
+├── pg_dynshmem
+├── pg_hba.conf
+├── pg_ident.conf
+├── pg_log
+├── pg_logical
+├── pg_logindex
+├── pg_multixact
+├── pg_notify
+├── pg_replslot
+├── pg_serial
+├── pg_snapshots
+├── pg_stat
+├── pg_stat_tmp
+├── pg_subtrans
+├── pg_tblspc
+├── PG_VERSION
+├── pg_xact
+├── polar_cache_trash
+├── polar_dma.conf
+├── polar_fullpage
+├── polar_node_static.conf
+├── polar_rel_size_cache
+├── polar_shmem
+├── polar_shmem_stat_file
+├── postgresql.auto.conf
+├── postgresql.conf
+├── postmaster.opts
+└── postmaster.pid
+
+21 directories, 12 files
+
通过以下 SQL 命令可以查看所有计算节点在共享存储上的共享数据目录:
postgres=# SHOW polar_datadir;
+ polar_datadir
+-----------------------
+ /nvme1n1/shared_data/
+(1 row)
+
共享数据目录中存放 PolarDB for PostgreSQL 的核心数据文件,如表文件、索引文件、WAL 日志、DMA、LogIndex、Flashback Log 等。这些文件被所有节点共享,因此必须被备份。其结构如下:
$ sudo pfs -C disk ls /nvme1n1/shared_data/
+ Dir 1 512 Wed Jan 11 09:34:01 2023 base
+ Dir 1 7424 Wed Jan 11 09:34:02 2023 global
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_tblspc
+ Dir 1 512 Wed Jan 11 09:35:05 2023 pg_wal
+ Dir 1 384 Wed Jan 11 09:35:01 2023 pg_logindex
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_twophase
+ Dir 1 128 Wed Jan 11 09:34:02 2023 pg_xact
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_commit_ts
+ Dir 1 256 Wed Jan 11 09:34:03 2023 pg_multixact
+ Dir 1 0 Wed Jan 11 09:34:03 2023 pg_csnlog
+ Dir 1 256 Wed Jan 11 09:34:03 2023 polar_dma
+ Dir 1 512 Wed Jan 11 09:35:09 2023 polar_fullpage
+ File 1 32 Wed Jan 11 09:35:00 2023 RWID
+ Dir 1 256 Wed Jan 11 10:25:42 2023 pg_replslot
+ File 1 224 Wed Jan 11 10:19:37 2023 polar_non_exclusive_backup_label
+total 16384 (unit: 512Bytes)
+
该工具的主要功能是将一个运行中的 PolarDB for PostgreSQL 数据库的数据目录(包括本地数据目录和共享数据目录)备份到目标目录中。
polar_basebackup takes a base backup of a running PostgreSQL server.
+
+Usage:
+ polar_basebackup [OPTION]...
+
+Options controlling the output:
+ -D, --pgdata=DIRECTORY receive base backup into directory
+ -F, --format=p|t output format (plain (default), tar)
+ -r, --max-rate=RATE maximum transfer rate to transfer data directory
+ (in kB/s, or use suffix "k" or "M")
+ -R, --write-recovery-conf
+ write recovery.conf for replication
+ -T, --tablespace-mapping=OLDDIR=NEWDIR
+ relocate tablespace in OLDDIR to NEWDIR
+ --waldir=WALDIR location for the write-ahead log directory
+ -X, --wal-method=none|fetch|stream
+ include required WAL files with specified method
+ -z, --gzip compress tar output
+ -Z, --compress=0-9 compress tar output with given compression level
+
+General options:
+ -c, --checkpoint=fast|spread
+ set fast or spread checkpointing
+ -C, --create-slot create replication slot
+ -l, --label=LABEL set backup label
+ -n, --no-clean do not clean up after errors
+ -N, --no-sync do not wait for changes to be written safely to disk
+ -P, --progress show progress information
+ -S, --slot=SLOTNAME replication slot to use
+ -v, --verbose output verbose messages
+ -V, --version output version information, then exit
+ --no-slot prevent creation of temporary replication slot
+ --no-verify-checksums
+ do not verify checksums
+ -?, --help show this help, then exit
+
+Connection options:
+ -d, --dbname=CONNSTR connection string
+ -h, --host=HOSTNAME database server host or socket directory
+ -p, --port=PORT database server port number
+ -s, --status-interval=INTERVAL
+ time between status packets sent to server (in seconds)
+ -U, --username=NAME connect as specified database user
+ -w, --no-password never prompt for password
+ -W, --password force password prompt (should happen automatically)
+ --polardata=datadir receive polar data backup into directory
+ --polar_disk_home=disk_home polar_disk_home for polar data backup
+ --polar_host_id=host_id polar_host_id for polar data backup
+ --polar_storage_cluster_name=cluster_name polar_storage_cluster_name for polar data backup
+
polar_basebackup
的参数及用法几乎和 pg_basebackup
一致,新增了以下与共享存储相关的参数:
--polar_disk_home
/ --polar_host_id
/ --polar_storage_cluster_name
:这三个参数指定了用于存放备份共享数据的共享存储节点--polardata
:该参数指定了备份共享存储节点上存放共享数据的路径;如不指定,则默认将共享数据备份到本地数据备份目录的 polar_shared_data/
路径下基础备份可用于搭建一个新的 Replica(RO)节点。如前文所述,一个正在运行中的 PolarDB for PostgreSQL 实例的数据文件分布在各计算节点的本地存储和存储节点的共享存储中。下面将说明如何使用 polar_basebackup
将实例的数据文件备份到一个本地磁盘上,并从这个备份上启动一个 Replica 节点。
首先,在将要部署 Replica 节点的机器上启动 PFSD 守护进程,挂载到正在运行中的共享存储的 PFS 文件系统上。后续启动的 Replica 节点将使用这个守护进程来访问共享存储。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
运行如下命令,将实例 Primary 节点的本地数据和共享数据备份到用于部署 Replica 节点的本地存储路径 /home/postgres/replica1
下:
polar_basebackup \\
+ --host=[Primary节点所在IP] \\
+ --port=[Primary节点所在端口号] \\
+ -D /home/postgres/replica1 \\
+ -X stream --progress --write-recovery-conf -v
+
将看到如下输出:
polar_basebackup: initiating base backup, waiting for checkpoint to complete
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16ADD60 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_359"
+851371/851371 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16ADE30
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+
备份完成后,可以以这个备份目录作为本地数据目录,启动一个新的 Replica 节点。由于本地数据目录中不需要共享存储上已有的共享数据文件,所以删除掉本地数据目录中的 polar_shared_data/
目录:
rm -rf ~/replica1/polar_shared_data
+
重新编辑 Replica 节点的配置文件 ~/replica1/postgresql.conf
:
-polar_hostid=1
++polar_hostid=2
+-synchronous_standby_names='replica1'
+
重新编辑 Replica 节点的复制配置文件 ~/replica1/recovery.conf
:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[Primary节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
启动 Replica 节点:
pg_ctl -D $HOME/replica1 start
+
在 Primary 节点上执行建表并插入数据,在 Replica 节点上可以查到 Primary 节点插入的数据:
$ psql -q \\
+ -h [Primary节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \\
+ -h [Replica节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
基础备份也可以用于搭建一个新的 Standby 节点。如下图所示,Standby 节点与 Primary / Replica 节点各自使用独立的共享存储,与 Primary 节点使用物理复制保持同步。Standby 节点可用于作为主共享存储的灾备。
假设此时用于部署 Standby 计算节点的机器已经准备好用于后备的共享存储 nvme2n1
:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:1 0 40G 0 disk
+└─nvme0n1p1 259:2 0 40G 0 part /etc/hosts
+nvme2n1 259:3 0 70G 0 disk
+nvme1n1 259:0 0 60G 0 disk
+
将这个共享存储格式化为 PFS 格式,并启动 PFSD 守护进程挂载到 PFS 文件系统:
sudo pfs -C disk mkfs nvme2n1
+sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme2n1 -w 2
+
在用于部署 Standby 节点的机器上执行备份,以 ~/standby
作为本地数据目录,以 /nvme2n1/shared_data
作为共享存储目录:
polar_basebackup \\
+ --host=[Primary节点所在IP] \\
+ --port=[Primary节点所在端口号] \\
+ -D /home/postgres/standby \\
+ --polardata=/nvme2n1/shared_data/ \\
+ --polar_storage_cluster_name=disk \\
+ --polar_disk_name=nvme2n1 \\
+ --polar_host_id=3 \\
+ -X stream --progress --write-recovery-conf -v
+
将会看到如下输出。其中,除了 polar_basebackup
的输出以外,还有 PFS 的输出日志:
[PFSD_SDK INF Jan 11 10:11:27.247112][99]pfs_mount_prepare 103: begin prepare mount cluster(disk), PBD(nvme2n1), hostid(3),flags(0x13)
+[PFSD_SDK INF Jan 11 10:11:27.247161][99]pfs_mount_prepare 165: pfs_mount_prepare success for nvme2n1 hostid 3
+[PFSD_SDK INF Jan 11 10:11:27.293900][99]chnl_connection_poll_shm 1238: ack data update s_mount_epoch 1
+[PFSD_SDK INF Jan 11 10:11:27.293912][99]chnl_connection_poll_shm 1266: connect and got ack data from svr, err = 0, mntid 0
+[PFSD_SDK INF Jan 11 10:11:27.293979][99]pfsd_sdk_init 191: pfsd_chnl_connect success
+[PFSD_SDK INF Jan 11 10:11:27.293987][99]pfs_mount_post 208: pfs_mount_post err : 0
+[PFSD_SDK ERR Jan 11 10:11:27.297257][99]pfsd_opendir 1437: opendir /nvme2n1/shared_data/ error: No such file or directory
+[PFSD_SDK INF Jan 11 10:11:27.297396][99]pfsd_mkdir 1320: mkdir /nvme2n1/shared_data
+polar_basebackup: initiating base backup, waiting for checkpoint to complete
+WARNING: a labelfile "/nvme1n1/shared_data//polar_non_exclusive_backup_label" is already on disk
+HINT: POLAR: we overwrite it
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16C91F8 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_373"
+...
+[PFSD_SDK INF Jan 11 10:11:32.992005][99]pfsd_open 539: open /nvme2n1/shared_data/polar_non_exclusive_backup_label with inode 6325, fd 0
+[PFSD_SDK INF Jan 11 10:11:32.993074][99]pfsd_open 539: open /nvme2n1/shared_data/global/pg_control with inode 8373, fd 0
+851396/851396 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16C9300
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+[PFSD_SDK INF Jan 11 10:11:52.378220][99]pfsd_umount_force 247: pbdname nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.378229][99]pfs_umount_prepare 269: pfs_umount_prepare. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404010][99]chnl_connection_release_shm 1164: client umount return : deleted /var/run/pfsd//nvme2n1/99.pid
+[PFSD_SDK INF Jan 11 10:11:52.404171][99]pfs_umount_post 281: pfs_umount_post. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404174][99]pfsd_umount_force 261: umount success for nvme2n1
+
上述命令会在当前机器的本地存储上备份 Primary 节点的本地数据目录,在参数指定的共享存储目录上备份共享数据目录。
重新编辑 Standby 节点的配置文件 ~/standby/postgresql.conf
:
-polar_hostid=1
++polar_hostid=3
+-polar_disk_name='nvme1n1'
+-polar_datadir='/nvme1n1/shared_data/'
++polar_disk_name='nvme2n1'
++polar_datadir='/nvme2n1/shared_data/'
+-synchronous_standby_names='replica1'
+
在 Standby 节点的复制配置文件 ~/standby/recovery.conf
中添加:
+recovery_target_timeline = 'latest'
++primary_slot_name = 'standby1'
+
在 Primary 节点上创建用于与 Standby 进行物理复制的复制槽:
$ psql \\
+ --host=[Primary节点所在IP] --port=5432 \\
+ -d postgres \\
+ -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"
+ slot_name | lsn
+-----------+-----
+ standby1 |
+(1 row)
+
启动 Standby 节点:
pg_ctl -D $HOME/standby start
+
在 Primary 节点上创建表并插入数据,在 Standby 节点上可以查询到数据:
$ psql -q \\
+ -h [Primary节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \\
+ -h [Standby节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
PostgreSQL 的备份流程可以总结为以下几步:
backup_label
文件,其中包含基础备份的起始点位置CHECKPOINT
backup_label
文件备份 PostgreSQL 数据库最简便方法是使用 pg_basebackup
工具。
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其数据目录分为以下两类:
由于本地数据目录中的目录和文件不涉及数据库的核心数据,因此在备份数据库时,备份本地数据目录是可选的。可以仅备份共享存储上的数据目录,然后使用 initdb
重新生成新的本地存储目录。但是计算节点的本地配置文件需要被手动备份,如 postgresql.conf
、pg_hba.conf
等文件。
通过以下 SQL 命令可以查看节点的本地数据目录:
postgres=# SHOW data_directory;
+ data_directory
+------------------------
+ /home/postgres/primary
+(1 row)
+
本地数据目录类似于 PostgreSQL 的数据目录,大多数目录和文件都是通过 initdb
生成的。随着数据库服务的运行,本地数据目录中会产生更多的本地文件,如临时文件、缓存文件、配置文件、日志文件等。其结构如下:
$ tree ./ -L 1
+./
+├── base
+├── current_logfiles
+├── global
+├── pg_commit_ts
+├── pg_csnlog
+├── pg_dynshmem
+├── pg_hba.conf
+├── pg_ident.conf
+├── pg_log
+├── pg_logical
+├── pg_logindex
+├── pg_multixact
+├── pg_notify
+├── pg_replslot
+├── pg_serial
+├── pg_snapshots
+├── pg_stat
+├── pg_stat_tmp
+├── pg_subtrans
+├── pg_tblspc
+├── PG_VERSION
+├── pg_xact
+├── polar_cache_trash
+├── polar_dma.conf
+├── polar_fullpage
+├── polar_node_static.conf
+├── polar_rel_size_cache
+├── polar_shmem
+├── polar_shmem_stat_file
+├── postgresql.auto.conf
+├── postgresql.conf
+├── postmaster.opts
+└── postmaster.pid
+
+21 directories, 12 files
+
通过以下 SQL 命令可以查看所有计算节点在共享存储上的共享数据目录:
postgres=# SHOW polar_datadir;
+ polar_datadir
+-----------------------
+ /nvme1n1/shared_data/
+(1 row)
+
共享数据目录中存放 PolarDB for PostgreSQL 的核心数据文件,如表文件、索引文件、WAL 日志、DMA、LogIndex、Flashback Log 等。这些文件被所有节点共享,因此必须被备份。其结构如下:
$ sudo pfs -C disk ls /nvme1n1/shared_data/
+ Dir 1 512 Wed Jan 11 09:34:01 2023 base
+ Dir 1 7424 Wed Jan 11 09:34:02 2023 global
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_tblspc
+ Dir 1 512 Wed Jan 11 09:35:05 2023 pg_wal
+ Dir 1 384 Wed Jan 11 09:35:01 2023 pg_logindex
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_twophase
+ Dir 1 128 Wed Jan 11 09:34:02 2023 pg_xact
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_commit_ts
+ Dir 1 256 Wed Jan 11 09:34:03 2023 pg_multixact
+ Dir 1 0 Wed Jan 11 09:34:03 2023 pg_csnlog
+ Dir 1 256 Wed Jan 11 09:34:03 2023 polar_dma
+ Dir 1 512 Wed Jan 11 09:35:09 2023 polar_fullpage
+ File 1 32 Wed Jan 11 09:35:00 2023 RWID
+ Dir 1 256 Wed Jan 11 10:25:42 2023 pg_replslot
+ File 1 224 Wed Jan 11 10:19:37 2023 polar_non_exclusive_backup_label
+total 16384 (unit: 512Bytes)
+
该工具的主要功能是将一个运行中的 PolarDB for PostgreSQL 数据库的数据目录(包括本地数据目录和共享数据目录)备份到目标目录中。
polar_basebackup takes a base backup of a running PostgreSQL server.
+
+Usage:
+ polar_basebackup [OPTION]...
+
+Options controlling the output:
+ -D, --pgdata=DIRECTORY receive base backup into directory
+ -F, --format=p|t output format (plain (default), tar)
+ -r, --max-rate=RATE maximum transfer rate to transfer data directory
+ (in kB/s, or use suffix "k" or "M")
+ -R, --write-recovery-conf
+ write recovery.conf for replication
+ -T, --tablespace-mapping=OLDDIR=NEWDIR
+ relocate tablespace in OLDDIR to NEWDIR
+ --waldir=WALDIR location for the write-ahead log directory
+ -X, --wal-method=none|fetch|stream
+ include required WAL files with specified method
+ -z, --gzip compress tar output
+ -Z, --compress=0-9 compress tar output with given compression level
+
+General options:
+ -c, --checkpoint=fast|spread
+ set fast or spread checkpointing
+ -C, --create-slot create replication slot
+ -l, --label=LABEL set backup label
+ -n, --no-clean do not clean up after errors
+ -N, --no-sync do not wait for changes to be written safely to disk
+ -P, --progress show progress information
+ -S, --slot=SLOTNAME replication slot to use
+ -v, --verbose output verbose messages
+ -V, --version output version information, then exit
+ --no-slot prevent creation of temporary replication slot
+ --no-verify-checksums
+ do not verify checksums
+ -?, --help show this help, then exit
+
+Connection options:
+ -d, --dbname=CONNSTR connection string
+ -h, --host=HOSTNAME database server host or socket directory
+ -p, --port=PORT database server port number
+ -s, --status-interval=INTERVAL
+ time between status packets sent to server (in seconds)
+ -U, --username=NAME connect as specified database user
+ -w, --no-password never prompt for password
+ -W, --password force password prompt (should happen automatically)
+ --polardata=datadir receive polar data backup into directory
+ --polar_disk_home=disk_home polar_disk_home for polar data backup
+ --polar_host_id=host_id polar_host_id for polar data backup
+ --polar_storage_cluster_name=cluster_name polar_storage_cluster_name for polar data backup
+
polar_basebackup
的参数及用法几乎和 pg_basebackup
一致,新增了以下与共享存储相关的参数:
--polar_disk_home
/ --polar_host_id
/ --polar_storage_cluster_name
:这三个参数指定了用于存放备份共享数据的共享存储节点--polardata
:该参数指定了备份共享存储节点上存放共享数据的路径;如不指定,则默认将共享数据备份到本地数据备份目录的 polar_shared_data/
路径下基础备份可用于搭建一个新的 Replica(RO)节点。如前文所述,一个正在运行中的 PolarDB for PostgreSQL 实例的数据文件分布在各计算节点的本地存储和存储节点的共享存储中。下面将说明如何使用 polar_basebackup
将实例的数据文件备份到一个本地磁盘上,并从这个备份上启动一个 Replica 节点。
首先,在将要部署 Replica 节点的机器上启动 PFSD 守护进程,挂载到正在运行中的共享存储的 PFS 文件系统上。后续启动的 Replica 节点将使用这个守护进程来访问共享存储。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
运行如下命令,将实例 Primary 节点的本地数据和共享数据备份到用于部署 Replica 节点的本地存储路径 /home/postgres/replica1
下:
polar_basebackup \\
+ --host=[Primary节点所在IP] \\
+ --port=[Primary节点所在端口号] \\
+ -D /home/postgres/replica1 \\
+ -X stream --progress --write-recovery-conf -v
+
将看到如下输出:
polar_basebackup: initiating base backup, waiting for checkpoint to complete
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16ADD60 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_359"
+851371/851371 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16ADE30
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+
备份完成后,可以以这个备份目录作为本地数据目录,启动一个新的 Replica 节点。由于本地数据目录中不需要共享存储上已有的共享数据文件,所以删除掉本地数据目录中的 polar_shared_data/
目录:
rm -rf ~/replica1/polar_shared_data
+
重新编辑 Replica 节点的配置文件 ~/replica1/postgresql.conf
:
-polar_hostid=1
++polar_hostid=2
+-synchronous_standby_names='replica1'
+
重新编辑 Replica 节点的复制配置文件 ~/replica1/recovery.conf
:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[Primary节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
启动 Replica 节点:
pg_ctl -D $HOME/replica1 start
+
在 Primary 节点上执行建表并插入数据,在 Replica 节点上可以查到 Primary 节点插入的数据:
$ psql -q \\
+ -h [Primary节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \\
+ -h [Replica节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
基础备份也可以用于搭建一个新的 Standby 节点。如下图所示,Standby 节点与 Primary / Replica 节点各自使用独立的共享存储,与 Primary 节点使用物理复制保持同步。Standby 节点可用于作为主共享存储的灾备。
假设此时用于部署 Standby 计算节点的机器已经准备好用于后备的共享存储 nvme2n1
:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:1 0 40G 0 disk
+└─nvme0n1p1 259:2 0 40G 0 part /etc/hosts
+nvme2n1 259:3 0 70G 0 disk
+nvme1n1 259:0 0 60G 0 disk
+
将这个共享存储格式化为 PFS 格式,并启动 PFSD 守护进程挂载到 PFS 文件系统:
sudo pfs -C disk mkfs nvme2n1
+sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme2n1 -w 2
+
在用于部署 Standby 节点的机器上执行备份,以 ~/standby
作为本地数据目录,以 /nvme2n1/shared_data
作为共享存储目录:
polar_basebackup \\
+ --host=[Primary节点所在IP] \\
+ --port=[Primary节点所在端口号] \\
+ -D /home/postgres/standby \\
+ --polardata=/nvme2n1/shared_data/ \\
+ --polar_storage_cluster_name=disk \\
+ --polar_disk_name=nvme2n1 \\
+ --polar_host_id=3 \\
+ -X stream --progress --write-recovery-conf -v
+
将会看到如下输出。其中,除了 polar_basebackup
的输出以外,还有 PFS 的输出日志:
[PFSD_SDK INF Jan 11 10:11:27.247112][99]pfs_mount_prepare 103: begin prepare mount cluster(disk), PBD(nvme2n1), hostid(3),flags(0x13)
+[PFSD_SDK INF Jan 11 10:11:27.247161][99]pfs_mount_prepare 165: pfs_mount_prepare success for nvme2n1 hostid 3
+[PFSD_SDK INF Jan 11 10:11:27.293900][99]chnl_connection_poll_shm 1238: ack data update s_mount_epoch 1
+[PFSD_SDK INF Jan 11 10:11:27.293912][99]chnl_connection_poll_shm 1266: connect and got ack data from svr, err = 0, mntid 0
+[PFSD_SDK INF Jan 11 10:11:27.293979][99]pfsd_sdk_init 191: pfsd_chnl_connect success
+[PFSD_SDK INF Jan 11 10:11:27.293987][99]pfs_mount_post 208: pfs_mount_post err : 0
+[PFSD_SDK ERR Jan 11 10:11:27.297257][99]pfsd_opendir 1437: opendir /nvme2n1/shared_data/ error: No such file or directory
+[PFSD_SDK INF Jan 11 10:11:27.297396][99]pfsd_mkdir 1320: mkdir /nvme2n1/shared_data
+polar_basebackup: initiating base backup, waiting for checkpoint to complete
+WARNING: a labelfile "/nvme1n1/shared_data//polar_non_exclusive_backup_label" is already on disk
+HINT: POLAR: we overwrite it
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16C91F8 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_373"
+...
+[PFSD_SDK INF Jan 11 10:11:32.992005][99]pfsd_open 539: open /nvme2n1/shared_data/polar_non_exclusive_backup_label with inode 6325, fd 0
+[PFSD_SDK INF Jan 11 10:11:32.993074][99]pfsd_open 539: open /nvme2n1/shared_data/global/pg_control with inode 8373, fd 0
+851396/851396 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16C9300
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+[PFSD_SDK INF Jan 11 10:11:52.378220][99]pfsd_umount_force 247: pbdname nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.378229][99]pfs_umount_prepare 269: pfs_umount_prepare. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404010][99]chnl_connection_release_shm 1164: client umount return : deleted /var/run/pfsd//nvme2n1/99.pid
+[PFSD_SDK INF Jan 11 10:11:52.404171][99]pfs_umount_post 281: pfs_umount_post. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404174][99]pfsd_umount_force 261: umount success for nvme2n1
+
上述命令会在当前机器的本地存储上备份 Primary 节点的本地数据目录,在参数指定的共享存储目录上备份共享数据目录。
重新编辑 Standby 节点的配置文件 ~/standby/postgresql.conf
:
-polar_hostid=1
++polar_hostid=3
+-polar_disk_name='nvme1n1'
+-polar_datadir='/nvme1n1/shared_data/'
++polar_disk_name='nvme2n1'
++polar_datadir='/nvme2n1/shared_data/'
+-synchronous_standby_names='replica1'
+
在 Standby 节点的复制配置文件 ~/standby/recovery.conf
中添加:
+recovery_target_timeline = 'latest'
++primary_slot_name = 'standby1'
+
在 Primary 节点上创建用于与 Standby 进行物理复制的复制槽:
$ psql \\
+ --host=[Primary节点所在IP] --port=5432 \\
+ -d postgres \\
+ -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"
+ slot_name | lsn
+-----------+-----
+ standby1 |
+(1 row)
+
启动 Standby 节点:
pg_ctl -D $HOME/standby start
+
在 Primary 节点上创建表并插入数据,在 Standby 节点上可以查询到数据:
$ psql -q \\
+ -h [Primary节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \\
+ -h [Standby节点所在IP] \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
传统数据库的主备架构,主备有各自的存储,备节点回放 WAL 日志并读写自己的存储,主备节点在存储层没有耦合。PolarDB 的实现是基于共享存储的一写多读架构,主备使用共享存储中的一份数据。读写节点,也称为主节点或 Primary 节点,可以读写共享存储中的数据;只读节点,也称为备节点或 Replica 节点,仅能各自通过回放日志,从共享存储中读取数据,而不能写入。基本架构图如下所示:
一写多读架构下,只读节点可能从共享存储中读到两类数据页:
未来页:数据页中包含只读节点尚未回放到的数据,比如只读节点回放到 LSN 为 200 的 WAL 日志,但数据页中已经包含 LSN 为 300 的 WAL 日志对应的改动。此类数据页被称为“未来页”。
过去页:数据页中未包含所有回放位点之前的改动,比如只读节点将数据页回放到 LSN 为 200 的 WAL 日志,但该数据页在从 Buffer Pool 淘汰之后,再次从共享存储中读取的数据页中没有包含 LSN 为 200 的 WAL 日志的改动,此类数据页被称为“过去页”。
对于只读节点而言,只需要访问与其回放位点相对应的数据页。如果读取到如上所述的“未来页”和“过去页”应该如何处理呢?
除此之外,Buffer 管理还需要维护一致性位点,对于某个数据页,只读节点仅需回放一致性位点和当前回放位点之间的 WAL 日志即可,从而加速回放效率。
为避免只读节点读取到“未来页”,PolarDB 引入刷脏控制功能,即在主节点要将数据页写入共享存储时,判断所有只读节点是否均已回放到该数据页最近一次修改对应的 WAL 日志。
主节点 Buffer Pool 中的数据页,根据是否包含“未来数据”(即只读节点的回放位点之后新产生的数据),可以分为两类:可以写入存储的和不能写入存储的。该判断依赖两个位点:
刷脏控制判断规则如下:
if buffer latest lsn <= oldest apply lsn
+ flush buffer
+else
+ do not flush buffer
+
为将数据页回放到指定的 LSN 位点,只读节点会维护数据页与该页上的 LSN 的映射关系,这种映射关系保存在 LogIndex 中。LogIndex 可以理解为是一种可以持久化存储的 HashTable。访问数据页时,会从该映射关系中获取数据页需要回放的所有 LSN,依次回放对应的 WAL 日志,最终生成需要使用的数据页。
可见,数据页上的修改越多,其对应的 LSN 也越多,回放所需耗时也越长。为了尽量减少数据页需要回放的 LSN 数量,PolarDB 中引入了一致性位点的概念。
一致性位点表示该位点之前的所有 WAL 日志修改的数据页均已经持久化到存储。主备之间,主节点向备节点发送当前 WAL 日志的写入位点和一致性位点,备节点向主节点反馈当前回放的位点和当前使用的最小 WAL 日志位点。由于一致性位点之前的 WAL 修改都已经写入共享存储,备节点从存储上读取新的数据页面时,无需再回放该位点之前的 WAL 日志,但是备节点回放 Buffer Pool 中的被标记为 Outdate 的数据页面时,有可能需要回放该位点之前的 WAL 日志。因此,主库节点可以根据备节点传回的‘当前使用的最小 WAL 日志位点’和一致性位点,将 LogIndex 中所有小于两个位点的 LSN 清理掉,既加速回放效率,同时还能减少 LogIndex 占用的空间。
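作为参考,主节点与各个只读 / 备节点之间的 WAL 发送位点与回放位点,可以在主节点上通过 PostgreSQL 自带的 pg_stat_replication 视图观察。该视图展示的是标准流复制位点,并不直接包含一致性位点,此处仅作示意:
SELECT application_name, sent_lsn, write_lsn, flush_lsn, replay_lsn
+FROM pg_stat_replication;
+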
为维护一致性位点,PolarDB 为每个 Buffer 引入了一个内存状态,即第一次修改该 Buffer 对应的 LSN,称之为 oldest LSN,所有 Buffer 中最小的 oldest LSN 即为一致性位点。
一种获取一致性位点的方法是遍历 Buffer Pool 中所有 Buffer,找到最小值,但遍历代价较大,CPU 开销和耗时都不能接受。为高效获取一致性位点,PolarDB 引入 FlushList 机制,将 Buffer Pool 中所有脏页按照 oldest LSN 从小到大排序。借助 FlushList,获取一致性位点的时间复杂度可以达到 O(1)。
第一次修改 Buffer 并将其标记为脏时,将该 Buffer 插入到 FlushList 中,并设置其 oldest LSN。Buffer 被写入存储时,将该内存中的标记清除。
为高效推进一致性位点,PolarDB 的后台刷脏进程(bgwriter)采用“先被修改的 Buffer 先落盘”的刷脏策略,即 bgwriter 会从前往后遍历 FlushList,逐个刷脏,一旦有脏页写入存储,一致性位点就可以向前推进。以上图为例,如果 oldest LSN 为 10 的 Buffer 落盘,一致性位点就可以推进到 30。
为进一步提升一致性位点的推进效率,PolarDB 实现了并行刷脏。每个后台刷脏进程会从 FlushList 中获取一批数据页进行刷脏。
引入刷脏控制之后,仅满足刷脏条件的 Buffer 才能写入存储,假如某个 Buffer 修改非常频繁,可能导致 Buffer Latest LSN 总是大于 Oldest Apply LSN,该 Buffer 始终无法满足刷脏条件,此类 Buffer 我们称之为热点页。热点页会导致一致性位点无法推进,为解决热点页的刷脏问题,PolarDB 引入了 Copy Buffer 机制。
Copy Buffer 机制会将特定的、不满足刷脏条件的 Buffer 从 Buffer Pool 中拷贝至新增的 Copy Buffer Pool 中,Copy Buffer Pool 中的 Buffer 不会再被修改,其对应的 Latest LSN 也不会更新,随着 Oldest Apply LSN 的推进,Copy Buffer 会逐步满足刷脏条件,从而可以将 Copy Buffer 落盘。
引入 Copy Buffer 机制后,刷脏的流程如下:
如下图中,[oldest LSN, latest LSN]
为 [30, 500]
的 Buffer 被认为是热点页,将当前 Buffer 拷贝至 Copy Buffer Pool 中,随后该数据页再次被修改,假设修改对应的 LSN 为 600,则设置其 Oldest LSN 为 600,并将其从 FlushList 中删除,然后追加至 FlushList 末尾。此时,Copy Buffer 中数据页不会再修改,其 Latest LSN 始终为 500,若满足刷脏条件,则可以将 Copy Buffer 写入存储。
需要注意的是,引入 Copy Buffer 之后,一致性位点的计算方法有所改变。FlushList 中的 Oldest LSN 不再是最小的 Oldest LSN,Copy Buffer Pool 中可能存在更小的 oldest LSN。因此,除考虑 FlushList 中的 Oldest LSN 之外,还需要遍历 Copy Buffer Pool,找到 Copy Buffer Pool 中最小的 Oldest LSN,取两者的最小值即为一致性位点。
PolarDB 引入的一致性位点概念,与 checkpoint 的概念类似。PolarDB 中 checkpoint 位点表示该位点之前的所有数据都已经落盘,数据库 Crash Recovery 时可以从 checkpoint 位点开始恢复,提升恢复效率。普通的 checkpoint 会将所有 Buffer Pool 中的脏页以及其他内存数据落盘,这个过程可能耗时较长且在此期间 I/O 吞吐较大,可能会对正常的业务请求产生影响。
借助一致性位点,PolarDB 中引入了一种特殊的 checkpoint:Lazy Checkpoint。之所以称之为 Lazy(懒惰的),是与普通的 checkpoint 相比,lazy checkpoint 不会把 Buffer Pool 中所有的脏页落盘,而是直接使用当前的一致性位点作为 checkpoint 位点,极大地提升了 checkpoint 的执行效率。
Lazy Checkpoint 的整体思路是将普通 checkpoint 一次性刷大量脏页落盘的逻辑转换为后台刷脏进程持续不断落盘并维护一致性位点的逻辑。需要注意的是,Lazy Checkpoint 与 PolarDB 中 Full Page Write 的功能有冲突,开启 Full Page Write 之后会自动关闭该功能。
',44),d=[h];function L(B,g){return p(),o("div",null,d)}const y=r(c,[["render",L],["__file","buffer-management.html.vue"]]),P=JSON.parse('{"path":"/zh/theory/buffer-management.html","title":"缓冲区管理","lang":"zh-CN","frontmatter":{},"headers":[{"level":2,"title":"背景介绍","slug":"背景介绍","link":"#背景介绍","children":[]},{"level":2,"title":"术语解释","slug":"术语解释","link":"#术语解释","children":[]},{"level":2,"title":"刷脏控制","slug":"刷脏控制","link":"#刷脏控制","children":[]},{"level":2,"title":"一致性位点","slug":"一致性位点","link":"#一致性位点","children":[{"level":3,"title":"FlushList","slug":"flushlist","link":"#flushlist","children":[]},{"level":3,"title":"并行刷脏","slug":"并行刷脏","link":"#并行刷脏","children":[]}]},{"level":2,"title":"热点页","slug":"热点页","link":"#热点页","children":[]},{"level":2,"title":"Lazy Checkpoint","slug":"lazy-checkpoint","link":"#lazy-checkpoint","children":[]}],"git":{"updatedTime":1712565495000},"filePathRelative":"zh/theory/buffer-management.md"}');export{y as comp,P as data}; diff --git a/assets/buffer-management.html-DCPVMkcU.js b/assets/buffer-management.html-DCPVMkcU.js new file mode 100644 index 00000000000..6c944c43a9d --- /dev/null +++ b/assets/buffer-management.html-DCPVMkcU.js @@ -0,0 +1,5 @@ +import{_ as e,c as t,a,b as o}from"./9_future_pages-BcUohDxW.js";import{_ as s,o as r,c as n,e as h}from"./app-CWFDhr_k.js";const l="/PolarDB-for-PostgreSQL/assets/42_buffer_conntrol-B6DLwXMx.png",i="/PolarDB-for-PostgreSQL/assets/42_FlushList-BngbHiEv.png",d="/PolarDB-for-PostgreSQL/assets/43_parr_Flush-_B8PTLS0.png",f="/PolarDB-for-PostgreSQL/assets/44_Copy_Buffer-BLy2qaLH.png",p={},c=h('In a conventional database system, the primary instance and the read-only instances are each allocated a specific amount of exclusive storage space. The read-only instances can apply write-ahead logging (WAL) records and can read and write data to their own storage. A PolarDB cluster consists of a primary node and at least one read-only node. The primary node and the read-only nodes share the same physical storage. The primary node can read and write data to the shared storage. The read-only nodes can read data from the shared storage by applying WAL records but cannot write data to the shared storage. The following figure shows the architecture of a PolarDB cluster.
The read-only nodes may read two types of pages from the shared storage:
Future pages: The pages that the read-only nodes read from the shared storage incorporate changes that are made after the apply log sequence numbers (LSNs) of the pages. For example, the read-only nodes have applied all WAL records up to the WAL record with an LSN of 200 to a page, but the change described by the most recent WAL record with an LSN of 300 has been incorporated into the same page in the shared storage. These pages are called future pages.
Outdated pages: The pages that the read-only nodes read from the shared storage do not incorporate changes that are made before the apply LSNs of the pages. For example, the read-only nodes have applied all WAL records up to the most recent WAL record with an LSN of 200 to a page, but the change described by a previous WAL record with an LSN of 200 has not been incorporated into the same page in the shared storage. These pages are called outdated pages.
Each read-only node expects to read pages that incorporate only the changes made up to the apply LSNs of the pages on that read-only node. If the read-only nodes read outdated pages or future pages from the shared storage, you can take the following measures:
Buffer management involves consistent LSNs. For a specific page, each read-only node needs to apply only the WAL records that are generated between the consistent LSN and the apply LSN. This reduces the time that is required to apply WAL records on the read-only nodes.
PolarDB provides a flushing control mechanism to prevent the read-only nodes from reading future pages from the shared storage. Before the primary node writes a page to the shared storage, the primary node checks whether all the read-only nodes have applied the most recent WAL record of the page.
The pages in the buffer pool of the primary node are divided into the following two types based on whether the pages incorporate the changes that are made after the apply LSNs of the pages: pages that can be flushed to the shared storage and pages that cannot be flushed to the shared storage. This categorization is based on the following LSNs:
The primary node determines whether to flush a dirty page to the shared storage based on the following rules:
if buffer latest lsn <= oldest apply lsn
+ flush buffer
+else
+ do not flush buffer
+
To apply the WAL records of a page up to a specified LSN, each read-only node manages the mapping between the page and the LSNs of all WAL records that are generated for the page. This mapping is stored as a LogIndex. A LogIndex is used as a hash table that can be persistently stored. When a read-only node requests a page, the read-only node traverses the LogIndex of the page to obtain the LSNs of all WAL records that need to be applied. Then, the read-only node applies the WAL records in sequence to generate the most recent version of the page.
For a specific page, more changes mean more LSNs and a longer period of time required to apply WAL records. To minimize the number of WAL records that need to be applied for each page, PolarDB provides consistent LSNs.
After all changes that are made up to the consistent LSN of a page are written to the shared storage, the page is persistently stored. The primary node sends the write LSN and consistent LSN of the page to each read-only node, and each read-only node sends the apply LSN of the page and the min used LSN of the page to the primary node. The read-only nodes do not need to apply the WAL records that are generated before the consistent LSN of the page while reading it from shared storage. But the read-only nodes may still need to apply the WAL records that are generated before the consistent LSN of the page while replaying outdated page in buffer pool. Therefore, all LSNs that are smaller than the consistent LSN and the min used LSN can be removed from the LogIndex of the page. This reduces the number of WAL records that the read-only nodes need to apply. This also reduces the storage space that is occupied by LogIndex records.
PolarDB holds a specific state for each buffer in the memory. The state of a buffer in the memory is represented by the LSN that marks the first change to the buffer. This LSN is called the oldest LSN. The consistent LSN of a page is the smallest oldest LSN among the oldest LSNs of all buffers for the page.
A conventional method of obtaining the consistent LSN of a page requires the primary node to traverse the LSNs of all buffers for the page in the buffer pool. This method causes significant CPU overhead and a long traversal process. To address these issues, PolarDB uses a flush list, in which all dirty pages in the buffer pool are sorted in ascending order based on their oldest LSNs. The flush list helps you reduce the time complexity of obtaining consistent LSNs to O(1).
When a buffer is updated for the first time, the buffer is labeled as dirty. PolarDB inserts the buffer into the flush list and generates an oldest LSN for the buffer. When the buffer is flushed to the shared storage, the label is removed.
To efficiently move the consistent LSN of each page towards the head of the flush list, PolarDB runs a BGWRITER process to traverse all buffers in the flush list in chronological order and flush early buffers to the shared storage one by one. After a buffer is flushed to the shared storage, the consistent LSN is moved one position forward towards the head of the flush list. In the example shown in the preceding figure, if the buffer with an oldest LSN of 10 is flushed to the shared storage, the buffer with an oldest LSN of 30 is moved one position forward towards the head of the flush list. LSN 30 becomes the consistent LSN.
To further improve the efficiency of moving the consistent LSN of each page to the head of the flush list, PolarDB runs multiple BGWRITER processes to flush buffers in parallel. Each BGWRITER process reads a number of buffers from the flush list and flushes the buffers to the shared storage at a time.
After the flushing control mechanism is introduced, PolarDB flushes only the buffers that meet specific flush conditions to the shared storage. If a buffer is frequently updated, its latest LSN may remain larger than its oldest apply LSN. As a result, the buffer can never meet the flush conditions. This type of buffer is called hot buffers. If a page has hot buffers, the consistent LSN of the page cannot be moved towards the head of the flush list. To resolve this issue, PolarDB provides a copy buffering mechanism.
The copy buffering mechanism allows PolarDB to copy buffers that do not meet the flush conditions to a copy buffer pool. Buffers in the copy buffer pool and their latest LSNs are no longer updated. As the oldest apply LSN moves towards the head of the flush list, these buffers start to meet the flush conditions. When these buffers meet the flush conditions, PolarDB can flush them from the copy buffer pool to the shared storage.
The following flush rules apply:
In the example shown in the following figure, the buffer with an oldest LSN of 30 and a latest LSN of 500 is considered a hot buffer. The buffer is updated after it is copied to the copy buffer pool. If the change is marked by LSN 600, PolarDB changes the oldest LSN of the buffer to 600 and moves the buffer to the tail of the flush list. At this time, the copy of the buffer is no longer updated, and the latest LSN of the copy remains 500. When the copy meets the flush conditions, PolarDB flushes the copy to the shared storage.
After the copy buffering mechanism is introduced, PolarDB uses a different method to calculate the consistent LSN of each page. For a specific page, the oldest LSN in the flush list is no longer the smallest oldest LSN because the oldest LSN in the copy buffer pool can be smaller. Therefore, PolarDB needs to compare the oldest LSN in the flush list with the oldest LSN in the copy buffer pool. The smaller oldest LSN is considered the consistent LSN.
PolarDB supports consistent LSNs, which are similar to checkpoints. All changes that are made to a page before the checkpoint LSN of the page are flushed to the shared storage. If a recovery operation is run, PolarDB starts to recover the page from the checkpoint LSN. This improves recovery efficiency. If regular checkpoint LSNs are used, PolarDB flushes all dirty pages in the buffer pool and other in-memory pages to the shared storage. This process may require a long period of time and high I/O throughput. As a result, normal queries may be affected.
Consistent LSNs empower PolarDB to implement lazy checkpointing. If the lazy checkpointing mechanism is used, PolarDB does not flush all dirty pages in the buffer pool to the shared storage. Instead, PolarDB uses consistent LSNs as checkpoint LSNs. This significantly increases checkpointing efficiency.
The underlying logic of the lazy checkpointing mechanism allows PolarDB to run BGWRITER processes that continuously flush dirty pages and maintain consistent LSNs. The lazy checkpointing mechanism cannot be used with the full page write feature. If you enable the full page write feature, the lazy checkpointing mechanism is automatically disabled.
',44),u=[c];function g(m,y){return r(),n("div",null,u)}const S=s(p,[["render",g],["__file","buffer-management.html.vue"]]),N=JSON.parse('{"path":"/theory/buffer-management.html","title":"Buffer Management","lang":"en-US","frontmatter":{},"headers":[{"level":2,"title":"Background Information","slug":"background-information","link":"#background-information","children":[]},{"level":2,"title":"Terms","slug":"terms","link":"#terms","children":[]},{"level":2,"title":"Flushing Control","slug":"flushing-control","link":"#flushing-control","children":[]},{"level":2,"title":"Consistent LSNs","slug":"consistent-lsns","link":"#consistent-lsns","children":[{"level":3,"title":"Flush Lists","slug":"flush-lists","link":"#flush-lists","children":[]},{"level":3,"title":"Parallel Flushing","slug":"parallel-flushing","link":"#parallel-flushing","children":[]}]},{"level":2,"title":"Hot Buffers","slug":"hot-buffers","link":"#hot-buffers","children":[]},{"level":2,"title":"Lazy Checkpointing","slug":"lazy-checkpointing","link":"#lazy-checkpointing","children":[]}],"git":{"updatedTime":1712565495000},"filePathRelative":"theory/buffer-management.md"}');export{S as comp,N as data}; diff --git a/assets/bulk-read-and-extend.html-BBfUOq3E.js b/assets/bulk-read-and-extend.html-BBfUOq3E.js new file mode 100644 index 00000000000..69383dc30b0 --- /dev/null +++ b/assets/bulk-read-and-extend.html-BBfUOq3E.js @@ -0,0 +1,13 @@ +import{_ as i,r as t,o as d,c as u,d as e,a,w as l,b as n,e as h}from"./app-CWFDhr_k.js";const k="/PolarDB-for-PostgreSQL/assets/bulk_read-BLiLaQKw.png",_="/PolarDB-for-PostgreSQL/assets/bulk_vacuum_data-brwcBF5e.png",f="/PolarDB-for-PostgreSQL/assets/bulk_seq_scan-esf1ZpVK.png",g="/PolarDB-for-PostgreSQL/assets/bulk_insert_data-BD1AwARt.png",B="/PolarDB-for-PostgreSQL/assets/bulk_create_index_data-USGv1Det.png",S={},b=a("h1",{id:"预读-预扩展",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#预读-预扩展"},[a("span",null,"预读 / 预扩展")])],-1),x={class:"table-of-contents"},P=a("h2",{id:"背景介绍",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#背景介绍"},[a("span",null,"背景介绍")])],-1),m={href:"https://en.wikipedia.org/wiki/Ext4",target:"_blank",rel:"noopener noreferrer"},v=h('在 PostgreSQL 读取堆表的过程中,会以 8kB 页为单位通过文件系统读取页面至内存缓冲池(Buffer Pool)中。PFS 对于这种数据量较小的 I/O 操作并不是特别高效。所以,PolarDB 为了适配 PFS 而设计了 堆表批量预读。当读取的页数量大于 1 时,将会触发批量预读,一次 I/O 读取 128kB 数据至 Buffer Pool 中。预读对顺序扫描(Sequential Scan)、Vacuum 两种场景性能可以带来一倍左右的提升,在索引创建场景下可以带来 18% 的性能提升。
在 PostgreSQL 中,表空间的扩展过程中将会逐个申请并扩展 8kB 的页。即使是 PostgreSQL 支持的批量页扩展,进行一次 N 页扩展的流程中也包含了 N 次 I/O 操作。这种页扩展不符合 PFS 最小页扩展粒度为 4MB 的特性。为此,PolarDB 设计了堆表批量预扩展,在扩展堆表的过程中,一次 I/O 扩展 4MB 页。在写表频繁的场景下(如装载数据),能够带来一倍的性能提升。
索引创建预扩展与堆表预扩展的功能类似。索引创建预扩展特别针对 PFS 优化索引创建过程。在索引创建的页扩展过程中,一次 I/O 扩展 4MB 页。这种设计可以在创建索引的过程中带来 30% 的性能提升。
注意
当前索引创建预扩展只适配了 B-Tree 索引。其他索引类型暂未支持。
堆表预读的实现步骤主要分为四步:
palloc
在内存中申请一段大小为 N * 页大小
的空间,简称为 p
N * 页大小
的数据拷贝至 p
中p
中 N 个页的内容逐个拷贝至从 Buffer Pool 申请的 N 个 Buffer 中。后续的读取操作会直接命中 Buffer。数据流图如下所示:
预扩展的实现步骤主要分为三步:
索引创建预扩展的实现步骤与预扩展类似,但没有涉及 Buffer 的申请。步骤如下:
堆表预读的参数名为 polar_bulk_read_size
,功能默认开启,默认大小为 128kB。不建议用户自行修改该参数,128kB 是贴合 PFS 的最优值,自行调整并不会带来性能的提升。
关闭功能:
ALTER SYSTEM SET polar_bulk_read_size = 0;
+SELECT pg_reload_conf();
+
打开功能并设置预读大小为 128kB:
ALTER SYSTEM SET polar_bulk_read_size = '128kB';
+SELECT pg_reload_conf();
+
堆表预扩展的参数名为 polar_bulk_extend_size
,功能默认开启,预扩展的大小默认是 4MB。不建议用户自行修改该参数值,4MB 是贴合 PFS 的最优值。
关闭功能:
ALTER SYSTEM SET polar_bulk_extend_size = 0;
+SELECT pg_reload_conf();
+
打开功能并设置预扩展大小为 4MB:
ALTER SYSTEM SET polar_bulk_extend_size = '4MB';
+SELECT pg_reload_conf();
+
索引创建预扩展的参数名为 polar_index_create_bulk_extend_size
,功能默认开启。索引创建预扩展的大小默认是 4MB。不建议用户自行修改该参数值,4MB 是贴合 PFS 的最优值。
关闭功能:
ALTER SYSTEM SET polar_index_create_bulk_extend_size = 0;
+SELECT pg_reload_conf();
+
打开功能,并设置预扩展大小为 4MB:
ALTER SYSTEM SET polar_index_create_bulk_extend_size = 512;
+SELECT pg_reload_conf();
+
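上述三个参数当前的取值与单位,可以参考如下 SQL 从 PostgreSQL 自带的 pg_settings 视图中确认:
SELECT name, setting, unit
+FROM pg_settings
+WHERE name IN ('polar_bulk_read_size',
+               'polar_bulk_extend_size',
+               'polar_index_create_bulk_extend_size');
+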
为了展示堆表预读、堆表预扩展、索引创建预扩展的性能提升效果,我们在 PolarDB for PostgreSQL 14 的实例上进行了测试。
400GB 表的 Vacuum 性能:
400GB 表的 SeqScan 性能:
结论:
400GB 表数据装载性能:
结论:
400GB 表创建索引性能:
结论:
PolarDB for PostgreSQL 的 ePQ 弹性跨机并行查询功能可以将一个大查询分散到多个节点上执行,从而加快查询速度。该功能会涉及到各个节点之间的通信,包括执行计划的分发、执行的控制、结果的获取等等。因此设计了 集群拓扑视图 功能,用于为 ePQ 组件收集并展示集群的拓扑信息,实现跨节点查询。
集群拓扑视图的维护是完全透明的,用户只需要按照部署文档搭建一写多读的集群,集群拓扑视图即可正确维护起来。关键在于需要搭建带有流复制槽的 Replica / Standby 节点。
使用以下接口可以获取集群拓扑视图(执行结果来自于 PolarDB for PostgreSQL 11):
postgres=# SELECT * FROM polar_cluster_info;
+ name | host | port | release_date | version | slot_name | type | state | cpu | cpu_quota | memory | memory_quota | iops | iops_quota | connection | connection_quota | px_connection | px_connection_quota | px_node
+-------+-----------+------+--------------+---------+-----------+---------+-------+-----+-----------+--------+--------------+------+------------+------------+------------------+---------------+---------------------+---------
+ node0 | 127.0.0.1 | 5432 | 20220930 | 1.1.27 | | RW | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | f
+ node1 | 127.0.0.1 | 5433 | 20220930 | 1.1.27 | replica1 | RO | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | t
+ node2 | 127.0.0.1 | 5434 | 20220930 | 1.1.27 | replica2 | RO | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | t
+ node3 | 127.0.0.1 | 5431 | 20220930 | 1.1.27 | standby1 | Standby | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | f
+(4 rows)
+
name
是节点的名称,是自动生成的。host
/ port
表示了节点的连接信息。在这里,都是本地地址。release_date
和 version
标识了 PolarDB 的版本信息。slot_name
是节点连接所使用的流复制槽,只有使用流复制槽连接上来的节点才会被统计在该视图中(除 Primary 节点外)。type
表示节点的类型,有三类: state
表示节点的状态。有 Offline / Going Offline / Disabled / Initialized / Pending / Ready / Unknown 这些状态,其中只有 Ready 才有可能参与 PX 计算,其他的都无法参与 PX 计算。px_node
表示是否参与 PX 计算。对于 ePQ 查询来说,默认只有 Replica 节点参与。可以通过参数控制使用 Primary 节点或者 Standby 节点参与计算:
-- 使 Primary 节点参与计算
+SET polar_px_use_master = ON;
+
+-- 使 Standby 节点参与计算
+SET polar_px_use_standby = ON;
+
提示
从 PolarDB for PostgreSQL 14 起,polar_px_use_master
参数改名为 polar_px_use_primary
。
还可以使用 polar_px_nodes
指定哪些节点参与 PX 计算。例如使用上述集群拓扑视图,可以执行如下命令,让 PX 查询只在 replica1 上执行。
SET polar_px_nodes = 'node1';
+
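在设置 polar_px_nodes 之前,可以先从集群拓扑视图中筛选出处于 Ready 状态且可参与 PX 计算的节点;如需恢复默认行为,可以通过 RESET 将该参数恢复为默认值。以下 SQL 仅作示意:
-- 查看当前处于 Ready 状态且可参与 PX 计算的节点
+SELECT name, host, port FROM polar_cluster_info WHERE state = 'Ready' AND px_node;
+
+-- 恢复 polar_px_nodes 为默认值
+RESET polar_px_nodes;
+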
集群拓扑视图的信息通过流复制进行采集和传递:该功能为流复制协议增加了新的消息类型,用于传递集群拓扑视图,分为以下两个步骤:
集群拓扑视图并非定时更新与发送,因为视图并非一直变化。只有当节点刚启动时,或发生关键状态变化时再进行更新发送。
在具体实现上,Primary 节点收集的全局状态带有版本 generation,只有在接收到节点拓扑变化才会递增;当全局状态版本更新后,才会发送到其他节点,其他节点接收到后,设置到自己的节点上。
状态指标:
与 WAL Sender / WAL Receiver 处理其他消息的做法相同,新增 'm'
和 'M'
消息类型,用于收集节点信息和广播集群拓扑视图。
提供接口获取 Replica 列表,提供 IP / port 等信息,用于 PX 查询。
预留了较多的负载接口,可以根据负载来实现动态调整并行度。(尚未接入)
同时增加了参数 polar_px_use_master
/ polar_px_use_standby
,将 Primary / Standby 加入到 PX 计算中,默认不打开(可能会有正确性问题,因为快照格式、Vacuum 等原因,快照有可能不可用)。
ePQ 会使用上述信息生成节点的连接信息并缓存下来,并在 ePQ 查询中使用该视图。当 generation 更新或者设置了 polar_px_nodes
/ polar_px_use_master
/ polar_px_use_standby
时,该缓存会被重置,并在下次使用时重新生成缓存。
通过 polar_monitor
插件提供视图,将上述集群拓扑视图提供出去,在任意节点均可获取。
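例如,在尚未创建 polar_monitor 插件的节点上,可以参考如下 SQL 创建插件并查询集群拓扑视图(示意):
CREATE EXTENSION IF NOT EXISTS polar_monitor;
+SELECT name, type, state, px_node FROM polar_cluster_info;
+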
Before submitting code for review, please run the unit tests and make sure all tests under src/test, such as regress and isolation, pass. Unit tests or function tests should be submitted along with the code modification.
In addition to code review, this doc offers instructions for the whole cycle of high-quality development, from design, implementation, testing, and documentation to preparing for code review. It raises useful questions for the critical steps during development, covering design, functionality, complexity, testing, naming, documentation, and code review. The rules for code review are summarized as follows.
In doing a code review, you should make sure that:
警告
需要翻译
Before submitting code for review, please run the unit tests and make sure all tests under src/test, such as regress and isolation, pass. Unit tests or function tests should be submitted along with the code modification.
In addition to code review, this doc offers instructions for the whole cycle of high-quality development, from design, implementation, testing, and documentation to preparing for code review. It raises useful questions for the critical steps during development, covering design, functionality, complexity, testing, naming, documentation, and code review. The rules for code review are summarized as follows.
In doing a code review, you should make sure that:
通过 curl
安装 Node 版本管理器 nvm
。
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+command -v nvm
+
如果上一步显示 command not found
,那么请关闭当前终端,然后重新打开。
如果 nvm
已经被成功安装,执行以下命令安装 Node 的 LTS 版本:
nvm install --lts
+
Node.js 安装完毕后,使用如下命令检查安装是否成功:
node -v
+npm -v
+
使用 npm
全局安装软件包管理器 pnpm
:
npm install -g pnpm
+pnpm -v
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令,pnpm
将会根据 package.json
安装所有依赖:
pnpm install
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令:
pnpm run docs:dev
+
文档开发服务器将运行于 http://localhost:8080/PolarDB-for-PostgreSQL/
,打开浏览器即可访问。对 Markdown 文件作出修改后,可以在网页上实时查看变化。
PolarDB for PostgreSQL 的文档资源位于工程根目录的 docs/
目录下。其目录被组织为:
└── docs
+ ├── .vuepress
+ │ ├── configs
+ │ ├── public
+ │ └── styles
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ ├── roadmap
+ └── zh
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ └── roadmap
+
可以看到,docs/zh/
目录下是其父级目录除 .vuepress/
以外的翻版。docs/
目录中全部为英语文档,docs/zh/
目录下全部是相对应的简体中文文档。
.vuepress/
目录下包含文档工程的全局配置信息:
config.ts
:文档配置configs/
:文档配置模块(导航栏 / 侧边栏、英文 / 中文等配置)public/
:公共静态资源styles/
:文档主题默认样式覆盖通过 curl
安装 Node 版本管理器 nvm
。
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+command -v nvm
+
如果上一步显示 command not found
,那么请关闭当前终端,然后重新打开。
如果 nvm
已经被成功安装,执行以下命令安装 Node 的 LTS 版本:
nvm install --lts
+
Node.js 安装完毕后,使用如下命令检查安装是否成功:
node -v
+npm -v
+
使用 npm
全局安装软件包管理器 pnpm
:
npm install -g pnpm
+pnpm -v
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令,pnpm
将会根据 package.json
安装所有依赖:
pnpm install
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令:
pnpm run docs:dev
+
文档开发服务器将运行于 http://localhost:8080/PolarDB-for-PostgreSQL/
,打开浏览器即可访问。对 Markdown 文件作出修改后,可以在网页上实时查看变化。
PolarDB for PostgreSQL 的文档资源位于工程根目录的 docs/
目录下。其目录被组织为:
└── docs
+ ├── .vuepress
+ │ ├── configs
+ │ ├── public
+ │ └── styles
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ ├── roadmap
+ └── zh
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ └── roadmap
+
可以看到,docs/zh/
目录下是其父级目录除 .vuepress/
以外的翻版。docs/
目录中全部为英语文档,docs/zh/
目录下全部是相对应的简体中文文档。
.vuepress/
目录下包含文档工程的全局配置信息:
config.ts
:文档配置configs/
:文档配置模块(导航栏 / 侧边栏、英文 / 中文等配置)public/
:公共静态资源styles/
:文档主题默认样式覆盖PolarDB for PostgreSQL 基于 PostgreSQL 和其它开源项目进行开发,我们的主要目标是为 PostgreSQL 建立一个更大的社区。我们欢迎来自社区的贡献者提交他们的代码或想法。在更远的未来,我们希望这个项目能够被来自阿里巴巴内部和外部的开发者共同管理。
POLARDB_11_STABLE
是 PolarDB 的稳定分支,只接受来自 POLARDB_11_DEV
的合并POLARDB_11_DEV
是 PolarDB 的稳定开发分支,接受来自开源社区的 PR 合并,以及内部开发者的直接推送新的代码将被合并到 POLARDB_11_DEV
上,再由内部开发者定期合并到 POLARDB_11_STABLE
上。
git clone https://github.com/<your-github>/PolarDB-for-PostgreSQL.git
+
从稳定开发分支 POLARDB_11_DEV
上检出一个新的开发分支,假设这个分支名为 dev
:
git checkout POLARDB_11_DEV
+git checkout -b dev
+
git status
+git add <files-to-change>
+git commit -m "modification for dev"
+
首先点击您自己仓库页面上的 Fetch upstream
确保您的稳定开发分支与 PolarDB 官方仓库的稳定开发分支一致。然后将稳定开发分支上的最新修改拉取到本地:
git checkout POLARDB_11_DEV
+git pull
+
接下来将您的开发分支变基到目前的稳定开发分支,并解决冲突:
git checkout dev
+git rebase POLARDB_11_DEV
+-- 解决冲突 --
+git push -f dev
+
点击 New pull request 或 Compare & pull request 按钮,选择对 ApsaraDB/PolarDB-for-PostgreSQL:POLARDB_11_DEV
分支和 <your-github>/PolarDB-for-PostgreSQL:dev
分支进行比较,并撰写 PR 描述。
GitHub 会对您的 PR 进行自动化的回归测试,您的 PR 需要 100% 通过这些测试。
您可以与维护者就代码中的问题进行讨论,并解决他们提出的评审意见。
如果您的代码通过了测试和评审,PolarDB 的维护者将会把您的 PR 合并到稳定分支上。
`,19);function L(x,R){const s=t("ExternalLinkIcon"),l=t("RouteLink");return d(),c("div",null,[p,e("ul",null,[e("li",null,[a("签署 PolarDB for PostgreSQL 的 "),e("a",u,[a("CLA"),n(s)])])]),g,e("ul",null,[b,e("li",null,[a("查阅 "),n(l,{to:"/zh/deploying/deploy.html"},{default:i(()=>[a("进阶部署")]),_:1}),a(" 了解如何从源码编译开发 PolarDB")]),e("li",null,[a("向您的复制源码仓库推送代码,并确保代码符合我们的 "),n(l,{to:"/zh/contributing/coding-style.html"},{default:i(()=>[a("编码风格规范")]),_:1})]),v,_,m]),f,k,e("p",null,[a("在 "),e("a",P,[a("PolarDB for PostgreSQL"),n(s)]),a(" 的代码仓库页面上,点击右上角的 "),D,a(" 按钮复制您自己的 PolarDB 仓库。")]),B])}const S=r(h,[["render",L],["__file","contributing-polardb-kernel.html.vue"]]),E=JSON.parse('{"path":"/zh/contributing/contributing-polardb-kernel.html","title":"贡献代码","lang":"zh-CN","frontmatter":{},"headers":[{"level":2,"title":"分支说明与管理方式","slug":"分支说明与管理方式","link":"#分支说明与管理方式","children":[]},{"level":2,"title":"贡献代码之前","slug":"贡献代码之前","link":"#贡献代码之前","children":[]},{"level":2,"title":"贡献流程","slug":"贡献流程","link":"#贡献流程","children":[]},{"level":2,"title":"代码提交实例说明","slug":"代码提交实例说明","link":"#代码提交实例说明","children":[{"level":3,"title":"复制您自己的仓库","slug":"复制您自己的仓库","link":"#复制您自己的仓库","children":[]},{"level":3,"title":"克隆您的仓库到本地","slug":"克隆您的仓库到本地","link":"#克隆您的仓库到本地","children":[]},{"level":3,"title":"创建本地开发分支","slug":"创建本地开发分支","link":"#创建本地开发分支","children":[]},{"level":3,"title":"在本地仓库修改代码并提交","slug":"在本地仓库修改代码并提交","link":"#在本地仓库修改代码并提交","children":[]},{"level":3,"title":"变基并提交到远程仓库","slug":"变基并提交到远程仓库","link":"#变基并提交到远程仓库","children":[]},{"level":3,"title":"创建 Pull Request","slug":"创建-pull-request","link":"#创建-pull-request","children":[]},{"level":3,"title":"解决代码评审中的问题","slug":"解决代码评审中的问题","link":"#解决代码评审中的问题","children":[]},{"level":3,"title":"代码合并","slug":"代码合并","link":"#代码合并","children":[]}]}],"git":{"updatedTime":1689229584000},"filePathRelative":"zh/contributing/contributing-polardb-kernel.md"}');export{S as comp,E as data}; diff --git a/assets/contributing-polardb-kernel.html-D8Ecfytw.js b/assets/contributing-polardb-kernel.html-D8Ecfytw.js new file mode 100644 index 00000000000..a427e3a08af --- /dev/null +++ b/assets/contributing-polardb-kernel.html-D8Ecfytw.js @@ -0,0 +1,13 @@ +import{_ as i,r as s,o as c,c as d,a as e,b as a,d as n,w as r,e as l}from"./app-CWFDhr_k.js";const p={},h=l('PolarDB for PostgreSQL is an open source product from PostgreSQL and other open source projects. Our main target is to create a larger community for PostgreSQL. Contributors are welcomed to submit their code and ideas. In a long run, we hope this project can be managed by developers from both inside and outside Alibaba.
POLARDB_11_STABLE
is the stable branch of PolarDB, it can accept the merge from POLARDB_11_DEV
onlyPOLARDB_11_DEV
is the stable development branch of PolarDB, it can accept the merge from both pull requests and direct pushes from maintainersNew features will be merged to POLARDB_11_DEV
, and will be merged to POLARDB_11_STABLE
periodically by maintainers
git clone https://github.com/<your-github>/PolarDB-for-PostgreSQL.git
+
Check out a new development branch from the stable development branch POLARDB_11_DEV
. Suppose your branch is named as dev
:
git checkout POLARDB_11_DEV
+git checkout -b dev
+
git status
+git add <files-to-change>
+git commit -m "modification for dev"
+
Click Fetch upstream
on your own repository page to make sure your stable development branch is up do date with PolarDB official. Then pull the latest commits on stable development branch to your local repository.
git checkout POLARDB_11_DEV
+git pull
+
Then, rebase your development branch to the stable development branch, and resolve the conflict:
git checkout dev
+git rebase POLARDB_11_DEV
+-- resolve conflict --
+git push -f dev
+
Click New pull request or Compare & pull request button, choose to compare branches ApsaraDB/PolarDB-for-PostgreSQL:POLARDB_11_DEV
and <your-github>/PolarDB-for-PostgreSQL:dev
, and write PR description.
GitHub will automatically run regression test on your code. Your PR should pass all these checks.
Resolve all problems raised by reviewers and update the PR.
It is done by PolarDB maintainers.
`,19);function L(R,x){const t=s("ExternalLinkIcon"),o=s("RouteLink");return c(),d("div",null,[h,e("ul",null,[e("li",null,[a("Sign the "),e("a",u,[a("CLA"),n(t)]),a(" of PolarDB for PostgreSQL")])]),m,b,e("ul",null,[g,e("li",null,[a("Checkout documentations for "),n(o,{to:"/deploying/deploy.html"},{default:r(()=>[a("Advanced Deployment")]),_:1}),a(" from PolarDB source code.")]),e("li",null,[a("Push changes to your personal fork and make sure they follow our "),n(o,{to:"/contributing/coding-style.html"},{default:r(()=>[a("coding style")]),_:1}),a(".")]),v,f,k]),_,y,P,e("p",null,[a("On GitHub repository of "),e("a",D,[a("PolarDB for PostgreSQL"),n(t)]),a(", Click "),B,a(" button to create your own PolarDB repository.")]),C])}const A=i(p,[["render",L],["__file","contributing-polardb-kernel.html.vue"]]),S=JSON.parse(`{"path":"/contributing/contributing-polardb-kernel.html","title":"Code Contributing","lang":"en-US","frontmatter":{},"headers":[{"level":2,"title":"Branch Description and Management","slug":"branch-description-and-management","link":"#branch-description-and-management","children":[]},{"level":2,"title":"Before Contributing","slug":"before-contributing","link":"#before-contributing","children":[]},{"level":2,"title":"Contributing","slug":"contributing","link":"#contributing","children":[]},{"level":2,"title":"An Example of Submitting Code Change to PolarDB","slug":"an-example-of-submitting-code-change-to-polardb","link":"#an-example-of-submitting-code-change-to-polardb","children":[{"level":3,"title":"Fork Your Own Repository","slug":"fork-your-own-repository","link":"#fork-your-own-repository","children":[]},{"level":3,"title":"Create Local Repository","slug":"create-local-repository","link":"#create-local-repository","children":[]},{"level":3,"title":"Create a Local Development Branch","slug":"create-a-local-development-branch","link":"#create-a-local-development-branch","children":[]},{"level":3,"title":"Make Changes and Commit Locally","slug":"make-changes-and-commit-locally","link":"#make-changes-and-commit-locally","children":[]},{"level":3,"title":"Rebase and Commit to Remote Repository","slug":"rebase-and-commit-to-remote-repository","link":"#rebase-and-commit-to-remote-repository","children":[]},{"level":3,"title":"Create a Pull Request","slug":"create-a-pull-request","link":"#create-a-pull-request","children":[]},{"level":3,"title":"Address Reviewers' Comments","slug":"address-reviewers-comments","link":"#address-reviewers-comments","children":[]},{"level":3,"title":"Merge","slug":"merge","link":"#merge","children":[]}]}],"git":{"updatedTime":1689229584000},"filePathRelative":"contributing/contributing-polardb-kernel.md"}`);export{A as comp,S as data}; diff --git a/assets/cpu-usage-high.html-DFJ8nB9s.js b/assets/cpu-usage-high.html-DFJ8nB9s.js new file mode 100644 index 00000000000..3bb8161fb9f --- /dev/null +++ b/assets/cpu-usage-high.html-DFJ8nB9s.js @@ -0,0 +1,34 @@ +import{_ as d,r as p,o as i,c as k,d as n,a as s,w as e,b as a,e as o}from"./app-CWFDhr_k.js";const u={},_=s("h1",{id:"cpu-使用率高的排查方法",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#cpu-使用率高的排查方法"},[s("span",null,"CPU 使用率高的排查方法")])],-1),g=s("p",null,"在 PolarDB for PostgreSQL 的使用过程中,可能会出现 CPU 使用率异常升高甚至达到满载的情况。本文将介绍造成这种情况的常见原因和排查方法,以及相应的解决方案。",-1),h={class:"table-of-contents"},y=o(`当 CPU 使用率上升时,最有可能的情况是业务量的上涨导致数据库使用的计算资源增多。所以首先需要排查目前数据库的活跃连接数是否比平时高很多。如果数据库配备了监控系统,那么活跃连接数的变化情况可以通过图表的形式观察到;否则可以直接连接到数据库,执行如下 SQL 来获取当前活跃连接数:
SELECT COUNT(*) FROM pg_stat_activity WHERE state NOT LIKE 'idle';
+
pg_stat_activity
是 PostgreSQL 的内置系统视图,该视图返回的每一行都是一个正在运行中的 PostgreSQL 进程,state
列表示进程当前的状态。该列可能的取值为:
active
:进程正在执行查询idle
:进程空闲,正在等待新的客户端命令idle in transaction
:进程处于事务中,但目前暂未执行查询idle in transaction (aborted)
:进程处于事务中,且有一条语句发生过错误fastpath function call
:进程正在执行一个 fast-path 函数disabled
:进程的状态采集功能被关闭上述 SQL 能够查询到所有非空闲状态的进程数,即可能占用 CPU 的活跃连接数。如果活跃连接数较平时更多,则 CPU 使用率的上升是符合预期的。
如果 CPU 使用率上升,而活跃连接数仍处在正常范围内,那么很可能是出现了较多性能较差的慢查询。这些慢查询可能在很长一段时间里占用了较多的 CPU,导致 CPU 使用率上升。PostgreSQL 提供了慢查询日志的功能,执行时间高于 log_min_duration_statement
的 SQL 将会被记录到慢查询日志中。然而当 CPU 使用率接近满载时,整个系统的处理都会变慢,所有 SQL 的执行时间都可能变长,因此慢查询日志中记录的信息可能非常多,并不容易排查。
如果没有在当前数据库中创建过 pg_stat_statements
插件的话,首先需要创建这个插件。该过程将会注册好插件提供的函数及视图:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
+
该插件和数据库系统本身都会不断累积统计信息。为了排查 CPU 异常升高后这段时间内的问题,需要把数据库和插件中留存的统计信息做一次清空,然后开始收集从当前时刻开始的统计信息:
-- 清空当前数据库的统计信息
+SELECT pg_stat_reset();
+-- 清空 pg_stat_statements 插件截止目前收集的统计信息
+SELECT pg_stat_statements_reset();
+
接下来需要等待一段时间(1-2 分钟),使数据库和插件充分采集这段时间内的统计信息。
统计信息收集完毕后,参考使用如下 SQL 查询执行时间最长的 5 条 SQL:
-- < PostgreSQL 13
+SELECT * FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;
+-- >= PostgreSQL 13
+SELECT * FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;
+
当一张表缺少索引,而对该表的查询基本上都是点查时,数据库将不得不使用全表扫描,并在内存中进行过滤条件的判断,处理掉大量的无效记录,导致 CPU 使用率大幅提升。利用 pg_stat_statements
插件的统计信息,参考如下 SQL,可以列出截止目前读取 Buffer 数量最多的 5 条 SQL:
SELECT * FROM pg_stat_statements
+ORDER BY shared_blks_hit + shared_blks_read DESC
+LIMIT 5;
+
SELECT * FROM pg_stat_user_tables
+WHERE n_live_tup > 100000 AND seq_scan > 0
+ORDER BY seq_tup_read DESC
+LIMIT 5;
+
通过系统内置视图 pg_stat_activity
,可以查询出长时间执行不结束的 SQL,这些 SQL 有极大可能造成 CPU 使用率过高。参考以下 SQL 获取查询执行时间最长,且目前还未退出的 5 条 SQL:
SELECT
+ *,
+ extract(epoch FROM (NOW() - xact_start)) AS xact_stay,
+ extract(epoch FROM (NOW() - query_start)) AS query_stay
+FROM pg_stat_activity
+WHERE state NOT LIKE 'idle%'
+ORDER BY query_stay DESC
+LIMIT 5;
+
结合前一步中排查到的 使用全表扫描最多的表,参考如下 SQL 获取 在该表上 执行时间超过一定阈值(比如 10s)的慢查询:
SELECT * FROM pg_stat_activity
+WHERE
+ state NOT LIKE 'idle%' AND
+ query ILIKE '%表名%' AND
+ NOW() - query_start > interval '10s';
+
对于异常占用 CPU 较高的 SQL,如果仅有个别非预期 SQL,则可以通过给后端进程发送信号的方式,先让 SQL 执行中断,使 CPU 使用率恢复正常。参考如下 SQL,以慢查询执行所使用的进程 pid(pg_stat_activity
视图的 pid
列)作为参数,取消该进程上正在执行的查询(pg_cancel_backend),必要时也可以直接中止该后端进程(pg_terminate_backend):
SELECT pg_cancel_backend(pid);
+SELECT pg_terminate_backend(pid);
+
如果执行较慢的 SQL 是业务上必要的 SQL,那么需要对它进行调优。
首先可以对 SQL 涉及到的表进行采样,更新其统计信息,使优化器能够产生更加准确的执行计划。采样需要占用一定的 CPU,最好在业务低谷期运行:
ANALYZE 表名;
+
对于全表扫描较多的表,可以在常用的过滤列上创建索引,以尽量使用索引扫描,减少全表扫描在内存中过滤不符合条件的记录所造成的 CPU 浪费。
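例如,假设上一步定位到的表经常以某一列作为等值过滤条件,则可以参考如下 SQL 在该列上创建索引,并确认查询计划已经从全表扫描变为索引扫描(示例中的表名、列名、索引名仅为示意,请根据实际业务替换;CONCURRENTLY 选项可以避免建索引期间长时间阻塞业务写入):
-- 在常用过滤列上创建索引
+CREATE INDEX CONCURRENTLY 索引名 ON 表名 (过滤列);
+-- 确认查询是否已使用索引扫描
+EXPLAIN SELECT * FROM 表名 WHERE 过滤列 = 过滤值;
+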
`,13);function v(c,C){const r=p("ArticleInfo"),t=p("router-link"),l=p("ExternalLinkIcon");return i(),k("div",null,[_,n(r,{frontmatter:c.$frontmatter},null,8,["frontmatter"]),g,s("nav",h,[s("ul",null,[s("li",null,[n(t,{to:"#业务量上涨"},{default:e(()=>[a("业务量上涨")]),_:1})]),s("li",null,[n(t,{to:"#慢查询"},{default:e(()=>[a("慢查询")]),_:1}),s("ul",null,[s("li",null,[n(t,{to:"#定位执行时间较长的慢查询"},{default:e(()=>[a("定位执行时间较长的慢查询")]),_:1})]),s("li",null,[n(t,{to:"#定位读取-buffer-数量较多的慢查询"},{default:e(()=>[a("定位读取 Buffer 数量较多的慢查询")]),_:1})]),s("li",null,[n(t,{to:"#定位长时间执行不结束的慢查询"},{default:e(()=>[a("定位长时间执行不结束的慢查询")]),_:1})]),s("li",null,[n(t,{to:"#解决方法与优化思路"},{default:e(()=>[a("解决方法与优化思路")]),_:1})])])])])]),y,s("p",null,[s("a",w,[f,n(l)]),a(" 插件能够记录数据库服务器上所有 SQL 语句在优化和执行阶段的统计信息。由于该插件需要使用共享内存,因此插件名需要被配置在 "),E,a(" 参数中。")]),L,s("p",null,[a("借助 PostgreSQL 内置系统视图 "),s("a",S,[m,n(l)]),a(" 中的统计信息,也可以统计出使用全表扫描的次数最多的表。参考如下 SQL,可以获取具备一定规模数据量(元组约为 10 万个)且使用全表扫描获取到的元组数量最多的 5 张表:")]),q])}const T=d(u,[["render",v],["__file","cpu-usage-high.html.vue"]]),Q=JSON.parse('{"path":"/zh/operation/cpu-usage-high.html","title":"CPU 使用率高的排查方法","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2023/03/06","minute":20},"headers":[{"level":2,"title":"业务量上涨","slug":"业务量上涨","link":"#业务量上涨","children":[]},{"level":2,"title":"慢查询","slug":"慢查询","link":"#慢查询","children":[{"level":3,"title":"定位执行时间较长的慢查询","slug":"定位执行时间较长的慢查询","link":"#定位执行时间较长的慢查询","children":[]},{"level":3,"title":"定位读取 Buffer 数量较多的慢查询","slug":"定位读取-buffer-数量较多的慢查询","link":"#定位读取-buffer-数量较多的慢查询","children":[]},{"level":3,"title":"定位长时间执行不结束的慢查询","slug":"定位长时间执行不结束的慢查询","link":"#定位长时间执行不结束的慢查询","children":[]},{"level":3,"title":"解决方法与优化思路","slug":"解决方法与优化思路","link":"#解决方法与优化思路","children":[]}]}],"git":{"updatedTime":1678166880000},"filePathRelative":"zh/operation/cpu-usage-high.md"}');export{T as comp,Q as data}; diff --git a/assets/curve-cluster-BJx5WECB.png b/assets/curve-cluster-BJx5WECB.png new file mode 100644 index 00000000000..d60d7dde8bd Binary files /dev/null and b/assets/curve-cluster-BJx5WECB.png differ diff --git a/assets/customize-dev-env.html-D-yf3RaV.js b/assets/customize-dev-env.html-D-yf3RaV.js new file mode 100644 index 00000000000..1e19d058c0b --- /dev/null +++ b/assets/customize-dev-env.html-D-yf3RaV.js @@ -0,0 +1,154 @@ +import{_ as r,r as l,o as c,c as d,a as s,b as n,d as a,w as o,e as t}from"./app-CWFDhr_k.js";const m={},v=s("h1",{id:"定制开发环境",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#定制开发环境"},[s("span",null,"定制开发环境")])],-1),u=s("h2",{id:"自行构建开发镜像",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#自行构建开发镜像"},[s("span",null,"自行构建开发镜像")])],-1),b={href:"https://hub.docker.com/r/polardb/polardb_pg_devel/tags",target:"_blank",rel:"noopener noreferrer"},k=s("code",null,"polardb/polardb_pg_devel",-1),g=s("code",null,"linux/amd64",-1),h=s("code",null,"linux/arm64",-1),_={href:"https://hub.docker.com/_/ubuntu/tags",target:"_blank",rel:"noopener noreferrer"},f=s("code",null,"ubuntu:20.04",-1),S=t(`FROM ubuntu:20.04
+LABEL maintainer="mrdrivingduck@gmail.com"
+CMD bash
+
+# Timezone problem
+ENV TZ=Asia/Shanghai
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Upgrade softwares
+RUN apt update -y && \\
+ apt upgrade -y && \\
+ apt clean -y
+
+# GCC (force to 9) and LLVM (force to 11)
+RUN apt install -y \\
+ gcc-9 \\
+ g++-9 \\
+ llvm-11-dev \\
+ clang-11 \\
+ make \\
+ gdb \\
+ pkg-config \\
+ locales && \\
+ update-alternatives --install \\
+ /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave \\
+ /usr/bin/g++ g++ /usr/bin/g++-9 && \\
+ update-alternatives --install \\
+ /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-11 60 --slave \\
+ /usr/bin/clang++ clang++ /usr/bin/clang++-11 --slave \\
+ /usr/bin/clang clang /usr/bin/clang-11 && \\
+ apt clean -y
+
+# Generate locale
+RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && \\
+ sed -i '/zh_CN.UTF-8/s/^# //g' /etc/locale.gen && \\
+ locale-gen
+
+# Dependencies
+RUN apt install -y \\
+ libicu-dev \\
+ bison \\
+ flex \\
+ python3-dev \\
+ libreadline-dev \\
+ libgss-dev \\
+ libssl-dev \\
+ libpam0g-dev \\
+ libxml2-dev \\
+ libxslt1-dev \\
+ libldap2-dev \\
+ uuid-dev \\
+ liblz4-dev \\
+ libkrb5-dev \\
+ gettext \\
+ libxerces-c-dev \\
+ tcl-dev \\
+ libperl-dev \\
+ libipc-run-perl \\
+ libaio-dev \\
+ libfuse-dev && \\
+ apt clean -y
+
+# Tools
+RUN apt install -y \\
+ iproute2 \\
+ wget \\
+ ccache \\
+ sudo \\
+ vim \\
+ git \\
+ cmake && \\
+ apt clean -y
+
+# set to empty if GitHub is not barriered
+# ENV GITHUB_PROXY=https://ghproxy.com/
+ENV GITHUB_PROXY=
+
+ENV ZLOG_VERSION=1.2.14
+ENV PFSD_VERSION=pfsd4pg-release-1.2.42-20220419
+
+# install dependencies from GitHub mirror
+RUN cd /usr/local && \\
+ # zlog for PFSD
+ wget --no-verbose --no-check-certificate "\${GITHUB_PROXY}https://github.com/HardySimpson/zlog/archive/refs/tags/\${ZLOG_VERSION}.tar.gz" && \\
+ # PFSD
+ wget --no-verbose --no-check-certificate "\${GITHUB_PROXY}https://github.com/ApsaraDB/PolarDB-FileSystem/archive/refs/tags/\${PFSD_VERSION}.tar.gz" && \\
+ # unzip and install zlog
+ gzip -d $ZLOG_VERSION.tar.gz && \\
+ tar xpf $ZLOG_VERSION.tar && \\
+ cd zlog-$ZLOG_VERSION && \\
+ make && make install && \\
+ echo '/usr/local/lib' >> /etc/ld.so.conf && ldconfig && \\
+ cd .. && \\
+ rm -rf $ZLOG_VERSION* && \\
+ rm -rf zlog-$ZLOG_VERSION && \\
+ # unzip and install PFSD
+ gzip -d $PFSD_VERSION.tar.gz && \\
+ tar xpf $PFSD_VERSION.tar && \\
+ cd PolarDB-FileSystem-$PFSD_VERSION && \\
+ sed -i 's/-march=native //' CMakeLists.txt && \\
+ ./autobuild.sh && ./install.sh && \\
+ cd .. && \\
+ rm -rf $PFSD_VERSION* && \\
+ rm -rf PolarDB-FileSystem-$PFSD_VERSION*
+
+# create default user
+ENV USER_NAME=postgres
+RUN echo "create default user" && \\
+ groupadd -r $USER_NAME && \\
+ useradd -ms /bin/bash -g $USER_NAME $USER_NAME -p '' && \\
+ usermod -aG sudo $USER_NAME
+
+# modify conf
+RUN echo "modify conf" && \\
+ mkdir -p /var/log/pfs && chown $USER_NAME /var/log/pfs && \\
+ mkdir -p /var/run/pfs && chown $USER_NAME /var/run/pfs && \\
+ mkdir -p /var/run/pfsd && chown $USER_NAME /var/run/pfsd && \\
+ mkdir -p /dev/shm/pfsd && chown $USER_NAME /dev/shm/pfsd && \\
+ touch /var/run/pfsd/.pfsd && \\
+ echo "ulimit -c unlimited" >> /home/postgres/.bashrc && \\
+ echo "export PGHOST=127.0.0.1" >> /home/postgres/.bashrc && \\
+ echo "alias pg='psql -h /home/postgres/tmp_master_dir_polardb_pg_1100_bld/'" >> /home/postgres/.bashrc
+
+ENV PATH="/home/postgres/tmp_basedir_polardb_pg_1100_bld/bin:$PATH"
+WORKDIR /home/$USER_NAME
+USER $USER_NAME
+
将上述内容复制到一个文件内(假设文件名为 Dockerfile-PolarDB
)后,使用如下命令构建镜像:
提示
💡 请在下面的高亮行中按需替换 <image_name>
内的 Docker 镜像名称
docker build --network=host \\
+ -t <image_name> \\
+ -f Dockerfile-PolarDB .
+
该方式假设您在一台具有 root 权限的干净 CentOS 7 操作系统上从零开始部署,该环境可以是:
centos:centos7
上启动的 Docker 容器PolarDB for PostgreSQL 需要以非 root 用户运行。以下步骤能够帮助您创建一个名为 postgres
的用户组和一个名为 postgres
的用户。
提示
如果您已经有了一个非 root 用户,但名称不是 postgres:postgres
,可以忽略该步骤;但请注意在后续示例步骤中将命令中用户相关的信息替换为您自己的用户组名与用户名。
下面的命令能够创建用户组 postgres
和用户 postgres
,并为该用户赋予 sudo 和工作目录的权限。需要以 root 用户执行这些命令。
# install sudo
+yum install -y sudo
+# create user and group
+groupadd -r postgres
+useradd -m -g postgres postgres -p ''
+usermod -aG wheel postgres
+# make postgres as sudoer
+chmod u+w /etc/sudoers
+echo 'postgres ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+chmod u-w /etc/sudoers
+# grant access to home directory
+chown -R postgres:postgres /home/postgres/
+echo 'source /etc/bashrc' >> /home/postgres/.bashrc
+# for su postgres
+sed -i 's/4096/unlimited/g' /etc/security/limits.d/20-nproc.conf
+
接下来,切换到 postgres
用户,就可以进行后续的步骤了:
su postgres
+source /etc/bashrc
+cd ~
+
在 PolarDB for PostgreSQL 源码库根目录的 deps/
子目录下,放置了在各个 Linux 发行版上编译安装 PolarDB 和 PFS 需要运行的所有依赖。因此,首先需要克隆 PolarDB 的源码库。
源码下载完毕后,使用 sudo
执行 deps/
目录下的相应脚本 deps-***.sh
自动完成所有的依赖安装。比如:
cd PolarDB-for-PostgreSQL
+sudo ./deps/deps-centos7.sh
+
FROM ubuntu:20.04
+LABEL maintainer="mrdrivingduck@gmail.com"
+CMD bash
+
+# Timezone problem
+ENV TZ=Asia/Shanghai
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Upgrade softwares
+RUN apt update -y && \\
+ apt upgrade -y && \\
+ apt clean -y
+
+# GCC (force to 9) and LLVM (force to 11)
+RUN apt install -y \\
+ gcc-9 \\
+ g++-9 \\
+ llvm-11-dev \\
+ clang-11 \\
+ make \\
+ gdb \\
+ pkg-config \\
+ locales && \\
+ update-alternatives --install \\
+ /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave \\
+ /usr/bin/g++ g++ /usr/bin/g++-9 && \\
+ update-alternatives --install \\
+ /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-11 60 --slave \\
+ /usr/bin/clang++ clang++ /usr/bin/clang++-11 --slave \\
+ /usr/bin/clang clang /usr/bin/clang-11 && \\
+ apt clean -y
+
+# Generate locale
+RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && \\
+ sed -i '/zh_CN.UTF-8/s/^# //g' /etc/locale.gen && \\
+ locale-gen
+
+# Dependencies
+RUN apt install -y \\
+ libicu-dev \\
+ bison \\
+ flex \\
+ python3-dev \\
+ libreadline-dev \\
+ libgss-dev \\
+ libssl-dev \\
+ libpam0g-dev \\
+ libxml2-dev \\
+ libxslt1-dev \\
+ libldap2-dev \\
+ uuid-dev \\
+ liblz4-dev \\
+ libkrb5-dev \\
+ gettext \\
+ libxerces-c-dev \\
+ tcl-dev \\
+ libperl-dev \\
+ libipc-run-perl \\
+ libaio-dev \\
+ libfuse-dev && \\
+ apt clean -y
+
+# Tools
+RUN apt install -y \\
+ iproute2 \\
+ wget \\
+ ccache \\
+ sudo \\
+ vim \\
+ git \\
+ cmake && \\
+ apt clean -y
+
+# set to empty if GitHub is not barriered
+# ENV GITHUB_PROXY=https://ghproxy.com/
+ENV GITHUB_PROXY=
+
+ENV ZLOG_VERSION=1.2.14
+ENV PFSD_VERSION=pfsd4pg-release-1.2.42-20220419
+
+# install dependencies from GitHub mirror
+RUN cd /usr/local && \\
+ # zlog for PFSD
+ wget --no-verbose --no-check-certificate "\${GITHUB_PROXY}https://github.com/HardySimpson/zlog/archive/refs/tags/\${ZLOG_VERSION}.tar.gz" && \\
+ # PFSD
+ wget --no-verbose --no-check-certificate "\${GITHUB_PROXY}https://github.com/ApsaraDB/PolarDB-FileSystem/archive/refs/tags/\${PFSD_VERSION}.tar.gz" && \\
+ # unzip and install zlog
+ gzip -d $ZLOG_VERSION.tar.gz && \\
+ tar xpf $ZLOG_VERSION.tar && \\
+ cd zlog-$ZLOG_VERSION && \\
+ make && make install && \\
+ echo '/usr/local/lib' >> /etc/ld.so.conf && ldconfig && \\
+ cd .. && \\
+ rm -rf $ZLOG_VERSION* && \\
+ rm -rf zlog-$ZLOG_VERSION && \\
+ # unzip and install PFSD
+ gzip -d $PFSD_VERSION.tar.gz && \\
+ tar xpf $PFSD_VERSION.tar && \\
+ cd PolarDB-FileSystem-$PFSD_VERSION && \\
+ sed -i 's/-march=native //' CMakeLists.txt && \\
+ ./autobuild.sh && ./install.sh && \\
+ cd .. && \\
+ rm -rf $PFSD_VERSION* && \\
+ rm -rf PolarDB-FileSystem-$PFSD_VERSION*
+
+# create default user
+ENV USER_NAME=postgres
+RUN echo "create default user" && \\
+ groupadd -r $USER_NAME && \\
+ useradd -ms /bin/bash -g $USER_NAME $USER_NAME -p '' && \\
+ usermod -aG sudo $USER_NAME
+
+# modify conf
+RUN echo "modify conf" && \\
+ mkdir -p /var/log/pfs && chown $USER_NAME /var/log/pfs && \\
+ mkdir -p /var/run/pfs && chown $USER_NAME /var/run/pfs && \\
+ mkdir -p /var/run/pfsd && chown $USER_NAME /var/run/pfsd && \\
+ mkdir -p /dev/shm/pfsd && chown $USER_NAME /dev/shm/pfsd && \\
+ touch /var/run/pfsd/.pfsd && \\
+ echo "ulimit -c unlimited" >> /home/postgres/.bashrc && \\
+ echo "export PGHOST=127.0.0.1" >> /home/postgres/.bashrc && \\
+ echo "alias pg='psql -h /home/postgres/tmp_master_dir_polardb_pg_1100_bld/'" >> /home/postgres/.bashrc
+
+ENV PATH="/home/postgres/tmp_basedir_polardb_pg_1100_bld/bin:$PATH"
+WORKDIR /home/$USER_NAME
+USER $USER_NAME
+
将上述内容复制到一个文件内(假设文件名为 Dockerfile-PolarDB
)后,使用如下命令构建镜像:
TIP
💡 请在下面的高亮行中按需替换 <image_name>
内的 Docker 镜像名称
docker build --network=host \\
+ -t <image_name> \\
+ -f Dockerfile-PolarDB .
+
该方式假设您在一台具有 root 权限的干净 CentOS 7 操作系统上从零开始部署,该环境可以是:
centos:centos7
上启动的 Docker 容器PolarDB for PostgreSQL 需要以非 root 用户运行。以下步骤能够帮助您创建一个名为 postgres
的用户组和一个名为 postgres
的用户。
TIP
如果您已经有了一个非 root 用户,但名称不是 postgres:postgres
,可以忽略该步骤;但请注意在后续示例步骤中将命令中用户相关的信息替换为您自己的用户组名与用户名。
下面的命令能够创建用户组 postgres
和用户 postgres
,并为该用户赋予 sudo 和工作目录的权限。需要以 root 用户执行这些命令。
# install sudo
+yum install -y sudo
+# create user and group
+groupadd -r postgres
+useradd -m -g postgres postgres -p ''
+usermod -aG wheel postgres
+# make postgres as sudoer
+chmod u+w /etc/sudoers
+echo 'postgres ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+chmod u-w /etc/sudoers
+# grant access to home directory
+chown -R postgres:postgres /home/postgres/
+echo 'source /etc/bashrc' >> /home/postgres/.bashrc
+# for su postgres
+sed -i 's/4096/unlimited/g' /etc/security/limits.d/20-nproc.conf
+
接下来,切换到 postgres
用户,就可以进行后续的步骤了:
su postgres
+source /etc/bashrc
+cd ~
+
在 PolarDB for PostgreSQL 源码库根目录的 deps/
子目录下,放置了在各个 Linux 发行版上编译安装 PolarDB 和 PFS 需要运行的所有依赖。因此,首先需要克隆 PolarDB 的源码库。
源码下载完毕后,使用 sudo
执行 deps/
目录下的相应脚本 deps-***.sh
自动完成所有的依赖安装。比如:
cd PolarDB-for-PostgreSQL
+sudo ./deps/deps-centos7.sh
+
在高可用的场景中,为保证 RPO = 0,主库和备库之间需配置为同步复制模式。但当主备库距离较远时,同步复制的方式会存在较大延迟,从而对主库性能带来较大影响。异步复制对主库的性能影响较小,但会带来一定程度的数据丢失。PolarDB for PostgreSQL 采用基于共享存储的一写多读架构,可同时提供 AZ 内 / 跨 AZ / 跨域级别的高可用。为了减少日志同步对主库的影响,PolarDB for PostgreSQL 引入了 DataMax 节点。在进行跨 AZ 甚至跨域同步时,DataMax 节点可以作为主库日志的中转节点,能够以较低成本实现零数据丢失的同时,降低日志同步对主库性能的影响。
PolarDB for PostgreSQL 基于物理流复制实现主备库之间的数据同步,主库与备库的流复制模式分为 同步模式 及 异步模式 两种:
synchronous_standby_names
参数开启备库同步后,可以通过 synchronous_commit
参数设置主库及备库之间的同步级别,包括: remote_write
:主库的事务提交需等待对应 WAL 日志写入主库磁盘文件及备库的系统缓存中后,才能进行事务提交的后续操作;on
:主库的事务提交需等待对应 WAL 日志已写入主库及备库的磁盘文件中后,才能进行事务提交的后续操作;remote_apply
:主库的事务提交需等待对应 WAL 日志写入主库及备库的磁盘文件中,并且备库已经回放完相应 WAL 日志使备库上的查询对该事务可见后,才能进行事务提交的后续操作。同步模式保证了主库的事务提交操作需等待备库接收到对应的 WAL 日志数据之后才可执行,实现了主库与备库之间的零数据丢失,可保证 RPO = 0。然而,该模式下主库的事务提交操作能否继续进行依赖于备库的 WAL 日志接收结果,当主备之间距离较远导致传输延迟较大时,同步模式会对主库的性能带来影响。极端情况下,若备库异常崩溃,则主库会一直阻塞等待备库,导致无法正常提供服务。
针对传统主备模式下同步复制对主库性能影响较大的问题,PolarDB for PostgreSQL 新增了 DataMax 节点用于实现远程同步,该模式下的高可用架构如下所示:
其中:
DataMax 是一种新的节点角色,用户需要通过配置文件来标识当前节点是否为 DataMax 节点。DataMax 模式下,Startup 进程在回放完 DataMax 节点自身日志之后,从 PM_HOT_STANDBY
进入到 PM_DATAMAX
模式。PM_DATAMAX
模式下,Startup 进程仅进行相关信号及状态的处理,并通知 Postmaster 进程启动流复制,Startup 进程不再进行日志回放的操作。因此 DataMax 节点不会保存 Primary 节点的数据文件,从而降低了存储成本。
如上图所示,DataMax 节点通过 WalReceiver 进程向 Primary 节点发起流复制请求,接收并保存 Primary 节点发送的 WAL 日志信息;同时通过 WalSender 进程将所接收的主库 WAL 日志发送给异地的备库节点;备库节点接收到 WAL 日志后,通知其 Startup 进程进行日志回放,从而实现备库节点与 Primary 节点的数据同步。
DataMax 节点在数据目录中新增了 polar_datamax/
目录,用于保存所接收的主库 WAL 日志。DataMax 节点自身的 WAL 日志仍保存在原始目录下,两者的 WAL 日志不会相互覆盖,DataMax 节点也可以有自身的独有数据。
由于 DataMax 节点不会回放 Primary 节点的日志数据,在 DataMax 节点因为异常原因需要重启恢复时,就有了日志起始位点的问题。DataMax 节点通过 polar_datamax_meta
元数据文件存储相关的位点信息,以此来确认运行的起始位点:
InvalidXLogRecPtr
位点,表明其需要从 Primary 节点当前最旧的位点开始复制; Primary 节点接收到 InvalidXLogRecPtr
的流复制请求之后,会开始从当前最旧且完整的 WAL segment 文件开始发送 WAL 日志,并将相应复制槽的 restart_lsn
设置为该位点;如下图所示,增加 DataMax 节点后,若 Primary 节点与 Replica 节点同时异常,或存储无法提供服务时,则可将位于不同可用区的 Standby 节点提升为 Primary 节点,保证服务的可用性。在将 Standby 节点提升为 Primary 节点并向外提供服务之前,会确认 Standby 节点是否已从 DataMax 节点拉取完所有日志,待 Standby 节点获取完所有日志后才会将其提升为 Primary 节点。由于 DataMax 节点与 Primary 节点为同步复制,因此该场景下可保证 RPO = 0。
此外,DataMax 节点在进行日志清理时,除了保留下游 Standby 节点尚未接收的 WAL 日志文件以外,还会保留上游 Primary 节点尚未删除的 WAL 日志文件,避免 Primary 节点异常后,备份系统无法获取到 Primary 节点相较于 DataMax 节点多出的日志信息,保证集群数据的完整性。
若 DataMax 节点异常,则优先尝试通过重启进行恢复;若重启失败则会对其进行重建。因 DataMax 节点与 Primary 节点的存储彼此隔离,因此两者的数据不会互相影响。此外,DataMax 节点同样可以使用计算存储分离架构,确保 DataMax 节点的异常不会导致其存储的 WAL 日志数据丢失。
类似地,DataMax 节点实现了如下几种日志同步模式,用户可以根据具体业务需求进行相应配置:
综上,通过 DataMax 日志中转节点降低日志同步延迟、分流 Primary 节点的日志传输压力,在性能稳定的情况下,可以保障跨 AZ / 跨域 RPO = 0 的高可用。
初始化 DataMax 节点时需要指定 Primary 节点的 system identifier:
# 获取 Primary 节点的 system identifier
+~/tmp_basedir_polardb_pg_1100_bld/bin/pg_controldata -D ~/primary | grep 'system identifier'
+
+# 创建 DataMax 节点
+# -i 参数指定的 [primary_system_identifier] 为上一步得到的 Primary 节点 system identifier
+~/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D datamax -i [primary_system_identifier]
+
+# 如有需要,参考 Primary 节点,对 DataMax 节点的共享存储进行初始化
+sudo pfs -C disk mkdir /nvme0n1/dm_shared_data
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh ~/datamax/ /nvme0n1/dm_shared_data/
+
由于 DataMax 节点默认为只读模式,无法创建用户和插件,因此需要先以可写节点的形式拉起 DataMax 节点,创建好用户和插件,以方便后续运维。
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D ~/datamax
+
创建管理账号及插件:
postgres=# create user test superuser;
+CREATE ROLE
+postgres=# create extension polar_monitor;
+CREATE EXTENSION
+
关闭 DataMax 节点:
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl stop -D ~/datamax;
+
在 DataMax 节点的 recovery.conf
中添加 polar_datamax_mode
参数,表示当前节点为 DataMax 节点:
polar_datamax_mode = standalone
+recovery_target_timeline='latest'
+primary_slot_name='datamax'
+primary_conninfo='host=[主节点的IP] port=[主节点的端口] user=[$USER] dbname=postgres application_name=datamax'
+
启动 DataMax 节点:
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D ~/datamax
+
DataMax 节点自身可通过 polar_get_datamax_info()
接口来判断其运行是否正常:
postgres=# SELECT * FROM polar_get_datamax_info();
+ min_received_timeline | min_received_lsn | last_received_timeline | last_received_lsn | last_valid_received_lsn | clean_reserved_lsn | force_clean
+-----------------------+------------------+------------------------+-------------------+-------------------------+--------------------+-------------
+ 1 | 0/40000000 | 1 | 0/4079DFE0 | 0/4079DFE0 | 0/0 | f
+(1 row)
+
在 Primary 节点可以通过 pg_replication_slots
查看对应复制槽的状态:
postgres=# SELECT * FROM pg_replication_slots;
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ datamax | | physical | | | f | t | 124551 | 570 | | 0/4079DFE0 |
+(1 row)
+
通过配置 Primary 节点的 postgresql.conf
,可以设置下游 DataMax 节点的日志同步模式:
最大保护模式。其中 datamax
为 Primary 节点创建的复制槽名称:
polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+synchronous_standby_names = 'datamax'
+
最大性能模式:
polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+
最大高可用模式:
polar_sync_replication_timeout
用于设置同步超时时间阈值,单位为毫秒;等待同步复制锁超过此阈值时,同步复制将降级为异步复制;polar_sync_rep_timeout_break_lsn_lag
用于设置同步恢复延迟阈值,单位为字节;当异步复制延迟阈值小于此阈值时,异步复制将重新恢复为同步复制。polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+synchronous_standby_names = 'datamax'
+polar_sync_replication_timeout = 10s
+polar_sync_rep_timeout_break_lsn_lag = 8kB
+
docker pull polardb/polardb_pg_local_instance
+
新建一个空白目录 \${your_data_dir}
作为 PolarDB-PG 实例的数据目录。启动容器时,将该目录作为 VOLUME 挂载到容器内,对数据目录进行初始化。在初始化的过程中,可以传入环境变量覆盖默认值:
POLARDB_PORT
:PolarDB-PG 运行所需要使用的端口号,默认值为 5432
;镜像将会使用三个连续的端口号(默认 5432-5434
)POLARDB_USER
:初始化数据库时创建默认的 superuser(默认 postgres
)POLARDB_PASSWORD
:默认 superuser 的密码使用如下命令初始化数据库:
docker run -it --rm \\
+ --env POLARDB_PORT=5432 \\
+ --env POLARDB_USER=u1 \\
+ --env POLARDB_PASSWORD=your_password \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance \\
+ echo 'done'
+
数据库初始化完毕后,使用 -d
参数以后台模式创建容器,启动 PolarDB-PG 服务。通常 PolarDB-PG 的端口需要暴露给外界使用,使用 -p
参数将容器内的端口范围暴露到容器外。比如,初始化数据库时使用的是 5432-5434
端口,如下命令将会把这三个端口映射到容器外的 54320-54322
端口:
docker run -d \\
+ -p 54320-54322:5432-5434 \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance
+
或者也可以直接让容器与宿主机共享网络:
docker run -d \\
+ --network=host \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance
+
docker pull polardb/polardb_pg_local_instance
+
新建一个空白目录 \${your_data_dir}
作为 PolarDB-PG 实例的数据目录。启动容器时,将该目录作为 VOLUME 挂载到容器内,对数据目录进行初始化。在初始化的过程中,可以传入环境变量覆盖默认值:
POLARDB_PORT
:PolarDB-PG 运行所需要使用的端口号,默认值为 5432
;镜像将会使用三个连续的端口号(默认 5432-5434
)POLARDB_USER
:初始化数据库时创建默认的 superuser(默认 postgres
)POLARDB_PASSWORD
:默认 superuser 的密码使用如下命令初始化数据库:
docker run -it --rm \\
+ --env POLARDB_PORT=5432 \\
+ --env POLARDB_USER=u1 \\
+ --env POLARDB_PASSWORD=your_password \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance \\
+ echo 'done'
+
数据库初始化完毕后,使用 -d
参数以后台模式创建容器,启动 PolarDB-PG 服务。通常 PolarDB-PG 的端口需要暴露给外界使用,使用 -p
参数将容器内的端口范围暴露到容器外。比如,初始化数据库时使用的是 5432-5434
端口,如下命令将会把这三个端口映射到容器外的 54320-54322
端口:
docker run -d \\
+ -p 54320-54322:5432-5434 \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance
+
或者也可以直接让容器与宿主机共享网络:
docker run -d \\
+ --network=host \\
+ -v \${your_data_dir}:/var/polardb \\
+ polardb/polardb_pg_local_instance
+
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
./polardb_build.sh --with-pfsd
+
WARNING
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \\
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \\
+ stop
+
在节点本地初始化数据目录 $HOME/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /pool@@volume_my_/shared_data
目录上初始化共享数据目录
# 使用 pfs 创建共享数据目录
+sudo pfs -C curve mkdir /pool@@volume_my_/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /pool@@volume_my_/shared_data/ curve
+
编辑读写节点的配置。打开 $HOME/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
打开 $HOME/primary/pg_hba.conf
,增加以下配置项:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的 replication slot,用于只读节点的物理流复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "select pg_create_physical_replication_slot('replica1');"
+# 下面为输出内容
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点上,使用 --with-pfsd
选项编译 PolarDB 内核。
./polardb_build.sh --with-pfsd
+
WARNING
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \\
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \\
+ stop
+
在节点本地初始化数据目录 $HOME/replica1/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/replica1
+
编辑只读节点的配置。打开 $HOME/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建 $HOME/replica1/recovery.conf
,增加以下配置项:
WARNING
请在下面替换读写节点(容器)所在的 IP 地址。
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5433 \\
+ -d postgres \\
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "create table t(t1 int primary key, t2 int);insert into t values (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5433 \\
+ -d postgres \\
+ -c "select * from t;"
+# 下面为输出内容
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见。
`,38);function H(r,$){const c=e("ArticleInfo"),p=e("ExternalLinkIcon"),l=e("CodeGroupItem"),i=e("CodeGroup"),d=e("RouteLink");return k(),v("div",null,[_,s(c,{frontmatter:r.$frontmatter},null,8,["frontmatter"]),b,a("p",null,[n("我们在 DockerHub 上提供了一个 "),a("a",g,[n("PolarDB 开发镜像"),s(p)]),n(",里面已经包含编译运行 PolarDB for PostgreSQL 所需要的所有依赖。您可以直接使用这个开发镜像进行实例搭建。镜像目前支持 AMD64 和 ARM64 两种 CPU 架构。")]),h,a("p",null,[n("在前置文档中,我们已经从 DockerHub 上拉取了 PolarDB 开发镜像,并且进入到了容器中。进入容器后,从 "),a("a",f,[n("GitHub"),s(p)]),n(" 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 "),y,n("。如果因网络原因不能稳定访问 GitHub,则可以访问 "),a("a",E,[n("Gitee 国内镜像"),s(p)]),n("。")]),s(i,null,{default:t(()=>[s(l,{title:"GitHub"},{default:t(()=>[P]),_:1}),s(l,{title:"Gitee 国内镜像"},{default:t(()=>[x]),_:1})]),_:1}),B,a("p",null,[n("在读写节点上,使用 "),D,n(" 选项编译 PolarDB 内核。请参考 "),s(d,{to:"/development/dev-on-docker.html#%E7%BC%96%E8%AF%91%E6%B5%8B%E8%AF%95%E9%80%89%E9%A1%B9%E8%AF%B4%E6%98%8E"},{default:t(()=>[n("编译测试选项说明")]),_:1}),n(" 查看更多编译选项的说明。")]),O])}const S=u(m,[["render",H],["__file","db-pfs-curve.html.vue"]]),A=JSON.parse('{"path":"/deploying/db-pfs-curve.html","title":"基于 PFS for CurveBS 文件系统部署","lang":"en-US","frontmatter":{"author":"程义","date":"2022/11/02","minute":15},"headers":[{"level":2,"title":"源码下载","slug":"源码下载","link":"#源码下载","children":[]},{"level":2,"title":"编译部署 PolarDB","slug":"编译部署-polardb","link":"#编译部署-polardb","children":[{"level":3,"title":"读写节点部署","slug":"读写节点部署","link":"#读写节点部署","children":[]},{"level":3,"title":"只读节点部署","slug":"只读节点部署","link":"#只读节点部署","children":[]},{"level":3,"title":"集群检查和测试","slug":"集群检查和测试","link":"#集群检查和测试","children":[]}]}],"git":{"updatedTime":1690894847000},"filePathRelative":"deploying/db-pfs-curve.md"}');export{S as comp,A as data}; diff --git a/assets/db-pfs-curve.html-CPxfO2YN.js b/assets/db-pfs-curve.html-CPxfO2YN.js new file mode 100644 index 00000000000..8db70b66786 --- /dev/null +++ b/assets/db-pfs-curve.html-CPxfO2YN.js @@ -0,0 +1,95 @@ +import{_ as u,r as e,o as k,c as v,d as s,a,b as n,w as t,e as o}from"./app-CWFDhr_k.js";const m={},_=a("h1",{id:"基于-pfs-for-curvebs-文件系统部署",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#基于-pfs-for-curvebs-文件系统部署"},[a("span",null,"基于 PFS for CurveBS 文件系统部署")])],-1),b=a("p",null,"本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在 Curve 块存储上格式化并挂载 PFS 的计算节点。",-1),g={href:"https://hub.docker.com/r/polardb/polardb_pg_devel/tags",target:"_blank",rel:"noopener noreferrer"},h=a("h2",{id:"源码下载",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#源码下载"},[a("span",null,"源码下载")])],-1),f={href:"https://github.com/ApsaraDB/PolarDB-for-PostgreSQL",target:"_blank",rel:"noopener noreferrer"},y=a("code",null,"POLARDB_11_STABLE",-1),E={href:"https://gitee.com/mirrors/PolarDB-for-PostgreSQL",target:"_blank",rel:"noopener noreferrer"},P=a("div",{class:"language-bash","data-ext":"sh","data-title":"sh"},[a("pre",{class:"language-bash"},[a("code",null,[a("span",{class:"token function"},"git"),n(" clone "),a("span",{class:"token parameter variable"},"-b"),n(` POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git +`)])])],-1),x=a("div",{class:"language-bash","data-ext":"sh","data-title":"sh"},[a("pre",{class:"language-bash"},[a("code",null,[a("span",{class:"token function"},"git"),n(" clone "),a("span",{class:"token parameter variable"},"-b"),n(` POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL +`)])])],-1),B=o(`代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
./polardb_build.sh --with-pfsd
+
注意
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \\
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \\
+ stop
+
在节点本地初始化数据目录 $HOME/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /pool@@volume_my_/shared_data
目录上初始化共享数据目录
# 使用 pfs 创建共享数据目录
+sudo pfs -C curve mkdir /pool@@volume_my_/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /pool@@volume_my_/shared_data/ curve
+
编辑读写节点的配置。打开 $HOME/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
打开 $HOME/primary/pg_hba.conf
,增加以下配置项:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的 replication slot,用于只读节点的物理流复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "select pg_create_physical_replication_slot('replica1');"
+# 下面为输出内容
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点上,使用 --with-pfsd
选项编译 PolarDB 内核。
./polardb_build.sh --with-pfsd
+
注意
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \\
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \\
+ stop
+
在节点本地初始化数据目录 $HOME/replica1/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/replica1
+
编辑只读节点的配置。打开 $HOME/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建 $HOME/replica1/recovery.conf
,增加以下配置项:
注意
请在下面替换读写节点(容器)所在的 IP 地址。
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5433 \\
+ -d postgres \\
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "create table t(t1 int primary key, t2 int);insert into t values (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5433 \\
+ -d postgres \\
+ -c "select * from t;"
+# 下面为输出内容
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见。
`,38);function H(r,$){const c=e("ArticleInfo"),p=e("ExternalLinkIcon"),l=e("CodeGroupItem"),i=e("CodeGroup"),d=e("RouteLink");return k(),v("div",null,[_,s(c,{frontmatter:r.$frontmatter},null,8,["frontmatter"]),b,a("p",null,[n("我们在 DockerHub 上提供了一个 "),a("a",g,[n("PolarDB 开发镜像"),s(p)]),n(",里面已经包含编译运行 PolarDB for PostgreSQL 所需要的所有依赖。您可以直接使用这个开发镜像进行实例搭建。镜像目前支持 AMD64 和 ARM64 两种 CPU 架构。")]),h,a("p",null,[n("在前置文档中,我们已经从 DockerHub 上拉取了 PolarDB 开发镜像,并且进入到了容器中。进入容器后,从 "),a("a",f,[n("GitHub"),s(p)]),n(" 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 "),y,n("。如果因网络原因不能稳定访问 GitHub,则可以访问 "),a("a",E,[n("Gitee 国内镜像"),s(p)]),n("。")]),s(i,null,{default:t(()=>[s(l,{title:"GitHub"},{default:t(()=>[P]),_:1}),s(l,{title:"Gitee 国内镜像"},{default:t(()=>[x]),_:1})]),_:1}),B,a("p",null,[n("在读写节点上,使用 "),D,n(" 选项编译 PolarDB 内核。请参考 "),s(d,{to:"/zh/development/dev-on-docker.html#%E7%BC%96%E8%AF%91%E6%B5%8B%E8%AF%95%E9%80%89%E9%A1%B9%E8%AF%B4%E6%98%8E"},{default:t(()=>[n("编译测试选项说明")]),_:1}),n(" 查看更多编译选项的说明。")]),O])}const S=u(m,[["render",H],["__file","db-pfs-curve.html.vue"]]),L=JSON.parse('{"path":"/zh/deploying/db-pfs-curve.html","title":"基于 PFS for CurveBS 文件系统部署","lang":"zh-CN","frontmatter":{"author":"程义","date":"2022/11/02","minute":15},"headers":[{"level":2,"title":"源码下载","slug":"源码下载","link":"#源码下载","children":[]},{"level":2,"title":"编译部署 PolarDB","slug":"编译部署-polardb","link":"#编译部署-polardb","children":[{"level":3,"title":"读写节点部署","slug":"读写节点部署","link":"#读写节点部署","children":[]},{"level":3,"title":"只读节点部署","slug":"只读节点部署","link":"#只读节点部署","children":[]},{"level":3,"title":"集群检查和测试","slug":"集群检查和测试","link":"#集群检查和测试","children":[]}]}],"git":{"updatedTime":1690894847000},"filePathRelative":"zh/deploying/db-pfs-curve.md"}');export{S as comp,L as data}; diff --git a/assets/db-pfs.html-BPWvx_FG.js b/assets/db-pfs.html-BPWvx_FG.js new file mode 100644 index 00000000000..ad6f764c453 --- /dev/null +++ b/assets/db-pfs.html-BPWvx_FG.js @@ -0,0 +1,85 @@ +import{_ as r,r as l,o as i,c as u,d as n,a,w as s,e as d,b as e}from"./app-CWFDhr_k.js";const k={},v=a("h1",{id:"基于-pfs-文件系统部署",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#基于-pfs-文件系统部署"},[a("span",null,"基于 PFS 文件系统部署")])],-1),m=a("p",null,"本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在共享存储上格式化并挂载 PFS 文件系统的计算节点。",-1),b={class:"table-of-contents"},_=d(`初始化读写节点的本地数据目录 ~/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /nvme1n1/shared_data/
路径上创建共享数据目录,然后使用 polar-initdb.sh
脚本初始化共享数据目录:
# 使用 pfs 创建共享数据目录
+sudo pfs -C disk mkdir /nvme1n1/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /nvme1n1/shared_data/
+
编辑读写节点的配置。打开 ~/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,增加以下配置项,允许只读节点进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的复制槽,用于只读节点的物理复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \\
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置。打开 ~/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建只读节点的复制配置文件 ~/replica1/recovery.conf
,增加读写节点的连接信息,以及复制槽名称:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5433 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5433 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见,这意味着基于共享存储的 PolarDB 计算节点集群搭建成功。
初始化读写节点的本地数据目录 ~/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /nvme1n1/shared_data/
路径上创建共享数据目录,然后使用 polar-initdb.sh
脚本初始化共享数据目录:
# 使用 pfs 创建共享数据目录
+sudo pfs -C disk mkdir /nvme1n1/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /nvme1n1/shared_data/
+
编辑读写节点的配置。打开 ~/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,增加以下配置项,允许只读节点进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的复制槽,用于只读节点的物理复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \\
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置。打开 ~/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建只读节点的复制配置文件 ~/replica1/recovery.conf
,增加读写节点的连接信息,以及复制槽名称:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5433 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5433 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见,这意味着基于共享存储的 PolarDB 计算节点集群搭建成功。
在共享存储一写多读的架构下,数据文件实际上只有一份。得益于多版本机制,不同节点的读写实际上并不会冲突。但是有一些数据操作不具有多版本机制,其中比较有代表性的就是文件操作。
多版本机制仅限于文件内的元组,但不包括文件本身。对文件进行创建、删除等操作实际上会对全集群立即可见,这会导致 RO 在读取文件时出现文件消失的情况,因此需要做一些同步操作,来防止此类情况。
对文件进行操作通常使用 DDL,因此对于 DDL 操作,PolarDB 提供了一种同步机制,来防止并发的文件操作的出现。除了同步机制外,DDL 的其他逻辑和单机执行逻辑并无区别。
同步 DDL 机制利用 AccessExclusiveLock(后文简称 DDL 锁)来进行 RW / RO 的 DDL 操作同步。
图 1:DDL 锁和 WAL 日志的关系 |
DDL 锁是数据库中最高级的表锁,对其他所有的锁级别都互斥,会伴随着 WAL 日志同步到 RO 节点上,并且可以获取到该锁在 WAL 日志的写入位点。当 RO 回放超过 Lock LSN 位点时,就可以认为在 RO 中已经获取了这把锁。DDL 锁会伴随着事务的结束而释放。
如图 1 所示,当回放到 ApplyLSN1 时,表示未获取到 DDL 锁;当回放到 ApplyLSN2 时,表示获取到了该锁;当回放到 ApplyLSN3 时,已经释放了 DDL 锁。
图 2:DDL 锁的获取条件 |
当所有 RO 都回放超过了 Lock LSN 这个位点时(如图 2 所示),可以认为 RW 的事务在集群级别获取到了这把锁。获取到这把锁就意味着 RW / RO 中没有其他的会话能够访问这张表,此时 RW 就可以对这张表做各种文件相关的操作。
说明:Standby 有独立的文件存储,获取锁时不会出现上述情况。
图 3:同步 DDL 流程图 |
图 3 所示流程说明如下:
DDL 锁是 PostgreSQL 数据库最高级别的锁,当对一个表进行 DROP / ALTER / LOCK / VACUUM (FULL) table 等操作时,需要先获取到 DDL 锁。RW 是通过用户的主动操作来获取锁,获取锁成功时会写入到日志中,RO 则通过回放日志获取锁。
当以下操作的对象都是某张表,<
表示时间先后顺序时,同步 DDL 的执行逻辑如下:
结合以上执行逻辑可以得到以下操作的先后顺序:各个 RW / RO 查询操作结束 < RW 获取全局 DDL 锁 < RW 写数据 < RW 释放全局 DDL 锁 < RW / RO 新增查询操作。
可以看到在写共享存储的数据时,RW / RO 上都不会存在查询,因此不会造成正确性问题。在整个操作的过程中,都是遵循 2PL 协议的,因此对于多个表,也可以保证正确性。
上述机制中存在一个问题,就是锁同步发生在主备同步的主路径中,当 RO 的锁同步被阻塞时,会造成 RO 的数据同步阻塞(如图 1 所示,回放进程的 3、4 阶段在等待本地查询会话结束后才能获取锁)。PolarDB 默认设置的同步超时时间为 30s,如果 RW 压力过大,有可能造成较大的数据延迟。
RO 中回放的 DDL 锁还会出现叠加效果,例如 RW 在 1s 内写下了 10 个 DDL 锁日志,在 RO 却需要 300s 才能回放完毕。数据延迟对于 PolarDB 是十分危险的,它会造成 RW 无法及时刷脏、及时做检查点,如果此时发生崩溃,恢复系统会需要更长的时间,这会导致极大的稳定性风险。
针对此问题,PolarDB 对 RO 锁回放进行了优化。
图 4:RO 异步 DDL 锁回放 |
优化思路:设计一个异步进程来回放这些锁,从而不阻塞主回放进程的工作。
整体流程如图 4 所示,和图 3 不同的是,回放进程会将锁获取的操作卸载到锁回放进程中进行,并且立刻回到主回放流程中,从而不受锁回放阻塞的影响。
锁回放冲突并不是一个常见的情况,因此主回放进程并非将所有的锁都卸载到锁回放进程中进行,它会尝试获取锁,如果获取成功了,就不需要卸载到锁回放进程中进行,这样可以有效减少进程间的同步开销。
该功能在 PolarDB 中默认启用,能够有效的减少回放冲突造成的回放延迟,以及衍生出来的稳定性问题。在 AWS Aurora 中不具备该特性,当发生冲突时会严重增加延迟。
在异步回放的模式下,仅仅是获取锁的操作者变了,但是执行逻辑并未发生变化,依旧能够保证 RW 获取到全局 DDL 锁、写数据、释放全局 DDL 锁这期间不会存在任何查询,因此不会存在正确性问题。
',38),o=[i];function h(p,L){return l(),e("div",null,o)}const R=t(D,[["render",h],["__file","ddl-synchronization.html.vue"]]),g=JSON.parse('{"path":"/zh/theory/ddl-synchronization.html","title":"DDL 同步","lang":"zh-CN","frontmatter":{},"headers":[{"level":2,"title":"概述","slug":"概述","link":"#概述","children":[]},{"level":2,"title":"术语","slug":"术语","link":"#术语","children":[]},{"level":2,"title":"同步 DDL 机制","slug":"同步-ddl-机制","link":"#同步-ddl-机制","children":[{"level":3,"title":"DDL 锁","slug":"ddl-锁","link":"#ddl-锁","children":[]},{"level":3,"title":"如何保证数据正确性","slug":"如何保证数据正确性","link":"#如何保证数据正确性","children":[]}]},{"level":2,"title":"RO 锁回放优化","slug":"ro-锁回放优化","link":"#ro-锁回放优化","children":[{"level":3,"title":"异步 DDL 锁回放","slug":"异步-ddl-锁回放","link":"#异步-ddl-锁回放","children":[]},{"level":3,"title":"如何保证数据正确性","slug":"如何保证数据正确性-1","link":"#如何保证数据正确性-1","children":[]}]}],"git":{"updatedTime":1656919280000},"filePathRelative":"zh/theory/ddl-synchronization.md"}');export{R as comp,g as data}; diff --git a/assets/ddl-synchronization.html-BbbKn5-k.js b/assets/ddl-synchronization.html-BbbKn5-k.js new file mode 100644 index 00000000000..549338479c5 --- /dev/null +++ b/assets/ddl-synchronization.html-BbbKn5-k.js @@ -0,0 +1 @@ +import{_ as e,o,c as t,e as a}from"./app-CWFDhr_k.js";const n="/PolarDB-for-PostgreSQL/assets/45_DDL_1-DTO4vstL.png",s="/PolarDB-for-PostgreSQL/assets/46_DDL_2-DbGJlVgS.png",r="/PolarDB-for-PostgreSQL/assets/47_DDL_3-BhxLvj_M.png",i="/PolarDB-for-PostgreSQL/assets/48_DDL_4-sjnuZLtQ.png",l={},c=a('In a shared storage architecture that consists of one primary node and multiple read-only nodes, a data file has only one copy. Due to multi-version concurrency control (MVCC), the read and write operations performed on different nodes do not conflict. However, MVCC cannot be used to ensure consistency for some specific data operations, such as file operations.
MVCC applies to tuples within a file but does not apply to the file itself. File operations such as creating and deleting files are visible to the entire cluster immediately after they are performed. This causes an issue that files disappear while read-only nodes are reading the files. To prevent the issue from occurring, file operations need to be synchronized.
In most cases, DDL is used to perform operations on files. For DDL operations, PolarDB provides a synchronization mechanism to prevent concurrent file operations. The logic of DDL operations in PolarDB is the same as the logic of single-node execution. However, the synchronization mechanism is different.
The DDL synchronization mechanism uses AccessExclusiveLocks (DDL locks) to synchronize DDL operations between primary and read-only nodes.
Figure 1: Relationship Between DDL Lock and WAL Log |
DDL locks are table locks at the highest level in databases. DDL locks and locks at other levels are mutually exclusive. When the primary node synchronizes a WAL log file of a table to the read-only nodes, the primary node acquires the LSN of the lock in the WAL log file. When a read-only node applies the WAL log file beyond the LSN of the lock, the lock is considered to have been acquired on the read-only node. The DDL lock is released after the transaction ends. Figure 1 shows the entire process from the acquisition to the release of a DDL lock. When the WAL log file is applied at Apply LSN 1, the DDL lock is not acquired. When the WAL log file is applied at Apply LSN 2, the DDL lock is acquired. When the WAL log file is applied at Apply LSN 3, the DDL lock is released.
Figure 2: Conditions for Acquiring DDL Lock |
When the WAL log file is applied beyond the LSN of the lock on all read-only nodes, the DDL lock is considered to have been acquired by the transaction of the primary node at the cluster level. Then, this table cannot be accessed over other sessions on the primary node or read-only nodes. During this time period, the primary node can perform various file operations on the table.
Note: A standby node in an active/standby environment has independent file storage. When a standby node acquires a lock, the preceding situation never occurs.
Figure 3: DDL Synchronization Workflow |
Figure 3 shows the workflow of how DDL operations are synchronized.
DDL locks are locks at the highest level in PostgreSQL databases. Before a database performs operations such as DROP, ALTER, LOCK, and VACUUM (FULL) on a table, a DDL lock must be acquired. The primary node acquires the DDL lock by responding to user requests. When the lock is acquired, the primary node writes the DDL lock to the log file. Read-only nodes acquire the DDL lock by applying the log file.
DDL operations on a table are synchronized based on the following logic. The <
indicator shows that the operations are performed from left to right.
The sequence of the following operations is inferred based on the preceding execution logic: Queries on the primary node and each read-only node end < The primary node acquires a global DDL lock < The primary node writes data < The primary node releases the global DDL lock < The primary node and read-only nodes run new queries.
When the primary node writes data to the shared storage, no queries are run on the primary node or read-only nodes. This way, data correctness is ensured. The entire operation process follows the two-phase locking (2PL) protocol. This way, data correctness is ensured among multiple tables.
In the preceding synchronization mechanism, DDL locks are synchronized in the main process that is used for primary/secondary synchronization. When the synchronization of a DDL lock to a read-only node is blocked, the synchronization of data to the read-only node is also blocked. In the third and fourth phases of the apply process shown in Figure 1, the DDL lock can be acquired only after the session in which local queries are run is closed. The default timeout period for synchronization in PolarDB is 30s. If the primary node runs in heavy load, a large data latency may occur.
In specific cases, for a read-only node to apply a DDL lock, the data latency is the sum of the time used to apply each log entry. For example, if the primary node writes 10 log entries for a DDL lock within 1s, the read-only node requires 300s to apply all log entries. Data latency can affect the system stability of PolarDB in a negative manner. The primary node may be unable to clean dirty data and perform checkpoints at the earliest opportunity due to data latency. If the system stops responding when a large data latency occurs, the system requires an extended period of time to recover. This can lead to great stability risks.
To resolve this issue, PolarDB optimizes DDL lock apply on read-only nodes.
Figure 4: Asynchronous Apply of DDL Locks on Read-Only Nodes |
PolarDB uses an asynchronous process to apply DDL locks so that the main apply process is not blocked.
Figure 4 shows the overall workflow in which PolarDB offloads the acquisition of DDL locks from the main apply process to the lock apply process and immediately returns to the main apply process. This way, the main apply process is not affected even if lock apply are blocked.
Lock apply conflicts rarely occur. PolarDB does not offload the acquisition of all locks to the lock apply process. PolarDB first attempts to acquire a lock in the main apply process. Then, if the attempt is a success, PolarDB does not offload the lock acquisition to the lock apply process. This can reduce the synchronization overheads between processes.
By default, the asynchronous lock apply feature is enabled in PolarDB. This feature can reduce the apply latency caused by apply conflicts to ensure service stability. AWS Aurora does not provide similar features. Apply conflicts in AWS Aurora can severely increase data latency.
In asynchronous apply mode, only the executor who acquires locks changes, but the execution logic does not change. During the process in which the primary node acquires a global DDL lock, writes data, and then releases the global DDL lock, no queries are run. This way, data correctness is not affected.
',37),h=[c];function d(p,y){return o(),t("div",null,h)}const D=e(l,[["render",d],["__file","ddl-synchronization.html.vue"]]),f=JSON.parse('{"path":"/theory/ddl-synchronization.html","title":"DDL Synchronization","lang":"en-US","frontmatter":{},"headers":[{"level":2,"title":"Overview","slug":"overview","link":"#overview","children":[]},{"level":2,"title":"Terms","slug":"terms","link":"#terms","children":[]},{"level":2,"title":"DDL Synchronization Mechanism","slug":"ddl-synchronization-mechanism","link":"#ddl-synchronization-mechanism","children":[{"level":3,"title":"DDL Locks","slug":"ddl-locks","link":"#ddl-locks","children":[]},{"level":3,"title":"How to Ensure Data Correctness","slug":"how-to-ensure-data-correctness","link":"#how-to-ensure-data-correctness","children":[]}]},{"level":2,"title":"Apply Optimization for DDL Locks on RO","slug":"apply-optimization-for-ddl-locks-on-ro","link":"#apply-optimization-for-ddl-locks-on-ro","children":[{"level":3,"title":"Asynchronous Apply of DDL Locks","slug":"asynchronous-apply-of-ddl-locks","link":"#asynchronous-apply-of-ddl-locks","children":[]},{"level":3,"title":"How to Ensure Data Correctness","slug":"how-to-ensure-data-correctness-1","link":"#how-to-ensure-data-correctness-1","children":[]}]}],"git":{"updatedTime":1656919280000},"filePathRelative":"theory/ddl-synchronization.md"}');export{D as comp,f as data}; diff --git a/assets/deploy-official.html-C8FbgX50.js b/assets/deploy-official.html-C8FbgX50.js new file mode 100644 index 00000000000..18edbe8c8a2 --- /dev/null +++ b/assets/deploy-official.html-C8FbgX50.js @@ -0,0 +1 @@ +import{_ as a,r as n,o as r,c as l,a as e,b as o,d as c}from"./app-CWFDhr_k.js";const s={},i=e("h1",{id:"阿里云官网购买实例",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#阿里云官网购买实例"},[e("span",null,"阿里云官网购买实例")])],-1),d={href:"https://www.aliyun.com/product/polardb",target:"_blank",rel:"noopener noreferrer"};function p(f,_){const t=n("ExternalLinkIcon");return r(),l("div",null,[i,e("p",null,[o("阿里云官网直接提供了可供购买的 "),e("a",d,[o("云原生关系型数据库 PolarDB PostgreSQL 引擎"),c(t)]),o("。")])])}const m=a(s,[["render",p],["__file","deploy-official.html.vue"]]),u=JSON.parse('{"path":"/deploying/deploy-official.html","title":"阿里云官网购买实例","lang":"en-US","frontmatter":{},"headers":[],"git":{"updatedTime":1656919280000},"filePathRelative":"deploying/deploy-official.md"}');export{m as comp,u as data}; diff --git a/assets/deploy-official.html-Cz-2Z8gk.js b/assets/deploy-official.html-Cz-2Z8gk.js new file mode 100644 index 00000000000..898a76c9eaf --- /dev/null +++ b/assets/deploy-official.html-Cz-2Z8gk.js @@ -0,0 +1 @@ +import{_ as a,r as n,o as r,c as l,a as e,b as o,d as c}from"./app-CWFDhr_k.js";const s={},i=e("h1",{id:"阿里云官网购买实例",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#阿里云官网购买实例"},[e("span",null,"阿里云官网购买实例")])],-1),d={href:"https://www.aliyun.com/product/polardb",target:"_blank",rel:"noopener noreferrer"};function p(f,_){const t=n("ExternalLinkIcon");return r(),l("div",null,[i,e("p",null,[o("阿里云官网直接提供了可供购买的 "),e("a",d,[o("云原生关系型数据库 PolarDB PostgreSQL 引擎"),c(t)]),o("。")])])}const m=a(s,[["render",p],["__file","deploy-official.html.vue"]]),u=JSON.parse('{"path":"/zh/deploying/deploy-official.html","title":"阿里云官网购买实例","lang":"zh-CN","frontmatter":{},"headers":[],"git":{"updatedTime":1656919280000},"filePathRelative":"zh/deploying/deploy-official.md"}');export{m as comp,u as data}; diff --git a/assets/deploy-stack.html-D1Dsck6Z.js b/assets/deploy-stack.html-D1Dsck6Z.js new file mode 100644 index 00000000000..0645f56ea38 --- /dev/null 
+++ b/assets/deploy-stack.html-D1Dsck6Z.js @@ -0,0 +1 @@ +import{_ as o,r,o as l,c,a,b as e,d as s}from"./app-CWFDhr_k.js";const n="/PolarDB-for-PostgreSQL/assets/63-PolarDBStack-arch-dvSH2TVi.png",d={},p=a("h1",{id:"基于-polardb-stack-共享存储",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#基于-polardb-stack-共享存储"},[a("span",null,"基于 PolarDB Stack 共享存储")])],-1),i=a("p",null,"PolarDB Stack 是轻量级 PolarDB PaaS 软件。基于共享存储提供一写多读的 PolarDB 数据库服务,特别定制和深度优化了数据库生命周期管理。通过 PolarDB Stack 可以一键部署 PolarDB-for-PostgreSQL 内核和 PolarDB-FileSystem。",-1),_={href:"https://github.com/ApsaraDB/PolarDB-Stack-Operator/blob/master/README.md",target:"_blank",rel:"noopener noreferrer"},h=a("p",null,[a("img",{src:n,alt:"PolarDB Stack arch"})],-1);function k(m,B){const t=r("ExternalLinkIcon");return l(),c("div",null,[p,i,a("p",null,[e("PolarDB Stack 架构如下图所示,进入 "),a("a",_,[e("PolarDB Stack 的部署文档"),s(t)])]),h])}const S=o(d,[["render",k],["__file","deploy-stack.html.vue"]]),D=JSON.parse('{"path":"/deploying/deploy-stack.html","title":"基于 PolarDB Stack 共享存储","lang":"en-US","frontmatter":{},"headers":[],"git":{"updatedTime":1656919280000},"filePathRelative":"deploying/deploy-stack.md"}');export{S as comp,D as data}; diff --git a/assets/deploy-stack.html-pSYPYiII.js b/assets/deploy-stack.html-pSYPYiII.js new file mode 100644 index 00000000000..e3ceda23baa --- /dev/null +++ b/assets/deploy-stack.html-pSYPYiII.js @@ -0,0 +1 @@ +import{_ as o,r,o as l,c,a,b as t,d as s}from"./app-CWFDhr_k.js";const n="/PolarDB-for-PostgreSQL/assets/63-PolarDBStack-arch-dvSH2TVi.png",d={},p=a("h1",{id:"基于-polardb-stack-共享存储",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#基于-polardb-stack-共享存储"},[a("span",null,"基于 PolarDB Stack 共享存储")])],-1),i=a("p",null,"PolarDB Stack 是轻量级 PolarDB PaaS 软件。基于共享存储提供一写多读的 PolarDB 数据库服务,特别定制和深度优化了数据库生命周期管理。通过 PolarDB Stack 可以一键部署 PolarDB-for-PostgreSQL 内核和 PolarDB-FileSystem。",-1),_={href:"https://github.com/ApsaraDB/PolarDB-Stack-Operator/blob/master/README.md",target:"_blank",rel:"noopener noreferrer"},h=a("p",null,[a("img",{src:n,alt:"PolarDB Stack arch"})],-1);function k(m,B){const e=r("ExternalLinkIcon");return l(),c("div",null,[p,i,a("p",null,[t("PolarDB Stack 架构如下图所示,进入 "),a("a",_,[t("PolarDB Stack 的部署文档"),s(e)])]),h])}const D=o(d,[["render",k],["__file","deploy-stack.html.vue"]]),S=JSON.parse('{"path":"/zh/deploying/deploy-stack.html","title":"基于 PolarDB Stack 共享存储","lang":"zh-CN","frontmatter":{},"headers":[],"git":{"updatedTime":1656919280000},"filePathRelative":"zh/deploying/deploy-stack.md"}');export{D as comp,S as data}; diff --git a/assets/deploy.html-C9--LiK4.js b/assets/deploy.html-C9--LiK4.js new file mode 100644 index 00000000000..66fb5d00f3a --- /dev/null +++ b/assets/deploy.html-C9--LiK4.js @@ -0,0 +1 @@ +import{_ as i,r as s,o as u,c as h,d as l,a as t,b as e,w as n}from"./app-CWFDhr_k.js";const c={},p=t("h1",{id:"进阶部署",tabindex:"-1"},[t("a",{class:"header-anchor",href:"#进阶部署"},[t("span",null,"进阶部署")])],-1),f=t("p",null,"部署 PolarDB for PostgreSQL 需要在以下三个层面上做准备:",-1),m=t("li",null,[t("strong",null,"块存储设备层"),e(":用于提供存储介质。可以是单个物理块存储设备(本地存储),也可以是多个物理块设备构成的分布式块存储。")],-1),g=t("strong",null,"文件系统层",-1),y={href:"https://github.com/ApsaraDB/PolarDB-FileSystem",target:"_blank",rel:"noopener noreferrer"},S=t("li",null,[t("strong",null,"数据库层"),e(":PolarDB for PostgreSQL 的编译和部署环境。")],-1),P=t("p",null,"以下表格给出了三个层次排列组合出的的不同实践方式,其中的步骤包含:",-1),b=t("ul",null,[t("li",null,"存储层:块存储设备的准备"),t("li",null,"文件系统:PolarDB File System 的编译、挂载"),t("li",null,"数据库层:PolarDB for PostgreSQL 
各集群形态的编译部署")],-1),v={href:"https://hub.docker.com/r/polardb/polardb_pg_devel/tags",target:"_blank",rel:"noopener noreferrer"},B=t("thead",null,[t("tr",null,[t("th"),t("th",null,"块存储"),t("th",null,"文件系统")])],-1),D=t("td",null,"本地 SSD",-1),k=t("td",null,"本地文件系统(如 ext4)",-1),x={href:"https://developer.aliyun.com/live/249628"},F=t("td",null,"阿里云 ECS + ESSD 云盘",-1),z=t("td",null,"PFS",-1),C={href:"https://developer.aliyun.com/live/250218"},L={href:"https://opencurve.io/Curve/HOME",target:"_blank",rel:"noopener noreferrer"},E={href:"https://github.com/opencurve/PolarDB-FileSystem",target:"_blank",rel:"noopener noreferrer"},N=t("td",null,"Ceph 共享存储",-1),I=t("td",null,"PFS",-1),Q=t("td",null,"NBD 共享存储",-1),A=t("td",null,"PFS",-1);function R(d,V){const _=s("ArticleInfo"),r=s("ExternalLinkIcon"),o=s("RouteLink"),a=s("Badge");return u(),h("div",null,[p,l(_,{frontmatter:d.$frontmatter},null,8,["frontmatter"]),f,t("ol",null,[m,t("li",null,[g,e(":由于 PostgreSQL 将数据存储在文件中,因此需要在块存储设备上架设文件系统。根据底层块存储设备的不同,可以选用单机文件系统(如 ext4)或分布式文件系统 "),t("a",y,[e("PolarDB File System(PFS)"),l(r)]),e("。")]),S]),P,b,t("p",null,[e("我们强烈推荐使用发布在 DockerHub 上的 "),t("a",v,[e("PolarDB 开发镜像"),l(r)]),e(" 来完成实践!开发镜像中已经包含了文件系统层和数据库层所需要安装的所有依赖,无需手动安装。")]),t("table",null,[B,t("tbody",null,[t("tr",null,[t("td",null,[l(o,{to:"/zh/deploying/db-localfs.html"},{default:n(()=>[e("实践 1(极简本地部署)")]),_:1})]),D,k]),t("tr",null,[t("td",null,[l(o,{to:"/zh/deploying/storage-aliyun-essd.html"},{default:n(()=>[e("实践 2(生产环境最佳实践)")]),_:1}),e(),t("a",x,[l(a,{type:"tip",text:"视频",vertical:"top"})])]),F,z]),t("tr",null,[t("td",null,[l(o,{to:"/zh/deploying/storage-curvebs.html"},{default:n(()=>[e("实践 3(生产环境最佳实践)")]),_:1}),e(),t("a",C,[l(a,{type:"tip",text:"视频",vertical:"top"})])]),t("td",null,[t("a",L,[e("CurveBS"),l(r)]),e(" 共享存储")]),t("td",null,[t("a",E,[e("PFS for Curve"),l(r)])])]),t("tr",null,[t("td",null,[l(o,{to:"/zh/deploying/storage-ceph.html"},{default:n(()=>[e("实践 4")]),_:1})]),N,I]),t("tr",null,[t("td",null,[l(o,{to:"/zh/deploying/storage-nbd.html"},{default:n(()=>[e("实践 5")]),_:1})]),Q,A])])])])}const H=i(c,[["render",R],["__file","deploy.html.vue"]]),O=JSON.parse('{"path":"/zh/deploying/deploy.html","title":"进阶部署","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2022/05/09","minute":10},"headers":[],"git":{"updatedTime":1690894847000},"filePathRelative":"zh/deploying/deploy.md"}');export{H as comp,O as data}; diff --git a/assets/deploy.html-CdxVmx4z.js b/assets/deploy.html-CdxVmx4z.js new file mode 100644 index 00000000000..d6e073e03ed --- /dev/null +++ b/assets/deploy.html-CdxVmx4z.js @@ -0,0 +1 @@ +import{_ as i,r as s,o as u,c,d as l,a as t,b as e,w as n}from"./app-CWFDhr_k.js";const h={},p=t("h1",{id:"进阶部署",tabindex:"-1"},[t("a",{class:"header-anchor",href:"#进阶部署"},[t("span",null,"进阶部署")])],-1),f=t("p",null,"部署 PolarDB for PostgreSQL 需要在以下三个层面上做准备:",-1),m=t("li",null,[t("strong",null,"块存储设备层"),e(":用于提供存储介质。可以是单个物理块存储设备(本地存储),也可以是多个物理块设备构成的分布式块存储。")],-1),g=t("strong",null,"文件系统层",-1),y={href:"https://github.com/ApsaraDB/PolarDB-FileSystem",target:"_blank",rel:"noopener noreferrer"},S=t("li",null,[t("strong",null,"数据库层"),e(":PolarDB for PostgreSQL 的编译和部署环境。")],-1),P=t("p",null,"以下表格给出了三个层次排列组合出的的不同实践方式,其中的步骤包含:",-1),b=t("ul",null,[t("li",null,"存储层:块存储设备的准备"),t("li",null,"文件系统:PolarDB File System 的编译、挂载"),t("li",null,"数据库层:PolarDB for PostgreSQL 各集群形态的编译部署")],-1),v={href:"https://hub.docker.com/r/polardb/polardb_pg_devel/tags",target:"_blank",rel:"noopener 
noreferrer"},B=t("thead",null,[t("tr",null,[t("th"),t("th",null,"块存储"),t("th",null,"文件系统")])],-1),D=t("td",null,"本地 SSD",-1),k=t("td",null,"本地文件系统(如 ext4)",-1),x={href:"https://developer.aliyun.com/live/249628"},F=t("td",null,"阿里云 ECS + ESSD 云盘",-1),L=t("td",null,"PFS",-1),C={href:"https://developer.aliyun.com/live/250218"},E={href:"https://opencurve.io/Curve/HOME",target:"_blank",rel:"noopener noreferrer"},N={href:"https://github.com/opencurve/PolarDB-FileSystem",target:"_blank",rel:"noopener noreferrer"},I=t("td",null,"Ceph 共享存储",-1),Q=t("td",null,"PFS",-1),A=t("td",null,"NBD 共享存储",-1),R=t("td",null,"PFS",-1);function V(d,w){const _=s("ArticleInfo"),r=s("ExternalLinkIcon"),o=s("RouteLink"),a=s("Badge");return u(),c("div",null,[p,l(_,{frontmatter:d.$frontmatter},null,8,["frontmatter"]),f,t("ol",null,[m,t("li",null,[g,e(":由于 PostgreSQL 将数据存储在文件中,因此需要在块存储设备上架设文件系统。根据底层块存储设备的不同,可以选用单机文件系统(如 ext4)或分布式文件系统 "),t("a",y,[e("PolarDB File System(PFS)"),l(r)]),e("。")]),S]),P,b,t("p",null,[e("我们强烈推荐使用发布在 DockerHub 上的 "),t("a",v,[e("PolarDB 开发镜像"),l(r)]),e(" 来完成实践!开发镜像中已经包含了文件系统层和数据库层所需要安装的所有依赖,无需手动安装。")]),t("table",null,[B,t("tbody",null,[t("tr",null,[t("td",null,[l(o,{to:"/deploying/db-localfs.html"},{default:n(()=>[e("实践 1(极简本地部署)")]),_:1})]),D,k]),t("tr",null,[t("td",null,[l(o,{to:"/deploying/storage-aliyun-essd.html"},{default:n(()=>[e("实践 2(生产环境最佳实践)")]),_:1}),e(),t("a",x,[l(a,{type:"tip",text:"视频",vertical:"top"})])]),F,L]),t("tr",null,[t("td",null,[l(o,{to:"/deploying/storage-curvebs.html"},{default:n(()=>[e("实践 3(生产环境最佳实践)")]),_:1}),e(),t("a",C,[l(a,{type:"tip",text:"视频",vertical:"top"})])]),t("td",null,[t("a",E,[e("CurveBS"),l(r)]),e(" 共享存储")]),t("td",null,[t("a",N,[e("PFS for Curve"),l(r)])])]),t("tr",null,[t("td",null,[l(o,{to:"/deploying/storage-ceph.html"},{default:n(()=>[e("实践 4")]),_:1})]),I,Q]),t("tr",null,[t("td",null,[l(o,{to:"/deploying/storage-nbd.html"},{default:n(()=>[e("实践 5")]),_:1})]),A,R])])])])}const O=i(h,[["render",V],["__file","deploy.html.vue"]]),T=JSON.parse('{"path":"/deploying/deploy.html","title":"进阶部署","lang":"en-US","frontmatter":{"author":"棠羽","date":"2022/05/09","minute":10},"headers":[],"git":{"updatedTime":1690894847000},"filePathRelative":"deploying/deploy.md"}');export{O as comp,T as data}; diff --git a/assets/dev-on-docker.html-Bik0LUI_.js b/assets/dev-on-docker.html-Bik0LUI_.js new file mode 100644 index 00000000000..548eb894283 --- /dev/null +++ b/assets/dev-on-docker.html-Bik0LUI_.js @@ -0,0 +1,31 @@ +import{_ as i,r as t,o as c,c as p,a,b as e,d as n,w as o,e as s}from"./app-CWFDhr_k.js";const u={},h=s('警告
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改健壮的密码!
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
# 拉取 PolarDB 开发镜像
+docker pull polardb/polardb_pg_devel
+
此时我们已经在开发机器的源码目录中。从开发镜像上创建一个容器,将当前目录作为一个 volume 挂载到容器中,这样可以:
docker run -it \\
+ -v $PWD:/home/postgres/polardb_pg \\
+ --shm-size=512m --cap-add=SYS_PTRACE --privileged=true \\
+ --name polardb_pg_devel \\
+ polardb/polardb_pg_devel \\
+ bash
+
进入容器后,为容器内用户获取源码目录的权限,然后编译部署 PolarDB-PG 实例。
# 获取权限并编译部署
+cd polardb_pg
+sudo chmod -R a+wr ./
+sudo chown -R postgres:postgres ./
+./polardb_build.sh
+
+# 验证
+psql -h 127.0.0.1 -c 'select version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
以下表格列出了编译、初始化或测试 PolarDB-PG 集群所可能使用到的选项及说明。更多选项及其说明详见源码目录下的 polardb_build.sh
脚本。
如无定制的需求,则可以按照下面给出的选项编译部署不同形态的 PolarDB-PG 集群并进行测试。
5432
端口)./polardb_build.sh
+
5432
端口)5433
端口)./polardb_build.sh --withrep --repnum=1
+
5432
端口)5433
端口)5434
端口)./polardb_build.sh --withrep --repnum=1 --withstandby
+
5432
端口)5433
/ 5434
端口)./polardb_build.sh --initpx
+
普通实例回归测试:
./polardb_build.sh --withrep -r -e -r-external -r-contrib -r-pl --with-tde
+
HTAP 实例回归测试:
./polardb_build.sh -r-px -e -r-external -r-contrib -r-pl --with-tde
+
DMA 实例回归测试:
./polardb_build.sh -r -e -r-external -r-contrib -r-pl --with-tde --with-dma
+
DANGER
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改健壮的密码!
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
# 拉取 PolarDB 开发镜像
+docker pull polardb/polardb_pg_devel
+
此时我们已经在开发机器的源码目录中。从开发镜像上创建一个容器,将当前目录作为一个 volume 挂载到容器中,这样可以:
docker run -it \\
+ -v $PWD:/home/postgres/polardb_pg \\
+ --shm-size=512m --cap-add=SYS_PTRACE --privileged=true \\
+ --name polardb_pg_devel \\
+ polardb/polardb_pg_devel \\
+ bash
+
进入容器后,为容器内用户获取源码目录的权限,然后编译部署 PolarDB-PG 实例。
# 获取权限并编译部署
+cd polardb_pg
+sudo chmod -R a+wr ./
+sudo chown -R postgres:postgres ./
+./polardb_build.sh
+
+# 验证
+psql -h 127.0.0.1 -c 'select version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
以下表格列出了编译、初始化或测试 PolarDB-PG 集群所可能使用到的选项及说明。更多选项及其说明详见源码目录下的 polardb_build.sh
脚本。
如无定制的需求,则可以按照下面给出的选项编译部署不同形态的 PolarDB-PG 集群并进行测试。
5432
端口)./polardb_build.sh
+
5432
端口)5433
端口)./polardb_build.sh --withrep --repnum=1
+
5432
端口)5433
端口)5434
端口)./polardb_build.sh --withrep --repnum=1 --withstandby
+
5432
端口)5433
/ 5434
端口)./polardb_build.sh --initpx
+
普通实例回归测试:
./polardb_build.sh --withrep -r -e -r-external -r-contrib -r-pl --with-tde
+
HTAP 实例回归测试:
./polardb_build.sh -r-px -e -r-external -r-contrib -r-pl --with-tde
+
DMA 实例回归测试:
./polardb_build.sh -r -e -r-external -r-contrib -r-pl --with-tde --with-dma
+
PostgreSQL 支持并行(多进程扫描/排序)和并发(不阻塞 DML)创建索引,但只能在创建索引的过程中使用单个计算节点的资源。
PolarDB-PG 的 ePQ 弹性跨机并行查询特性支持对 B-Tree 类型的索引创建进行加速。ePQ 能够利用多个计算节点的 I/O 带宽并行扫描全表数据,并利用多个计算节点的 CPU 和内存资源对每行数据在表中的物理位置按索引列值进行排序,构建索引元组。最终,将有序的索引元组归并到创建索引的进程中,写入索引页面,完成索引的创建。
创建一张包含三个列,数据量为 1000000 行的表:
CREATE TABLE t (id INT, age INT, msg TEXT);
+
+INSERT INTO t
+SELECT
+ random() * 1000000,
+ random() * 10000,
+ md5(random()::text)
+FROM generate_series(1, 1000000);
+
使用 ePQ 创建索引需要以下三个步骤:
polar_enable_px
为 ON
,打开 ePQ 的开关polar_px_dop_per_node
调整查询并行度px_build
属性为 ON
SET polar_enable_px TO ON;
+SET polar_px_dop_per_node TO 8;
+CREATE INDEX t_idx1 ON t(id, msg) WITH(px_build = ON);
+
类似地,ePQ 支持并发创建索引,只需要在 CREATE INDEX
后加上 CONCURRENTLY
关键字即可:
SET polar_enable_px TO ON;
+SET polar_px_dop_per_node TO 8;
+CREATE INDEX CONCURRENTLY t_idx2 ON t(id, msg) WITH(px_build = ON);
+
ePQ 加速创建索引暂不支持以下场景:
UNIQUE
索引INCLUDING
列TABLESPACE
WHERE
而成为部分索引(Partial Index)对于物化视图的创建和刷新,以及 CREATE TABLE AS
/ SELECT INTO
语法,由于在数据库层面需要完成的工作步骤十分相似,因此 PostgreSQL 内核使用同一套代码逻辑来处理这几种语法。内核执行过程中的主要步骤包含:
CREATE TABLE AS
/ SELECT INTO
语法中定义的查询,扫描符合查询条件的数据PolarDB for PostgreSQL 对上述两个步骤分别引入了 ePQ 并行扫描和批量数据写入的优化。在需要扫描或写入的数据量较大时,能够显著提升上述 DDL 语法的性能,缩短执行时间:
将以下参数设置为 ON
即可启用 ePQ 并行扫描来加速上述语法中的查询过程,目前其默认值为 ON
。该参数生效的前置条件是 ePQ 特性的总开关 polar_enable_px
被打开。
SET polar_px_enable_create_table_as = ON;
+
由于 ePQ 特性的限制,该优化不支持 CREATE TABLE AS ... WITH OIDS
语法。对于该语法,处理流程将会回退为使用 PostgreSQL 内置优化器为 DDL 定义中的查询生成执行计划,并通过 PostgreSQL 的单机执行器完成查询。
将以下参数设置为 ON
即可启用批量写入来加速上述语法中的写入过程,目前其默认值为 ON
。
SET polar_enable_create_table_as_bulk_insert = ON;
+
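下面给出一个示意性的完整用法片段,其中源表 t_src、目标表 t_new 以及查询条件均为假设的示例对象,仅用于展示在打开上述参数后执行 CREATE TABLE AS 的大致过程:
-- 打开 ePQ 总开关以及并行扫描、批量写入两个优化开关(仅为示意)
SET polar_enable_px = ON;
SET polar_px_enable_create_table_as = ON;
SET polar_enable_create_table_as_bulk_insert = ON;
-- 基于查询结果创建新表:扫描阶段由 ePQ 跨机并行执行,写入阶段批量完成
CREATE TABLE t_new AS SELECT * FROM t_src WHERE id % 2 = 0;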
PostgreSQL 提供了 EXPLAIN
命令用于 SQL 语句的性能分析。它能够输出 SQL 对应的查询计划,以及在执行过程中的具体耗时、资源消耗等信息,可用于排查 SQL 的性能瓶颈。
EXPLAIN
命令原先只适用于单机执行的 SQL 性能分析。PolarDB-PG 的 ePQ 弹性跨机并行查询扩展了 EXPLAIN
的功能,使其可以打印 ePQ 的跨机并行执行计划,还能够统计 ePQ 执行计划在各个算子上的执行时间、数据扫描量、内存使用量等信息,并以统一的视角返回给客户端。
ePQ 的执行计划是分片的。每个计划分片(Slice)由计算节点上的虚拟执行单元(Segment)启动的一组进程(Gang)负责执行,完成 SQL 的一部分计算。ePQ 在执行计划中引入了 Motion 算子,用于在执行不同计划分片的进程组之间进行数据传递。因此,Motion 算子就是计划分片的边界。
ePQ 中总共引入了三种 Motion 算子:
PX Coordinator
:源端数据发送到同一个目标端(汇聚)PX Broadcast
:源端数据发送到每一个目标端(广播)PX Hash
:源端数据经过哈希计算后发送到某一个目标端(重分布)以一个简单查询作为例子:
=> CREATE TABLE t (id INT);
+=> SET polar_enable_px TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t LIMIT 1;
+ QUERY PLAN
+-------------------------------------------------
+ Limit
+ -> PX Coordinator 6:1 (slice1; segments: 6)
+ -> Partial Seq Scan on t
+ Optimizer: PolarDB PX Optimizer
+(4 rows)
+
以上执行计划以 Motion 算子为界,被分为了两个分片:一个是接收最终结果的分片 slice0
,一个是扫描数据的分片slice1
。对于 slice1
这个计划分片,ePQ 将使用六个执行单元(segments: 6
)分别启动一个进程来执行,这六个进程各自负责扫描表的一部分数据(Partial Seq Scan
),通过 Motion 算子将六个进程的数据汇聚到一个目标端(PX Coordinator 6:1
),传递给 Limit
算子。
如果查询逐渐复杂,则执行计划中的计划分片和 Motion 算子会越来越多:
=> CREATE TABLE t1 (a INT, b INT, c INT);
+=> SET polar_enable_px TO ON;
+=> EXPLAIN (COSTS OFF) SELECT SUM(b) FROM t1 GROUP BY a LIMIT 1;
+ QUERY PLAN
+------------------------------------------------------------
+ Limit
+ -> PX Coordinator 6:1 (slice1; segments: 6)
+ -> GroupAggregate
+ Group Key: a
+ -> Sort
+ Sort Key: a
+ -> PX Hash 6:6 (slice2; segments: 6)
+ Hash Key: a
+ -> Partial Seq Scan on t1
+ Optimizer: PolarDB PX Optimizer
+(10 rows)
+
以上执行计划中总共有三个计划分片。将会有六个进程(segments: 6
)负责执行 slice2
分片,分别扫描表的一部分数据,然后通过 Motion 算子(PX Hash 6:6
)将数据重分布到另外六个(segments: 6
)负责执行 slice1
分片的进程上,各自完成排序(Sort
)和聚合(GroupAggregate
),最终通过 Motion 算子(PX Coordinator 6:1
)将数据汇聚到结果分片 slice0
。
PolarDB-PG 的 ePQ 弹性跨机并行查询特性提供了精细的粒度控制方法,可以合理使用集群内的计算资源。在最大程度利用闲置计算资源进行并行查询,提升资源利用率的同时,避免了对其它业务负载产生影响:
参数 polar_px_nodes
指定了参与 ePQ 的计算节点范围,默认值为空,表示所有只读节点都参与 ePQ 并行查询:
=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+
+(1 row)
+
如果希望读写节点也参与 ePQ 并行,则可以设置如下参数:
SET polar_px_use_primary TO ON;
+
如果部分只读节点负载较高,则可以通过修改 polar_px_nodes
参数设置仅特定几个而非所有只读节点参与 ePQ 并行查询。参数 polar_px_nodes
的合法格式是一个以英文逗号分隔的节点名称列表。获取节点名称需要安装 polar_monitor
插件:
CREATE EXTENSION IF NOT EXISTS polar_monitor;
+
通过 polar_monitor
插件提供的集群拓扑视图,可以查询到集群中所有计算节点的名称:
=> SELECT name,slot_name,type FROM polar_cluster_info;
+ name | slot_name | type
+-------+-----------+---------
+ node0 | | Primary
+ node1 | standby1 | Standby
+ node2 | replica1 | Replica
+ node3 | replica2 | Replica
+(4 rows)
+
其中:
Primary
表示读写节点Replica
表示只读节点Standby
表示备库节点通用的最佳实践是使用负载较低的只读节点参与 ePQ 并行查询:
=> SET polar_px_nodes = 'node2,node3';
+=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+ node2,node3
+(1 row)
+
参数 polar_px_dop_per_node
用于设置当前会话中的 ePQ 查询在每个计算节点上的执行单元(Segment)数量,每个执行单元会为其需要执行的每一个计划分片(Slice)启动一个进程。
该参数默认值为 3
,通用最佳实践值为当前计算节点 CPU 核心数的一半。如果计算节点的 CPU 负载较高,可以酌情递减该参数,控制计算节点的 CPU 占用率至 80% 以下;如果查询性能不佳时,可以酌情递增该参数,也需要保持计算节点的 CPU 水位不高于 80%。否则可能会拖慢其它的后台进程。
创建一张表:
CREATE TABLE test(id INT);
+
假设集群内有两个只读节点,polar_px_nodes
为空,此时 ePQ 将使用集群内的所有只读节点参与并行查询;参数 polar_px_dop_per_node
的值为 3
,表示每个计算节点上将会有三个执行单元。执行计划如下:
=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+
+(1 row)
+
+=> SHOW polar_px_dop_per_node;
+ polar_px_dop_per_node
+-----------------------
+ 3
+(1 row)
+
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
从执行计划中可以看出,两个只读节点上总计有六个执行单元(segments: 6
)将会执行这个计划中唯一的计划分片 slice1
。这意味着总计会有六个进程并行执行当前查询。
此时,调整 polar_px_dop_per_node
为 4
,再次执行查询,两个只读节点上总计会有八个执行单元参与当前查询。由于执行计划中只有一个计划分片 slice1
,这意味着总计会有八个进程并行执行当前查询:
=> SET polar_px_dop_per_node TO 4;
+SET
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 8:1 (slice1; segments: 8) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
此时,如果设置 polar_px_use_primary
参数,让读写节点也参与查询,那么读写节点上也将会有四个执行单元参与 ePQ 并行执行,集群内总计 12 个进程参与并行执行:
=> SET polar_px_use_primary TO ON;
+SET
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ PX Coordinator 12:1 (slice1; segments: 12) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
随着数据量的不断增长,表的规模将会越来越大。为了方便管理和提高查询性能,比较好的实践是使用分区表,将大表拆分成多个子分区表。甚至每个子分区表还可以进一步拆成二级子分区表,从而形成了多级分区表。
PolarDB-PG 支持 ePQ 弹性跨机并行查询,能够利用集群中多个计算节点提升只读查询的性能。ePQ 不仅能够对普通表进行高效的跨机并行查询,对分区表也实现了跨机并行查询。
ePQ 对分区表的基础功能支持包含:
此外,ePQ 还支持了部分与分区表相关的高级功能:
ePQ 暂不支持对具有多列分区键的分区表进行并行查询。
创建一张分区策略为 Range 的分区表,并创建三个子分区:
CREATE TABLE t1 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t1_p1 PARTITION OF t1 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t1_p2 PARTITION OF t1 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t1_p3 PARTITION OF t1 FOR VALUES FROM (400) TO (600);
+
设置参数打开 ePQ 开关和 ePQ 分区表扫描功能的开关:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
查看对分区表进行全表扫描的执行计划:
=> EXPLAIN (COSTS OFF) SELECT * FROM t1;
+ QUERY PLAN
+-------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t1_p1
+ -> Partial Seq Scan on t1_p2
+ -> Partial Seq Scan on t1_p3
+ Optimizer: PolarDB PX Optimizer
+(6 rows)
+
ePQ 将会启动一组进程并行扫描分区表的每一个子表。每一个扫描进程都会通过 Append
算子依次扫描每一个子表的一部分数据(Partial Seq Scan
),并通过 Motion 算子(PX Coordinator
)将所有进程的扫描结果汇聚到发起查询的进程并返回。
当查询的过滤条件中包含分区键时,ePQ 优化器可以根据过滤条件对将要扫描的分区表进行裁剪,避免扫描不需要的子分区,节省系统资源,提升查询性能。以上述 t1
表为例,查看以下查询的执行计划:
=> EXPLAIN (COSTS OFF) SELECT * FROM t1 WHERE id < 100;
+ QUERY PLAN
+-------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t1_p1
+ Filter: (id < 100)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
由于查询的过滤条件 id < 100
包含分区键,因此 ePQ 优化器可以根据分区表的分区边界,在产生执行计划时去除不符合过滤条件的子分区(t1_p2
、t1_p3
),只保留符合过滤条件的子分区(t1_p1
)。
在进行分区表之间的连接操作时,如果两张表的分区策略和边界相同,并且连接条件为分区键,ePQ 优化器可以产生以子分区为单位进行连接的执行计划,避免两张分区表进行笛卡尔积式的连接,节省系统资源,提升查询性能。
以两张 Range 分区表的连接为例。使用以下 SQL 创建两张分区策略和边界都相同的分区表 t2
和 t3
:
CREATE TABLE t2 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t2_p1 PARTITION OF t2 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t2_p2 PARTITION OF t2 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t2_p3 PARTITION OF t2 FOR VALUES FROM (400) TO (600);
+
+CREATE TABLE t3 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t3_p1 PARTITION OF t3 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t3_p2 PARTITION OF t3 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t3_p3 PARTITION OF t3 FOR VALUES FROM (400) TO (600);
+
打开以下参数启用 ePQ 对分区表的支持:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
当 Partition Wise join 关闭时,两表在分区键上等值连接的执行计划如下:
=> SET polar_px_enable_partitionwise_join TO OFF;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t2 JOIN t3 ON t2.id = t3.id;
+ QUERY PLAN
+-----------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Hash Join
+ Hash Cond: (t2_p1.id = t3_p1.id)
+ -> Append
+ -> Partial Seq Scan on t2_p1
+ -> Partial Seq Scan on t2_p2
+ -> Partial Seq Scan on t2_p3
+ -> Hash
+ -> PX Broadcast 6:6 (slice2; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t3_p1
+ -> Partial Seq Scan on t3_p2
+ -> Partial Seq Scan on t3_p3
+ Optimizer: PolarDB PX Optimizer
+(14 rows)
+
从执行计划中可以看出,执行 slice1
计划分片的六个进程会分别通过 Append
算子依次扫描分区表 t2
每一个子分区的一部分数据,并通过 Motion 算子(PX Broadcast
)接收来自执行 slice2
的六个进程广播的 t3
全表数据,在本地完成哈希连接(Hash Join
)后,通过 Motion 算子(PX Coordinator
)汇聚结果并返回。本质上,分区表 t2
的每一行数据都与 t3
的每一行数据做了一次连接。
打开参数 polar_px_enable_partitionwise_join
启用 Partition Wise join 后,再次查看执行计划:
=> SET polar_px_enable_partitionwise_join TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t2 JOIN t3 ON t2.id = t3.id;
+ QUERY PLAN
+------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Hash Join
+ Hash Cond: (t2_p1.id = t3_p1.id)
+ -> Partial Seq Scan on t2_p1
+ -> Hash
+ -> Full Seq Scan on t3_p1
+ -> Hash Join
+ Hash Cond: (t2_p2.id = t3_p2.id)
+ -> Partial Seq Scan on t2_p2
+ -> Hash
+ -> Full Seq Scan on t3_p2
+ -> Hash Join
+ Hash Cond: (t2_p3.id = t3_p3.id)
+ -> Partial Seq Scan on t2_p3
+ -> Hash
+ -> Full Seq Scan on t3_p3
+ Optimizer: PolarDB PX Optimizer
+(18 rows)
+
在上述执行计划中,执行 slice1
计划分片的六个进程将通过 Append
算子依次扫描分区表 t2
每个子分区中的一部分数据,以及分区表 t3
相对应子分区 的全部数据,将两份数据进行哈希连接(Hash Join
),最终通过 Motion 算子(PX Coordinator
)汇聚结果并返回。在上述执行过程中,分区表 t2
的每一个子分区 t2_p1
、t2_p2
、t2_p3
分别只与分区表 t3
对应的 t3_p1
、t3_p2
、t3_p3
做了连接,并没有与其它不相关的分区连接,节省了不必要的工作。
在多级分区表中,每级分区表的分区维度(分区键)可以不同:比如一级分区表按照时间维度分区,二级分区表按照地域维度分区。当查询 SQL 的过滤条件中包含每一级分区表中的分区键时,ePQ 优化器支持对多级分区表进行静态分区裁剪,从而过滤掉不需要被扫描的子分区。
以下图为例:当查询过滤条件 WHERE date = '202201' AND region = 'beijing'
中包含一级分区键 date
和二级分区键 region
时,ePQ 优化器能够裁剪掉所有不相关的分区,产生的执行计划中只包含符合条件的子分区。由此,执行器只对需要扫描的子分区进行扫描即可。
使用以下 SQL 为例,创建一张多级分区表:
CREATE TABLE r1 (a INT, b TIMESTAMP) PARTITION BY RANGE (b);
+
+CREATE TABLE r1_p1 PARTITION OF r1 FOR VALUES FROM ('2000-01-01') TO ('2010-01-01') PARTITION BY RANGE (a);
+CREATE TABLE r1_p1_p1 PARTITION OF r1_p1 FOR VALUES FROM (1) TO (1000000);
+CREATE TABLE r1_p1_p2 PARTITION OF r1_p1 FOR VALUES FROM (1000000) TO (2000000);
+
+CREATE TABLE r1_p2 PARTITION OF r1 FOR VALUES FROM ('2010-01-01') TO ('2020-01-01') PARTITION BY RANGE (a);
+CREATE TABLE r1_p2_p1 PARTITION OF r1_p2 FOR VALUES FROM (1) TO (1000000);
+CREATE TABLE r1_p2_p2 PARTITION OF r1_p2 FOR VALUES FROM (1000000) TO (2000000);
+
打开以下参数启用 ePQ 对分区表的支持:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
执行一条以两级分区键作为过滤条件的 SQL,并关闭 ePQ 的多级分区扫描功能,将得到 PostgreSQL 内置优化器经过多级分区静态裁剪后的执行计划:
=> SET polar_px_optimizer_multilevel_partitioning TO OFF;
+=> EXPLAIN (COSTS OFF) SELECT * FROM r1 WHERE a < 1000000 AND b < '2009-01-01 00:00:00';
+ QUERY PLAN
+----------------------------------------------------------------------------------------
+ Seq Scan on r1_p1_p1 r1
+ Filter: ((a < 1000000) AND (b < '2009-01-01 00:00:00'::timestamp without time zone))
+(2 rows)
+
启用 ePQ 的多级分区扫描功能,再次查看执行计划:
=> SET polar_px_optimizer_multilevel_partitioning TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM r1 WHERE a < 1000000 AND b < '2009-01-01 00:00:00';
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on r1_p1_p1
+ Filter: ((a < 1000000) AND (b < '2009-01-01 00:00:00'::timestamp without time zone))
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
在上述计划中,ePQ 优化器进行了对多级分区表的静态裁剪。执行 slice1
计划分片的六个进程只需对符合过滤条件的子分区 r1_p1_p1
进行并行扫描(Partial Seq Scan
)即可,并将扫描到的数据通过 Motion 算子(PX Coordinator
)汇聚并返回。
目前文件系统并不能保证数据库页面级别的原子读写,在一次页面的 I/O 过程中,如果发生设备断电等情况,就会造成页面数据的错乱和丢失。在实现闪回表的过程中,我们发现通过定期保存旧版本数据页 + WAL 日志回放的方式可以得到任意时间点的数据页,这样就可以解决半写问题。这种方式和 PostgreSQL 原生的 Full Page Write 相比,由于不在事务提交的主路径上,因此性能有了约 30% ~ 100% 的提升。实例规格越大,负载压力越大,效果越明显。
闪回日志 (Flashback Log) 用于保存压缩后的旧版本数据页。其解决半写问题的方案如下:
当遭遇半写问题(数据页 checksum 不正确)时,通过日志索引快速找到该页对应的 Flashback Log 记录,通过 Flashback Log 记录可以得到旧版本的正确数据页,用于替换被损坏的页。在文件系统不能保证 8kB 级别原子读写的任何设备上,都可以使用这个功能。需要特别注意的是,启用这个功能会造成一定的性能下降。
闪回表 (Flashback Table) 功能通过定期保留数据页面快照到闪回日志中,保留事务信息到快速恢复区中,支持用户将某个时刻的表数据恢复到一个新的表中。
FLASHBACK TABLE
+ [ schema. ]table
+ TO TIMESTAMP expr;
+
准备测试数据。创建表 test
,并插入数据:
CREATE TABLE test(id int);
+INSERT INTO test select * FROM generate_series(1, 10000);
+
查看已插入的数据:
polardb=# SELECT count(1) FROM test;
+ count
+-------
+ 10000
+(1 row)
+
+polardb=# SELECT sum(id) FROM test;
+ sum
+----------
+ 50005000
+(1 row)
+
等待 10 秒并删除表数据:
SELECT pg_sleep(10);
+DELETE FROM test;
+
表中已无数据:
polardb=# SELECT * FROM test;
+ id
+----
+(0 rows)
+
闪回表到 10 秒之前的数据:
polardb=# FLASHBACK TABLE test TO TIMESTAMP now() - interval'10s';
+NOTICE: Flashback the relation test to new relation polar_flashback_65566, please check the data
+FLASHBACK TABLE
+
检查闪回表数据:
polardb=# SELECT count(1) FROM polar_flashback_65566;
+ count
+-------
+ 10000
+(1 row)
+
+polardb=# SELECT sum(id) FROM polar_flashback_65566;
+ sum
+----------
+ 50005000
+(1 row)
+
闪回表功能依赖闪回日志和快速恢复区功能,需要设置 polar_enable_flashback_log
和 polar_enable_fast_recovery_area
参数并重启。其他的参数也需要按照需求来修改,建议一次性修改完成并在业务低峰期重启。打开闪回表功能将会增大内存、磁盘的占用量,并带来一定的性能损失,请谨慎评估后再使用。
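一种可能的开启方式如下(仅为示意:也可以直接修改 postgresql.conf;这两个参数均需重启实例后才会生效,建议在业务低峰期操作):
-- 写入配置,重启实例后生效
ALTER SYSTEM SET polar_enable_flashback_log = ON;
ALTER SYSTEM SET polar_enable_fast_recovery_area = ON;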
打开闪回日志功能需要增加的共享内存大小为以下三项之和:
polar_flashback_log_buffers
* 8kBpolar_flashback_logindex_mem_size
MBpolar_flashback_logindex_queue_buffers
MB打开快速恢复区需要增加大约 32kB 的共享内存大小,请评估当前实例状态后再调整参数。
为了保证能够闪回到一定时间之前,需要保留该段时间的闪回日志和 WAL 日志,以及两者的 LogIndex 文件,这会增加磁盘空间的占用。理论上 polar_fast_recovery_area_rotation
设置得越大,磁盘占用越多。若 polar_fast_recovery_area_rotation
设置为 300
,则将会保存 5 个小时的历史数据。
打开闪回日志之后,会定期去做 闪回点(Flashback Point)。闪回点是检查点的一种,当触发检查点后会检查 polar_flashback_point_segments
和 polar_flashback_point_timeout
参数来判断当前检查点是否为闪回点。所以建议:
polar_flashback_point_segments
为 max_wal_size
的倍数polar_flashback_point_timeout
为 checkpoint_timeout
的倍数假设 5 个小时共产生 20GB 的 WAL 日志,闪回日志与 WAL 日志的比例大约是 1:20,那么大约会产生 1GB 的闪回日志。闪回日志和 WAL 日志的比例大小和以下两个因素有关:
polar_flashback_point_segments
、polar_flashback_point_timeout
参数设定越大,闪回日志越少闪回日志特性增加了两个后台进程来消费闪回日志,这势必会增大 CPU 的开销。可以调整 polar_flashback_log_bgwrite_delay
和 polar_flashback_log_insert_list_delay
参数使得两个后台进程工作间隔周期更长,从而减少 CPU 消耗,但是这可能会造成一定性能的下降,建议使用默认值即可。
另外,由于闪回日志功能需要在该页面刷脏之前,先刷对应的闪回日志,来保证不丢失闪回日志,所以可能会造成一定的性能下降。目前测试在大多数场景下性能下降不超过 5%。
在表闪回的过程中,目标表涉及到的页面在共享内存池中换入换出,可能会造成其他数据库访问操作的性能抖动。
目前闪回表功能会恢复目标表的数据到一个新表中,表名为 polar_flashback_目标表 OID
。在执行 FLASHBACK TABLE
语法后会有如下 NOTICE
提示:
polardb=# flashback table test to timestamp now() - interval '1h';
+NOTICE: Flashback the relation test to new relation polar_flashback_54986, please check the data
+FLASHBACK TABLE
+
其中的 polar_flashback_54986
就是闪回恢复出的临时表,只恢复表数据到目标时刻。目前只支持 普通表 的闪回,不支持以下数据库对象:
另外,如果在目标时间到当前时刻对表执行过某些 DDL,则无法闪回:
DROP TABLE
ALTER TABLE SET WITH OIDS
ALTER TABLE SET WITHOUT OIDS
TRUNCATE TABLE
UNLOGGED
或者 LOGGED
IDENTITY
的列其中 DROP TABLE
的闪回可以使用 PolarDB for PostgreSQL/Oracle 的闪回删除功能来恢复。
当出现人为误操作数据的情况时,建议先使用审计日志快速定位到误操作发生的时间,然后将目标表闪回到该时间之前。在表闪回过程中,会持有目标表的排他锁,因此仅可以对目标表进行查询操作。另外,在表闪回的过程中,目标表涉及到的页面在共享内存池中换入换出,可能会造成其他数据库访问操作的性能抖动。因此,建议在业务低峰期执行闪回操作。
闪回的速度和表的大小相关。当表比较大时,为节约时间,可以加大 polar_workers_per_flashback_table
参数,增加并行闪回的 worker 个数。
在表闪回结束后,可以根据 NOTICE
的提示,查询对应闪回表的数据,和原表的数据进行比对。闪回表上不会有任何索引,用户可以根据查询需要自行创建索引。在数据比对完成之后,可以将缺失的数据重新回流到原表。
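以前文示例中的 test 表和闪回生成的 polar_flashback_65566 表为例,一种可能的数据回流写法如下(仅为示意,实际的比对条件需要结合业务主键或唯一键确定):
-- 将闪回表中存在、原表中已缺失的行补回原表
INSERT INTO test
SELECT * FROM polar_flashback_65566 f
WHERE NOT EXISTS (SELECT 1 FROM test t WHERE t.id = f.id);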
参数名 | 参数含义 | 取值范围 | 默认值 | 生效方法 |
---|---|---|---|---|
polar_enable_flashback_log | 是否打开闪回日志 | on / off | off | 修改配置文件后重启生效 |
polar_enable_fast_recovery_area | 是否打开快速恢复区 | on / off | off | 修改配置文件后重启生效 |
polar_flashback_log_keep_segments | 闪回日志保留的文件个数,可重用。每个文件 256MB | [3, 2147483647] | 8 | SIGHUP 生效 |
polar_fast_recovery_area_rotation | 快速恢复区保留的事务信息时长,单位为分钟,即最大可闪回表到几分钟之前。 | [1, 14400] | 180 | SIGHUP 生效 |
polar_flashback_point_segments | 两个闪回点之间的最小 WAL 日志个数,每个 WAL 日志 1GB | [1, 2147483647] | 16 | SIGHUP 生效 |
polar_flashback_point_timeout | 两个闪回点之间的最小时间间隔,单位为秒 | [1, 86400] | 300 | SIGHUP 生效 |
polar_flashback_log_buffers | 闪回日志共享内存大小,单位为 8kB | [4, 262144] | 2048 (16MB) | 修改配置文件后重启生效 |
polar_flashback_logindex_mem_size | 闪回日志索引共享内存大小,单位为 MB | [3, 1073741823] | 64 | 修改配置文件后重启生效 |
polar_flashback_logindex_bloom_blocks | 闪回日志索引的布隆过滤器页面个数 | [8, 1073741823] | 512 | 修改配置文件后重启生效 |
polar_flashback_log_insert_locks | 闪回日志插入锁的个数 | [1, 2147483647] | 8 | 修改配置文件后重启生效 |
polar_workers_per_flashback_table | 闪回表并行 worker 的数量 | [0, 1024] (0 为关闭并行) | 5 | 即时生效 |
polar_flashback_log_bgwrite_delay | 闪回日志 bgwriter 进程的工作间隔周期,单位为 ms | [1, 10000] | 100 | SIGHUP 生效 |
polar_flashback_log_flush_max_size | 闪回日志 bgwriter 进程每次刷盘闪回日志的大小,单位为 kB | [0, 2097152] (0 为不限制) | 5120 | SIGHUP 生效 |
polar_flashback_log_insert_list_delay | 闪回日志 bginserter 进程的工作间隔周期,单位为 ms | [1, 10000] | 10 | SIGHUP 生效 |
docker pull polardb/polardb_pg_devel:curvebs
+docker run -it \\
+ --network=host \\
+ --cap-add=SYS_PTRACE --privileged=true \\
+ --name polardb_pg \\
+ polardb/polardb_pg_devel:curvebs bash
+
进入容器后需要修改 curve 相关的配置文件:
sudo vim /etc/curve/client.conf
+#
+################### mds一侧配置信息 ##################
+#
+
+# mds的地址信息,对于mds集群,地址以逗号隔开
+mds.listen.addr=127.0.0.1:6666
+... ...
+
容器内已经安装了 curve
工具,该工具可用于创建卷,用户需要使用该工具创建实际存储 PolarFS 数据的 curve 卷:
curve create --filename /volume --user my --length 10 --stripeUnit 16384 --stripeCount 64
+
用户可通过 curve create -h 命令查看创建卷的详细说明。上面的例子中,我们创建了一个拥有以下属性的卷:
特别需要注意的是,在数据库场景下,我们强烈建议使用条带卷,只有这样才能充分发挥 Curve 的性能优势,而 16384 * 64 的条带设置是目前最优的条带设置。
在使用 curve 卷之前需要使用 pfs 来格式化对应的 curve 卷:
sudo pfs -C curve mkfs pool@@volume_my_
+
与我们在本地挂载文件系统前要先在磁盘上格式化文件系统一样,我们也要把我们的 curve 卷格式化为 PolarFS 文件系统。
注意,由于 PolarFS 解析的特殊性,我们将以 pool@\${volume}_\${user}_
的形式指定我们的 curve 卷,此外还需要将卷名中的 / 替换成 @。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p pool@@volume_my_
+
如果 pfsd 启动成功,那么至此 curve 版 PolarFS 已全部部署完成,已经成功挂载 PFS 文件系统。 下面需要编译部署 PolarDB。
docker pull polardb/polardb_pg_devel:curvebs
+docker run -it \\
+ --network=host \\
+ --cap-add=SYS_PTRACE --privileged=true \\
+ --name polardb_pg \\
+ polardb/polardb_pg_devel:curvebs bash
+
进入容器后需要修改 curve 相关的配置文件:
sudo vim /etc/curve/client.conf
+#
+################### mds一侧配置信息 ##################
+#
+
+# mds的地址信息,对于mds集群,地址以逗号隔开
+mds.listen.addr=127.0.0.1:6666
+... ...
+
容器内已经安装了 curve
工具,该工具可用于创建卷,用户需要使用该工具创建实际存储 PolarFS 数据的 curve 卷:
curve create --filename /volume --user my --length 10 --stripeUnit 16384 --stripeCount 64
+
用户可通过 curve create -h 命令查看创建卷的详细说明。上面的例子中,我们创建了一个拥有以下属性的卷:
特别需要注意的是,在数据库场景下,我们强烈建议使用条带卷,只有这样才能充分发挥 Curve 的性能优势,而 16384 * 64 的条带设置是目前最优的条带设置。
在使用 curve 卷之前需要使用 pfs 来格式化对应的 curve 卷:
sudo pfs -C curve mkfs pool@@volume_my_
+
与我们在本地挂载文件系统前要先在磁盘上格式化文件系统一样,我们也要把我们的 curve 卷格式化为 PolarFS 文件系统。
注意,由于 PolarFS 解析的特殊性,我们将以 pool@\${volume}_\${user}_
的形式指定我们的 curve 卷,此外还需要将卷名中的 / 替换成 @。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p pool@@volume_my_
+
如果 pfsd 启动成功,那么至此 curve 版 PolarFS 已全部部署完成,已经成功挂载 PFS 文件系统。 下面需要编译部署 PolarDB。
docker pull polardb/polardb_pg_binary
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
#define PFS_PATH_ISVALID(path) \\
+ (path != NULL && \\
+ ((path[0] == '/' && isdigit((path)[1])) || path[0] == '.' \\
+ || strncmp(path, "/pangu-", 7) == 0 \\
+ || strncmp(path, "/sd", 3) == 0 \\
+ || strncmp(path, "/sf", 3) == 0 \\
+ || strncmp(path, "/vd", 3) == 0 \\
+ || strncmp(path, "/nvme", 5) == 0 \\
+ || strncmp(path, "/loop", 5) == 0 \\
+ || strncmp(path, "/mapper_", 8) ==0))
+
因此,为了保证能够顺畅完成后续流程,我们建议在所有访问块设备的节点上使用相同的软链接访问共享块设备。例如,在 NBD 服务端主机上,使用新的块设备名 /dev/nvme1n1
软链接到共享存储块设备的原有名称 /dev/vdb
上:
sudo ln -s /dev/vdb /dev/nvme1n1
+
在 NBD 客户端主机上,使用同样的块设备名 /dev/nvme1n1
软链到共享存储块设备的原有名称 /dev/nbd0
上:
sudo ln -s /dev/nbd0 /dev/nvme1n1
+
这样便可以在服务端和客户端两台主机上使用相同的块设备名 /dev/nvme1n1
访问同一个块设备。
使用 任意一台主机,在共享存储块设备上格式化 PFS 分布式文件系统:
sudo pfs -C disk mkfs nvme1n1
+
在能够访问共享存储的 所有主机节点 上分别启动 PFS 守护进程,挂载 PFS 文件系统:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
docker pull polardb/polardb_pg_binary
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
#define PFS_PATH_ISVALID(path) \\
+ (path != NULL && \\
+ ((path[0] == '/' && isdigit((path)[1])) || path[0] == '.' \\
+ || strncmp(path, "/pangu-", 7) == 0 \\
+ || strncmp(path, "/sd", 3) == 0 \\
+ || strncmp(path, "/sf", 3) == 0 \\
+ || strncmp(path, "/vd", 3) == 0 \\
+ || strncmp(path, "/nvme", 5) == 0 \\
+ || strncmp(path, "/loop", 5) == 0 \\
+ || strncmp(path, "/mapper_", 8) ==0))
+
因此,为了保证能够顺畅完成后续流程,我们建议在所有访问块设备的节点上使用相同的软链接访问共享块设备。例如,在 NBD 服务端主机上,使用新的块设备名 /dev/nvme1n1
软链接到共享存储块设备的原有名称 /dev/vdb
上:
sudo ln -s /dev/vdb /dev/nvme1n1
+
在 NBD 客户端主机上,使用同样的块设备名 /dev/nvme1n1
软链到共享存储块设备的原有名称 /dev/nbd0
上:
sudo ln -s /dev/nbd0 /dev/nvme1n1
+
这样便可以在服务端和客户端两台主机上使用相同的块设备名 /dev/nvme1n1
访问同一个块设备。
使用 任意一台主机,在共享存储块设备上格式化 PFS 分布式文件系统:
sudo pfs -C disk mkfs nvme1n1
+
在能够访问共享存储的 所有主机节点 上分别启动 PFS 守护进程,挂载 PFS 文件系统:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
另外,为保证后续扩容步骤的成功,请以 10GB 为单位进行扩容。
本示例中,在扩容之前,已有一个 20GB 的 ESSD 云盘多重挂载在两台 ECS 上。在这两台 ECS 上运行 lsblk
,可以看到 ESSD 云盘共享存储对应的块设备 nvme1n1
目前的物理空间为 20GB。
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 20G 0 disk
+
接下来对这块 ESSD 云盘进行扩容。在阿里云 ESSD 云盘的管理页面上,点击 云盘扩容:
进入到云盘扩容界面以后,可以看到该云盘已被两台 ECS 实例多重挂载。填写扩容后的容量,然后点击确认扩容,把 20GB 的云盘扩容为 40GB:
扩容成功后,将会看到如下提示:
此时,两台 ECS 上运行 lsblk
,可以看到 ESSD 对应块设备 nvme1n1
的物理空间已经变为 40GB:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 40G 0 disk
+
至此,块存储层面的扩容就完成了。
在物理块设备完成扩容以后,接下来需要使用 PFS 文件系统提供的工具,对块设备上扩大后的物理空间进行格式化,以完成文件系统层面的扩容。
在能够访问共享存储的 任意一台主机上 运行 PFS 的 growfs
命令,其中:
-o
表示共享存储扩容前的空间(以 10GB 为单位)-n
表示共享存储扩容后的空间(以 10GB 为单位)本例将共享存储从 20GB 扩容至 40GB,所以参数分别填写 2
和 4
:
$ sudo pfs -C disk growfs -o 2 -n 4 nvme1n1
+
+...
+
+Init chunk 2
+ metaset 2/1: sectbda 0x500001000, npage 80, objsize 128, nobj 2560, oid range [ 2000, 2a00)
+ metaset 2/2: sectbda 0x500051000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+ metaset 2/3: sectbda 0x500091000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+
+Init chunk 3
+ metaset 3/1: sectbda 0x780001000, npage 80, objsize 128, nobj 2560, oid range [ 3000, 3a00)
+ metaset 3/2: sectbda 0x780051000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+ metaset 3/3: sectbda 0x780091000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+
+pfs growfs succeeds!
+
如果看到上述输出,说明文件系统层面的扩容已经完成。
最后,在数据库实例层,扩容需要做的工作是执行 SQL 函数来通知每个实例上已经挂载到共享存储的 PFSD(PFS Daemon)守护进程,告知共享存储上的新空间已经可以被使用了。需要注意的是,数据库实例集群中的 所有 PFSD 都需要被通知到,并且需要 先通知所有 RO 节点上的 PFSD,最后通知 RW 节点上的 PFSD。这意味着我们需要在 每一个 PolarDB for PostgreSQL 节点上执行一次通知 PFSD 的 SQL 函数,并且 RO 节点在先,RW 节点在后。
数据库实例层通知 PFSD 的扩容函数实现在 PolarDB for PostgreSQL 的 polar_vfs
插件中,所以首先需要在 RW 节点 上加载 polar_vfs
插件。在加载插件的过程中,会在 RW 节点和所有 RO 节点上注册好 polar_vfs_disk_expansion
这个 SQL 函数。
CREATE EXTENSION IF NOT EXISTS polar_vfs;
+
接下来,依次 在所有的 RO 节点上,再到 RW 节点上 分别 执行这个 SQL 函数。其中函数的参数名为块设备名:
SELECT polar_vfs_disk_expansion('nvme1n1');
+
执行完毕后,数据库实例层面的扩容也就完成了。此时,新的存储空间已经能够被数据库使用了。
`,26);function D(p,R){const l=e("Badge"),c=e("ArticleInfo"),o=e("router-link"),r=e("RouteLink");return d(),i("div",null,[s("h1",_,[s("a",S,[s("span",null,[n("共享存储在线扩容 "),s("a",f,[a(l,{type:"tip",text:"视频",vertical:"top"})])])])]),a(c,{frontmatter:p.$frontmatter},null,8,["frontmatter"]),v,s("nav",P,[s("ul",null,[s("li",null,[a(o,{to:"#块存储层扩容"},{default:t(()=>[n("块存储层扩容")]),_:1})]),s("li",null,[a(o,{to:"#文件系统层扩容"},{default:t(()=>[n("文件系统层扩容")]),_:1})]),s("li",null,[a(o,{to:"#数据库实例层扩容"},{default:t(()=>[n("数据库实例层扩容")]),_:1})])])]),E,w,s("p",null,[n("首先需要进行的是块存储层面上的扩容。不管使用哪种类型的共享存储,存储层面扩容最终需要达成的目的是:在能够访问共享存储的主机上运行 "),x,n(" 命令,显示存储块设备的物理空间变大。由于不同类型的共享存储有不同的扩容方式,本文以 "),a(r,{to:"/deploying/storage-aliyun-essd.html"},{default:t(()=>[n("阿里云 ECS + ESSD 云盘共享存储")]),_:1}),n(" 为例演示如何进行存储层面的扩容。")]),B])}const N=u(h,[["render",D],["__file","grow-storage.html.vue"]]),T=JSON.parse('{"path":"/operation/grow-storage.html","title":"共享存储在线扩容","lang":"en-US","frontmatter":{"author":"棠羽","date":"2022/10/12","minute":15},"headers":[{"level":2,"title":"块存储层扩容","slug":"块存储层扩容","link":"#块存储层扩容","children":[]},{"level":2,"title":"文件系统层扩容","slug":"文件系统层扩容","link":"#文件系统层扩容","children":[]},{"level":2,"title":"数据库实例层扩容","slug":"数据库实例层扩容","link":"#数据库实例层扩容","children":[]}],"git":{"updatedTime":1672970315000},"filePathRelative":"operation/grow-storage.md"}');export{N as comp,T as data}; diff --git a/assets/grow-storage.html-BtBHbbDF.js b/assets/grow-storage.html-BtBHbbDF.js new file mode 100644 index 00000000000..ce44c0fffd2 --- /dev/null +++ b/assets/grow-storage.html-BtBHbbDF.js @@ -0,0 +1,28 @@ +import{_ as u,r as e,o as d,c as i,a as s,b as n,d as a,w as t,e as k}from"./app-CWFDhr_k.js";const m="/PolarDB-for-PostgreSQL/assets/essd-storage-grow-QKfeQwnZ.png",g="/PolarDB-for-PostgreSQL/assets/essd-storage-online-grow-6Cak6wu9.png",b="/PolarDB-for-PostgreSQL/assets/essd-storage-grow-complete-CDUTckAw.png",h={},_={id:"共享存储在线扩容",tabindex:"-1"},S={class:"header-anchor",href:"#共享存储在线扩容"},f={href:"https://developer.aliyun.com/live/250669"},v=s("p",null,"在使用数据库时,随着数据量的逐渐增大,不可避免需要对数据库所使用的存储空间进行扩容。由于 PolarDB for PostgreSQL 基于共享存储与分布式文件系统 PFS 的架构设计,与安装部署时类似,在扩容时,需要在以下三个层面分别进行操作:",-1),P={class:"table-of-contents"},E=s("p",null,"本文将指导您分别在以上三个层面上分别完成扩容操作,以实现不停止数据库实例的动态扩容。",-1),w=s("h2",{id:"块存储层扩容",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#块存储层扩容"},[s("span",null,"块存储层扩容")])],-1),x=s("code",null,"lsblk",-1),B=k(`另外,为保证后续扩容步骤的成功,请以 10GB 为单位进行扩容。
本示例中,在扩容之前,已有一个 20GB 的 ESSD 云盘多重挂载在两台 ECS 上。在这两台 ECS 上运行 lsblk
,可以看到 ESSD 云盘共享存储对应的块设备 nvme1n1
目前的物理空间为 20GB。
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 20G 0 disk
+
接下来对这块 ESSD 云盘进行扩容。在阿里云 ESSD 云盘的管理页面上,点击 云盘扩容:
进入到云盘扩容界面以后,可以看到该云盘已被两台 ECS 实例多重挂载。填写扩容后的容量,然后点击确认扩容,把 20GB 的云盘扩容为 40GB:
扩容成功后,将会看到如下提示:
此时,两台 ECS 上运行 lsblk
,可以看到 ESSD 对应块设备 nvme1n1
的物理空间已经变为 40GB:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 40G 0 disk
+
至此,块存储层面的扩容就完成了。
在物理块设备完成扩容以后,接下来需要使用 PFS 文件系统提供的工具,对块设备上扩大后的物理空间进行格式化,以完成文件系统层面的扩容。
在能够访问共享存储的 任意一台主机上 运行 PFS 的 growfs
命令,其中:
-o
表示共享存储扩容前的空间(以 10GB 为单位)-n
表示共享存储扩容后的空间(以 10GB 为单位)本例将共享存储从 20GB 扩容至 40GB,所以参数分别填写 2
和 4
:
$ sudo pfs -C disk growfs -o 2 -n 4 nvme1n1
+
+...
+
+Init chunk 2
+ metaset 2/1: sectbda 0x500001000, npage 80, objsize 128, nobj 2560, oid range [ 2000, 2a00)
+ metaset 2/2: sectbda 0x500051000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+ metaset 2/3: sectbda 0x500091000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+
+Init chunk 3
+ metaset 3/1: sectbda 0x780001000, npage 80, objsize 128, nobj 2560, oid range [ 3000, 3a00)
+ metaset 3/2: sectbda 0x780051000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+ metaset 3/3: sectbda 0x780091000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+
+pfs growfs succeeds!
+
如果看到上述输出,说明文件系统层面的扩容已经完成。
最后,在数据库实例层,扩容需要做的工作是执行 SQL 函数来通知每个实例上已经挂载到共享存储的 PFSD(PFS Daemon)守护进程,告知共享存储上的新空间已经可以被使用了。需要注意的是,数据库实例集群中的 所有 PFSD 都需要被通知到,并且需要 先通知所有 RO 节点上的 PFSD,最后通知 RW 节点上的 PFSD。这意味着我们需要在 每一个 PolarDB for PostgreSQL 节点上执行一次通知 PFSD 的 SQL 函数,并且 RO 节点在先,RW 节点在后。
数据库实例层通知 PFSD 的扩容函数实现在 PolarDB for PostgreSQL 的 polar_vfs
插件中,所以首先需要在 RW 节点 上加载 polar_vfs
插件。在加载插件的过程中,会在 RW 节点和所有 RO 节点上注册好 polar_vfs_disk_expansion
这个 SQL 函数。
CREATE EXTENSION IF NOT EXISTS polar_vfs;
+
接下来,依次 在所有的 RO 节点上,再到 RW 节点上 分别 执行这个 SQL 函数。其中函数的参数名为块设备名:
SELECT polar_vfs_disk_expansion('nvme1n1');
+
执行完毕后,数据库实例层面的扩容也就完成了。此时,新的存储空间已经能够被数据库使用了。
`,26);function D(p,N){const l=e("Badge"),c=e("ArticleInfo"),o=e("router-link"),r=e("RouteLink");return d(),i("div",null,[s("h1",_,[s("a",S,[s("span",null,[n("共享存储在线扩容 "),s("a",f,[a(l,{type:"tip",text:"视频",vertical:"top"})])])])]),a(c,{frontmatter:p.$frontmatter},null,8,["frontmatter"]),v,s("nav",P,[s("ul",null,[s("li",null,[a(o,{to:"#块存储层扩容"},{default:t(()=>[n("块存储层扩容")]),_:1})]),s("li",null,[a(o,{to:"#文件系统层扩容"},{default:t(()=>[n("文件系统层扩容")]),_:1})]),s("li",null,[a(o,{to:"#数据库实例层扩容"},{default:t(()=>[n("数据库实例层扩容")]),_:1})])])]),E,w,s("p",null,[n("首先需要进行的是块存储层面上的扩容。不管使用哪种类型的共享存储,存储层面扩容最终需要达成的目的是:在能够访问共享存储的主机上运行 "),x,n(" 命令,显示存储块设备的物理空间变大。由于不同类型的共享存储有不同的扩容方式,本文以 "),a(r,{to:"/zh/deploying/storage-aliyun-essd.html"},{default:t(()=>[n("阿里云 ECS + ESSD 云盘共享存储")]),_:1}),n(" 为例演示如何进行存储层面的扩容。")]),B])}const G=u(h,[["render",D],["__file","grow-storage.html.vue"]]),T=JSON.parse('{"path":"/zh/operation/grow-storage.html","title":"共享存储在线扩容","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2022/10/12","minute":15},"headers":[{"level":2,"title":"块存储层扩容","slug":"块存储层扩容","link":"#块存储层扩容","children":[]},{"level":2,"title":"文件系统层扩容","slug":"文件系统层扩容","link":"#文件系统层扩容","children":[]},{"level":2,"title":"数据库实例层扩容","slug":"数据库实例层扩容","link":"#数据库实例层扩容","children":[]}],"git":{"updatedTime":1672970315000},"filePathRelative":"zh/operation/grow-storage.md"}');export{G as comp,T as data}; diff --git a/assets/htap-1-background-spowOHJU.png b/assets/htap-1-background-spowOHJU.png new file mode 100644 index 00000000000..34e7b04bb1c Binary files /dev/null and b/assets/htap-1-background-spowOHJU.png differ diff --git a/assets/htap-2-arch-9jMAEiAc.png b/assets/htap-2-arch-9jMAEiAc.png new file mode 100644 index 00000000000..423e2413530 Binary files /dev/null and b/assets/htap-2-arch-9jMAEiAc.png differ diff --git a/assets/htap-3-mpp-jrU6LCSN.png b/assets/htap-3-mpp-jrU6LCSN.png new file mode 100644 index 00000000000..5f3fafb28c8 Binary files /dev/null and b/assets/htap-3-mpp-jrU6LCSN.png differ diff --git a/assets/htap-4-1-consistency-CN3AgZlI.png b/assets/htap-4-1-consistency-CN3AgZlI.png new file mode 100644 index 00000000000..4913c48ffbc Binary files /dev/null and b/assets/htap-4-1-consistency-CN3AgZlI.png differ diff --git a/assets/htap-4-2-serverless-ChOC1z2u.png b/assets/htap-4-2-serverless-ChOC1z2u.png new file mode 100644 index 00000000000..400c46d654c Binary files /dev/null and b/assets/htap-4-2-serverless-ChOC1z2u.png differ diff --git a/assets/htap-4-3-serverlessmap-CXx4-kAG.png b/assets/htap-4-3-serverlessmap-CXx4-kAG.png new file mode 100644 index 00000000000..1b4c07c4d1f Binary files /dev/null and b/assets/htap-4-3-serverlessmap-CXx4-kAG.png differ diff --git a/assets/htap-5-skew-CGu8XLkV.png b/assets/htap-5-skew-CGu8XLkV.png new file mode 100644 index 00000000000..3ab903d0fe4 Binary files /dev/null and b/assets/htap-5-skew-CGu8XLkV.png differ diff --git a/assets/htap-6-btbuild-D5VrHGoS.png b/assets/htap-6-btbuild-D5VrHGoS.png new file mode 100644 index 00000000000..21148fae067 Binary files /dev/null and b/assets/htap-6-btbuild-D5VrHGoS.png differ diff --git a/assets/htap-7-1-acc--fNK91m9.png b/assets/htap-7-1-acc--fNK91m9.png new file mode 100644 index 00000000000..3ef22381d10 Binary files /dev/null and b/assets/htap-7-1-acc--fNK91m9.png differ diff --git a/assets/htap-7-2-cpu-Bx0hJl5u.png b/assets/htap-7-2-cpu-Bx0hJl5u.png new file mode 100644 index 00000000000..dbedfc1980b Binary files /dev/null and b/assets/htap-7-2-cpu-Bx0hJl5u.png differ diff --git a/assets/htap-7-3-dop-BovrSV4Q.png 
b/assets/htap-7-3-dop-BovrSV4Q.png new file mode 100644 index 00000000000..52e1e37e5c2 Binary files /dev/null and b/assets/htap-7-3-dop-BovrSV4Q.png differ diff --git a/assets/htap-8-1-tpch-mpp-BonkD7SS.png b/assets/htap-8-1-tpch-mpp-BonkD7SS.png new file mode 100644 index 00000000000..334a6b89a86 Binary files /dev/null and b/assets/htap-8-1-tpch-mpp-BonkD7SS.png differ diff --git a/assets/htap-8-2-tpch-mpp-each-DhosjIGV.png b/assets/htap-8-2-tpch-mpp-each-DhosjIGV.png new file mode 100644 index 00000000000..f4695f57f75 Binary files /dev/null and b/assets/htap-8-2-tpch-mpp-each-DhosjIGV.png differ diff --git a/assets/htap-adaptive-scan-DNIJP8n3.png b/assets/htap-adaptive-scan-DNIJP8n3.png new file mode 100644 index 00000000000..a11d09a9e46 Binary files /dev/null and b/assets/htap-adaptive-scan-DNIJP8n3.png differ diff --git a/assets/htap-multi-level-partition-1-B3MUuoZn.png b/assets/htap-multi-level-partition-1-B3MUuoZn.png new file mode 100644 index 00000000000..014d96e63d8 Binary files /dev/null and b/assets/htap-multi-level-partition-1-B3MUuoZn.png differ diff --git a/assets/htap-non-adaptive-scan-M-kqk4bv.png b/assets/htap-non-adaptive-scan-M-kqk4bv.png new file mode 100644 index 00000000000..a780e53586b Binary files /dev/null and b/assets/htap-non-adaptive-scan-M-kqk4bv.png differ diff --git a/assets/index-Ds2TtRM5.js b/assets/index-Ds2TtRM5.js new file mode 100644 index 00000000000..820819345fa --- /dev/null +++ b/assets/index-Ds2TtRM5.js @@ -0,0 +1,17 @@ +/*! @docsearch/js 3.6.0 | MIT License | © Algolia, Inc. and contributors | https://docsearch.algolia.com */function lr(t,e){var r=Object.keys(t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(t);e&&(n=n.filter(function(o){return Object.getOwnPropertyDescriptor(t,o).enumerable})),r.push.apply(r,n)}return r}function I(t){for(var e=1;eAlibaba Cloud continuously releases updates to PolarDB PostgreSQL (hereafter simplified as PolarDB) to improve user experience. At present, Alibaba Cloud plans the following versions for PolarDB:
Version 1.0 supports shared storage and compute-storage separation. This version provides the minimum set of features such as Polar virtual file system (PolarVFS), flushing and buffer management, LogIndex, and SyncDDL.
In addition to improvements to compute-storage separation, version 2.0 provides a significantly improved optimizer.
The availability of PolarDB with compute-storage separation is significantly improved.
Version 4.0 can meet your growing business requirements in hybrid transaction/analytical processing (HTAP) scenarios. Version 4.0 is based on the shared storage-based massively parallel processing (MPP) architecture, which allows PolarDB to fully utilize the CPU, memory, and I/O resources of multiple read-only nodes.
Test results show that the performance of a PolarDB cluster linearly increases as you increase the number of cores from 1 to 256.
In earlier versions, each PolarDB cluster consisted of one primary node that processes both read and write requests, and one or more read-only nodes that process only read requests. You can increase the read capability of a PolarDB cluster by creating more read-only nodes. However, you cannot increase the write capability because each PolarDB cluster contains only one primary node.
Version 5.0 uses the shared-nothing architecture together with the shared-everything architecture. This allows multiple compute nodes to process write requests.
',17),t=[n];function i(l,d){return r(),a("div",null,t)}const c=e(s,[["render",i],["__file","index.html.vue"]]),p=JSON.parse('{"path":"/roadmap/","title":"Roadmap","lang":"en-US","frontmatter":{},"headers":[{"level":2,"title":"Version 1.0","slug":"version-1-0","link":"#version-1-0","children":[]},{"level":2,"title":"Version 2.0","slug":"version-2-0","link":"#version-2-0","children":[]},{"level":2,"title":"Version 3.0","slug":"version-3-0","link":"#version-3-0","children":[]},{"level":2,"title":"Version 4.0","slug":"version-4-0","link":"#version-4-0","children":[]},{"level":2,"title":"Version 5.0","slug":"version-5-0","link":"#version-5-0","children":[]}],"git":{"updatedTime":1642525053000},"filePathRelative":"roadmap/README.md"}');export{c as comp,p as data}; diff --git a/assets/index.html-Clc4ZHCG.js b/assets/index.html-Clc4ZHCG.js new file mode 100644 index 00000000000..8567cc736df --- /dev/null +++ b/assets/index.html-Clc4ZHCG.js @@ -0,0 +1 @@ +import{_ as e,o as l,c as a,e as r}from"./app-CWFDhr_k.js";const o={},s=r('PolarDB PostgreSQL 将持续发布对用户有价值的功能。当前我们计划了 5 个阶段:
1.0 版本基于 Shared-Storage 的存储计算分离架构,发布必备的最小功能集合,例如:PolarVFS、刷脏和 Buffer 管理、LogIndex、SyncDDL 等。
除了在存储计算分离架构上改动之外,2.0 版本将在优化器上进行深度的优化,例如:
3.0 版本主要在存储计算分离后在可用性上进行重大优化,例如:
为了满足日益增多的 HTAP 混合负载需求,4.0 版本将发布基于 Shared-Storage 架构的分布式并行执行引擎,充分发挥多个只读节点的 CPU/MEM/IO 资源。
经测试,在计算集群逐步扩展到 256 核时,性能仍然能够线性提升。
在基于存储计算分离的一写多读架构中,读能力能够弹性地扩展,但是写入能力仍然只能在单个节点上执行。
5.0 版本将发布 Shared-Nothing On Shared-Everything 架构,结合 PolarDB 的分布式版本和 PolarDB 集中式版本的架构优势,使得多个节点都能够写入。
',17),t=[s];function p(n,d){return l(),a("div",null,t)}const h=e(o,[["render",p],["__file","index.html.vue"]]),g=JSON.parse('{"path":"/zh/roadmap/","title":"版本规划","lang":"zh-CN","frontmatter":{},"headers":[{"level":2,"title":"PolarDB PostgreSQL 1.0 版本","slug":"polardb-postgresql-1-0-版本","link":"#polardb-postgresql-1-0-版本","children":[]},{"level":2,"title":"PolarDB PostgreSQL 2.0 版本","slug":"polardb-postgresql-2-0-版本","link":"#polardb-postgresql-2-0-版本","children":[]},{"level":2,"title":"PolarDB PostgreSQL 3.0 版本","slug":"polardb-postgresql-3-0-版本","link":"#polardb-postgresql-3-0-版本","children":[]},{"level":2,"title":"PolarDB PostgreSQL 4.0 版本","slug":"polardb-postgresql-4-0-版本","link":"#polardb-postgresql-4-0-版本","children":[]},{"level":2,"title":"PolarDB PostgreSQL 5.0 版本","slug":"polardb-postgresql-5-0-版本","link":"#polardb-postgresql-5-0-版本","children":[]}],"git":{"updatedTime":1675309212000},"filePathRelative":"zh/roadmap/README.md"}');export{h as comp,g as data}; diff --git a/assets/index.html-CuF3tJ67.js b/assets/index.html-CuF3tJ67.js new file mode 100644 index 00000000000..4e54d415521 --- /dev/null +++ b/assets/index.html-CuF3tJ67.js @@ -0,0 +1 @@ +import{_ as n,r,o as s,c,a,d as t,w as o,b as e}from"./app-CWFDhr_k.js";const p={},v=a("h1",{id:"高可用",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#高可用"},[a("span",null,"高可用")])],-1);function u(d,h){const l=r("RouteLink"),i=r("Badge");return s(),c("div",null,[v,a("ul",null,[a("li",null,[t(l,{to:"/zh/features/v11/availability/avail-online-promote.html"},{default:o(()=>[e("只读节点 Online Promote")]),_:1}),e(),t(i,{type:"tip",text:"V11 / v1.1.1-",vertical:"top"})]),a("li",null,[t(l,{to:"/zh/features/v11/availability/avail-parallel-replay.html"},{default:o(()=>[e("WAL 日志并行回放")]),_:1}),e(),t(i,{type:"tip",text:"V11 / v1.1.17-",vertical:"top"})]),a("li",null,[t(l,{to:"/zh/features/v11/availability/datamax.html"},{default:o(()=>[e("DataMax 日志节点")]),_:1}),e(),t(i,{type:"tip",text:"V11 / v1.1.6-",vertical:"top"})]),a("li",null,[t(l,{to:"/zh/features/v11/availability/resource-manager.html"},{default:o(()=>[e("Resource Manager")]),_:1}),e(),t(i,{type:"tip",text:"V11 / v1.1.1-",vertical:"top"})]),a("li",null,[t(l,{to:"/zh/features/v11/availability/flashback-table.html"},{default:o(()=>[e("闪回表和闪回日志")]),_:1}),e(),t(i,{type:"tip",text:"V11 / v1.1.22-",vertical:"top"})])])])}const f=n(p,[["render",u],["__file","index.html.vue"]]),m=JSON.parse('{"path":"/zh/features/v11/availability/","title":"高可用","lang":"zh-CN","frontmatter":{},"headers":[],"git":{"updatedTime":1697908247000},"filePathRelative":"zh/features/v11/availability/README.md"}');export{f as comp,m as data}; diff --git a/assets/index.html-D4TsALr0.js b/assets/index.html-D4TsALr0.js new file mode 100644 index 00000000000..0642d8aa12a --- /dev/null +++ b/assets/index.html-D4TsALr0.js @@ -0,0 +1,11 @@ +import{_ as n,r as l,o as s,c as i,a as e,b as a,d as o,e as r}from"./app-CWFDhr_k.js";const c={},p=e("hr",null,null,-1),h=e("h3",{id:"通过-docker-快速使用",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#通过-docker-快速使用"},[e("span",null,"通过 Docker 快速使用")])],-1),d={href:"https://hub.docker.com/r/polardb/polardb_pg_local_instance/tags",target:"_blank",rel:"noopener noreferrer"},u=r(`# 拉取 PolarDB-PG 镜像
+docker pull polardb/polardb_pg_local_instance
+# 创建并运行容器
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# 测试可用性
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
# pull the instance image from DockerHub
+docker pull polardb/polardb_pg_local_instance
+# create and run the container
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# check
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
PolarDB uses a shared storage architecture. Each PolarDB cluster consists of a primary node and multiple read-only nodes. The primary node and the read-only nodes share the same data in the shared storage. The primary node can read data from the shared storage and write data to it, whereas read-only nodes can only read data from the shared storage by replaying WAL logs. Data in the memory is synchronized from the primary node to read-only nodes. This ensures that data is consistent between the primary node and read-only nodes. Read-only nodes can also provide services to implement read/write splitting and load balancing. If the primary node becomes unavailable, a read-only node can be promoted to the primary node. This ensures the high availability of the cluster. The following figure shows the architecture of PolarDB.
In the shared-nothing architecture, read-only nodes have independent memory and storage. These nodes need only to receive write-ahead logging (WAL) logs from the primary node and replay the WAL logs. If the data that needs to be replayed is not in buffer pools, the data must be read from storage files and written to buffer pools for replay. This can cause cache misses. More data is evicted from buffer pools because the data is replayed in a continuous manner. The following figure shows more details.
Multiple transactions on the primary node can be executed in parallel. Read-only nodes must replay WAL logs in the sequence in which the WAL logs are generated. As a result, read-only nodes replay WAL logs at a low speed and the latency between the primary node and read-only nodes increases.
If a PolarDB cluster uses a shared storage architecture and consists of one primary node and multiple read-only nodes, the read-only nodes can obtain WAL logs that need to be replayed from the shared storage. If data pages on the shared storage are the most recent pages, read-only nodes can read the data pages without replaying the pages. PolarDB provides LogIndex that can be used on read-only nodes to replay WAL logs at a higher speed.
LogIndex stores the mapping between a data page and all the log sequence numbers (LSNs) of updates on the page. LogIndex can be used to rapidly obtain all LSNs of updates on a data page. This way, the WAL logs generated for the data page can be replayed when the data page is read. The following figure shows the architecture that is used to synchronize data from the primary node to read-only nodes.
Compared with the shared-nothing architecture, the workflow of the primary node and read-only nodes in the shared storage architecture has the following differences:
PolarDB reduces the latency between the primary node and read-only nodes by replicating only WAL log metadata. PolarDB uses LogIndex to delay the replay of WAL logs and replay WAL logs in parallel. This can increase the speed at which read-only nodes replay WAL logs.
WAL logs are also called XLogRecord. Each XLogRecord consists of two parts, as shown in the following figure.
In shared storage mode, complete WAL logs do not need to be replicated from the primary node to read-only nodes. Only WAL log metadata is replicated to the read-only nodes. WAL log metadata consists of the general header portion, header part, and main data, as shown in the preceding figure. Read-only nodes can read complete WAL log content from the shared storage based on WAL log metadata. The following figure shows the process of replicating WAL log metadata from the primary node to read-only nodes.
In streaming replication mode, payloads are not replicated from the primary node to read-only nodes. This reduces the amount of data transmitted on the network. The WalSender process on the primary node obtains the metadata of WAL logs from the metadata queue stored in the memory. After the WalReceiver process on the read-only nodes receives the metadata, the process stores the metadata in the metadata queue of WAL logs in the memory. The disk I/O in streaming replication mode is lower than that in primary/secondary mode. This increases the speed at which logs are transmitted and reduces the latency between the primary node and read-only nodes.
LogIndex is a HashTable structure. The key of this structure is PageTag. A PageTag can identify a specific data page . In this case, the values of this structure are all LSNs generated for updates on the page. The following figure shows the memory data structure of LogIndex. A LogIndex Memtable contains Memtable ID values, maximum and minimum LSNs, and the following arrays:
LogIndex Memtables stored in the memory are divided into two categories: Active LogIndex Memtables and Inactive LogIndex Memtables. The LogIndex records generated based on WAL log metadata are written to an Active LogIndex Memtable. After the Active LogIndex Memtable is full, the table is converted to an Inactive LogIndex Memtable and the system generates another Active LogIndex Memtable. The data in the Inactive LogIndex Memtable can be flushed to the disk. Then, the Inactive LogIndex Memtable can be converted to an Active LogIndex Memtable again. The following figure shows more details.
The disk stores a large number of LogIndex Tables. The structure of a LogIndex Table is similar to the structure of a LogIndex Memtable. A LogIndex Table can contain a maximum of 64 LogIndex Memtables. When data in Inactive LogIndex Memtables is flushed to the disk, Bloom filters are generated for the Memtables. The size of a single Bloom filter is 4,096 bytes. A Bloom filter records the information about an Inactive LogIndex Memtable, such as the mapped values that the bit array of the Bloom filter stores for all pages in the Inactive LogIndex Memtable, the minimum LSN, and the maximum LSN. The following figure shows more details. A Bloom filter can be used to determine whether a page exists in the LogIndex Table that corresponds to the filter. This way, LogIndex Tables in which the page does not exist do not need to be scanned. This accelerates data retrieval.
After the data in an Inactive LogIndex Memtable is flushed to the disk, the LogIndex metadata file is updated. This file is used to ensure the atomicity of I/O operations on the LogIndex Memtable file. The LogIndex metadata file stores the information about the smallest LogIndex Table and the largest LogIndex Memtable on the disk. Start LSN in this file records the maximum LSN among all LogIndex Memtables whose data is flushed to the disk. If data is written to the LogIndex Memtable when the Memtable is flushed, the system parses the WAL logs from Start LSN that are recorded in the LogIndex metadata file. Then, LogIndex records that are discarded during the data write are also regenerated to ensure the atomicity of I/O operations on the Memtable.
',35),A=a('For scenarios in which LogIndex Tables are used, the startup processes of read-only nodes generate LogIndex records based on the received WAL metadata and mark the pages that correspond to the WAL metadata and exist in buffer pools as outdated pages. This way, WAL logs for the next LSN can be replayed. The startup processes do not replay WAL logs. The backend processes that access the page and the background replay processes replay the logs. The following figure shows how WAL logs are replayed.
The XLOG Buffer is added to cache the read WAL logs. This reduces performance overhead when WAL logs are read from the disk for replay. WAL logs are read from the WAL segment file on the disk. After the XLOG Page Buffer is added, WAL logs are preferentially read from the XLOG Buffer. If WAL logs that you want to replay are not in the XLOG Buffer, the pages of the WAL logs are read from the disk, written to the buffer, and then copied to readBuf of XLogReaderState. If the WAL logs are in the buffer, the logs are copied to readBuf of XLogReaderState. This reduces the number of I/O operations that need to be performed to replay the WAL logs to increase the speed at which the WAL logs are replayed. The following figure shows more details.
The LogIndex mechanism differs from the shared-nothing architecture in terms of log replay. With the LogIndex mechanism, the startup process parses WAL metadata to generate LogIndex records while backend processes replay pages based on LogIndex records, and these operations run in parallel. Each backend process replays only the pages that it needs to access. An XLogRecord may modify multiple pages. For example, an index block split modifies Page_0 and Page_1, and the modification is an atomic operation: either both pages are modified or neither is. PolarDB provides a mini transaction lock mechanism to ensure that the memory data structures remain consistent when backend processes replay pages.
Without the mini transaction lock mechanism, the startup process parses WAL metadata and sequentially inserts the current LSN into the LSN list of each page, as shown in the following figure. Assume that the startup process has completed the update of the LSN list of Page_0 but has not yet completed the update of the LSN list of Page_1, and that Backend_0 accesses Page_0 while Backend_1 accesses Page_1. Backend_0 replays Page_0 based on the LSN list of Page_0, and Backend_1 replays Page_1 based on the LSN list of Page_1. Page_0 is replayed up to LSN_N+1 while Page_1 is replayed only up to LSN_N. As a result, the versions of the two pages in the buffer pool are inconsistent, which causes inconsistency between the memory data structures of Page_0 and Page_1.
With the mini transaction lock mechanism, the updates on the LSN lists of Page_0 and Page_1 are treated as one mini transaction. Before the startup process updates the LSN list of a page, it must obtain the mini transaction lock of that page. In the following figure, the startup process first obtains the mini transaction lock of Page_0; the order in which the locks are acquired is consistent with the order in which the pages are modified during replay. The mini transaction lock is released only after the LSN lists of both Page_0 and Page_1 are updated. If a backend process replays a page based on LogIndex records while the startup process still holds the page in a mini transaction, the backend process must obtain the mini transaction lock of the page before it replays the page. Assume again that the startup process has completed the update of the LSN list of Page_0 but not that of Page_1, and that Backend_0 accesses Page_0 while Backend_1 accesses Page_1. In this case, Backend_0 cannot replay Page_0 until the LSN lists are updated and the mini transaction lock of Page_0 is released, and by the time that lock is released the LSN list of Page_1 has also been updated. The memory data structures are therefore modified atomically.
PolarDB provides LogIndex based on the shared storage between the primary node and read-only nodes. LogIndex accelerates the speed at which memory data is synchronized from the primary node to read-only nodes and reduces the latency between the primary node and read-only nodes. This ensures the availability of read-only nodes and makes data between the primary node and read-only nodes consistent. This topic describes LogIndex and the LogIndex-based memory synchronization architecture of read-only nodes. LogIndex can be used to synchronize memory data from the primary node to read-only nodes. LogIndex can also be used to promote a read-only node as the primary node online. If the primary node becomes unavailable, the speed at which a read-only node is promoted to the primary node can be increased. This achieves the high availability of compute nodes. In addition, services can be restored in a short period of time.
',15);function k(W,v){const t=s("RouteLink");return n(),r("div",null,[S,d("p",null,[e("All modified data pages recorded in WAL logs before the consistent LSN are persisted to the shared storage based on the information described in "),i(t,{to:"/theory/buffer-management.html"},{default:h(()=>[e("Buffer Management")]),_:1}),e(". The primary node sends the write LSN and consistent LSN to each read-only node, and each read-only node sends the apply LSN and the min used LSN to the primary node. In this case, the WAL logs whose LSNs are smaller than the consistent LSN and the min used LSN can be cleared from LogIndex Tables. This way, the primary node can truncate LogIndex Tables that are no longer used in the storage. This enables more efficient log replay for read-only nodes and reduces the space occupied by LogIndex Tables.")]),A])}const N=o(_,[["render",k],["__file","logindex.html.vue"]]),B=JSON.parse('{"path":"/theory/logindex.html","title":"LogIndex","lang":"en-US","frontmatter":{},"headers":[{"level":2,"title":"Background Information","slug":"background-information","link":"#background-information","children":[]},{"level":2,"title":"Memory Synchronization Architecture for RO","slug":"memory-synchronization-architecture-for-ro","link":"#memory-synchronization-architecture-for-ro","children":[]},{"level":2,"title":"WAL Meta","slug":"wal-meta","link":"#wal-meta","children":[]},{"level":2,"title":"LogIndex","slug":"logindex-1","link":"#logindex-1","children":[{"level":3,"title":"Memory data structure","slug":"memory-data-structure","link":"#memory-data-structure","children":[]},{"level":3,"title":"Data Structure on Disk","slug":"data-structure-on-disk","link":"#data-structure-on-disk","children":[]}]},{"level":2,"title":"Log replay","slug":"log-replay","link":"#log-replay","children":[{"level":3,"title":"Delayed replay","slug":"delayed-replay","link":"#delayed-replay","children":[]},{"level":3,"title":"Mini Transaction","slug":"mini-transaction","link":"#mini-transaction","children":[]}]},{"level":2,"title":"Summary","slug":"summary","link":"#summary","children":[]}],"git":{"updatedTime":1712565495000},"filePathRelative":"theory/logindex.md"}');export{N as comp,B as data}; diff --git a/assets/logindex.html-DYDPauSD.js b/assets/logindex.html-DYDPauSD.js new file mode 100644 index 00000000000..f74280af1cd --- /dev/null +++ b/assets/logindex.html-DYDPauSD.js @@ -0,0 +1 @@ +import{_ as o,r as n,o as l,c as g,a as r,b as e,d as i,w as L,e as a}from"./app-CWFDhr_k.js";const d="/PolarDB-for-PostgreSQL/assets/49_LogIndex_1-D4VVINY3.png",s="/PolarDB-for-PostgreSQL/assets/50_LogIndex_2-Cq1DiwYr.png",p="/PolarDB-for-PostgreSQL/assets/51_LogIndex_3-CMaItyhv.png",c="/PolarDB-for-PostgreSQL/assets/52_LogIndex_4-D2wWgJ3h.png",m="/PolarDB-for-PostgreSQL/assets/53_LogIndex_5-BFAKWbmm.png",x="/PolarDB-for-PostgreSQL/assets/54_LogIndex_6-CgmB3aIV.png",I="/PolarDB-for-PostgreSQL/assets/55_LogIndex_7-UHFuCS_g.png",h="/PolarDB-for-PostgreSQL/assets/56_LogIndex_8-Sj7v9mQV.png",P="/PolarDB-for-PostgreSQL/assets/57_LogIndex_9-CwvVXy-7.png",_="/PolarDB-for-PostgreSQL/assets/58_LogIndex_10-BamMnLp4.png",S="/PolarDB-for-PostgreSQL/assets/59_LogIndex_11-ChiLiKyO.png",f="/PolarDB-for-PostgreSQL/assets/60_LogIndex_12-DQhb-t6m.png",R="/PolarDB-for-PostgreSQL/assets/61_LogIndex_13-BiF3IFPl.png",W="/PolarDB-for-PostgreSQL/assets/62_LogIndex_14-_Pc_aSZK.png",u={},b=a('PolarDB 采用了共享存储一写多读架构,读写节点 RW 和多个只读节点 RO 共享同一份存储,读写节点可以读写共享存储中的数据;只读节点仅能各自通过回放日志,从共享存储中读取数据,而不能写入,只读节点 RO 通过内存同步来维护数据的一致性。此外,只读节点可同时对外提供服务用于实现读写分离与负载均衡,在读写节点异常 
crash 时,可将只读节点提升为读写节点,保证集群的高可用。基本架构图如下所示:
传统 share nothing 的架构下,只读节点 RO 有自己的内存及存储,只需要接收 RW 节点的 WAL 日志进行回放即可。如下图所示,如果需要回放的数据页不在 Buffer Pool 中,需将其从存储文件中读至 Buffer Pool 中进行回放,从而带来 CacheMiss 的成本,且持续性的回放会带来较频繁的 Buffer Pool 淘汰问题。
此外,RW 节点多个事务之间可并行执行,RO 节点则需依照 WAL 日志的顺序依次进行串行回放,导致 RO 回放速度较慢,与 RW 节点的延迟逐步增大。
与传统 share nothing 架构不同,共享存储一写多读架构下 RO 节点可直接从共享存储上获取需要回放的 WAL 日志。若共享存储上的数据页是最新的,那么 RO 可直接读取数据页而不需要再进行回放操作。基于此,PolarDB 设计了 LogIndex 来加速 RO 节点的日志回放。
LogIndex 中保存了数据页与修改该数据页的所有 LSN 的映射关系,基于 LogIndex 可快速获取到修改某个数据页的所有 LSN,从而可将该数据页对应日志的回放操作延迟到真正访问该数据页的时刻进行。LogIndex 机制下 RO 内存同步的架构如下图所示。
RW / RO 的相关流程相较传统 share nothing 架构下有如下区别:
PolarDB 通过仅传输 WAL Meta 降低 RW 与 RO 之间的延迟,通过 LogIndex 实现 WAL 日志的延迟回放 + 并行回放以加速 RO 的回放速度,以下则对这两点进行详细介绍。
WAL 日志又称为 XLOG Record,如下图,每个 XLOG Record 由两部分组成:
共享存储模式下,读写节点 RW 与只读节点 RO 之间无需传输完整的 WAL 日志,仅传输 WAL Meta 数据,WAL Meta 即为上图中的 general header portion + header part + main data,RO 节点可基于 WAL Meta 从共享存储上读取完整的 WAL 日志内容。该机制下,RW 与 RO 之间传输 WAL Meta 的流程如下:
RW 与 RO 节点的流复制不传输具体的 payload 数据,减少了网络数据传输量;此外,RW 节点的 WalSender 进程从内存中的 WAL Meta queue 中获取 WAL Meta 信息,RO 节点的 WalReceiver 进程接收到 WAL Meta 后也同样将其保存至内存的 WAL Meta queue 中,相较于传统主备模式减少了日志发送及接收的磁盘 I/O 过程,从而提升传输速度,降低 RW 与 RO 之间的延迟。
LogIndex 实质为一个 HashTable 结构,其 key 为 PageTag,可标识一个具体数据页,其 value 即为修改该 page 的所有 LSN。LogIndex 的内存数据结构如下图所示,除了 Memtable ID、Memtable 保存的最大 LSN、最小 LSN 等信息,LogIndex Memtable 中还包含了三个数组,分别为:
内存中保存的 LogIndex Memtable 又可分为 Active LogIndex Memtable 和 Inactive LogIndex Memtable。如下图所示,基于 WAL Meta 数据生成的 LogIndex 记录会写入 Active LogIndex Memtable,Active LogIndex Memtable 写满后会转为 Inactive LogIndex Memtable,并重新申请一个新的 Active LogIndex Memtable,Inactive LogIndex Memtable 可直接落盘,落盘后的 Inactive LogIndex Memtable 可再次转为 Active LogIndex Memtable。
磁盘上保存了若干个 LogIndex Table,LogIndex Table 与 LogIndex Memtable 结构类似,一个 LogIndex Table 可包含 64 个 LogIndex Memtable,Inactive LogIndex Memtable 落盘的同时会生成其对应的 Bloom Filter。如下图所示,单个 Bloom Filter 的大小为 4096 字节,Bloom Filter 记录了该 Inactive LogIndex Memtable 的相关信息,如保存的最小 LSN、最大 LSN、该 Memtable 中所有 Page 在 bloom filter bit array 中的映射值等。通过 Bloom Filter 可快速判断某个 Page 是否存在于对应的 LogIndex Table 中,从而可忽略无需扫描的 LogIndex Table 以加速检索。
当 Inactive LogIndex MemTable 成功落盘后,LogIndex Meta 文件也被更新,该文件可保证 LogIndex Memtable 文件 I/O 操作的原子性。如下,LogIndex Meta 文件保存了当前磁盘上最小 LogIndex Table 及最大 LogIndex Memtable 的相关信息,其 Start LSN 记录了当前已落盘的所有 LogIndex MemTable 中最大的 LSN。若 Flush LogIndex MemTable 时发生部分写,系统会从 LogIndex Meta 记录的 Start LSN 开始解析日志,如此部分写舍弃的 LogIndex 记录也会重新生成,保证了其 I/O 操作的原子性。
',35),M=a('LogIndex 机制下,RO 节点的 Startup 进程基于接收到的 WAL Meta 生成 LogIndex,同时将该 WAL Meta 对应的已存在于 Buffer Pool 中的页面标记为 Outdate 后即可推进回放位点,Startup 进程本身并不对日志进行回放,日志的回放操作交由背景回放进程及真正访问该页面的 Backend 进程进行,回放过程如下图所示,其中:
为降低回放时读取磁盘 WAL 日志带来的性能损耗,同时添加了 XLOG Buffer 用于缓存读取的 WAL 日志。如下图所示,原始方式下直接从磁盘上的 WAL Segment File 中读取 WAL 日志,添加 XLog Page Buffer 后,会先从 XLog Buffer 中读取,若所需 WAL 日志不在 XLog Buffer 中,则从磁盘上读取对应的 WAL Page 到 Buffer 中,然后再将其拷贝至 XLogReaderState 的 readBuf 中;若已在 Buffer 中,则直接将其拷贝至 XLogReaderState 的 readBuf 中,以此减少回放 WAL 日志时的 I/O 次数,从而进一步加速日志回放的速度。
与传统 share nothing 架构下的日志回放不同,LogIndex 机制下,Startup 进程解析 WAL Meta 生成 LogIndex 与 Backend 进程基于 LogIndex 对 Page 进行回放的操作是并行的,且各个 Backend 进程仅对其需要访问的 Page 进行回放。由于一条 XLog Record 可能会对多个 Page 进行修改,以索引分裂为例,其涉及对 Page_0、Page_1 的修改,且其对 Page_0 及 Page_1 的修改为一个原子操作,即修改要么全部可见,要么全部不可见。针对此,设计了 mini transaction 锁机制以保证 Backend 进程回放过程中内存数据结构的一致性。
如下图所示,无 mini transaction lock 时,Startup 进程对 WAL Meta 进行解析并按序将当前 LSN 插入到各个 Page 对应的 LSN List 中。若 Startup 进程完成对 Page_0 LSN List 的更新,但尚未完成对 Page_1 LSN List 的更新时,Backend_0 和 Backend_1 分别对 Page_0 及 Page_1 进行访问,Backend_0 和 Backend_1 分别基于 Page 对应的 LSN List 进行回放操作,Page_0 被回放至 LSN_N + 1 处,Page_1 被回放至 LSN_N 处,可见此时 Buffer Pool 中两个 Page 对应的版本并不一致,从而导致相应内存数据结构的不一致。
Mini transaction 锁机制下,对 Page_0 及 Page_1 LSN List 的更新被视为一个 mini transaction。Startup 进程更新 Page 对应的 LSN List 时,需先获取该 Page 的 mini transaction lock,如下先获取 Page_0 对应的 mtr lock,获取 Page mtr lock 的顺序与回放时的顺序保持一致,更新完 Page_0 及 Page_1 LSN List 后再释放 Page_0 对应的 mtr lock。Backend 进程基于 LogIndex 对特定 Page 进行回放时,若该 Page 对应在 Startup 进程仍处于一个 mini transaction 中,则同样需先获取该 Page 对应的 mtr lock 后再进行回放操作。故若 Startup 进程完成对 Page_0 LSN List 的更新,但尚未完成对 Page_1 LSN List 的更新时,Backend_0 和 Backend_1 分别对 Page_0 及 Page_1 进行访问,此时 Backend_0 需等待 LSN List 更新完毕并释放 Page_0 mtr lock 之后才可进行回放操作,而释放 Page_0 mtr lock 时 Page_1 的 LSN List 已完成更新,从而实现了内存数据结构的原子修改。
PolarDB 基于 RW 节点与 RO 节点共享存储这一特性,设计了 LogIndex 机制来加速 RO 节点的内存同步,降低 RO 节点与 RW 节点之间的延迟,确保了 RO 节点的一致性与可用性。本文对 LogIndex 的设计背景、基于 LogIndex 的 RO 内存同步架构及具体细节进行了分析。除了实现 RO 节点的内存同步,基于 LogIndex 机制还可实现 RO 节点的 Online Promote,可加速 RW 节点异常崩溃时,RO 节点提升为 RW 节点的速度,从而构建计算节点的高可用,实现服务的快速恢复。
',15);function B(A,O){const t=n("RouteLink");return l(),g("div",null,[b,r("p",null,[e("由 "),i(t,{to:"/zh/theory/buffer-management.html"},{default:L(()=>[e("Buffer 管理")]),_:1}),e(" 可知,一致性位点之前的所有 WAL 日志修改的数据页均已持久化到共享存储中,RO 节点通过流复制向 RW 节点反馈当前回放的位点和当前使用的最小 WAL 日志位点,故 LogIndex Table 中小于两个位点的 LSN 均可清除。RW 据此 Truncate 掉存储上不再使用的 LogIndex Table,在加速 RO 回放效率的同时还可减少 LogIndex Table 占用的空间。")]),M])}const k=o(u,[["render",B],["__file","logindex.html.vue"]]),T=JSON.parse('{"path":"/zh/theory/logindex.html","title":"LogIndex","lang":"zh-CN","frontmatter":{},"headers":[{"level":2,"title":"背景介绍","slug":"背景介绍","link":"#背景介绍","children":[]},{"level":2,"title":"RO 内存同步架构","slug":"ro-内存同步架构","link":"#ro-内存同步架构","children":[]},{"level":2,"title":"WAL Meta","slug":"wal-meta","link":"#wal-meta","children":[]},{"level":2,"title":"LogIndex","slug":"logindex-1","link":"#logindex-1","children":[{"level":3,"title":"内存数据结构","slug":"内存数据结构","link":"#内存数据结构","children":[]},{"level":3,"title":"磁盘数据结构","slug":"磁盘数据结构","link":"#磁盘数据结构","children":[]}]},{"level":2,"title":"日志回放","slug":"日志回放","link":"#日志回放","children":[{"level":3,"title":"延迟回放","slug":"延迟回放","link":"#延迟回放","children":[]},{"level":3,"title":"Mini Transaction","slug":"mini-transaction","link":"#mini-transaction","children":[]}]},{"level":2,"title":"总结","slug":"总结","link":"#总结","children":[]}],"git":{"updatedTime":1712565495000},"filePathRelative":"zh/theory/logindex.md"}');export{k as comp,T as data}; diff --git a/assets/online_promote_logindex_bgw-D8AmDDQh.png b/assets/online_promote_logindex_bgw-D8AmDDQh.png new file mode 100644 index 00000000000..beef14da81b Binary files /dev/null and b/assets/online_promote_logindex_bgw-D8AmDDQh.png differ diff --git a/assets/online_promote_postmaster-C4ViJDEx.png b/assets/online_promote_postmaster-C4ViJDEx.png new file mode 100644 index 00000000000..e5c946e554c Binary files /dev/null and b/assets/online_promote_postmaster-C4ViJDEx.png differ diff --git a/assets/online_promote_startup-DirLTg8T.png b/assets/online_promote_startup-DirLTg8T.png new file mode 100644 index 00000000000..2c8a78dd5d3 Binary files /dev/null and b/assets/online_promote_startup-DirLTg8T.png differ diff --git a/assets/parallel-dml.html-T1oFsYgB.js b/assets/parallel-dml.html-T1oFsYgB.js new file mode 100644 index 00000000000..f8a88bcd3c3 --- /dev/null +++ b/assets/parallel-dml.html-T1oFsYgB.js @@ -0,0 +1,25 @@ +import{_ as r,r as o,o as k,c as d,d as n,a as s,w as e,e as u,b as p}from"./app-CWFDhr_k.js";const i="/PolarDB-for-PostgreSQL/assets/parallel_data_flow-CfEUi17v.png",m={},w=s("h1",{id:"并行-insert",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#并行-insert"},[s("span",null,"并行 INSERT")])],-1),b={class:"table-of-contents"},h=u(`PolarDB-PG 支持 ePQ 弹性跨机并行查询,能够利用集群中多个计算节点提升只读查询的性能。此外,ePQ 也支持在读写节点上通过多进程并行写入,实现对 INSERT
语句的加速。
ePQ 的并行 INSERT
功能可以用于加速 INSERT INTO ... SELECT ...
这种读写兼备的 SQL。对于 SQL 中的 SELECT
部分,ePQ 将启动多个进程并行执行查询;对于 SQL 中的 INSERT
部分,ePQ 将在读写节点上启动多个进程并行执行写入。执行写入的进程与执行查询的进程之间通过 Motion 算子 进行数据传递。
能够支持并行 INSERT
的表类型有:
并行 INSERT
支持动态调整写入并行度(写入进程数量),在查询不成为瓶颈的条件下性能最高能提升三倍。
创建两张表 t1
和 t2
,向 t1
中插入一些数据:
CREATE TABLE t1 (id INT);
+CREATE TABLE t2 (id INT);
+INSERT INTO t1 SELECT generate_series(1,100000);
+
打开 ePQ 及并行 INSERT
的开关:
SET polar_enable_px TO ON;
+SET polar_px_enable_insert_select TO ON;
+
通过 INSERT
语句将 t1
表中的所有数据插入到 t2
表中。查看并行 INSERT
的执行计划:
=> EXPLAIN INSERT INTO t2 SELECT * FROM t1;
+ QUERY PLAN
+-----------------------------------------------------------------------------------------
+ Insert on t2 (cost=0.00..952.87 rows=33334 width=4)
+ -> Result (cost=0.00..0.00 rows=0 width=0)
+ -> PX Hash 6:3 (slice1; segments: 6) (cost=0.00..432.04 rows=100000 width=8)
+ -> Partial Seq Scan on t1 (cost=0.00..431.37 rows=16667 width=4)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
其中的 PX Hash 6:3
表示 6 个并行查询 t1
的进程通过 Motion 算子将数据传递给 3 个并行写入 t2
的进程。
通过参数 polar_px_insert_dop_num
可以动态调整写入并行度,比如:
=> SET polar_px_insert_dop_num TO 12;
+=> EXPLAIN INSERT INTO t2 SELECT * FROM t1;
+ QUERY PLAN
+------------------------------------------------------------------------------------------
+ Insert on t2 (cost=0.00..952.87 rows=8334 width=4)
+ -> Result (cost=0.00..0.00 rows=0 width=0)
+ -> PX Hash 6:12 (slice1; segments: 6) (cost=0.00..432.04 rows=100000 width=8)
+ -> Partial Seq Scan on t1 (cost=0.00..431.37 rows=16667 width=4)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
执行计划中的 PX Hash 6:12
显示,并行查询 t1
的进程数量不变,并行写入 t2
的进程数量变更为 12
。
调整 polar_px_dop_per_node
和 polar_px_insert_dop_num
可以分别修改 INSERT INTO ... SELECT ...
中查询和写入的并行度。
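例如(示意,沿用上文两个只读节点的环境,参数取值仅作演示),可以同时调整两者:

```sql
SET polar_px_dop_per_node TO 2;    -- 每个只读节点 2 个查询进程,共 2 × 2 = 4 个
SET polar_px_insert_dop_num TO 8;  -- 8 个写入进程
EXPLAIN INSERT INTO t2 SELECT * FROM t1;
-- 预期执行计划中会出现类似 PX Hash 4:8 的算子(查询并行度 4,写入并行度 8)
```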
ePQ 对并行 INSERT
的处理如下:
并行查询和并行写入是以流水线的形式同时进行的。上述执行过程如图所示:
',26);function _(t,y){const l=o("Badge"),c=o("ArticleInfo"),a=o("router-link");return k(),d("div",null,[w,n(l,{type:"tip",text:"V11 / v1.1.17-",vertical:"top"}),n(c,{frontmatter:t.$frontmatter},null,8,["frontmatter"]),s("nav",b,[s("ul",null,[s("li",null,[n(a,{to:"#背景介绍"},{default:e(()=>[p("背景介绍")]),_:1})]),s("li",null,[n(a,{to:"#功能介绍"},{default:e(()=>[p("功能介绍")]),_:1})]),s("li",null,[n(a,{to:"#使用方法"},{default:e(()=>[p("使用方法")]),_:1})]),s("li",null,[n(a,{to:"#使用说明"},{default:e(()=>[p("使用说明")]),_:1})]),s("li",null,[n(a,{to:"#原理介绍"},{default:e(()=>[p("原理介绍")]),_:1})])])]),h])}const g=r(m,[["render",_],["__file","parallel-dml.html.vue"]]),T=JSON.parse('{"path":"/zh/features/v11/epq/parallel-dml.html","title":"并行 INSERT","lang":"zh-CN","frontmatter":{"author":"渊云","date":"2022/09/27","minute":30},"headers":[{"level":2,"title":"背景介绍","slug":"背景介绍","link":"#背景介绍","children":[]},{"level":2,"title":"功能介绍","slug":"功能介绍","link":"#功能介绍","children":[]},{"level":2,"title":"使用方法","slug":"使用方法","link":"#使用方法","children":[]},{"level":2,"title":"使用说明","slug":"使用说明","link":"#使用说明","children":[]},{"level":2,"title":"原理介绍","slug":"原理介绍","link":"#原理介绍","children":[]}],"git":{"updatedTime":1697908247000},"filePathRelative":"zh/features/v11/epq/parallel-dml.md"}');export{g as comp,T as data}; diff --git a/assets/parallel_data_flow-CfEUi17v.png b/assets/parallel_data_flow-CfEUi17v.png new file mode 100644 index 00000000000..021ba46b1b6 Binary files /dev/null and b/assets/parallel_data_flow-CfEUi17v.png differ diff --git a/assets/pgvector.html-D9Sei-Ks.js b/assets/pgvector.html-D9Sei-Ks.js new file mode 100644 index 00000000000..d86090635f0 --- /dev/null +++ b/assets/pgvector.html-D9Sei-Ks.js @@ -0,0 +1,14 @@ +import{_ as i,r as o,o as d,c as k,d as a,a as n,w as t,b as s,e as h}from"./app-CWFDhr_k.js";const g={},_=n("h1",{id:"pgvector",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#pgvector"},[n("span",null,"pgvector")])],-1),v={class:"table-of-contents"},m=n("h2",{id:"背景",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#背景"},[n("span",null,"背景")])],-1),f={href:"https://github.com/pgvector/pgvector",target:"_blank",rel:"noopener noreferrer"},b=n("code",null,"pgvector",-1),q=n("p",null,[n("code",null,"pgvector"),s(" 支持 IVFFlat 索引。IVFFlat 索引能够将向量空间分为若干个划分区域,每个区域都包含一些向量,并创建倒排索引,用于快速地查找与给定向量相似的向量。IVFFlat 是 IVFADC 索引的简化版本,适用于召回精度要求高,但对查询耗时要求不严格(100ms 级别)的场景。相比其他索引类型,IVFFlat 索引具有高召回率、高精度、算法和参数简单、空间占用小的优势。")],-1),E=n("p",null,[n("code",null,"pgvector"),s(" 插件算法的具体流程如下:")],-1),x=n("ol",null,[n("li",null,"高维空间中的点基于隐形的聚类属性,按照 K-Means 等聚类算法对向量进行聚类处理,使得每个类簇有一个中心点"),n("li",null,"检索向量时首先遍历计算所有类簇的中心点,找到与目标向量最近的 n 个类簇中心"),n("li",null,"遍历计算 n 个类簇中心所在聚类中的所有元素,经过全局排序得到距离最近的 k 个向量")],-1),w=n("h2",{id:"使用方法",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#使用方法"},[n("span",null,"使用方法")])],-1),I=n("code",null,"pgvector",-1),y={href:"https://github.com/pgvector/pgvector/blob/master/README.md",target:"_blank",rel:"noopener noreferrer"},N=h(`CREATE EXTENSION vector;
+
执行如下命令,创建一个含有向量字段的表:
CREATE TABLE t (val vector(3));
+
执行如下命令,可以插入向量数据:
INSERT INTO t (val) VALUES ('[0,0,0]'), ('[1,2,3]'), ('[1,1,1]'), (NULL);
+
创建 IVFFlat 类型的索引:
val vector_ip_ops
表示需要创建索引的列名为 val
,并且使用向量操作符 vector_ip_ops
来计算向量之间的相似度。vector_ip_ops 对应内积(点积)距离;pgvector 还提供 vector_l2_ops(欧几里得距离)、vector_cosine_ops(余弦距离)等操作符类,可按业务需要选择WITH (lists = 1)
表示使用的划分区域数量为 1,这意味着所有向量都将被分配到同一个区域中。在实际应用中,划分区域数量需要根据数据规模和查询性能进行调整CREATE INDEX ON t USING ivfflat (val vector_ip_ops) WITH (lists = 1);
+
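在数据量较大的实际场景中,通常会把 lists 设置得更大,并在查询前通过 ivfflat.probes 控制需要扫描的聚类数量。以下为一个示意(参数取值仅作演示):

```sql
-- 以更大的划分区域数量重建索引(示例取值)
CREATE INDEX ON t USING ivfflat (val vector_ip_ops) WITH (lists = 100);
-- 查询时扫描的聚类中心数量:越大召回率越高,耗时也越长
SET ivfflat.probes = 10;
```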
计算近似向量:
=> SELECT * FROM t ORDER BY val <#> '[3,3,3]';
+ val
+---------
+ [1,2,3]
+ [1,1,1]
+ [0,0,0]
+
+(4 rows)
+
DROP EXTENSION vector;
+
Sequence 作为数据库中的一个特别的表级对象,可以根据用户设定的不同属性,产生一系列有规则的整数,从而起到发号器的作用。
在使用方面,可以设置永不重复的 Sequence 用来作为一张表的主键,也可以通过不同表共享同一个 Sequence 来记录多个表的总插入行数。根据 ANSI 标准,一个 Sequence 对象在数据库要具备以下特征:
为了解释上述特性,我们分别定义 a
、b
两种序列来举例其具体的行为。
CREATE SEQUENCE a start with 5 minvalue -1 maxvalue 5 increment -2;
+CREATE SEQUENCE b start with 2 minvalue 1 maxvalue 4 cycle;
+
两个 Sequence 对象提供的序列值,随着序列申请次数的变化,如下所示:
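以下是一个可运行的示意(假设上面两个序列均已创建成功):

```sql
SELECT nextval('a');  -- 5(start value)
SELECT nextval('a');  -- 3
SELECT nextval('a');  -- 1
SELECT nextval('a');  -- -1(到达 minvalue)
SELECT nextval('a');  -- ERROR:未指定 CYCLE,超出 minvalue 后报错

SELECT nextval('b');  -- 2(start value)
SELECT nextval('b');  -- 3
SELECT nextval('b');  -- 4(到达 maxvalue)
SELECT nextval('b');  -- 1(CYCLE 生效,从 minvalue 重新开始)
```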
各数据库对 Sequence 对象的支持情况如下表所示:

| PostgreSQL | Oracle | SQLSERVER | MySQL | MariaDB | DB2 | Sybase | Hive |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 支持 | 支持 | 支持 | 仅支持自增字段 | 支持 | 支持 | 仅支持自增字段 | 不支持 |
为了更进一步了解 PostgreSQL 中的 Sequence 对象,我们先来了解 Sequence 的用法,并从用法中透析 Sequence 背后的设计原理。
PostgreSQL 提供了丰富的 Sequence 调用接口,以及组合使用的场景,以充分支持开发者的各种需求。
PostgreSQL 对 Sequence 对象也提供了类似于 表 的访问方式,即 DQL、DML 以及 DDL。我们从下图中可一览对外提供的 SQL 接口。
分别来介绍以下这几个接口:
该接口的含义为,返回 Session 上次使用的某一 Sequence 的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# select currval('seq');
+ currval
+---------
+ 2
+(1 row)
+
需要注意的是,使用该接口必须使用过一次 nextval
方法,否则会提示目标 Sequence 在当前 Session 未定义。
postgres=# select currval('seq');
+ERROR: currval of sequence "seq" is not yet defined in this session
+
该接口的含义为,返回 Session 上次使用的 Sequence 的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
+postgres=# select lastval();
+ lastval
+---------
+ 3
+(1 row)
+
同样,为了知道上次用的是哪个 Sequence 对象,需要用一次 nextval('seq')
,让 Session 以全局变量的形式记录下上次使用的 Sequence 对象。
lastval
与 curval
两个接口仅仅只是参数不同,currval
需要指定是哪个访问过的 Sequence 对象,而 lastval
无法指定,只能是最近一次使用的 Sequence 对象。
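下面用两个序列直观对比二者(示意,序列名仅为示例):

```sql
CREATE SEQUENCE s1;
CREATE SEQUENCE s2;
SELECT nextval('s1');  -- 1
SELECT nextval('s2');  -- 1
SELECT nextval('s1');  -- 2
SELECT currval('s2');  -- 1:指定序列 s2 最近一次返回的值
SELECT lastval();      -- 2:本 Session 最近一次 nextval 来自 s1
```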
该接口的含义为,取 Sequence 对象的下一个序列值。
通过使用 nextval
方法,可以让数据库基于 Sequence 对象的当前值,返回一个递增了 increment
数量的一个序列值,并将递增后的值作为 Sequence 对象当前值。
postgres=# CREATE SEQUENCE seq start with 1 increment 2;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
increment
称作 Sequence 对象的步长,Sequence 每次以 nextval
的方式进行申请时,都是以步长为单位进行申请的。同时,需要注意的是,Sequence 对象创建好以后,第一次申请获得的值,是 start value 所定义的值。对于 start value 的默认值,PostgreSQL 的规则是:递增序列(increment 为正)默认的 start value 为 minvalue,递减序列(increment 为负)默认的 start value 为 maxvalue。
另外,nextval
是一种特殊的 DML,其不受事务所保护,即:申请出的序列值不会再回滚。
postgres=# BEGIN;
+BEGIN
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
PostgreSQL 为了让 Sequence 对象获得较好的并发性能,并没有采用多版本的方式来更新 Sequence 对象,而是采用了原地修改的方式完成 Sequence 对象的更新。这种不受事务保护的更新方式,几乎是所有支持 Sequence 对象的 RDBMS 的通用做法,这也使得 Sequence 成为一种特殊的表级对象。
该接口的含义是,设置 Sequence 对象的序列值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
该方法可以将 Sequence 对象的序列值设置到给定的位置,同时可以将第一个序列值申请出来。如果不想申请出来,可以采用加入 false
参数的做法。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1, false);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
通过在 setval
来设置好 Sequence 对象的值以后,同时来设置 Sequence 对象的 is_called
属性。nextval
就可以根据 Sequence 对象的 is_called
属性来判断要返回的是否要返回设置的序列值。即:如果 is_called
为 false
,nextval
接口会去设置 is_called
为 true
,而不是进行 increment。
CREATE
和 ALTER SEQUENCE
用于创建/变更 Sequence 对象,其中 Sequence 属性也通过 CREATE
和 ALTER SEQUENCE
接口进行设置,前面已简单介绍部分属性,下面将详细描述具体的属性。
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+ALTER SEQUENCE [ IF EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ]
+ [ RESTART [ [ WITH ] restart ] ]
+ [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+
AS
:设置 Sequence 的数据类型,只可以设置为 smallint
,int
,bigint
;与此同时也限定了 minvalue
和 maxvalue
的设置范围,默认为 bigint
类型(注意,只是限定,而不是设置,设置的范围不得超过数据类型的范围)。INCREMENT
:步长,nextval
申请序列值的递增数量,默认值为 1。MINVALUE
/ NOMINVALUE
:设置/不设置 Sequence 对象的最小值,如果不设置则是数据类型规定的范围,例如 bigint
类型,则最小值设置为 PG_INT64_MIN
(-9223372036854775808)MAXVALUE
/ NOMAXVALUE
:设置/不设置 Sequence 对象的最大值,如果不设置,则默认设置规则如上。START
:Sequence 对象的初始值,必须在 MINVALUE
和 MAXVALUE
范围之间。RESTART
:ALTER 后,可以重新设置 Sequence 对象的序列值,默认设置为 start value。CACHE
/ NOCACHE
:设置 Sequence 对象使用的 Cache 大小,NOCACHE
或者不设置则默认为 1。OWNED BY
:设置 Sequence 对象归属于某张表的某一列,删除列后,Sequence 对象也将删除。下面描述了一种序列回滚的场景
postgres=# CREATE SEQUENCE seq;
+CREATE SEQUENCE
+postgres=# BEGIN;
+BEGIN
+postgres=# ALTER SEQUENCE seq maxvalue 10;
+ALTER SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
与之前描述的不同,此处 Sequence 对象受到了事务的保护,序列值发生了回滚。实际上,此处事务保护的是 ALTER SEQUENCE
(DDL),而非 nextval
(DML),因此此处发生的回滚是将 Sequence 对象回滚到 ALTER SEQUENCE
之前的状态,故发生了序列回滚现象。
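另外,上文属性列表中提到的 OWNED BY,其效果可以用下面的示意来验证(表名、序列名仅为示例):

```sql
CREATE TABLE tbl_owner (id INTEGER);
CREATE SEQUENCE seq_owned OWNED BY tbl_owner.id;
SELECT nextval('seq_owned');       -- 1
-- 删除所属列后,序列随之被自动删除
ALTER TABLE tbl_owner DROP COLUMN id;
SELECT nextval('seq_owned');       -- ERROR: relation "seq_owned" does not exist
```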
DROP SEQUENCE
,如字面意思,去除数据库中的 Sequence 对象。TRUNCATE
,准确来讲,是通过 TRUNCATE TABLE
完成 RESTART SEQUENCE
。postgres=# CREATE TABLE tbl_iden (i INTEGER, j int GENERATED ALWAYS AS IDENTITY);
+CREATE TABLE
+postgres=# insert into tbl_iden values (100);
+INSERT 0 1
+postgres=# insert into tbl_iden values (1000);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 100 | 1
+ 1000 | 2
+(2 rows)
+
+postgres=# TRUNCATE TABLE tbl_iden RESTART IDENTITY;
+TRUNCATE TABLE
+postgres=# insert into tbl_iden values (1234);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 1234 | 1
+(1 row)
+
此处相当于在 TRUNCATE
表的时候,执行 ALTER SEQUENCE RESTART
。
SEQUENCE 除了作为一个独立的对象使用以外,还可以与 PostgreSQL 的其他组件组合使用,我们总结了以下几个常用的场景。
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY);
+INSERT INTO tbl (i) VALUES (nextval('seq'));
+SELECT * FROM tbl ORDER BY 1 DESC;
+ tbl
+---------
+ 1
+(1 row)
+
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY, j INTEGER);
+CREATE FUNCTION f()
+RETURNS TRIGGER AS
+$$
+BEGIN
+NEW.i := nextval('seq');
+RETURN NEW;
+END;
+$$
+LANGUAGE 'plpgsql';
+
+CREATE TRIGGER tg
+BEFORE INSERT ON tbl
+FOR EACH ROW
+EXECUTE PROCEDURE f();
+
+INSERT INTO tbl (j) VALUES (4);
+
+SELECT * FROM tbl;
+ i | j
+---+---
+ 1 | 4
+(1 row)
+
显式 DEFAULT
调用:
CREATE SEQUENCE seq;
+CREATE TABLE tbl(i INTEGER DEFAULT nextval('seq') PRIMARY KEY, j INTEGER);
+
+INSERT INTO tbl (i,j) VALUES (DEFAULT,11);
+INSERT INTO tbl(j) VALUES (321);
+INSERT INTO tbl (i,j) VALUES (nextval('seq'),1);
+
+SELECT * FROM tbl;
+ i | j
+---+-----
+ 2 | 321
+ 1 | 11
+ 3 | 1
+(3 rows)
+
SERIAL
调用:
CREATE TABLE tbl (i SERIAL PRIMARY KEY, j INTEGER);
+INSERT INTO tbl (i,j) VALUES (DEFAULT,42);
+
+INSERT INTO tbl (j) VALUES (25);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 42
+ 2 | 25
+(2 rows)
+
注意,SERIAL
并不是一种类型,而是 DEFAULT
调用的另一种形式,只不过 SERIAL
会自动创建 DEFAULT
约束所要使用的 Sequence。
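换句话说,上面的 SERIAL 写法大致可以展开为下面的等价形式(示意,序列名按 PostgreSQL 默认命名规则推断):

```sql
CREATE SEQUENCE tbl_i_seq;
CREATE TABLE tbl (
    i INTEGER NOT NULL DEFAULT nextval('tbl_i_seq') PRIMARY KEY,
    j INTEGER
);
ALTER SEQUENCE tbl_i_seq OWNED BY tbl.i;
```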
CREATE TABLE tbl (i int GENERATED ALWAYS AS IDENTITY,
+ j INTEGER);
+INSERT INTO tbl(i,j) VALUES (DEFAULT,32);
+
+INSERT INTO tbl(j) VALUES (23);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 32
+ 2 | 23
+(2 rows)
+
AUTO_INC
调用对列附加了自增约束,与 default
约束不同,自增约束通过查找 dependency 的方式找到该列关联的 Sequence,而 default
调用仅仅是将默认值设置为一个 nextval
表达式。
在 PostgreSQL 中有一张专门记录 Sequence 信息的系统表,即 pg_sequence
。其表结构如下:
postgres=# \\d pg_sequence
+ Table "pg_catalog.pg_sequence"
+ Column | Type | Collation | Nullable | Default
+--------------+---------+-----------+----------+---------
+ seqrelid | oid | | not null |
+ seqtypid | oid | | not null |
+ seqstart | bigint | | not null |
+ seqincrement | bigint | | not null |
+ seqmax | bigint | | not null |
+ seqmin | bigint | | not null |
+ seqcache | bigint | | not null |
+ seqcycle | boolean | | not null |
+Indexes:
+ "pg_sequence_seqrelid_index" PRIMARY KEY, btree (seqrelid)
+
不难看出,pg_sequence
中记录了 Sequence 的全部的属性信息,该属性在 CREATE/ALTER SEQUENCE
中被设置,Sequence 的 nextval
以及 setval
要经常打开这张系统表,按照规则办事。
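例如,可以结合 pg_class 查询某个 Sequence 在 pg_sequence 中记录的属性(示意,以上文创建的 seq 为例):

```sql
SELECT c.relname, s.seqstart, s.seqincrement, s.seqmin, s.seqmax, s.seqcache, s.seqcycle
FROM pg_sequence s
JOIN pg_class c ON c.oid = s.seqrelid
WHERE c.relname = 'seq';
```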
对于 Sequence 序列数据本身,其实现基于 heap 表,heap 表共计三个字段,其表结构如下:
typedef struct FormData_pg_sequence_data
+{
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} FormData_pg_sequence_data;
+
last_value
记录了 Sequence 的当前的序列值,我们称之为页面值(与后续的缓存值相区分)log_cnt
记录了 Sequence 在 nextval
申请时,预先向 WAL 中额外申请的序列次数,这一部分我们放在序列申请机制剖析中详细介绍。is_called
标记 Sequence 的 last_value
是否已经被申请过,例如 setval
可以设置 is_called
字段:-- setval false
+postgres=# select setval('seq', 10, false);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | f
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 10
+(1 row)
+
+-- setval true
+postgres=# select setval('seq', 10, true);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 11
+(1 row)
+
每当用户创建一个 Sequence 对象时,PostgreSQL 总是会创建出一张上面这种结构的 heap 表,来记录 Sequence 对象的数据信息。当 Sequence 对象因为 nextval
或 setval
导致序列值变化时,PostgreSQL 就会通过原地更新的方式更新 heap 表中的这一行的三个字段。
以 setval
为例,下面的逻辑解释了其具体的原地更新过程。
static void
+do_setval(Oid relid, int64 next, bool iscalled)
+{
+
+ /* 打开并对Sequence heap表进行加锁 */
+ init_sequence(relid, &elm, &seqrel);
+
+ ...
+
+ /* 对buffer进行加锁,同时提取tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ ...
+
+ /* 原地更新tuple */
+ seq->last_value = next; /* last fetched number */
+ seq->is_called = iscalled;
+ seq->log_cnt = 0;
+
+ ...
+
+ /* 释放buffer锁以及表锁 */
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);
+}
+
可见,do_setval
会直接去设置 Sequence heap 表中的这一行元组,而非普通 heap 表中的删除 + 插入的方式来完成元组更新,对于 nextval
而言,也是类似的过程,只不过 last_value
的值需要计算得出,而非用户设置。
讲清楚 Sequence 对象在内核中的存在形式之后,就需要讲清楚一个序列值是如何发出的,即 nextval
方法。其在内核的具体实现在 sequence.c
中的 nextval_internal
函数,其最核心的功能,就是计算 last_value
以及 log_cnt
。
last_value
和 log_cnt
的具体关系如下图:
其中 log_cnt
是一个预留的申请次数。默认值为 32,由下面的宏定义决定:
/*
+ * We don't want to log each fetching of a value from a sequence,
+ * so we pre-log a few fetches in advance. In the event of
+ * crash we can lose (skip over) as many values as we pre-logged.
+ */
+#define SEQ_LOG_VALS 32
+
每当将 last_value
增加一个 increment 的长度时,log_cnt
就会递减 1。
当 log_cnt
为 0,或者发生 checkpoint
以后,就会触发一次 WAL 日志写入,按下面的公式设置 WAL 日志中的页面值,并重新将 log_cnt
设置为 SEQ_LOG_VALS
。
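结合上下文以及后文的 crash 示例,该公式可以整理为如下形式(示意,符号沿用上文字段名):

$$
WAL\_value = last\_value + SEQ\_LOG\_VALS \times increment
$$

即 WAL 中记录的页面值,等于当前页面值再向前预留 SEQ_LOG_VALS(32)次申请之后的值。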
通过这种方式,PostgreSQL 每次通过 nextval
修改页面中的 last_value
后,不需要每次都写入 WAL 日志。这意味着:如果 nextval
每次都需要修改页面值的话,这种优化将会使得写 WAL 的频率降低 32 倍。其代价就是,在发生 crash 前如果没有及时进行 checkpoint,那么会丢失一段序列。如下面所示:
postgres=# create sequence seq;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 1 | 32 | t
+(1 row)
+
+-- crash and restart
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 33 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 34
+(1 row)
+
显然,crash 以后,Sequence 对象产生了 2-33 这段空洞,但这个代价是可以被接受的,因为 Sequence 并没有违背唯一性原则。同时,在特定场景下极大地降低了写 WAL 的频率。
通过上述描述,不难发现 Sequence 每次发生序列申请,都需要通过加入 buffer 锁的方式来修改页面,这意味着 Sequence 的并发性能是比较差的。
针对这个问题,PostgreSQL 对 Sequence 使用了 Session Cache 来提前缓存一段序列,以提高并发性能。如下图所示:
Sequence Session Cache 的实现是一个 entry 数量固定为 16 的哈希表,以 Sequence 的 OID 为 key 去检索已经缓存好的 Sequence 序列,其缓存的 value 结构如下:
typedef struct SeqTableData
+{
+ Oid relid; /* Sequence OID(hash key) */
+ int64 last; /* value last returned by nextval */
+ int64 cached; /* last value already cached for nextval */
+ int64 increment; /* copy of sequence's increment field */
+} SeqTableData;
+
其中 last
即为 Sequence 在 Session 中的当前值,即 current_value,cached
为 Sequence 在 Session 中的缓存值,即 cached_value,increment
记录了步长,有了这三个值即可满足 Sequence 缓存的基本条件。
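Session Cache 的效果可以从两个会话的行为直观观察到(示意,假设 CACHE 为 20):

```sql
-- 会话 1
CREATE SEQUENCE seq_cache CACHE 20;
SELECT nextval('seq_cache');  -- 1:会话 1 一次性缓存 1~20

-- 会话 2
SELECT nextval('seq_cache');  -- 21:跳过会话 1 已缓存的区间,缓存 21~40
```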
对于 Sequence Session Cache 与页面值之间的关系,如下图所示:
类似于 log_cnt
,cache_cnt
即为用户在定义 Sequence 时,设置的 Cache 大小,最小为 1。只有当 cache domain 中的序列用完以后,才会去对 buffer 加锁,修改页中的 Sequence 页面值。调整过程如下所示:
例如,如果 CACHE 设置的值为 20,那么当 cache 使用完以后,就会尝试对 buffer 加锁来调整页面值,并重新申请 20 个 increment 至 cache 中。对于上图而言,有如下关系:
',15),A=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"c"),s("mi",null,"u"),s("mi",null,"r"),s("mi",null,"r"),s("mi",null,"e"),s("mi",null,"n"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," cached\\_value = NEW\\ current\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"rre"),s("span",{class:"mord mathnormal"},"n"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),N=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"c"),s("mi",null,"u"),s("mi",null,"r"),s("mi",null,"r"),s("mi",null,"e"),s("mi",null,"n"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"+"),s("mn",null,"20"),s("mo",null,"×"),s("mi",null,"I"),s("mi",null,"N"),s("mi",null,"C"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," 
"),s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," NEW\\ current\\_value+20\\times INC=NEW\\ cached\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"rre"),s("span",{class:"mord mathnormal"},"n"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}}),s("span",{class:"mbin"},"+"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"0.7278em","vertical-align":"-0.0833em"}}),s("span",{class:"mord"},"20"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}}),s("span",{class:"mbin"},"×"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"0.6833em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.07847em"}},"I"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.07153em"}},"NC"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),L=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"l"),s("mi",null,"a"),s("mi",null,"s"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," 
"),s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," NEW\\ last\\_value = NEW\\ cached\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"s"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),R=n('在 Sequence Session Cache 的加持下,nextval
方法的并发性能得到了极大的提升,以下是通过 pgbench 进行压测的结果对比。
Sequence 在 PostgreSQL 中是一类特殊的表级对象,提供了简单而又丰富的 SQL 接口,使得用户可以更加方便地创建、使用定制化的序列对象。不仅如此,Sequence 在内核中也具有丰富的组合使用场景,其使用场景也得到了极大的扩展。
本文详细介绍了 Sequence 对象在 PostgreSQL 内核中的具体设计,从对象的元数据描述、对象的数据描述出发,介绍了 Sequence 对象的组成。本文随后介绍了 Sequence 最为核心的 SQL 接口——nextval
,从 nextval
的序列值计算、原地更新、降低 WAL 日志写入三个方面进行了详细阐述。最后,本文介绍了 Sequence Session Cache 的相关原理,描述了引入 Cache 以后,序列值在 Cache 中,以及页面中的计算方法以及对齐关系,并对比了引入 Cache 前后,nextval
方法在单序列和多序列并发场景下的性能表现。
Sequence 作为数据库中的一个特别的表级对象,可以根据用户设定的不同属性,产生一系列有规则的整数,从而起到发号器的作用。
在使用方面,可以设置永不重复的 Sequence 用来作为一张表的主键,也可以通过不同表共享同一个 Sequence 来记录多个表的总插入行数。根据 ANSI 标准,一个 Sequence 对象在数据库要具备以下特征:
为了解释上述特性,我们分别定义 a
、b
两种序列来举例其具体的行为。
CREATE SEQUENCE a start with 5 minvalue -1 maxvalue 5 increment -2;
+CREATE SEQUENCE b start with 2 minvalue 1 maxvalue 4 cycle;
+
两个 Sequence 对象提供的序列值,随着序列申请次数的变化,如下所示:
各数据库对 Sequence 对象的支持情况如下表所示:

| PostgreSQL | Oracle | SQLSERVER | MySQL | MariaDB | DB2 | Sybase | Hive |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 支持 | 支持 | 支持 | 仅支持自增字段 | 支持 | 支持 | 仅支持自增字段 | 不支持 |
为了更进一步了解 PostgreSQL 中的 Sequence 对象,我们先来了解 Sequence 的用法,并从用法中透析 Sequence 背后的设计原理。
PostgreSQL 提供了丰富的 Sequence 调用接口,以及组合使用的场景,以充分支持开发者的各种需求。
PostgreSQL 对 Sequence 对象也提供了类似于 表 的访问方式,即 DQL、DML 以及 DDL。我们从下图中可一览对外提供的 SQL 接口。
分别来介绍以下这几个接口:
该接口的含义为,返回 Session 上次使用的某一 Sequence 的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# select currval('seq');
+ currval
+---------
+ 2
+(1 row)
+
需要注意的是,使用该接口必须使用过一次 nextval
方法,否则会提示目标 Sequence 在当前 Session 未定义。
postgres=# select currval('seq');
+ERROR: currval of sequence "seq" is not yet defined in this session
+
该接口的含义为,返回 Session 上次使用的 Sequence 的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
+postgres=# select lastval();
+ lastval
+---------
+ 3
+(1 row)
+
同样,为了知道上次用的是哪个 Sequence 对象,需要用一次 nextval('seq')
,让 Session 以全局变量的形式记录下上次使用的 Sequence 对象。
lastval
与 curval
两个接口仅仅只是参数不同,currval
需要指定是哪个访问过的 Sequence 对象,而 lastval
无法指定,只能是最近一次使用的 Sequence 对象。
该接口的含义为,取 Sequence 对象的下一个序列值。
通过使用 nextval
方法,可以让数据库基于 Sequence 对象的当前值,返回一个递增了 increment
数量的一个序列值,并将递增后的值作为 Sequence 对象当前值。
postgres=# CREATE SEQUENCE seq start with 1 increment 2;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
increment
称作 Sequence 对象的步长,Sequence 每次以 nextval
的方式进行申请时,都是以步长为单位进行申请的。同时,需要注意的是,Sequence 对象创建好以后,第一次申请获得的值,是 start value 所定义的值。对于 start value 的默认值,PostgreSQL 的规则是:递增序列(increment 为正)默认的 start value 为 minvalue,递减序列(increment 为负)默认的 start value 为 maxvalue。
另外,nextval
是一种特殊的 DML,其不受事务所保护,即:申请出的序列值不会再回滚。
postgres=# BEGIN;
+BEGIN
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
PostgreSQL 为了让 Sequence 对象获得较好的并发性能,并没有采用多版本的方式来更新 Sequence 对象,而是采用了原地修改的方式完成 Sequence 对象的更新。这种不受事务保护的更新方式,几乎是所有支持 Sequence 对象的 RDBMS 的通用做法,这也使得 Sequence 成为一种特殊的表级对象。
该接口的含义是,设置 Sequence 对象的序列值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
该方法可以将 Sequence 对象的序列值设置到给定的位置,同时可以将第一个序列值申请出来。如果不想申请出来,可以采用加入 false
参数的做法。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1, false);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
通过在 setval
来设置好 Sequence 对象的值以后,同时来设置 Sequence 对象的 is_called
属性。nextval
就可以根据 Sequence 对象的 is_called
属性来判断要返回的是否要返回设置的序列值。即:如果 is_called
为 false
,nextval
接口会去设置 is_called
为 true
,而不是进行 increment。
CREATE
和 ALTER SEQUENCE
用于创建/变更 Sequence 对象,其中 Sequence 属性也通过 CREATE
和 ALTER SEQUENCE
接口进行设置,前面已简单介绍部分属性,下面将详细描述具体的属性。
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+ALTER SEQUENCE [ IF EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ]
+ [ RESTART [ [ WITH ] restart ] ]
+ [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+
AS
:设置 Sequence 的数据类型,只可以设置为 smallint
,int
,bigint
;与此同时也限定了 minvalue
和 maxvalue
的设置范围,默认为 bigint
类型(注意,只是限定,而不是设置,设置的范围不得超过数据类型的范围)。INCREMENT
:步长,nextval
申请序列值的递增数量,默认值为 1。MINVALUE
/ NOMINVALUE
:设置/不设置 Sequence 对象的最小值,如果不设置则是数据类型规定的范围,例如 bigint
类型,则最小值设置为 PG_INT64_MIN
(-9223372036854775808)MAXVALUE
/ NOMAXVALUE
:设置/不设置 Sequence 对象的最大值,如果不设置,则默认设置规则如上。START
:Sequence 对象的初始值,必须在 MINVALUE
和 MAXVALUE
范围之间。RESTART
:ALTER 后,可以重新设置 Sequence 对象的序列值,默认设置为 start value。CACHE
/ NOCACHE
:设置 Sequence 对象使用的 Cache 大小,NOCACHE
或者不设置则默认为 1。OWNED BY
:设置 Sequence 对象归属于某张表的某一列,删除列后,Sequence 对象也将删除。下面描述了一种序列回滚的场景
postgres=# CREATE SEQUENCE seq;
+CREATE SEQUENCE
+postgres=# BEGIN;
+BEGIN
+postgres=# ALTER SEQUENCE seq maxvalue 10;
+ALTER SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
与之前描述的不同,此处 Sequence 对象受到了事务的保护,序列值发生了回滚。实际上,此处事务保护的是 ALTER SEQUENCE
(DDL),而非 nextval
(DML),因此此处发生的回滚是将 Sequence 对象回滚到 ALTER SEQUENCE
之前的状态,故发生了序列回滚现象。
DROP SEQUENCE
,如字面意思,去除数据库中的 Sequence 对象。TRUNCATE
,准确来讲,是通过 TRUNCATE TABLE
完成 RESTART SEQUENCE
。postgres=# CREATE TABLE tbl_iden (i INTEGER, j int GENERATED ALWAYS AS IDENTITY);
+CREATE TABLE
+postgres=# insert into tbl_iden values (100);
+INSERT 0 1
+postgres=# insert into tbl_iden values (1000);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 100 | 1
+ 1000 | 2
+(2 rows)
+
+postgres=# TRUNCATE TABLE tbl_iden RESTART IDENTITY;
+TRUNCATE TABLE
+postgres=# insert into tbl_iden values (1234);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 1234 | 1
+(1 row)
+
此处相当于在 TRUNCATE
表的时候,执行 ALTER SEQUENCE RESTART
。
SEQUENCE 除了作为一个独立的对象使用以外,还可以与 PostgreSQL 的其他组件组合使用,我们总结了以下几个常用的场景。
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY);
+INSERT INTO tbl (i) VALUES (nextval('seq'));
+SELECT * FROM tbl ORDER BY 1 DESC;
+ tbl
+---------
+ 1
+(1 row)
+
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY, j INTEGER);
+CREATE FUNCTION f()
+RETURNS TRIGGER AS
+$$
+BEGIN
+NEW.i := nextval('seq');
+RETURN NEW;
+END;
+$$
+LANGUAGE 'plpgsql';
+
+CREATE TRIGGER tg
+BEFORE INSERT ON tbl
+FOR EACH ROW
+EXECUTE PROCEDURE f();
+
+INSERT INTO tbl (j) VALUES (4);
+
+SELECT * FROM tbl;
+ i | j
+---+---
+ 1 | 4
+(1 row)
+
显式 DEFAULT
调用:
CREATE SEQUENCE seq;
+CREATE TABLE tbl(i INTEGER DEFAULT nextval('seq') PRIMARY KEY, j INTEGER);
+
+INSERT INTO tbl (i,j) VALUES (DEFAULT,11);
+INSERT INTO tbl(j) VALUES (321);
+INSERT INTO tbl (i,j) VALUES (nextval('seq'),1);
+
+SELECT * FROM tbl;
+ i | j
+---+-----
+ 2 | 321
+ 1 | 11
+ 3 | 1
+(3 rows)
+
SERIAL
调用:
CREATE TABLE tbl (i SERIAL PRIMARY KEY, j INTEGER);
+INSERT INTO tbl (i,j) VALUES (DEFAULT,42);
+
+INSERT INTO tbl (j) VALUES (25);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 42
+ 2 | 25
+(2 rows)
+
注意,SERIAL
并不是一种类型,而是 DEFAULT
调用的另一种形式,只不过 SERIAL
会自动创建 DEFAULT
约束所要使用的 Sequence。
CREATE TABLE tbl (i int GENERATED ALWAYS AS IDENTITY,
+ j INTEGER);
+INSERT INTO tbl(i,j) VALUES (DEFAULT,32);
+
+INSERT INTO tbl(j) VALUES (23);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 32
+ 2 | 23
+(2 rows)
+
AUTO_INC
调用对列附加了自增约束,与 default
约束不同,自增约束通过查找 dependency 的方式找到该列关联的 Sequence,而 default
调用仅仅是将默认值设置为一个 nextval
表达式。
在 PostgreSQL 中有一张专门记录 Sequence 信息的系统表,即 pg_sequence
。其表结构如下:
postgres=# \\d pg_sequence
+ Table "pg_catalog.pg_sequence"
+ Column | Type | Collation | Nullable | Default
+--------------+---------+-----------+----------+---------
+ seqrelid | oid | | not null |
+ seqtypid | oid | | not null |
+ seqstart | bigint | | not null |
+ seqincrement | bigint | | not null |
+ seqmax | bigint | | not null |
+ seqmin | bigint | | not null |
+ seqcache | bigint | | not null |
+ seqcycle | boolean | | not null |
+Indexes:
+ "pg_sequence_seqrelid_index" PRIMARY KEY, btree (seqrelid)
+
不难看出,pg_sequence
中记录了 Sequence 的全部的属性信息,该属性在 CREATE/ALTER SEQUENCE
中被设置,Sequence 的 nextval
以及 setval
要经常打开这张系统表,按照规则办事。
对于 Sequence 序列数据本身,其实现基于 heap 表,heap 表共计三个字段,其表结构如下:
typedef struct FormData_pg_sequence_data
+{
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} FormData_pg_sequence_data;
+
last_value
记录了 Sequence 的当前的序列值,我们称之为页面值(与后续的缓存值相区分)log_cnt
记录了 Sequence 在 nextval
申请时,预先向 WAL 中额外申请的序列次数,这一部分我们放在序列申请机制剖析中详细介绍。is_called
标记 Sequence 的 last_value
是否已经被申请过,例如 setval
可以设置 is_called
字段:-- setval false
+postgres=# select setval('seq', 10, false);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | f
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 10
+(1 row)
+
+-- setval true
+postgres=# select setval('seq', 10, true);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 11
+(1 row)
+
每当用户创建一个 Sequence 对象时,PostgreSQL 总是会创建出一张上面这种结构的 heap 表,来记录 Sequence 对象的数据信息。当 Sequence 对象因为 nextval
或 setval
导致序列值变化时,PostgreSQL 就会通过原地更新的方式更新 heap 表中的这一行的三个字段。
以 setval
为例,下面的逻辑解释了其具体的原地更新过程。
static void
+do_setval(Oid relid, int64 next, bool iscalled)
+{
+
+ /* 打开并对Sequence heap表进行加锁 */
+ init_sequence(relid, &elm, &seqrel);
+
+ ...
+
+ /* 对buffer进行加锁,同时提取tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ ...
+
+ /* 原地更新tuple */
+ seq->last_value = next; /* last fetched number */
+ seq->is_called = iscalled;
+ seq->log_cnt = 0;
+
+ ...
+
+ /* 释放buffer锁以及表锁 */
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);
+}
+
可见,do_setval
会直接去设置 Sequence heap 表中的这一行元组,而非普通 heap 表中的删除 + 插入的方式来完成元组更新,对于 nextval
而言,也是类似的过程,只不过 last_value
的值需要计算得出,而非用户设置。
讲清楚 Sequence 对象在内核中的存在形式之后,就需要讲清楚一个序列值是如何发出的,即 nextval
方法。其在内核的具体实现在 sequence.c
中的 nextval_internal
函数,其最核心的功能,就是计算 last_value
以及 log_cnt
。
last_value
和 log_cnt
的具体关系如下图:
其中 log_cnt
是一个预留的申请次数。默认值为 32,由下面的宏定义决定:
/*
+ * We don't want to log each fetching of a value from a sequence,
+ * so we pre-log a few fetches in advance. In the event of
+ * crash we can lose (skip over) as many values as we pre-logged.
+ */
+#define SEQ_LOG_VALS 32
+
每当将 last_value
增加一个 increment 的长度时,log_cnt
就会递减 1。
当 log_cnt
为 0,或者发生 checkpoint
以后,就会触发一次 WAL 日志写入,按下面的公式设置 WAL 日志中的页面值,并重新将 log_cnt
设置为 SEQ_LOG_VALS
。
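结合上下文以及后文的 crash 示例,该公式可以整理为如下形式(示意,符号沿用上文字段名):

$$
WAL\_value = last\_value + SEQ\_LOG\_VALS \times increment
$$

即 WAL 中记录的页面值,等于当前页面值再向前预留 SEQ_LOG_VALS(32)次申请之后的值。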
通过这种方式,PostgreSQL 每次通过 nextval
修改页面中的 last_value
后,不需要每次都写入 WAL 日志。这意味着:如果 nextval
每次都需要修改页面值的话,这种优化将会使得写 WAL 的频率降低 32 倍。其代价就是,在发生 crash 前如果没有及时进行 checkpoint,那么会丢失一段序列。如下面所示:
postgres=# create sequence seq;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 1 | 32 | t
+(1 row)
+
+-- crash and restart
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 33 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 34
+(1 row)
+
显然,crash 以后,Sequence 对象产生了 2-33 这段空洞,但这个代价是可以被接受的,因为 Sequence 并没有违背唯一性原则。同时,在特定场景下极大地降低了写 WAL 的频率。
通过上述描述,不难发现 Sequence 每次发生序列申请,都需要通过加入 buffer 锁的方式来修改页面,这意味着 Sequence 的并发性能是比较差的。
针对这个问题,PostgreSQL 对 Sequence 使用了 Session Cache 来提前缓存一段序列,以提高并发性能。如下图所示:
Sequence Session Cache 的实现是一个 entry 数量固定为 16 的哈希表,以 Sequence 的 OID 为 key 去检索已经缓存好的 Sequence 序列,其缓存的 value 结构如下:
typedef struct SeqTableData
+{
+ Oid relid; /* Sequence OID(hash key) */
+ int64 last; /* value last returned by nextval */
+ int64 cached; /* last value already cached for nextval */
+ int64 increment; /* copy of sequence's increment field */
+} SeqTableData;
+
其中 last
即为 Sequence 在 Session 中的当前值,即 current_value,cached
为 Sequence 在 Session 中的缓存值,即 cached_value,increment
记录了步长,有了这三个值即可满足 Sequence 缓存的基本条件。
对于 Sequence Session Cache 与页面值之间的关系,如下图所示:
类似于 log_cnt
,cache_cnt
即为用户在定义 Sequence 时,设置的 Cache 大小,最小为 1。只有当 cache domain 中的序列用完以后,才会去对 buffer 加锁,修改页中的 Sequence 页面值。调整过程如下所示:
例如,如果 CACHE 设置的值为 20,那么当 cache 使用完以后,就会尝试对 buffer 加锁来调整页面值,并重新申请 20 个 increment 至 cache 中。对于上图而言,有如下关系:
',15),A=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"c"),s("mi",null,"u"),s("mi",null,"r"),s("mi",null,"r"),s("mi",null,"e"),s("mi",null,"n"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," cached\\_value = NEW\\ current\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"rre"),s("span",{class:"mord mathnormal"},"n"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),N=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"c"),s("mi",null,"u"),s("mi",null,"r"),s("mi",null,"r"),s("mi",null,"e"),s("mi",null,"n"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"+"),s("mn",null,"20"),s("mo",null,"×"),s("mi",null,"I"),s("mi",null,"N"),s("mi",null,"C"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," 
"),s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," NEW\\ current\\_value+20\\times INC=NEW\\ cached\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"rre"),s("span",{class:"mord mathnormal"},"n"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}}),s("span",{class:"mbin"},"+"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"0.7278em","vertical-align":"-0.0833em"}}),s("span",{class:"mord"},"20"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}}),s("span",{class:"mbin"},"×"),s("span",{class:"mspace",style:{"margin-right":"0.2222em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"0.6833em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.07847em"}},"I"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.07153em"}},"NC"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),L=s("p",{class:"katex-block"},[s("span",{class:"katex-display"},[s("span",{class:"katex"},[s("span",{class:"katex-mathml"},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("semantics",null,[s("mrow",null,[s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," "),s("mi",null,"l"),s("mi",null,"a"),s("mi",null,"s"),s("mi",null,"t"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e"),s("mo",null,"="),s("mi",null,"N"),s("mi",null,"E"),s("mi",null,"W"),s("mtext",null," 
"),s("mi",null,"c"),s("mi",null,"a"),s("mi",null,"c"),s("mi",null,"h"),s("mi",null,"e"),s("mi",null,"d"),s("mi",{mathvariant:"normal"},"_"),s("mi",null,"v"),s("mi",null,"a"),s("mi",null,"l"),s("mi",null,"u"),s("mi",null,"e")]),s("annotation",{encoding:"application/x-tex"}," NEW\\ last\\_value = NEW\\ cached\\_value ")])])]),s("span",{class:"katex-html","aria-hidden":"true"},[s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"s"),s("span",{class:"mord mathnormal"},"t"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}}),s("span",{class:"mrel"},"="),s("span",{class:"mspace",style:{"margin-right":"0.2778em"}})]),s("span",{class:"base"},[s("span",{class:"strut",style:{height:"1.0044em","vertical-align":"-0.31em"}}),s("span",{class:"mord mathnormal",style:{"margin-right":"0.05764em"}},"NE"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.13889em"}},"W"),s("span",{class:"mspace"}," "),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal"},"c"),s("span",{class:"mord mathnormal"},"h"),s("span",{class:"mord mathnormal"},"e"),s("span",{class:"mord mathnormal"},"d"),s("span",{class:"mord",style:{"margin-right":"0.02778em"}},"_"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.03588em"}},"v"),s("span",{class:"mord mathnormal"},"a"),s("span",{class:"mord mathnormal",style:{"margin-right":"0.01968em"}},"l"),s("span",{class:"mord mathnormal"},"u"),s("span",{class:"mord mathnormal"},"e")])])])])],-1),R=n('在 Sequence Session Cache 的加持下,nextval
方法的并发性能得到了极大的提升,以下是通过 pgbench 进行压测的结果对比。
Sequence 在 PostgreSQL 中是一类特殊的表级对象,提供了简单而又丰富的 SQL 接口,使用户可以更加方便地创建、使用定制化的序列对象。不仅如此,Sequence 在内核中也具有丰富的组合使用场景,其应用范围得到了极大的扩展。
本文详细介绍了 Sequence 对象在 PostgreSQL 内核中的具体设计,从对象的元数据描述、对象的数据描述出发,介绍了 Sequence 对象的组成。本文随后介绍了 Sequence 最为核心的 SQL 接口——nextval
,从 nextval
的序列值计算、原地更新、降低 WAL 日志写入三个方面进行了详细阐述。最后,本文介绍了 Sequence Session Cache 的相关原理,描述了引入 Cache 以后,序列值在 Cache 以及页面中的计算方法和对齐关系,并对比了引入 Cache 前后,nextval
方法在单序列和多序列并发场景下的性能表现。
# 拉取 PolarDB-PG 镜像
+docker pull polardb/polardb_pg_local_instance
+# 创建并运行容器
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# 测试可用性
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
# 拉取 PolarDB-PG 镜像
+docker pull polardb/polardb_pg_local_instance
+# 创建并运行容器
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# 测试可用性
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在 SQL 执行的过程中,存在若干次对系统表和用户表的查询。PolarDB for PostgreSQL 通过文件系统的 lseek 系统调用来获取表大小。频繁执行 lseek 系统调用会严重影响数据库的执行性能,特别是对于存储计算分离架构的 PolarDB for PostgreSQL 来说,在 PolarFS 上的 PFS lseek 调用会带来更大的时延。为了降低 lseek 系统调用的使用频率,PolarDB for PostgreSQL 在自身存储引擎上提供了一层表大小缓存接口,用于提升数据库的运行时性能。
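如果想直观感受这一开销,可以用通用的 Linux 工具 strace 统计一个后端进程在执行查询期间触发的 lseek 次数(以下进程号与操作步骤仅为示例):
# 在 psql 中执行 SELECT pg_backend_pid(); 获取当前后端进程号(假设为 12345)
+sudo strace -c -f -e trace=lseek -p 12345
+# 在该会话中执行一次查询后,以 Ctrl-C 结束 strace,即可看到 lseek 的调用次数统计
+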
PolarDB for PostgreSQL 为了实现 RSC,在 smgr 层进行了重新适配与设计。在整体上,RSC 是一个 缓存数组 + 两级索引 的结构设计:一级索引通过内存地址 + 引用计数来寻找共享内存 RSC 缓存中的一个缓存块;二级索引通过共享内存中的哈希表来索引得到一个 RSC 缓存块的数组下标,根据下标进一步访问 RSC 缓存,获取表大小信息。
在开启 RSC 缓存功能后,各个 smgr 层接口将会生效 RSC 缓存查询与更新的逻辑:
smgrnblocks
:获取表大小的实际入口,将会通过查询 RSC 一级或二级索引得到 RSC 缓存块地址,从而得到物理表大小。如果 RSC 缓存命中则直接返回缓存中的物理表大小;否则需要进行一次 lseek 系统调用,并将实际的物理表大小更新到 RSC 缓存中,并同步更新 RSC 一级与二级索引。smgrextend
:表文件扩展接口,将会把物理表文件扩展一个页,并更新对应表的 RSC 索引与缓存。smgrextendbatch
:表文件的预扩展接口,将会把物理表文件预扩展多个页,并更新对应表的 RSC 索引与缓存。smgrtruncate
:表文件的删除接口,将会把物理表文件删除,并清空对应表的 RSC 索引与缓存。在共享内存中,维护了一个数组形式的 RSC 缓存。数组中的每个元素是一个 RSC 缓存块,其中保存的关键信息包含:
generation
:表发生更新操作时,这个计数会自增。
对于每个执行用户操作的会话进程而言,其所需访问的表被维护在进程私有的 SmgrRelation
结构中,其中包含指向 RSC 缓存块的指针,以及上次访问时记录的 generation
计数。当执行表访问操作时,如果该 generation 计数与 RSC 缓存中的 generation
一致,则认为 RSC 缓存没有被更新过,可以直接通过指针得到 RSC 缓存,获得物理表的当前大小。RSC 一级索引整体上是一个共享引用计数 + 共享内存指针的设计,在对大多数特定表的读多写少场景中,这样的设计可以有效降低对 RSC 二级索引的并发访问。
当表大小发生更新(例如 INSERT
、UPDATE
、COPY
等触发表文件大小元信息变更的操作)时,会导致 RSC 一级索引失效(generation
计数不一致),会话进程会尝试访问 RSC 二级索引。RSC 二级索引的形式是一个共享内存哈希表,以物理表的 OID 为键,以 RSC 缓存块的数组下标为值:
通过待访问物理表的 OID,查找位于共享内存中的 RSC 二级索引:如果命中,则直接得到 RSC 缓存块,取得表大小,同时更新 RSC 一级索引;如果不命中,则使用 lseek 系统调用获取物理表的实际大小,并更新 RSC 缓存及其一二级索引。RSC 缓存更新的过程可能因缓存已满而触发缓存淘汰。
在 RSC 缓存被更新的过程中,可能会因为缓存总容量已满,进而触发缓存淘汰。RSC 实现了一个 SLRU 缓存淘汰算法,用于在缓存块满时选择一个旧缓存块进行淘汰。每一个 RSC 缓存块上都维护了一个引用计数器,缓存每被访问一次,计数器的值加 1;缓存被淘汰时计数器清 0。当缓存淘汰被触发时,将从 RSC 缓存数组上一次遍历到的位置开始向前遍历,递减每一个 RSC 缓存上的引用计数,直到找到一个引用计数为 0 的缓存块进行淘汰。遍历的长度可以通过 GUC 参数控制,默认为 8:当向前遍历 8 个块后仍未找到一个可以被淘汰的 RSC 缓存块时,将会随机选择一个缓存块进行淘汰。
PolarDB for PostgreSQL 的备节点分为两种:一种是提供只读服务的共享存储 Read Only 节点(RO),另一种是提供跨数据中心高可用的 Standby 节点。对于 Standby 节点,由于其数据同步采用传统的流复制 + WAL 日志回放方式,故 RSC 缓存的使用与更新方式与 Read Write 节点(RW)无异。但对于 RO 节点,其数据同步依赖 PolarDB for PostgreSQL 实现的 LogIndex 机制,故需要额外支持该机制下 RO 节点的 RSC 缓存同步方式。对于每种 WAL 日志类型,都需要根据当前是否存在 New Page 类型的日志,进行缓存更新与淘汰处理,保证 RO 节点下 RSC 缓存的一致性。
该功能默认生效。提供如下 GUC 参数控制:
polar_nblocks_cache_mode
:是否开启 RSC 功能,取值为: scan
(默认值):表示仅在 scan
顺序查询场景下开启on
:在所有场景下全量开启 RSCoff
:关闭 RSC;参数从 scan
或 on
设置为 off
,可以直接通过 ALTER SYSTEM SET
进行设置,无需重启即可生效;参数从 off
设置为 scan
/ on
,需要修改 postgresql.conf
配置文件并重启生效polar_enable_replica_use_smgr_cache
:RO 节点是否开启 RSC 功能,默认为 on
。可配置为 on
/ off
。polar_enable_standby_use_smgr_cache
:Standby 节点是否开启 RSC 功能,默认为 on
。可配置为 on
/ off
。通过如下 Shell 脚本创建一个带有 1000 个子分区的分区表:
psql -c "CREATE TABLE hp(a INT) PARTITION BY HASH(a);"
+for ((i=1; i<1000; i++)); do
+ psql -c "CREATE TABLE hp$i PARTITION OF hp FOR VALUES WITH(modulus 1000, remainder $i);"
+done
+
此时分区子表无数据。接下来借助一条在所有子分区上的聚合查询,来验证打开或关闭 RSC 功能时,lseek 系统调用所带来的时间性能影响。
开启 RSC:
ALTER SYSTEM SET polar_nblocks_cache_mode = 'scan';
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_replica_use_smgr_cache = on;
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_standby_use_smgr_cache = on;
+ALTER SYSTEM
+
+SELECT pg_reload_conf();
+ pg_reload_conf
+----------------
+ t
+(1 row)
+
+SHOW polar_nblocks_cache_mode;
+ polar_nblocks_cache_mode
+--------------------------
+ scan
+(1 row)
+
+SHOW polar_enable_replica_use_smgr_cache ;
+ polar_enable_replica_use_smgr_cache
+--------------------------
+ on
+(1 row)
+
+SHOW polar_enable_standby_use_smgr_cache ;
+ polar_enable_standby_use_smgr_cache
+--------------------------
+ on
+(1 row)
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 97.658 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 108.672 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 93.678 ms
+
关闭 RSC:
ALTER SYSTEM SET polar_nblocks_cache_mode = 'off';
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_replica_use_smgr_cache = off;
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_standby_use_smgr_cache = off;
+ALTER SYSTEM
+
+SELECT pg_reload_conf();
+ pg_reload_conf
+----------------
+ t
+(1 row)
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 164.772 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 147.255 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 177.039 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 194.724 ms
+
PolarDB for PostgreSQL 的内存可以分为以下三部分:共享内存、进程间动态共享内存、进程私有内存。
进程间动态共享内存和进程私有内存是 动态分配 的,其使用量随着实例承载的业务运行情况而不断变化。过多使用动态内存,可能会导致内存使用量超过操作系统限制,触发内核内存限制机制,造成实例进程异常退出,实例重启,引发实例不可用的问题。
进程私有内存 MemoryContext 管理的内存可以分为两部分:
为了解决以上问题,PolarDB for PostgreSQL 增加了 Resource Manager 资源限制机制,能够在实例运行期间,周期性检测资源使用情况。对于超过资源限制阈值的进程,强制进行资源限制,降低实例不可用的风险。
Resource Manager 当前仅支持对内存资源进行限制。
内存限制依赖 Cgroup,如果不存在 Cgroup,则无法有效进行资源限制。Resource Manager 作为 PolarDB for PostgreSQL 一个后台辅助进程,周期性读取 Cgroup 的内存使用数据作为内存限制的依据。当发现存在进程超过内存限制阈值后,会读取内核的用户进程内存记账,按照内存大小排序,依次对内存使用量超过阈值的进程发送中断进程信号(SIGTERM)或取消操作信号(SIGINT)。
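作为参考,可以手动查看 Resource Manager 所依赖的这类 Cgroup 内存统计(以下以 cgroup v1 的 memory 子系统为例,cgroup 路径为假设值,请按实际部署调整):
CGROUP_DIR=/sys/fs/cgroup/memory/polardb    # 假设的实例 cgroup 路径
+cat $CGROUP_DIR/memory.usage_in_bytes       # 当前内存使用量(字节)
+cat $CGROUP_DIR/memory.limit_in_bytes       # 内存使用上限(字节)
+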
Resource Manager 守护进程会随着实例启动而建立,同时对 RW、RO 以及 Standby 节点起作用。可以通过修改参数改变 Resource Manager 的行为。
enable_resource_manager
:是否启动 Resource Manager,取值为 on
/ off
,默认值为 on
stat_interval
:资源使用量周期检测的间隔,单位为毫秒,取值范围为 10
-10000
,默认值为 500
total_mem_limit_rate
:限制实例内存使用的百分比,当实例内存使用超过该百分比后,开始强制对内存资源进行限制,默认值为 95
total_mem_limit_remain_size
:实例内存预留值,当实例空闲内存小于预留值后,开始强制对内存资源进行限制,单位为 kB,取值范围为 131072
-MAX_KILOBYTES
(整型数值最大值),默认值为 524288
mem_release_policy
:内存资源限制的策略 none
:无动作default
:缺省策略(默认值),优先中断空闲进程,然后中断活跃进程cancel_query
:中断活跃进程terminate_idle_backend
:中断空闲进程terminate_any_backend
:中断所有进程terminate_random_backend
:中断随机进程2022-11-28 14:07:56.929 UTC [18179] LOG: [polar_resource_manager] terminate process 13461 release memory 65434123 bytes
+2022-11-28 14:08:17.143 UTC [35472] FATAL: terminating connection due to out of memory
+2022-11-28 14:08:17.143 UTC [35472] BACKTRACE:
+ postgres: primary: postgres postgres [local] idle(ProcessInterrupts+0x34c) [0xae5fda]
+ postgres: primary: postgres postgres [local] idle(ProcessClientReadInterrupt+0x3a) [0xae1ad6]
+ postgres: primary: postgres postgres [local] idle(secure_read+0x209) [0x8c9070]
+ postgres: primary: postgres postgres [local] idle() [0x8d4565]
+ postgres: primary: postgres postgres [local] idle(pq_getbyte+0x30) [0x8d4613]
+ postgres: primary: postgres postgres [local] idle() [0xae1861]
+ postgres: primary: postgres postgres [local] idle() [0xae1a83]
+ postgres: primary: postgres postgres [local] idle(PostgresMain+0x8df) [0xae7949]
+ postgres: primary: postgres postgres [local] idle() [0x9f4c4c]
+ postgres: primary: postgres postgres [local] idle() [0x9f440c]
+ postgres: primary: postgres postgres [local] idle() [0x9ef963]
+ postgres: primary: postgres postgres [local] idle(PostmasterMain+0x1321) [0x9ef18a]
+ postgres: primary: postgres postgres [local] idle() [0x8dc1f6]
+ /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f888afff445]
+ postgres: primary: postgres postgres [local] idle() [0x49d209]
+
为方便起见,本示例使用基于本地磁盘的实例来进行演示。拉取如下镜像并启动容器,可以得到一个基于本地磁盘的 HTAP 实例:
docker pull polardb/polardb_pg_local_instance
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg_htap \\
+ --shm-size=512m \\
+ polardb/polardb_pg_local_instance \\
+ bash
+
容器内的 5432
至 5434
端口分别运行着一个读写节点和两个只读节点。两个只读节点与读写节点共享同一份数据,并通过物理复制保持与读写节点的内存状态同步。
首先,连接到读写节点,创建一张表并插入一些数据:
psql -p5432
+
postgres=# CREATE TABLE t (id int);
+CREATE TABLE
+postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
然后连接到只读节点,并同样试图对表插入数据,将会发现无法进行插入操作:
psql -p5433
+
postgres=# INSERT INTO t SELECT generate_series(1,10);
+ERROR: cannot execute INSERT in a read-only transaction
+
此时,关闭读写节点,模拟出读写节点不可用的行为:
$ pg_ctl -D ~/tmp_master_dir_polardb_pg_1100_bld/ stop
+waiting for server to shut down.... done
+server stopped
+
此时,集群中已经没有任何节点可以写入存储。我们需要将一个只读节点提升为读写节点,以恢复对存储的写入能力。
只有当读写节点停止写入后,才可以将只读节点提升为读写节点,否则将会出现集群内两个节点同时写入的情况。当数据库检测到出现多节点写入时,将会导致运行异常。
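在执行 Promote 之前,可以先通过 PostgreSQL 自带的 pg_isready 工具确认原读写节点确实已停止服务(端口为本示例中读写节点的端口):
$ pg_isready -p 5432
+# 若输出以 "no response" 结尾,说明原读写节点已不再接受连接
+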
将运行在 5433
端口的只读节点提升为读写节点:
$ pg_ctl -D ~/tmp_replica_dir_polardb_pg_1100_bld1/ promote
+waiting for server to promote.... done
+server promoted
+
连接到已经完成 promote 的新读写节点上,再次尝试之前的 INSERT
操作:
postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
从上述结果中可以看到,新的读写节点能够成功对存储进行写入。这说明原先的只读节点已经被成功提升为读写节点了。
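除了验证写入之外,还可以通过 PostgreSQL 自带的函数确认该节点已退出只读(恢复)状态:
postgres=# SELECT pg_is_in_recovery();
+ pg_is_in_recovery
+-------------------
+ f
+(1 row)
+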
`,23);function b(p,v){const l=o("ArticleInfo"),n=o("router-link");return c(),d("div",null,[h,e(l,{frontmatter:p.$frontmatter},null,8,["frontmatter"]),k,m,a("nav",g,[a("ul",null,[a("li",null,[e(n,{to:"#前置准备"},{default:t(()=>[s("前置准备")]),_:1})]),a("li",null,[e(n,{to:"#验证只读节点不可写"},{default:t(()=>[s("验证只读节点不可写")]),_:1})]),a("li",null,[e(n,{to:"#读写节点停止写入"},{default:t(()=>[s("读写节点停止写入")]),_:1})]),a("li",null,[e(n,{to:"#只读节点-promote"},{default:t(()=>[s("只读节点 Promote")]),_:1})]),a("li",null,[e(n,{to:"#计算集群恢复写入"},{default:t(()=>[s("计算集群恢复写入")]),_:1})])])]),_])}const E=r(u,[["render",b],["__file","ro-online-promote.html.vue"]]),T=JSON.parse('{"path":"/zh/operation/ro-online-promote.html","title":"只读节点在线 Promote","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2022/12/25","minute":15},"headers":[{"level":2,"title":"前置准备","slug":"前置准备","link":"#前置准备","children":[]},{"level":2,"title":"验证只读节点不可写","slug":"验证只读节点不可写","link":"#验证只读节点不可写","children":[]},{"level":2,"title":"读写节点停止写入","slug":"读写节点停止写入","link":"#读写节点停止写入","children":[]},{"level":2,"title":"只读节点 Promote","slug":"只读节点-promote","link":"#只读节点-promote","children":[]},{"level":2,"title":"计算集群恢复写入","slug":"计算集群恢复写入","link":"#计算集群恢复写入","children":[]}],"git":{"updatedTime":1703744114000},"filePathRelative":"zh/operation/ro-online-promote.md"}');export{E as comp,T as data}; diff --git a/assets/ro-online-promote.html-DQQAaka2.js b/assets/ro-online-promote.html-DQQAaka2.js new file mode 100644 index 00000000000..780701cf0ea --- /dev/null +++ b/assets/ro-online-promote.html-DQQAaka2.js @@ -0,0 +1,25 @@ +import{_ as r,r as o,o as c,c as d,d as e,a,w as t,b as s,e as i}from"./app-CWFDhr_k.js";const u={},h=a("h1",{id:"只读节点在线-promote",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#只读节点在线-promote"},[a("span",null,"只读节点在线 Promote")])],-1),k=a("p",null,[s("PolarDB for PostgreSQL 是一款存储与计算分离的云原生数据库,所有计算节点共享一份存储,并且对存储的访问具有 "),a("strong",null,"一写多读"),s(" 的限制:所有计算节点可以对存储进行读取,但只有一个计算节点可以对存储进行写入。这种限制会带来一个问题:当读写节点因为宕机或网络故障而不可用时,集群中将没有能够可以写入存储的计算节点,应用业务中的增、删、改,以及 DDL 都将无法运行。")],-1),m=a("p",null,"本文将指导您在 PolarDB for PostgreSQL 计算集群中的读写节点停止服务时,将任意一个只读节点在线提升为读写节点,从而使集群恢复对于共享存储的写入能力。",-1),g={class:"table-of-contents"},_=i(`为方便起见,本示例使用基于本地磁盘的实例来进行演示。拉取如下镜像并启动容器,可以得到一个基于本地磁盘的 HTAP 实例:
docker pull polardb/polardb_pg_local_instance
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg_htap \\
+ --shm-size=512m \\
+ polardb/polardb_pg_local_instance \\
+ bash
+
容器内的 5432
至 5434
端口分别运行着一个读写节点和两个只读节点。两个只读节点与读写节点共享同一份数据,并通过物理复制保持与读写节点的内存状态同步。
首先,连接到读写节点,创建一张表并插入一些数据:
psql -p5432
+
postgres=# CREATE TABLE t (id int);
+CREATE TABLE
+postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
然后连接到只读节点,并同样试图对表插入数据,将会发现无法进行插入操作:
psql -p5433
+
postgres=# INSERT INTO t SELECT generate_series(1,10);
+ERROR: cannot execute INSERT in a read-only transaction
+
此时,关闭读写节点,模拟出读写节点不可用的行为:
$ pg_ctl -D ~/tmp_master_dir_polardb_pg_1100_bld/ stop
+waiting for server to shut down.... done
+server stopped
+
此时,集群中已经没有任何节点可以写入存储。我们需要将一个只读节点提升为读写节点,以恢复对存储的写入能力。
只有当读写节点停止写入后,才可以将只读节点提升为读写节点,否则将会出现集群内两个节点同时写入的情况。当数据库检测到出现多节点写入时,将会导致运行异常。
将运行在 5433
端口的只读节点提升为读写节点:
$ pg_ctl -D ~/tmp_replica_dir_polardb_pg_1100_bld1/ promote
+waiting for server to promote.... done
+server promoted
+
连接到已经完成 promote 的新读写节点上,再次尝试之前的 INSERT
操作:
postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
从上述结果中可以看到,新的读写节点能够成功对存储进行写入。这说明原先的只读节点已经被成功提升为读写节点了。
`,23);function b(p,v){const l=o("ArticleInfo"),n=o("router-link");return c(),d("div",null,[h,e(l,{frontmatter:p.$frontmatter},null,8,["frontmatter"]),k,m,a("nav",g,[a("ul",null,[a("li",null,[e(n,{to:"#前置准备"},{default:t(()=>[s("前置准备")]),_:1})]),a("li",null,[e(n,{to:"#验证只读节点不可写"},{default:t(()=>[s("验证只读节点不可写")]),_:1})]),a("li",null,[e(n,{to:"#读写节点停止写入"},{default:t(()=>[s("读写节点停止写入")]),_:1})]),a("li",null,[e(n,{to:"#只读节点-promote"},{default:t(()=>[s("只读节点 Promote")]),_:1})]),a("li",null,[e(n,{to:"#计算集群恢复写入"},{default:t(()=>[s("计算集群恢复写入")]),_:1})])])]),_])}const E=r(u,[["render",b],["__file","ro-online-promote.html.vue"]]),T=JSON.parse('{"path":"/operation/ro-online-promote.html","title":"只读节点在线 Promote","lang":"en-US","frontmatter":{"author":"棠羽","date":"2022/12/25","minute":15},"headers":[{"level":2,"title":"前置准备","slug":"前置准备","link":"#前置准备","children":[]},{"level":2,"title":"验证只读节点不可写","slug":"验证只读节点不可写","link":"#验证只读节点不可写","children":[]},{"level":2,"title":"读写节点停止写入","slug":"读写节点停止写入","link":"#读写节点停止写入","children":[]},{"level":2,"title":"只读节点 Promote","slug":"只读节点-promote","link":"#只读节点-promote","children":[]},{"level":2,"title":"计算集群恢复写入","slug":"计算集群恢复写入","link":"#计算集群恢复写入","children":[]}],"git":{"updatedTime":1703744114000},"filePathRelative":"operation/ro-online-promote.md"}');export{E as comp,T as data}; diff --git a/assets/rsc-first-cache-y8Pfr0V9.png b/assets/rsc-first-cache-y8Pfr0V9.png new file mode 100644 index 00000000000..5f1d22a4274 Binary files /dev/null and b/assets/rsc-first-cache-y8Pfr0V9.png differ diff --git a/assets/rsc-second-cache-BqIyilzj.png b/assets/rsc-second-cache-BqIyilzj.png new file mode 100644 index 00000000000..0671e62cb8a Binary files /dev/null and b/assets/rsc-second-cache-BqIyilzj.png differ diff --git a/assets/scale-out.html-CDj8dDq_.js b/assets/scale-out.html-CDj8dDq_.js new file mode 100644 index 00000000000..fc0e6166528 --- /dev/null +++ b/assets/scale-out.html-CDj8dDq_.js @@ -0,0 +1,150 @@ +import{_ as c,r as p,o as r,c as i,d as n,a,w as e,e as u,b as t}from"./app-CWFDhr_k.js";const d={},k=a("h1",{id:"计算节点扩缩容",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#计算节点扩缩容"},[a("span",null,"计算节点扩缩容")])],-1),v=a("p",null,"PolarDB for PostgreSQL 是一款存储与计算分离的数据库,所有计算节点共享存储,并可以按需要弹性增加或删减计算节点而无需做任何数据迁移。所有本教程将协助您在共享存储集群上添加或删除计算节点。",-1),b={class:"table-of-contents"},m=u(`首先,在已经搭建完毕的共享存储集群上,初始化并启动第一个计算节点,即读写节点,该节点可以对共享存储进行读写。我们在下面的镜像中提供了已经编译完毕的 PolarDB for PostgreSQL 内核和周边工具的可执行文件:
$ docker pull polardb/polardb_pg_binary
+$ docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
+$ ls ~/tmp_basedir_polardb_pg_1100_bld/bin/
+clusterdb dropuser pg_basebackup pg_dump pg_resetwal pg_test_timing polar-initdb.sh psql
+createdb ecpg pgbench pg_dumpall pg_restore pg_upgrade polar-replica-initdb.sh reindexdb
+createuser initdb pg_config pg_isready pg_rewind pg_verify_checksums polar_tools vacuumdb
+dbatools.sql oid2name pg_controldata pg_receivewal pg_standby pg_waldump postgres vacuumlo
+dropdb pg_archivecleanup pg_ctl pg_recvlogical pg_test_fsync polar_basebackup postmaster
+
使用 lsblk
命令确认存储集群已经能够被当前机器访问到。比如,如下示例中的 nvme1n1
是将要使用的共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
此时,共享存储上没有任何内容。使用容器内的 PFS 工具将共享存储格式化为 PFS 文件系统的格式:
sudo pfs -C disk mkfs nvme1n1
+
格式化完成后,在当前容器内启动 PFS 守护进程,挂载到文件系统上。该守护进程后续将会被计算节点用于访问共享存储:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
使用 initdb
在节点本地存储的 ~/primary
路径上创建本地数据目录。本地数据目录中将会存放节点的配置、审计日志等节点私有的信息:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
使用 PFS 工具,在共享存储上创建一个共享数据目录;使用 polar-initdb.sh
脚本把将会被所有节点共享的数据文件拷贝到共享存储的数据目录中。将会被所有节点共享的文件包含所有的表文件、WAL 日志文件等:
sudo pfs -C disk mkdir /nvme1n1/shared_data
+
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /nvme1n1/shared_data/
+
对读写节点的配置文件 ~/primary/postgresql.conf
进行修改,使数据库以共享模式启动,并能够找到共享存储上的数据目录:
port=5432
+polar_hostid=1
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,允许来自所有地址的客户端以 postgres
用户进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
使用以下命令启动读写节点,并检查节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/primary start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
接下来,在已经有一个读写节点的计算集群中扩容一个新的计算节点。由于 PolarDB for PostgreSQL 是一写多读的架构,所以后续扩容的节点只可以对共享存储进行读取,但无法对共享存储进行写入。只读节点通过与读写节点进行物理复制来保持内存状态的同步。
类似地,在用于部署新计算节点的机器上,拉取镜像并启动带有可执行文件的容器:
docker pull polardb/polardb_pg_binary
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
确保部署只读节点的机器也可以访问到共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
由于此时共享存储已经被读写节点格式化为 PFS 格式了,因此这里无需再次进行格式化。只需要启动 PFS 守护进程完成挂载即可:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \\
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置文件 ~/replica1/postgresql.conf
,配置好只读节点的集群标识和监听端口,以及与读写节点相同的共享存储目录:
port=5432
+polar_hostid=2
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
编辑只读节点的复制配置文件 ~/replica1/recovery.conf
,配置好当前节点的角色(只读),以及从读写节点进行物理复制的连接串和复制槽:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+primary_slot_name='replica1'
+
由于读写节点上暂时还没有名为 replica1
的复制槽,所以需要连接到读写节点上,创建这个复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
完成上述步骤后,启动只读节点并验证:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
连接到读写节点上,创建一个表并插入数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t(id INT); INSERT INTO t SELECT generate_series(1,10);"
+
在只读节点上可以立刻查询到从读写节点上插入的数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ id
+----
+ 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+ 10
+(10 rows)
+
从读写节点上可以看到用于与只读节点进行物理复制的复制槽已经处于活跃状态:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM pg_replication_slots;"
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ replica1 | | physical | | | f | t | 45 | | | 0/4079E8E8 |
+(1 rows)
+
依次类推,使用类似的方法还可以横向扩容更多的只读节点。
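例如,若要按照同样的流程再扩容一个名为 replica2 的只读节点(名称仅为示例),只需在读写节点上预先创建对应的复制槽,其余初始化与配置步骤与 replica1 相同:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_create_physical_replication_slot('replica2');"
+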
集群缩容的步骤较为简单:将只读节点停机即可。
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 stop
+
在只读节点停机后,读写节点上的复制槽将变为非活跃状态。非活跃的复制槽将会阻止 WAL 日志的回收,所以需要及时清理。
在读写节点上执行如下命令,移除名为 replica1
的复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_drop_replication_slot('replica1');"
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
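可以再次查询 pg_replication_slots,确认该复制槽已经被移除:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT count(*) FROM pg_replication_slots;"
+ count
+-------
+ 0
+(1 row)
+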
首先,在已经搭建完毕的共享存储集群上,初始化并启动第一个计算节点,即读写节点,该节点可以对共享存储进行读写。我们在下面的镜像中提供了已经编译完毕的 PolarDB for PostgreSQL 内核和周边工具的可执行文件:
$ docker pull polardb/polardb_pg_binary
+$ docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
+$ ls ~/tmp_basedir_polardb_pg_1100_bld/bin/
+clusterdb dropuser pg_basebackup pg_dump pg_resetwal pg_test_timing polar-initdb.sh psql
+createdb ecpg pgbench pg_dumpall pg_restore pg_upgrade polar-replica-initdb.sh reindexdb
+createuser initdb pg_config pg_isready pg_rewind pg_verify_checksums polar_tools vacuumdb
+dbatools.sql oid2name pg_controldata pg_receivewal pg_standby pg_waldump postgres vacuumlo
+dropdb pg_archivecleanup pg_ctl pg_recvlogical pg_test_fsync polar_basebackup postmaster
+
使用 lsblk
命令确认存储集群已经能够被当前机器访问到。比如,如下示例中的 nvme1n1
是将要使用的共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
此时,共享存储上没有任何内容。使用容器内的 PFS 工具将共享存储格式化为 PFS 文件系统的格式:
sudo pfs -C disk mkfs nvme1n1
+
格式化完成后,在当前容器内启动 PFS 守护进程,挂载到文件系统上。该守护进程后续将会被计算节点用于访问共享存储:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
使用 initdb
在节点本地存储的 ~/primary
路径上创建本地数据目录。本地数据目录中将会存放节点的配置、审计日志等节点私有的信息:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
使用 PFS 工具,在共享存储上创建一个共享数据目录;使用 polar-initdb.sh
脚本把将会被所有节点共享的数据文件拷贝到共享存储的数据目录中。将会被所有节点共享的文件包含所有的表文件、WAL 日志文件等:
sudo pfs -C disk mkdir /nvme1n1/shared_data
+
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \\
+ $HOME/primary/ /nvme1n1/shared_data/
+
对读写节点的配置文件 ~/primary/postgresql.conf
进行修改,使数据库以共享模式启动,并能够找到共享存储上的数据目录:
port=5432
+polar_hostid=1
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,允许来自所有地址的客户端以 postgres
用户进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
使用以下命令启动读写节点,并检查节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/primary start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
接下来,在已经有一个读写节点的计算集群中扩容一个新的计算节点。由于 PolarDB for PostgreSQL 是一写多读的架构,所以后续扩容的节点只可以对共享存储进行读取,但无法对共享存储进行写入。只读节点通过与读写节点进行物理复制来保持内存状态的同步。
类似地,在用于部署新计算节点的机器上,拉取镜像并启动带有可执行文件的容器:
docker pull polardb/polardb_pg_binary
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg \\
+ --shm-size=512m \\
+ polardb/polardb_pg_binary \\
+ bash
+
确保部署只读节点的机器也可以访问到共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
由于此时共享存储已经被读写节点格式化为 PFS 格式了,因此这里无需再次进行格式化。只需要启动 PFS 守护进程完成挂载即可:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \\
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置文件 ~/replica1/postgresql.conf
,配置好只读节点的集群标识和监听端口,以及与读写节点相同的共享存储目录:
port=5432
+polar_hostid=2
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\\t%r\\t%u\\t%m\\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
编辑只读节点的复制配置文件 ~/replica1/recovery.conf
,配置好当前节点的角色(只读),以及从读写节点进行物理复制的连接串和复制槽:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+primary_slot_name='replica1'
+
由于读写节点上暂时还没有名为 replica1
的复制槽,所以需要连接到读写节点上,创建这个复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
完成上述步骤后,启动只读节点并验证:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
连接到读写节点上,创建一个表并插入数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "CREATE TABLE t(id INT); INSERT INTO t SELECT generate_series(1,10);"
+
在只读节点上可以立刻查询到从读写节点上插入的数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM t;"
+ id
+----
+ 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+ 10
+(10 rows)
+
从读写节点上可以看到用于与只读节点进行物理复制的复制槽已经处于活跃状态:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT * FROM pg_replication_slots;"
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ replica1 | | physical | | | f | t | 45 | | | 0/4079E8E8 |
+(1 rows)
+
依次类推,使用类似的方法还可以横向扩容更多的只读节点。
集群缩容的步骤较为简单:将只读节点停机即可。
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 stop
+
在只读节点停机后,读写节点上的复制槽将变为非活跃状态。非活跃的复制槽将会阻止 WAL 日志的回收,所以需要及时清理。
在读写节点上执行如下命令,移除名为 replica1
的复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \\
+ -p 5432 \\
+ -d postgres \\
+ -c "SELECT pg_drop_replication_slot('replica1');"
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
PolarDB for PostgreSQL 针对上述问题,从数据库内部提供了 Shared Server(后文简称 SS)内置连接池功能,采用共享内存 + Session Context + Dispatcher 转发 + Backend Pool 的架构,实现了用户连接与后端进程的解绑。后端进程具备 Native、Shared、Dedicated 三种执行模式,并且在运行时可以根据实时负载和进程污染情况进行动态转换。负载调度算法充分借鉴了 AliSQL 针对社区版 MySQL 线程池缺陷所做的改进,使用 Stall 机制弹性控制 Worker 数量,同时避免用户连接饿死,从根本上解决了高并发或者大量短连接带来的性能与稳定性问题。
在 PostgreSQL 原生的 One-Process-Per-Connection 连接调度策略中,用户发起的连接与后端进程一一绑定:这里不仅是生命周期的绑定,同时还是服务与被服务关系的绑定。
在 Shared Server 内置连接池中,通过提取出会话相关上下文 Session Context,将用户连接和后端进程进行了解绑,并且引入 Dispatcher 来进行代理转发。后端进程以
<user, database, GUCs>
为 key,划分成不同的后端进程池。每个后端进程池都有自己独占的后端进程组,单个后端进程池内的后端进程数量随着负载增高而增多,随着负载降低而减少。在 Shared Server 中,后端进程有三种执行模式。进程执行模式在运行时会根据实时负载和进程污染情况进行动态转换:
polar_ss_dedicated_dbuser_names
黑名单范围内的数据库或用户DECLARE CURSOR
命令CURSOR WITH HOLD
操作Shared Server 主要应用于高并发或大量短连接的业务场景,因此这里使用 TPC-C 进行测试。
使用 104c 512GB 的物理机单机部署,测试 TPC-C 1000 仓下,并发数从 300 增大到 5000 时,不同配置下的分数对比。如下图所示:
从图中可以看出:
使用 104c 512GB 的物理机单机部署,利用 pgbench
分别测试以下配置中,并发短连接数从 1 到 128 的场景下的性能表现:
从图中可以看出,使用连接池后,对于短连接,PgBouncer 和 Shared Server 的性能均有所提升。但 PgBouncer 最高只能提升 14 倍性能,Shared Server 最高可以提升 42 倍性能。
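作为参考,可以用类似下面的 pgbench 命令模拟大量短连接的压测场景(-C 表示每个事务都重新建立连接,并发数与时长仅为示例):
pgbench -i -s 10 postgres
+pgbench -C -c 64 -j 64 -T 60 postgres
+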
业界典型的后置连接池 PgBouncer 具有多种模式。其中 session pooling 模式仅对短连接友好,一般不使用;transaction pooling 模式对短连接、长连接都友好,是默认推荐的模式。与 PgBouncer 相比,Shared Server 的差异化功能特点如下:
Feature | PgBouncer Session Pooling | PgBouncer Transaction Pooling | Shared Server |
---|---|---|---|
Startup parameters | 受限 | 受限 | 支持 |
SSL | 支持 | 支持 | 未来将支持 |
LISTEN/NOTIFY | 支持 | 不支持 | 支持 触发兜底 |
LOAD statement | 支持 | 不支持 | 支持 触发兜底 |
Session-level advisory locks | 支持 | 不支持 | 支持 触发兜底 |
SET/RESET GUC | 支持 | 不支持 | 支持 |
Protocol-level prepared plans | 支持 | 未来将支持 | 支持 |
PREPARE / DEALLOCATE | 支持 | 不支持 | 支持 |
Cached Plan Reset | 支持 | 支持 | 支持 |
WITHOUT HOLD CURSOR | 支持 | 支持 | 支持 |
WITH HOLD CURSOR | 支持 | 不支持 | 未来将支持 触发兜底 |
PRESERVE/DELETE ROWS temp | 支持 | 不支持 | 未来将支持 触发兜底 |
ON COMMIT DROP temp | 支持 | 支持 | 支持 |
注:上表中 Startup parameters 的“受限”是指 PgBouncer 仅能跟踪以下几个启动参数:
client_encoding
datestyle
timezone
standard_conforming_strings
为了适应不同的环境,Shared Server 提供了丰富的参数配置:
Shared Server 的典型配置参数说明如下:
polar_enable_shm_aset
:是否开启全局共享内存,当前默认关闭,重启生效polar_ss_shared_memory_size
:Shared Server 全局共享内存的使用上限,单位 kB,为 0
时表示关闭,默认 1MB。重启生效。polar_ss_dispatcher_count
:Dispatcher 进程的最大个数,默认为 2
,最大为 CPU 核心数,建议配置与 CPU 核心数相同。重启生效。polar_enable_shared_server
:Shared Server 功能是否开启,默认关闭。polar_ss_backend_max_count
:后端进程的最大数量,默认为 -5
,表示为 max_connection
的 1/5;0
/ -1
表示与 max_connection
保持一致。建议设置为 CPU 核心数的 10 倍为佳。polar_ss_backend_idle_timeout
:后端进程的空闲退出时间,默认 3 分钟polar_ss_session_wait_timeout
:后端进程被用满时,用户连接等待被服务的最大时间,默认 60 秒polar_ss_dedicated_dbuser_names
:记录指定数据库/用户使用时进入 Native 模式,默认为空,格式为 d1/_,_/u1,d2/u2
,表示对使用数据库 d1
的任意连接、使用用户 u1
的任意连接、使用数据库 d2
且用户 u2
的任意连接,都会回退到 Native 模式注意
由于 smlar 插件的 %
操作符与 RUM 插件的 %
操作符冲突,因此 smlar 与 RUM 两个插件无法同时创建在同一 schema 中。
float4 smlar(anyarray, anyarray)
计算两个数组的相似度,数组的数据类型需要一致。
float4 smlar(anyarray, anyarray, bool useIntersect)
计算两个自定义复合类型数组的相似度,useIntersect
参数表示是否让仅重叠元素还是全部元素参与运算;复合类型可由以下方式定义:
CREATE TYPE type_name AS (element_name anytype, weight_name FLOAT4);
+
float4 smlar(anyarray a, anyarray b, text formula);
使用参数给定的公式来计算两个数组的相似度,数组的数据类型需要一致;公式中可以使用的预定义变量有:
N.i
:两个数组中的相同元素个数(交集)N.a
:第一个数组中的唯一元素个数N.b
:第二个数组中的唯一元素个数SELECT smlar('{1,4,6}'::int[], '{5,4,6}', 'N.i / sqrt(N.a * N.b)');
+
anyarray % anyarray
该运算符的含义为,当两个数组的相似度超过阈值时返回 TRUE
,否则返回 FALSE
。
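下面是该运算符的一个示意用法,其中 smlar.threshold 为 smlar 用于设定相似度阈值的参数(取值仅为示例);按默认的余弦相似度计算,{1,4,6} 与 {5,4,6} 的相似度约为 0.67,因此结果为真:
SET smlar.threshold = 0.6;
+SELECT '{1,4,6}'::int[] % '{5,4,6}'::int[];
+ ?column?
+----------
+ t
+(1 row)
+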
text[] tsvector2textarray(tsvector)
将 tsvector
类型转换为字符串数组。
anyarray array_unique(anyarray)
对数组进行排序、去重。
float4 inarray(anyarray, anyelement)
如果元素出现在数组中,则返回 1.0
;否则返回 0
。
float4 inarray(anyarray, anyelement, float4, float4)
如果元素出现在数组中,则返回第三个参数;否则返回第四个参数。
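按照上述定义,一个简单的示意调用如下(返回值按定义推断):
SELECT inarray('{1,2,3}'::int[], 2);            -- 元素存在,返回 1
+SELECT inarray('{1,2,3}'::int[], 5, 2.0, 0.5);  -- 元素不存在,返回第四个参数 0.5
+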
smlar.stattable STRING
存储集合范围统计信息的表名,表定义如下:
CREATE TABLE table_name (
+ value data_type UNIQUE,
+ ndoc int4 (or bigint) NOT NULL CHECK (ndoc>0)
+);
+
smlar.tf_method STRING
:计算词频(TF,Term Frequency)的方法,取值如下
n
:简单计数(默认)log
:1 + log(n)
const
:频率等于 1
smlar.idf_plus_one BOOL
:计算逆文本频率指数的方法(IDF,Inverse Document Frequency)的方法,取值如下
FALSE
:log(d / df)
(默认)TRUE
:log(1 + d / df)
CREATE EXTENSION smlar;
+
使用上述的函数计算两个数组的相似度:
SELECT smlar('{3,2}'::int[], '{3,2,1}');
+ smlar
+----------
+ 0.816497
+(1 row)
+
+SELECT smlar('{1,4,6}'::int[], '{5,4,6}', 'N.i / (N.a + N.b)' );
+ smlar
+----------
+ 0.333333
+(1 row)
+
DROP EXTENSION smlar;
+
对 ECS 存储配置的选择,系统盘可以选用任意的存储类型,数据盘和共享盘暂不选择。后续再单独创建一个 ESSD 云盘作为共享盘:
如图所示,在 同一可用区 中建好两台 ECS:
在阿里云 ECS 的管理控制台中,选择 存储与快照 下的 云盘,点击 创建云盘。在与已经建好的 ECS 所在的相同可用区内,选择建立一个 ESSD 云盘,并勾选 多实例挂载。如果您的 ECS 不符合多实例挂载的限制条件,则该选框不会出现。
ESSD 云盘创建完毕后,控制台显示云盘支持多重挂载,状态为 待挂载:
接下来,把这个云盘分别挂载到两台 ECS 上:
挂载完毕后,查看该云盘,将会显示该云盘已经挂载的两台 ECS 实例:
通过 ssh 分别连接到两台 ECS 上,运行 lsblk
命令可以看到:
nvme0n1
是 40GB 的 ECS 系统盘,为 ECS 私有nvme1n1
是 100GB 的 ESSD 云盘,两台 ECS 同时可见$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
对 ECS 存储配置的选择,系统盘可以选用任意的存储类型,数据盘和共享盘暂不选择。后续再单独创建一个 ESSD 云盘作为共享盘:
如图所示,在 同一可用区 中建好两台 ECS:
在阿里云 ECS 的管理控制台中,选择 存储与快照 下的 云盘,点击 创建云盘。在与已经建好的 ECS 所在的相同可用区内,选择建立一个 ESSD 云盘,并勾选 多实例挂载。如果您的 ECS 不符合多实例挂载的限制条件,则该选框不会出现。
ESSD 云盘创建完毕后,控制台显示云盘支持多重挂载,状态为 待挂载:
接下来,把这个云盘分别挂载到两台 ECS 上:
挂载完毕后,查看该云盘,将会显示该云盘已经挂载的两台 ECS 实例:
通过 ssh 分别连接到两台 ECS 上,运行 lsblk
命令可以看到:
nvme0n1
是 40GB 的 ECS 系统盘,为 ECS 私有nvme1n1
是 100GB 的 ESSD 云盘,两台 ECS 同时可见$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
Ceph 是一个统一的分布式存储系统,由于它可以提供较好的性能、可靠性和可扩展性,被广泛应用于存储领域。Ceph 的搭建需要 2 台及以上的物理机/虚拟机实现存储共享与数据备份,本教程以 3 台虚拟机环境为例,介绍基于 Ceph 共享存储的实例构建方法。大体步骤如下:
注意
操作系统版本要求 CentOS 7.5 及以上。以下步骤在 CentOS 7.5 上通过测试。
使用的虚拟机环境如下:
IP hostname
+192.168.1.173 ceph001
+192.168.1.174 ceph002
+192.168.1.175 ceph003
+
提示
本教程使用阿里云镜像站提供的 docker 包。
yum install -y yum-utils device-mapper-persistent-data lvm2
+
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
+yum makecache
+yum install -y docker-ce
+
+systemctl start docker
+systemctl enable docker
+
docker run hello-world
+
ssh-keygen
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph001
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph002
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph003
+
ssh root@ceph003
+
docker pull ceph/daemon
+
docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.173 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon01 \\
+ ceph/daemon mon
+
注意
根据实际网络环境修改 IP、子网掩码位数。
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 1 daemons, quorum ceph001 (age 26m)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
注意
如果遇到 mon is allowing insecure global_id reclaim
的报错,使用以下命令解决。
docker exec mon01 ceph config set mon auth_allow_insecure_global_id_reclaim false
+
docker exec mon01 ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
+docker exec mon01 ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
+
ssh root@ceph002 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph002:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph002:/var/lib/ceph
+ssh root@ceph003 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph003:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph003:/var/lib/ceph
+
docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.174 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon02 \\
+ ceph/daemon mon
+
+docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.175 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon03 \\
+ ceph/daemon mon
+
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 35s)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
注意
从 mon 节点信息中确认在另外两个节点上创建的 mon 是否已经添加进来。
提示
本环境的虚拟机只有一个 /dev/vdb
磁盘可用,因此为每个虚拟机只创建了一个 osd 节点。
docker run --rm --privileged=true --net=host --ipc=host \\
+ --security-opt seccomp=unconfined \\
+ -v /run/lock/lvm:/run/lock/lvm:z \\
+ -v /var/run/udev/:/var/run/udev/:z \\
+ -v /dev:/dev -v /etc/ceph:/etc/ceph:z \\
+ -v /run/lvm/:/run/lvm/ \\
+ -v /var/lib/ceph/:/var/lib/ceph/:z \\
+ -v /var/log/ceph/:/var/log/ceph/:z \\
+ --entrypoint=ceph-volume \\
+ docker.io/ceph/daemon \\
+ --cluster ceph lvm prepare --bluestore --data /dev/vdb
+
注意
以上命令在三个节点都是一样的,只需要根据磁盘名称进行修改调整即可。
docker run -d --privileged=true --net=host --pid=host --ipc=host \\
+ --security-opt seccomp=unconfined \\
+ -v /dev:/dev \\
+ -v /etc/localtime:/etc/localtime:ro \\
+ -v /var/lib/ceph:/var/lib/ceph:z \\
+ -v /etc/ceph:/etc/ceph:z \\
+ -v /var/run/ceph:/var/run/ceph:z \\
+ -v /var/run/udev/:/var/run/udev/ \\
+ -v /var/log/ceph:/var/log/ceph:z \\
+ -v /run/lvm/:/run/lvm/ \\
+ -e CLUSTER=ceph \\
+ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \\
+ -e CONTAINER_IMAGE=docker.io/ceph/daemon \\
+ -e OSD_ID=0 \\
+ --name=ceph-osd-0 \\
+ docker.io/ceph/daemon
+
注意
各个节点需要修改 OSD_ID 与 name 属性,OSD_ID 是从编号 0 递增的,其余节点为 OSD_ID=1、OSD_ID=2。
$ docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_WARN
+ no active mgr
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 44m)
+ mgr: no daemons active
+ osd: 3 osds: 3 up (since 7m), 3 in (since 13m)
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
以下命令均在 ceph001 进行:
docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ --name=ceph-mgr-0 \\
+ ceph/daemon mgr
+
+docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -v /etc/ceph:/etc/ceph \\
+ -e CEPHFS_CREATE=1 \\
+ --name=ceph-mds-0 \\
+ ceph/daemon mds
+
+docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -v /etc/ceph:/etc/ceph \\
+ --name=ceph-rgw-0 \\
+ ceph/daemon rgw
+
查看集群状态:
docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 92m)
+ mgr: ceph001(active, since 25m)
+ mds: 1/1 daemons up
+ osd: 3 osds: 3 up (since 54m), 3 in (since 60m)
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+data:
+ volumes: 1/1 healthy
+ pools: 7 pools, 145 pgs
+ objects: 243 objects, 7.2 KiB
+ usage: 50 MiB used, 2.9 TiB / 2.9 TiB avail
+ pgs: 145 active+clean
+
提示
以下命令均在容器 mon01 中进行。
docker exec -it mon01 bash
+ceph osd pool create rbd_polar
+
rbd create --size 512000 rbd_polar/image02
+rbd info rbd_polar/image02
+
+rbd image 'image02':
+size 500 GiB in 128000 objects
+order 22 (4 MiB objects)
+snapshot_count: 0
+id: 13b97b252c5d
+block_name_prefix: rbd_data.13b97b252c5d
+format: 2
+features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
+op_features:
+flags:
+create_timestamp: Thu Oct 28 06:18:07 2021
+access_timestamp: Thu Oct 28 06:18:07 2021
+modify_timestamp: Thu Oct 28 06:18:07 2021
+
modprobe rbd # 加载内核模块,在主机上执行
+rbd map rbd_polar/image02
+
+rbd: sysfs write failed
+RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten".
+In some cases useful info is found in syslog - try "dmesg | tail".
+rbd: map failed: (6) No such device or address
+
注意
某些特性当前内核不支持,需要关闭后才可以映射成功。操作如下:关闭内核不支持的 rbd 特性,重新映射镜像,并查看映射列表。
rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten
+rbd map rbd_polar/image02
+rbd device list
+
+id pool namespace image snap device
+0 rbd_polar image01 - /dev/rbd0
+1 rbd_polar image02 - /dev/rbd1
+
提示
此处我已经先映射了一个 image01,所以有两条信息。
回到容器外,进行操作。查看系统中的块设备:
lsblk
+
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+vda 253:0 0 500G 0 disk
+└─vda1 253:1 0 500G 0 part /
+vdb 253:16 0 1000G 0 disk
+└─ceph--7eefe77f--c618--4477--a1ed--b4f44520dfc2-osd--block--bced3ff1--42b9--43e1--8f63--e853bce41435
+ 252:0 0 1000G 0 lvm
+rbd0 251:0 0 100G 0 disk
+rbd1 251:16 0 500G 0 disk
+
注意
块设备镜像需要在各个节点都进行映射才可以在本地环境中通过 lsblk
命令查看到,否则不显示。ceph002 与 ceph003 上映射命令与上述一致。
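例如,假设 ceph002 与 ceph003 的主机上也已安装 rbd 命令行工具并同步了 /etc/ceph 配置,可以直接通过 ssh 执行同样的映射操作:
ssh root@ceph002 "modprobe rbd && rbd map rbd_polar/image02"
+ssh root@ceph003 "modprobe rbd && rbd map rbd_polar/image02"
+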
Ceph 是一个统一的分布式存储系统,由于它可以提供较好的性能、可靠性和可扩展性,被广泛应用于存储领域。Ceph 的搭建需要 2 台及以上的物理机/虚拟机实现存储共享与数据备份,本教程以 3 台虚拟机环境为例,介绍基于 Ceph 共享存储的实例构建方法。大体步骤如下:
WARNING
操作系统版本要求 CentOS 7.5 及以上。以下步骤在 CentOS 7.5 上通过测试。
使用的虚拟机环境如下:
IP hostname
+192.168.1.173 ceph001
+192.168.1.174 ceph002
+192.168.1.175 ceph003
+
TIP
本教程使用阿里云镜像站提供的 docker 包。
yum install -y yum-utils device-mapper-persistent-data lvm2
+
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
+yum makecache
+yum install -y docker-ce
+
+systemctl start docker
+systemctl enable docker
+
docker run hello-world
+
ssh-keygen
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph001
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph002
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph003
+
ssh root@ceph003
+
docker pull ceph/daemon
+
docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.173 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon01 \\
+ ceph/daemon mon
+
WARNING
根据实际网络环境修改 IP、子网掩码位数。
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 1 daemons, quorum ceph001 (age 26m)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
WARNING
如果遇到 mon is allowing insecure global_id reclaim
的报错,使用以下命令解决。
docker exec mon01 ceph config set mon auth_allow_insecure_global_id_reclaim false
+
docker exec mon01 ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
+docker exec mon01 ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
+
ssh root@ceph002 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph002:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph002:/var/lib/ceph
+ssh root@ceph003 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph003:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph003:/var/lib/ceph
+
docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.174 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon02 \\
+ ceph/daemon mon
+
+docker run -d \\
+ --net=host \\
+ --privileged=true \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -e MON_IP=192.168.1.175 \\
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \\
+ --security-opt seccomp=unconfined \\
+ --name=mon03 \\
+ ceph/daemon mon
+
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 35s)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
WARNING
从 mon 节点信息中确认在另外两个节点上创建的 mon 是否已经添加进来。
TIP
本环境的虚拟机只有一个 /dev/vdb
磁盘可用,因此为每个虚拟机只创建了一个 osd 节点。
docker run --rm --privileged=true --net=host --ipc=host \\
+ --security-opt seccomp=unconfined \\
+ -v /run/lock/lvm:/run/lock/lvm:z \\
+ -v /var/run/udev/:/var/run/udev/:z \\
+ -v /dev:/dev -v /etc/ceph:/etc/ceph:z \\
+ -v /run/lvm/:/run/lvm/ \\
+ -v /var/lib/ceph/:/var/lib/ceph/:z \\
+ -v /var/log/ceph/:/var/log/ceph/:z \\
+ --entrypoint=ceph-volume \\
+ docker.io/ceph/daemon \\
+ --cluster ceph lvm prepare --bluestore --data /dev/vdb
+
WARNING
以上命令在三个节点都是一样的,只需要根据磁盘名称进行修改调整即可。
docker run -d --privileged=true --net=host --pid=host --ipc=host \\
+ --security-opt seccomp=unconfined \\
+ -v /dev:/dev \\
+ -v /etc/localtime:/etc/localtime:ro \\
+ -v /var/lib/ceph:/var/lib/ceph:z \\
+ -v /etc/ceph:/etc/ceph:z \\
+ -v /var/run/ceph:/var/run/ceph:z \\
+ -v /var/run/udev/:/var/run/udev/ \\
+ -v /var/log/ceph:/var/log/ceph:z \\
+ -v /run/lvm/:/run/lvm/ \\
+ -e CLUSTER=ceph \\
+ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \\
+ -e CONTAINER_IMAGE=docker.io/ceph/daemon \\
+ -e OSD_ID=0 \\
+ --name=ceph-osd-0 \\
+ docker.io/ceph/daemon
+
WARNING
各个节点需要修改 OSD_ID 与 name 属性,OSD_ID 是从编号 0 递增的,其余节点为 OSD_ID=1、OSD_ID=2。
$ docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_WARN
+ no active mgr
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 44m)
+ mgr: no daemons active
+ osd: 3 osds: 3 up (since 7m), 3 in (since 13m)
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
以下命令均在 ceph001 进行:
docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /etc/ceph:/etc/ceph \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ --name=ceph-mgr-0 \\
+ ceph/daemon mgr
+
+docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -v /etc/ceph:/etc/ceph \\
+ -e CEPHFS_CREATE=1 \\
+ --name=ceph-mds-0 \\
+ ceph/daemon mds
+
+docker run -d --net=host \\
+ --privileged=true \\
+ --security-opt seccomp=unconfined \\
+ -v /var/lib/ceph/:/var/lib/ceph/ \\
+ -v /etc/ceph:/etc/ceph \\
+ --name=ceph-rgw-0 \\
+ ceph/daemon rgw
+
查看集群状态:
docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 92m)
+ mgr: ceph001(active, since 25m)
+ mds: 1/1 daemons up
+ osd: 3 osds: 3 up (since 54m), 3 in (since 60m)
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+data:
+ volumes: 1/1 healthy
+ pools: 7 pools, 145 pgs
+ objects: 243 objects, 7.2 KiB
+ usage: 50 MiB used, 2.9 TiB / 2.9 TiB avail
+ pgs: 145 active+clean
+
TIP
以下命令均在容器 mon01 中进行。
docker exec -it mon01 bash
+ceph osd pool create rbd_polar
+
rbd create --size 512000 rbd_polar/image02
+rbd info rbd_polar/image02
+
+rbd image 'image02':
+size 500 GiB in 128000 objects
+order 22 (4 MiB objects)
+snapshot_count: 0
+id: 13b97b252c5d
+block_name_prefix: rbd_data.13b97b252c5d
+format: 2
+features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
+op_features:
+flags:
+create_timestamp: Thu Oct 28 06:18:07 2021
+access_timestamp: Thu Oct 28 06:18:07 2021
+modify_timestamp: Thu Oct 28 06:18:07 2021
+
modprobe rbd # 加载内核模块,在主机上执行
+rbd map rbd_polar/image02
+
+rbd: sysfs write failed
+RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten".
+In some cases useful info is found in syslog - try "dmesg | tail".
+rbd: map failed: (6) No such device or address
+
WARNING
某些特性当前内核不支持,需要关闭后才可以映射成功。操作如下:关闭内核不支持的 rbd 特性,重新映射镜像,并查看映射列表。
rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten
+rbd map rbd_polar/image02
+rbd device list
+
+id pool namespace image snap device
+0 rbd_polar image01 - /dev/rbd0
+1 rbd_polar image02 - /dev/rbd1
+
TIP
此处我已经先映射了一个 image01,所以有两条信息。
回到容器外,进行操作。查看系统中的块设备:
lsblk
+
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+vda 253:0 0 500G 0 disk
+└─vda1 253:1 0 500G 0 part /
+vdb 253:16 0 1000G 0 disk
+└─ceph--7eefe77f--c618--4477--a1ed--b4f44520dfc2-osd--block--bced3ff1--42b9--43e1--8f63--e853bce41435
+ 252:0 0 1000G 0 lvm
+rbd0 251:0 0 100G 0 disk
+rbd1 251:16 0 500G 0 disk
+
WARNING
块设备镜像需要在各个节点都进行映射才可以在本地环境中通过 lsblk
命令查看到,否则不显示。ceph002 与 ceph003 上映射命令与上述一致。
bash -c "$(curl -fsSL https://curveadm.nos-eastchina1.126.net/script/install.sh)"
+source /root/.bash_profile
+
在中控机上编辑主机列表文件:
vim hosts.yaml
+
文件中包含另外五台服务器的 IP 地址和在 Curve 集群内的名称。其中,前三台为 Curve 存储节点,后两台为 PolarDB 计算节点:
global:
+ user: root
+ ssh_port: 22
+ private_key_file: /root/.ssh/id_rsa
+
+hosts:
+ # Curve worker nodes
+ - host: server-host1
+ hostname: 172.16.0.223
+ - host: server-host2
+ hostname: 172.16.0.224
+ - host: server-host3
+ hostname: 172.16.0.225
+ # PolarDB nodes
+ - host: polardb-primary
+ hostname: 172.16.0.226
+ - host: polardb-replica
+ hostname: 172.16.0.227
+
导入主机列表:
curveadm hosts commit hosts.yaml
+
准备磁盘列表,并提前生成一批固定大小并预写过的 chunk 文件。磁盘列表中需要包含每台存储节点上将被格式化的块设备(例如 /dev/vdb)、chunk 文件的挂载路径以及格式化百分比:
vim format.yaml
+
host:
+ - server-host1
+ - server-host2
+ - server-host3
+disk:
+ - /dev/vdb:/data/chunkserver0:90 # device:mount_path:format_percent
+
开始格式化。此时,中控机将在每台存储节点主机上对每个块设备启动一个格式化进程容器。
$ curveadm format -f format.yaml
+Start Format Chunkfile Pool: ⠸
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+
当显示 OK
时,说明这个格式化进程容器已启动,但这并不代表格式化已经完成。格式化是一个比较耗时的过程,将会持续一段时间:
Start Format Chunkfile Pool: [OK]
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+
可以通过以下命令查看格式化进度,目前仍在格式化状态中:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 19/90 Formatting
+server-host2 /dev/vdb /data/chunkserver0 22/90 Formatting
+server-host3 /dev/vdb /data/chunkserver0 22/90 Formatting
+
格式化完成后的输出:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 95/90 Done
+server-host2 /dev/vdb /data/chunkserver0 95/90 Done
+server-host3 /dev/vdb /data/chunkserver0 95/90 Done
+
首先,准备集群配置文件:
vim topology.yaml
+
粘贴如下配置文件:
kind: curvebs
+global:
+ container_image: opencurvedocker/curvebs:v1.2
+ log_dir: ${home}/logs/${service_role}${service_replicas_sequence}
+ data_dir: ${home}/data/${service_role}${service_replicas_sequence}
+ s3.nos_address: 127.0.0.1
+ s3.snapshot_bucket_name: curve
+ s3.ak: minioadmin
+ s3.sk: minioadmin
+ variable:
+ home: /tmp
+ machine1: server-host1
+ machine2: server-host2
+ machine3: server-host3
+
+etcd_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 2380
+ listen.client_port: 2379
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+mds_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 6666
+ listen.dummy_port: 6667
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+chunkserver_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 82${format_replicas_sequence} # 8200,8201,8202
+ data_dir: /data/chunkserver${service_replicas_sequence} # /data/chunkserver0, /data/chunkserver1
+ copysets: 100
+ deploy:
+ - host: ${machine1}
+ replicas: 1
+ - host: ${machine2}
+ replicas: 1
+ - host: ${machine3}
+ replicas: 1
+
+snapshotclone_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 5555
+ listen.dummy_port: 8081
+ listen.proxy_port: 8080
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
根据上述的集群拓扑文件创建集群 my-cluster
:
curveadm cluster add my-cluster -f topology.yaml
+
切换 my-cluster
集群为当前管理集群:
curveadm cluster checkout my-cluster
+
部署集群。如果部署成功,将会输出类似 Cluster 'my-cluster' successfully deployed ^_^.
字样。
$ curveadm deploy --skip snapshotclone
+
+...
+Create Logical Pool: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Start Service: [OK]
+ + host=server-host1 role=snapshotclone containerId=9d3555ba72fa [1/1] [OK]
+ + host=server-host2 role=snapshotclone containerId=e6ae2b23b57e [1/1] [OK]
+ + host=server-host3 role=snapshotclone containerId=f6d3446c7684 [1/1] [OK]
+
+Balance Leader: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Cluster 'my-cluster' successfully deployed ^_^.
+
查看集群状态:
$ curveadm status
+Get Service Status: [OK]
+
+cluster name : my-cluster
+cluster kind : curvebs
+cluster mds addr : 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+cluster mds leader: 172.16.0.225:6666 / d0a94a7afa14
+
+Id Role Host Replicas Container Id Status
+-- ---- ---- -------- ------------ ------
+5567a1c56ab9 etcd server-host1 1/1 f894c5485a26 Up 17 seconds
+68f9f0e6f108 etcd server-host2 1/1 69b09cdbf503 Up 17 seconds
+a678263898cc etcd server-host3 1/1 2ed141800731 Up 17 seconds
+4dcbdd08e2cd mds server-host1 1/1 76d62ff0eb25 Up 17 seconds
+8ef1755b0a10 mds server-host2 1/1 d8d838258a6f Up 17 seconds
+f3599044c6b5 mds server-host3 1/1 d63ae8502856 Up 17 seconds
+9f1d43bc5b03 chunkserver server-host1 1/1 39751a4f49d5 Up 16 seconds
+3fb8fd7b37c1 chunkserver server-host2 1/1 0f55a19ed44b Up 16 seconds
+c4da555952e3 chunkserver server-host3 1/1 9411274d2c97 Up 16 seconds
+
在 Curve 中控机上编辑客户端配置文件:
vim client.yaml
+
注意,这里的 mds.listen.addr
请填写上一步集群状态中输出的 cluster mds addr
:
kind: curvebs
+container_image: opencurvedocker/curvebs:v1.2
+mds.listen.addr: 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+log_dir: /root/curvebs/logs/client
+
bash -c "$(curl -fsSL https://curveadm.nos-eastchina1.126.net/script/install.sh)"
+source /root/.bash_profile
+
在中控机上编辑主机列表文件:
vim hosts.yaml
+
文件中包含另外五台服务器的 IP 地址和在 Curve 集群内的名称。其中,前三台为 Curve 存储节点,后两台为 PolarDB 计算节点:
global:
+ user: root
+ ssh_port: 22
+ private_key_file: /root/.ssh/id_rsa
+
+hosts:
+ # Curve worker nodes
+ - host: server-host1
+ hostname: 172.16.0.223
+ - host: server-host2
+ hostname: 172.16.0.224
+ - host: server-host3
+ hostname: 172.16.0.225
+ # PolarDB nodes
+ - host: polardb-primary
+ hostname: 172.16.0.226
+ - host: polardb-replica
+ hostname: 172.16.0.227
+
导入主机列表:
curveadm hosts commit hosts.yaml
+
准备磁盘列表,并提前生成一批固定大小并预写过的 chunk 文件。磁盘列表中需要包含每台存储节点上将被格式化的块设备(例如 /dev/vdb)、chunk 文件的挂载路径以及格式化百分比:
vim format.yaml
+
host:
+ - server-host1
+ - server-host2
+ - server-host3
+disk:
+ - /dev/vdb:/data/chunkserver0:90 # device:mount_path:format_percent
+
开始格式化。此时,中控机将在每台存储节点主机上对每个块设备启动一个格式化进程容器。
$ curveadm format -f format.yaml
+Start Format Chunkfile Pool: ⠸
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+
当显示 OK
时,说明这个格式化进程容器已启动,但这并不代表格式化已经完成。格式化是一个比较耗时的过程,将会持续一段时间:
Start Format Chunkfile Pool: [OK]
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+
可以通过以下命令查看格式化进度,目前仍在格式化状态中:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 19/90 Formatting
+server-host2 /dev/vdb /data/chunkserver0 22/90 Formatting
+server-host3 /dev/vdb /data/chunkserver0 22/90 Formatting
+
格式化完成后的输出:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 95/90 Done
+server-host2 /dev/vdb /data/chunkserver0 95/90 Done
+server-host3 /dev/vdb /data/chunkserver0 95/90 Done
+
首先,准备集群配置文件:
vim topology.yaml
+
粘贴如下配置文件:
kind: curvebs
+global:
+ container_image: opencurvedocker/curvebs:v1.2
+ log_dir: ${home}/logs/${service_role}${service_replicas_sequence}
+ data_dir: ${home}/data/${service_role}${service_replicas_sequence}
+ s3.nos_address: 127.0.0.1
+ s3.snapshot_bucket_name: curve
+ s3.ak: minioadmin
+ s3.sk: minioadmin
+ variable:
+ home: /tmp
+ machine1: server-host1
+ machine2: server-host2
+ machine3: server-host3
+
+etcd_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 2380
+ listen.client_port: 2379
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+mds_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 6666
+ listen.dummy_port: 6667
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+chunkserver_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 82${format_replicas_sequence} # 8200,8201,8202
+ data_dir: /data/chunkserver${service_replicas_sequence} # /data/chunkserver0, /data/chunkserver1
+ copysets: 100
+ deploy:
+ - host: ${machine1}
+ replicas: 1
+ - host: ${machine2}
+ replicas: 1
+ - host: ${machine3}
+ replicas: 1
+
+snapshotclone_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 5555
+ listen.dummy_port: 8081
+ listen.proxy_port: 8080
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
根据上述的集群拓扑文件创建集群 my-cluster
:
curveadm cluster add my-cluster -f topology.yaml
+
切换 my-cluster
集群为当前管理集群:
curveadm cluster checkout my-cluster
+
部署集群。如果部署成功,将会输出类似 Cluster 'my-cluster' successfully deployed ^_^.
字样。
$ curveadm deploy --skip snapshotclone
+
+...
+Create Logical Pool: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Start Service: [OK]
+ + host=server-host1 role=snapshotclone containerId=9d3555ba72fa [1/1] [OK]
+ + host=server-host2 role=snapshotclone containerId=e6ae2b23b57e [1/1] [OK]
+ + host=server-host3 role=snapshotclone containerId=f6d3446c7684 [1/1] [OK]
+
+Balance Leader: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Cluster 'my-cluster' successfully deployed ^_^.
+
查看集群状态:
$ curveadm status
+Get Service Status: [OK]
+
+cluster name : my-cluster
+cluster kind : curvebs
+cluster mds addr : 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+cluster mds leader: 172.16.0.225:6666 / d0a94a7afa14
+
+Id Role Host Replicas Container Id Status
+-- ---- ---- -------- ------------ ------
+5567a1c56ab9 etcd server-host1 1/1 f894c5485a26 Up 17 seconds
+68f9f0e6f108 etcd server-host2 1/1 69b09cdbf503 Up 17 seconds
+a678263898cc etcd server-host3 1/1 2ed141800731 Up 17 seconds
+4dcbdd08e2cd mds server-host1 1/1 76d62ff0eb25 Up 17 seconds
+8ef1755b0a10 mds server-host2 1/1 d8d838258a6f Up 17 seconds
+f3599044c6b5 mds server-host3 1/1 d63ae8502856 Up 17 seconds
+9f1d43bc5b03 chunkserver server-host1 1/1 39751a4f49d5 Up 16 seconds
+3fb8fd7b37c1 chunkserver server-host2 1/1 0f55a19ed44b Up 16 seconds
+c4da555952e3 chunkserver server-host3 1/1 9411274d2c97 Up 16 seconds
+
在 Curve 中控机上编辑客户端配置文件:
vim client.yaml
+
注意,这里的 mds.listen.addr
请填写上一步集群状态中输出的 cluster mds addr
:
kind: curvebs
+container_image: opencurvedocker/curvebs:v1.2
+mds.listen.addr: 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+log_dir: /root/curvebs/logs/client
+
Network Block Device (NBD) 是一种网络协议,可以在多个主机间共享块存储设备。NBD 被设计为 Client-Server 的架构,因此至少需要两台物理机来部署。
以两台物理机环境为例,本小节介绍基于 NBD 共享存储的实例构建方法大体如下:
注意
以上步骤在 CentOS 7.5 上通过测试。
提示
操作系统内核需要支持 NBD 内核模块,如果操作系统当前不支持该内核模块,则需要自己通过对应内核版本进行编译和加载 NBD 内核模块。
rpm -ihv kernel-3.10.0-862.el7.src.rpm
+cd ~/rpmbuild/SOURCES
+tar Jxvf linux-3.10.0-862.el7.tar.xz -C /usr/src/kernels/
+cd /usr/src/kernels/linux-3.10.0-862.el7/
+
NBD 驱动源码路径位于:drivers/block/nbd.c
。接下来编译操作系统内核依赖和组件:
cp ../$(uname -r)/Module.symvers ./
+make menuconfig # Device Driver -> Block devices -> Set 'M' On 'Network block device support'
+make prepare && make modules_prepare && make scripts
+make CONFIG_BLK_DEV_NBD=m M=drivers/block
+
检查是否正常生成驱动:
modinfo drivers/block/nbd.ko
+
拷贝、生成依赖并安装驱动:
cp drivers/block/nbd.ko /lib/modules/$(uname -r)/kernel/drivers/block
+depmod -a
+modprobe nbd # 或者 modprobe -f nbd 可以忽略模块版本检查
+
检查是否安装成功:
# 检查已安装内核模块
+lsmod | grep nbd
+# 如果NBD驱动已经安装,则会生成/dev/nbd*设备(例如:/dev/nbd0、/dev/nbd1等)
+ls /dev/nbd*
+
yum install nbd
+
拉起 NBD 服务端,按照同步方式(sync/flush=true
)配置,在指定端口(例如 1921
)上监听对指定块设备(例如 /dev/vdb
)的访问。
nbd-server -C /root/nbd.conf
+
配置文件 /root/nbd.conf
的内容举例如下:
[generic]
+ #user = nbd
+ #group = nbd
+ listenaddr = 0.0.0.0
+ port = 1921
+[export1]
+ exportname = /dev/vdb
+ readonly = false
+ multifile = false
+ copyonwrite = false
+ flush = true
+ fua = true
+ sync = true
+
NBD 驱动安装成功后会看到 /dev/nbd*
设备, 根据服务端的配置把远程块设备映射为本地的某个 NBD 设备即可:
nbd-client x.x.x.x 1921 -N export1 /dev/nbd0
+# x.x.x.x是NBD服务端主机的IP地址
+
Network Block Device (NBD) 是一种网络协议,可以在多个主机间共享块存储设备。NBD 被设计为 Client-Server 的架构,因此至少需要两台物理机来部署。
以两台物理机环境为例,本小节介绍基于 NBD 共享存储的实例构建方法大体如下:
WARNING
以上步骤在 CentOS 7.5 上通过测试。
TIP
操作系统内核需要支持 NBD 内核模块,如果操作系统当前不支持该内核模块,则需要自己通过对应内核版本进行编译和加载 NBD 内核模块。
rpm -ihv kernel-3.10.0-862.el7.src.rpm
+cd ~/rpmbuild/SOURCES
+tar Jxvf linux-3.10.0-862.el7.tar.xz -C /usr/src/kernels/
+cd /usr/src/kernels/linux-3.10.0-862.el7/
+
NBD 驱动源码路径位于:drivers/block/nbd.c
。接下来编译操作系统内核依赖和组件:
cp ../$(uname -r)/Module.symvers ./
+make menuconfig # Device Driver -> Block devices -> Set 'M' On 'Network block device support'
+make prepare && make modules_prepare && make scripts
+make CONFIG_BLK_DEV_NBD=m M=drivers/block
+
检查是否正常生成驱动:
modinfo drivers/block/nbd.ko
+
拷贝、生成依赖并安装驱动:
cp drivers/block/nbd.ko /lib/modules/$(uname -r)/kernel/drivers/block
+depmod -a
+modprobe nbd # 或者 modprobe -f nbd 可以忽略模块版本检查
+
检查是否安装成功:
# 检查已安装内核模块
+lsmod | grep nbd
+# 如果NBD驱动已经安装,则会生成/dev/nbd*设备(例如:/dev/nbd0、/dev/nbd1等)
+ls /dev/nbd*
+
yum install nbd
+
拉起 NBD 服务端,按照同步方式(sync/flush=true
)配置,在指定端口(例如 1921
)上监听对指定块设备(例如 /dev/vdb
)的访问。
nbd-server -C /root/nbd.conf
+
配置文件 /root/nbd.conf
的内容举例如下:
[generic]
+ #user = nbd
+ #group = nbd
+ listenaddr = 0.0.0.0
+ port = 1921
+[export1]
+ exportname = /dev/vdb
+ readonly = false
+ multifile = false
+ copyonwrite = false
+ flush = true
+ fua = true
+ sync = true
+
NBD 驱动安装成功后会看到 /dev/nbd*
设备, 根据服务端的配置把远程块设备映射为本地的某个 NBD 设备即可:
nbd-client x.x.x.x 1921 -N export1 /dev/nbd0
+# x.x.x.x是NBD服务端主机的IP地址
+
在国际上,一些相关行业也有监管数据安全标准,例如:
为了满足保护用户数据安全的需求,我们在 PolarDB 中实现了 TDE 功能。
数据加密密钥通过 pg_strong_random
随机生成,仅保存在内存中,作为实际加密数据的密码。对于用户来说:
在 initdb
时增加 --cluster-passphrase-command 'xxx' -e aes-256
参数就会生成支持 TDE 的集群。其中,cluster-passphrase-command
参数指定获取密钥加密密钥的命令,-e
代表数据加密采用的加密算法,目前支持 AES-128、AES-256 和 SM4。
initdb --cluster-passphrase-command 'echo \\"abc123\\"' -e aes-256
+
在数据库运行过程中,只有超级用户可以执行如下命令得到对应的加密算法:
show polar_data_encryption_cipher;
+
在数据库运行过程中,可以创建插件 polar_tde_utils
来修改 TDE 的加密密钥或者查询 TDE 的一些执行状态,目前支持:
修改加密密钥:函数参数为获取加密密钥的方法(该方法需要保证只有在宿主机所在网络内才可以获得密钥)。该函数执行后,kmgr
文件内容随之变更,待下次重启后生效。
select polar_tde_update_kmgr_file('echo \\"abc123456\\"');
+
得到当前的 kmgr 的 info 信息。
select * from polar_tde_kmgr_info_view();
+
检查 kmgr 文件的完整性。
select polar_tde_check_kmgr_file();
+
执行 pg_filedump
解析加密后的页面,用于一些极端情况下,做页面解析。
pg_filedump -e aes-128 -C 'echo \\"abc123\\"' -K global/kmgr base/14543/2608
+
采用 2 层密钥结构,即密钥加密密钥和表数据加密密钥。表数据加密密钥是实际对数据库数据进行加密的密钥。密钥加密密钥则是对表数据加密密钥进行进一步加密的密钥。两层密钥的详细介绍如下:
运行 polar_cluster_passphrase_command
参数中的命令并计算 SHA-512,得到 64 字节的数据,其中前 32 字节为顶层加密密钥 KEK,后 32 字节为 HMACK。KEK 和 HMACK 每次都通过外部获取(例如 KMS),测试时可以直接用 echo passphrase
得到。ENCMDEK 和 KEK_HMAC 需要保存在共享存储上,用来保证下次启动时 RW 和 RO 都可以读取该文件,获取真正的加密密钥。其数据结构如下:
typedef struct KmgrFileData
+{
+ /* version for kmgr file */
+ uint32 kmgr_version_no;
+
+ /* Are data pages encrypted? Zero if encryption is disabled */
+ uint32 data_encryption_cipher;
+
+ /*
+ * Wrapped Key information for data encryption.
+ */
+ WrappedEncKeyWithHmac tde_rdek;
+ WrappedEncKeyWithHmac tde_wdek;
+
+ /* CRC of all above ... MUST BE LAST! */
+ pg_crc32c crc;
+} KmgrFileData;
+
该文件当前是在 initdb
的时候产生,这样就可以保证 Standby 通过 pg_basebackup
获取到。
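下面用一段仅作示意的 C 代码说明上文"对口令命令的输出计算 SHA-512 并拆分出 KEK 与 HMACK"这一步(基于 OpenSSL;口令内容与变量命名均为假设,与内核实际实现无关):

```c
/* 仅作示意:把口令命令的输出经 SHA-512 拆分为 KEK 和 HMACK */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int
main(void)
{
    const char    *passphrase = "abc123";          /* 假设为外部命令的输出 */
    unsigned char  digest[SHA512_DIGEST_LENGTH];   /* 64 字节 */
    unsigned char  kek[32], hmack[32];

    SHA512((const unsigned char *) passphrase, strlen(passphrase), digest);
    memcpy(kek, digest, 32);        /* 前 32 字节:顶层加密密钥 KEK */
    memcpy(hmack, digest + 32, 32); /* 后 32 字节:HMACK */

    printf("KEK/HMACK derived: %d + %d bytes\n", 32, 32);
    return 0;
}
```

编译时链接 -lcrypto 即可运行;真实环境中口令应来自 polar_cluster_passphrase_command 指定的外部命令,而非硬编码。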
在实例运行状态下,TDE 相关的控制信息保存在进程的内存中,结构如下:
static keydata_t keyEncKey[TDE_KEK_SIZE];       /* 密钥加密密钥 KEK */
+static keydata_t relEncKey[TDE_MAX_DEK_SIZE];   /* 表数据加密密钥 */
+static keydata_t walEncKey[TDE_MAX_DEK_SIZE];   /* WAL 加密密钥 */
+char *polar_cluster_passphrase_command = NULL;  /* 获取密钥的外部命令 */
+extern int data_encryption_cipher;              /* 数据加密算法 */
+
数据库初始化时需要生成密钥,过程如下:
1. 运行 polar_cluster_passphrase_command 得到 64 字节的 KEK + HMACK,其中 KEK 长度为 32 字节,HMACK 长度为 32 字节。
2. 调用 OpenSSL(https://www.openssl.org/)生成 MDEK。
3. 使用 MDEK 调用 OpenSSL 的 HKDF 算法生成 TDEK。
4. 使用 MDEK 调用 OpenSSL 的 HKDF 算法生成 WDEK。
5. 使用 KEK 加密 MDEK 生成 ENCMDEK。
6. ENCMDEK 和 HMACK 经过 HMAC 算法生成 KEK_HMAC,用于还原密钥时的校验信息。
7. 将 ENCMDEK 和 KEK_HMAC 补充其他 KmgrFileData 结构信息写入 global/kmgr 文件。
当数据库崩溃或重新启动等情况下,需要通过有限的密文信息解密出对应的密钥,其过程如下:
1. 从 global/kmgr 文件获取 ENCMDEK 和 KEK_HMAC。
2. 运行 polar_cluster_passphrase_command 得到 64 字节的 KEK + HMACK。
密钥更换的过程可以理解为先用旧的 KEK 还原密钥,然后再用新的 KEK 生成新的 kmgr 文件。其过程如下:
1. 从 global/kmgr 文件获取 ENCMDEK 和 KEK_HMAC。
2. 运行 polar_cluster_passphrase_command 得到 64 字节的 KEK + HMACK。
3. 运行 polar_cluster_passphrase_command 得到 64 字节新的 new_KEK + new_HMACK。
4. 将新的 ENCMDEK 和 KEK_HMAC 补充其他 KmgrFileData 结构信息写入 global/kmgr 文件。
我们期望对所有的用户数据按照 Page 的粒度进行加密,加密方法采用 AES-128/256 加密算法(产品化默认使用 AES-256),并以 (page LSN, page number)
作为每个数据页加密的 IV。IV 是可以保证相同内容加密出不同结果的初始向量。
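在继续介绍 Page 级加密之前,先用一个仅作示意的 OpenSSL(1.1.1 及以上)示例帮助理解上文密钥生成步骤中"HKDF 派生 TDEK/WDEK"与"HMAC 计算 KEK_HMAC"两步;其中 "TDEK"/"WDEK" 等 info 字符串、占位数据以及被省略的"使用 KEK 加密 MDEK 得到 ENCMDEK"细节均为假设,与 PolarDB 内核实际实现无关:

```c
/* 仅作示意:MDEK -> HKDF -> TDEK/WDEK,以及 ENCMDEK + HMACK -> KEK_HMAC */
#include <stdio.h>
#include <string.h>
#include <openssl/rand.h>
#include <openssl/evp.h>
#include <openssl/kdf.h>
#include <openssl/hmac.h>

static int
hkdf_derive(const unsigned char *mdek, size_t mdek_len,
            const char *info, unsigned char *out, size_t out_len)
{
    EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
    int ok = pctx != NULL
        && EVP_PKEY_derive_init(pctx) > 0
        && EVP_PKEY_CTX_set_hkdf_md(pctx, EVP_sha256()) > 0
        && EVP_PKEY_CTX_set1_hkdf_key(pctx, mdek, mdek_len) > 0
        && EVP_PKEY_CTX_add1_hkdf_info(pctx, (const unsigned char *) info,
                                       strlen(info)) > 0
        && EVP_PKEY_derive(pctx, out, &out_len) > 0;
    EVP_PKEY_CTX_free(pctx);
    return ok;
}

int
main(void)
{
    unsigned char mdek[32], tdek[32], wdek[32];
    unsigned char hmack[32];                 /* 假设为 SHA-512 输出的后 32 字节 */
    unsigned char encmdek[32];               /* 占位:实际应为 KEK 加密 MDEK 的结果 */
    unsigned char kek_hmac[EVP_MAX_MD_SIZE];
    unsigned int  kek_hmac_len = 0;

    RAND_bytes(mdek, sizeof(mdek));          /* 随机生成 MDEK */
    memset(hmack, 0x22, sizeof(hmack));      /* 占位数据 */
    memset(encmdek, 0x33, sizeof(encmdek));  /* 占位数据 */

    /* 由 MDEK 派生表数据加密密钥与 WAL 加密密钥 */
    hkdf_derive(mdek, sizeof(mdek), "TDEK", tdek, sizeof(tdek));
    hkdf_derive(mdek, sizeof(mdek), "WDEK", wdek, sizeof(wdek));

    /* ENCMDEK 和 HMACK 经过 HMAC 计算得到 KEK_HMAC,用于还原密钥时的校验 */
    HMAC(EVP_sha256(), hmack, sizeof(hmack),
         encmdek, sizeof(encmdek), kek_hmac, &kek_hmac_len);

    printf("KEK_HMAC length: %u bytes\n", kek_hmac_len);
    return 0;
}
```

编译时链接 -lcrypto。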
每个 Page 的头部数据结构如下:
typedef struct PageHeaderData
+{
+ /* XXX LSN is member of *any* block, not only page-organized ones */
+ PageXLogRecPtr pd_lsn; /* LSN: next byte after last byte of xlog
+ * record for last change to this page */
+ uint16 pd_checksum; /* checksum */
+ uint16 pd_flags; /* flag bits, see below */
+ LocationIndex pd_lower; /* offset to start of free space */
+ LocationIndex pd_upper; /* offset to end of free space */
+ LocationIndex pd_special; /* offset to start of special space */
+ uint16 pd_pagesize_version;
+ TransactionId pd_prune_xid; /* oldest prunable XID, or zero if none */
+ ItemIdData pd_linp[FLEXIBLE_ARRAY_MEMBER]; /* line pointer array */
+} PageHeaderData;
+
在上述结构中:
pd_lsn 不能加密:因为解密时需要使用 IV 来解密。
pd_flags 增加是否加密的标志位 0x8000,并且不加密:这样可以兼容明文 page 的读取,为增量实例打开 TDE 提供条件。
pd_checksum 不加密:这样可以在密文条件下判断 Page 的校验和。
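结合上文"(page LSN, page number) 作为 IV"与 pd_flags 的 0x8000 加密标志位,可以用如下示意代码理解 IV 的构造方式;IV 的实际字节布局与标志位宏名以内核实现为准,这里的 PD_PAGE_ENCRYPTED 等命名均为假设:

```c
/* 仅作示意:用 (page LSN, page number) 拼出 16 字节的 AES IV */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PD_PAGE_ENCRYPTED 0x8000    /* 假设的 pd_flags 加密标志位,对应上文的 0x8000 */

static void
build_page_iv(uint64_t page_lsn, uint32_t block_num, uint8_t iv[16])
{
    memset(iv, 0, 16);
    memcpy(iv, &page_lsn, sizeof(page_lsn));        /* 前 8 字节:page LSN */
    memcpy(iv + 8, &block_num, sizeof(block_num));  /* 随后 4 字节:页号 */
    /* 剩余 4 字节保留为 0,仅为示意 */
}

int
main(void)
{
    uint8_t  iv[16];
    uint16_t pd_flags = PD_PAGE_ENCRYPTED;          /* 页面已加密 */

    build_page_iv(UINT64_C(0x16B3F9D2A0), 42, iv);
    printf("encrypted=%d, iv[0]=0x%02x\n",
           (pd_flags & PD_PAGE_ENCRYPTED) != 0, iv[0]);
    return 0;
}
```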
base/
global/
pg_tblspc/
pg_replslot/
pg_stat/
pg_stat_tmp/
当前对于按照 Page 组织的数据,将按照 Page 进行加密。Page 落盘之前必定需要计算校验和,即使校验和相关参数关闭,也会调用校验和相关的函数 PageSetChecksumCopy
、PageSetChecksumInplace
。所以,只需要在计算校验和之前加密 Page,即可保证用户数据在存储上是被加密的。
存储上的 Page 读入内存之前必定经过 checksum 校验,即使相关参数关闭,也会调用校验函数 PageIsVerified
。所以,只需要在校验和校验之后再解密,即可保证内存中的数据已被解密。
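这种"写入时先加密再算校验和、读取时先校验再解密"的顺序可以用下面这段独立的示意程序表达;其中的加密与校验和都是占位实现,仅用于说明先后顺序,真实路径对应内核中的 PageSetChecksumCopy / PageSetChecksumInplace 与 PageIsVerified:

```c
/* 仅作示意:演示写入路径与读取路径中加密与校验和的先后顺序 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 8192

/* 占位"加密":真实实现为 AES,对页面内容(除部分头部字段)加密 */
static void toy_encrypt(uint8_t *page) { for (int i = 0; i < PAGE_SIZE; i++) page[i] ^= 0x5A; }
static void toy_decrypt(uint8_t *page) { toy_encrypt(page); }

/* 占位校验和:真实实现为 PostgreSQL 的页面 checksum */
static uint32_t
toy_checksum(const uint8_t *page)
{
    uint32_t sum = 0;
    for (int i = 0; i < PAGE_SIZE; i++) sum += page[i];
    return sum;
}

int
main(void)
{
    uint8_t page[PAGE_SIZE];
    memset(page, 0xAB, sizeof(page));

    /* 写路径:先加密,再计算校验和(对应 PageSetChecksum*),然后落盘 */
    toy_encrypt(page);
    uint32_t on_disk_checksum = toy_checksum(page);

    /* 读路径:先校验(对应 PageIsVerified),通过后再解密 */
    if (toy_checksum(page) == on_disk_checksum)
        toy_decrypt(page);

    printf("page[0] after round trip: 0x%02x\n", page[0]);
    return 0;
}
```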
$ git clone https://github.com/pgsql-io/benchmarksql.git
+$ cd benchmarksql
+$ mvn
+
编译出的工具位于如下目录中:
$ cd target/run
+
在编译完毕的工具目录下,将会存在面向不同数据库产品的示例配置:
$ ls | grep sample
+sample.firebird.properties
+sample.mariadb.properties
+sample.oracle.properties
+sample.postgresql.properties
+sample.transact-sql.properties
+
配置项包含的配置类型有:
使用 runDatabaseBuild.sh
脚本,以配置文件作为参数,产生和导入测试数据:
./runDatabaseBuild.sh sample.postgresql.properties
+
通常,在正式测试前会进行一次数据预热:
./runBenchmark.sh sample.postgresql.properties
+
预热完毕后,再次运行同样的命令进行正式测试:
./runBenchmark.sh sample.postgresql.properties
+
_____ latency (seconds) _____
+ TransType count | mix % | mean max 90th% | rbk% errors
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+| NEW_ORDER | 635 | 44.593 | 0.006 | 0.012 | 0.008 | 1.102 | 0 |
+| PAYMENT | 628 | 44.101 | 0.001 | 0.006 | 0.002 | 0.000 | 0 |
+| ORDER_STATUS | 58 | 4.073 | 0.093 | 0.168 | 0.132 | 0.000 | 0 |
+| STOCK_LEVEL | 52 | 3.652 | 0.035 | 0.044 | 0.041 | 0.000 | 0 |
+| DELIVERY | 51 | 3.581 | 0.000 | 0.001 | 0.001 | 0.000 | 0 |
+| DELIVERY_BG | 51 | 0.000 | 0.018 | 0.023 | 0.020 | 0.000 | 0 |
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+
+Overall NOPM: 635 (98.76% of the theoretical maximum)
+Overall TPM: 1,424
+
另外也有 CSV 形式的结果被保存,从输出日志中可以找到结果存放目录。
`,14);function D(c,R){const i=p("ArticleInfo"),t=p("router-link"),o=p("ExternalLinkIcon"),l=p("RouteLink");return u(),h("div",null,[m,n(i,{frontmatter:c.$frontmatter},null,8,["frontmatter"]),b,a("nav",_,[a("ul",null,[a("li",null,[n(t,{to:"#背景"},{default:e(()=>[s("背景")]),_:1})]),a("li",null,[n(t,{to:"#测试步骤"},{default:e(()=>[s("测试步骤")]),_:1}),a("ul",null,[a("li",null,[n(t,{to:"#部署-polardb-pg"},{default:e(()=>[s("部署 PolarDB-PG")]),_:1})]),a("li",null,[n(t,{to:"#安装测试工具-benchmarksql"},{default:e(()=>[s("安装测试工具 BenchmarkSQL")]),_:1})]),a("li",null,[n(t,{to:"#tpc-c-配置"},{default:e(()=>[s("TPC-C 配置")]),_:1})]),a("li",null,[n(t,{to:"#导入数据"},{default:e(()=>[s("导入数据")]),_:1})]),a("li",null,[n(t,{to:"#预热数据"},{default:e(()=>[s("预热数据")]),_:1})]),a("li",null,[n(t,{to:"#正式测试"},{default:e(()=>[s("正式测试")]),_:1})]),a("li",null,[n(t,{to:"#查看结果"},{default:e(()=>[s("查看结果")]),_:1})])])])])]),g,a("p",null,[s("TPC 是一系列事务处理和数据库基准测试的规范。其中 "),a("a",f,[s("TPC-C"),n(o)]),s(" (Transaction Processing Performance Council) 是针对 OLTP 的基准测试模型。TPC-C 测试模型给基准测试提供了一种统一的测试标准,可以大体观察出数据库服务稳定性、性能以及系统性能等一系列问题。对数据库展开 TPC-C 基准性能测试,一方面可以衡量数据库的性能,另一方面可以衡量采用不同硬件软件系统的性价比,是被业内广泛应用并关注的一种测试模型。")]),v,P,x,a("ul",null,[a("li",null,[n(l,{to:"/zh/deploying/quick-start.html"},{default:e(()=>[s("快速部署")]),_:1})]),a("li",null,[n(l,{to:"/zh/deploying/deploy.html"},{default:e(()=>[s("进阶部署")]),_:1})])]),C,a("p",null,[a("a",T,[s("BenchmarkSQL"),n(o)]),s(" 依赖 Java 运行环境与 Maven 包管理工具,需要预先安装。拉取 BenchmarkSQL 工具源码并进入目录后,通过 "),B,s(" 编译工程:")]),q,a("p",null,[s("其中,"),L,s(" 包含 PostgreSQL 系列数据库的模板参数,可以基于这个模板来修改并自定义配置。参考 BenchmarkSQL 工具的 "),a("a",S,[s("文档"),n(o)]),s(" 可以查看关于配置项的详细描述。")]),E])}const N=k(d,[["render",D],["__file","tpcc-test.html.vue"]]),O=JSON.parse('{"path":"/zh/operation/tpcc-test.html","title":"TPC-C 测试","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2023/04/11","minute":15},"headers":[{"level":2,"title":"背景","slug":"背景","link":"#背景","children":[]},{"level":2,"title":"测试步骤","slug":"测试步骤","link":"#测试步骤","children":[{"level":3,"title":"部署 PolarDB-PG","slug":"部署-polardb-pg","link":"#部署-polardb-pg","children":[]},{"level":3,"title":"安装测试工具 BenchmarkSQL","slug":"安装测试工具-benchmarksql","link":"#安装测试工具-benchmarksql","children":[]},{"level":3,"title":"TPC-C 配置","slug":"tpc-c-配置","link":"#tpc-c-配置","children":[]},{"level":3,"title":"导入数据","slug":"导入数据","link":"#导入数据","children":[]},{"level":3,"title":"预热数据","slug":"预热数据","link":"#预热数据","children":[]},{"level":3,"title":"正式测试","slug":"正式测试","link":"#正式测试","children":[]},{"level":3,"title":"查看结果","slug":"查看结果","link":"#查看结果","children":[]}]}],"git":{"updatedTime":1681281377000},"filePathRelative":"zh/operation/tpcc-test.md"}');export{N as comp,O as data}; diff --git a/assets/tpcc-test.html-DfkjzL4L.js b/assets/tpcc-test.html-DfkjzL4L.js new file mode 100644 index 00000000000..41d56c44a28 --- /dev/null +++ b/assets/tpcc-test.html-DfkjzL4L.js @@ -0,0 +1,27 @@ +import{_ as k,r as p,o as u,c as d,d as n,a,w as e,b as s,e as r}from"./app-CWFDhr_k.js";const h={},m=a("h1",{id:"tpc-c-测试",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#tpc-c-测试"},[a("span",null,"TPC-C 测试")])],-1),b=a("p",null,"本文将引导您对 PolarDB for PostgreSQL 进行 TPC-C 测试。",-1),_={class:"table-of-contents"},g=a("h2",{id:"背景",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#背景"},[a("span",null,"背景")])],-1),f={href:"https://www.tpc.org/tpcc/",target:"_blank",rel:"noopener 
noreferrer"},v=a("h2",{id:"测试步骤",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#测试步骤"},[a("span",null,"测试步骤")])],-1),P=a("h3",{id:"部署-polardb-pg",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#部署-polardb-pg"},[a("span",null,"部署 PolarDB-PG")])],-1),x=a("p",null,"参考如下教程部署 PolarDB for PostgreSQL:",-1),C=a("h3",{id:"安装测试工具-benchmarksql",tabindex:"-1"},[a("a",{class:"header-anchor",href:"#安装测试工具-benchmarksql"},[a("span",null,"安装测试工具 BenchmarkSQL")])],-1),T={href:"https://github.com/pgsql-io/benchmarksql",target:"_blank",rel:"noopener noreferrer"},B=a("code",null,"mvn",-1),q=r(`$ git clone https://github.com/pgsql-io/benchmarksql.git
+$ cd benchmarksql
+$ mvn
+
编译出的工具位于如下目录中:
$ cd target/run
+
在编译完毕的工具目录下,将会存在面向不同数据库产品的示例配置:
$ ls | grep sample
+sample.firebird.properties
+sample.mariadb.properties
+sample.oracle.properties
+sample.postgresql.properties
+sample.transact-sql.properties
+
配置项包含的配置类型有:
使用 runDatabaseBuild.sh
脚本,以配置文件作为参数,产生和导入测试数据:
./runDatabaseBuild.sh sample.postgresql.properties
+
通常,在正式测试前会进行一次数据预热:
./runBenchmark.sh sample.postgresql.properties
+
预热完毕后,再次运行同样的命令进行正式测试:
./runBenchmark.sh sample.postgresql.properties
+
_____ latency (seconds) _____
+ TransType count | mix % | mean max 90th% | rbk% errors
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+| NEW_ORDER | 635 | 44.593 | 0.006 | 0.012 | 0.008 | 1.102 | 0 |
+| PAYMENT | 628 | 44.101 | 0.001 | 0.006 | 0.002 | 0.000 | 0 |
+| ORDER_STATUS | 58 | 4.073 | 0.093 | 0.168 | 0.132 | 0.000 | 0 |
+| STOCK_LEVEL | 52 | 3.652 | 0.035 | 0.044 | 0.041 | 0.000 | 0 |
+| DELIVERY | 51 | 3.581 | 0.000 | 0.001 | 0.001 | 0.000 | 0 |
+| DELIVERY_BG | 51 | 0.000 | 0.018 | 0.023 | 0.020 | 0.000 | 0 |
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+
+Overall NOPM: 635 (98.76% of the theoretical maximum)
+Overall TPM: 1,424
+
另外也有 CSV 形式的结果被保存,从输出日志中可以找到结果存放目录。
`,14);function D(c,R){const i=p("ArticleInfo"),t=p("router-link"),o=p("ExternalLinkIcon"),l=p("RouteLink");return u(),d("div",null,[m,n(i,{frontmatter:c.$frontmatter},null,8,["frontmatter"]),b,a("nav",_,[a("ul",null,[a("li",null,[n(t,{to:"#背景"},{default:e(()=>[s("背景")]),_:1})]),a("li",null,[n(t,{to:"#测试步骤"},{default:e(()=>[s("测试步骤")]),_:1}),a("ul",null,[a("li",null,[n(t,{to:"#部署-polardb-pg"},{default:e(()=>[s("部署 PolarDB-PG")]),_:1})]),a("li",null,[n(t,{to:"#安装测试工具-benchmarksql"},{default:e(()=>[s("安装测试工具 BenchmarkSQL")]),_:1})]),a("li",null,[n(t,{to:"#tpc-c-配置"},{default:e(()=>[s("TPC-C 配置")]),_:1})]),a("li",null,[n(t,{to:"#导入数据"},{default:e(()=>[s("导入数据")]),_:1})]),a("li",null,[n(t,{to:"#预热数据"},{default:e(()=>[s("预热数据")]),_:1})]),a("li",null,[n(t,{to:"#正式测试"},{default:e(()=>[s("正式测试")]),_:1})]),a("li",null,[n(t,{to:"#查看结果"},{default:e(()=>[s("查看结果")]),_:1})])])])])]),g,a("p",null,[s("TPC 是一系列事务处理和数据库基准测试的规范。其中 "),a("a",f,[s("TPC-C"),n(o)]),s(" (Transaction Processing Performance Council) 是针对 OLTP 的基准测试模型。TPC-C 测试模型给基准测试提供了一种统一的测试标准,可以大体观察出数据库服务稳定性、性能以及系统性能等一系列问题。对数据库展开 TPC-C 基准性能测试,一方面可以衡量数据库的性能,另一方面可以衡量采用不同硬件软件系统的性价比,是被业内广泛应用并关注的一种测试模型。")]),v,P,x,a("ul",null,[a("li",null,[n(l,{to:"/deploying/quick-start.html"},{default:e(()=>[s("快速部署")]),_:1})]),a("li",null,[n(l,{to:"/deploying/deploy.html"},{default:e(()=>[s("进阶部署")]),_:1})])]),C,a("p",null,[a("a",T,[s("BenchmarkSQL"),n(o)]),s(" 依赖 Java 运行环境与 Maven 包管理工具,需要预先安装。拉取 BenchmarkSQL 工具源码并进入目录后,通过 "),B,s(" 编译工程:")]),q,a("p",null,[s("其中,"),L,s(" 包含 PostgreSQL 系列数据库的模板参数,可以基于这个模板来修改并自定义配置。参考 BenchmarkSQL 工具的 "),a("a",S,[s("文档"),n(o)]),s(" 可以查看关于配置项的详细描述。")]),E])}const O=k(h,[["render",D],["__file","tpcc-test.html.vue"]]),N=JSON.parse('{"path":"/operation/tpcc-test.html","title":"TPC-C 测试","lang":"en-US","frontmatter":{"author":"棠羽","date":"2023/04/11","minute":15},"headers":[{"level":2,"title":"背景","slug":"背景","link":"#背景","children":[]},{"level":2,"title":"测试步骤","slug":"测试步骤","link":"#测试步骤","children":[{"level":3,"title":"部署 PolarDB-PG","slug":"部署-polardb-pg","link":"#部署-polardb-pg","children":[]},{"level":3,"title":"安装测试工具 BenchmarkSQL","slug":"安装测试工具-benchmarksql","link":"#安装测试工具-benchmarksql","children":[]},{"level":3,"title":"TPC-C 配置","slug":"tpc-c-配置","link":"#tpc-c-配置","children":[]},{"level":3,"title":"导入数据","slug":"导入数据","link":"#导入数据","children":[]},{"level":3,"title":"预热数据","slug":"预热数据","link":"#预热数据","children":[]},{"level":3,"title":"正式测试","slug":"正式测试","link":"#正式测试","children":[]},{"level":3,"title":"查看结果","slug":"查看结果","link":"#查看结果","children":[]}]}],"git":{"updatedTime":1681281377000},"filePathRelative":"operation/tpcc-test.md"}');export{O as comp,N as data}; diff --git a/assets/tpch-test.html-BhGRYhPX.js b/assets/tpch-test.html-BhGRYhPX.js new file mode 100644 index 00000000000..f58820c4abb --- /dev/null +++ b/assets/tpch-test.html-BhGRYhPX.js @@ -0,0 +1,202 @@ +import{_ as u,r as e,o as i,c as d,d as a,a as s,w as p,b as n,e as t}from"./app-CWFDhr_k.js";const m={},b=s("h1",{id:"tpc-h-测试",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#tpc-h-测试"},[s("span",null,"TPC-H 测试")])],-1),w=s("p",null,"本文将引导您对 PolarDB for PostgreSQL 进行 TPC-H 测试。",-1),y={class:"table-of-contents"},h=s("h2",{id:"背景",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#背景"},[s("span",null,"背景")])],-1),g={href:"https://www.tpc.org/tpch/default5.asp",target:"_blank",rel:"noopener noreferrer"},_=t(`使用 Docker 快速拉起一个基于本地存储的 PolarDB for PostgreSQL 集群:
docker pull polardb/polardb_pg_local_instance
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg_htap \\
+ --shm-size=512m \\
+ polardb/polardb_pg_local_instance \\
+ bash
+
$ git clone https://github.com/ApsaraDB/tpch-dbgen.git
+$ cd tpch-dbgen
+$ ./build.sh --help
+
+ 1) Use default configuration to build
+ ./build.sh
+ 2) Use limited configuration to build
+ ./build.sh --user=postgres --db=postgres --host=localhost --port=5432 --scale=1
+ 3) Run the test case
+ ./build.sh --run
+ 4) Run the target test case
+ ./build.sh --run=3. run the 3rd case.
+ 5) Run the target test case with option
+ ./build.sh --run --option="set polar_enable_px = on;"
+ 6) Clean the test data. This step will drop the database or tables, remove csv
+ and tbl files
+ ./build.sh --clean
+ 7) Quick build TPC-H with 100MB scale of data
+ ./build.sh --scale=0.1
+
通过设置不同的参数,可以定制化地创建不同规模的 TPC-H 数据集。build.sh
脚本中各个参数的含义如下:
--user
:数据库用户名--db
:数据库名--host
:数据库主机地址--port
:数据库服务端口--run
:执行所有 TPC-H 查询,或执行某条特定的 TPC-H 查询--option
:额外指定 GUC 参数--scale
:生成 TPC-H 数据集的规模,单位为 GB该脚本没有提供输入数据库密码的参数,需要通过设置 PGPASSWORD
为数据库用户的数据库密码来完成认证:
export PGPASSWORD=<your password>
+
生成并导入 100MB 规模的 TPC-H 数据:
./build.sh --scale=0.1
+
生成并导入 1GB 规模的 TPC-H 数据:
./build.sh
+
以 TPC-H 的 Q18 为例,执行 PostgreSQL 的单机并行查询,并观测查询速度。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
-- 打开计时
+\\timing on
+
+-- 设置单机并行度
+SET max_parallel_workers_per_gather = 2;
+
+-- 查看 Q18 的执行计划
+\\i finals/18.explain.sql
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Sort (cost=3450834.75..3450835.42 rows=268 width=81)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate
+ -> GroupAggregate (cost=3450817.91..3450823.94 rows=268 width=81)
+ Group Key: customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=3450817.91..3450818.58 rows=268 width=67)
+ Sort Key: customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=1501454.20..3450807.10 rows=268 width=67)
+ Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
+ -> Seq Scan on lineitem (cost=0.00..1724402.52 rows=59986052 width=22)
+ -> Hash (cost=1501453.37..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.85..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.43..1501084.65 rows=67 width=34)
+ -> Finalize GroupAggregate (cost=1500464.99..1500517.66 rows=67 width=4)
+ Group Key: lineitem_1.l_orderkey
+ Filter: (sum(lineitem_1.l_quantity) > '314'::numeric)
+ -> Gather Merge (cost=1500464.99..1500511.66 rows=400 width=36)
+ Workers Planned: 2
+ -> Sort (cost=1499464.97..1499465.47 rows=200 width=36)
+ Sort Key: lineitem_1.l_orderkey
+ -> Partial HashAggregate (cost=1499454.82..1499457.32 rows=200 width=36)
+ Group Key: lineitem_1.l_orderkey
+ -> Parallel Seq Scan on lineitem lineitem_1 (cost=0.00..1374483.88 rows=24994188 width=22)
+ -> Index Scan using orders_pkey on orders (cost=0.43..8.45 rows=1 width=30)
+ Index Cond: (o_orderkey = lineitem_1.l_orderkey)
+ -> Index Scan using customer_pkey on customer (cost=0.43..5.50 rows=1 width=23)
+ Index Cond: (c_custkey = orders.o_custkey)
+(26 rows)
+
+Time: 3.965 ms
+
+-- 执行 Q18
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 80150.449 ms (01:20.150)
+
PolarDB for PostgreSQL 提供了弹性跨机并行查询(ePQ)的能力,非常适合进行分析型查询。下面的步骤将引导您可以在一台主机上使用 ePQ 并行执行 TPC-H 查询。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
首先需要对 TPC-H 产生的八张表设置 ePQ 的最大查询并行度:
ALTER TABLE nation SET (px_workers = 100);
+ALTER TABLE region SET (px_workers = 100);
+ALTER TABLE supplier SET (px_workers = 100);
+ALTER TABLE part SET (px_workers = 100);
+ALTER TABLE partsupp SET (px_workers = 100);
+ALTER TABLE customer SET (px_workers = 100);
+ALTER TABLE orders SET (px_workers = 100);
+ALTER TABLE lineitem SET (px_workers = 100);
+
以 Q18 为例,执行查询:
-- 打开计时
+\\timing on
+
+-- 打开 ePQ 功能的开关
+SET polar_enable_px = ON;
+-- 设置每个节点的 ePQ 并行度为 1
+SET polar_px_dop_per_node = 1;
+
+-- 查看 Q18 的执行计划
+\\i finals/18.explain.sql
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..257526.21 rows=59986052 width=47)
+ Merge Key: orders.o_totalprice, orders.o_orderdate
+ -> GroupAggregate (cost=0.00..243457.68 rows=29993026 width=47)
+ Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=0.00..241257.18 rows=29993026 width=47)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=0.00..42729.99 rows=29993026 width=47)
+ Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
+ -> PX Hash 2:2 (slice2; segments: 2) (cost=0.00..15959.71 rows=7500000 width=39)
+ Hash Key: orders.o_orderkey
+ -> Hash Join (cost=0.00..15044.19 rows=7500000 width=39)
+ Hash Cond: (orders.o_custkey = customer.c_custkey)
+ -> PX Hash 2:2 (slice3; segments: 2) (cost=0.00..11561.51 rows=7500000 width=20)
+ Hash Key: orders.o_custkey
+ -> Hash Semi Join (cost=0.00..11092.01 rows=7500000 width=20)
+ Hash Cond: (orders.o_orderkey = lineitem.l_orderkey)
+ -> Partial Seq Scan on orders (cost=0.00..1132.25 rows=7500000 width=20)
+ -> Hash (cost=7760.84..7760.84 rows=400 width=4)
+ -> PX Broadcast 2:2 (slice4; segments: 2) (cost=0.00..7760.84 rows=400 width=4)
+ -> Result (cost=0.00..7760.80 rows=200 width=4)
+ Filter: ((sum(lineitem.l_quantity)) > '314'::numeric)
+ -> Finalize HashAggregate (cost=0.00..7760.78 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> PX Hash 2:2 (slice5; segments: 2) (cost=0.00..7760.72 rows=500 width=12)
+ Hash Key: lineitem.l_orderkey
+ -> Partial HashAggregate (cost=0.00..7760.70 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> Partial Seq Scan on lineitem (cost=0.00..3350.82 rows=29993026 width=12)
+ -> Hash (cost=597.51..597.51 rows=749979 width=23)
+ -> PX Hash 2:2 (slice6; segments: 2) (cost=0.00..597.51 rows=749979 width=23)
+ Hash Key: customer.c_custkey
+ -> Partial Seq Scan on customer (cost=0.00..511.44 rows=749979 width=23)
+ -> Hash (cost=5146.80..5146.80 rows=29993026 width=12)
+ -> PX Hash 2:2 (slice7; segments: 2) (cost=0.00..5146.80 rows=29993026 width=12)
+ Hash Key: lineitem_1.l_orderkey
+ -> Partial Seq Scan on lineitem lineitem_1 (cost=0.00..3350.82 rows=29993026 width=12)
+ Optimizer: PolarDB PX Optimizer
+(37 rows)
+
+Time: 216.672 ms
+
+-- 执行 Q18
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 59113.965 ms (00:59.114)
+
可以看到比 PostgreSQL 的单机并行执行的时间略短。加大 ePQ 功能的节点并行度,查询性能将会有更明显的提升:
SET polar_px_dop_per_node = 2;
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 42400.500 ms (00:42.401)
+
+SET polar_px_dop_per_node = 4;
+\\i finals/18.sql
+
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 19892.603 ms (00:19.893)
+
+SET polar_px_dop_per_node = 8;
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 10944.402 ms (00:10.944)
+
使用 ePQ 执行 Q17 和 Q18 时可能会出现 OOM。需要设置以下参数防止用尽内存:
SET polar_px_optimizer_enable_hashagg = 0; +
在上面的例子中,出于简单考虑,PolarDB for PostgreSQL 的多个计算节点被部署在同一台主机上。在这种场景下使用 ePQ 时,由于所有的计算节点都使用了同一台主机的 CPU、内存、I/O 带宽,因此本质上是基于单台主机的并行执行。实际上,PolarDB for PostgreSQL 的计算节点可以被部署在能够共享存储节点的多台机器上。此时使用 ePQ 功能将进行真正的跨机器分布式并行查询,能够充分利用多台机器上的计算资源。
`,27),T=t(``,1);function v(r,S){const k=e("ArticleInfo"),o=e("router-link"),c=e("ExternalLinkIcon"),l=e("RouteLink");return i(),d("div",null,[b,a(k,{frontmatter:r.$frontmatter},null,8,["frontmatter"]),w,s("nav",y,[s("ul",null,[s("li",null,[a(o,{to:"#背景"},{default:p(()=>[n("背景")]),_:1})]),s("li",null,[a(o,{to:"#测试准备"},{default:p(()=>[n("测试准备")]),_:1}),s("ul",null,[s("li",null,[a(o,{to:"#部署-polardb-pg"},{default:p(()=>[n("部署 PolarDB-PG")]),_:1})]),s("li",null,[a(o,{to:"#生成-tpc-h-测试数据集"},{default:p(()=>[n("生成 TPC-H 测试数据集")]),_:1})])])]),s("li",null,[a(o,{to:"#执行-postgresql-单机并行执行"},{default:p(()=>[n("执行 PostgreSQL 单机并行执行")]),_:1})]),s("li",null,[a(o,{to:"#执行-epq-单机并行执行"},{default:p(()=>[n("执行 ePQ 单机并行执行")]),_:1})]),s("li",null,[a(o,{to:"#执行-epq-跨机并行执行"},{default:p(()=>[n("执行 ePQ 跨机并行执行")]),_:1})])])]),h,s("p",null,[s("a",g,[n("TPC-H"),a(c)]),n(" 是专门测试数据库分析型场景性能的数据集。")]),_,s("p",null,[n("或者参考 "),a(l,{to:"/zh/deploying/deploy.html"},{default:p(()=>[n("进阶部署")]),_:1}),n(" 部署一个基于共享存储的 PolarDB for PostgreSQL 集群。")]),P,s("p",null,[n("通过 "),s("a",f,[n("tpch-dbgen"),a(c)]),n(" 工具来生成测试数据。")]),q,s("p",null,[n("参考 "),a(l,{to:"/zh/deploying/deploy.html"},{default:p(()=>[n("进阶部署")]),_:1}),n(" 可以搭建起不同形态的 PolarDB for PostgreSQL 集群。集群搭建成功后,使用 ePQ 的方式与单机 ePQ 完全相同。")]),T])}const x=u(m,[["render",v],["__file","tpch-test.html.vue"]]),H=JSON.parse('{"path":"/zh/operation/tpch-test.html","title":"TPC-H 测试","lang":"zh-CN","frontmatter":{"author":"棠羽","date":"2023/04/12","minute":20},"headers":[{"level":2,"title":"背景","slug":"背景","link":"#背景","children":[]},{"level":2,"title":"测试准备","slug":"测试准备","link":"#测试准备","children":[{"level":3,"title":"部署 PolarDB-PG","slug":"部署-polardb-pg","link":"#部署-polardb-pg","children":[]},{"level":3,"title":"生成 TPC-H 测试数据集","slug":"生成-tpc-h-测试数据集","link":"#生成-tpc-h-测试数据集","children":[]}]},{"level":2,"title":"执行 PostgreSQL 单机并行执行","slug":"执行-postgresql-单机并行执行","link":"#执行-postgresql-单机并行执行","children":[]},{"level":2,"title":"执行 ePQ 单机并行执行","slug":"执行-epq-单机并行执行","link":"#执行-epq-单机并行执行","children":[]},{"level":2,"title":"执行 ePQ 跨机并行执行","slug":"执行-epq-跨机并行执行","link":"#执行-epq-跨机并行执行","children":[]}],"git":{"updatedTime":1703744114000},"filePathRelative":"zh/operation/tpch-test.md"}');export{x as comp,H as data}; diff --git a/assets/tpch-test.html-Bq8CMIlT.js b/assets/tpch-test.html-Bq8CMIlT.js new file mode 100644 index 00000000000..003cf5e6e09 --- /dev/null +++ b/assets/tpch-test.html-Bq8CMIlT.js @@ -0,0 +1,202 @@ +import{_ as u,r as e,o as i,c as d,d as a,a as s,w as p,b as n,e as t}from"./app-CWFDhr_k.js";const m={},b=s("h1",{id:"tpc-h-测试",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#tpc-h-测试"},[s("span",null,"TPC-H 测试")])],-1),w=s("p",null,"本文将引导您对 PolarDB for PostgreSQL 进行 TPC-H 测试。",-1),y={class:"table-of-contents"},h=s("h2",{id:"背景",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#背景"},[s("span",null,"背景")])],-1),g={href:"https://www.tpc.org/tpch/default5.asp",target:"_blank",rel:"noopener noreferrer"},_=t(`如果遇到如下错误:
psql:queries/q01.analyze.sq1:24: WARNING: interconnect may encountered a network error, please check your network +DETAIL: Failed to send packet (seq 1) to 192.168.1.8:57871 (pid 17766 cid 0) after 100 retries. +
可以尝试统一修改每台机器的 MTU 为 9000:
ifconfig <网卡名> mtu 9000 +
使用 Docker 快速拉起一个基于本地存储的 PolarDB for PostgreSQL 集群:
docker pull polardb/polardb_pg_local_instance
+docker run -it \\
+ --cap-add=SYS_PTRACE \\
+ --privileged=true \\
+ --name polardb_pg_htap \\
+ --shm-size=512m \\
+ polardb/polardb_pg_local_instance \\
+ bash
+
$ git clone https://github.com/ApsaraDB/tpch-dbgen.git
+$ cd tpch-dbgen
+$ ./build.sh --help
+
+ 1) Use default configuration to build
+ ./build.sh
+ 2) Use limited configuration to build
+ ./build.sh --user=postgres --db=postgres --host=localhost --port=5432 --scale=1
+ 3) Run the test case
+ ./build.sh --run
+ 4) Run the target test case
+ ./build.sh --run=3. run the 3rd case.
+ 5) Run the target test case with option
+ ./build.sh --run --option="set polar_enable_px = on;"
+ 6) Clean the test data. This step will drop the database or tables, remove csv
+ and tbl files
+ ./build.sh --clean
+ 7) Quick build TPC-H with 100MB scale of data
+ ./build.sh --scale=0.1
+
通过设置不同的参数,可以定制化地创建不同规模的 TPC-H 数据集。build.sh
脚本中各个参数的含义如下:
--user
:数据库用户名--db
:数据库名--host
:数据库主机地址--port
:数据库服务端口--run
:执行所有 TPC-H 查询,或执行某条特定的 TPC-H 查询--option
:额外指定 GUC 参数--scale
:生成 TPC-H 数据集的规模,单位为 GB该脚本没有提供输入数据库密码的参数,需要通过设置 PGPASSWORD
为数据库用户的数据库密码来完成认证:
export PGPASSWORD=<your password>
+
生成并导入 100MB 规模的 TPC-H 数据:
./build.sh --scale=0.1
+
生成并导入 1GB 规模的 TPC-H 数据:
./build.sh
+
以 TPC-H 的 Q18 为例,执行 PostgreSQL 的单机并行查询,并观测查询速度。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
-- 打开计时
+\\timing on
+
+-- 设置单机并行度
+SET max_parallel_workers_per_gather = 2;
+
+-- 查看 Q18 的执行计划
+\\i finals/18.explain.sql
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Sort (cost=3450834.75..3450835.42 rows=268 width=81)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate
+ -> GroupAggregate (cost=3450817.91..3450823.94 rows=268 width=81)
+ Group Key: customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=3450817.91..3450818.58 rows=268 width=67)
+ Sort Key: customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=1501454.20..3450807.10 rows=268 width=67)
+ Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
+ -> Seq Scan on lineitem (cost=0.00..1724402.52 rows=59986052 width=22)
+ -> Hash (cost=1501453.37..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.85..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.43..1501084.65 rows=67 width=34)
+ -> Finalize GroupAggregate (cost=1500464.99..1500517.66 rows=67 width=4)
+ Group Key: lineitem_1.l_orderkey
+ Filter: (sum(lineitem_1.l_quantity) > '314'::numeric)
+ -> Gather Merge (cost=1500464.99..1500511.66 rows=400 width=36)
+ Workers Planned: 2
+ -> Sort (cost=1499464.97..1499465.47 rows=200 width=36)
+ Sort Key: lineitem_1.l_orderkey
+ -> Partial HashAggregate (cost=1499454.82..1499457.32 rows=200 width=36)
+ Group Key: lineitem_1.l_orderkey
+ -> Parallel Seq Scan on lineitem lineitem_1 (cost=0.00..1374483.88 rows=24994188 width=22)
+ -> Index Scan using orders_pkey on orders (cost=0.43..8.45 rows=1 width=30)
+ Index Cond: (o_orderkey = lineitem_1.l_orderkey)
+ -> Index Scan using customer_pkey on customer (cost=0.43..5.50 rows=1 width=23)
+ Index Cond: (c_custkey = orders.o_custkey)
+(26 rows)
+
+Time: 3.965 ms
+
+-- 执行 Q18
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 80150.449 ms (01:20.150)
+
PolarDB for PostgreSQL 提供了弹性跨机并行查询(ePQ)的能力,非常适合进行分析型查询。下面的步骤将引导您可以在一台主机上使用 ePQ 并行执行 TPC-H 查询。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
首先需要对 TPC-H 产生的八张表设置 ePQ 的最大查询并行度:
ALTER TABLE nation SET (px_workers = 100);
+ALTER TABLE region SET (px_workers = 100);
+ALTER TABLE supplier SET (px_workers = 100);
+ALTER TABLE part SET (px_workers = 100);
+ALTER TABLE partsupp SET (px_workers = 100);
+ALTER TABLE customer SET (px_workers = 100);
+ALTER TABLE orders SET (px_workers = 100);
+ALTER TABLE lineitem SET (px_workers = 100);
+
以 Q18 为例,执行查询:
-- 打开计时
+\\timing on
+
+-- 打开 ePQ 功能的开关
+SET polar_enable_px = ON;
+-- 设置每个节点的 ePQ 并行度为 1
+SET polar_px_dop_per_node = 1;
+
+-- 查看 Q18 的执行计划
+\\i finals/18.explain.sql
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..257526.21 rows=59986052 width=47)
+ Merge Key: orders.o_totalprice, orders.o_orderdate
+ -> GroupAggregate (cost=0.00..243457.68 rows=29993026 width=47)
+ Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=0.00..241257.18 rows=29993026 width=47)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=0.00..42729.99 rows=29993026 width=47)
+ Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
+ -> PX Hash 2:2 (slice2; segments: 2) (cost=0.00..15959.71 rows=7500000 width=39)
+ Hash Key: orders.o_orderkey
+ -> Hash Join (cost=0.00..15044.19 rows=7500000 width=39)
+ Hash Cond: (orders.o_custkey = customer.c_custkey)
+ -> PX Hash 2:2 (slice3; segments: 2) (cost=0.00..11561.51 rows=7500000 width=20)
+ Hash Key: orders.o_custkey
+ -> Hash Semi Join (cost=0.00..11092.01 rows=7500000 width=20)
+ Hash Cond: (orders.o_orderkey = lineitem.l_orderkey)
+ -> Partial Seq Scan on orders (cost=0.00..1132.25 rows=7500000 width=20)
+ -> Hash (cost=7760.84..7760.84 rows=400 width=4)
+ -> PX Broadcast 2:2 (slice4; segments: 2) (cost=0.00..7760.84 rows=400 width=4)
+ -> Result (cost=0.00..7760.80 rows=200 width=4)
+ Filter: ((sum(lineitem.l_quantity)) > '314'::numeric)
+ -> Finalize HashAggregate (cost=0.00..7760.78 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> PX Hash 2:2 (slice5; segments: 2) (cost=0.00..7760.72 rows=500 width=12)
+ Hash Key: lineitem.l_orderkey
+ -> Partial HashAggregate (cost=0.00..7760.70 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> Partial Seq Scan on lineitem (cost=0.00..3350.82 rows=29993026 width=12)
+ -> Hash (cost=597.51..597.51 rows=749979 width=23)
+ -> PX Hash 2:2 (slice6; segments: 2) (cost=0.00..597.51 rows=749979 width=23)
+ Hash Key: customer.c_custkey
+ -> Partial Seq Scan on customer (cost=0.00..511.44 rows=749979 width=23)
+ -> Hash (cost=5146.80..5146.80 rows=29993026 width=12)
+ -> PX Hash 2:2 (slice7; segments: 2) (cost=0.00..5146.80 rows=29993026 width=12)
+ Hash Key: lineitem_1.l_orderkey
+ -> Partial Seq Scan on lineitem lineitem_1 (cost=0.00..3350.82 rows=29993026 width=12)
+ Optimizer: PolarDB PX Optimizer
+(37 rows)
+
+Time: 216.672 ms
+
+-- 执行 Q18
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 59113.965 ms (00:59.114)
+
可以看到比 PostgreSQL 的单机并行执行的时间略短。加大 ePQ 功能的节点并行度,查询性能将会有更明显的提升:
SET polar_px_dop_per_node = 2;
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 42400.500 ms (00:42.401)
+
+SET polar_px_dop_per_node = 4;
+\\i finals/18.sql
+
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 19892.603 ms (00:19.893)
+
+SET polar_px_dop_per_node = 8;
+\\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 10944.402 ms (00:10.944)
+
使用 ePQ 执行 Q17 和 Q18 时可能会出现 OOM。需要设置以下参数防止用尽内存:
SET polar_px_optimizer_enable_hashagg = 0; +
在上面的例子中,出于简单考虑,PolarDB for PostgreSQL 的多个计算节点被部署在同一台主机上。在这种场景下使用 ePQ 时,由于所有的计算节点都使用了同一台主机的 CPU、内存、I/O 带宽,因此本质上是基于单台主机的并行执行。实际上,PolarDB for PostgreSQL 的计算节点可以被部署在能够共享存储节点的多台机器上。此时使用 ePQ 功能将进行真正的跨机器分布式并行查询,能够充分利用多台机器上的计算资源。
`,27),T=t(``,1);function S(r,v){const k=e("ArticleInfo"),o=e("router-link"),c=e("ExternalLinkIcon"),l=e("RouteLink");return i(),d("div",null,[b,a(k,{frontmatter:r.$frontmatter},null,8,["frontmatter"]),w,s("nav",y,[s("ul",null,[s("li",null,[a(o,{to:"#背景"},{default:p(()=>[n("背景")]),_:1})]),s("li",null,[a(o,{to:"#测试准备"},{default:p(()=>[n("测试准备")]),_:1}),s("ul",null,[s("li",null,[a(o,{to:"#部署-polardb-pg"},{default:p(()=>[n("部署 PolarDB-PG")]),_:1})]),s("li",null,[a(o,{to:"#生成-tpc-h-测试数据集"},{default:p(()=>[n("生成 TPC-H 测试数据集")]),_:1})])])]),s("li",null,[a(o,{to:"#执行-postgresql-单机并行执行"},{default:p(()=>[n("执行 PostgreSQL 单机并行执行")]),_:1})]),s("li",null,[a(o,{to:"#执行-epq-单机并行执行"},{default:p(()=>[n("执行 ePQ 单机并行执行")]),_:1})]),s("li",null,[a(o,{to:"#执行-epq-跨机并行执行"},{default:p(()=>[n("执行 ePQ 跨机并行执行")]),_:1})])])]),h,s("p",null,[s("a",g,[n("TPC-H"),a(c)]),n(" 是专门测试数据库分析型场景性能的数据集。")]),_,s("p",null,[n("或者参考 "),a(l,{to:"/deploying/deploy.html"},{default:p(()=>[n("进阶部署")]),_:1}),n(" 部署一个基于共享存储的 PolarDB for PostgreSQL 集群。")]),P,s("p",null,[n("通过 "),s("a",f,[n("tpch-dbgen"),a(c)]),n(" 工具来生成测试数据。")]),q,s("p",null,[n("参考 "),a(l,{to:"/deploying/deploy.html"},{default:p(()=>[n("进阶部署")]),_:1}),n(" 可以搭建起不同形态的 PolarDB for PostgreSQL 集群。集群搭建成功后,使用 ePQ 的方式与单机 ePQ 完全相同。")]),T])}const x=u(m,[["render",S],["__file","tpch-test.html.vue"]]),H=JSON.parse('{"path":"/operation/tpch-test.html","title":"TPC-H 测试","lang":"en-US","frontmatter":{"author":"棠羽","date":"2023/04/12","minute":20},"headers":[{"level":2,"title":"背景","slug":"背景","link":"#背景","children":[]},{"level":2,"title":"测试准备","slug":"测试准备","link":"#测试准备","children":[{"level":3,"title":"部署 PolarDB-PG","slug":"部署-polardb-pg","link":"#部署-polardb-pg","children":[]},{"level":3,"title":"生成 TPC-H 测试数据集","slug":"生成-tpc-h-测试数据集","link":"#生成-tpc-h-测试数据集","children":[]}]},{"level":2,"title":"执行 PostgreSQL 单机并行执行","slug":"执行-postgresql-单机并行执行","link":"#执行-postgresql-单机并行执行","children":[]},{"level":2,"title":"执行 ePQ 单机并行执行","slug":"执行-epq-单机并行执行","link":"#执行-epq-单机并行执行","children":[]},{"level":2,"title":"执行 ePQ 跨机并行执行","slug":"执行-epq-跨机并行执行","link":"#执行-epq-跨机并行执行","children":[]}],"git":{"updatedTime":1703744114000},"filePathRelative":"operation/tpch-test.md"}');export{x as comp,H as data}; diff --git a/assets/trouble-issuing.html-BN2dUtga.js b/assets/trouble-issuing.html-BN2dUtga.js new file mode 100644 index 00000000000..4386025b5a9 --- /dev/null +++ b/assets/trouble-issuing.html-BN2dUtga.js @@ -0,0 +1,31 @@ +import{_ as s,o as n,c as a,e as t}from"./app-CWFDhr_k.js";const o={},p=t(`如果遇到如下错误:
psql:queries/q01.analyze.sq1:24: WARNING: interconnect may encountered a network error, please check your network +DETAIL: Failed to send packet (seq 1) to 192.168.1.8:57871 (pid 17766 cid 0) after 100 retries. +
可以尝试统一修改每台机器的 MTU 为 9000:
ifconfig <网卡名> mtu 9000 +
如果在运行 PolarDB for PostgreSQL 的过程中出现问题,请提供数据库的日志与机器的配置信息以方便定位问题。
通过 polar_stat_env
插件可以轻松获取数据库所在主机的硬件配置:
=> CREATE EXTENSION polar_stat_env;
+=> SELECT polar_stat_env();
+ polar_stat_env
+--------------------------------------------------------------------
+ { +
+ "CPU": { +
+ "Architecture": "x86_64", +
+ "Model Name": "Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz",+
+ "CPU Cores": "8", +
+ "CPU Thread Per Cores": "2", +
+ "CPU Core Per Socket": "4", +
+ "NUMA Nodes": "1", +
+ "L1d cache": "192 KiB (4 instances)", +
+ "L1i cache": "128 KiB (4 instances)", +
+ "L2 cache": "5 MiB (4 instances)", +
+ "L3 cache": "48 MiB (1 instance)" +
+ }, +
+ "Memory": { +
+ "Memory Total (GB)": "14", +
+ "HugePage Size (MB)": "2", +
+ "HugePage Total Size (GB)": "0" +
+ }, +
+ "OS Params": { +
+ "OS": "5.10.134-16.1.al8.x86_64", +
+ "Swappiness(1-100)": "0", +
+ "Vfs Cache Pressure(0-1000)": "100", +
+ "Min Free KBytes(KB)": "67584" +
+ } +
+ }
+(1 row)
+
如果在运行 PolarDB for PostgreSQL 的过程中出现问题,请提供数据库的日志与机器的配置信息以方便定位问题。
通过 polar_stat_env
插件可以轻松获取数据库所在主机的硬件配置:
=> CREATE EXTENSION polar_stat_env;
+=> SELECT polar_stat_env();
+ polar_stat_env
+--------------------------------------------------------------------
+ { +
+ "CPU": { +
+ "Architecture": "x86_64", +
+ "Model Name": "Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz",+
+ "CPU Cores": "8", +
+ "CPU Thread Per Cores": "2", +
+ "CPU Core Per Socket": "4", +
+ "NUMA Nodes": "1", +
+ "L1d cache": "192 KiB (4 instances)", +
+ "L1i cache": "128 KiB (4 instances)", +
+ "L2 cache": "5 MiB (4 instances)", +
+ "L3 cache": "48 MiB (1 instance)" +
+ }, +
+ "Memory": { +
+ "Memory Total (GB)": "14", +
+ "HugePage Size (MB)": "2", +
+ "HugePage Total Size (GB)": "0" +
+ }, +
+ "OS Params": { +
+ "OS": "5.10.134-16.1.al8.x86_64", +
+ "Swappiness(1-100)": "0", +
+ "Vfs Cache Pressure(0-1000)": "100", +
+ "Min Free KBytes(KB)": "67584" +
+ } +
+ }
+(1 row)
+
Coding in C follows PostgreSQL's programming style, such as naming, error message format, control statements, length of lines, comment format, length of functions, and global variables. For details, please refer to the PostgreSQL coding conventions. Here are some highlights:
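For example, a minimal snippet written in the PostgreSQL style (the function name, relation name and messages below are made up purely for illustration; ereport, errcode, errmsg and errdetail are the standard PostgreSQL error-reporting APIs):

```c
/* Illustration only: PostgreSQL-style layout, naming, and error message format. */
#include "postgres.h"
#include "catalog/pg_class.h"

void
polar_check_relation_kind(const char *relname, char relkind)
{
	/* errmsg: lower case, no trailing period; errdetail: a complete sentence. */
	if (relkind != RELKIND_RELATION)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("\"%s\" is not an ordinary table", relname),
				 errdetail("Only ordinary tables are supported here.")));
}
```

The prototype and message text here are invented; the point is the formatting conventions such as the return type on its own line, tab indentation, and the lower-case primary message with a capitalized detail sentence.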
Programs in Shell, Go, or Python can follow Google code conventions
We share the same thoughts and rules as Google Open Source Code Review
Before submitting for code review, please run the unit tests and pass all tests under src/test, such as regress and isolation. Unit tests or function tests should be submitted along with the code modification.
In addition to code review, this doc offers instructions for the whole cycle of high-quality development, from design, implementation, testing, and documentation, to preparing for code review. It asks many good questions for the critical steps during development, such as those about design, function, complexity, testing, naming, documentation, and code review. The doc summarizes the rules for code review as follows.
In doing a code review, you should make sure that:
DANGER
需要翻译
PolarDB for PostgreSQL 的文档使用 VuePress 2 进行管理,以 Markdown 为中心进行写作。
本文档在线托管于 GitHub Pages 服务上。
若您发现文档中存在内容或格式错误,或者您希望能够贡献新文档,那么您需要在本地安装并配置文档开发环境。本项目的文档是一个 Node.js 工程,以 pnpm 作为软件包管理器。Node.js® 是一个基于 Chrome V8 引擎的 JavaScript 运行时环境。
您需要在本地准备 Node.js 环境。可以选择在 Node.js 官网 下载 页面下载安装包手动安装,也可以使用下面的命令自动安装。
通过 curl
安装 Node 版本管理器 nvm
。
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+command -v nvm
+
如果上一步显示 command not found
,那么请关闭当前终端,然后重新打开。
如果 nvm
已经被成功安装,执行以下命令安装 Node 的 LTS 版本:
nvm install --lts
+
Node.js 安装完毕后,使用如下命令检查安装是否成功:
node -v
+npm -v
+
使用 npm
全局安装软件包管理器 pnpm
:
npm install -g pnpm
+pnpm -v
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令,pnpm
将会根据 package.json
安装所有依赖:
pnpm install
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令:
pnpm run docs:dev
+
文档开发服务器将运行于 http://localhost:8080/PolarDB-for-PostgreSQL/
,打开浏览器即可访问。对 Markdown 文件作出修改后,可以在网页上实时查看变化。
PolarDB for PostgreSQL 的文档资源位于工程根目录的 docs/
目录下。其目录被组织为:
└── docs
+ ├── .vuepress
+ │ ├── configs
+ │ ├── public
+ │ └── styles
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ ├── roadmap
+ └── zh
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ └── roadmap
+
可以看到,docs/zh/
目录下是其父级目录除 .vuepress/
以外的翻版。docs/
目录中全部为英语文档,docs/zh/
目录下全部是相对应的简体中文文档。
.vuepress/
目录下包含文档工程的全局配置信息:
config.ts
:文档配置configs/
:文档配置模块(导航栏 / 侧边栏、英文 / 中文等配置)public/
:公共静态资源styles/
:文档主题默认样式覆盖文档的配置方式请参考 VuePress 2 官方文档的 配置指南。
npx prettier --write docs/
本文档借助 GitHub Actions 提供 CI 服务。向主分支推送代码时,将触发对 docs/
目录下文档资源的构建,并将构建结果推送到 gh-pages 分支上。GitHub Pages 服务会自动将该分支上的文档静态资源部署到 Web 服务器上形成文档网站。
PolarDB for PostgreSQL is an open-source product built on PostgreSQL and other open-source projects. Our main goal is to create a larger community for PostgreSQL. Contributors are welcome to submit their code and ideas. In the long run, we hope this project can be managed by developers from both inside and outside Alibaba.
POLARDB_11_STABLE
is the stable branch of PolarDB, it can accept the merge from POLARDB_11_DEV
onlyPOLARDB_11_DEV
is the stable development branch of PolarDB, it can accept the merge from both pull requests and direct pushes from maintainersNew features will be merged to POLARDB_11_DEV
, and will be merged to POLARDB_11_STABLE
periodically by maintainers
Here is a checklist to prepare and submit your PR (pull request):
ApsaraDB/PolarDB-for-PostgreSQL
.Let's use an example to walk through the list.
On GitHub repository of PolarDB for PostgreSQL, Click fork button to create your own PolarDB repository.
git clone https://github.com/<your-github>/PolarDB-for-PostgreSQL.git
+
Check out a new development branch from the stable development branch POLARDB_11_DEV
. Suppose your branch is named as dev
:
git checkout POLARDB_11_DEV
+git checkout -b dev
+
git status
+git add <files-to-change>
+git commit -m "modification for dev"
+
Click Fetch upstream
on your own repository page to make sure your stable development branch is up to date with the official PolarDB repository. Then pull the latest commits of the stable development branch to your local repository.
git checkout POLARDB_11_DEV
+git pull
+
Then, rebase your development branch to the stable development branch, and resolve the conflict:
git checkout dev
+git rebase POLARDB_11_DEV
+-- resolve conflict --
+git push -f dev
+
Click New pull request or Compare & pull request button, choose to compare branches ApsaraDB/PolarDB-for-PostgreSQL:POLARDB_11_DEV
and <your-github>/PolarDB-for-PostgreSQL:dev
, and write PR description.
GitHub will automatically run regression tests on your code. Your PR should pass all these checks.
Resolve all problems raised by reviewers and update the PR.
It is done by PolarDB maintainers.
如果在运行 PolarDB for PostgreSQL 的过程中出现问题,请提供数据库的日志与机器的配置信息以方便定位问题。
通过 polar_stat_env
插件可以轻松获取数据库所在主机的硬件配置:
=> CREATE EXTENSION polar_stat_env;
+=> SELECT polar_stat_env();
+ polar_stat_env
+--------------------------------------------------------------------
+ { +
+ "CPU": { +
+ "Architecture": "x86_64", +
+ "Model Name": "Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz",+
+ "CPU Cores": "8", +
+ "CPU Thread Per Cores": "2", +
+ "CPU Core Per Socket": "4", +
+ "NUMA Nodes": "1", +
+ "L1d cache": "192 KiB (4 instances)", +
+ "L1i cache": "128 KiB (4 instances)", +
+ "L2 cache": "5 MiB (4 instances)", +
+ "L3 cache": "48 MiB (1 instance)" +
+ }, +
+ "Memory": { +
+ "Memory Total (GB)": "14", +
+ "HugePage Size (MB)": "2", +
+ "HugePage Total Size (GB)": "0" +
+ }, +
+ "OS Params": { +
+ "OS": "5.10.134-16.1.al8.x86_64", +
+ "Swappiness(1-100)": "0", +
+ "Vfs Cache Pressure(0-1000)": "100", +
+ "Min Free KBytes(KB)": "67584" +
+ } +
+ }
+(1 row)
+
棠羽
2023/08/01
15 min
本文将指导您在单机文件系统(如 ext4)上编译部署 PolarDB-PG,适用于所有计算节点都可以访问相同本地磁盘存储的场景。
我们在 DockerHub 上提供了 PolarDB-PG 的 本地实例镜像,里面已包含启动 PolarDB-PG 本地存储实例的入口脚本。镜像目前支持 linux/amd64
和 linux/arm64
两种 CPU 架构。
docker pull polardb/polardb_pg_local_instance
+
新建一个空白目录 ${your_data_dir}
作为 PolarDB-PG 实例的数据目录。启动容器时,将该目录作为 VOLUME 挂载到容器内,对数据目录进行初始化。在初始化的过程中,可以传入环境变量覆盖默认值:
POLARDB_PORT
:PolarDB-PG 运行所需要使用的端口号,默认值为 5432
;镜像将会使用三个连续的端口号(默认 5432-5434
)POLARDB_USER
:初始化数据库时创建默认的 superuser(默认 postgres
)POLARDB_PASSWORD
:默认 superuser 的密码使用如下命令初始化数据库:
docker run -it --rm \
+ --env POLARDB_PORT=5432 \
+ --env POLARDB_USER=u1 \
+ --env POLARDB_PASSWORD=your_password \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance \
+ echo 'done'
+
数据库初始化完毕后,使用 -d
参数以后台模式创建容器,启动 PolarDB-PG 服务。通常 PolarDB-PG 的端口需要暴露给外界使用,使用 -p
参数将容器内的端口范围暴露到容器外。比如,初始化数据库时使用的是 5432-5434
端口,如下命令将会把这三个端口映射到容器外的 54320-54322
端口:
docker run -d \
+ -p 54320-54322:5432-5434 \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance
+
或者也可以直接让容器与宿主机共享网络:
docker run -d \
+ --network=host \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance
+
程义
2022/11/02
15 min
本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在 Curve 块存储上格式化并挂载 PFS 的计算节点。
我们在 DockerHub 上提供了一个 PolarDB 开发镜像,里面已经包含编译运行 PolarDB for PostgreSQL 所需要的所有依赖。您可以直接使用这个开发镜像进行实例搭建。镜像目前支持 AMD64 和 ARM64 两种 CPU 架构。
在前置文档中,我们已经从 DockerHub 上拉取了 PolarDB 开发镜像,并且进入到了容器中。进入容器后,从 GitHub 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
在读写节点上,使用 --with-pfsd
选项编译 PolarDB 内核。请参考 编译测试选项说明 查看更多编译选项的说明。
./polardb_build.sh --with-pfsd
+
WARNING
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \
+ stop
+
在节点本地初始化数据目录 $HOME/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /pool@@volume_my_/shared_data
目录上初始化共享数据目录
# 使用 pfs 创建共享数据目录
+sudo pfs -C curve mkdir /pool@@volume_my_/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /pool@@volume_my_/shared_data/ curve
+
编辑读写节点的配置。打开 $HOME/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
打开 $HOME/primary/pg_hba.conf
,增加以下配置项:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的 replication slot,用于只读节点的物理流复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "select pg_create_physical_replication_slot('replica1');"
+# 下面为输出内容
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点上,使用 --with-pfsd
选项编译 PolarDB 内核。
./polardb_build.sh --with-pfsd
+
WARNING
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \
+ stop
+
在节点本地初始化数据目录 $HOME/replica1/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/replica1
+
编辑只读节点的配置。打开 $HOME/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建 $HOME/replica1/recovery.conf
,增加以下配置项:
WARNING
请在下面替换读写节点(容器)所在的 IP 地址。
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5433 \
+ -d postgres \
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "create table t(t1 int primary key, t2 int);insert into t values (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5433 \
+ -d postgres \
+ -c "select * from t;"
+# 下面为输出内容
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见。
棠羽
2022/05/09
15 min
本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在共享存储上格式化并挂载 PFS 文件系统的计算节点。
初始化读写节点的本地数据目录 ~/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /nvme1n1/shared_data/
路径上创建共享数据目录,然后使用 polar-initdb.sh
脚本初始化共享数据目录:
# 使用 pfs 创建共享数据目录
+sudo pfs -C disk mkdir /nvme1n1/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /nvme1n1/shared_data/
+
编辑读写节点的配置。打开 ~/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,增加以下配置项,允许只读节点进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的复制槽,用于只读节点的物理复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置。打开 ~/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建只读节点的复制配置文件 ~/replica1/recovery.conf
,增加读写节点的连接信息,以及复制槽名称:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5433 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5433 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见,这意味着基于共享存储的 PolarDB 计算节点集群搭建成功。
阿里云官网直接提供了可供购买的 云原生关系型数据库 PolarDB PostgreSQL 引擎。
PolarDB Stack 是轻量级 PolarDB PaaS 软件。基于共享存储提供一写多读的 PolarDB 数据库服务,特别定制和深度优化了数据库生命周期管理。通过 PolarDB Stack 可以一键部署 PolarDB-for-PostgreSQL 内核和 PolarDB-FileSystem。
PolarDB Stack 架构如下图所示,进入 PolarDB Stack 的部署文档
棠羽
2022/05/09
10 min
部署 PolarDB for PostgreSQL 需要在以下三个层面上做准备:
以下表格给出了三个层次排列组合出的不同实践方式,其中的步骤包含:
我们强烈推荐使用发布在 DockerHub 上的 PolarDB 开发镜像 来完成实践!开发镜像中已经包含了文件系统层和数据库层所需要安装的所有依赖,无需手动安装。
实践方式 | 块存储 | 文件系统
---|---|---
实践 1(极简本地部署) | 本地 SSD | 本地文件系统(如 ext4) |
实践 2(生产环境最佳实践) 视频 | 阿里云 ECS + ESSD 云盘 | PFS |
实践 3(生产环境最佳实践) 视频 | CurveBS 共享存储 | PFS for Curve |
实践 4 | Ceph 共享存储 | PFS |
实践 5 | NBD 共享存储 | PFS |
棠羽
2022/08/31
20 min
PolarDB File System,简称 PFS 或 PolarFS,是由阿里云自主研发的高性能类 POSIX 的用户态分布式文件系统,服务于阿里云数据库 PolarDB 产品。使用 PFS 对共享存储进行格式化并挂载后,能够保证一个计算节点对共享存储的写入能够立刻对另一个计算节点可见。
在 PolarDB 计算节点上准备好 PFS 相关工具。推荐使用 DockerHub 上的 PolarDB 开发镜像,其中已经包含了编译完毕的 PFS,无需再次编译安装。Curve 开源社区 针对 PFS 对接 CurveBS 存储做了专门的优化。在用于部署 PolarDB 的计算节点上,使用下面的命令拉起带有 PFS for CurveBS 的 PolarDB 开发镜像:
docker pull polardb/polardb_pg_devel:curvebs
+docker run -it \
+ --network=host \
+ --cap-add=SYS_PTRACE --privileged=true \
+ --name polardb_pg \
+ polardb/polardb_pg_devel:curvebs bash
+
进入容器后需要修改 curve 相关的配置文件:
sudo vim /etc/curve/client.conf
+#
+################### mds一侧配置信息 ##################
+#
+
+# mds的地址信息,对于mds集群,地址以逗号隔开
+mds.listen.addr=127.0.0.1:6666
+... ...
+
注意,这里的 mds.listen.addr
请填写部署 CurveBS 集群时,集群状态中输出的 cluster mds addr。
容器内已经安装了 curve
工具,该工具可用于创建卷,用户需要使用该工具创建实际存储 PolarFS 数据的 curve 卷:
curve create --filename /volume --user my --length 10 --stripeUnit 16384 --stripeCount 64
+
用户可通过 curve create -h 命令查看创建卷的详细说明。上面的例子中,我们创建了一个拥有以下属性的卷:
特别需要注意的是,在数据库场景下,我们强烈建议使用条带卷,只有这样才能充分发挥 Curve 的性能优势,而 16384 * 64 的条带设置是目前最优的条带设置。
在使用 curve 卷之前需要使用 pfs 来格式化对应的 curve 卷:
sudo pfs -C curve mkfs pool@@volume_my_
+
与在本地挂载文件系统前需要先在磁盘上格式化文件系统一样,这里也需要把 curve 卷格式化为 PolarFS 文件系统。
注意,由于 PolarFS 解析的特殊性,我们将以 pool@${volume}_${user}_
的形式指定我们的 curve 卷,此外还需要将卷名中的 / 替换成 @
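下面用一个假设的卷名与用户名来演示这条命名规则(仅作说明,实际请替换为您创建卷时使用的参数):
# 假设创建卷时使用的命令为:
+#   curve create --filename /data01 --user alice --length 10 --stripeUnit 16384 --stripeCount 64
+# 按 pool@${volume}_${user}_ 的形式拼接,并把卷名中的 "/" 替换为 "@",
+# 则在 pfs 中应当以如下名称引用并格式化该卷:
+sudo pfs -C curve mkfs pool@@data01_alice_
+
完成格式化后,使用以下命令启动 pfsd 守护进程,挂载该 PFS 文件系统: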
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p pool@@volume_my_
+
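pfsd 拉起后,可以用 pfs 的 ls 命令确认该卷已经能够正常访问(仅为验证示例,卷名沿用上文):
sudo pfs -C curve ls /pool@@volume_my_/
+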
如果 pfsd 启动成功,那么至此 curve 版 PolarFS 已全部部署完成,已经成功挂载 PFS 文件系统。 下面需要编译部署 PolarDB。
棠羽
2022/05/09
15 min
PolarDB File System,简称 PFS 或 PolarFS,是由阿里云自主研发的高性能类 POSIX 的用户态分布式文件系统,服务于阿里云数据库 PolarDB 产品。使用 PFS 对共享存储进行格式化并挂载后,能够保证一个计算节点对共享存储的写入能够立刻对另一个计算节点可见。
推荐使用 DockerHub 上的 PolarDB for PostgreSQL 可执行文件镜像,目前支持 linux/amd64
和 linux/arm64
两种架构,其中已经包含了编译完毕的 PFS 工具,无需手动编译安装。通过以下命令进入容器即可:
docker pull polardb/polardb_pg_binary
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
PFS 的手动编译安装方式请参考 PFS 的 README,此处不再赘述。
PFS 仅支持访问 以特定字符开头的块设备(详情可见 PolarDB File System 源代码的 src/pfs_core/pfs_api.h
文件):
#define PFS_PATH_ISVALID(path) \
+ (path != NULL && \
+ ((path[0] == '/' && isdigit((path)[1])) || path[0] == '.' \
+ || strncmp(path, "/pangu-", 7) == 0 \
+ || strncmp(path, "/sd", 3) == 0 \
+ || strncmp(path, "/sf", 3) == 0 \
+ || strncmp(path, "/vd", 3) == 0 \
+ || strncmp(path, "/nvme", 5) == 0 \
+ || strncmp(path, "/loop", 5) == 0 \
+ || strncmp(path, "/mapper_", 8) ==0))
+
因此,为了保证能够顺畅完成后续流程,我们建议在所有访问块设备的节点上使用相同的软链接访问共享块设备。例如,在 NBD 服务端主机上,使用新的块设备名 /dev/nvme1n1
软链接到共享存储块设备的原有名称 /dev/vdb
上:
sudo ln -s /dev/vdb /dev/nvme1n1
+
在 NBD 客户端主机上,使用同样的块设备名 /dev/nvme1n1
软链到共享存储块设备的原有名称 /dev/nbd0
上:
sudo ln -s /dev/nbd0 /dev/nvme1n1
+
这样便可以在服务端和客户端两台主机上使用相同的块设备名 /dev/nvme1n1
访问同一个块设备。
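可以在两台主机上分别检查软链接是否指向了期望的设备(仅为验证示例):
ls -l /dev/nvme1n1
+# NBD 服务端上预期指向 /dev/vdb,NBD 客户端上预期指向 /dev/nbd0
+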
使用 任意一台主机,在共享存储块设备上格式化 PFS 分布式文件系统:
sudo pfs -C disk mkfs nvme1n1
+
在能够访问共享存储的 所有主机节点 上分别启动 PFS 守护进程,挂载 PFS 文件系统:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
棠羽
2022/05/09
5 min
PolarDB for PostgreSQL 采用了基于 Shared-Storage 的存储计算分离架构。数据库由传统的 Shared-Nothing 架构,转变成了 Shared-Storage 架构——由原来的 N 份计算 + N 份存储,转变成了 N 份计算 + 1 份存储;而 PostgreSQL 使用了传统的单体数据库架构,存储和计算耦合在一起。
为保证所有计算节点能够以相同的可见性视角访问分布式块存储设备,PolarDB 需要使用分布式文件系统 PolarDB File System(PFS) 来访问块设备,其实现原理可参考发表在 2018 年 VLDB 上的论文[1];如果所有计算节点都可以本地访问同一个块存储设备,那么也可以不使用 PFS,直接使用本地的单机文件系统(如 ext4)。这是与 PostgreSQL 的不同点之一。
棠羽
2022/05/09
5 min
DANGER
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改为健壮的密码!
仅需单台计算机,同时满足以下要求,就可以快速开启您的 PolarDB 之旅:
从 DockerHub 上拉取 PolarDB for PostgreSQL 的 本地存储实例镜像,创建并运行容器,然后直接试用 PolarDB-PG:
# 拉取 PolarDB-PG 镜像
+docker pull polardb/polardb_pg_local_instance
+# 创建并运行容器
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# 测试可用性
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
棠羽
2022/05/09
20 min
阿里云 ESSD(Enhanced SSD)云盘 结合 25 GE 网络和 RDMA 技术,能够提供单盘高达 100 万的随机读写能力和单路低时延性能。阿里云 ESSD 云盘支持 NVMe 协议,且可以同时挂载到多台支持 NVMe 协议的 ECS(Elastic Compute Service)实例上,从而实现多个 ECS 实例并发读写访问,具备高可靠、高并发、高性能等特点。更多信息请参考阿里云 ECS 文档:
本文将指导您完成以下过程:
首先需要准备两台或以上的 阿里云 ECS。目前,ECS 对支持 ESSD 多重挂载的规格有较多限制,详情请参考 使用限制。仅 部分可用区、部分规格(ecs.g7se、ecs.c7se、ecs.r7se)的 ECS 实例可以支持 ESSD 的多重挂载。如图,请务必选择支持多重挂载的 ECS 规格:
对 ECS 存储配置的选择,系统盘可以选用任意的存储类型,数据盘和共享盘暂不选择。后续再单独创建一个 ESSD 云盘作为共享盘:
如图所示,在 同一可用区 中建好两台 ECS:
在阿里云 ECS 的管理控制台中,选择 存储与快照 下的 云盘,点击 创建云盘。在与已经建好的 ECS 所在的相同可用区内,选择建立一个 ESSD 云盘,并勾选 多实例挂载。如果您的 ECS 不符合多实例挂载的限制条件,则该选框不会出现。
ESSD 云盘创建完毕后,控制台显示云盘支持多重挂载,状态为 待挂载:
接下来,把这个云盘分别挂载到两台 ECS 上:
挂载完毕后,查看该云盘,将会显示该云盘已经挂载的两台 ECS 实例:
通过 ssh 分别连接到两台 ECS 上,运行 lsblk
命令可以看到:
nvme0n1
是 40GB 的 ECS 系统盘,为 ECS 私有nvme1n1
是 100GB 的 ESSD 云盘,两台 ECS 同时可见$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
接下来,将在两台 ECS 上分别部署 PolarDB 的主节点和只读节点。作为前提,需要在 ECS 共享的 ESSD 块设备上 格式化并挂载 PFS。
Ceph 是一个统一的分布式存储系统,由于它可以提供较好的性能、可靠性和可扩展性,被广泛地应用在存储领域。Ceph 搭建需要 2 台及以上的物理机/虚拟机实现存储共享与数据备份,本教程以 3 台虚拟机环境为例,介绍基于 ceph 共享存储的实例构建方法。大体如下:
WARNING
操作系统版本要求 CentOS 7.5 及以上。以下步骤在 CentOS 7.5 上通过测试。
使用的虚拟机环境如下:
IP hostname
+192.168.1.173 ceph001
+192.168.1.174 ceph002
+192.168.1.175 ceph003
+
TIP
本教程使用阿里云镜像站提供的 docker 包。
yum install -y yum-utils device-mapper-persistent-data lvm2
+
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
+yum makecache
+yum install -y docker-ce
+
+systemctl start docker
+systemctl enable docker
+
docker run hello-world
+
ssh-keygen
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph001
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph002
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph003
+
ssh root@ceph003
+
docker pull ceph/daemon
+
docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.173 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon01 \
+ ceph/daemon mon
+
WARNING
根据实际网络环境修改 IP、子网掩码位数。
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 1 daemons, quorum ceph001 (age 26m)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
WARNING
如果遇到 mon is allowing insecure global_id reclaim
的报错,使用以下命令解决。
docker exec mon01 ceph config set mon auth_allow_insecure_global_id_reclaim false
+
docker exec mon01 ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
+docker exec mon01 ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
+
ssh root@ceph002 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph002:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph002:/var/lib/ceph
+ssh root@ceph003 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph003:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph003:/var/lib/ceph
+
docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.174 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon02 \
+ ceph/daemon mon
+
+docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.175 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon03 \
+ ceph/daemon mon
+
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 35s)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
WARNING
查看 mon 节点信息,确认在另外两个节点上创建的 mon 是否已经添加进来。
TIP
本环境的虚拟机只有一个 /dev/vdb
磁盘可用,因此为每个虚拟机只创建了一个 osd 节点。
docker run --rm --privileged=true --net=host --ipc=host \
+ --security-opt seccomp=unconfined \
+ -v /run/lock/lvm:/run/lock/lvm:z \
+ -v /var/run/udev/:/var/run/udev/:z \
+ -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
+ -v /run/lvm/:/run/lvm/ \
+ -v /var/lib/ceph/:/var/lib/ceph/:z \
+ -v /var/log/ceph/:/var/log/ceph/:z \
+ --entrypoint=ceph-volume \
+ docker.io/ceph/daemon \
+ --cluster ceph lvm prepare --bluestore --data /dev/vdb
+
WARNING
以上命令在三个节点都是一样的,只需要根据磁盘名称进行修改调整即可。
docker run -d --privileged=true --net=host --pid=host --ipc=host \
+ --security-opt seccomp=unconfined \
+ -v /dev:/dev \
+ -v /etc/localtime:/etc/localtime:ro \
+ -v /var/lib/ceph:/var/lib/ceph:z \
+ -v /etc/ceph:/etc/ceph:z \
+ -v /var/run/ceph:/var/run/ceph:z \
+ -v /var/run/udev/:/var/run/udev/ \
+ -v /var/log/ceph:/var/log/ceph:z \
+ -v /run/lvm/:/run/lvm/ \
+ -e CLUSTER=ceph \
+ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
+ -e CONTAINER_IMAGE=docker.io/ceph/daemon \
+ -e OSD_ID=0 \
+ --name=ceph-osd-0 \
+ docker.io/ceph/daemon
+
WARNING
各个节点需要修改 OSD_ID 与 name 属性,OSD_ID 是从编号 0 递增的,其余节点为 OSD_ID=1、OSD_ID=2。
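例如,在 ceph002 与 ceph003 上启动 OSD 容器时,只需在上述命令的基础上做如下替换(其余参数保持不变):
# ceph002:将 OSD_ID 改为 1,容器名改为 ceph-osd-1
+#   -e OSD_ID=1 ... --name=ceph-osd-1
+# ceph003:将 OSD_ID 改为 2,容器名改为 ceph-osd-2
+#   -e OSD_ID=2 ... --name=ceph-osd-2
+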
$ docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_WARN
+ no active mgr
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 44m)
+ mgr: no daemons active
+ osd: 3 osds: 3 up (since 7m), 3 in (since 13m)
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
以下命令均在 ceph001 进行:
docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ --name=ceph-mgr-0 \
+ ceph/daemon mgr
+
+docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -v /etc/ceph:/etc/ceph \
+ -e CEPHFS_CREATE=1 \
+ --name=ceph-mds-0 \
+ ceph/daemon mds
+
+docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -v /etc/ceph:/etc/ceph \
+ --name=ceph-rgw-0 \
+ ceph/daemon rgw
+
查看集群状态:
docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 92m)
+ mgr: ceph001(active, since 25m)
+ mds: 1/1 daemons up
+ osd: 3 osds: 3 up (since 54m), 3 in (since 60m)
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+data:
+ volumes: 1/1 healthy
+ pools: 7 pools, 145 pgs
+ objects: 243 objects, 7.2 KiB
+ usage: 50 MiB used, 2.9 TiB / 2.9 TiB avail
+ pgs: 145 active+clean
+
TIP
以下命令均在容器 mon01 中进行。
docker exec -it mon01 bash
+ceph osd pool create rbd_polar
+
rbd create --size 512000 rbd_polar/image02
+rbd info rbd_polar/image02
+
+rbd image 'image02':
+size 500 GiB in 128000 objects
+order 22 (4 MiB objects)
+snapshot_count: 0
+id: 13b97b252c5d
+block_name_prefix: rbd_data.13b97b252c5d
+format: 2
+features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
+op_features:
+flags:
+create_timestamp: Thu Oct 28 06:18:07 2021
+access_timestamp: Thu Oct 28 06:18:07 2021
+modify_timestamp: Thu Oct 28 06:18:07 2021
+
modprobe rbd # 加载内核模块,在主机上执行
+rbd map rbd_polar/image02
+
+rbd: sysfs write failed
+RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten".
+In some cases useful info is found in syslog - try "dmesg | tail".
+rbd: map failed: (6) No such device or address
+
WARNING
某些特性内核不支持,需要先关闭这些特性才能映射成功。操作如下:关闭内核不支持的 rbd 特性,重新映射镜像,并查看映射列表。
rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten
+rbd map rbd_polar/image02
+rbd device list
+
+id pool namespace image snap device
+0 rbd_polar image01 - /dev/rbd0
+1 rbd_polar image02 - /dev/rbd1
+
TIP
此处我已经先映射了一个 image01,所以有两条信息。
回到容器外,进行操作。查看系统中的块设备:
lsblk
+
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+vda 253:0 0 500G 0 disk
+└─vda1 253:1 0 500G 0 part /
+vdb 253:16 0 1000G 0 disk
+└─ceph--7eefe77f--c618--4477--a1ed--b4f44520dfc2-osd--block--bced3ff1--42b9--43e1--8f63--e853bce41435
+ 252:0 0 1000G 0 lvm
+rbd0 251:0 0 100G 0 disk
+rbd1 251:16 0 500G 0 disk
+
WARNING
块设备镜像需要在各个节点都进行映射才可以在本地环境中通过 lsblk
命令查看到,否则不显示。ceph002 与 ceph003 上映射命令与上述一致。
参阅 格式化并挂载 PFS。
棠羽
2022/08/31
30 min
Curve 是一款高性能、易运维、云原生的开源分布式存储系统。可应用于主流的云原生基础设施平台:
Curve 亦可作为云存储中间件使用 S3 兼容的对象存储作为数据存储引擎,为公有云用户提供高性价比的共享文件存储。
本示例将引导您以 CurveBS 作为块存储,部署 PolarDB for PostgreSQL。更多进阶配置和使用方法请参考 Curve 项目的 wiki。
如图所示,本示例共使用六台服务器。其中,一台中控服务器和三台存储服务器共同组成 CurveBS 集群,对外暴露为一个共享存储服务。剩余两台服务器分别用于部署 PolarDB for PostgreSQL 数据库的读写节点和只读节点,它们共享 CurveBS 对外暴露的块存储设备。
本示例使用阿里云 ECS 模拟全部六台服务器。六台 ECS 全部运行 Anolis OS 8.6(兼容 CentOS 8.6)系统,使用 root 用户,并处于同一局域网段内。需要完成的准备工作包含:
bash -c "$(curl -fsSL https://curveadm.nos-eastchina1.126.net/script/install.sh)"
+source /root/.bash_profile
+
在中控机上编辑主机列表文件:
vim hosts.yaml
+
文件中包含另外五台服务器的 IP 地址和在 Curve 集群内的名称,其中:
global:
+ user: root
+ ssh_port: 22
+ private_key_file: /root/.ssh/id_rsa
+
+hosts:
+ # Curve worker nodes
+ - host: server-host1
+ hostname: 172.16.0.223
+ - host: server-host2
+ hostname: 172.16.0.224
+ - host: server-host3
+ hostname: 172.16.0.225
+ # PolarDB nodes
+ - host: polardb-primary
+ hostname: 172.16.0.226
+ - host: polardb-replica
+ hostname: 172.16.0.227
+
导入主机列表:
curveadm hosts commit hosts.yaml
+
准备磁盘列表,并提前生成一批固定大小并预写过的 chunk 文件。磁盘列表中需要包含:
/dev/vdb
)vim format.yaml
+
host:
+ - server-host1
+ - server-host2
+ - server-host3
+disk:
+ - /dev/vdb:/data/chunkserver0:90 # device:mount_path:format_percent
+
开始格式化。此时,中控机将在每台存储节点主机上对每个块设备启动一个格式化进程容器。
$ curveadm format -f format.yaml
+Start Format Chunkfile Pool: ⠸
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+
当显示 OK
时,说明这个格式化进程容器已启动,但 并不代表格式化已经完成。格式化是一个比较耗时的过程,将会持续一段时间:
Start Format Chunkfile Pool: [OK]
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+
可以通过以下命令查看格式化进度,目前仍在格式化状态中:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 19/90 Formatting
+server-host2 /dev/vdb /data/chunkserver0 22/90 Formatting
+server-host3 /dev/vdb /data/chunkserver0 22/90 Formatting
+
格式化完成后的输出:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 95/90 Done
+server-host2 /dev/vdb /data/chunkserver0 95/90 Done
+server-host3 /dev/vdb /data/chunkserver0 95/90 Done
+
首先,准备集群配置文件:
vim topology.yaml
+
粘贴如下配置文件:
kind: curvebs
+global:
+ container_image: opencurvedocker/curvebs:v1.2
+ log_dir: ${home}/logs/${service_role}${service_replicas_sequence}
+ data_dir: ${home}/data/${service_role}${service_replicas_sequence}
+ s3.nos_address: 127.0.0.1
+ s3.snapshot_bucket_name: curve
+ s3.ak: minioadmin
+ s3.sk: minioadmin
+ variable:
+ home: /tmp
+ machine1: server-host1
+ machine2: server-host2
+ machine3: server-host3
+
+etcd_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 2380
+ listen.client_port: 2379
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+mds_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 6666
+ listen.dummy_port: 6667
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+chunkserver_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 82${format_replicas_sequence} # 8200,8201,8202
+ data_dir: /data/chunkserver${service_replicas_sequence} # /data/chunkserver0, /data/chunkserver1
+ copysets: 100
+ deploy:
+ - host: ${machine1}
+ replicas: 1
+ - host: ${machine2}
+ replicas: 1
+ - host: ${machine3}
+ replicas: 1
+
+snapshotclone_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 5555
+ listen.dummy_port: 8081
+ listen.proxy_port: 8080
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
根据上述的集群拓扑文件创建集群 my-cluster
:
curveadm cluster add my-cluster -f topology.yaml
+
切换 my-cluster
集群为当前管理集群:
curveadm cluster checkout my-cluster
+
部署集群。如果部署成功,将会输出类似 Cluster 'my-cluster' successfully deployed ^_^.
字样。
$ curveadm deploy --skip snapshotclone
+
+...
+Create Logical Pool: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Start Service: [OK]
+ + host=server-host1 role=snapshotclone containerId=9d3555ba72fa [1/1] [OK]
+ + host=server-host2 role=snapshotclone containerId=e6ae2b23b57e [1/1] [OK]
+ + host=server-host3 role=snapshotclone containerId=f6d3446c7684 [1/1] [OK]
+
+Balance Leader: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Cluster 'my-cluster' successfully deployed ^_^.
+
查看集群状态:
$ curveadm status
+Get Service Status: [OK]
+
+cluster name : my-cluster
+cluster kind : curvebs
+cluster mds addr : 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+cluster mds leader: 172.16.0.225:6666 / d0a94a7afa14
+
+Id Role Host Replicas Container Id Status
+-- ---- ---- -------- ------------ ------
+5567a1c56ab9 etcd server-host1 1/1 f894c5485a26 Up 17 seconds
+68f9f0e6f108 etcd server-host2 1/1 69b09cdbf503 Up 17 seconds
+a678263898cc etcd server-host3 1/1 2ed141800731 Up 17 seconds
+4dcbdd08e2cd mds server-host1 1/1 76d62ff0eb25 Up 17 seconds
+8ef1755b0a10 mds server-host2 1/1 d8d838258a6f Up 17 seconds
+f3599044c6b5 mds server-host3 1/1 d63ae8502856 Up 17 seconds
+9f1d43bc5b03 chunkserver server-host1 1/1 39751a4f49d5 Up 16 seconds
+3fb8fd7b37c1 chunkserver server-host2 1/1 0f55a19ed44b Up 16 seconds
+c4da555952e3 chunkserver server-host3 1/1 9411274d2c97 Up 16 seconds
+
在 Curve 中控机上编辑客户端配置文件:
vim client.yaml
+
注意,这里的 mds.listen.addr
请填写上一步集群状态中输出的 cluster mds addr
:
kind: curvebs
+container_image: opencurvedocker/curvebs:v1.2
+mds.listen.addr: 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+log_dir: /root/curvebs/logs/client
+
接下来,将在两台运行 PolarDB 计算节点的 ECS 上分别部署 PolarDB 的主节点和只读节点。作为前提,需要让这两个节点能够共享 CurveBS 块设备,并在块设备上 格式化并挂载 PFS。
Network Block Device (NBD) 是一种网络协议,可以在多个主机间共享块存储设备。NBD 被设计为 Client-Server 的架构,因此至少需要两台物理机来部署。
本小节以两台物理机环境为例,介绍基于 NBD 共享存储的实例构建方法,大体步骤如下:
WARNING
以上步骤在 CentOS 7.5 上通过测试。
TIP
操作系统内核需要支持 NBD 内核模块,如果操作系统当前不支持该内核模块,则需要自己通过对应内核版本进行编译和加载 NBD 内核模块。
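可以先用以下命令确认当前内核是否已自带 NBD 模块;如果能够查询到模块信息,则可以跳过下面的编译步骤:
# 查询内核是否自带 nbd 模块;查询不到时会报错,此时需按下文手动编译安装
+modinfo nbd
+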
从 CentOS 官网 下载对应内核版本的驱动源码包并解压:
rpm -ihv kernel-3.10.0-862.el7.src.rpm
+cd ~/rpmbuild/SOURCES
+tar Jxvf linux-3.10.0-862.el7.tar.xz -C /usr/src/kernels/
+cd /usr/src/kernels/linux-3.10.0-862.el7/
+
NBD 驱动源码路径位于:drivers/block/nbd.c
。接下来编译操作系统内核依赖和组件:
cp ../$(uname -r)/Module.symvers ./
+make menuconfig # Device Driver -> Block devices -> Set 'M' On 'Network block device support'
+make prepare && make modules_prepare && make scripts
+make CONFIG_BLK_DEV_NBD=m M=drivers/block
+
检查是否正常生成驱动:
modinfo drivers/block/nbd.ko
+
拷贝、生成依赖并安装驱动:
cp drivers/block/nbd.ko /lib/modules/$(uname -r)/kernel/drivers/block
+depmod -a
+modprobe nbd # 或者 modprobe -f nbd 可以忽略模块版本检查
+
检查是否安装成功:
# 检查已安装内核模块
+lsmod | grep nbd
+# 如果NBD驱动已经安装,则会生成/dev/nbd*设备(例如:/dev/nbd0、/dev/nbd1等)
+ls /dev/nbd*
+
yum install nbd
+
拉起 NBD 服务端,按照同步方式(sync/flush=true
)配置,在指定端口(例如 1921
)上监听对指定块设备(例如 /dev/vdb
)的访问。
nbd-server -C /root/nbd.conf
+
配置文件 /root/nbd.conf
的内容举例如下:
[generic]
+ #user = nbd
+ #group = nbd
+ listenaddr = 0.0.0.0
+ port = 1921
+[export1]
+ exportname = /dev/vdb
+ readonly = false
+ multifile = false
+ copyonwrite = false
+ flush = true
+ fua = true
+ sync = true
+
NBD 驱动安装成功后会看到 /dev/nbd*
设备, 根据服务端的配置把远程块设备映射为本地的某个 NBD 设备即可:
nbd-client x.x.x.x 1921 -N export1 /dev/nbd0
+# x.x.x.x是NBD服务端主机的IP地址
+
参阅 格式化并挂载 PFS。
DockerHub 上已有构建完毕的开发镜像 polardb/polardb_pg_devel
可供直接使用(支持 linux/amd64
和 linux/arm64
两种架构)。
另外,我们也提供了构建上述开发镜像的 Dockerfile,从 Ubuntu 官方镜像 ubuntu:20.04
开始构建出一个安装完所有开发和运行时依赖的镜像,您可以根据自己的需要在 Dockerfile 中添加更多依赖。以下是手动构建镜像的 Dockerfile 及方法:
FROM ubuntu:20.04
+LABEL maintainer="mrdrivingduck@gmail.com"
+CMD bash
+
+# Timezone problem
+ENV TZ=Asia/Shanghai
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Upgrade softwares
+RUN apt update -y && \
+ apt upgrade -y && \
+ apt clean -y
+
+# GCC (force to 9) and LLVM (force to 11)
+RUN apt install -y \
+ gcc-9 \
+ g++-9 \
+ llvm-11-dev \
+ clang-11 \
+ make \
+ gdb \
+ pkg-config \
+ locales && \
+ update-alternatives --install \
+ /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave \
+ /usr/bin/g++ g++ /usr/bin/g++-9 && \
+ update-alternatives --install \
+ /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-11 60 --slave \
+ /usr/bin/clang++ clang++ /usr/bin/clang++-11 --slave \
+ /usr/bin/clang clang /usr/bin/clang-11 && \
+ apt clean -y
+
+# Generate locale
+RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && \
+ sed -i '/zh_CN.UTF-8/s/^# //g' /etc/locale.gen && \
+ locale-gen
+
+# Dependencies
+RUN apt install -y \
+ libicu-dev \
+ bison \
+ flex \
+ python3-dev \
+ libreadline-dev \
+ libgss-dev \
+ libssl-dev \
+ libpam0g-dev \
+ libxml2-dev \
+ libxslt1-dev \
+ libldap2-dev \
+ uuid-dev \
+ liblz4-dev \
+ libkrb5-dev \
+ gettext \
+ libxerces-c-dev \
+ tcl-dev \
+ libperl-dev \
+ libipc-run-perl \
+ libaio-dev \
+ libfuse-dev && \
+ apt clean -y
+
+# Tools
+RUN apt install -y \
+ iproute2 \
+ wget \
+ ccache \
+ sudo \
+ vim \
+ git \
+ cmake && \
+ apt clean -y
+
+# set to empty if GitHub is not barriered
+# ENV GITHUB_PROXY=https://ghproxy.com/
+ENV GITHUB_PROXY=
+
+ENV ZLOG_VERSION=1.2.14
+ENV PFSD_VERSION=pfsd4pg-release-1.2.42-20220419
+
+# install dependencies from GitHub mirror
+RUN cd /usr/local && \
+ # zlog for PFSD
+ wget --no-verbose --no-check-certificate "${GITHUB_PROXY}https://github.com/HardySimpson/zlog/archive/refs/tags/${ZLOG_VERSION}.tar.gz" && \
+ # PFSD
+ wget --no-verbose --no-check-certificate "${GITHUB_PROXY}https://github.com/ApsaraDB/PolarDB-FileSystem/archive/refs/tags/${PFSD_VERSION}.tar.gz" && \
+ # unzip and install zlog
+ gzip -d $ZLOG_VERSION.tar.gz && \
+ tar xpf $ZLOG_VERSION.tar && \
+ cd zlog-$ZLOG_VERSION && \
+ make && make install && \
+ echo '/usr/local/lib' >> /etc/ld.so.conf && ldconfig && \
+ cd .. && \
+ rm -rf $ZLOG_VERSION* && \
+ rm -rf zlog-$ZLOG_VERSION && \
+ # unzip and install PFSD
+ gzip -d $PFSD_VERSION.tar.gz && \
+ tar xpf $PFSD_VERSION.tar && \
+ cd PolarDB-FileSystem-$PFSD_VERSION && \
+ sed -i 's/-march=native //' CMakeLists.txt && \
+ ./autobuild.sh && ./install.sh && \
+ cd .. && \
+ rm -rf $PFSD_VERSION* && \
+ rm -rf PolarDB-FileSystem-$PFSD_VERSION*
+
+# create default user
+ENV USER_NAME=postgres
+RUN echo "create default user" && \
+ groupadd -r $USER_NAME && \
+ useradd -ms /bin/bash -g $USER_NAME $USER_NAME -p '' && \
+ usermod -aG sudo $USER_NAME
+
+# modify conf
+RUN echo "modify conf" && \
+ mkdir -p /var/log/pfs && chown $USER_NAME /var/log/pfs && \
+ mkdir -p /var/run/pfs && chown $USER_NAME /var/run/pfs && \
+ mkdir -p /var/run/pfsd && chown $USER_NAME /var/run/pfsd && \
+ mkdir -p /dev/shm/pfsd && chown $USER_NAME /dev/shm/pfsd && \
+ touch /var/run/pfsd/.pfsd && \
+ echo "ulimit -c unlimited" >> /home/postgres/.bashrc && \
+ echo "export PGHOST=127.0.0.1" >> /home/postgres/.bashrc && \
+ echo "alias pg='psql -h /home/postgres/tmp_master_dir_polardb_pg_1100_bld/'" >> /home/postgres/.bashrc
+
+ENV PATH="/home/postgres/tmp_basedir_polardb_pg_1100_bld/bin:$PATH"
+WORKDIR /home/$USER_NAME
+USER $USER_NAME
+
将上述内容复制到一个文件内(假设文件名为 Dockerfile-PolarDB
)后,使用如下命令构建镜像:
TIP
💡 请在下面的高亮行中按需替换 <image_name>
内的 Docker 镜像名称
docker build --network=host \
+ -t <image_name> \
+ -f Dockerfile-PolarDB .
+
该方式假设您从一台具有 root 权限的干净的 CentOS 7 操作系统上从零开始,可以是:
centos:centos7
上启动的 Docker 容器PolarDB for PostgreSQL 需要以非 root 用户运行。以下步骤能够帮助您创建一个名为 postgres
的用户组和一个名为 postgres
的用户。
TIP
如果您已经有了一个非 root 用户,但名称不是 postgres:postgres
,可以忽略该步骤;但请注意在后续示例步骤中将命令中用户相关的信息替换为您自己的用户组名与用户名。
下面的命令能够创建用户组 postgres
和用户 postgres
,并为该用户赋予 sudo 和工作目录的权限。需要以 root 用户执行这些命令。
# install sudo
+yum install -y sudo
+# create user and group
+groupadd -r postgres
+useradd -m -g postgres postgres -p ''
+usermod -aG wheel postgres
+# make postgres as sudoer
+chmod u+w /etc/sudoers
+echo 'postgres ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+chmod u-w /etc/sudoers
+# grant access to home directory
+chown -R postgres:postgres /home/postgres/
+echo 'source /etc/bashrc' >> /home/postgres/.bashrc
+# for su postgres
+sed -i 's/4096/unlimited/g' /etc/security/limits.d/20-nproc.conf
+
接下来,切换到 postgres
用户,就可以进行后续的步骤了:
su postgres
+source /etc/bashrc
+cd ~
+
在 PolarDB for PostgreSQL 源码库根目录的 deps/
子目录下,放置了用于在各个 Linux 发行版上安装 PolarDB 和 PFS 编译运行所需依赖的脚本。因此,首先需要克隆 PolarDB 的源码库。
PolarDB for PostgreSQL 的代码托管于 GitHub 上,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
sudo yum install -y git
+git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
sudo yum install -y git
+git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
源码下载完毕后,使用 sudo
执行 deps/
目录下的相应脚本 deps-***.sh
自动完成所有的依赖安装。比如:
cd PolarDB-for-PostgreSQL
+sudo ./deps/deps-centos7.sh
+
DANGER
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改为健壮的密码!
从 GitHub 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
从 DockerHub 上拉取 PolarDB for PostgreSQL 的 开发镜像。
# 拉取 PolarDB 开发镜像
+docker pull polardb/polardb_pg_devel
+
此时我们已经在开发机器的源码目录中。从开发镜像上创建一个容器,将当前目录作为一个 volume 挂载到容器中,这样可以:
docker run -it \
+ -v $PWD:/home/postgres/polardb_pg \
+ --shm-size=512m --cap-add=SYS_PTRACE --privileged=true \
+ --name polardb_pg_devel \
+ polardb/polardb_pg_devel \
+ bash
+
进入容器后,为容器内用户获取源码目录的权限,然后编译部署 PolarDB-PG 实例。
# 获取权限并编译部署
+cd polardb_pg
+sudo chmod -R a+wr ./
+sudo chown -R postgres:postgres ./
+./polardb_build.sh
+
+# 验证
+psql -h 127.0.0.1 -c 'select version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
以下表格列出了编译、初始化或测试 PolarDB-PG 集群所可能使用到的选项及说明。更多选项及其说明详见源码目录下的 polardb_build.sh
脚本。
选项 | 描述 | 默认值 |
---|---|---|
--withrep | 是否初始化只读节点 | NO |
--repnum | 只读节点数量 | 1 |
--withstandby | 是否初始化热备份节点 | NO |
--initpx | 是否初始化为 HTAP 集群(1 个读写节点,2 个只读节点) | NO |
--with-pfsd | 是否编译 PolarDB File System(PFS)相关功能 | NO |
--with-tde | 是否初始化 透明数据加密(TDE) 功能 | NO |
--with-dma | 是否初始化为 DMA(Data Max Availability)高可用三节点集群 | NO |
-r / -t / --regress | 在编译安装完毕后运行内核回归测试 | NO |
-r-px | 运行 HTAP 实例的回归测试 | NO |
-e /--extension | 运行扩展插件测试 | NO |
-r-external | 测试 external/ 下的扩展插件 | NO |
-r-contrib | 测试 contrib/ 下的扩展插件 | NO |
-r-pl | 测试 src/pl/ 下的扩展插件 | NO |
如无定制的需求,则可以按照下面给出的选项编译部署不同形态的 PolarDB-PG 集群并进行测试。
5432
端口)./polardb_build.sh
+
5432
端口)5433
端口)./polardb_build.sh --withrep --repnum=1
+
5432
端口)5433
端口)5434
端口)./polardb_build.sh --withrep --repnum=1 --withstandby
+
5432
端口)5433
/ 5434
端口)./polardb_build.sh --initpx
+
普通实例回归测试:
./polardb_build.sh --withrep -r -e -r-external -r-contrib -r-pl --with-tde
+
HTAP 实例回归测试:
./polardb_build.sh -r-px -e -r-external -r-contrib -r-pl --with-tde
+
DMA 实例回归测试:
./polardb_build.sh -r -e -r-external -r-contrib -r-pl --with-tde --with-dma
+
慎追、棠羽
2023/01/11
30 min
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其备份恢复和 PostgreSQL 存在部分差异。本文将指导您如何对 PolarDB for PostgreSQL 进行备份,并通过备份来搭建 Replica 节点或 Standby 节点。
PostgreSQL 的备份流程可以总结为以下几步:
backup_label
文件,其中包含基础备份的起始点位置CHECKPOINT
backup_label
文件备份 PostgreSQL 数据库最简便方法是使用 pg_basebackup
工具。
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其数据目录分为以下两类:
由于本地数据目录中的目录和文件不涉及数据库的核心数据,因此在备份数据库时,备份本地数据目录是可选的。可以仅备份共享存储上的数据目录,然后使用 initdb
重新生成新的本地存储目录。但是计算节点的本地配置文件需要被手动备份,如 postgresql.conf
、pg_hba.conf
等文件。
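下面是手动备份这些配置文件的一个简单示例(/backup/conf 为假设的备份路径,请按实际环境替换):
# 将读写节点本地数据目录中的配置文件拷贝到备份路径
+mkdir -p /backup/conf
+cp $HOME/primary/postgresql.conf \
+   $HOME/primary/pg_hba.conf \
+   $HOME/primary/postgresql.auto.conf \
+   /backup/conf/
+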
通过以下 SQL 命令可以查看节点的本地数据目录:
postgres=# SHOW data_directory;
+ data_directory
+------------------------
+ /home/postgres/primary
+(1 row)
+
本地数据目录类似于 PostgreSQL 的数据目录,大多数目录和文件都是通过 initdb
生成的。随着数据库服务的运行,本地数据目录中会产生更多的本地文件,如临时文件、缓存文件、配置文件、日志文件等。其结构如下:
$ tree ./ -L 1
+./
+├── base
+├── current_logfiles
+├── global
+├── pg_commit_ts
+├── pg_csnlog
+├── pg_dynshmem
+├── pg_hba.conf
+├── pg_ident.conf
+├── pg_log
+├── pg_logical
+├── pg_logindex
+├── pg_multixact
+├── pg_notify
+├── pg_replslot
+├── pg_serial
+├── pg_snapshots
+├── pg_stat
+├── pg_stat_tmp
+├── pg_subtrans
+├── pg_tblspc
+├── PG_VERSION
+├── pg_xact
+├── polar_cache_trash
+├── polar_dma.conf
+├── polar_fullpage
+├── polar_node_static.conf
+├── polar_rel_size_cache
+├── polar_shmem
+├── polar_shmem_stat_file
+├── postgresql.auto.conf
+├── postgresql.conf
+├── postmaster.opts
+└── postmaster.pid
+
+21 directories, 12 files
+
通过以下 SQL 命令可以查看所有计算节点在共享存储上的共享数据目录:
postgres=# SHOW polar_datadir;
+ polar_datadir
+-----------------------
+ /nvme1n1/shared_data/
+(1 row)
+
共享数据目录中存放 PolarDB for PostgreSQL 的核心数据文件,如表文件、索引文件、WAL 日志、DMA、LogIndex、Flashback Log 等。这些文件被所有节点共享,因此必须被备份。其结构如下:
$ sudo pfs -C disk ls /nvme1n1/shared_data/
+ Dir 1 512 Wed Jan 11 09:34:01 2023 base
+ Dir 1 7424 Wed Jan 11 09:34:02 2023 global
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_tblspc
+ Dir 1 512 Wed Jan 11 09:35:05 2023 pg_wal
+ Dir 1 384 Wed Jan 11 09:35:01 2023 pg_logindex
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_twophase
+ Dir 1 128 Wed Jan 11 09:34:02 2023 pg_xact
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_commit_ts
+ Dir 1 256 Wed Jan 11 09:34:03 2023 pg_multixact
+ Dir 1 0 Wed Jan 11 09:34:03 2023 pg_csnlog
+ Dir 1 256 Wed Jan 11 09:34:03 2023 polar_dma
+ Dir 1 512 Wed Jan 11 09:35:09 2023 polar_fullpage
+ File 1 32 Wed Jan 11 09:35:00 2023 RWID
+ Dir 1 256 Wed Jan 11 10:25:42 2023 pg_replslot
+ File 1 224 Wed Jan 11 10:19:37 2023 polar_non_exclusive_backup_label
+total 16384 (unit: 512Bytes)
+
PolarDB for PostgreSQL 的备份工具 polar_basebackup
,由 PostgreSQL 的 pg_basebackup
改造而来,完全兼容 pg_basebackup
,因此同样可以用于对 PostgreSQL 做备份恢复。polar_basebackup
的可执行文件位于 PolarDB for PostgreSQL 安装目录下的 bin/
目录中。
该工具的主要功能是将一个运行中的 PolarDB for PostgreSQL 数据库的数据目录(包括本地数据目录和共享数据目录)备份到目标目录中。
polar_basebackup takes a base backup of a running PostgreSQL server.
+
+Usage:
+ polar_basebackup [OPTION]...
+
+Options controlling the output:
+ -D, --pgdata=DIRECTORY receive base backup into directory
+ -F, --format=p|t output format (plain (default), tar)
+ -r, --max-rate=RATE maximum transfer rate to transfer data directory
+ (in kB/s, or use suffix "k" or "M")
+ -R, --write-recovery-conf
+ write recovery.conf for replication
+ -T, --tablespace-mapping=OLDDIR=NEWDIR
+ relocate tablespace in OLDDIR to NEWDIR
+ --waldir=WALDIR location for the write-ahead log directory
+ -X, --wal-method=none|fetch|stream
+ include required WAL files with specified method
+ -z, --gzip compress tar output
+ -Z, --compress=0-9 compress tar output with given compression level
+
+General options:
+ -c, --checkpoint=fast|spread
+ set fast or spread checkpointing
+ -C, --create-slot create replication slot
+ -l, --label=LABEL set backup label
+ -n, --no-clean do not clean up after errors
+ -N, --no-sync do not wait for changes to be written safely to disk
+ -P, --progress show progress information
+ -S, --slot=SLOTNAME replication slot to use
+ -v, --verbose output verbose messages
+ -V, --version output version information, then exit
+ --no-slot prevent creation of temporary replication slot
+ --no-verify-checksums
+ do not verify checksums
+ -?, --help show this help, then exit
+
+Connection options:
+ -d, --dbname=CONNSTR connection string
+ -h, --host=HOSTNAME database server host or socket directory
+ -p, --port=PORT database server port number
+ -s, --status-interval=INTERVAL
+ time between status packets sent to server (in seconds)
+ -U, --username=NAME connect as specified database user
+ -w, --no-password never prompt for password
+ -W, --password force password prompt (should happen automatically)
+ --polardata=datadir receive polar data backup into directory
+ --polar_disk_home=disk_home polar_disk_home for polar data backup
+ --polar_host_id=host_id polar_host_id for polar data backup
+ --polar_storage_cluster_name=cluster_name polar_storage_cluster_name for polar data backup
+
polar_basebackup
的参数及用法几乎和 pg_basebackup
一致,新增了以下与共享存储相关的参数:
--polar_disk_home
/ --polar_host_id
/ --polar_storage_cluster_name
:这三个参数指定了用于存放备份共享数据的共享存储节点--polardata
:该参数指定了备份共享存储节点上存放共享数据的路径;如不指定,则默认将共享数据备份到本地数据备份目录的 polar_shared_data/
路径下基础备份可用于搭建一个新的 Replica(RO)节点。如前文所述,一个正在运行中的 PolarDB for PostgreSQL 实例的数据文件分布在各计算节点的本地存储和存储节点的共享存储中。下面将说明如何使用 polar_basebackup
将实例的数据文件备份到一个本地磁盘上,并从这个备份上启动一个 Replica 节点。
首先,在将要部署 Replica 节点的机器上启动 PFSD 守护进程,挂载到正在运行中的共享存储的 PFS 文件系统上。后续启动的 Replica 节点将使用这个守护进程来访问共享存储。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
运行如下命令,将实例 Primary 节点的本地数据和共享数据备份到用于部署 Replica 节点的本地存储路径 /home/postgres/replica1
下:
polar_basebackup \
+ --host=[Primary节点所在IP] \
+ --port=[Primary节点所在端口号] \
+ -D /home/postgres/replica1 \
+ -X stream --progress --write-recovery-conf -v
+
将看到如下输出:
polar_basebackup: initiating base backup, waiting for checkpoint to complete
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16ADD60 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_359"
+851371/851371 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16ADE30
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+
备份完成后,可以以这个备份目录作为本地数据目录,启动一个新的 Replica 节点。由于本地数据目录中不需要共享存储上已有的共享数据文件,所以删除掉本地数据目录中的 polar_shared_data/
目录:
rm -rf ~/replica1/polar_shared_data
+
重新编辑 Replica 节点的配置文件 ~/replica1/postgresql.conf
:
-polar_hostid=1
++polar_hostid=2
+-synchronous_standby_names='replica1'
+
重新编辑 Replica 节点的复制配置文件 ~/replica1/recovery.conf
:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[Primary节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
启动 Replica 节点:
pg_ctl -D $HOME/replica1 start
+
在 Primary 节点上执行建表并插入数据,在 Replica 节点上可以查到 Primary 节点插入的数据:
$ psql -q \
+ -h [Primary节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \
+ -h [Replica节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
基础备份也可以用于搭建一个新的 Standby 节点。如下图所示,Standby 节点与 Primary / Replica 节点各自使用独立的共享存储,与 Primary 节点使用物理复制保持同步。Standby 节点可用于作为主共享存储的灾备。
假设此时用于部署 Standby 计算节点的机器已经准备好用于后备的共享存储 nvme2n1
:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:1 0 40G 0 disk
+└─nvme0n1p1 259:2 0 40G 0 part /etc/hosts
+nvme2n1 259:3 0 70G 0 disk
+nvme1n1 259:0 0 60G 0 disk
+
将这个共享存储格式化为 PFS 格式,并启动 PFSD 守护进程挂载到 PFS 文件系统:
sudo pfs -C disk mkfs nvme2n1
+sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme2n1 -w 2
+
在用于部署 Standby 节点的机器上执行备份,以 ~/standby
作为本地数据目录,以 /nvme2n1/shared_data
作为共享存储目录:
polar_basebackup \
+ --host=[Primary节点所在IP] \
+ --port=[Primary节点所在端口号] \
+ -D /home/postgres/standby \
+ --polardata=/nvme2n1/shared_data/ \
+ --polar_storage_cluster_name=disk \
+ --polar_disk_name=nvme2n1 \
+ --polar_host_id=3 \
+ -X stream --progress --write-recovery-conf -v
+
将会看到如下输出。其中,除了 polar_basebackup
的输出以外,还有 PFS 的输出日志:
[PFSD_SDK INF Jan 11 10:11:27.247112][99]pfs_mount_prepare 103: begin prepare mount cluster(disk), PBD(nvme2n1), hostid(3),flags(0x13)
+[PFSD_SDK INF Jan 11 10:11:27.247161][99]pfs_mount_prepare 165: pfs_mount_prepare success for nvme2n1 hostid 3
+[PFSD_SDK INF Jan 11 10:11:27.293900][99]chnl_connection_poll_shm 1238: ack data update s_mount_epoch 1
+[PFSD_SDK INF Jan 11 10:11:27.293912][99]chnl_connection_poll_shm 1266: connect and got ack data from svr, err = 0, mntid 0
+[PFSD_SDK INF Jan 11 10:11:27.293979][99]pfsd_sdk_init 191: pfsd_chnl_connect success
+[PFSD_SDK INF Jan 11 10:11:27.293987][99]pfs_mount_post 208: pfs_mount_post err : 0
+[PFSD_SDK ERR Jan 11 10:11:27.297257][99]pfsd_opendir 1437: opendir /nvme2n1/shared_data/ error: No such file or directory
+[PFSD_SDK INF Jan 11 10:11:27.297396][99]pfsd_mkdir 1320: mkdir /nvme2n1/shared_data
+polar_basebackup: initiating base backup, waiting for checkpoint to complete
+WARNING: a labelfile "/nvme1n1/shared_data//polar_non_exclusive_backup_label" is already on disk
+HINT: POLAR: we overwrite it
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16C91F8 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_373"
+...
+[PFSD_SDK INF Jan 11 10:11:32.992005][99]pfsd_open 539: open /nvme2n1/shared_data/polar_non_exclusive_backup_label with inode 6325, fd 0
+[PFSD_SDK INF Jan 11 10:11:32.993074][99]pfsd_open 539: open /nvme2n1/shared_data/global/pg_control with inode 8373, fd 0
+851396/851396 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16C9300
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+[PFSD_SDK INF Jan 11 10:11:52.378220][99]pfsd_umount_force 247: pbdname nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.378229][99]pfs_umount_prepare 269: pfs_umount_prepare. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404010][99]chnl_connection_release_shm 1164: client umount return : deleted /var/run/pfsd//nvme2n1/99.pid
+[PFSD_SDK INF Jan 11 10:11:52.404171][99]pfs_umount_post 281: pfs_umount_post. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404174][99]pfsd_umount_force 261: umount success for nvme2n1
+
上述命令会在当前机器的本地存储上备份 Primary 节点的本地数据目录,在参数指定的共享存储目录上备份共享数据目录。
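备份完成后,可以用 pfs 检查新的共享存储上是否已经生成了共享数据目录(仅为验证示例):
sudo pfs -C disk ls /nvme2n1/shared_data/
+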
重新编辑 Standby 节点的配置文件 ~/standby/postgresql.conf
:
-polar_hostid=1
++polar_hostid=3
+-polar_disk_name='nvme1n1'
+-polar_datadir='/nvme1n1/shared_data/'
++polar_disk_name='nvme2n1'
++polar_datadir='/nvme2n1/shared_data/'
+-synchronous_standby_names='replica1'
+
在 Standby 节点的复制配置文件 ~/standby/recovery.conf
中添加:
+recovery_target_timeline = 'latest'
++primary_slot_name = 'standby1'
+
在 Primary 节点上创建用于与 Standby 进行物理复制的复制槽:
$ psql \
+ --host=[Primary节点所在IP] --port=5432 \
+ -d postgres \
+ -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"
+ slot_name | lsn
+-----------+-----
+ standby1 |
+(1 row)
+
启动 Standby 节点:
pg_ctl -D $HOME/standby start
+
在 Primary 节点上创建表并插入数据,在 Standby 节点上可以查询到数据:
$ psql -q \
+ -h [Primary节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \
+ -h [Standby节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
棠羽
2022/10/12
15 min
在使用数据库时,随着数据量的逐渐增大,不可避免需要对数据库所使用的存储空间进行扩容。由于 PolarDB for PostgreSQL 基于共享存储与分布式文件系统 PFS 的架构设计,与安装部署时类似,在扩容时,需要在以下三个层面分别进行操作:
本文将指导您在以上三个层面上分别完成扩容操作,以实现不停止数据库实例的动态扩容。
首先需要进行的是块存储层面上的扩容。不管使用哪种类型的共享存储,存储层面扩容最终需要达成的目的是:在能够访问共享存储的主机上运行 lsblk
命令,显示存储块设备的物理空间变大。由于不同类型的共享存储有不同的扩容方式,本文以 阿里云 ECS + ESSD 云盘共享存储 为例演示如何进行存储层面的扩容。
另外,为保证后续扩容步骤的成功,请以 10GB 为单位进行扩容。
本示例中,在扩容之前,已有一个 20GB 的 ESSD 云盘多重挂载在两台 ECS 上。在这两台 ECS 上运行 lsblk
,可以看到 ESSD 云盘共享存储对应的块设备 nvme1n1
目前的物理空间为 20GB。
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 20G 0 disk
+
接下来对这块 ESSD 云盘进行扩容。在阿里云 ESSD 云盘的管理页面上,点击 云盘扩容:
进入到云盘扩容界面以后,可以看到该云盘已被两台 ECS 实例多重挂载。填写扩容后的容量,然后点击确认扩容,把 20GB 的云盘扩容为 40GB:
扩容成功后,将会看到如下提示:
此时,两台 ECS 上运行 lsblk
,可以看到 ESSD 对应块设备 nvme1n1
的物理空间已经变为 40GB:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 40G 0 disk
+
至此,块存储层面的扩容就完成了。
在物理块设备完成扩容以后,接下来需要使用 PFS 文件系统提供的工具,对块设备上扩大后的物理空间进行格式化,以完成文件系统层面的扩容。
在能够访问共享存储的 任意一台主机上 运行 PFS 的 growfs
命令,其中:
-o
表示共享存储扩容前的空间(以 10GB 为单位)-n
表示共享存储扩容后的空间(以 10GB 为单位)本例将共享存储从 20GB 扩容至 40GB,所以参数分别填写 2
和 4
:
$ sudo pfs -C disk growfs -o 2 -n 4 nvme1n1
+
+...
+
+Init chunk 2
+ metaset 2/1: sectbda 0x500001000, npage 80, objsize 128, nobj 2560, oid range [ 2000, 2a00)
+ metaset 2/2: sectbda 0x500051000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+ metaset 2/3: sectbda 0x500091000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+
+Init chunk 3
+ metaset 3/1: sectbda 0x780001000, npage 80, objsize 128, nobj 2560, oid range [ 3000, 3a00)
+ metaset 3/2: sectbda 0x780051000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+ metaset 3/3: sectbda 0x780091000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+
+pfs growfs succeeds!
+
如果看到上述输出,说明文件系统层面的扩容已经完成。
最后,在数据库实例层,扩容需要做的工作是执行 SQL 函数来通知每个实例上已经挂载到共享存储的 PFSD(PFS Daemon)守护进程,告知共享存储上的新空间已经可以被使用了。需要注意的是,数据库实例集群中的 所有 PFSD 都需要被通知到,并且需要 先通知所有 RO 节点上的 PFSD,最后通知 RW 节点上的 PFSD。这意味着我们需要在 每一个 PolarDB for PostgreSQL 节点上执行一次通知 PFSD 的 SQL 函数,并且 RO 节点在先,RW 节点在后。
数据库实例层通知 PFSD 的扩容函数实现在 PolarDB for PostgreSQL 的 polar_vfs
插件中,所以首先需要在 RW 节点 上加载 polar_vfs
插件。在加载插件的过程中,会在 RW 节点和所有 RO 节点上注册好 polar_vfs_disk_expansion
这个 SQL 函数。
CREATE EXTENSION IF NOT EXISTS polar_vfs;
+
接下来,依次 在所有的 RO 节点上,再到 RW 节点上 分别 执行这个 SQL 函数。其中函数的参数名为块设备名:
SELECT polar_vfs_disk_expansion('nvme1n1');
+
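如果集群中有多个计算节点,可以按照“先 RO、后 RW”的顺序依次执行。下面是一个按端口区分本机各节点的示意(端口号为假设值,节点位于不同主机时还需加上 -h 指定地址):
# 假设 RO 节点分别监听 5433、5434 端口,RW 节点监听 5432 端口
+for port in 5433 5434 5432; do
+    psql -p $port -d postgres -c "SELECT polar_vfs_disk_expansion('nvme1n1');"
+done
+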
执行完毕后,数据库实例层面的扩容也就完成了。此时,新的存储空间已经能够被数据库使用了。
棠羽
2022/12/25
15 min
PolarDB for PostgreSQL 是一款存储与计算分离的云原生数据库,所有计算节点共享一份存储,并且对存储的访问具有 一写多读 的限制:所有计算节点可以对存储进行读取,但只有一个计算节点可以对存储进行写入。这种限制会带来一个问题:当读写节点因为宕机或网络故障而不可用时,集群中将没有能够写入存储的计算节点,应用业务中的增、删、改,以及 DDL 都将无法运行。
本文将指导您在 PolarDB for PostgreSQL 计算集群中的读写节点停止服务时,将任意一个只读节点在线提升为读写节点,从而使集群恢复对于共享存储的写入能力。
为方便起见,本示例使用基于本地磁盘的实例来进行演示。拉取如下镜像并启动容器,可以得到一个基于本地磁盘的 HTAP 实例:
docker pull polardb/polardb_pg_local_instance
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg_htap \
+ --shm-size=512m \
+ polardb/polardb_pg_local_instance \
+ bash
+
容器内的 5432
至 5434
端口分别运行着一个读写节点和两个只读节点。两个只读节点与读写节点共享同一份数据,并通过物理复制保持与读写节点的内存状态同步。
首先,连接到读写节点,创建一张表并插入一些数据:
psql -p5432
+
postgres=# CREATE TABLE t (id int);
+CREATE TABLE
+postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
然后连接到只读节点,并同样试图对表插入数据,将会发现无法进行插入操作:
psql -p5433
+
postgres=# INSERT INTO t SELECT generate_series(1,10);
+ERROR: cannot execute INSERT in a read-only transaction
+
此时,关闭读写节点,模拟出读写节点不可用的行为:
$ pg_ctl -D ~/tmp_master_dir_polardb_pg_1100_bld/ stop
+waiting for server to shut down.... done
+server stopped
+
此时,集群中没有任何节点可以写入存储了。这时,我们需要将一个只读节点提升为读写节点,恢复对存储的写入。
只有当读写节点停止写入后,才可以将只读节点提升为读写节点,否则将会出现集群内两个节点同时写入的情况。当数据库检测到多节点写入时,将会产生运行异常。
将运行在 5433
端口的只读节点提升为读写节点:
$ pg_ctl -D ~/tmp_replica_dir_polardb_pg_1100_bld1/ promote
+waiting for server to promote.... done
+server promoted
+
连接到已经完成 promote 的新读写节点上,再次尝试之前的 INSERT
操作:
postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
从上述结果中可以看到,新的读写节点能够成功对存储进行写入。这说明原先的只读节点已经被成功提升为读写节点了。
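此外,也可以通过 pg_is_in_recovery() 函数确认节点角色,返回 f 表示该节点已不再处于只读恢复模式:
psql -p5433 -c "SELECT pg_is_in_recovery();"
+# 预期输出为 f,说明该节点已经是读写节点
+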
棠羽
2022/12/19
30 min
PolarDB for PostgreSQL 是一款存储与计算分离的数据库,所有计算节点共享存储,并可以按需要弹性增加或删减计算节点而无需做任何数据迁移。所以本教程将协助您在共享存储集群上添加或删除计算节点。
首先,在已经搭建完毕的共享存储集群上,初始化并启动第一个计算节点,即读写节点,该节点可以对共享存储进行读写。我们在下面的镜像中提供了已经编译完毕的 PolarDB for PostgreSQL 内核和周边工具的可执行文件:
$ docker pull polardb/polardb_pg_binary
+$ docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
+$ ls ~/tmp_basedir_polardb_pg_1100_bld/bin/
+clusterdb dropuser pg_basebackup pg_dump pg_resetwal pg_test_timing polar-initdb.sh psql
+createdb ecpg pgbench pg_dumpall pg_restore pg_upgrade polar-replica-initdb.sh reindexdb
+createuser initdb pg_config pg_isready pg_rewind pg_verify_checksums polar_tools vacuumdb
+dbatools.sql oid2name pg_controldata pg_receivewal pg_standby pg_waldump postgres vacuumlo
+dropdb pg_archivecleanup pg_ctl pg_recvlogical pg_test_fsync polar_basebackup postmaster
+
使用 lsblk
命令确认存储集群已经能够被当前机器访问到。比如,如下示例中的 nvme1n1
是将要使用的共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
此时,共享存储上没有任何内容。使用容器内的 PFS 工具将共享存储格式化为 PFS 文件系统的格式:
sudo pfs -C disk mkfs nvme1n1
+
格式化完成后,在当前容器内启动 PFS 守护进程,挂载到文件系统上。该守护进程后续将会被计算节点用于访问共享存储:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
使用 initdb
在节点本地存储的 ~/primary
路径上创建本地数据目录。本地数据目录中将会存放节点的配置、审计日志等节点私有的信息:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
使用 PFS 工具,在共享存储上创建一个共享数据目录;使用 polar-initdb.sh
脚本把将会被所有节点共享的数据文件拷贝到共享存储的数据目录中。将会被所有节点共享的文件包含所有的表文件、WAL 日志文件等:
sudo pfs -C disk mkdir /nvme1n1/shared_data
+
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /nvme1n1/shared_data/
+
对读写节点的配置文件 ~/primary/postgresql.conf
进行修改,使数据库以共享模式启动,并能够找到共享存储上的数据目录:
port=5432
+polar_hostid=1
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,允许来自所有地址的客户端以 postgres
用户进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
使用以下命令启动读写节点,并检查节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/primary start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
接下来,在已经有一个读写节点的计算集群中扩容一个新的计算节点。由于 PolarDB for PostgreSQL 是一写多读的架构,所以后续扩容的节点只可以对共享存储进行读取,但无法对共享存储进行写入。只读节点通过与读写节点进行物理复制来保持内存状态的同步。
类似地,在用于部署新计算节点的机器上,拉取镜像并启动带有可执行文件的容器:
docker pull polardb/polardb_pg_binary
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
确保部署只读节点的机器也可以访问到共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
由于此时共享存储已经被读写节点格式化为 PFS 格式了,因此这里无需再次进行格式化。只需要启动 PFS 守护进程完成挂载即可:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置文件 ~/replica1/postgresql.conf
,配置好只读节点的集群标识和监听端口,以及与读写节点相同的共享存储目录:
port=5432
+polar_hostid=2
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
编辑只读节点的复制配置文件 ~/replica1/recovery.conf
,配置好当前节点的角色(只读),以及从读写节点进行物理复制的连接串和复制槽:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+primary_slot_name='replica1'
+
由于读写节点上暂时还没有名为 replica1
的复制槽,所以需要连接到读写节点上,创建这个复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
完成上述步骤后,启动只读节点并验证:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
连接到读写节点上,创建一个表并插入数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t(id INT); INSERT INTO t SELECT generate_series(1,10);"
+
在只读节点上可以立刻查询到从读写节点上插入的数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ id
+----
+ 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+ 10
+(10 rows)
+
从读写节点上可以看到用于与只读节点进行物理复制的复制槽已经处于活跃状态:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM pg_replication_slots;"
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ replica1 | | physical | | | f | t | 45 | | | 0/4079E8E8 |
+(1 rows)
+
依次类推,使用类似的方法还可以横向扩容更多的只读节点。
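例如,若要再扩容一个名为 replica2 的只读节点,大体步骤如下(仅为示意,主机、端口与路径请按实际环境调整):
# 1. 在新机器上启动 PFSD 挂载共享存储,并用 polar-replica-initdb.sh 初始化本地目录 ~/replica2
+# 2. 在 ~/replica2/postgresql.conf 中将 polar_hostid 改为一个未被占用的编号(如 3)
+# 3. 在 ~/replica2/recovery.conf 中将 application_name 与 primary_slot_name 改为 replica2
+# 4. 在读写节点上为其创建新的复制槽:
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -p 5432 -d postgres \
+    -c "SELECT pg_create_physical_replication_slot('replica2');"
+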
集群缩容的步骤较为简单:将只读节点停机即可。
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 stop
+
在只读节点停机后,读写节点上的复制槽将变为非活跃状态。非活跃的复制槽将会阻止 WAL 日志的回收,所以需要及时清理。
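移除之前,可以先在读写节点上确认该复制槽确实已处于非活跃状态(active 列为 f):
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -p 5432 -d postgres \
+    -c "SELECT slot_name, active FROM pg_replication_slots;"
+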
在读写节点上执行如下命令,移除名为 replica1
的复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_drop_replication_slot('replica1');"
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
棠羽
2023/04/11
15 min
本文将引导您对 PolarDB for PostgreSQL 进行 TPC-C 测试。
TPC(Transaction Processing Performance Council)制定了一系列事务处理和数据库基准测试规范,其中 TPC-C 是针对 OLTP(联机事务处理)场景的基准测试模型。TPC-C 测试模型给基准测试提供了一种统一的测试标准,可以较为全面地反映数据库服务的稳定性与性能。对数据库展开 TPC-C 基准性能测试,一方面可以衡量数据库的性能,另一方面可以衡量采用不同软硬件系统的性价比,是被业内广泛应用并关注的一种测试模型。
参考如下教程部署 PolarDB for PostgreSQL:
BenchmarkSQL 依赖 Java 运行环境与 Maven 包管理工具,需要预先安装。拉取 BenchmarkSQL 工具源码并进入目录后,通过 mvn
编译工程:
$ git clone https://github.com/pgsql-io/benchmarksql.git
+$ cd benchmarksql
+$ mvn
+
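上述编译过程依赖 Java 运行环境与 Maven。如果机器上尚未安装,可以参考下面的命令进行安装(包名以 CentOS 系发行版为例,仅供参考,请按实际发行版调整):
sudo yum install -y java-11-openjdk-devel maven
+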
编译出的工具位于如下目录中:
$ cd target/run
+
在编译完毕的工具目录下,将会存在面向不同数据库产品的示例配置:
$ ls | grep sample
+sample.firebird.properties
+sample.mariadb.properties
+sample.oracle.properties
+sample.postgresql.properties
+sample.transact-sql.properties
+
其中,sample.postgresql.properties
包含 PostgreSQL 系列数据库的模板参数,可以基于这个模板来修改并自定义配置。参考 BenchmarkSQL 工具的 文档 可以查看关于配置项的详细描述。
配置项包含的配置类型有:
使用 runDatabaseBuild.sh
脚本,以配置文件作为参数,产生和导入测试数据:
./runDatabaseBuild.sh sample.postgresql.properties
+
通常,在正式测试前会进行一次数据预热:
./runBenchmark.sh sample.postgresql.properties
+
预热完毕后,再次运行同样的命令进行正式测试:
./runBenchmark.sh sample.postgresql.properties
+
_____ latency (seconds) _____
+ TransType count | mix % | mean max 90th% | rbk% errors
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+| NEW_ORDER | 635 | 44.593 | 0.006 | 0.012 | 0.008 | 1.102 | 0 |
+| PAYMENT | 628 | 44.101 | 0.001 | 0.006 | 0.002 | 0.000 | 0 |
+| ORDER_STATUS | 58 | 4.073 | 0.093 | 0.168 | 0.132 | 0.000 | 0 |
+| STOCK_LEVEL | 52 | 3.652 | 0.035 | 0.044 | 0.041 | 0.000 | 0 |
+| DELIVERY | 51 | 3.581 | 0.000 | 0.001 | 0.001 | 0.000 | 0 |
+| DELIVERY_BG | 51 | 0.000 | 0.018 | 0.023 | 0.020 | 0.000 | 0 |
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+
+Overall NOPM: 635 (98.76% of the theoretical maximum)
+Overall TPM: 1,424
+
另外也有 CSV 形式的结果被保存,从输出日志中可以找到结果存放目录。
棠羽
2023/04/12
20 min
本文将引导您对 PolarDB for PostgreSQL 进行 TPC-H 测试。
TPC-H 是专门用于评测数据库分析型(OLAP)场景性能的基准测试,由标准数据集和 22 条复杂分析查询组成。
使用 Docker 快速拉起一个基于本地存储的 PolarDB for PostgreSQL 集群:
docker pull polardb/polardb_pg_local_instance
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg_htap \
+ --shm-size=512m \
+ polardb/polardb_pg_local_instance \
+ bash
+
或者参考 进阶部署 部署一个基于共享存储的 PolarDB for PostgreSQL 集群。
通过 tpch-dbgen 工具来生成测试数据。
$ git clone https://github.com/ApsaraDB/tpch-dbgen.git
+$ cd tpch-dbgen
+$ ./build.sh --help
+
+ 1) Use default configuration to build
+ ./build.sh
+ 2) Use limited configuration to build
+ ./build.sh --user=postgres --db=postgres --host=localhost --port=5432 --scale=1
+ 3) Run the test case
+ ./build.sh --run
+ 4) Run the target test case
+ ./build.sh --run=3. run the 3rd case.
+ 5) Run the target test case with option
+ ./build.sh --run --option="set polar_enable_px = on;"
+ 6) Clean the test data. This step will drop the database or tables, remove csv
+ and tbl files
+ ./build.sh --clean
+ 7) Quick build TPC-H with 100MB scale of data
+ ./build.sh --scale=0.1
+
通过设置不同的参数,可以定制化地创建不同规模的 TPC-H 数据集。build.sh
脚本中各个参数的含义如下:
--user
:数据库用户名
--db
:数据库名
--host
:数据库主机地址
--port
:数据库服务端口
--run
:执行所有 TPC-H 查询,或执行某条特定的 TPC-H 查询
--option
:额外指定 GUC 参数
--scale
:生成 TPC-H 数据集的规模,单位为 GB
该脚本没有提供输入数据库密码的参数,需要通过设置 PGPASSWORD
为数据库用户的数据库密码来完成认证:
export PGPASSWORD=<your password>
+
生成并导入 100MB 规模的 TPC-H 数据:
./build.sh --scale=0.1
+
生成并导入 1GB 规模的 TPC-H 数据:
./build.sh
+
以 TPC-H 的 Q18 为例,执行 PostgreSQL 的单机并行查询,并观测查询速度。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
-- 打开计时
+\timing on
+
+-- 设置单机并行度
+SET max_parallel_workers_per_gather = 2;
+
+-- 查看 Q18 的执行计划
+\i finals/18.explain.sql
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Sort (cost=3450834.75..3450835.42 rows=268 width=81)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate
+ -> GroupAggregate (cost=3450817.91..3450823.94 rows=268 width=81)
+ Group Key: customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=3450817.91..3450818.58 rows=268 width=67)
+ Sort Key: customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=1501454.20..3450807.10 rows=268 width=67)
+ Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
+ -> Seq Scan on lineitem (cost=0.00..1724402.52 rows=59986052 width=22)
+ -> Hash (cost=1501453.37..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.85..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.43..1501084.65 rows=67 width=34)
+ -> Finalize GroupAggregate (cost=1500464.99..1500517.66 rows=67 width=4)
+ Group Key: lineitem_1.l_orderkey
+ Filter: (sum(lineitem_1.l_quantity) > '314'::numeric)
+ -> Gather Merge (cost=1500464.99..1500511.66 rows=400 width=36)
+ Workers Planned: 2
+ -> Sort (cost=1499464.97..1499465.47 rows=200 width=36)
+ Sort Key: lineitem_1.l_orderkey
+ -> Partial HashAggregate (cost=1499454.82..1499457.32 rows=200 width=36)
+ Group Key: lineitem_1.l_orderkey
+ -> Parallel Seq Scan on lineitem lineitem_1 (cost=0.00..1374483.88 rows=24994188 width=22)
+ -> Index Scan using orders_pkey on orders (cost=0.43..8.45 rows=1 width=30)
+ Index Cond: (o_orderkey = lineitem_1.l_orderkey)
+ -> Index Scan using customer_pkey on customer (cost=0.43..5.50 rows=1 width=23)
+ Index Cond: (c_custkey = orders.o_custkey)
+(26 rows)
+
+Time: 3.965 ms
+
+-- 执行 Q18
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 80150.449 ms (01:20.150)
+
PolarDB for PostgreSQL 提供了弹性跨机并行查询(ePQ)的能力,非常适合进行分析型查询。下面的步骤将引导您在一台主机上使用 ePQ 并行执行 TPC-H 查询。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
首先需要对 TPC-H 产生的八张表设置 ePQ 的最大查询并行度:
ALTER TABLE nation SET (px_workers = 100);
+ALTER TABLE region SET (px_workers = 100);
+ALTER TABLE supplier SET (px_workers = 100);
+ALTER TABLE part SET (px_workers = 100);
+ALTER TABLE partsupp SET (px_workers = 100);
+ALTER TABLE customer SET (px_workers = 100);
+ALTER TABLE orders SET (px_workers = 100);
+ALTER TABLE lineitem SET (px_workers = 100);
+
以 Q18 为例,执行查询:
-- 打开计时
+\timing on
+
+-- 打开 ePQ 功能的开关
+SET polar_enable_px = ON;
+-- 设置每个节点的 ePQ 并行度为 1
+SET polar_px_dop_per_node = 1;
+
+-- 查看 Q18 的执行计划
+\i finals/18.explain.sql
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..257526.21 rows=59986052 width=47)
+ Merge Key: orders.o_totalprice, orders.o_orderdate
+ -> GroupAggregate (cost=0.00..243457.68 rows=29993026 width=47)
+ Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=0.00..241257.18 rows=29993026 width=47)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=0.00..42729.99 rows=29993026 width=47)
+ Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
+ -> PX Hash 2:2 (slice2; segments: 2) (cost=0.00..15959.71 rows=7500000 width=39)
+ Hash Key: orders.o_orderkey
+ -> Hash Join (cost=0.00..15044.19 rows=7500000 width=39)
+ Hash Cond: (orders.o_custkey = customer.c_custkey)
+ -> PX Hash 2:2 (slice3; segments: 2) (cost=0.00..11561.51 rows=7500000 width=20)
+ Hash Key: orders.o_custkey
+ -> Hash Semi Join (cost=0.00..11092.01 rows=7500000 width=20)
+ Hash Cond: (orders.o_orderkey = lineitem.l_orderkey)
+ -> Partial Seq Scan on orders (cost=0.00..1132.25 rows=7500000 width=20)
+ -> Hash (cost=7760.84..7760.84 rows=400 width=4)
+ -> PX Broadcast 2:2 (slice4; segments: 2) (cost=0.00..7760.84 rows=400 width=4)
+ -> Result (cost=0.00..7760.80 rows=200 width=4)
+ Filter: ((sum(lineitem.l_quantity)) > '314'::numeric)
+ -> Finalize HashAggregate (cost=0.00..7760.78 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> PX Hash 2:2 (slice5; segments: 2) (cost=0.00..7760.72 rows=500 width=12)
+ Hash Key: lineitem.l_orderkey
+ -> Partial HashAggregate (cost=0.00..7760.70 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> Partial Seq Scan on lineitem (cost=0.00..3350.82 rows=29993026 width=12)
+ -> Hash (cost=597.51..597.51 rows=749979 width=23)
+ -> PX Hash 2:2 (slice6; segments: 2) (cost=0.00..597.51 rows=749979 width=23)
+ Hash Key: customer.c_custkey
+ -> Partial Seq Scan on customer (cost=0.00..511.44 rows=749979 width=23)
+ -> Hash (cost=5146.80..5146.80 rows=29993026 width=12)
+ -> PX Hash 2:2 (slice7; segments: 2) (cost=0.00..5146.80 rows=29993026 width=12)
+ Hash Key: lineitem_1.l_orderkey
+ -> Partial Seq Scan on lineitem lineitem_1 (cost=0.00..3350.82 rows=29993026 width=12)
+ Optimizer: PolarDB PX Optimizer
+(37 rows)
+
+Time: 216.672 ms
+
+-- 执行 Q18
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 59113.965 ms (00:59.114)
+
可以看到,ePQ 的执行时间比 PostgreSQL 单机并行执行略短。加大 ePQ 的单节点并行度后,查询性能将会有更明显的提升:
SET polar_px_dop_per_node = 2;
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 42400.500 ms (00:42.401)
+
+SET polar_px_dop_per_node = 4;
+\i finals/18.sql
+
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 19892.603 ms (00:19.893)
+
+SET polar_px_dop_per_node = 8;
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 10944.402 ms (00:10.944)
+
使用 ePQ 执行 Q17 和 Q18 时可能会出现 OOM。需要设置以下参数防止用尽内存:
SET polar_px_optimizer_enable_hashagg = 0;
+
在上面的例子中,出于简单考虑,PolarDB for PostgreSQL 的多个计算节点被部署在同一台主机上。在这种场景下使用 ePQ 时,由于所有的计算节点都使用了同一台主机的 CPU、内存、I/O 带宽,因此本质上是基于单台主机的并行执行。实际上,PolarDB for PostgreSQL 的计算节点可以被部署在能够共享存储节点的多台机器上。此时使用 ePQ 功能将进行真正的跨机器分布式并行查询,能够充分利用多台机器上的计算资源。
参考 进阶部署 可以搭建起不同形态的 PolarDB for PostgreSQL 集群。集群搭建成功后,使用 ePQ 的方式与单机 ePQ 完全相同。
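一个仅作示意的示例(节点名称为假设值,实际名称可通过 polar_monitor 插件查询):在多机部署的集群中,可以通过 polar_px_nodes 指定参与 ePQ 的只读节点,并配合 polar_px_dop_per_node 控制每个节点的并行度:
SET polar_enable_px = ON;
+-- 指定参与 ePQ 的只读节点(名称为假设值)
+SET polar_px_nodes = 'node1,node2';
+-- 每个节点的并行度
+SET polar_px_dop_per_node = 8;
+\i finals/18.sql
+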
如果遇到如下错误:
psql:queries/q01.analyze.sq1:24: WARNING: interconnect may encountered a network error, please check your network
+DETAIL: Failed to send packet (seq 1) to 192.168.1.8:57871 (pid 17766 cid 0) after 100 retries.
+
可以尝试统一修改每台机器的 MTU 为 9000:
ifconfig <网卡名> mtu 9000
+
棠羽
2022/06/20
15 min
PostgreSQL 在优化器中为一个查询树输出一个执行效率最高的物理计划树。其中,执行效率高低的衡量是通过代价估算实现的。比如通过估算查询返回元组的条数,和元组的宽度,就可以计算出 I/O 开销;也可以根据将要执行的物理操作估算出可能需要消耗的 CPU 代价。优化器通过系统表 pg_statistic
获得这些在代价估算过程需要使用到的关键统计信息,而 pg_statistic
系统表中的统计信息又是通过自动或手动的 ANALYZE
操作(或 VACUUM
)计算得到的。ANALYZE
将会扫描表中的数据并按列进行分析,将得到的诸如每列的数据分布、最常见值、频率等统计信息写入系统表。
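在进入源码之前,先通过一个最小的示例直观感受一下(表名与数据仅作演示假设):手动执行 ANALYZE 后,可以通过 pg_stats 视图(pg_statistic 的易读版本)看到收集好的统计信息:
CREATE TABLE stats_demo(id INT, name TEXT);
+INSERT INTO stats_demo SELECT i, 'name_' || (i % 100) FROM generate_series(1, 10000) i;
+ANALYZE stats_demo;
+-- pg_stats 是 pg_statistic 的可读视图
+SELECT attname, null_frac, avg_width, n_distinct
+    FROM pg_stats WHERE tablename = 'stats_demo';
+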
本文从源码的角度分析一下 ANALYZE
操作的实现机制。源码使用目前 PostgreSQL 最新的稳定版本 PostgreSQL 14。
首先,我们应当搞明白分析操作的输出是什么。所以我们可以看一看 pg_statistic
中有哪些列,每个列的含义是什么。这个系统表中的每一行,对应着数据库中某个表(或索引)的某一列的统计信息。
postgres=# \d+ pg_statistic
+ Table "pg_catalog.pg_statistic"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+-------------+----------+-----------+----------+---------+----------+--------------+-------------
+ starelid | oid | | not null | | plain | |
+ staattnum | smallint | | not null | | plain | |
+ stainherit | boolean | | not null | | plain | |
+ stanullfrac | real | | not null | | plain | |
+ stawidth | integer | | not null | | plain | |
+ stadistinct | real | | not null | | plain | |
+ stakind1 | smallint | | not null | | plain | |
+ stakind2 | smallint | | not null | | plain | |
+ stakind3 | smallint | | not null | | plain | |
+ stakind4 | smallint | | not null | | plain | |
+ stakind5 | smallint | | not null | | plain | |
+ staop1 | oid | | not null | | plain | |
+ staop2 | oid | | not null | | plain | |
+ staop3 | oid | | not null | | plain | |
+ staop4 | oid | | not null | | plain | |
+ staop5 | oid | | not null | | plain | |
+ stanumbers1 | real[] | | | | extended | |
+ stanumbers2 | real[] | | | | extended | |
+ stanumbers3 | real[] | | | | extended | |
+ stanumbers4 | real[] | | | | extended | |
+ stanumbers5 | real[] | | | | extended | |
+ stavalues1 | anyarray | | | | extended | |
+ stavalues2 | anyarray | | | | extended | |
+ stavalues3 | anyarray | | | | extended | |
+ stavalues4 | anyarray | | | | extended | |
+ stavalues5 | anyarray | | | | extended | |
+Indexes:
+ "pg_statistic_relid_att_inh_index" UNIQUE, btree (starelid, staattnum, stainherit)
+
/* ----------------
+ * pg_statistic definition. cpp turns this into
+ * typedef struct FormData_pg_statistic
+ * ----------------
+ */
+CATALOG(pg_statistic,2619,StatisticRelationId)
+{
+ /* These fields form the unique key for the entry: */
+ Oid starelid BKI_LOOKUP(pg_class); /* relation containing
+ * attribute */
+ int16 staattnum; /* attribute (column) stats are for */
+ bool stainherit; /* true if inheritance children are included */
+
+ /* the fraction of the column's entries that are NULL: */
+ float4 stanullfrac;
+
+ /*
+ * stawidth is the average width in bytes of non-null entries. For
+ * fixed-width datatypes this is of course the same as the typlen, but for
+ * var-width types it is more useful. Note that this is the average width
+ * of the data as actually stored, post-TOASTing (eg, for a
+ * moved-out-of-line value, only the size of the pointer object is
+ * counted). This is the appropriate definition for the primary use of
+ * the statistic, which is to estimate sizes of in-memory hash tables of
+ * tuples.
+ */
+ int32 stawidth;
+
+ /* ----------------
+ * stadistinct indicates the (approximate) number of distinct non-null
+ * data values in the column. The interpretation is:
+ * 0 unknown or not computed
+ * > 0 actual number of distinct values
+ * < 0 negative of multiplier for number of rows
+ * The special negative case allows us to cope with columns that are
+ * unique (stadistinct = -1) or nearly so (for example, a column in which
+ * non-null values appear about twice on the average could be represented
+ * by stadistinct = -0.5 if there are no nulls, or -0.4 if 20% of the
+ * column is nulls). Because the number-of-rows statistic in pg_class may
+ * be updated more frequently than pg_statistic is, it's important to be
+ * able to describe such situations as a multiple of the number of rows,
+ * rather than a fixed number of distinct values. But in other cases a
+ * fixed number is correct (eg, a boolean column).
+ * ----------------
+ */
+ float4 stadistinct;
+
+ /* ----------------
+ * To allow keeping statistics on different kinds of datatypes,
+ * we do not hard-wire any particular meaning for the remaining
+ * statistical fields. Instead, we provide several "slots" in which
+ * statistical data can be placed. Each slot includes:
+ * kind integer code identifying kind of data (see below)
+ * op OID of associated operator, if needed
+ * coll OID of relevant collation, or 0 if none
+ * numbers float4 array (for statistical values)
+ * values anyarray (for representations of data values)
+ * The ID, operator, and collation fields are never NULL; they are zeroes
+ * in an unused slot. The numbers and values fields are NULL in an
+ * unused slot, and might also be NULL in a used slot if the slot kind
+ * has no need for one or the other.
+ * ----------------
+ */
+
+ int16 stakind1;
+ int16 stakind2;
+ int16 stakind3;
+ int16 stakind4;
+ int16 stakind5;
+
+ Oid staop1 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop2 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop3 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop4 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop5 BKI_LOOKUP_OPT(pg_operator);
+
+ Oid stacoll1 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll2 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll3 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll4 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll5 BKI_LOOKUP_OPT(pg_collation);
+
+#ifdef CATALOG_VARLEN /* variable-length fields start here */
+ float4 stanumbers1[1];
+ float4 stanumbers2[1];
+ float4 stanumbers3[1];
+ float4 stanumbers4[1];
+ float4 stanumbers5[1];
+
+ /*
+ * Values in these arrays are values of the column's data type, or of some
+ * related type such as an array element type. We presently have to cheat
+ * quite a bit to allow polymorphic arrays of this kind, but perhaps
+ * someday it'll be a less bogus facility.
+ */
+ anyarray stavalues1;
+ anyarray stavalues2;
+ anyarray stavalues3;
+ anyarray stavalues4;
+ anyarray stavalues5;
+#endif
+} FormData_pg_statistic;
+
从数据库命令行的角度和内核 C 代码的角度来看,统计信息的内容都是一致的。所有的属性都以 sta
开头。其中:
starelid
表示当前列所属的表或索引
staattnum
表示本行统计信息属于上述表或索引中的第几列
stainherit
表示统计信息是否包含子列
stanullfrac
表示该列中值为 NULL 的行数比例
stawidth
表示该列非空值的平均宽度
stadistinct
表示列中非空值的唯一值数量:0 表示未知或未计算;> 0 表示唯一值的实际数量;< 0 表示唯一值数量与总行数之比的负值(以比例形式记录,便于在表行数变化后继续使用)
由于不同数据类型所能够被计算的统计信息可能会有一些细微的差别,在接下来的字段中,PostgreSQL 预留了一些存放统计信息的 槽(slots)。目前的内核里暂时预留了五个槽:
#define STATISTIC_NUM_SLOTS 5
+
每一种特定的统计信息可以使用一个槽,具体在槽里放什么完全由这种统计信息的定义自由决定。每一个槽的可用空间包含这么几个部分(其中的 N
表示槽的编号,取值为 1
到 5
):
stakindN
:标识这种统计信息的整数编号
staopN
:用于计算或使用统计信息的运算符 OID
stacollN
:排序规则 OID
stanumbersN
:浮点数数组
stavaluesN
:任意值数组
PostgreSQL 内核中规定,统计信息的编号 1
至 99
被保留给 PostgreSQL 核心统计信息使用,其它部分的编号安排如内核注释所示:
/*
+ * The present allocation of "kind" codes is:
+ *
+ * 1-99: reserved for assignment by the core PostgreSQL project
+ * (values in this range will be documented in this file)
+ * 100-199: reserved for assignment by the PostGIS project
+ * (values to be documented in PostGIS documentation)
+ * 200-299: reserved for assignment by the ESRI ST_Geometry project
+ * (values to be documented in ESRI ST_Geometry documentation)
+ * 300-9999: reserved for future public assignments
+ *
+ * For private use you may choose a "kind" code at random in the range
+ * 10000-30000. However, for code that is to be widely disseminated it is
+ * better to obtain a publicly defined "kind" code by request from the
+ * PostgreSQL Global Development Group.
+ */
+
目前可以在内核代码中看到的 PostgreSQL 核心统计信息有 7 个,编号分别从 1
到 7
。我们可以看看这 7 种统计信息分别如何使用上述的槽。
/*
+ * In a "most common values" slot, staop is the OID of the "=" operator
+ * used to decide whether values are the same or not, and stacoll is the
+ * collation used (same as column's collation). stavalues contains
+ * the K most common non-null values appearing in the column, and stanumbers
+ * contains their frequencies (fractions of total row count). The values
+ * shall be ordered in decreasing frequency. Note that since the arrays are
+ * variable-size, K may be chosen by the statistics collector. Values should
+ * not appear in MCV unless they have been observed to occur more than once;
+ * a unique column will have no MCV slot.
+ */
+#define STATISTIC_KIND_MCV 1
+
对于一个列中的 最常见值,在 staop
中保存 =
运算符来决定一个值是否等于一个最常见值。在 stavalues
中保存了该列中最常见的 K 个非空值,stanumbers
中分别保存了这 K 个值出现的频率。
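一个简单的示例(表名与数据仅作演示假设):通过 pg_stats 视图可以直接看到 MCV 槽对应的 most_common_vals 与 most_common_freqs:
CREATE TABLE mcv_demo(city TEXT);
+INSERT INTO mcv_demo
+    SELECT CASE WHEN i % 10 < 5 THEN 'Hangzhou'
+                WHEN i % 10 < 8 THEN 'Beijing'
+                ELSE 'city_' || i END
+    FROM generate_series(1, 10000) i;
+ANALYZE mcv_demo;
+-- most_common_vals / most_common_freqs 分别来自 MCV 槽的 stavalues / stanumbers
+SELECT most_common_vals, most_common_freqs
+    FROM pg_stats WHERE tablename = 'mcv_demo' AND attname = 'city';
+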
/*
+ * A "histogram" slot describes the distribution of scalar data. staop is
+ * the OID of the "<" operator that describes the sort ordering, and stacoll
+ * is the relevant collation. (In theory more than one histogram could appear,
+ * if a datatype has more than one useful sort operator or we care about more
+ * than one collation. Currently the collation will always be that of the
+ * underlying column.) stavalues contains M (>=2) non-null values that
+ * divide the non-null column data values into M-1 bins of approximately equal
+ * population. The first stavalues item is the MIN and the last is the MAX.
+ * stanumbers is not used and should be NULL. IMPORTANT POINT: if an MCV
+ * slot is also provided, then the histogram describes the data distribution
+ * *after removing the values listed in MCV* (thus, it's a "compressed
+ * histogram" in the technical parlance). This allows a more accurate
+ * representation of the distribution of a column with some very-common
+ * values. In a column with only a few distinct values, it's possible that
+ * the MCV list describes the entire data population; in this case the
+ * histogram reduces to empty and should be omitted.
+ */
+#define STATISTIC_KIND_HISTOGRAM 2
+
表示一个(数值)列的数据分布直方图。staop
保存 <
运算符用于决定数据分布的排序顺序。stavalues
包含了能够将该列的非空值划分到 M - 1 个容量接近的桶中的 M 个非空值。如果该列中已经有了 MCV 的槽,那么数据分布直方图中将不包含 MCV 中的值,以获得更精确的数据分布。
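同样可以通过 pg_stats 视图中的 histogram_bounds 列观察直方图槽的内容(以下表名与数据仅作演示假设):
CREATE TABLE hist_demo(v DOUBLE PRECISION);
+INSERT INTO hist_demo SELECT random() * 1000 FROM generate_series(1, 10000);
+ANALYZE hist_demo;
+-- histogram_bounds 即直方图槽中将数据均分为若干桶的边界值
+SELECT histogram_bounds
+    FROM pg_stats WHERE tablename = 'hist_demo' AND attname = 'v';
+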
/*
+ * A "correlation" slot describes the correlation between the physical order
+ * of table tuples and the ordering of data values of this column, as seen
+ * by the "<" operator identified by staop with the collation identified by
+ * stacoll. (As with the histogram, more than one entry could theoretically
+ * appear.) stavalues is not used and should be NULL. stanumbers contains
+ * a single entry, the correlation coefficient between the sequence of data
+ * values and the sequence of their actual tuple positions. The coefficient
+ * ranges from +1 to -1.
+ */
+#define STATISTIC_KIND_CORRELATION 3
+
该槽描述该列数据值的顺序与元组物理存储位置之间的相关性:stanumbers
中只保存一个值,即数据值序列与其实际元组位置序列之间的相关系数,取值范围为 +1 到 -1。
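相关系数在 pg_stats 视图中以 correlation 列展现。下面的示例(表名仅作演示假设)中,按序写入的列相关系数接近 1,随机写入的列接近 0:
CREATE TABLE corr_demo AS
+    SELECT i AS ordered_col, (random() * 100000)::int AS random_col
+    FROM generate_series(1, 100000) i;
+ANALYZE corr_demo;
+SELECT attname, correlation FROM pg_stats WHERE tablename = 'corr_demo';
+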
/*
+ * A "most common elements" slot is similar to a "most common values" slot,
+ * except that it stores the most common non-null *elements* of the column
+ * values. This is useful when the column datatype is an array or some other
+ * type with identifiable elements (for instance, tsvector). staop contains
+ * the equality operator appropriate to the element type, and stacoll
+ * contains the collation to use with it. stavalues contains
+ * the most common element values, and stanumbers their frequencies. Unlike
+ * MCV slots, frequencies are measured as the fraction of non-null rows the
+ * element value appears in, not the frequency of all rows. Also unlike
+ * MCV slots, the values are sorted into the element type's default order
+ * (to support binary search for a particular value). Since this puts the
+ * minimum and maximum frequencies at unpredictable spots in stanumbers,
+ * there are two extra members of stanumbers, holding copies of the minimum
+ * and maximum frequencies. Optionally, there can be a third extra member,
+ * which holds the frequency of null elements (expressed in the same terms:
+ * the fraction of non-null rows that contain at least one null element). If
+ * this member is omitted, the column is presumed to contain no null elements.
+ *
+ * Note: in current usage for tsvector columns, the stavalues elements are of
+ * type text, even though their representation within tsvector is not
+ * exactly text.
+ */
+#define STATISTIC_KIND_MCELEM 4
+
与 MCV 类似,但是保存的是列中的 最常见元素,主要用于数组等类型。同样,在 staop
中保存了等值运算符用于判断元素出现的频率高低。但与 MCV 不同的是这里的频率计算的分母是非空的行,而不是所有的行。另外,所有的常见元素使用元素对应数据类型的默认顺序进行排序,以便二分查找。
/*
+ * A "distinct elements count histogram" slot describes the distribution of
+ * the number of distinct element values present in each row of an array-type
+ * column. Only non-null rows are considered, and only non-null elements.
+ * staop contains the equality operator appropriate to the element type,
+ * and stacoll contains the collation to use with it.
+ * stavalues is not used and should be NULL. The last member of stanumbers is
+ * the average count of distinct element values over all non-null rows. The
+ * preceding M (>=2) members form a histogram that divides the population of
+ * distinct-elements counts into M-1 bins of approximately equal population.
+ * The first of these is the minimum observed count, and the last the maximum.
+ */
+#define STATISTIC_KIND_DECHIST 5
+
描述数组类型列中,每一行包含的不同(distinct)元素值个数的分布直方图。stanumbers
数组的前 M 个元素构成一个直方图,将各行的 distinct 元素个数大致均分到 M - 1 个桶中,首尾分别为观测到的最小值和最大值;最后一个元素是所有非空行的 distinct 元素个数的平均值。这个统计信息应该会被用于计算 选择率。
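对于数组类型的列,MCELEM 与 DECHIST 两类统计信息分别对应 pg_stats 视图中的 most_common_elems / most_common_elem_freqs 与 elem_count_histogram 列(以下表名与数据仅作演示假设):
CREATE TABLE tag_demo(tags TEXT[]);
+INSERT INTO tag_demo
+    SELECT ARRAY['pg', 'polardb', 'tag_' || (i % 5)] FROM generate_series(1, 10000) i;
+ANALYZE tag_demo;
+SELECT most_common_elems, most_common_elem_freqs, elem_count_histogram
+    FROM pg_stats WHERE tablename = 'tag_demo' AND attname = 'tags';
+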
/*
+ * A "length histogram" slot describes the distribution of range lengths in
+ * rows of a range-type column. stanumbers contains a single entry, the
+ * fraction of empty ranges. stavalues is a histogram of non-empty lengths, in
+ * a format similar to STATISTIC_KIND_HISTOGRAM: it contains M (>=2) range
+ * values that divide the column data values into M-1 bins of approximately
+ * equal population. The lengths are stored as float8s, as measured by the
+ * range type's subdiff function. Only non-null rows are considered.
+ */
+#define STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM 6
+
长度直方图描述了一个范围类型的列中各个范围长度的分布。与数据分布直方图类似,它包含 M 个值,保存在 stavalues
中;stanumbers 中只保存一个值,即空范围(empty range)所占的比例。
/*
+ * A "bounds histogram" slot is similar to STATISTIC_KIND_HISTOGRAM, but for
+ * a range-type column. stavalues contains M (>=2) range values that divide
+ * the column data values into M-1 bins of approximately equal population.
+ * Unlike a regular scalar histogram, this is actually two histograms combined
+ * into a single array, with the lower bounds of each value forming a
+ * histogram of lower bounds, and the upper bounds a histogram of upper
+ * bounds. Only non-NULL, non-empty ranges are included.
+ */
+#define STATISTIC_KIND_BOUNDS_HISTOGRAM 7
+
边界直方图同样被用于范围类型,与数据分布直方图类似。stavalues
中保存了 M 个范围值,将该列的数据大致均分到 M - 1 个桶中;它实际上是由下界直方图与上界直方图合并而成的一个数组。只统计非空且非空集(non-empty)的范围值。
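在 PostgreSQL 14 中,这两种面向范围类型的直方图没有在 pg_stats 视图中单独展现,但可以直接查询 pg_statistic(需要超级用户权限)观察到编号为 6 和 7 的槽(以下表名与数据仅作演示假设):
CREATE TABLE range_demo(r INT4RANGE);
+INSERT INTO range_demo SELECT int4range(i, i + (i % 100)) FROM generate_series(1, 10000) i;
+ANALYZE range_demo;
+-- stakindN 中应当会出现 6(长度直方图)与 7(边界直方图)
+SELECT stakind1, stakind2, stakind3, stakind4, stakind5
+    FROM pg_statistic
+    WHERE starelid = 'range_demo'::regclass AND staattnum = 1;
+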
知道 pg_statistic
最终需要保存哪些信息以后,再来看看内核如何收集和计算这些信息。让我们进入 PostgreSQL 内核的执行器代码中。对于 ANALYZE
这种工具性质的指令,执行器代码通过 standard_ProcessUtility()
函数中的 switch case 将每一种指令路由到实现相应功能的函数中。
/*
+ * standard_ProcessUtility itself deals only with utility commands for
+ * which we do not provide event trigger support. Commands that do have
+ * such support are passed down to ProcessUtilitySlow, which contains the
+ * necessary infrastructure for such triggers.
+ *
+ * This division is not just for performance: it's critical that the
+ * event trigger code not be invoked when doing START TRANSACTION for
+ * example, because we might need to refresh the event trigger cache,
+ * which requires being in a valid transaction.
+ */
+void
+standard_ProcessUtility(PlannedStmt *pstmt,
+ const char *queryString,
+ bool readOnlyTree,
+ ProcessUtilityContext context,
+ ParamListInfo params,
+ QueryEnvironment *queryEnv,
+ DestReceiver *dest,
+ QueryCompletion *qc)
+{
+ // ...
+
+ switch (nodeTag(parsetree))
+ {
+ // ...
+
+ case T_VacuumStmt:
+ ExecVacuum(pstate, (VacuumStmt *) parsetree, isTopLevel);
+ break;
+
+ // ...
+ }
+
+ // ...
+}
+
ANALYZE
的处理逻辑入口和 VACUUM
一致,进入 ExecVacuum()
函数。
/*
+ * Primary entry point for manual VACUUM and ANALYZE commands
+ *
+ * This is mainly a preparation wrapper for the real operations that will
+ * happen in vacuum().
+ */
+void
+ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
+{
+ // ...
+
+ /* Now go through the common routine */
+ vacuum(vacstmt->rels, ¶ms, NULL, isTopLevel);
+}
+
在 parse 了一大堆 option 之后,进入了 vacuum()
函数。在这里,内核代码将会首先明确一下要分析哪些表。因为 ANALYZE
命令在使用上可以:不指定任何表(分析当前数据库中的所有表)、指定某一张表(只分析这张表),或指定表中的某几列(只分析这几列)。
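对应这三种用法的示例如下(表名、列名仅作演示假设):
ANALYZE;                        -- 分析当前数据库中的所有表
+ANALYZE some_table;             -- 只分析指定的表
+ANALYZE some_table (some_col);  -- 只分析指定表中的指定列
+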
在明确要分析哪些表以后,依次将每一个表传入 analyze_rel()
函数:
if (params->options & VACOPT_ANALYZE)
+{
+ // ...
+
+ analyze_rel(vrel->oid, vrel->relation, params,
+ vrel->va_cols, in_outer_xact, vac_strategy);
+
+ // ...
+}
+
进入 analyze_rel()
函数以后,内核代码将会对将要被分析的表加 ShareUpdateExclusiveLock
锁,以防止两个并发进行的 ANALYZE
。然后根据待分析表的类型来决定具体的处理方式(比如分析一个 FDW 外表就应该直接调用 FDW routine 中提供的 ANALYZE 功能了)。接下来,将这个表传入 do_analyze_rel()
函数中。
/*
+ * analyze_rel() -- analyze one relation
+ *
+ * relid identifies the relation to analyze. If relation is supplied, use
+ * the name therein for reporting any failure to open/lock the rel; do not
+ * use it once we've successfully opened the rel, since it might be stale.
+ */
+void
+analyze_rel(Oid relid, RangeVar *relation,
+ VacuumParams *params, List *va_cols, bool in_outer_xact,
+ BufferAccessStrategy bstrategy)
+{
+ // ...
+
+ /*
+ * Do the normal non-recursive ANALYZE. We can skip this for partitioned
+ * tables, which don't contain any rows.
+ */
+ if (onerel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ do_analyze_rel(onerel, params, va_cols, acquirefunc,
+ relpages, false, in_outer_xact, elevel);
+
+ // ...
+}
+
进入 do_analyze_rel()
函数后,内核代码将进一步明确要分析一个表中的哪些列:用户可能指定只分析表中的某几个列——被频繁访问的列才更有被分析的价值。然后还要打开待分析表的所有索引,看看是否有可以被分析的列。
为了得到每一列的统计信息,显然我们需要把每一列的数据从磁盘上读起来再去做计算。这里就有一个比较关键的问题了:到底扫描多少行数据呢?理论上,分析尽可能多的数据,最好是全部的数据,肯定能够得到最精确的统计数据;但是对一张很大的表来说,我们没有办法在内存中放下所有的数据,并且分析的阻塞时间也是不可接受的。所以用户可以指定要采样的最大行数,从而在运行开销和统计信息准确性上达成一个妥协:
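从使用者的角度看,这个采样行数可以通过 statistics target 间接控制(后文可以看到 minrows = 300 × statistics target)。以下示例中的表名、列名仅作演示假设:
-- 将某一列的 statistics target 调整为 500,对应的最小采样行数为 150000
+ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 500;
+-- 或调整会话级的默认值(默认为 100)
+SET default_statistics_target = 200;
+ANALYZE some_table;
+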
/*
+ * Determine how many rows we need to sample, using the worst case from
+ * all analyzable columns. We use a lower bound of 100 rows to avoid
+ * possible overflow in Vitter's algorithm. (Note: that will also be the
+ * target in the corner case where there are no analyzable columns.)
+ */
+targrows = 100;
+for (i = 0; i < attr_cnt; i++)
+{
+ if (targrows < vacattrstats[i]->minrows)
+ targrows = vacattrstats[i]->minrows;
+}
+for (ind = 0; ind < nindexes; ind++)
+{
+ AnlIndexData *thisdata = &indexdata[ind];
+
+ for (i = 0; i < thisdata->attr_cnt; i++)
+ {
+ if (targrows < thisdata->vacattrstats[i]->minrows)
+ targrows = thisdata->vacattrstats[i]->minrows;
+ }
+}
+
+/*
+ * Look at extended statistics objects too, as those may define custom
+ * statistics target. So we may need to sample more rows and then build
+ * the statistics with enough detail.
+ */
+minrows = ComputeExtStatisticsRows(onerel, attr_cnt, vacattrstats);
+
+if (targrows < minrows)
+ targrows = minrows;
+
在确定需要采样多少行数据后,内核代码分配了一块相应长度的元组数组,然后开始使用 acquirefunc
函数指针采样数据:
/*
+ * Acquire the sample rows
+ */
+rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+ inh ? PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :
+ PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);
+if (inh)
+ numrows = acquire_inherited_sample_rows(onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+else
+ numrows = (*acquirefunc) (onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+
这个函数指针指向的是 analyze_rel()
函数中设置好的 acquire_sample_rows()
函数。该函数使用两阶段模式对表中的数据进行采样:第一阶段随机选取至多 targrows 个数据块;第二阶段对选出的数据块使用 Vitter 算法进行行级别的随机采样,最终得到至多 targrows 行数据。
两阶段同时进行。在采样完成后,被采样到的元组应该已经被放置在元组数组中了。对这个元组数组按照元组的位置进行快速排序,并使用这些采样到的数据估算整个表中的存活元组与死元组的个数:
/*
+ * acquire_sample_rows -- acquire a random sample of rows from the table
+ *
+ * Selected rows are returned in the caller-allocated array rows[], which
+ * must have at least targrows entries.
+ * The actual number of rows selected is returned as the function result.
+ * We also estimate the total numbers of live and dead rows in the table,
+ * and return them into *totalrows and *totaldeadrows, respectively.
+ *
+ * The returned list of tuples is in order by physical position in the table.
+ * (We will rely on this later to derive correlation estimates.)
+ *
+ * As of May 2004 we use a new two-stage method: Stage one selects up
+ * to targrows random blocks (or all blocks, if there aren't so many).
+ * Stage two scans these blocks and uses the Vitter algorithm to create
+ * a random sample of targrows rows (or less, if there are less in the
+ * sample of blocks). The two stages are executed simultaneously: each
+ * block is processed as soon as stage one returns its number and while
+ * the rows are read stage two controls which ones are to be inserted
+ * into the sample.
+ *
+ * Although every row has an equal chance of ending up in the final
+ * sample, this sampling method is not perfect: not every possible
+ * sample has an equal chance of being selected. For large relations
+ * the number of different blocks represented by the sample tends to be
+ * too small. We can live with that for now. Improvements are welcome.
+ *
+ * An important property of this sampling method is that because we do
+ * look at a statistically unbiased set of blocks, we should get
+ * unbiased estimates of the average numbers of live and dead rows per
+ * block. The previous sampling method put too much credence in the row
+ * density near the start of the table.
+ */
+static int
+acquire_sample_rows(Relation onerel, int elevel,
+ HeapTuple *rows, int targrows,
+ double *totalrows, double *totaldeadrows)
+{
+ // ...
+
+ /* Outer loop over blocks to sample */
+ while (BlockSampler_HasMore(&bs))
+ {
+ bool block_accepted;
+ BlockNumber targblock = BlockSampler_Next(&bs);
+ // ...
+ }
+
+ // ...
+
+ /*
+ * If we didn't find as many tuples as we wanted then we're done. No sort
+ * is needed, since they're already in order.
+ *
+ * Otherwise we need to sort the collected tuples by position
+ * (itempointer). It's not worth worrying about corner cases where the
+ * tuples are already sorted.
+ */
+ if (numrows == targrows)
+ qsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);
+
+ /*
+ * Estimate total numbers of live and dead rows in relation, extrapolating
+ * on the assumption that the average tuple density in pages we didn't
+ * scan is the same as in the pages we did scan. Since what we scanned is
+ * a random sample of the pages in the relation, this should be a good
+ * assumption.
+ */
+ if (bs.m > 0)
+ {
+ *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);
+ *totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+ }
+ else
+ {
+ *totalrows = 0.0;
+ *totaldeadrows = 0.0;
+ }
+
+ // ...
+}
+
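顺带一提(以下表名仅作演示假设),这里估算出的总行数最终会反映在 pg_class.reltuples 中,死元组的估算值则可以在统计视图中观察到:
ANALYZE some_table;
+SELECT relpages, reltuples FROM pg_class WHERE relname = 'some_table';
+SELECT n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE relname = 'some_table';
+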
回到 do_analyze_rel()
函数。采样到数据以后,对于要分析的每一个列,分别计算统计数据,然后更新 pg_statistic
系统表:
/*
+ * Compute the statistics. Temporary results during the calculations for
+ * each column are stored in a child context. The calc routines are
+ * responsible to make sure that whatever they store into the VacAttrStats
+ * structure is allocated in anl_context.
+ */
+if (numrows > 0)
+{
+ // ...
+
+ for (i = 0; i < attr_cnt; i++)
+ {
+ VacAttrStats *stats = vacattrstats[i];
+ AttributeOpts *aopt;
+
+ stats->rows = rows;
+ stats->tupDesc = onerel->rd_att;
+ stats->compute_stats(stats,
+ std_fetch_func,
+ numrows,
+ totalrows);
+
+ // ...
+ }
+
+ // ...
+
+ /*
+ * Emit the completed stats rows into pg_statistic, replacing any
+ * previous statistics for the target columns. (If there are stats in
+ * pg_statistic for columns we didn't process, we leave them alone.)
+ */
+ update_attstats(RelationGetRelid(onerel), inh,
+ attr_cnt, vacattrstats);
+
+ // ...
+}
+
显然,对于不同类型的列,其 compute_stats
函数指针指向的计算函数肯定不太一样。所以我们不妨看看给这个函数指针赋值的地方:
/*
+ * std_typanalyze -- the default type-specific typanalyze function
+ */
+bool
+std_typanalyze(VacAttrStats *stats)
+{
+ // ...
+
+ /*
+ * Determine which standard statistics algorithm to use
+ */
+ if (OidIsValid(eqopr) && OidIsValid(ltopr))
+ {
+ /* Seems to be a scalar datatype */
+ stats->compute_stats = compute_scalar_stats;
+ /*--------------------
+ * The following choice of minrows is based on the paper
+ * "Random sampling for histogram construction: how much is enough?"
+ * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in
+ * Proceedings of ACM SIGMOD International Conference on Management
+ * of Data, 1998, Pages 436-447. Their Corollary 1 to Theorem 5
+ * says that for table size n, histogram size k, maximum relative
+ * error in bin size f, and error probability gamma, the minimum
+ * random sample size is
+ * r = 4 * k * ln(2*n/gamma) / f^2
+ * Taking f = 0.5, gamma = 0.01, n = 10^6 rows, we obtain
+ * r = 305.82 * k
+ * Note that because of the log function, the dependence on n is
+ * quite weak; even at n = 10^12, a 300*k sample gives <= 0.66
+ * bin size error with probability 0.99. So there's no real need to
+ * scale for n, which is a good thing because we don't necessarily
+ * know it at this point.
+ *--------------------
+ */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else if (OidIsValid(eqopr))
+ {
+ /* We can still recognize distinct values */
+ stats->compute_stats = compute_distinct_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else
+ {
+ /* Can't do much but the trivial stuff */
+ stats->compute_stats = compute_trivial_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+
+ // ...
+}
+
这个条件判断语句可以被解读为:
如果该列的数据类型支持默认的 =
(eqopr
:equals operator)和 <
(ltopr
:less than operator)运算符,那么这个列应该是一个数值类型,可以使用 compute_scalar_stats()
函数进行分析
如果该列的数据类型只支持 =
运算符,那么依旧还可以使用 compute_distinct_stats
进行唯一值的统计分析
如果两种运算符都不支持,那么只能使用 compute_trivial_stats
进行一些简单的分析
我们可以分别看看这三个分析函数里做了啥,但我不准备深入每一个分析函数解读其中的逻辑了。因为其中的思想基于一些很古早的统计学论文,古早到连 PDF 上的字母都快看不清了。代码上没有特别大的可读性,因为基本是参照论文中的公式实现的,不看论文根本没法理解变量和公式的含义。
如果某个列的数据类型不支持等值运算符和比较运算符,那么就只能进行一些简单的分析,比如:非空行的比例、列的平均宽度。
这些可以通过对采样后的元组数组进行循环遍历后轻松得到。
/*
+ * compute_trivial_stats() -- compute very basic column statistics
+ *
+ * We use this when we cannot find a hash "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows and the average datum width.
+ */
+static void
+compute_trivial_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
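举一个例子(表名与数据仅作演示假设):point 类型没有可用的 = 和 < 运算符,分析后只能得到 null_frac、avg_width 等简单统计信息,MCV 与直方图均为空:
CREATE TABLE geo_demo(p POINT);
+INSERT INTO geo_demo SELECT point(i, i) FROM generate_series(1, 1000) i;
+ANALYZE geo_demo;
+SELECT null_frac, avg_width, n_distinct, most_common_vals, histogram_bounds
+    FROM pg_stats WHERE tablename = 'geo_demo' AND attname = 'p';
+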
如果某个列只支持等值运算符,也就是说我们只能知道一个数值是什么,但不能和其它数值比大小,所以无法分析数值在大小范围上的分布,只能分析数值在出现频率上的分布。该函数分析的统计数据包含:非空行的比例、列的平均宽度、最常见值(MCV)及其出现频率,以及唯一值个数的估算。
/*
+ * compute_distinct_stats() -- compute column statistics including ndistinct
+ *
+ * We use this when we can find only an "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the
+ * most common values, and the (estimated) number of distinct values.
+ *
+ * The most common values are determined by brute force: we keep a list
+ * of previously seen values, ordered by number of times seen, as we scan
+ * the samples. A newly seen value is inserted just after the last
+ * multiply-seen value, causing the bottommost (oldest) singly-seen value
+ * to drop off the list. The accuracy of this method, and also its cost,
+ * depend mainly on the length of the list we are willing to keep.
+ */
+static void
+compute_distinct_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
如果一个列的数据类型支持等值运算符和比较运算符,那么可以进行最详尽的分析。分析目标包含:非空行的比例、列的平均宽度、最常见值及其频率、唯一值个数的估算、数据分布直方图,以及数据的逻辑顺序与物理存储顺序的相关性。
/*
+ *	compute_scalar_stats() -- compute column statistics
+ *
+ *	We use this when we can find "=" and "<" operators for the datatype.
+ *
+ *	We determine the fraction of non-null rows, the average width, the
+ *	most common values, the (estimated) number of distinct values, the
+ *	distribution histogram, and the correlation of physical to logical order.
+ */
+static void
+compute_scalar_stats(VacAttrStatsP stats,
+                     AnalyzeAttrFetchFunc fetchfunc,
+                     int samplerows,
+                     double totalrows)
+{}
+
本文以 PostgreSQL 优化器需要的统计信息为切入点,分析了 ANALYZE
命令的大致执行流程。出于简洁考虑,流程分析没有覆盖各种 corner case 及相关的处理逻辑。
PostgreSQL 14 Documentation: ANALYZE
PostgreSQL 14 Documentation: 25.1. Routine Vacuuming
PostgreSQL 14 Documentation: 14.2. Statistics Used by the Planner
严华
2022/09/10
35 min
很多 PolarDB PG 的用户都有 TP (Transactional Processing) 和 AP (Analytical Processing) 共用的需求。他们期望数据库在白天处理高并发的 TP 请求,在夜间 TP 流量下降、机器负载空闲时进行 AP 的报表分析。但是即使这样,依然没有最大化利用空闲机器的资源。原先的 PolarDB PG 数据库在处理复杂的 AP 查询时会遇到两大挑战:一是单条 SQL 只能在单个计算节点上执行,无法利用集群中其他节点的 CPU、内存等计算资源;二是单个计算节点难以充分发挥共享存储池的大 I/O 带宽。
为了解决用户实际使用中的痛点,PolarDB 实现了 HTAP 特性。当前业界 HTAP 的解决方案主要有以下三种:
基于 PolarDB 的存储计算分离架构,我们研发了分布式 MPP 执行引擎,提供了跨机并行执行、弹性计算弹性扩展的保证,使得 PolarDB 初步具备了 HTAP 的能力:
PolarDB HTAP 的核心是分布式 MPP 执行引擎,它是典型的火山模型引擎。以 A、B 两张表先做 join 再做聚合输出为例,这也是 PostgreSQL 单机执行引擎的执行流程。
在传统的 MPP 执行引擎中,数据被打散到不同的节点上,不同节点上的数据可能具有不同的分布属性,比如哈希分布、随机分布、复制分布等。传统的 MPP 执行引擎会针对不同表的数据分布特点,在执行计划中插入算子来保证上层算子对数据的分布属性无感知。
不同的是,PolarDB 是共享存储架构,存储上的数据可以被所有计算节点全量访问。如果使用传统的 MPP 执行引擎,每个计算节点 Worker 都会扫描全量数据,从而得到重复的数据;同时,也没有起到扫描时分治加速的效果,并不能称得上是真正意义上的 MPP 引擎。
因此,在 PolarDB 分布式 MPP 执行引擎中,我们借鉴了火山模型论文中的思想,对所有扫描算子进行并发处理,引入了 PxScan 算子来屏蔽共享存储。PxScan 算子将 shared-storage 的数据映射为 shared-nothing 的数据,通过 Worker 之间的协调,将目标表划分为多个虚拟分区数据块,每个 Worker 扫描各自的虚拟分区数据块,从而实现了跨机分布式并行扫描。
PxScan 算子扫描出来的数据会通过 Shuffle 算子来重分布。重分布后的数据在每个 Worker 上如同单机执行一样,按照火山模型来执行。
传统 MPP 只能在指定节点发起 MPP 查询,因此每个节点上都只能有单个 Worker 扫描一张表。为了支持云原生下 serverless 弹性扩展的需求,我们引入了分布式事务一致性保证。
任意选择一个节点作为 Coordinator 节点,它的 ReadLSN 会作为约定的 LSN,从所有 MPP 节点的快照版本号中选择最小的版本号作为全局约定的快照版本号。通过 LSN 的回放等待和 Global Snapshot 同步机制,确保在任何一个节点发起 MPP 查询时,数据和快照均能达到一致可用的状态。
为了实现 serverless 的弹性扩展,我们从共享存储的特点出发,将 Coordinator 节点全链路上各个模块需要的外部依赖全部放至共享存储上。各个 Worker 节点运行时需要的参数也会通过控制链路从 Coordinator 节点同步过来,从而使 Coordinator 节点和 Worker 节点全链路 无状态化 (Stateless)。
基于以上两点设计,PolarDB 的弹性扩展具备了以下几大优势:
倾斜是传统 MPP 固有的问题,其根本原因主要是数据分布倾斜和数据计算倾斜:
倾斜会导致传统 MPP 在执行时出现木桶效应,执行完成时间受制于执行最慢的子任务。
PolarDB 设计并实现了 自适应扫描机制。如上图所示,采用 Coordinator 节点来协调 Worker 节点的工作模式。在扫描数据时,Coordinator 节点会在内存中创建一个任务管理器,根据扫描任务对 Worker 节点进行调度。Coordinator 节点内部分为两个线程:
扫描进度较快的 Worker 能够扫描多个数据块,实现能者多劳。比如上图中 RO1 与 RO3 的 Worker 各自扫描了 4 个数据块, RO2 由于计算倾斜可以扫描更多数据块,因此它最终扫描了 6 个数据块。
PolarDB HTAP 的自适应扫描机制还充分考虑了 PostgreSQL 的 Buffer Pool 亲和性,保证每个 Worker 尽可能扫描固定的数据块,从而最大化命中 Buffer Pool 的概率,降低 I/O 开销。
我们使用 256 GB 内存的 16 个 PolarDB PG 实例作为 RO 节点,搭建了 1 TB 的 TPC-H 环境进行对比测试。相较于单机并行,分布式 MPP 并行充分利用了所有 RO 节点的计算资源和底层共享存储的 I/O 带宽,从根本上解决了前文提及的 HTAP 诸多挑战。在 TPC-H 的 22 条 SQL 中,有 3 条 SQL 加速了 60 多倍,19 条 SQL 加速了 10 多倍,平均加速 23 倍。
此外,我们也测试了弹性扩展计算资源带来的性能变化。通过增加 CPU 的总核心数,从 16 核增加到 128 核,TPC-H 的总运行时间线性提升,每条 SQL 的执行速度也呈线性提升,这也验证了 PolarDB HTAP serverless 弹性扩展的特点。
在测试中发现,当 CPU 的总核数增加到 256 核时,性能提升不再明显。原因是此时 PolarDB 共享存储的 I/O 带宽已经打满,成为了瓶颈。
我们将 PolarDB 的分布式 MPP 执行引擎与传统数据库的 MPP 执行引擎进行了对比,同样使用了 256 GB 内存的 16 个节点。
在 1 TB 的 TPC-H 数据上,当保持与传统 MPP 数据库相同单机并行度的情况下(多机单进程),PolarDB 的性能是传统 MPP 数据库的 90%。其中最本质的原因是传统 MPP 数据库的数据默认是哈希分布的,当两张表的 join key 是各自的分布键时,可以不用 shuffle 直接进行本地的 Wise Join。而 PolarDB 的底层是共享存储池,PxScan 算子并行扫描出来的数据等价于随机分布,必须进行 shuffle 重分布以后才能像传统 MPP 数据库一样进行后续的处理。因此,TPC-H 涉及到表连接时,PolarDB 相比传统 MPP 数据库多了一次网络 shuffle 的开销。
PolarDB 分布式 MPP 执行引擎能够进行弹性扩展,数据无需重分布。因此,在有限的 16 台机器上执行 MPP 时,PolarDB 还可以继续扩展单机并行度,充分利用每台机器的资源:当 PolarDB 的单机并行度为 8 时,它的性能是传统 MPP 数据库的 5-6 倍;当 PolarDB 的单机并行度呈线性增加时,PolarDB 的总体性能也呈线性增加。只需要修改配置参数,就可以即时生效。
经过持续迭代的研发,目前 PolarDB HTAP 在 Parallel Query 上支持的功能特性主要有五大部分:
基于 PolarDB 读写分离架构和 HTAP serverless 弹性扩展的设计, PolarDB Parallel DML 支持一写多读、多写多读两种特性。
不同的特性适用不同的场景,用户可以根据自己的业务特点来选择不同的 PDML 功能特性。
PolarDB 分布式 MPP 执行引擎,不仅可以用于只读查询和 DML,还可以用于 索引构建加速。OLTP 业务中有大量的索引,而 B-Tree 索引创建的过程大约有 80% 的时间消耗在排序和构建索引页上,20% 消耗在写入索引页上。如下图所示,PolarDB 利用 RO 节点对数据进行分布式 MPP 加速排序,采用流水化的技术来构建索引页,同时使用批量写入技术来提升索引页的写入速度。
在目前索引构建加速这一特性中,PolarDB 已经对 B-Tree 索引的普通创建以及 B-Tree 索引的在线创建 (Concurrently) 两种功能进行了支持。
PolarDB HTAP 适用于日常业务中的 轻分析类业务,例如:对账业务,报表业务。
PolarDB PG 引擎默认不开启 MPP 功能。若您需要使用此功能,请使用如下参数:
polar_enable_px
:指定是否开启 MPP 功能。默认为 OFF
,即不开启。
polar_px_max_workers_number
:设置单个节点上的最大 MPP Worker 进程数,默认为 30
。该参数限制了单个节点上的最大并行度,节点上所有会话的 MPP Worker 进程数不能超过该参数大小。
polar_px_dop_per_node
:设置当前会话并行查询的并行度,默认为 1
,推荐值为当前 CPU 总核数。若设置该参数为 N
,则一个会话在每个节点上将会启用 N
个 MPP Worker 进程,用于处理当前的 MPP 逻辑。
polar_px_nodes
:指定参与 MPP 的只读节点。默认为空,表示所有只读节点都参与。可配置为指定节点参与 MPP,以逗号分隔。
px_workers
:指定 MPP 是否对特定表生效。默认不生效。MPP 功能比较消耗集群计算节点的资源,因此只有对设置了 px_workers
的表才使用该功能。例如:
ALTER TABLE t1 SET(px_workers=1)
表示 t1 表允许 MPP
ALTER TABLE t1 SET(px_workers=-1)
表示 t1 表禁止 MPP
ALTER TABLE t1 SET(px_workers=0)
表示 t1 表忽略 MPP(默认状态)
本示例以简单的单表查询操作,来描述 MPP 的功能是否有效。
-- 创建 test 表并插入基础数据。
+CREATE TABLE test(id int);
+INSERT INTO test SELECT generate_series(1,1000000);
+
+-- 默认情况下 MPP 功能不开启,单表查询执行计划为 PG 原生的 Seq Scan
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+--------------------------------------------------------
+ Seq Scan on test (cost=0.00..35.50 rows=2550 width=4)
+(1 row)
+
开启并使用 MPP 功能:
-- 对 test 表启用 MPP 功能
+ALTER TABLE test SET (px_workers=1);
+
+-- 开启 MPP 功能
+SET polar_enable_px = on;
+
+EXPLAIN SELECT * FROM test;
+
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..431.00 rows=1 width=4)
+ -> Seq Scan on test (scan partial) (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
配置参与 MPP 的计算节点范围:
-- 查询当前所有只读节点的名称
+CREATE EXTENSION polar_monitor;
+
+SELECT name,host,port FROM polar_cluster_info WHERE px_node='t';
+ name | host | port
+-------+-----------+------
+ node1 | 127.0.0.1 | 5433
+ node2 | 127.0.0.1 | 5434
+(2 rows)
+
+-- 当前集群有 2 个只读节点,名称分别为:node1,node2
+
+-- 指定 node1 只读节点参与 MPP
+SET polar_px_nodes = 'node1';
+
+-- 查询参与并行查询的节点
+SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+ node1
+(1 row)
+
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 1:1 (slice1; segments: 1) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
当前 MPP 对分区表支持的功能如下所示:
--分区表 MPP 功能默认关闭,需要先开启 MPP 功能
+SET polar_enable_px = ON;
+
+-- 执行以下语句,开启分区表 MPP 功能
+SET polar_px_enable_partition = true;
+
+-- 执行以下语句,开启多级分区表 MPP 功能
+SET polar_px_optimizer_multilevel_partitioning = true;
+
当前仅支持对 B-Tree 索引的构建,且暂不支持 INCLUDE
等索引构建语法,暂不支持表达式等索引列类型。
如果需要使用 MPP 功能加速创建索引,请使用如下参数:
polar_px_dop_per_node
:指定通过 MPP 加速构建索引的并行度。默认为 1
。
polar_px_enable_replay_wait
:当使用 MPP 加速索引构建时,当前会话内无需手动开启该参数,该参数将自动生效,以保证最近更新的数据表项可以被创建到索引中,保证索引表的完整性。索引创建完成后,该参数将会被重置为数据库默认值。
polar_px_enable_btbuild
:是否开启使用 MPP 加速创建索引。取值为 OFF
时不开启(默认),取值为 ON
时开启。
polar_bt_write_page_buffer_size
:指定索引构建过程中的写 I/O 策略。该参数默认值为 0
(不开启),单位为块,最大值可设置为 8192
。推荐设置为 4096
。开启该参数后,内核将维护一个 polar_bt_write_page_buffer_size
大小的 buffer,对于需要写盘的索引页,会通过该 buffer 进行 I/O 合并再统一写盘,避免了频繁调度 I/O 带来的性能开销。该参数会额外提升 20% 的索引创建性能。
-- 开启使用 MPP 加速创建索引功能。
+SET polar_px_enable_btbuild = on;
+
+-- 使用如下语法创建索引
+CREATE INDEX t ON test(id) WITH(px_build = ON);
+
+-- 查询表结构
+\d test
+ Table "public.test"
+ Column | Type | Collation | Nullable | Default
+--------+---------+-----------+----------+---------
+ id | integer | | |
+ id2 | integer | | |
+Indexes:
+ "t" btree (id) WITH (px_build=finish)
+
北侠
2021/08/24
35 min
PolarDB for PostgreSQL (hereafter simplified as PolarDB) is a stable, reliable, scalable, highly available, and secure enterprise-grade database service that is independently developed by Alibaba Cloud to help you increase security compliance and cost-effectiveness. PolarDB is 100% compatible with PostgreSQL. It runs in a proprietary compute-storage separation architecture of Alibaba Cloud to support the horizontal scaling of the storage and computing capabilities.
PolarDB can process a mix of online transaction processing (OLTP) workloads and online analytical processing (OLAP) workloads in parallel. PolarDB also provides a wide range of innovative multi-model database capabilities to help you process, analyze, and search for diversified data, such as spatio-temporal, GIS, image, vector, and graph data.
PolarDB supports various deployment architectures. For example, PolarDB supports compute-storage separation, three-node X-Paxos clusters, and local SSDs.
If you are using a conventional database system and the complexity of your workloads continues to increase, you may face the following challenges as the amount of your business data grows:
To help you resolve the issues that occur in conventional database systems, Alibaba Cloud provides PolarDB. PolarDB runs in a proprietary compute-storage separation architecture of Alibaba Cloud. This architecture has the following benefits:
PolarDB is integrated with various technologies and innovations. This document describes the following two aspects of the PolarDB architecture in sequence: compute-storage separation and hybrid transactional/analytical processing (HTAP). You can find and read the content of your interest with ease.
This section explains the following two aspects of the PolarDB architecture: compute-storage separation and HTAP.
PolarDB supports compute-storage separation. Each PolarDB cluster consists of a computing cluster and a storage cluster. You can flexibly scale out the computing cluster or the storage cluster based on your business requirements.
After the shared-storage architecture is used in PolarDB, the primary node and the read-only nodes share the same physical storage. If the primary node still uses the method that is used in conventional database systems to flush write-ahead logging (WAL) records, the following issues may occur.
To resolve the first issue, PolarDB must support multiple versions for each page. To resolve the second issue, PolarDB must control the speed at which the primary node flushes WAL records.
When read/write splitting is enabled, each individual compute node cannot fully utilize the high I/O throughput that is provided by the shared storage. In addition, you cannot accelerate large queries by adding computing resources. To resolve these issues, PolarDB uses the shared storage-based MPP architecture to accelerate OLAP queries in OLTP scenarios.
PolarDB supports a complete suite of data types that are used in OLTP scenarios. PolarDB also supports two computing engines, which can process these types of data: a standalone execution engine for highly concurrent OLTP queries, and a distributed MPP execution engine for complex OLAP queries.
When the same hardware resources are used, PolarDB delivers performance that is 90% of the performance delivered by traditional MPP databases. PolarDB also provides SQL statement-level scalability. If the computing power of your PolarDB cluster is insufficient, you can allocate more CPU resources to OLAP queries without the need to rearrange data.
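As a rough sketch (based on the GUC parameters described elsewhere in this document; the table name is only an assumption), allocating more CPU resources to an OLAP query is a matter of raising the per-node degree of parallelism, without redistributing any data:
-- Enable the distributed MPP engine and scale the per-node DOP from 1 to 4
+SET polar_enable_px = ON;
+SET polar_px_dop_per_node = 4;
+EXPLAIN SELECT COUNT(*) FROM lineitem;
+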
The following sections provide more details about compute-storage separation and HTAP.
Compute-storage separation enables the compute nodes of your PolarDB cluster to share the same physical storage. Shared storage brings the following challenges:
The following basic principles of shared storage apply to PolarDB:
In a conventional database system, the primary instance and read-only instances each are allocated independent memory resources and storage resources. The primary instance replicates WAL records to the read-only instances, and the read-only instances read and apply the WAL records. These basic principles also apply to replication state machines.
In a PolarDB cluster, the primary node replicates WAL records to the shared storage. The read-only nodes read and apply the most recent WAL records from the shared storage to ensure that the pages in the memory of the read-only nodes are synchronous with the pages in the memory of the primary node.
In the workflow shown in the preceding figure, the new page that the read-only nodes obtain by applying WAL records is removed from the buffer pools of the read-only nodes. When you query the page on the read-only nodes, the read-only nodes read the page from the shared storage. As a result, only the previous version of the page is returned. This previous version is called an outdated page. The following figure shows more details.
When you query a page on the read-only nodes at a specific point in time, the read-only nodes need to read the base version of the page and the WAL records up to that point in time. Then, the read-only nodes need to apply the WAL records one by one in sequence. The following figure shows more details.
PolarDB needs to maintain an inverted index that stores the mapping from each page to the WAL records of the page. However, the memory capacity of each read-only node is limited. Therefore, these inverted indexes must be persistently stored. To meet this requirement, PolarDB provides LogIndex. LogIndex is an index structure, which is used to persistently store hash data.
LogIndex helps prevent outdated pages and enable the read-only nodes to run in lazy log apply mode. In the lazy log apply mode, the read-only nodes apply only the metadata of the WAL records for dirty pages.
The read-only nodes may return future pages, whose versions are later than the versions that are recorded on the read-only nodes. The following figure shows more details.
The read-only nodes apply WAL records at high speeds in lazy apply mode. However, the speeds may still be lower than the speed at which the primary node flushes WAL records. If the primary node flushes WAL records faster than the read-only nodes apply WAL records, future pages are returned. To prevent future pages, PolarDB must ensure that the speed at which the primary node flushes WAL records does not exceed the speeds at which the read-only nodes apply WAL records. The following figure shows more details.
The full path is long, and the latency on the read-only nodes is high. This may cause an imbalance between the read loads and write loads over the read/write splitting link.
The read-only nodes can read WAL records from the shared storage. Therefore, the primary node can remove the payloads of WAL records and send only the metadata of WAL records to the read-only nodes. This alleviates the pressure on network transmission and reduces the I/O loads on critical paths. The following figure shows more details.
This optimization method significantly reduces the amount of data that needs to be transmitted between the primary node and the read-only nodes. The amount of data that needs to be transmitted decreases by 98%, as shown in the following figure.
Conventional database systems need to read a large number of pages, apply WAL records to these pages one by one, and then flush the updated pages to the disk. To reduce the read I/O loads on critical paths, PolarDB supports compute-storage separation. If the page that you query on the read-only nodes cannot be hit in the buffer pools of the read-only nodes, no I/O loads are generated and only LogIndex records are recorded.
The following I/O operations that are performed by log apply processes can be offloaded to session processes:
In the example shown in the following figure, when the log apply process of a read-only node applies the metadata of a WAL record of a page:
This optimization method significantly reduces the log apply latency and increases the log apply speed by 30 times compared with Amazon Aurora.
When the primary node runs a DDL operation such as DROP TABLE to modify a table, the primary node acquires an exclusive DDL lock on the table. The exclusive DDL lock is replicated to the read-only nodes along with WAL records. The read-only nodes apply the WAL records to acquire the exclusive DDL lock on the table. This ensures that the table cannot be deleted by the primary node when a read-only node is reading the table. Only one copy of the table is stored in the shared storage.
When the log apply process of a read-only node applies the exclusive DDL lock, the read-only node may require a long period of time to acquire the exclusive DDL lock on the table. You can optimize the critical path of the log apply process by offloading the task of acquiring the exclusive DDL lock to other processes.
This optimization method ensures that the critical path of the log apply process of a read-only node is not blocked even if the log apply process needs to wait for the release of an exclusive DDL lock.
The three optimization methods in combination significantly reduce replication latency and have the following benefits:
If the read-only nodes apply WAL records at low speeds, your PolarDB cluster may require a long period of time to recover from exceptions such as out of memory (OOM) errors and unexpected crashes. When the direct I/O model is used for the shared storage, the severity of this issue increases.
The preceding sections explain how LogIndex enables the read-only nodes to apply WAL records in lazy log apply mode. In general, the recovery process of the primary node after a restart is the same as the process in which the read-only nodes apply WAL records. In this sense, the lazy log apply mode can also be used to accelerate the recovery of the primary node.
The example in the following figure shows how the optimized recovery method significantly reduces the time that is required to apply 500 MB of WAL records.
After the primary node recovers, a session process may need to apply the pages that the session process reads. When a session process is applying pages, the primary node responds at low speeds for a short period of time. To resolve this issue, PolarDB does not delete pages from the buffer pool of the primary node if the primary node restarts or unexpectedly crashes.
The shared memory of the database engine consists of the following two parts:
Not all pages in the buffer pool of the primary node can be reused. For example, if a process acquires an exclusive lock on a page before the primary node restarts and then unexpectedly crashes, no other processes can release the exclusive lock on the page. Therefore, after the primary node unexpectedly crashes or restarts, it needs to traverse all pages in its buffer pool to identify and remove the pages that cannot be reused. In addition, the recycling of buffer pools depends on Kubernetes.
This optimized buffer pool mechanism ensures the stable performance of your PolarDB cluster before and after a restart.
The shared storage of PolarDB is organized as a storage pool. When read/write splitting is enabled, the theoretical I/O throughput that is supported by the shared storage is infinite. However, large queries can be run only on individual compute nodes, and the CPU, memory, and I/O specifications of a single compute node are limited. Therefore, a single compute node cannot fully utilize the high I/O throughput that is supported by the shared storage or accelerate large queries by acquiring more computing resources. To resolve these issues, PolarDB uses the shared storage-based MPP architecture to accelerate OLAP queries in OLTP scenarios.
In a PolarDB cluster, the physical storage is shared among all compute nodes. Therefore, you cannot use the method of scanning tables in conventional MPP databases to scan tables in PolarDB clusters. PolarDB supports MPP on standalone execution engines and provides optimized shared storage. This shared storage-based MPP architecture is the first architecture of its kind in the industry. We recommend that you familiarize yourself with the following basic principles of this architecture before you use PolarDB:
The preceding figure shows an example.
The GPORCA optimizer is extended with a set of transformation rules that can recognize the shared storage. These rules open up an additional portion of the plan search space to PolarDB. For example, PolarDB can scan a table either as a whole or as different virtual partitions. This is a major difference between shared storage-based MPP and conventional MPP.
The modules in gray in the upper part of the following figure are modules of the database engine. These modules enable the database engine of PolarDB to adapt to the GPORCA optimizer.
The modules in the lower part of the following figure comprise the GPORCA optimizer. Among these modules, the modules in gray are extended modules, which enable the GPORCA optimizer to communicate with the shared storage of PolarDB.
Four types of operators in PolarDB require parallelism. This section describes how to enable parallelism for operators that are used to run sequential scans. To fully utilize the I/O throughput that is supported by the shared storage, PolarDB splits each table into logical units during a sequential scan. Each unit contains 4 MB of data. This way, PolarDB can distribute I/O loads to different disks, and the disks can simultaneously scan data to accelerate the sequential scan. In addition, each read-only node needs to scan only specific tables rather than all tables. The size of tables that can be cached is the total size of the buffer pools of all read-only nodes.
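To make the unit-based distribution more concrete, the following sketch shows one plausible round-robin assignment of 4 MB logical units to workers. The assignment policy and all names here are illustrative assumptions, not the exact implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define SCAN_UNIT_BYTES (4 * 1024 * 1024)          /* one logical scan unit: 4 MB */
#define PAGE_BYTES      8192
#define PAGES_PER_UNIT  (SCAN_UNIT_BYTES / PAGE_BYTES)

/* Illustrative policy: worker w handles unit u when u % nworkers == w,
 * so the 4 MB units (and their I/O) spread across all workers and disks. */
static bool
unit_belongs_to_worker(uint64_t unit_id, int worker_id, int nworkers)
{
    return (int) (unit_id % (uint64_t) nworkers) == worker_id;
}

/* First physical page of a given logical unit. */
static uint64_t
unit_first_page(uint64_t unit_id)
{
    return unit_id * PAGES_PER_UNIT;
}
```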
Parallelism has the following benefits, as shown in the following figure:
Data skew is a common issue in conventional MPP:
Although scan tasks are dynamically distributed, PolarDB maintains buffer affinity as much as possible. In addition, the context of each operator is stored in the private memory of the worker threads. The coordinator node does not store the information about specific tables.
In the example shown in the following table, PolarDB uses static sharding to shard large objects. During the static sharding process, data skew occurs, but the performance of dynamic scanning can still linearly increase.
Data sharing helps deliver ultimate scalability in cloud-native environments. The full path of the coordinator node involves various modules, and PolarDB can store the external dependencies of these modules to the shared storage. In addition, the full path of a worker thread involves a number of operational parameters, and PolarDB can synchronize these parameters from the coordinator node over the control path. This way, the coordinator node and the worker thread are stateless.
The following conclusions are made based on the preceding analysis:
The log apply wait mechanism and the global snapshot mechanism are used to ensure data consistency among multiple compute nodes. The log apply wait mechanism ensures that all worker threads can obtain the most recent version of each page. The global snapshot mechanism ensures that a unified version of each page can be selected.
A total of 1 TB of data is used for TPC-H testing. First, run 22 SQL statements in a PolarDB cluster and in a conventional database system. The PolarDB cluster supports distributed parallelism, and the conventional database system supports standalone parallelism. The test result shows that the PolarDB cluster executes three SQL statements at speeds that are 60 times higher and 19 statements at speeds that are 10 times higher than the conventional database system.
Then, run a TPC-H test by using a distributed execution engine. The test result shows that the speed at which each of the 22 SQL statements runs linearly increases as the number of cores increases from 16 to 128.
When 16 nodes are configured, PolarDB delivers performance that is 90% of the performance delivered by an MPP-based database.
As mentioned earlier, the distributed execution engine of PolarDB supports scalability, and data in PolarDB does not need to be redistributed. When the degree of parallelism (DOP) is 8, PolarDB delivers performance that is 5.6 times the performance delivered by an MPP-based database.
A large number of indexes are created in OLTP scenarios. The workloads that you run to create these indexes are divided into two parts: 80% of the workloads are run to sort and create index pages, and 20% of the workloads are run to write index pages. Distributed execution accelerates the process of sorting indexes and supports the batch writing of index pages.
Distributed execution accelerates the creation of indexes by four to five times.
PolarDB is a multi-model database service that supports spatio-temporal data. PolarDB runs CPU-bound workloads and I/O-bound workloads. These workloads can be accelerated by distributed execution. The shared storage of PolarDB supports scans on shared R-tree indexes.
This document describes the crucial technologies that are used in the PolarDB architecture:
More technical details about PolarDB will be discussed in other documents. For example, how the shared storage-based query optimizer runs, how LogIndex achieves high performance, how PolarDB flashes your data back to a specific point in time, how MPP can be implemented in the shared storage, and how PolarDB works with X-Paxos to ensure high availability.
In a conventional database system, the primary instance and the read-only instances are each allocated a specific amount of exclusive storage space. The read-only instances can apply write-ahead logging (WAL) records and can read and write data to their own storage. A PolarDB cluster consists of a primary node and at least one read-only node. The primary node and the read-only nodes share the same physical storage. The primary node can read and write data to the shared storage. The read-only nodes can read data from the shared storage by applying WAL records but cannot write data to the shared storage. The following figure shows the architecture of a PolarDB cluster.
The read-only nodes may read two types of pages from the shared storage:
Future pages: The pages that the read-only nodes read from the shared storage incorporate changes that are made after the apply log sequence numbers (LSNs) of the pages. For example, the read-only nodes have applied all WAL records up to the WAL record with an LSN of 200 to a page, but the change described by the most recent WAL record with an LSN of 300 has been incorporated into the same page in the shared storage. These pages are called future pages.
Outdated pages: The pages that the read-only nodes read from the shared storage do not incorporate changes that are made before the apply LSNs of the pages. For example, the read-only nodes have applied all WAL records up to the most recent WAL record with an LSN of 200 to a page in memory, but the change described by that WAL record has not yet been incorporated into the same page in the shared storage. These pages are called outdated pages.
Each read-only node expects to read pages that incorporate only the changes made up to the apply LSNs of the pages on that read-only node. To prevent the read-only nodes from reading outdated pages or future pages from the shared storage, PolarDB takes the measures described in the following sections.
Buffer management involves consistent LSNs. For a specific page, each read-only node needs to apply only the WAL records that are generated between the consistent LSN and the apply LSN. This reduces the time that is required to apply WAL records on the read-only nodes.
PolarDB provides a flushing control mechanism to prevent the read-only nodes from reading future pages from the shared storage. Before the primary node writes a page to the shared storage, the primary node checks whether all the read-only nodes have applied the most recent WAL record of the page.
The pages in the buffer pool of the primary node are divided into the following two types based on whether the pages incorporate the changes that are made after the apply LSNs of the pages: pages that can be flushed to the shared storage and pages that cannot be flushed to the shared storage. This categorization is based on the following LSNs:
The primary node determines whether to flush a dirty page to the shared storage based on the following rules:
if buffer latest lsn <= oldest apply lsn
+ flush buffer
+else
+ do not flush buffer
+
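The same rule can be written as a minimal C sketch. The type and field names below (`BufferDescSketch`, `latest_lsn`, `oldest_apply_lsn`) are illustrative assumptions rather than the actual PolarDB symbols.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Illustrative buffer descriptor; the field name is an assumption. */
typedef struct BufferDescSketch
{
    XLogRecPtr latest_lsn;      /* LSN of the most recent change to this buffer */
} BufferDescSketch;

/*
 * A buffer may be flushed to the shared storage only if its latest change is
 * not newer than the smallest apply LSN reported by the read-only nodes.
 */
static bool
buffer_can_flush(const BufferDescSketch *buf, XLogRecPtr oldest_apply_lsn)
{
    return buf->latest_lsn <= oldest_apply_lsn;
}
```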
To apply the WAL records of a page up to a specified LSN, each read-only node manages the mapping between the page and the LSNs of all WAL records that are generated for the page. This mapping is stored as a LogIndex. A LogIndex is used as a hash table that can be persistently stored. When a read-only node requests a page, the read-only node traverses the LogIndex of the page to obtain the LSNs of all WAL records that need to be applied. Then, the read-only node applies the WAL records in sequence to generate the most recent version of the page.
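As a rough illustration of this lookup-and-replay flow, the sketch below collects the LSN list recorded for a page and applies the corresponding WAL records in ascending order. The helper names (`logindex_lookup`, `apply_wal_record`) and types are hypothetical stand-ins, not the real PolarDB API.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Identifies one data page; the fields are simplified. */
typedef struct PageTagSketch
{
    uint32_t rel_node;
    uint32_t block_num;
} PageTagSketch;

/* Hypothetical hooks: collect the LSN list recorded for a page and replay
 * one WAL record against an in-memory page. */
extern size_t logindex_lookup(const PageTagSketch *tag, XLogRecPtr upto_lsn,
                              XLogRecPtr *lsns, size_t max_lsns);
extern void apply_wal_record(void *page, XLogRecPtr lsn);

/*
 * Bring a page up to date: fetch every LSN recorded for the page up to the
 * node's apply LSN, then replay the WAL records in ascending LSN order.
 */
static void
replay_page_to(void *page, const PageTagSketch *tag, XLogRecPtr apply_lsn)
{
    XLogRecPtr lsns[64];
    size_t     n = logindex_lookup(tag, apply_lsn, lsns, 64);

    for (size_t i = 0; i < n; i++)
        apply_wal_record(page, lsns[i]);
}
```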
For a specific page, more changes mean more LSNs and a longer period of time required to apply WAL records. To minimize the number of WAL records that need to be applied for each page, PolarDB provides consistent LSNs.
After all changes that are made up to the consistent LSN of a page are written to the shared storage, the page is persistently stored. The primary node sends the write LSN and consistent LSN of the page to each read-only node, and each read-only node sends the apply LSN of the page and the min used LSN of the page to the primary node. The read-only nodes do not need to apply the WAL records that are generated before the consistent LSN of the page when they read the page from the shared storage. However, they may still need to apply those WAL records when they replay an outdated page that already resides in their buffer pools. Therefore, the LSNs that are smaller than both the consistent LSN and the min used LSN can be removed from the LogIndex of the page. This reduces the number of WAL records that the read-only nodes need to apply. This also reduces the storage space that is occupied by LogIndex records.
PolarDB holds a specific state for each buffer in the memory. The state of a buffer in the memory is represented by the LSN that marks the first change to the buffer. This LSN is called the oldest LSN. The consistent LSN of a page is the smallest oldest LSN among the oldest LSNs of all buffers for the page.
A conventional method of obtaining the consistent LSN of a page requires the primary node to traverse the LSNs of all buffers for the page in the buffer pool. This method causes significant CPU overhead and a long traversal process. To address these issues, PolarDB uses a flush list, in which all dirty pages in the buffer pool are sorted in ascending order based on their oldest LSNs. The flush list helps you reduce the time complexity of obtaining consistent LSNs to O(1).
When a buffer is updated for the first time, the buffer is labeled as dirty. PolarDB inserts the buffer into the flush list and generates an oldest LSN for the buffer. When the buffer is flushed to the shared storage, the label is removed.
To efficiently move the consistent LSN of each page towards the head of the flush list, PolarDB runs a BGWRITER process to traverse all buffers in the flush list in chronological order and flush early buffers to the shared storage one by one. After a buffer is flushed to the shared storage, the consistent LSN is moved one position forward towards the head of the flush list. In the example shown in the preceding figure, if the buffer with an oldest LSN of 10 is flushed to the shared storage, the buffer with an oldest LSN of 30 is moved one position forward towards the head of the flush list. LSN 30 becomes the consistent LSN.
To further improve the efficiency of moving the consistent LSN of each page to the head of the flush list, PolarDB runs multiple BGWRITER processes to flush buffers in parallel. Each BGWRITER process reads a number of buffers from the flush list and flushes the buffers to the shared storage at a time.
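A minimal sketch of how a flush list ordered by oldest LSN yields the consistent LSN in O(1) is shown below; the list layout and names are assumptions made for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

typedef struct DirtyBufferSketch
{
    XLogRecPtr                oldest_lsn;   /* LSN of the first change after the buffer became dirty */
    struct DirtyBufferSketch *next;         /* next buffer, in ascending oldest-LSN order */
} DirtyBufferSketch;

typedef struct FlushListSketch
{
    DirtyBufferSketch *head;                /* buffer with the smallest oldest LSN */
} FlushListSketch;

/* The consistent LSN is simply the oldest LSN at the head of the list: O(1). */
static XLogRecPtr
consistent_lsn(const FlushListSketch *list)
{
    return list->head ? list->head->oldest_lsn : UINT64_MAX;
}

/* After a BGWRITER process flushes the head buffer (I/O omitted here),
 * the next buffer's oldest LSN becomes the new consistent LSN. */
static void
flush_head(FlushListSketch *list)
{
    if (list->head)
        list->head = list->head->next;
}
```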
After the flushing control mechanism is introduced, PolarDB flushes only the buffers that meet specific flush conditions to the shared storage. If a buffer is frequently updated, its latest LSN may remain larger than its oldest apply LSN. As a result, the buffer can never meet the flush conditions. Buffers of this type are called hot buffers. If a page has hot buffers, the consistent LSN of the page cannot be moved towards the head of the flush list. To resolve this issue, PolarDB provides a copy buffering mechanism.
The copy buffering mechanism allows PolarDB to copy buffers that do not meet the flush conditions to a copy buffer pool. Buffers in the copy buffer pool and their latest LSNs are no longer updated. As the oldest apply LSN moves towards the head of the flush list, these buffers start to meet the flush conditions. When these buffers meet the flush conditions, PolarDB can flush them from the copy buffer pool to the shared storage.
The following flush rules apply:
In the example shown in the following figure, the buffer with an oldest LSN of 30 and a latest LSN of 500 is considered a hot buffer. The buffer is updated after it is copied to the copy buffer pool. If the change is marked by LSN 600, PolarDB changes the oldest LSN of the buffer to 600 and moves the buffer to the tail of the flush list. At this time, the copy of the buffer is no longer updated, and the latest LSN of the copy remains 500. When the copy meets the flush conditions, PolarDB flushes the copy to the shared storage.
After the copy buffering mechanism is introduced, PolarDB uses a different method to calculate the consistent LSN of each page. For a specific page, the oldest LSN in the flush list is no longer the smallest oldest LSN because the oldest LSN in the copy buffer pool can be smaller. Therefore, PolarDB needs to compare the oldest LSN in the flush list with the oldest LSN in the copy buffer pool. The smaller oldest LSN is considered the consistent LSN.
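In other words, the consistent LSN becomes the smaller of the two minimum oldest LSNs, as in the hedged sketch below (the accessor names are illustrative assumptions).

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Hypothetical accessors; both return UINT64_MAX when the pool is empty. */
extern XLogRecPtr flush_list_min_oldest_lsn(void);
extern XLogRecPtr copy_pool_min_oldest_lsn(void);

/* The consistent LSN is the smaller of the two minimum oldest LSNs. */
static XLogRecPtr
compute_consistent_lsn(void)
{
    XLogRecPtr a = flush_list_min_oldest_lsn();
    XLogRecPtr b = copy_pool_min_oldest_lsn();

    return (a < b) ? a : b;
}
```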
PolarDB supports consistent LSNs, which are similar to checkpoints. All changes that are made to a page before the checkpoint LSN of the page are flushed to the shared storage. If a recovery operation is run, PolarDB starts to recover the page from the checkpoint LSN. This improves recovery efficiency. If regular checkpoint LSNs are used, PolarDB flushes all dirty pages in the buffer pool and other in-memory pages to the shared storage. This process may require a long period of time and high I/O throughput. As a result, normal queries may be affected.
Consistent LSNs empower PolarDB to implement lazy checkpointing. If the lazy checkpointing mechanism is used, PolarDB does not flush all dirty pages in the buffer pool to the shared storage. Instead, PolarDB uses consistent LSNs as checkpoint LSNs. This significantly increases checkpointing efficiency.
The underlying logic of the lazy checkpointing mechanism allows PolarDB to run BGWRITER processes that continuously flush dirty pages and maintain consistent LSNs. The lazy checkpointing mechanism cannot be used with the full page write feature. If you enable the full page write feature, the lazy checkpointing mechanism is automatically disabled.
In a shared storage architecture that consists of one primary node and multiple read-only nodes, a data file has only one copy. Due to multi-version concurrency control (MVCC), the read and write operations performed on different nodes do not conflict. However, MVCC cannot be used to ensure consistency for some specific data operations, such as file operations.
MVCC applies to tuples within a file but does not apply to the file itself. File operations such as creating and deleting files are visible to the entire cluster immediately after they are performed. This causes an issue that files disappear while read-only nodes are reading the files. To prevent the issue from occurring, file operations need to be synchronized.
In most cases, DDL is used to perform operations on files. For DDL operations, PolarDB provides a synchronization mechanism to prevent concurrent file operations. The logic of DDL operations in PolarDB is the same as the logic of single-node execution. However, the synchronization mechanism is different.
The DDL synchronization mechanism uses AccessExclusiveLocks (DDL locks) to synchronize DDL operations between primary and read-only nodes.
Figure 1: Relationship Between DDL Lock and WAL Log
DDL locks are table locks at the highest level in databases. DDL locks and locks at other levels are mutually exclusive. When the primary node acquires a DDL lock on a table, it records the lock in the WAL log, and the LSN of that WAL record is the LSN of the lock. When a read-only node applies the WAL log beyond the LSN of the lock, the lock is considered to have been acquired on the read-only node. The DDL lock is released after the transaction ends. Figure 1 shows the entire process from the acquisition to the release of a DDL lock. When the WAL log is applied at Apply LSN 1, the DDL lock has not been acquired. When the WAL log is applied at Apply LSN 2, the DDL lock has been acquired. When the WAL log is applied at Apply LSN 3, the DDL lock has been released.
Figure 2: Conditions for Acquiring DDL Lock
When the WAL log file is applied beyond the LSN of the lock on all read-only nodes, the DDL lock is considered to have been acquired by the transaction of the primary node at the cluster level. Then, this table cannot be accessed by other sessions on the primary node or read-only nodes. During this time period, the primary node can perform various file operations on the table.
Note: A standby node in an active/standby environment has independent file storage. When a standby node acquires a lock, the preceding situation never occurs.
Figure 3: DDL Synchronization Workflow
Figure 3 shows the workflow of how DDL operations are synchronized.
DDL locks are locks at the highest level in PostgreSQL databases. Before a database performs operations such as DROP, ALTER, LOCK, and VACUUM (FULL) on a table, a DDL lock must be acquired. The primary node acquires the DDL lock by responding to user requests. When the lock is acquired, the primary node writes the DDL lock to the log file. Read-only nodes acquire the DDL lock by applying the log file.
DDL operations on a table are synchronized based on the following logic. The < indicator shows that the operations are performed from left to right.
The sequence of the following operations is inferred based on the preceding execution logic: Queries on the primary node and each read-only node end < The primary node acquires a global DDL lock < The primary node writes data < The primary node releases the global DDL lock < The primary node and read-only nodes run new queries.
When the primary node writes data to the shared storage, no queries are run on the primary node or read-only nodes. This way, data correctness is ensured. The entire operation process follows the two-phase locking (2PL) protocol. This way, data correctness is ensured among multiple tables.
In the preceding synchronization mechanism, DDL locks are synchronized in the main process that is used for primary/secondary synchronization. When the synchronization of a DDL lock to a read-only node is blocked, the synchronization of data to the read-only node is also blocked. In the third and fourth phases of the apply process shown in Figure 1, the DDL lock can be acquired only after the session in which local queries are run is closed. The default timeout period for synchronization in PolarDB is 30 seconds. If the primary node runs under a heavy load, a large data latency may occur.
In specific cases, the data latency for a read-only node to apply DDL locks is the sum of the time spent acquiring the lock for each log entry. For example, if the primary node writes 10 DDL lock log entries within 1 second and each lock acquisition waits for the 30-second timeout, the read-only node requires 300 seconds to apply all of these log entries. Data latency can affect the system stability of PolarDB in a negative manner. The primary node may be unable to clean dirty data and perform checkpoints at the earliest opportunity due to data latency. If the system stops responding when a large data latency occurs, the system requires an extended period of time to recover. This can lead to great stability risks.
To resolve this issue, PolarDB optimizes DDL lock apply on read-only nodes.
Figure 4: Asynchronous Apply of DDL Locks on Read-Only Nodes
PolarDB uses an asynchronous process to apply DDL locks so that the main apply process is not blocked.
Figure 4 shows the overall workflow: PolarDB offloads the acquisition of DDL locks from the main apply process to a dedicated lock apply process and immediately returns control to the main apply process. This way, the main apply process is not affected even if lock apply is blocked.
Lock apply conflicts rarely occur, so PolarDB does not offload the acquisition of all locks to the lock apply process. PolarDB first attempts to acquire a lock in the main apply process. If the attempt succeeds, PolarDB does not offload the lock acquisition to the lock apply process. This reduces the synchronization overhead between processes.
By default, the asynchronous lock apply feature is enabled in PolarDB. This feature can reduce the apply latency caused by apply conflicts to ensure service stability. AWS Aurora does not provide similar features. Apply conflicts in AWS Aurora can severely increase data latency.
In asynchronous apply mode, only the executor who acquires locks changes, but the execution logic does not change. During the process in which the primary node acquires a global DDL lock, writes data, and then releases the global DDL lock, no queries are run. This way, data correctness is not affected.
PolarDB uses a shared storage architecture. Each PolarDB cluster consists of a primary node and multiple read-only nodes that share the same data in the shared storage. The primary node can read data from the shared storage and write data to it. Read-only nodes can read data from the shared storage only by replaying logs. Data in the memory is synchronized from the primary node to read-only nodes. This ensures that data is consistent between the primary node and read-only nodes. Read-only nodes can also provide services to implement read/write splitting and load balancing. If the primary node becomes unavailable, a read-only node can be promoted to the primary node. This ensures the high availability of the cluster. The following figure shows the architecture of PolarDB.
In the shared-nothing architecture, read-only nodes have independent memory and storage. These nodes need only to receive write-ahead logging (WAL) logs from the primary node and replay the WAL logs. If the data that needs to be replayed is not in buffer pools, the data must be read from storage files and written to buffer pools for replay. This can cause cache misses. More data is evicted from buffer pools because the data is replayed in a continuous manner. The following figure shows more details.
Multiple transactions on the primary node can be executed in parallel. Read-only nodes must replay WAL logs in the sequence in which the WAL logs are generated. As a result, read-only nodes replay WAL logs at a low speed and the latency between the primary node and read-only nodes increases.
If a PolarDB cluster uses a shared storage architecture and consists of one primary node and multiple read-only nodes, the read-only nodes can obtain WAL logs that need to be replayed from the shared storage. If data pages on the shared storage are the most recent pages, read-only nodes can read the data pages without replaying the pages. PolarDB provides LogIndex that can be used on read-only nodes to replay WAL logs at a higher speed.
LogIndex stores the mapping between a data page and all the log sequence numbers (LSNs) of updates on the page. LogIndex can be used to rapidly obtain all LSNs of updates on a data page. This way, the WAL logs generated for the data page can be replayed when the data page is read. The following figure shows the architecture that is used to synchronize data from the primary node to read-only nodes.
Compared with the shared-nothing architecture, the workflow of the primary node and read-only nodes in the shared storage architecture has the following differences:
PolarDB reduces the latency between the primary node and read-only nodes by replicating only WAL log metadata. PolarDB uses LogIndex to delay the replay of WAL logs and replay WAL logs in parallel. This can increase the speed at which read-only nodes replay WAL logs.
WAL logs are also called XLogRecord. Each XLogRecord consists of two parts, as shown in the following figure.
In shared storage mode, complete WAL logs do not need to be replicated from the primary node to read-only nodes. Only WAL log metadata is replicated to the read-only nodes. WAL log metadata consists of the general header portion, header part, and main data, as shown in the preceding figure. Read-only nodes can read complete WAL log content from the shared storage based on WAL log metadata. The following figure shows the process of replicating WAL log metadata from the primary node to read-only nodes.
In streaming replication mode, payloads are not replicated from the primary node to read-only nodes. This reduces the amount of data transmitted on the network. The WalSender process on the primary node obtains the metadata of WAL logs from the metadata queue stored in the memory. After the WalReceiver process on the read-only nodes receives the metadata, the process stores the metadata in the metadata queue of WAL logs in the memory. The disk I/O in streaming replication mode is lower than that in primary/secondary mode. This increases the speed at which logs are transmitted and reduces the latency between the primary node and read-only nodes.
LogIndex is a hash table structure. The key of this structure is a PageTag, which identifies a specific data page; the values are all the LSNs of updates on that page. The following figure shows the memory data structure of LogIndex. A LogIndex Memtable contains Memtable ID values, maximum and minimum LSNs, and the following arrays:
LogIndex Memtables stored in the memory are divided into two categories: Active LogIndex Memtables and Inactive LogIndex Memtables. The LogIndex records generated based on WAL log metadata are written to an Active LogIndex Memtable. After the Active LogIndex Memtable is full, the table is converted to an Inactive LogIndex Memtable and the system generates another Active LogIndex Memtable. The data in the Inactive LogIndex Memtable can be flushed to the disk. Then, the Inactive LogIndex Memtable can be converted to an Active LogIndex Memtable again. The following figure shows more details.
The disk stores a large number of LogIndex Tables. The structure of a LogIndex Table is similar to the structure of a LogIndex Memtable. A LogIndex Table can contain a maximum of 64 LogIndex Memtables. When data in Inactive LogIndex Memtables is flushed to the disk, Bloom filters are generated for the Memtables. The size of a single Bloom filter is 4,096 bytes. A Bloom filter records the information about an Inactive LogIndex Memtable, such as the mapped values that the bit array of the Bloom filter stores for all pages in the Inactive LogIndex Memtable, the minimum LSN, and the maximum LSN. The following figure shows more details. A Bloom filter can be used to determine whether a page exists in the LogIndex Table that corresponds to the filter. This way, LogIndex Tables in which the page does not exist do not need to be scanned. This accelerates data retrieval.
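The way a Bloom filter lets a reader skip an entire LogIndex Table can be sketched as follows. The filter layout and hash helpers are assumptions made for illustration and do not reflect the actual on-disk format.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOOM_BYTES 4096            /* size of one Bloom filter, as described above */

typedef struct LogIndexBloomSketch
{
    uint8_t bits[BLOOM_BYTES];
} LogIndexBloomSketch;

/* Hypothetical hash functions over a page tag. */
extern uint32_t page_hash1(const void *tag);
extern uint32_t page_hash2(const void *tag);

/*
 * Returns false only if the page is definitely absent from the corresponding
 * LogIndex Table, so that table can be skipped without scanning it.
 */
static bool
bloom_may_contain(const LogIndexBloomSketch *bf, const void *tag)
{
    uint32_t nbits = BLOOM_BYTES * 8;
    uint32_t h1 = page_hash1(tag) % nbits;
    uint32_t h2 = page_hash2(tag) % nbits;

    return (bf->bits[h1 / 8] & (1 << (h1 % 8))) &&
           (bf->bits[h2 / 8] & (1 << (h2 % 8)));
}
```

A Bloom filter can return false positives but never false negatives, which is why a negative answer is sufficient to skip a LogIndex Table safely.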
After the data in an Inactive LogIndex Memtable is flushed to the disk, the LogIndex metadata file is updated. This file is used to ensure the atomicity of I/O operations on the LogIndex Memtable file. The LogIndex metadata file stores the information about the smallest LogIndex Table and the largest LogIndex Memtable on the disk. Start LSN in this file records the maximum LSN among all LogIndex Memtables whose data is flushed to the disk. If data is written to the LogIndex Memtable when the Memtable is flushed, the system parses the WAL logs from Start LSN that are recorded in the LogIndex metadata file. Then, LogIndex records that are discarded during the data write are also regenerated to ensure the atomicity of I/O operations on the Memtable.
All modified data pages recorded in WAL logs before the consistent LSN are persisted to the shared storage based on the information described in Buffer Management. The primary node sends the write LSN and consistent LSN to each read-only node, and each read-only node sends the apply LSN and the min used LSN to the primary node. In this case, the WAL logs whose LSNs are smaller than the consistent LSN and the min used LSN can be cleared from LogIndex Tables. This way, the primary node can truncate LogIndex Tables that are no longer used in the storage. This enables more efficient log replay for read-only nodes and reduces the space occupied by LogIndex Tables.
For scenarios in which LogIndex Tables are used, the startup processes of read-only nodes generate LogIndex records based on the received WAL metadata and mark the pages that correspond to the WAL metadata and exist in buffer pools as outdated pages. This way, WAL logs for the next LSN can be replayed. The startup processes do not replay WAL logs. The backend processes that access the page and the background replay processes replay the logs. The following figure shows how WAL logs are replayed.
The XLOG Buffer is added to cache the read WAL logs. This reduces performance overhead when WAL logs are read from the disk for replay. WAL logs are read from the WAL segment file on the disk. After the XLOG Page Buffer is added, WAL logs are preferentially read from the XLOG Buffer. If WAL logs that you want to replay are not in the XLOG Buffer, the pages of the WAL logs are read from the disk, written to the buffer, and then copied to readBuf of XLogReaderState. If the WAL logs are in the buffer, the logs are copied to readBuf of XLogReaderState. This reduces the number of I/O operations that need to be performed to replay the WAL logs to increase the speed at which the WAL logs are replayed. The following figure shows more details.
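This cache-first read path can be sketched as below; `xlog_buffer_lookup`, `read_wal_page_from_disk`, and `xlog_buffer_insert` are hypothetical helpers, not the actual functions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define XLOG_PAGE_BYTES 8192

typedef uint64_t XLogRecPtr;

/* Hypothetical helpers: look up a WAL page in the XLOG Buffer, read it from
 * the WAL segment file on disk, and insert it into the buffer. */
extern bool xlog_buffer_lookup(XLogRecPtr page_start, char *out);
extern void read_wal_page_from_disk(XLogRecPtr page_start, char *out);
extern void xlog_buffer_insert(XLogRecPtr page_start, const char *page);

/* Prefer the in-memory XLOG Buffer; fall back to disk and cache the result. */
static void
read_wal_page(XLogRecPtr page_start, char *read_buf)
{
    char page[XLOG_PAGE_BYTES];

    if (xlog_buffer_lookup(page_start, page))
    {
        memcpy(read_buf, page, sizeof(page));       /* cache hit: no disk I/O */
        return;
    }

    read_wal_page_from_disk(page_start, page);      /* cache miss: one disk read */
    xlog_buffer_insert(page_start, page);
    memcpy(read_buf, page, sizeof(page));
}
```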
The LogIndex mechanism is different from the shared-nothing architecture in terms of log replay. If the LogIndex mechanism is used, the startup process parses WAL metadata to generate LogIndex records and the backend processes replay pages based on LogIndex records in parallel. In this case, the startup process and backend processes perform their operations in parallel, and a backend process replays only the pages that it must access. An XLogRecord may be used to modify multiple pages. For example, in an index block split, Page_0 and Page_1 are modified. The modification is an atomic operation, which means that Page_0 and Page_1 are either both modified or both left unmodified. PolarDB provides the mini transaction lock mechanism to ensure that the memory data structures remain consistent when the backend processes replay pages.
When mini transaction locks are unavailable, the startup process parses WAL metadata and sequentially inserts the current LSN into the LSN list of each page. The following figure shows more details. The startup process completes the update of the LSN list of Page_0 but does not complete the update of the LSN list of Page_1. In this case, Backend_0 accesses Page_0 and Backend_1 accesses Page_1. Backend_0 replays Page_0 based on the LSN list of Page_0. Backend_1 replays Page_1 based on the LSN list of Page_1. The WAL log for LSN_N+1 is replayed for Page_0 and the WAL log for LSN_N is replayed for Page_1. As a result, the versions of the two pages are not consistent in the buffer pool. This causes inconsistency between the memory data structure of Page_0 and that of Page_1.
In the mini transaction lock mechanism, an update on the LSN list of Page_0 or Page_1 is a mini transaction. Before the startup process updates the LSN list of a page, the process must obtain the mini transaction lock of the page. In the following figure, the process first obtains the mini transaction lock of Page_0. The sequence of the obtained mini transaction lock is consistent with the Page_0 modification sequence in which the WAL log of this page is replayed. After the LSN lists of Page_0 and Page_1 are updated, the mini transaction lock is released. If the backend process replays a specific page based on LogIndex records and the startup process for the page is in a mini transaction, the mini transaction lock of the page must be obtained before the page is replayed. The startup process completes the update of the LSN list of Page_0 but does not complete the update of the LSN list of Page_1. Backend_0 accesses Page_0 and Backend_1 accesses Page_1. In this case, Backend_0 cannot replay Page_0 until the LSN list of this page is updated and the mini transaction lock of this page is released. Before the mini transaction lock of this page is released, the update of the LSN list of page_1 is completed. The memory data structures are modified based on the atomic operation rule.
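The ordering guarantee can be pictured with the following sketch: the startup process updates all affected LSN lists while holding the mini transaction lock, and a backend that replays one of those pages must take the same lock first. The lock primitive and helper names are illustrative assumptions, not the real implementation.

```c
#include <pthread.h>
#include <stdint.h>

/* One mini transaction covers all pages modified by a single WAL record;
 * a plain mutex stands in for the real lock primitive. */
typedef struct MiniTransactionSketch
{
    pthread_mutex_t lock;
} MiniTransactionSketch;

/* Hypothetical helpers. */
extern void append_lsn_to_page_list(int page_no, uint64_t lsn);
extern void replay_page_from_logindex(int page_no);

/* Startup process: update the LSN lists of both pages under the same lock. */
static void
startup_apply_split_record(MiniTransactionSketch *mtr, uint64_t lsn)
{
    pthread_mutex_lock(&mtr->lock);
    append_lsn_to_page_list(0, lsn);    /* Page_0 */
    append_lsn_to_page_list(1, lsn);    /* Page_1 */
    pthread_mutex_unlock(&mtr->lock);
}

/* Backend process: replaying a page that is inside a mini transaction must
 * wait until every affected LSN list has been updated. */
static void
backend_replay_page(MiniTransactionSketch *mtr, int page_no)
{
    pthread_mutex_lock(&mtr->lock);
    replay_page_from_logindex(page_no);
    pthread_mutex_unlock(&mtr->lock);
}
```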
PolarDB provides LogIndex based on the shared storage between the primary node and read-only nodes. LogIndex accelerates the speed at which memory data is synchronized from the primary node to read-only nodes and reduces the latency between the primary node and read-only nodes. This ensures the availability of read-only nodes and makes data between the primary node and read-only nodes consistent. This topic describes LogIndex and the LogIndex-based memory synchronization architecture of read-only nodes. LogIndex can be used to synchronize memory data from the primary node to read-only nodes. LogIndex can also be used to promote a read-only node as the primary node online. If the primary node becomes unavailable, the speed at which a read-only node is promoted to the primary node can be increased. This achieves the high availability of compute nodes. In addition, services can be restored in a short period of time.
羁鸟
2022/08/22
30 min
Sequence 作为数据库中的一个特别的表级对象,可以根据用户设定的不同属性,产生一系列有规则的整数,从而起到发号器的作用。
在使用方面,可以设置永不重复的 Sequence 用来作为一张表的主键,也可以通过不同表共享同一个 Sequence 来记录多个表的总插入行数。根据 ANSI 标准,一个 Sequence 对象在数据库要具备以下特征:
为了解释上述特性,我们分别定义 a
、b
两种序列来举例其具体的行为。
CREATE SEQUENCE a start with 5 minvalue -1 increment -2;
+CREATE SEQUENCE b start with 2 minvalue 1 maxvalue 4 cycle;
+
两个 Sequence 对象提供的序列值,随着序列申请次数的变化,如下所示:
PostgreSQL | Oracle | SQLSERVER | MySQL | MariaDB | DB2 | Sybase | Hive |
---|---|---|---|---|---|---|---|
支持 | 支持 | 支持 | 仅支持自增字段 | 支持 | 支持 | 仅支持自增字段 | 不支持 |
为了更进一步了解 PostgreSQL 中的 Sequence 对象,我们先来了解 Sequence 的用法,并从用法中透析 Sequence 背后的设计原理。
PostgreSQL 提供了丰富的 Sequence 调用接口,以及组合使用的场景,以充分支持开发者的各种需求。
PostgreSQL 对 Sequence 对象也提供了类似于 表 的访问方式,即 DQL、DML 以及 DDL。我们从下图中可一览对外提供的 SQL 接口。
分别来介绍以下这几个接口:
该接口的含义为,返回当前 Session 中指定的某一 Sequence 最近一次申请(nextval)得到的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# select currval('seq');
+ currval
+---------
+ 2
+(1 row)
+
需要注意的是,使用该接口必须使用过一次 nextval
方法,否则会提示目标 Sequence 在当前 Session 未定义。
postgres=# select currval('seq');
+ERROR: currval of sequence "seq" is not yet defined in this session
+
该接口的含义为,返回 Session 上次使用的 Sequence 的值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
+postgres=# select lastval();
+ lastval
+---------
+ 3
+(1 row)
+
同样,为了知道上次用的是哪个 Sequence 对象,需要用一次 nextval('seq')
,让 Session 以全局变量的形式记录下上次使用的 Sequence 对象。
lastval
与 curval
两个接口仅仅只是参数不同,currval
需要指定是哪个访问过的 Sequence 对象,而 lastval
无法指定,只能是最近一次使用的 Sequence 对象。
该接口的含义为,取 Sequence 对象的下一个序列值。
通过使用 nextval
方法,可以让数据库基于 Sequence 对象的当前值,返回一个递增了 increment
数量的一个序列值,并将递增后的值作为 Sequence 对象当前值。
postgres=# CREATE SEQUENCE seq start with 1 increment 2;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
increment
称作 Sequence 对象的步长,每次以 nextval 方式申请序列值时,都会按步长进行递增。同时需要注意的是,Sequence 对象创建好以后,第一次申请获得的值是 start value 所定义的值。对于 start value 的默认值,PostgreSQL 有以下规则:
另外,nextval
是一种特殊的 DML,其不受事务所保护,即:申请出的序列值不会再回滚。
postgres=# BEGIN;
+BEGIN
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
PostgreSQL 为了让 Sequence 对象获得较好的并发性能,并没有采用多版本的方式来更新 Sequence 对象,而是采用了原地修改的方式完成 Sequence 对象的更新。这种不受事务保护的方式几乎是所有支持 Sequence 对象的 RDBMS 的通用做法,这也使得 Sequence 成为一种特殊的表级对象。
该接口的含义是,设置 Sequence 对象的序列值。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
该方法可以将 Sequence 对象的序列值设置到给定的位置,同时可以将第一个序列值申请出来。如果不想申请出来,可以采用加入 false
参数的做法。
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1, false);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
通过 setval 设置好 Sequence 对象的值以后,还可以同时设置 Sequence 对象的 is_called 属性。nextval 会根据 is_called 属性判断是否直接返回刚刚设置的序列值:如果 is_called 为 false,nextval 只会将 is_called 置为 true 并返回当前序列值,而不会进行 increment。
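下面用一段简化的 C 代码示意这个分支逻辑,其中的结构体与函数命名仅为示意,并非内核中的真实实现:

```c
#include <stdbool.h>
#include <stdint.h>

/* 简化的 Sequence 元组结构,与后文的 FormData_pg_sequence_data 对应 */
typedef struct SeqTupleSketch
{
    int64_t last_value;
    bool    is_called;
} SeqTupleSketch;

/*
 * nextval 的简化逻辑:如果 is_called 为 false(例如 setval(..., false) 之后),
 * 则直接返回 last_value 并把 is_called 置为 true;否则按步长递增后返回。
 * 此处省略了越界检查与 cycle 处理。
 */
static int64_t
nextval_sketch(SeqTupleSketch *seq, int64_t increment)
{
    if (!seq->is_called)
    {
        seq->is_called = true;
        return seq->last_value;
    }

    seq->last_value += increment;
    return seq->last_value;
}
```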
CREATE
和 ALTER SEQUENCE
用于创建/变更 Sequence 对象,其中 Sequence 属性也通过 CREATE
和 ALTER SEQUENCE
接口进行设置,前面已简单介绍部分属性,下面将详细描述具体的属性。
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+ALTER SEQUENCE [ IF EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ]
+ [ RESTART [ [ WITH ] restart ] ]
+ [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+
AS
:设置 Sequence 的数据类型,只可以设置为 smallint
,int
,bigint
;与此同时也限定了 minvalue
和 maxvalue
的设置范围,默认为 bigint
类型(注意,只是限定,而不是设置,设置的范围不得超过数据类型的范围)。INCREMENT
:步长,nextval
申请序列值的递增数量,默认值为 1。MINVALUE
/ NOMINVALUE
:设置/不设置 Sequence 对象的最小值,如果不设置则是数据类型规定的范围,例如 bigint
类型,则最小值设置为 PG_INT64_MIN
(-9223372036854775808)MAXVALUE
/ NOMAXVALUE
:设置/不设置 Sequence 对象的最大值,如果不设置,则默认设置规则如上。START
:Sequence 对象的初始值,必须在 MINVALUE
和 MAXVALUE
范围之间。RESTART
:ALTER 后,可以重新设置 Sequence 对象的序列值,默认设置为 start value。CACHE
/ NOCACHE
:设置 Sequence 对象使用的 Cache 大小,NOCACHE
或者不设置则默认为 1。OWNED BY
:设置 Sequence 对象归属于某张表的某一列,删除列后,Sequence 对象也将删除。下面描述了一种序列回滚的场景
CREATE SEQUENCE
+postgres=# BEGIN;
+BEGIN
+postgres=# ALTER SEQUENCE seq maxvalue 10;
+ALTER SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
与之前描述的不同,此处 Sequence 对象受到了事务的保护,序列值发生了回滚。实际上,此处事务保护的是 ALTER SEQUENCE
(DDL),而非 nextval
(DML),因此此处发生的回滚是将 Sequence 对象回滚到 ALTER SEQUENCE
之前的状态,故发生了序列回滚现象。
DROP SEQUENCE
,如字面意思,去除数据库中的 Sequence 对象。TRUNCATE
,准确来讲,是通过 TRUNCATE TABLE
完成 RESTART SEQUENCE
。postgres=# CREATE TABLE tbl_iden (i INTEGER, j int GENERATED ALWAYS AS IDENTITY);
+CREATE TABLE
+postgres=# insert into tbl_iden values (100);
+INSERT 0 1
+postgres=# insert into tbl_iden values (1000);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 100 | 1
+ 1000 | 2
+(2 rows)
+
+postgres=# TRUNCATE TABLE tbl_iden RESTART IDENTITY;
+TRUNCATE TABLE
+postgres=# insert into tbl_iden values (1234);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 1234 | 1
+(1 row)
+
此处相当于在 TRUNCATE
表的时候,执行 ALTER SEQUENCE RESTART
。
Sequence 除了作为一个独立的对象使用以外,还可以与 PostgreSQL 的其他组件组合使用,我们总结了以下几个常用的场景。
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY);
+INSERT INTO tbl (i) VALUES (nextval('seq'));
+SELECT * FROM tbl ORDER BY 1 DESC;
+ i
+---
+ 1
+(1 row)
+
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY, j INTEGER);
+CREATE FUNCTION f()
+RETURNS TRIGGER AS
+$$
+BEGIN
+NEW.i := nextval('seq');
+RETURN NEW;
+END;
+$$
+LANGUAGE 'plpgsql';
+
+CREATE TRIGGER tg
+BEFORE INSERT ON tbl
+FOR EACH ROW
+EXECUTE PROCEDURE f();
+
+INSERT INTO tbl (j) VALUES (4);
+
+SELECT * FROM tbl;
+ i | j
+---+---
+ 1 | 4
+(1 row)
+
显式 DEFAULT
调用:
CREATE SEQUENCE seq;
+CREATE TABLE tbl(i INTEGER DEFAULT nextval('seq') PRIMARY KEY, j INTEGER);
+
+INSERT INTO tbl (i,j) VALUES (DEFAULT,11);
+INSERT INTO tbl(j) VALUES (321);
+INSERT INTO tbl (i,j) VALUES (nextval('seq'),1);
+
+SELECT * FROM tbl;
+ i | j
+---+-----
+ 2 | 321
+ 1 | 11
+ 3 | 1
+(3 rows)
+
SERIAL
调用:
CREATE TABLE tbl (i SERIAL PRIMARY KEY, j INTEGER);
+INSERT INTO tbl (i,j) VALUES (DEFAULT,42);
+
+INSERT INTO tbl (j) VALUES (25);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 42
+ 2 | 25
+(2 rows)
+
注意,SERIAL
并不是一种类型,而是 DEFAULT
调用的另一种形式,只不过 SERIAL
会自动创建 DEFAULT
约束所要使用的 Sequence。
CREATE TABLE tbl (i int GENERATED ALWAYS AS IDENTITY,
+ j INTEGER);
+INSERT INTO tbl(i,j) VALUES (DEFAULT,32);
+
+INSERT INTO tbl(j) VALUES (23);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 32
+ 2 | 23
+(2 rows)
+
AUTO_INC
调用对列附加了自增约束,与 default
约束不同,自增约束通过查找 dependency 的方式找到该列关联的 Sequence,而 default
调用仅仅是将默认值设置为一个 nextval
表达式。
在 PostgreSQL 中有一张专门记录 Sequence 信息的系统表,即 pg_sequence
。其表结构如下:
postgres=# \d pg_sequence
+ Table "pg_catalog.pg_sequence"
+ Column | Type | Collation | Nullable | Default
+--------------+---------+-----------+----------+---------
+ seqrelid | oid | | not null |
+ seqtypid | oid | | not null |
+ seqstart | bigint | | not null |
+ seqincrement | bigint | | not null |
+ seqmax | bigint | | not null |
+ seqmin | bigint | | not null |
+ seqcache | bigint | | not null |
+ seqcycle | boolean | | not null |
+Indexes:
+ "pg_sequence_seqrelid_index" PRIMARY KEY, btree (seqrelid)
+
不难看出,pg_sequence
中记录了 Sequence 的全部的属性信息,该属性在 CREATE/ALTER SEQUENCE
中被设置,Sequence 的 nextval
以及 setval
要经常打开这张系统表,按照规则办事。
对于 Sequence 序列数据本身,其实现方式是基于 heap 表实现的,heap 表共计三个字段,其在表结构如下:
typedef struct FormData_pg_sequence_data
+{
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} FormData_pg_sequence_data;
+
last_value
记录了 Sequence 的当前的序列值,我们称之为页面值(与后续的缓存值相区分)log_cnt
记录了 Sequence 在 nextval
申请时,预先向 WAL 中额外申请的序列次数,这一部分我们放在序列申请机制剖析中详细介绍。is_called
标记 Sequence 的 last_value
是否已经被申请过,例如 setval
可以设置 is_called
字段:-- setval false
+postgres=# select setval('seq', 10, false);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | f
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 10
+(1 row)
+
+-- setval true
+postgres=# select setval('seq', 10, true);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 11
+(1 row)
+
每当用户创建一个 Sequence 对象时,PostgreSQL 总是会创建出一张上面这种结构的 heap 表,来记录 Sequence 对象的数据信息。当 Sequence 对象因为 nextval
或 setval
导致序列值变化时,PostgreSQL 就会通过原地更新的方式更新 heap 表中的这一行的三个字段。
以 setval
为例,下面的逻辑解释了其具体的原地更新过程。
static void
+do_setval(Oid relid, int64 next, bool iscalled)
+{
+
+ /* 打开并对Sequence heap表进行加锁 */
+ init_sequence(relid, &elm, &seqrel);
+
+ ...
+
+ /* 对buffer进行加锁,同时提取tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ ...
+
+ /* 原地更新tuple */
+ seq->last_value = next; /* last fetched number */
+ seq->is_called = iscalled;
+ seq->log_cnt = 0;
+
+ ...
+
+ /* 释放buffer锁以及表锁 */
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);
+}
+
可见,do_setval
会直接去设置 Sequence heap 表中的这一行元组,而非普通 heap 表中的删除 + 插入的方式来完成元组更新,对于 nextval
而言,也是类似的过程,只不过 last_value
的值需要计算得出,而非用户设置。
讲清楚 Sequence 对象在内核中的存在形式之后,就需要讲清楚一个序列值是如何发出的,即 nextval
方法。其在内核的具体实现在 sequence.c
中的 nextval_internal
函数,其最核心的功能,就是计算 last_value
以及 log_cnt
。
last_value
和 log_cnt
的具体关系如下图:
其中 log_cnt
是一个预留的申请次数。默认值为 32,由下面的宏定义决定:
/*
+ * We don't want to log each fetching of a value from a sequence,
+ * so we pre-log a few fetches in advance. In the event of
+ * crash we can lose (skip over) as many values as we pre-logged.
+ */
+#define SEQ_LOG_VALS 32
+
每当将 last_value
增加一个 increment 的长度时,log_cnt
就会递减 1。
当 log_cnt
为 0,或者发生 checkpoint
以后,就会触发一次 WAL 日志写入,按下面的公式设置 WAL 日志中的页面值,并重新将 log_cnt
设置为 SEQ_LOG_VALS
。
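根据后文 crash 重启的示例(页面中 last_value 为 1、log_cnt 为 32,重启后页面值变为 33)可以推断,该公式大致为:WAL 日志中记录的页面值 = 当前 last_value + SEQ_LOG_VALS × increment。也就是说,每次写 WAL 时都会预先多预留 32 个步长,即便发生 crash,重启后也只会跳过这段预留区间,而不会发出重复的序列值。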
通过这种方式,PostgreSQL 每次通过 nextval
修改页面中的 last_value
后,不需要每次都写入 WAL 日志。这意味着:如果 nextval
每次都需要修改页面值的话,这种优化将会使得写 WAL 的频率降低 32 倍。其代价就是,在发生 crash 前如果没有及时进行 checkpoint,那么会丢失一段序列。如下面所示:
postgres=# create sequence seq;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 1 | 32 | t
+(1 row)
+
+-- crash and restart
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 33 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 34
+(1 row)
+
显然,crash 以后,Sequence 对象产生了 2-33 这段空洞,但这个代价是可以被接受的,因为 Sequence 并没有违背唯一性原则。同时,在特定场景下极大地降低了写 WAL 的频率。
通过上述描述,不难发现 Sequence 每次发生序列申请,都需要通过加入 buffer 锁的方式来修改页面,这意味着 Sequence 的并发性能是比较差的。
针对这个问题,PostgreSQL 使用对 Sequence 使用了 Session Cache 来提前缓存一段序列,来提高并发性能。如下图所示:
Sequence Session Cache 的实现是一个 entry 数量固定为 16 的哈希表,以 Sequence 的 OID 为 key 去检索已经缓存好的 Sequence 序列,其缓存的 value 结构如下:
typedef struct SeqTableData
+{
+ Oid relid; /* Sequence OID(hash key) */
+ int64 last; /* value last returned by nextval */
+ int64 cached; /* last value already cached for nextval */
+ int64 increment; /* copy of sequence's increment field */
+} SeqTableData;
+
其中 last
即为 Sequence 在 Session 中的当前值,即 current_value,cached
为 Sequence 在 Session 中的缓存值,即 cached_value,increment
记录了步长,有了这三个值即可满足 Sequence 缓存的基本条件。
对于 Sequence Session Cache 与页面值之间的关系,如下图所示:
类似于 log_cnt
,cache_cnt
即为用户在定义 Sequence 时,设置的 Cache 大小,最小为 1。只有当 cache domain 中的序列用完以后,才会去对 buffer 加锁,修改页中的 Sequence 页面值。调整过程如下所示:
例如,如果 CACHE 设置的值为 20,那么当 cache 使用完以后,就会尝试对 buffer 加锁来调整页面值,并重新申请 20 个 increment 至 cache 中。对于上图而言,大致有如下关系:cached_value 与调整后的页面值保持对齐,current_value 在两次调整之间按 increment 逐次递增,两者之差即为缓存中尚未发放的序列个数。
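下面用一段简化的 C 代码示意这种先消耗缓存、用完之后再加锁装填的过程,其中的结构体与函数命名仅为示意,并非内核中的真实实现:

```c
#include <stdint.h>

/* 对应上文 SeqTableData 的简化版本 */
typedef struct SeqCacheSketch
{
    int64_t last;           /* current_value:Session 内当前值 */
    int64_t cached;         /* cached_value:Session 内已缓存到的最大值 */
    int64_t increment;      /* 步长 */
} SeqCacheSketch;

/* 假想的慢路径:对 buffer 加锁,将页面值一次性前移 cache * increment,
 * 并返回调整后的页面值(即新的 cached_value)。 */
extern int64_t advance_page_value(int64_t cache, int64_t increment);

/* nextval 的快路径:只要缓存尚未用完,就无需对 buffer 加锁 */
static int64_t
nextval_cached_sketch(SeqCacheSketch *elm, int64_t cache)
{
    if (elm->last < elm->cached)
    {
        elm->last += elm->increment;                /* 缓存未用完:直接递增 */
        return elm->last;
    }

    /* 缓存用完:加锁调整页面值,重新装填 cache 个 increment */
    elm->cached = advance_page_value(cache, elm->increment);
    elm->last   = elm->cached - (cache - 1) * elm->increment;
    return elm->last;
}
```

在多会话并发时,每个会话都在自己的缓存区间内发号,避免了频繁的 buffer 锁竞争;代价是不同会话产生的序列值不再保证连续,但唯一性仍然成立。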
在 Sequence Session Cache 的加持下,nextval
方法的并发性能得到了极大的提升,以下是通过 pgbench 进行压测的结果对比。
Sequence 在 PostgreSQL 中是一类特殊的表级对象,提供了简单而又丰富的 SQL 接口,使得用户可以更加方便地创建、使用定制化的序列对象。不仅如此,Sequence 还可以与其他数据库组件组合使用,其使用场景也得到了极大的扩展。
本文详细介绍了 Sequence 对象在 PostgreSQL 内核中的具体设计,从对象的元数据描述、对象的数据描述出发,介绍了 Sequence 对象的组成。本文随后介绍了 Sequence 最为核心的 SQL 接口——nextval
,从 nextval
的序列值计算、原地更新、降低 WAL 日志写入三个方面进行了详细阐述。最后,本文介绍了 Sequence Session Cache 的相关原理,描述了引入 Cache 以后,序列值在 Cache 中,以及页面中的计算方法以及对齐关系,并对比了引入 Cache 前后,nextval
方法在单序列和多序列并发场景下的对比情况。
警告
需要翻译
Coding in C follows PostgreSQL's programming style, covering naming, error message format, control statements, line length, comment format, function length, and global variables. For details, please refer to the PostgreSQL style guide. Here are some highlights:
Programs in Shell, Go, or Python can follow Google code conventions
We share the same thoughts and rules as Google Open Source Code Review
Before submitting for code review, please run the unit tests and pass all tests under src/test, such as regress and isolation. Unit tests or function tests should be submitted along with the code modification.
In addition to code review, this doc offers instructions for the whole cycle of high-quality development, from design, implementation, testing, and documentation to preparing for code review. Many good questions are asked for critical steps during development, such as questions about design, functionality, complexity, testing, naming, documentation, and code review. The doc summarizes the rules for code review as follows.
In doing a code review, you should make sure that:
PolarDB for PostgreSQL 的文档使用 VuePress 2 进行管理,以 Markdown 为中心进行写作。
本文档在线托管于 GitHub Pages 服务上。
若您发现文档中存在内容或格式错误,或者您希望能够贡献新文档,那么您需要在本地安装并配置文档开发环境。本项目的文档是一个 Node.js 工程,以 pnpm 作为软件包管理器。Node.js® 是一个基于 Chrome V8 引擎的 JavaScript 运行时环境。
您需要在本地准备 Node.js 环境。可以选择在 Node.js 官网 下载 页面下载安装包手动安装,也可以使用下面的命令自动安装。
通过 curl
安装 Node 版本管理器 nvm
。
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+command -v nvm
+
如果上一步显示 command not found
,那么请关闭当前终端,然后重新打开。
如果 nvm
已经被成功安装,执行以下命令安装 Node 的 LTS 版本:
nvm install --lts
+
Node.js 安装完毕后,使用如下命令检查安装是否成功:
node -v
+npm -v
+
使用 npm
全局安装软件包管理器 pnpm
:
npm install -g pnpm
+pnpm -v
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令,pnpm
将会根据 package.json
安装所有依赖:
pnpm install
+
在 PolarDB for PostgreSQL 工程的根目录下运行以下命令:
pnpm run docs:dev
+
文档开发服务器将运行于 http://localhost:8080/PolarDB-for-PostgreSQL/
,打开浏览器即可访问。对 Markdown 文件作出修改后,可以在网页上实时查看变化。
PolarDB for PostgreSQL 的文档资源位于工程根目录的 docs/
目录下。其目录被组织为:
└── docs
+ ├── .vuepress
+ │ ├── configs
+ │ ├── public
+ │ └── styles
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ ├── roadmap
+ └── zh
+ ├── README.md
+ ├── architecture
+ ├── contributing
+ ├── guide
+ ├── imgs
+ └── roadmap
+
可以看到,docs/zh/
目录下是其父级目录除 .vuepress/
以外的翻版。docs/
目录中全部为英语文档,docs/zh/
目录下全部是相对应的简体中文文档。
.vuepress/
目录下包含文档工程的全局配置信息:
config.ts
:文档配置configs/
:文档配置模块(导航栏 / 侧边栏、英文 / 中文等配置)public/
:公共静态资源styles/
:文档主题默认样式覆盖文档的配置方式请参考 VuePress 2 官方文档的 配置指南。
npx prettier --write docs/
本文档借助 GitHub Actions 提供 CI 服务。向主分支推送代码时,将触发对 docs/
目录下文档资源的构建,并将构建结果推送到 gh-pages 分支上。GitHub Pages 服务会自动将该分支上的文档静态资源部署到 Web 服务器上形成文档网站。
PolarDB for PostgreSQL 基于 PostgreSQL 和其它开源项目进行开发,我们的主要目标是为 PostgreSQL 建立一个更大的社区。我们欢迎来自社区的贡献者提交他们的代码或想法。在更远的未来,我们希望这个项目能够被来自阿里巴巴内部和外部的开发者共同管理。
POLARDB_11_STABLE
是 PolarDB 的稳定分支,只接受来自 POLARDB_11_DEV
的合并POLARDB_11_DEV
是 PolarDB 的稳定开发分支,接受来自开源社区的 PR 合并,以及内部开发者的直接推送新的代码将被合并到 POLARDB_11_DEV
上,再由内部开发者定期合并到 POLARDB_11_STABLE
上。
在 ApsaraDB/PolarDB-for-PostgreSQL 代码仓库页面上,点击右上角的 fork 按钮,复制一个属于您自己的 PolarDB 仓库。
git clone https://github.com/<your-github>/PolarDB-for-PostgreSQL.git
+
从稳定开发分支 POLARDB_11_DEV
上检出一个新的开发分支,假设这个分支名为 dev
:
git checkout POLARDB_11_DEV
+git checkout -b dev
+
git status
+git add <files-to-change>
+git commit -m "modification for dev"
+
首先点击您自己仓库页面上的 Fetch upstream
确保您的稳定开发分支与 PolarDB 官方仓库的稳定开发分支一致。然后将稳定开发分支上的最新修改拉取到本地:
git checkout POLARDB_11_DEV
+git pull
+
接下来将您的开发分支变基到目前的稳定开发分支,并解决冲突:
git checkout dev
+git rebase POLARDB_11_DEV
+# 解决冲突
+git push -f origin dev
+
点击 New pull request 或 Compare & pull request 按钮,选择对 ApsaraDB/PolarDB-for-PostgreSQL:POLARDB_11_DEV
分支和 <your-github>/PolarDB-for-PostgreSQL:dev
分支进行比较,并撰写 PR 描述。
GitHub 会对您的 PR 进行自动化的回归测试,您的 PR 需要 100% 通过这些测试。
您可以与维护者就代码中的问题进行讨论,并解决他们提出的评审意见。
如果您的代码通过了测试和评审,PolarDB 的维护者将会把您的 PR 合并到稳定分支上。
如果在运行 PolarDB for PostgreSQL 的过程中出现问题,请提供数据库的日志与机器的配置信息以方便定位问题。
通过 polar_stat_env
插件可以轻松获取数据库所在主机的硬件配置:
=> CREATE EXTENSION polar_stat_env;
+=> SELECT polar_stat_env();
+ polar_stat_env
+--------------------------------------------------------------------
+ { +
+ "CPU": { +
+ "Architecture": "x86_64", +
+ "Model Name": "Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz",+
+ "CPU Cores": "8", +
+ "CPU Thread Per Cores": "2", +
+ "CPU Core Per Socket": "4", +
+ "NUMA Nodes": "1", +
+ "L1d cache": "192 KiB (4 instances)", +
+ "L1i cache": "128 KiB (4 instances)", +
+ "L2 cache": "5 MiB (4 instances)", +
+ "L3 cache": "48 MiB (1 instance)" +
+ }, +
+ "Memory": { +
+ "Memory Total (GB)": "14", +
+ "HugePage Size (MB)": "2", +
+ "HugePage Total Size (GB)": "0" +
+ }, +
+ "OS Params": { +
+ "OS": "5.10.134-16.1.al8.x86_64", +
+ "Swappiness(1-100)": "0", +
+ "Vfs Cache Pressure(0-1000)": "100", +
+ "Min Free KBytes(KB)": "67584" +
+ } +
+ }
+(1 row)
+
棠羽
2023/08/01
15 min
本文将指导您在单机文件系统(如 ext4)上编译部署 PolarDB-PG,适用于所有计算节点都可以访问相同本地磁盘存储的场景。
我们在 DockerHub 上提供了 PolarDB-PG 的 本地实例镜像,里面已包含启动 PolarDB-PG 本地存储实例的入口脚本。镜像目前支持 linux/amd64
和 linux/arm64
两种 CPU 架构。
docker pull polardb/polardb_pg_local_instance
+
新建一个空白目录 ${your_data_dir}
作为 PolarDB-PG 实例的数据目录。启动容器时,将该目录作为 VOLUME 挂载到容器内,对数据目录进行初始化。在初始化的过程中,可以传入环境变量覆盖默认值:
POLARDB_PORT
:PolarDB-PG 运行所需要使用的端口号,默认值为 5432
;镜像将会使用三个连续的端口号(默认 5432-5434
)POLARDB_USER
:初始化数据库时创建默认的 superuser(默认 postgres
)POLARDB_PASSWORD
:默认 superuser 的密码使用如下命令初始化数据库:
docker run -it --rm \
+ --env POLARDB_PORT=5432 \
+ --env POLARDB_USER=u1 \
+ --env POLARDB_PASSWORD=your_password \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance \
+ echo 'done'
+
数据库初始化完毕后,使用 -d
参数以后台模式创建容器,启动 PolarDB-PG 服务。通常 PolarDB-PG 的端口需要暴露给外界使用,使用 -p
参数将容器内的端口范围暴露到容器外。比如,初始化数据库时使用的是 5432-5434
端口,如下命令将会把这三个端口映射到容器外的 54320-54322
端口:
docker run -d \
+ -p 54320-54322:5432-5434 \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance
+
或者也可以直接让容器与宿主机共享网络:
docker run -d \
+ --network=host \
+ -v ${your_data_dir}:/var/polardb \
+ polardb/polardb_pg_local_instance
+
程义
2022/11/02
15 min
本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在 Curve 块存储上格式化并挂载 PFS 的计算节点。
我们在 DockerHub 上提供了一个 PolarDB 开发镜像,里面已经包含编译运行 PolarDB for PostgreSQL 所需要的所有依赖。您可以直接使用这个开发镜像进行实例搭建。镜像目前支持 AMD64 和 ARM64 两种 CPU 架构。
在前置文档中,我们已经从 DockerHub 上拉取了 PolarDB 开发镜像,并且进入到了容器中。进入容器后,从 GitHub 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
在读写节点上,使用 --with-pfsd
选项编译 PolarDB 内核。请参考 编译测试选项说明 查看更多编译选项的说明。
./polardb_build.sh --with-pfsd
+
注意
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \
+ stop
+
在节点本地初始化数据目录 $HOME/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /pool@@volume_my_/shared_data
目录上初始化共享数据目录
# 使用 pfs 创建共享数据目录
+sudo pfs -C curve mkdir /pool@@volume_my_/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /pool@@volume_my_/shared_data/ curve
+
编辑读写节点的配置。打开 $HOME/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
打开 $HOME/primary/pg_hba.conf
,增加以下配置项:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的 replication slot,用于只读节点的物理流复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "select pg_create_physical_replication_slot('replica1');"
+# 下面为输出内容
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点上,使用 --with-pfsd
选项编译 PolarDB 内核。
./polardb_build.sh --with-pfsd
+
注意
上述脚本在编译完成后,会自动部署一个基于 本地文件系统 的实例,运行于 5432
端口上。
手动键入以下命令停止这个实例,以便 在 PFS 和共享存储上重新部署实例:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl \
+ -D $HOME/tmp_master_dir_polardb_pg_1100_bld/ \
+ stop
+
在节点本地初始化数据目录 $HOME/replica1/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/replica1
+
编辑只读节点的配置。打开 $HOME/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='pool@@volume_my_'
+polar_datadir='/pool@@volume_my_/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='curve'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建 $HOME/replica1/recovery.conf
,增加以下配置项:
注意
请在下面替换读写节点(容器)所在的 IP 地址。
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5433 \
+ -d postgres \
+ -c 'select version();'
+# 下面为输出内容
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "create table t(t1 int primary key, t2 int);insert into t values (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5433 \
+ -d postgres \
+ -c "select * from t;"
+# 下面为输出内容
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见。
棠羽
2022/05/09
15 min
本文将指导您在分布式文件系统 PolarDB File System(PFS)上编译部署 PolarDB,适用于已经在共享存储上格式化并挂载 PFS 文件系统的计算节点。
初始化读写节点的本地数据目录 ~/primary/
:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
在共享存储的 /nvme1n1/shared_data/
路径上创建共享数据目录,然后使用 polar-initdb.sh
脚本初始化共享数据目录:
# 使用 pfs 创建共享数据目录
+sudo pfs -C disk mkdir /nvme1n1/shared_data
+# 初始化 db 的本地和共享数据目录
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /nvme1n1/shared_data/
+
编辑读写节点的配置。打开 ~/primary/postgresql.conf
,增加配置项:
port=5432
+polar_hostid=1
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,增加以下配置项,允许只读节点进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
最后,启动读写节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/primary
+
检查读写节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
在读写节点上,为对应的只读节点创建相应的复制槽,用于只读节点的物理复制:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置。打开 ~/replica1/postgresql.conf
,增加配置项:
port=5433
+polar_hostid=2
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
创建只读节点的复制配置文件 ~/replica1/recovery.conf
,增加读写节点的连接信息,以及复制槽名称:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
最后,启动只读节点:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D $HOME/replica1
+
检查只读节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5433 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
部署完成后,需要进行实例检查和测试,确保读写节点可正常写入数据、只读节点可以正常读取。
登录 读写节点,创建测试表并插入样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
登录 只读节点,查询刚刚插入的样例数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5433 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
在读写节点上插入的数据对只读节点可见,这意味着基于共享存储的 PolarDB 计算节点集群搭建成功。
阿里云官网直接提供了可供购买的 云原生关系型数据库 PolarDB PostgreSQL 引擎。
PolarDB Stack 是轻量级 PolarDB PaaS 软件。基于共享存储提供一写多读的 PolarDB 数据库服务,特别定制和深度优化了数据库生命周期管理。通过 PolarDB Stack 可以一键部署 PolarDB-for-PostgreSQL 内核和 PolarDB-FileSystem。
PolarDB Stack 架构如下图所示。部署方法请参考 PolarDB Stack 的部署文档。
棠羽
2022/05/09
10 min
部署 PolarDB for PostgreSQL 需要在以下三个层面上做准备:
以下表格给出了三个层次排列组合出的不同实践方式,其中的步骤包含:
我们强烈推荐使用发布在 DockerHub 上的 PolarDB 开发镜像 来完成实践!开发镜像中已经包含了文件系统层和数据库层所需要安装的所有依赖,无需手动安装。
实践方式 | 块存储 | 文件系统
---|---|---|
实践 1(极简本地部署) | 本地 SSD | 本地文件系统(如 ext4) |
实践 2(生产环境最佳实践) 视频 | 阿里云 ECS + ESSD 云盘 | PFS |
实践 3(生产环境最佳实践) 视频 | CurveBS 共享存储 | PFS for Curve |
实践 4 | Ceph 共享存储 | PFS |
实践 5 | NBD 共享存储 | PFS |
棠羽
2022/08/31
20 min
PolarDB File System,简称 PFS 或 PolarFS,是由阿里云自主研发的高性能类 POSIX 的用户态分布式文件系统,服务于阿里云数据库 PolarDB 产品。使用 PFS 对共享存储进行格式化并挂载后,能够保证一个计算节点对共享存储的写入能够立刻对另一个计算节点可见。
在 PolarDB 计算节点上准备好 PFS 相关工具。推荐使用 DockerHub 上的 PolarDB 开发镜像,其中已经包含了编译完毕的 PFS,无需再次编译安装。Curve 开源社区 针对 PFS 对接 CurveBS 存储做了专门的优化。在用于部署 PolarDB 的计算节点上,使用下面的命令拉起带有 PFS for CurveBS 的 PolarDB 开发镜像:
docker pull polardb/polardb_pg_devel:curvebs
+docker run -it \
+ --network=host \
+ --cap-add=SYS_PTRACE --privileged=true \
+ --name polardb_pg \
+ polardb/polardb_pg_devel:curvebs bash
+
进入容器后需要修改 curve 相关的配置文件:
sudo vim /etc/curve/client.conf
+#
+################### mds一侧配置信息 ##################
+#
+
+# mds的地址信息,对于mds集群,地址以逗号隔开
+mds.listen.addr=127.0.0.1:6666
+... ...
+
注意,这里的 mds.listen.addr
请填写部署 CurveBS 集群后,集群状态中输出的 cluster mds addr。
容器内已经安装了 curve
工具,该工具可用于创建卷,用户需要使用该工具创建实际存储 PolarFS 数据的 curve 卷:
curve create --filename /volume --user my --length 10 --stripeUnit 16384 --stripeCount 64
+
用户可通过 curve create -h 命令查看创建卷的详细说明。上面的例子中,我们创建了一个拥有以下属性的卷:
卷名为 /volume
所属用户为 my
卷大小为 10 GB
条带单元(stripeUnit)为 16384
条带数量(stripeCount)为 64
特别需要注意的是,在数据库场景下,我们强烈建议使用条带卷,只有这样才能充分发挥 Curve 的性能优势,而 16384 * 64 的条带设置是目前最优的条带设置。
在使用 curve 卷之前需要使用 pfs 来格式化对应的 curve 卷:
sudo pfs -C curve mkfs pool@@volume_my_
+
与我们在本地挂载文件系统前要先在磁盘上格式化文件系统一样,我们也要把我们的 curve 卷格式化为 PolarFS 文件系统。
注意,由于 PolarFS 解析的特殊性,我们将以 pool@${volume}_${user}_
的形式指定我们的 curve 卷,此外还需要将卷名中的 / 替换成 @。
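下面给出一个按上述规则拼接 PFS 卷名的示意脚本(shell 变量名仅为演示用的假设,卷名与用户名需与前文 curve create 命令中的取值保持一致):
# 卷名 /volume、用户 my,按 pool@${volume}_${user}_ 规则拼接,并把卷名中的 / 替换为 @
VOLUME_NAME="/volume"
VOLUME_USER="my"
PFS_VOLUME="pool@${VOLUME_NAME//\//@}_${VOLUME_USER}_"
echo "$PFS_VOLUME"   # 输出 pool@@volume_my_,即上文 pfs mkfs 所使用的卷名
格式化完成后,启动 pfsd 守护进程挂载该卷: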
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p pool@@volume_my_
+
如果 pfsd 启动成功,那么至此 curve 版 PolarFS 已全部部署完成,已经成功挂载 PFS 文件系统。 下面需要编译部署 PolarDB。
棠羽
2022/05/09
15 min
PolarDB File System,简称 PFS 或 PolarFS,是由阿里云自主研发的高性能类 POSIX 的用户态分布式文件系统,服务于阿里云数据库 PolarDB 产品。使用 PFS 对共享存储进行格式化并挂载后,能够保证一个计算节点对共享存储的写入能够立刻对另一个计算节点可见。
推荐使用 DockerHub 上的 PolarDB for PostgreSQL 可执行文件镜像,目前支持 linux/amd64
和 linux/arm64
两种架构,其中已经包含了编译完毕的 PFS 工具,无需手动编译安装。通过以下命令进入容器即可:
docker pull polardb/polardb_pg_binary
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
PFS 的手动编译安装方式请参考 PFS 的 README,此处不再赘述。
PFS 仅支持访问 以特定字符开头的块设备(详情可见 PolarDB File System 源代码的 src/pfs_core/pfs_api.h
文件):
#define PFS_PATH_ISVALID(path) \
+ (path != NULL && \
+ ((path[0] == '/' && isdigit((path)[1])) || path[0] == '.' \
+ || strncmp(path, "/pangu-", 7) == 0 \
+ || strncmp(path, "/sd", 3) == 0 \
+ || strncmp(path, "/sf", 3) == 0 \
+ || strncmp(path, "/vd", 3) == 0 \
+ || strncmp(path, "/nvme", 5) == 0 \
+ || strncmp(path, "/loop", 5) == 0 \
+ || strncmp(path, "/mapper_", 8) ==0))
+
因此,为了保证能够顺畅完成后续流程,我们建议在所有访问块设备的节点上使用相同的软链接访问共享块设备。例如,在 NBD 服务端主机上,使用新的块设备名 /dev/nvme1n1
软链接到共享存储块设备的原有名称 /dev/vdb
上:
sudo ln -s /dev/vdb /dev/nvme1n1
+
在 NBD 客户端主机上,使用同样的块设备名 /dev/nvme1n1
软链到共享存储块设备的原有名称 /dev/nbd0
上:
sudo ln -s /dev/nbd0 /dev/nvme1n1
+
这样便可以在服务端和客户端两台主机上使用相同的块设备名 /dev/nvme1n1
访问同一个块设备。
使用 任意一台主机,在共享存储块设备上格式化 PFS 分布式文件系统:
sudo pfs -C disk mkfs nvme1n1
+
在能够访问共享存储的 所有主机节点 上分别启动 PFS 守护进程,挂载 PFS 文件系统:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
棠羽
2022/05/09
5 min
PolarDB for PostgreSQL 采用了基于 Shared-Storage 的存储计算分离架构。数据库由传统的 Share-Nothing 架构,转变成了 Shared-Storage 架构——由原来的 N 份计算 + N 份存储,转变成了 N 份计算 + 1 份存储;而 PostgreSQL 使用了传统的单体数据库架构,存储和计算耦合在一起。
为保证所有计算节点能够以相同的可见性视角访问分布式块存储设备,PolarDB 需要使用分布式文件系统 PolarDB File System(PFS) 来访问块设备,其实现原理可参考发表在 2018 年 VLDB 上的论文[1];如果所有计算节点都可以本地访问同一个块存储设备,那么也可以不使用 PFS,直接使用本地的单机文件系统(如 ext4)。这是与 PostgreSQL 的不同点之一。
棠羽
2022/05/09
5 min
警告
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改为健壮的密码!
仅需单台计算机,同时满足以下要求,就可以快速开启您的 PolarDB 之旅:
从 DockerHub 上拉取 PolarDB for PostgreSQL 的 本地存储实例镜像,创建并运行容器,然后直接试用 PolarDB-PG:
# 拉取 PolarDB-PG 镜像
+docker pull polardb/polardb_pg_local_instance
+# 创建并运行容器
+docker run -it --rm polardb/polardb_pg_local_instance psql
+# 测试可用性
+postgres=# SELECT version();
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
棠羽
2022/05/09
20 min
阿里云 ESSD(Enhanced SSD)云盘 结合 25 GE 网络和 RDMA 技术,能够提供单盘高达 100 万的随机读写能力和单路低时延性能。阿里云 ESSD 云盘支持 NVMe 协议,且可以同时挂载到多台支持 NVMe 协议的 ECS(Elastic Compute Service)实例上,从而实现多个 ECS 实例并发读写访问,具备高可靠、高并发、高性能等特点。更新信息请参考阿里云 ECS 文档:
本文将指导您完成以下过程:
首先需要准备两台或以上的 阿里云 ECS。目前,ECS 对支持 ESSD 多重挂载的规格有较多限制,详情请参考 使用限制。仅 部分可用区、部分规格(ecs.g7se、ecs.c7se、ecs.r7se)的 ECS 实例可以支持 ESSD 的多重挂载。如图,请务必选择支持多重挂载的 ECS 规格:
对 ECS 存储配置的选择,系统盘可以选用任意的存储类型,数据盘和共享盘暂不选择。后续再单独创建一个 ESSD 云盘作为共享盘:
如图所示,在 同一可用区 中建好两台 ECS:
在阿里云 ECS 的管理控制台中,选择 存储与快照 下的 云盘,点击 创建云盘。在与已经建好的 ECS 所在的相同可用区内,选择建立一个 ESSD 云盘,并勾选 多实例挂载。如果您的 ECS 不符合多实例挂载的限制条件,则该选框不会出现。
ESSD 云盘创建完毕后,控制台显示云盘支持多重挂载,状态为 待挂载:
接下来,把这个云盘分别挂载到两台 ECS 上:
挂载完毕后,查看该云盘,将会显示该云盘已经挂载的两台 ECS 实例:
通过 ssh 分别连接到两台 ECS 上,运行 lsblk
命令可以看到:
nvme0n1 是 40GB 的 ECS 系统盘,为 ECS 私有
nvme1n1 是 100GB 的 ESSD 云盘,两台 ECS 同时可见
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
接下来,将在两台 ECS 上分别部署 PolarDB 的主节点和只读节点。作为前提,需要在 ECS 共享的 ESSD 块设备上 格式化并挂载 PFS。
Ceph 是一个统一的分布式存储系统,由于它可以提供较好的性能、可靠性和可扩展性,被广泛的应用在存储领域。Ceph 搭建需要 2 台及以上的物理机/虚拟机实现存储共享与数据备份,本教程以 3 台虚拟机环境为例,介绍基于 ceph 共享存储的实例构建方法。大体如下:
注意
操作系统版本要求 CentOS 7.5 及以上。以下步骤在 CentOS 7.5 上通过测试。
使用的虚拟机环境如下:
IP hostname
+192.168.1.173 ceph001
+192.168.1.174 ceph002
+192.168.1.175 ceph003
+
提示
本教程使用阿里云镜像站提供的 docker 包。
yum install -y yum-utils device-mapper-persistent-data lvm2
+
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
+yum makecache
+yum install -y docker-ce
+
+systemctl start docker
+systemctl enable docker
+
docker run hello-world
+
ssh-keygen
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph001
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph002
+ssh-copy-id -i /root/.ssh/id_rsa.pub root@ceph003
+
ssh root@ceph003
+
docker pull ceph/daemon
+
docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.173 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon01 \
+ ceph/daemon mon
+
注意
根据实际网络环境修改 IP、子网掩码位数。
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 1 daemons, quorum ceph001 (age 26m)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
注意
如果遇到 mon is allowing insecure global_id reclaim
的报错,使用以下命令解决。
docker exec mon01 ceph config set mon auth_allow_insecure_global_id_reclaim false
+
docker exec mon01 ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
+docker exec mon01 ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
+
ssh root@ceph002 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph002:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph002:/var/lib/ceph
+ssh root@ceph003 mkdir -p /var/lib/ceph
+scp -r /etc/ceph root@ceph003:/etc
+scp -r /var/lib/ceph/bootstrap* root@ceph003:/var/lib/ceph
+
docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.174 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon02 \
+ ceph/daemon mon
+
+docker run -d \
+ --net=host \
+ --privileged=true \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -e MON_IP=192.168.1.175 \
+ -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
+ --security-opt seccomp=unconfined \
+ --name=mon03 \
+ ceph/daemon mon
+
$ docker exec mon01 ceph -s
+cluster:
+ id: 937ccded-3483-4245-9f61-e6ef0dbd85ca
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 35s)
+ mgr: no daemons active
+ osd: 0 osds: 0 up, 0 in
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
注意
从 mon 节点信息查看是否有添加在另外两个节点创建的 mon 添加进来。
提示
本环境的虚拟机只有一个 /dev/vdb
磁盘可用,因此为每个虚拟机只创建了一个 osd 节点。
docker run --rm --privileged=true --net=host --ipc=host \
+ --security-opt seccomp=unconfined \
+ -v /run/lock/lvm:/run/lock/lvm:z \
+ -v /var/run/udev/:/var/run/udev/:z \
+ -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
+ -v /run/lvm/:/run/lvm/ \
+ -v /var/lib/ceph/:/var/lib/ceph/:z \
+ -v /var/log/ceph/:/var/log/ceph/:z \
+ --entrypoint=ceph-volume \
+ docker.io/ceph/daemon \
+ --cluster ceph lvm prepare --bluestore --data /dev/vdb
+
注意
以上命令在三个节点都是一样的,只需要根据磁盘名称进行修改调整即可。
docker run -d --privileged=true --net=host --pid=host --ipc=host \
+ --security-opt seccomp=unconfined \
+ -v /dev:/dev \
+ -v /etc/localtime:/etc/localtime:ro \
+ -v /var/lib/ceph:/var/lib/ceph:z \
+ -v /etc/ceph:/etc/ceph:z \
+ -v /var/run/ceph:/var/run/ceph:z \
+ -v /var/run/udev/:/var/run/udev/ \
+ -v /var/log/ceph:/var/log/ceph:z \
+ -v /run/lvm/:/run/lvm/ \
+ -e CLUSTER=ceph \
+ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
+ -e CONTAINER_IMAGE=docker.io/ceph/daemon \
+ -e OSD_ID=0 \
+ --name=ceph-osd-0 \
+ docker.io/ceph/daemon
+
注意
各个节点需要修改 OSD_ID 与 name 属性,OSD_ID 是从编号 0 递增的,其余节点为 OSD_ID=1、OSD_ID=2。
$ docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_WARN
+ no active mgr
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 44m)
+ mgr: no daemons active
+ osd: 3 osds: 3 up (since 7m), 3 in (since 13m)
+
+data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 B
+ usage: 0 B used, 0 B / 0 B avail
+ pgs:
+
以下命令均在 ceph001 进行:
docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /etc/ceph:/etc/ceph \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ --name=ceph-mgr-0 \
+ ceph/daemon mgr
+
+docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -v /etc/ceph:/etc/ceph \
+ -e CEPHFS_CREATE=1 \
+ --name=ceph-mds-0 \
+ ceph/daemon mds
+
+docker run -d --net=host \
+ --privileged=true \
+ --security-opt seccomp=unconfined \
+ -v /var/lib/ceph/:/var/lib/ceph/ \
+ -v /etc/ceph:/etc/ceph \
+ --name=ceph-rgw-0 \
+ ceph/daemon rgw
+
查看集群状态:
docker exec mon01 ceph -s
+cluster:
+ id: e430d054-dda8-43f1-9cda-c0881b782e17
+ health: HEALTH_OK
+
+services:
+ mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 92m)
+ mgr: ceph001(active, since 25m)
+ mds: 1/1 daemons up
+ osd: 3 osds: 3 up (since 54m), 3 in (since 60m)
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+data:
+ volumes: 1/1 healthy
+ pools: 7 pools, 145 pgs
+ objects: 243 objects, 7.2 KiB
+ usage: 50 MiB used, 2.9 TiB / 2.9 TiB avail
+ pgs: 145 active+clean
+
提示
以下命令均在容器 mon01 中进行。
docker exec -it mon01 bash
+ceph osd pool create rbd_polar
+
rbd create --size 512000 rbd_polar/image02
+rbd info rbd_polar/image02
+
+rbd image 'image02':
+size 500 GiB in 128000 objects
+order 22 (4 MiB objects)
+snapshot_count: 0
+id: 13b97b252c5d
+block_name_prefix: rbd_data.13b97b252c5d
+format: 2
+features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
+op_features:
+flags:
+create_timestamp: Thu Oct 28 06:18:07 2021
+access_timestamp: Thu Oct 28 06:18:07 2021
+modify_timestamp: Thu Oct 28 06:18:07 2021
+
modprobe rbd # 加载内核模块,在主机上执行
+rbd map rbd_polar/image02
+
+rbd: sysfs write failed
+RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten".
+In some cases useful info is found in syslog - try "dmesg | tail".
+rbd: map failed: (6) No such device or address
+
注意
某些特性内核不支持,需要关闭才可以映射成功。如下进行:关闭 rbd 不支持特性,重新映射镜像,并查看映射列表。
rbd feature disable rbd_polar/image02 object-map fast-diff deep-flatten
+rbd map rbd_polar/image02
+rbd device list
+
+id pool namespace image snap device
+0 rbd_polar image01 - /dev/rbd0
+1 rbd_polar image02 - /dev/rbd1
+
提示
此处我已经先映射了一个 image01,所以有两条信息。
回到容器外,进行操作。查看系统中的块设备:
lsblk
+
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+vda 253:0 0 500G 0 disk
+└─vda1 253:1 0 500G 0 part /
+vdb 253:16 0 1000G 0 disk
+└─ceph--7eefe77f--c618--4477--a1ed--b4f44520dfc2-osd--block--bced3ff1--42b9--43e1--8f63--e853bce41435
+ 252:0 0 1000G 0 lvm
+rbd0 251:0 0 100G 0 disk
+rbd1 251:16 0 500G 0 disk
+
注意
块设备镜像需要在各个节点都进行映射才可以在本地环境中通过 lsblk
命令查看到,否则不显示。ceph002 与 ceph003 上映射命令与上述一致。
参阅 格式化并挂载 PFS。
棠羽
2022/08/31
30 min
Curve 是一款高性能、易运维、云原生的开源分布式存储系统。可应用于主流的云原生基础设施平台:
Curve 亦可作为云存储中间件使用 S3 兼容的对象存储作为数据存储引擎,为公有云用户提供高性价比的共享文件存储。
本示例将引导您以 CurveBS 作为块存储,部署 PolarDB for PostgreSQL。更多进阶配置和使用方法请参考 Curve 项目的 wiki。
如图所示,本示例共使用六台服务器。其中,一台中控服务器和三台存储服务器共同组成 CurveBS 集群,对外暴露为一个共享存储服务。剩余两台服务器分别用于部署 PolarDB for PostgreSQL 数据库的读写节点和只读节点,它们共享 CurveBS 对外暴露的块存储设备。
本示例使用阿里云 ECS 模拟全部六台服务器。六台 ECS 全部运行 Anolis OS 8.6(兼容 CentOS 8.6)系统,使用 root 用户,并处于同一局域网段内。需要完成的准备工作包含:
bash -c "$(curl -fsSL https://curveadm.nos-eastchina1.126.net/script/install.sh)"
+source /root/.bash_profile
+
在中控机上编辑主机列表文件:
vim hosts.yaml
+
文件中包含另外五台服务器的 IP 地址和在 Curve 集群内的名称,其中:
global:
+ user: root
+ ssh_port: 22
+ private_key_file: /root/.ssh/id_rsa
+
+hosts:
+ # Curve worker nodes
+ - host: server-host1
+ hostname: 172.16.0.223
+ - host: server-host2
+ hostname: 172.16.0.224
+ - host: server-host3
+ hostname: 172.16.0.225
+ # PolarDB nodes
+ - host: polardb-primary
+ hostname: 172.16.0.226
+ - host: polardb-replica
+ hostname: 172.16.0.227
+
导入主机列表:
curveadm hosts commit hosts.yaml
+
准备磁盘列表,并提前生成一批固定大小并预写过的 chunk 文件。磁盘列表中需要包含:
/dev/vdb
)vim format.yaml
+
host:
+ - server-host1
+ - server-host2
+ - server-host3
+disk:
+ - /dev/vdb:/data/chunkserver0:90 # device:mount_path:format_percent
+
开始格式化。此时,中控机将在每台存储节点主机上对每个块设备启动一个格式化进程容器。
$ curveadm format -f format.yaml
+Start Format Chunkfile Pool: ⠸
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [0/1] ⠸
+
当显示 OK
时,说明这个格式化进程容器已启动,但 并不代表格式化已经完成。格式化是个较久的过程,将会持续一段时间:
Start Format Chunkfile Pool: [OK]
+ + host=server-host1 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host2 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+ + host=server-host3 device=/dev/vdb mountPoint=/data/chunkserver0 usage=90% [1/1] [OK]
+
可以通过以下命令查看格式化进度,目前仍在格式化状态中:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 19/90 Formatting
+server-host2 /dev/vdb /data/chunkserver0 22/90 Formatting
+server-host3 /dev/vdb /data/chunkserver0 22/90 Formatting
+
格式化完成后的输出:
$ curveadm format --status
+Get Format Status: [OK]
+
+Host Device MountPoint Formatted Status
+---- ------ ---------- --------- ------
+server-host1 /dev/vdb /data/chunkserver0 95/90 Done
+server-host2 /dev/vdb /data/chunkserver0 95/90 Done
+server-host3 /dev/vdb /data/chunkserver0 95/90 Done
+
首先,准备集群配置文件:
vim topology.yaml
+
粘贴如下配置文件:
kind: curvebs
+global:
+ container_image: opencurvedocker/curvebs:v1.2
+ log_dir: ${home}/logs/${service_role}${service_replicas_sequence}
+ data_dir: ${home}/data/${service_role}${service_replicas_sequence}
+ s3.nos_address: 127.0.0.1
+ s3.snapshot_bucket_name: curve
+ s3.ak: minioadmin
+ s3.sk: minioadmin
+ variable:
+ home: /tmp
+ machine1: server-host1
+ machine2: server-host2
+ machine3: server-host3
+
+etcd_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 2380
+ listen.client_port: 2379
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+mds_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 6666
+ listen.dummy_port: 6667
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
+chunkserver_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 82${format_replicas_sequence} # 8200,8201,8202
+ data_dir: /data/chunkserver${service_replicas_sequence} # /data/chunkserver0, /data/chunkserver1
+ copysets: 100
+ deploy:
+ - host: ${machine1}
+ replicas: 1
+ - host: ${machine2}
+ replicas: 1
+ - host: ${machine3}
+ replicas: 1
+
+snapshotclone_services:
+ config:
+ listen.ip: ${service_host}
+ listen.port: 5555
+ listen.dummy_port: 8081
+ listen.proxy_port: 8080
+ deploy:
+ - host: ${machine1}
+ - host: ${machine2}
+ - host: ${machine3}
+
根据上述的集群拓扑文件创建集群 my-cluster
:
curveadm cluster add my-cluster -f topology.yaml
+
切换 my-cluster
集群为当前管理集群:
curveadm cluster checkout my-cluster
+
部署集群。如果部署成功,将会输出类似 Cluster 'my-cluster' successfully deployed ^_^.
字样。
$ curveadm deploy --skip snapshotclone
+
+...
+Create Logical Pool: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Start Service: [OK]
+ + host=server-host1 role=snapshotclone containerId=9d3555ba72fa [1/1] [OK]
+ + host=server-host2 role=snapshotclone containerId=e6ae2b23b57e [1/1] [OK]
+ + host=server-host3 role=snapshotclone containerId=f6d3446c7684 [1/1] [OK]
+
+Balance Leader: [OK]
+ + host=server-host1 role=mds containerId=c6fdd71ae678 [1/1] [OK]
+
+Cluster 'my-cluster' successfully deployed ^_^.
+
查看集群状态:
$ curveadm status
+Get Service Status: [OK]
+
+cluster name : my-cluster
+cluster kind : curvebs
+cluster mds addr : 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+cluster mds leader: 172.16.0.225:6666 / d0a94a7afa14
+
+Id Role Host Replicas Container Id Status
+-- ---- ---- -------- ------------ ------
+5567a1c56ab9 etcd server-host1 1/1 f894c5485a26 Up 17 seconds
+68f9f0e6f108 etcd server-host2 1/1 69b09cdbf503 Up 17 seconds
+a678263898cc etcd server-host3 1/1 2ed141800731 Up 17 seconds
+4dcbdd08e2cd mds server-host1 1/1 76d62ff0eb25 Up 17 seconds
+8ef1755b0a10 mds server-host2 1/1 d8d838258a6f Up 17 seconds
+f3599044c6b5 mds server-host3 1/1 d63ae8502856 Up 17 seconds
+9f1d43bc5b03 chunkserver server-host1 1/1 39751a4f49d5 Up 16 seconds
+3fb8fd7b37c1 chunkserver server-host2 1/1 0f55a19ed44b Up 16 seconds
+c4da555952e3 chunkserver server-host3 1/1 9411274d2c97 Up 16 seconds
+
在 Curve 中控机上编辑客户端配置文件:
vim client.yaml
+
注意,这里的 mds.listen.addr
请填写上一步集群状态中输出的 cluster mds addr
:
kind: curvebs
+container_image: opencurvedocker/curvebs:v1.2
+mds.listen.addr: 172.16.0.223:6666,172.16.0.224:6666,172.16.0.225:6666
+log_dir: /root/curvebs/logs/client
+
接下来,将在两台运行 PolarDB 计算节点的 ECS 上分别部署 PolarDB 的主节点和只读节点。作为前提,需要让这两个节点能够共享 CurveBS 块设备,并在块设备上 格式化并挂载 PFS。
Network Block Device (NBD) 是一种网络协议,可以在多个主机间共享块存储设备。NBD 被设计为 Client-Server 的架构,因此至少需要两台物理机来部署。
以两台物理机环境为例,本小节介绍基于 NBD 共享存储的实例构建方法大体如下:
注意
以上步骤在 CentOS 7.5 上通过测试。
提示
操作系统内核需要支持 NBD 内核模块,如果操作系统当前不支持该内核模块,则需要自己通过对应内核版本进行编译和加载 NBD 内核模块。
从 CentOS 官网 下载对应内核版本的驱动源码包并解压:
rpm -ihv kernel-3.10.0-862.el7.src.rpm
+cd ~/rpmbuild/SOURCES
+tar Jxvf linux-3.10.0-862.el7.tar.xz -C /usr/src/kernels/
+cd /usr/src/kernels/linux-3.10.0-862.el7/
+
NBD 驱动源码路径位于:drivers/block/nbd.c
。接下来编译操作系统内核依赖和组件:
cp ../$(uname -r)/Module.symvers ./
+make menuconfig # Device Driver -> Block devices -> Set 'M' On 'Network block device support'
+make prepare && make modules_prepare && make scripts
+make CONFIG_BLK_DEV_NBD=m M=drivers/block
+
检查是否正常生成驱动:
modinfo drivers/block/nbd.ko
+
拷贝、生成依赖并安装驱动:
cp drivers/block/nbd.ko /lib/modules/$(uname -r)/kernel/drivers/block
+depmod -a
+modprobe nbd # 或者 modprobe -f nbd 可以忽略模块版本检查
+
检查是否安装成功:
# 检查已安装内核模块
+lsmod | grep nbd
+# 如果NBD驱动已经安装,则会生成/dev/nbd*设备(例如:/dev/nbd0、/dev/nbd1等)
+ls /dev/nbd*
+
安装 NBD 工具包(提供 nbd-server 与 nbd-client 命令):
yum install nbd
+
拉起 NBD 服务端,按照同步方式(sync/flush=true
)配置,在指定端口(例如 1921
)上监听对指定块设备(例如 /dev/vdb
)的访问。
nbd-server -C /root/nbd.conf
+
配置文件 /root/nbd.conf
的内容举例如下:
[generic]
+ #user = nbd
+ #group = nbd
+ listenaddr = 0.0.0.0
+ port = 1921
+[export1]
+ exportname = /dev/vdb
+ readonly = false
+ multifile = false
+ copyonwrite = false
+ flush = true
+ fua = true
+ sync = true
+
NBD 驱动安装成功后会看到 /dev/nbd*
设备, 根据服务端的配置把远程块设备映射为本地的某个 NBD 设备即可:
nbd-client x.x.x.x 1921 -N export1 /dev/nbd0
+# x.x.x.x是NBD服务端主机的IP地址
+
参阅 格式化并挂载 PFS。
DockerHub 上已有构建完毕的开发镜像 polardb/polardb_pg_devel
可供直接使用(支持 linux/amd64
和 linux/arm64
两种架构)。
另外,我们也提供了构建上述开发镜像的 Dockerfile,从 Ubuntu 官方镜像 ubuntu:20.04
开始构建出一个安装完所有开发和运行时依赖的镜像,您可以根据自己的需要在 Dockerfile 中添加更多依赖。以下是手动构建镜像的 Dockerfile 及方法:
FROM ubuntu:20.04
+LABEL maintainer="mrdrivingduck@gmail.com"
+CMD bash
+
+# Timezone problem
+ENV TZ=Asia/Shanghai
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Upgrade softwares
+RUN apt update -y && \
+ apt upgrade -y && \
+ apt clean -y
+
+# GCC (force to 9) and LLVM (force to 11)
+RUN apt install -y \
+ gcc-9 \
+ g++-9 \
+ llvm-11-dev \
+ clang-11 \
+ make \
+ gdb \
+ pkg-config \
+ locales && \
+ update-alternatives --install \
+ /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave \
+ /usr/bin/g++ g++ /usr/bin/g++-9 && \
+ update-alternatives --install \
+ /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-11 60 --slave \
+ /usr/bin/clang++ clang++ /usr/bin/clang++-11 --slave \
+ /usr/bin/clang clang /usr/bin/clang-11 && \
+ apt clean -y
+
+# Generate locale
+RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && \
+ sed -i '/zh_CN.UTF-8/s/^# //g' /etc/locale.gen && \
+ locale-gen
+
+# Dependencies
+RUN apt install -y \
+ libicu-dev \
+ bison \
+ flex \
+ python3-dev \
+ libreadline-dev \
+ libgss-dev \
+ libssl-dev \
+ libpam0g-dev \
+ libxml2-dev \
+ libxslt1-dev \
+ libldap2-dev \
+ uuid-dev \
+ liblz4-dev \
+ libkrb5-dev \
+ gettext \
+ libxerces-c-dev \
+ tcl-dev \
+ libperl-dev \
+ libipc-run-perl \
+ libaio-dev \
+ libfuse-dev && \
+ apt clean -y
+
+# Tools
+RUN apt install -y \
+ iproute2 \
+ wget \
+ ccache \
+ sudo \
+ vim \
+ git \
+ cmake && \
+ apt clean -y
+
+# set to empty if GitHub is not barriered
+# ENV GITHUB_PROXY=https://ghproxy.com/
+ENV GITHUB_PROXY=
+
+ENV ZLOG_VERSION=1.2.14
+ENV PFSD_VERSION=pfsd4pg-release-1.2.42-20220419
+
+# install dependencies from GitHub mirror
+RUN cd /usr/local && \
+ # zlog for PFSD
+ wget --no-verbose --no-check-certificate "${GITHUB_PROXY}https://github.com/HardySimpson/zlog/archive/refs/tags/${ZLOG_VERSION}.tar.gz" && \
+ # PFSD
+ wget --no-verbose --no-check-certificate "${GITHUB_PROXY}https://github.com/ApsaraDB/PolarDB-FileSystem/archive/refs/tags/${PFSD_VERSION}.tar.gz" && \
+ # unzip and install zlog
+ gzip -d $ZLOG_VERSION.tar.gz && \
+ tar xpf $ZLOG_VERSION.tar && \
+ cd zlog-$ZLOG_VERSION && \
+ make && make install && \
+ echo '/usr/local/lib' >> /etc/ld.so.conf && ldconfig && \
+ cd .. && \
+ rm -rf $ZLOG_VERSION* && \
+ rm -rf zlog-$ZLOG_VERSION && \
+ # unzip and install PFSD
+ gzip -d $PFSD_VERSION.tar.gz && \
+ tar xpf $PFSD_VERSION.tar && \
+ cd PolarDB-FileSystem-$PFSD_VERSION && \
+ sed -i 's/-march=native //' CMakeLists.txt && \
+ ./autobuild.sh && ./install.sh && \
+ cd .. && \
+ rm -rf $PFSD_VERSION* && \
+ rm -rf PolarDB-FileSystem-$PFSD_VERSION*
+
+# create default user
+ENV USER_NAME=postgres
+RUN echo "create default user" && \
+ groupadd -r $USER_NAME && \
+ useradd -ms /bin/bash -g $USER_NAME $USER_NAME -p '' && \
+ usermod -aG sudo $USER_NAME
+
+# modify conf
+RUN echo "modify conf" && \
+ mkdir -p /var/log/pfs && chown $USER_NAME /var/log/pfs && \
+ mkdir -p /var/run/pfs && chown $USER_NAME /var/run/pfs && \
+ mkdir -p /var/run/pfsd && chown $USER_NAME /var/run/pfsd && \
+ mkdir -p /dev/shm/pfsd && chown $USER_NAME /dev/shm/pfsd && \
+ touch /var/run/pfsd/.pfsd && \
+ echo "ulimit -c unlimited" >> /home/postgres/.bashrc && \
+ echo "export PGHOST=127.0.0.1" >> /home/postgres/.bashrc && \
+ echo "alias pg='psql -h /home/postgres/tmp_master_dir_polardb_pg_1100_bld/'" >> /home/postgres/.bashrc
+
+ENV PATH="/home/postgres/tmp_basedir_polardb_pg_1100_bld/bin:$PATH"
+WORKDIR /home/$USER_NAME
+USER $USER_NAME
+
将上述内容复制到一个文件内(假设文件名为 Dockerfile-PolarDB
)后,使用如下命令构建镜像:
提示
💡 请在下面的高亮行中按需替换 <image_name>
内的 Docker 镜像名称
docker build --network=host \
+ -t <image_name> \
+ -f Dockerfile-PolarDB .
+
该方式假设您从一台具有 root 权限的干净的 CentOS 7 操作系统上从零开始,可以是:
centos:centos7
上启动的 Docker 容器PolarDB for PostgreSQL 需要以非 root 用户运行。以下步骤能够帮助您创建一个名为 postgres
的用户组和一个名为 postgres
的用户。
提示
如果您已经有了一个非 root 用户,但名称不是 postgres:postgres
,可以忽略该步骤;但请注意在后续示例步骤中将命令中用户相关的信息替换为您自己的用户组名与用户名。
下面的命令能够创建用户组 postgres
和用户 postgres
,并为该用户赋予 sudo 和工作目录的权限。需要以 root 用户执行这些命令。
# install sudo
+yum install -y sudo
+# create user and group
+groupadd -r postgres
+useradd -m -g postgres postgres -p ''
+usermod -aG wheel postgres
+# make postgres as sudoer
+chmod u+w /etc/sudoers
+echo 'postgres ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
+chmod u-w /etc/sudoers
+# grant access to home directory
+chown -R postgres:postgres /home/postgres/
+echo 'source /etc/bashrc' >> /home/postgres/.bashrc
+# for su postgres
+sed -i 's/4096/unlimited/g' /etc/security/limits.d/20-nproc.conf
+
接下来,切换到 postgres
用户,就可以进行后续的步骤了:
su postgres
+source /etc/bashrc
+cd ~
+
在 PolarDB for PostgreSQL 源码库根目录的 deps/
子目录下,放置了在各个 Linux 发行版上编译安装 PolarDB 和 PFS 需要运行的所有依赖。因此,首先需要克隆 PolarDB 的源码库。
PolarDB for PostgreSQL 的代码托管于 GitHub 上,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
sudo yum install -y git
+git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
sudo yum install -y git
+git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
源码下载完毕后,使用 sudo
执行 deps/
目录下的相应脚本 deps-***.sh
自动完成所有的依赖安装。比如:
cd PolarDB-for-PostgreSQL
+sudo ./deps/deps-centos7.sh
+
警告
为简化使用,容器内的 postgres
用户没有设置密码,仅供体验。如果在生产环境等高安全性需求场合,请务必修改为健壮的密码!
从 GitHub 上下载 PolarDB for PostgreSQL 的源代码,稳定分支为 POLARDB_11_STABLE
。如果因网络原因不能稳定访问 GitHub,则可以访问 Gitee 国内镜像。
git clone -b POLARDB_11_STABLE https://github.com/ApsaraDB/PolarDB-for-PostgreSQL.git
+
git clone -b POLARDB_11_STABLE https://gitee.com/mirrors/PolarDB-for-PostgreSQL
+
代码克隆完毕后,进入源码目录:
cd PolarDB-for-PostgreSQL/
+
从 DockerHub 上拉取 PolarDB for PostgreSQL 的 开发镜像。
# 拉取 PolarDB 开发镜像
+docker pull polardb/polardb_pg_devel
+
此时我们已经在开发机器的源码目录中。从开发镜像上创建一个容器,将当前目录作为一个 volume 挂载到容器中,这样可以:
docker run -it \
+ -v $PWD:/home/postgres/polardb_pg \
+ --shm-size=512m --cap-add=SYS_PTRACE --privileged=true \
+ --name polardb_pg_devel \
+ polardb/polardb_pg_devel \
+ bash
+
进入容器后,为容器内用户获取源码目录的权限,然后编译部署 PolarDB-PG 实例。
# 获取权限并编译部署
+cd polardb_pg
+sudo chmod -R a+wr ./
+sudo chown -R postgres:postgres ./
+./polardb_build.sh
+
+# 验证
+psql -h 127.0.0.1 -c 'select version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
以下表格列出了编译、初始化或测试 PolarDB-PG 集群所可能使用到的选项及说明。更多选项及其说明详见源码目录下的 polardb_build.sh
脚本。
选项 | 描述 | 默认值 |
---|---|---|
--withrep | 是否初始化只读节点 | NO |
--repnum | 只读节点数量 | 1 |
--withstandby | 是否初始化热备份节点 | NO |
--initpx | 是否初始化为 HTAP 集群(1 个读写节点,2 个只读节点) | NO |
--with-pfsd | 是否编译 PolarDB File System(PFS)相关功能 | NO |
--with-tde | 是否初始化 透明数据加密(TDE) 功能 | NO |
--with-dma | 是否初始化为 DMA(Data Max Availability)高可用三节点集群 | NO |
-r / -t / --regress | 在编译安装完毕后运行内核回归测试 | NO |
-r-px | 运行 HTAP 实例的回归测试 | NO |
-e /--extension | 运行扩展插件测试 | NO |
-r-external | 测试 external/ 下的扩展插件 | NO |
-r-contrib | 测试 contrib/ 下的扩展插件 | NO |
-r-pl | 测试 src/pl/ 下的扩展插件 | NO |
如无定制的需求,则可以按照下面给出的选项编译部署不同形态的 PolarDB-PG 集群并进行测试。
5432
端口)./polardb_build.sh
+
5432
端口)5433
端口)./polardb_build.sh --withrep --repnum=1
+
5432
端口)5433
端口)5434
端口)./polardb_build.sh --withrep --repnum=1 --withstandby
+
5432
端口)5433
/ 5434
端口)./polardb_build.sh --initpx
+
普通实例回归测试:
./polardb_build.sh --withrep -r -e -r-external -r-contrib -r-pl --with-tde
+
HTAP 实例回归测试:
./polardb_build.sh -r-px -e -r-external -r-contrib -r-pl --with-tde
+
DMA 实例回归测试:
./polardb_build.sh -r -e -r-external -r-contrib -r-pl --with-tde --with-dma
+
功能 / 版本 | PostgreSQL | PolarDB for PostgreSQL 11 |
---|---|---|
高性能 | ... | ... |
预读 / 预扩展 | / | V11 / v1.1.1- |
表大小缓存 | / | V11 / v1.1.10- |
Shared Server | / | V11 / v1.1.30- |
高可用 | ... | ... |
只读节点 Online Promote | / | V11 / v1.1.1- |
WAL 日志并行回放 | / | V11 / v1.1.17- |
DataMax 日志节点 | / | V11 / v1.1.6- |
Resource Manager | / | V11 / v1.1.1- |
闪回表和闪回日志 | / | V11 / v1.1.22- |
安全 | ... | ... |
透明数据加密 | / | V11 / v1.1.1- |
弹性跨机并行查询(ePQ) | ... | ... |
ePQ 执行计划查看与分析 | / | V11 / v1.1.22- |
ePQ 计算节点范围选择与并行度控制 | / | V11 / v1.1.20- |
ePQ 支持分区表查询 | / | V11 / v1.1.17- |
ePQ 支持创建 B-Tree 索引并行加速 | / | V11 / v1.1.15- |
集群拓扑视图 | / | V11 / v1.1.20- |
自适应扫描 | / | V11 / v1.1.17- |
并行 INSERT | / | V11 / v1.1.17- |
ePQ 支持创建/刷新物化视图并行加速和批量写入 | / | V11 / v1.1.30- |
第三方插件 | ... | ... |
pgvector | / | V11 / v1.1.35- |
smlar | / | V11 / v1.1.35- |
学弈
2022/09/20
25 min
PolarDB 是基于共享存储的一写多读架构,与传统数据库的主备架构有所不同:
传统数据库支持 Standby 节点升级为主库节点的 Promote 操作,在不重启的情况下,提升备库节点为主库节点,继续提供读写服务,保证集群高可用的同时,也有效降低了实例的恢复时间 RTO。
PolarDB 同样需要只读备库节点提升为主库节点的 Promote 能力,鉴于只读节点与传统数据库 Standby 节点的不同,PolarDB 提出了一种一写多读架构下只读节点的 OnlinePromote 机制。
使用 pg_ctl
工具对 Replica 节点执行 Promote 操作:
pg_ctl promote -D [datadir]
+
PolarDB 使用和传统数据库一致的备库节点 Promote 方法,触发条件如下:
pg_ctl
工具的 Promote 命令,pg_ctl
工具会向 Postmaster 进程发送信号,接收到信号的 Postmaster 进程再通知其他进程执行相应的操作,完成整个 Promote 操作。recovery.conf
中定义 trigger file 的路径,其他组件通过生成 trigger file 来触发。相比于传统数据库 Standby 节点的 Promote 操作,PolarDB Replica 节点的 OnlinePromote 操作需要多考虑以下几个问题:
SIGTERM
信号给当前所有 Backend 进程。 SIGUSR2
信号给 Startup 进程,通知其结束回放并处理 OnlinePromote 操作;SIGUSR2
信号给 Polar Worker 辅助进程,通知其停止对于部分 LogIndex 数据的解析,因为这部分 LogIndex 数据只对于正常运行期间的 Replica 节点有用处。SIGUSR2
信号给 LogIndex BGW (Background Ground Worker) 后台回放进程,通知其处理 OnlinePromote 操作。POLAR_BG_WAITING_RESET
状态;POLAR_BG_ONLINE_PROMOTE
,至此实例可以对外提供读写服务。LogIndex BGW 进程有自己的状态机,在其生命周期内,一直按照该状态机运行,具体每个状态机的操作内容如下:
POLAR_BG_WAITING_RESET
:LogIndex BGW 进程状态重置,通知其他进程状态机发生变化;POLAR_BG_ONLINE_PROMOTE
:读取 LogIndex 数据,组织并分发回放任务,利用并行回放进程组回放 WAL 日志,该状态的进程需要回放完所有的 LogIndex 数据才会进行状态切换,最后推进后台回放进程的回放位点;POLAR_BG_REDO_NOT_START
:表示回放任务结束;POLAR_BG_RO_BUF_REPLAYING
:Replica 节点正常运行时,进程处于该状态,读取 LogIndex 数据,按照 WAL 日志的顺序回放一定量的 WAL 日志,每回放一轮,便会推进后台回放进程的回放位点;POLAR_BG_PARALLEL_REPLAYING
:LogIndex BGW 进程每次读取一定量的 LogIndex 数据,组织并分发回放任务,利用并行回放进程组回放 WAL 日志,每回放一轮,便会推进后台回放进程的回放位点。LogIndex BGW 进程接收到 Postmaster 的 SIGUSR2
信号后,执行 OnlinePromote 操作的流程如下:
POLAR_BG_WAITING_RESET
;POLAR_BG_ONLINE_PROMOTE
状态; MarkBufferDirty
标记该页面为脏页,等待刷脏;POLAR_BG_REDO_NOT_START
。每个脏页都带有一个 Oldest LSN,该 LSN 在 FlushList 里是有序的,目的是通过这个 LSN 来确定一致性位点。
Replica 节点在 OnlinePromote 过程后,由于同时存在着回放和新的页面写入,如果像主库节点一样,直接将当前的 WAL 日志插入位点设为 Buffer 的 Oldest LSN,可能会导致:比它小的 Buffer 还未落盘,但新的一致性位点已经被设置。
所以 Replica 节点在 OnlinePromote 过程中需要面对两个问题:
OnlinePromote 之前由后台回放产生的脏页,其 Oldest LSN 应如何设置;
OnlinePromote 之后由新的页面写入产生的脏页,其 Oldest LSN 应如何设置。
PolarDB 在 Replica 节点 OnlinePromote 的过程中,将上述两类情况产生的脏页的 Oldest LSN 都设置为 LogIndex BGW 进程推进的回放位点。只有当标记为相同 Oldest LSN 的 Buffer 都落盘了,才将一致性位点向前推进。
学弈
2022/09/20
30 min
在 PolarDB for PostgreSQL 的一写多读架构下,只读节点(Replica 节点)运行过程中,LogIndex 后台回放进程(LogIndex Background Worker)和会话进程(Backend)分别使用 LogIndex 数据在不同的 Buffer 上回放 WAL 日志,本质上达到了一种并行回放 WAL 日志的效果。
鉴于 WAL 日志回放在 PolarDB 集群的高可用中起到至关重要的作用,将并行回放 WAL 日志的思想用到常规的日志回放路径上,是一种很好的优化思路。
并行回放 WAL 日志至少可以在以下三个场景下发挥优势:
一条 WAL 日志可能修改多个数据块 Block,因此可以使用如下定义来表示 WAL 日志的回放过程:
第 $i$ 条 WAL 日志的 LSN 为 $LSN_i$,其修改了 $m$ 个数据块,则定义第 $i$ 条 WAL 日志修改的数据块列表 $Block_i = [Block_{i,0}, Block_{i,1}, \dots, Block_{i,m}]$;
定义 $Task_{i,j}$ 为最小的回放子任务,表示在数据块 $Block_{i,j}$ 上回放第 $i$ 条 WAL 日志;
因此,一条修改了 $k$ 个 Block 的 WAL 日志就可以表示成 $k$ 个回放子任务的集合:$TASK_{i,*} = [Task_{i,0}, Task_{i,1}, \dots, Task_{i,k}]$。
在日志回放子任务集合 $TASK_{*,*}$ 中,每个子任务的执行,有时并不依赖于前序子任务的执行结果。假设回放子任务集合如下:$TASK_{*,*} = [TASK_{0,*}, TASK_{1,*}, TASK_{2,*}]$,其中:
$TASK_{0,*} = [Task_{0,0}, Task_{0,1}, Task_{0,2}]$,$TASK_{1,*} = [Task_{1,0}, Task_{1,1}]$,$TASK_{2,*} = [Task_{2,0}]$,
并且 $Block_{0,0} = Block_{1,0}$,$Block_{0,1} = Block_{1,1}$,$Block_{0,2} = Block_{2,0}$,
则可以并行回放的子任务集合有三个:$[Task_{0,0}, Task_{1,0}]$、$[Task_{0,1}, Task_{1,1}]$、$[Task_{0,2}, Task_{2,0}]$。
综上所述,在整个 WAL 日志所表示的回放子任务集合中,存在很多子任务序列可以并行执行,而且不会影响最终回放结果的一致性。PolarDB 借助这种思想,提出了一种并行任务执行框架,并成功运用到了 WAL 日志回放的过程中。
将一段共享内存根据并发进程数目进行等分,每一段作为一个环形队列,分配给一个进程。通过配置参数设定每个环形队列的深度:
环形队列的内容由 Task Node 组成,每个 Task Node 包含五个状态:Idle、Running、Hold、Finished、Removed。
Idle
:表示该 Task Node 未分配任务;Running
:表示该 Task Node 已经分配任务,正在等待进程执行,或已经在执行;Hold
:表示该 Task Node 有前向依赖的任务,需要等待依赖的任务执行完再执行;Finished
:表示进程组中的进程已经执行完该任务;Removed
:当 Dispatcher 进程发现一个任务的状态已经为 Finished
,那么该任务所有的前置依赖任务也都应该为 Finished
状态,Removed
状态表示 Dispatcher 进程已经将该任务以及该任务所有前置任务都从管理结构体中删除;可以通过该机制保证 Dispatcher 进程按顺序处理有依赖关系的任务执行结果。上述状态机的状态转移过程中,黑色线标识的状态转移过程在 Dispatcher 进程中完成,橙色线标识的状态转移过程在并行回放进程组中完成。
Dispatcher 进程有三个关键数据结构:Task HashMap、Task Running Queue 以及 Task Idle Nodes。
Hold
,需等待前置任务先执行。Idle
状态的 Task Node;Dispatcher 调度策略如下:
Idle
的 Task Node 来调度任务执行;目的是让任务尽量平均分配到不同的进程进行执行。该并行执行针对的是相同类型的任务,它们具有相同的 Task Node 数据结构;在进程组初始化时配置 SchedContext
,指定负责执行具体任务的函数指针:
TaskStartup
表示进程执行任务前需要进行的初始化动作TaskHandler
根据传入的 Task Node,负责执行具体的任务TaskCleanup
表示执行进程退出前需要执行的回收动作进程组中的进程从环形队列中获取一个 Task Node,如果 Task Node 当前的状态是 Hold
,则将该 Task Node 插入到 Hold List 的尾部;如果 Task Node 的状态为 Running,则调用 TaskHandler 执行;如果 TaskHandler 执行失败,则设置该 Task Node 重新执行需要等待调用的次数,默认为 3,将该 Task Node 插入到 Hold List 的头部。
进程优先从 Hold List 头部搜索,获取可执行的 Task;如果 Task 状态为 Running,且等待调用次数为 0,则执行该 Task;如果 Task 状态为 Running,但等待调用次数大于 0,则将等待调用次数减去 1。
根据 LogIndex 章节介绍,LogIndex 数据中记录了 WAL 日志和其修改的数据块之间的对应关系,而且 LogIndex 数据支持使用 LSN 进行检索,鉴于此,PolarDB 数据库在 Standby 节点持续回放 WAL 日志过程中,引入了上述并行任务执行框架,并结合 LogIndex 数据将 WAL 日志的回放任务并行化,提高了 Standby 节点数据同步的速度。
在 Standby 节点的 postgresql.conf
中添加以下参数开启功能:
polar_enable_parallel_replay_standby_mode = ON
+
玊于
2022/11/17
30 min
在高可用的场景中,为保证 RPO = 0,主库和备库之间需配置为同步复制模式。但当主备库距离较远时,同步复制的方式会存在较大延迟,从而对主库性能带来较大影响。异步复制对主库的性能影响较小,但会带来一定程度的数据丢失。PolarDB for PostgreSQL 采用基于共享存储的一写多读架构,可同时提供 AZ 内 / 跨 AZ / 跨域级别的高可用。为了减少日志同步对主库的影响,PolarDB for PostgreSQL 引入了 DataMax 节点。在进行跨 AZ 甚至跨域同步时,DataMax 节点可以作为主库日志的中转节点,能够以较低成本实现零数据丢失的同时,降低日志同步对主库性能的影响。
PolarDB for PostgreSQL 基于物理流复制实现主备库之间的数据同步,主库与备库的流复制模式分为 同步模式 及 异步模式 两种:
synchronous_standby_names
参数开启备库同步后,可以通过 synchronous_commit
参数设置主库及备库之间的同步级别,包括: remote_write
:主库的事务提交需等待对应 WAL 日志写入主库磁盘文件及备库的系统缓存中后,才能进行事务提交的后续操作;on
:主库的事务提交需等待对应 WAL 日志已写入主库及备库的磁盘文件中后,才能进行事务提交的后续操作;remote_apply
:主库的事务提交需等待对应 WAL 日志写入主库及备库的磁盘文件中,并且备库已经回放完相应 WAL 日志使备库上的查询对该事务可见后,才能进行事务提交的后续操作。同步模式保证了主库的事务提交操作需等待备库接收到对应的 WAL 日志数据之后才可执行,实现了主库与备库之间的零数据丢失,可保证 RPO = 0。然而,该模式下主库的事务提交操作能否继续进行依赖于备库的 WAL 日志接收结果,当主备之间距离较远导致传输延迟较大时,同步模式会对主库的性能带来影响。极端情况下,若备库异常崩溃,则主库会一直阻塞等待备库,导致无法正常提供服务。
针对传统主备模式下同步复制对主库性能影响较大的问题,PolarDB for PostgreSQL 新增了 DataMax 节点用于实现远程同步,该模式下的高可用架构如下所示:
其中:
DataMax 是一种新的节点角色,用户需要通过配置文件来标识当前节点是否为 DataMax 节点。DataMax 模式下,Startup 进程在回放完 DataMax 节点自身日志之后,从 PM_HOT_STANDBY
进入到 PM_DATAMAX
模式。PM_DATAMAX
模式下,Startup 进程仅进行相关信号及状态的处理,并通知 Postmaster 进程启动流复制,Startup 进程不再进行日志回放的操作。因此 DataMax 节点不会保存 Primary 节点的数据文件,从而降低了存储成本。
如上图所示,DataMax 节点通过 WalReceiver 进程向 Primary 节点发起流复制请求,接收并保存 Primary 节点发送的 WAL 日志信息;同时通过 WalSender 进程将所接收的主库 WAL 日志发送给异地的备库节点;备库节点接收到 WAL 日志后,通知其 Startup 进程进行日志回放,从而实现备库节点与 Primary 节点的数据同步。
DataMax 节点在数据目录中新增了 polar_datamax/
目录,用于保存所接收的主库 WAL 日志。DataMax 节点自身的 WAL 日志仍保存在原始目录下,两者的 WAL 日志不会相互覆盖,DataMax 节点也可以有自身的独有数据。
由于 DataMax 节点不会回放 Primary 节点的日志数据,在 DataMax 节点因为异常原因需要重启恢复时,就有了日志起始位点的问题。DataMax 节点通过 polar_datamax_meta
元数据文件存储相关的位点信息,以此来确认运行的起始位点:
InvalidXLogRecPtr
位点,表明其需要从 Primary 节点当前最旧的位点开始复制; Primary 节点接收到 InvalidXLogRecPtr
的流复制请求之后,会开始从当前最旧且完整的 WAL segment 文件开始发送 WAL 日志,并将相应复制槽的 restart_lsn
设置为该位点;如下图所示,增加 DataMax 节点后,若 Primary 节点与 Replica 节点同时异常,或存储无法提供服务时,则可将位于不同可用区的 Standby 节点提升为 Primary 节点,保证服务的可用性。在将 Standby 节点提升为 Primary 节点并向外提供服务之前,会确认 Standby 节点是否已从 DataMax 节点拉取完所有日志,待 Standby 节点获取完所有日志后才会将其提升为 Primary 节点。由于 DataMax 节点与 Primary 节点为同步复制,因此该场景下可保证 RPO = 0。
此外,DataMax 节点在进行日志清理时,除了保留下游 Standby 节点尚未接收的 WAL 日志文件以外,还会保留上游 Primary 节点尚未删除的 WAL 日志文件,避免 Primary 节点异常后,备份系统无法获取到 Primary 节点相较于 DataMax 节点多出的日志信息,保证集群数据的完整性。
若 DataMax 节点异常,则优先尝试通过重启进行恢复;若重启失败则会对其进行重建。因 DataMax 节点与 Primary 节点的存储彼此隔离,因此两者的数据不会互相影响。此外,DataMax 节点同样可以使用计算存储分离架构,确保 DataMax 节点的异常不会导致其存储的 WAL 日志数据丢失。
类似地,DataMax 节点实现了如下几种日志同步模式,用户可以根据具体业务需求进行相应配置:
综上,通过 DataMax 日志中转节点降低日志同步延迟、分流 Primary 节点的日志传输压力,在性能稳定的情况下,可以保障跨 AZ / 跨域 RPO = 0 的高可用。
初始化 DataMax 节点时需要指定 Primary 节点的 system identifier:
# 获取 Primary 节点的 system identifier
+~/tmp_basedir_polardb_pg_1100_bld/bin/pg_controldata -D ~/primary | grep 'system identifier'
+
+# 创建 DataMax 节点
+# -i 参数指定的 [primary_system_identifier] 为上一步得到的 Primary 节点 system identifier
+~/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D datamax -i [primary_system_identifier]
+
+# 如有需要,参考 Primary 节点,对 DataMax 节点的共享存储进行初始化
+sudo pfs -C disk mkdir /nvme0n1/dm_shared_data
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh ~/datamax/ /nvme0n1/dm_shared_data/
+
以可写节点的形式拉起 DataMax 节点,创建用户和插件以方便后续运维。DataMax 节点默认为只读模式,无法创建用户和插件。
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D ~/datamax
+
创建管理账号及插件:
postgres=# create user test superuser;
+CREATE ROLE
+postgres=# create extension polar_monitor;
+CREATE EXTENSION
+
关闭 DataMax 节点:
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl stop -D ~/datamax;
+
在 DataMax 节点的 recovery.conf
中添加 polar_datamax_mode
参数,表示当前节点为 DataMax 节点:
polar_datamax_mode = standalone
+recovery_target_timeline='latest'
+primary_slot_name='datamax'
+primary_conninfo='host=[主节点的IP] port=[主节点的端口] user=[$USER] dbname=postgres application_name=datamax'
+
启动 DataMax 节点:
~/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl start -D ~/datamax
+
DataMax 节点自身可通过 polar_get_datamax_info()
接口来判断其运行是否正常:
postgres=# SELECT * FROM polar_get_datamax_info();
+ min_received_timeline | min_received_lsn | last_received_timeline | last_received_lsn | last_valid_received_lsn | clean_reserved_lsn | force_clean
+-----------------------+------------------+------------------------+-------------------+-------------------------+--------------------+-------------
+ 1 | 0/40000000 | 1 | 0/4079DFE0 | 0/4079DFE0 | 0/0 | f
+(1 row)
+
在 Primary 节点可以通过 pg_replication_slots
查看对应复制槽的状态:
postgres=# SELECT * FROM pg_replication_slots;
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ datamax | | physical | | | f | t | 124551 | 570 | | 0/4079DFE0 |
+(1 row)
+
通过配置 Primary 节点的 postgresql.conf
,可以设置下游 DataMax 节点的日志同步模式:
最大保护模式。其中 datamax
为 Primary 节点创建的复制槽名称:
polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+synchronous_standby_names = 'datamax'
+
最大性能模式:
polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+
最大高可用模式:
polar_sync_replication_timeout
用于设置同步超时时间阈值,单位为毫秒;等待同步复制锁超过此阈值时,同步复制将降级为异步复制;polar_sync_rep_timeout_break_lsn_lag
用于设置同步恢复延迟阈值,单位为字节;当异步复制延迟阈值小于此阈值时,异步复制将重新恢复为同步复制。polar_enable_transaction_sync_mode = on
+synchronous_commit = on
+synchronous_standby_names = 'datamax'
+polar_sync_replication_timeout = 10s
+polar_sync_rep_timeout_break_lsn_lag = 8kB
+
恒亦
2022/11/23
20 min
目前文件系统并不能保证数据库页面级别的原子读写,在一次页面的 I/O 过程中,如果发生设备断电等情况,就会造成页面数据的错乱和丢失。在实现闪回表的过程中,我们发现通过定期保存旧版本数据页 + WAL 日志回放的方式可以得到任意时间点的数据页,这样就可以解决半写问题。这种方式和 PostgreSQL 原生的 Full Page Write 相比,由于不在事务提交的主路径上,因此性能有了约 30% ~ 100% 的提升。实例规格越大,负载压力越大,效果越明显。
闪回日志 (Flashback Log) 用于保存压缩后的旧版本数据页。其解决半写问题的方案如下:
当遭遇半写问题(数据页 checksum 不正确)时,通过日志索引快速找到该页对应的 Flashback Log 记录,通过 Flashback Log 记录可以得到旧版本的正确数据页,用于替换被损坏的页。在文件系统不能保证 8kB 级别原子读写的任何设备上,都可以使用这个功能。需要特别注意的是,启用这个功能会造成一定的性能下降。
闪回表 (Flashback Table) 功能通过定期保留数据页面快照到闪回日志中,保留事务信息到快速恢复区中,支持用户将某个时刻的表数据恢复到一个新的表中。
FLASHBACK TABLE
+ [ schema. ]table
+ TO TIMESTAMP expr;
+
准备测试数据。创建表 test
,并插入数据:
CREATE TABLE test(id int);
+INSERT INTO test select * FROM generate_series(1, 10000);
+
查看已插入的数据:
polardb=# SELECT count(1) FROM test;
+ count
+-------
+ 10000
+(1 row)
+
+polardb=# SELECT sum(id) FROM test;
+ sum
+----------
+ 50005000
+(1 row)
+
等待 10 秒并删除表数据:
SELECT pg_sleep(10);
+DELETE FROM test;
+
表中已无数据:
polardb=# SELECT * FROM test;
+ id
+----
+(0 rows)
+
闪回表到 10 秒之前的数据:
polardb=# FLASHBACK TABLE test TO TIMESTAMP now() - interval'10s';
+NOTICE: Flashback the relation test to new relation polar_flashback_65566, please check the data
+FLASHBACK TABLE
+
检查闪回表数据:
polardb=# SELECT count(1) FROM polar_flashback_65566;
+ count
+-------
+ 10000
+(1 row)
+
+polardb=# SELECT sum(id) FROM polar_flashback_65566;
+ sum
+----------
+ 50005000
+(1 row)
+
闪回表功能依赖闪回日志和快速恢复区功能,需要设置 polar_enable_flashback_log
和 polar_enable_fast_recovery_area
参数并重启。其他的参数也需要按照需求来修改,建议一次性修改完成并在业务低峰期重启。打开闪回表功能将会增大内存、磁盘的占用量,并带来一定的性能损失,请谨慎评估后再使用。
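下面给出一个开启闪回表相关功能的最小配置示例(参数名取自下文参数表;数据目录沿用前文部署示例中的 $HOME/primary,实际路径与取值请按需调整):
# 在数据目录的 postgresql.conf 中开启闪回日志与快速恢复区
cat >> $HOME/primary/postgresql.conf <<EOF
polar_enable_flashback_log = on
polar_enable_fast_recovery_area = on
EOF
# 重启实例使配置生效
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl restart -D $HOME/primary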
打开闪回日志功能需要增加的共享内存大小为以下三项之和:
polar_flashback_log_buffers
* 8kBpolar_flashback_logindex_mem_size
MBpolar_flashback_logindex_queue_buffers
MB打开快速恢复区需要增加大约 32kB 的共享内存大小,请评估当前实例状态后再调整参数。
为了保证能够闪回到一定时间之前,需要保留该段时间的闪回日志和 WAL 日志,以及两者的 LogIndex 文件,这会增加磁盘空间的占用。理论上 polar_fast_recovery_area_rotation
设置得越大,磁盘占用越多。若 polar_fast_recovery_area_rotation
设置为 300
,则将会保存 5 个小时的历史数据。
打开闪回日志之后,会定期去做 闪回点(Flashback Point)。闪回点是检查点的一种,当触发检查点后会检查 polar_flashback_point_segments
和 polar_flashback_point_timeout
参数来判断当前检查点是否为闪回点。所以建议:
polar_flashback_point_segments
为 max_wal_size
的倍数polar_flashback_point_timeout
为 checkpoint_timeout
的倍数假设 5 个小时共产生 20GB 的 WAL 日志,闪回日志与 WAL 日志的比例大约是 1:20,那么大约会产生 1GB 的闪回日志。闪回日志和 WAL 日志的比例大小和以下两个因素有关:
polar_flashback_point_segments
、polar_flashback_point_timeout
参数设定越大,闪回日志越少闪回日志特性增加了两个后台进程来消费闪回日志,这势必会增大 CPU 的开销。可以调整 polar_flashback_log_bgwrite_delay
和 polar_flashback_log_insert_list_delay
参数使得两个后台进程工作间隔周期更长,从而减少 CPU 消耗,但是这可能会造成一定性能的下降,建议使用默认值即可。
另外,由于闪回日志功能需要在该页面刷脏之前,先刷对应的闪回日志,来保证不丢失闪回日志,所以可能会造成一定的性能下降。目前测试在大多数场景下性能下降不超过 5%。
在表闪回的过程中,目标表涉及到的页面在共享内存池中换入换出,可能会造成其他数据库访问操作的性能抖动。
目前闪回表功能会恢复目标表的数据到一个新表中,表名为 polar_flashback_目标表 OID
。在执行 FLASHBACK TABLE
语法后会有如下 NOTICE
提示:
polardb=# flashback table test to timestamp now() - interval '1h';
+NOTICE: Flashback the relation test to new relation polar_flashback_54986, please check the data
+FLASHBACK TABLE
+
其中的 polar_flashback_54986
就是闪回恢复出的临时表,只恢复表数据到目标时刻。目前只支持 普通表 的闪回,不支持以下数据库对象:
另外,如果在目标时间到当前时刻对表执行过某些 DDL,则无法闪回:
DROP TABLE
ALTER TABLE SET WITH OIDS
ALTER TABLE SET WITHOUT OIDS
TRUNCATE TABLE
UNLOGGED
或者 LOGGED
IDENTITY
的列其中 DROP TABLE
的闪回可以使用 PolarDB for PostgreSQL/Oracle 的闪回删除功能来恢复。
当出现人为误操作数据的情况时,建议先使用审计日志快速定位到误操作发生的时间,然后将目标表闪回到该时间之前。在表闪回过程中,会持有目标表的排他锁,因此仅可以对目标表进行查询操作。另外,在表闪回的过程中,目标表涉及到的页面在共享内存池中换入换出,可能会造成其他数据库访问操作的性能抖动。因此,建议在业务低峰期执行闪回操作。
闪回的速度和表的大小相关。当表比较大时,为节约时间,可以加大 polar_workers_per_flashback_table
参数,增加并行闪回的 worker 个数。
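例如,可以在执行闪回前临时调大并行度(下面的取值与 psql 连接参数仅作演示,表名沿用上文示例中的 test;若该参数不支持会话级设置,可改为在配置文件中调整):
# 临时调大并行闪回 worker 数量后执行闪回
psql -h 127.0.0.1 -d postgres -c "SET polar_workers_per_flashback_table = 16; FLASHBACK TABLE test TO TIMESTAMP now() - interval '10s';"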
在表闪回结束后,可以根据 NOTICE
的提示,查询对应闪回表的数据,和原表的数据进行比对。闪回表上不会有任何索引,用户可以根据查询需要自行创建索引。在数据比对完成之后,可以将缺失的数据重新回流到原表。
参数名 | 参数含义 | 取值范围 | 默认值 | 生效方法 |
---|---|---|---|---|
polar_enable_flashback_log | 是否打开闪回日志 | on / off | off | 修改配置文件后重启生效 |
polar_enable_fast_recovery_area | 是否打开快速恢复区 | on / off | off | 修改配置文件后重启生效 |
polar_flashback_log_keep_segments | 闪回日志保留的文件个数,可重用。每个文件 256MB | [3, 2147483647] | 8 | SIGHUP 生效 |
polar_fast_recovery_area_rotation | 快速恢复区保留的事务信息时长,单位为分钟,即最大可闪回表到几分钟之前。 | [1, 14400] | 180 | SIGHUP 生效 |
polar_flashback_point_segments | 两个闪回点之间的最小 WAL 日志个数,每个 WAL 日志 1GB | [1, 2147483647] | 16 | SIGHUP 生效 |
polar_flashback_point_timeout | 两个闪回点之间的最小时间间隔,单位为秒 | [1, 86400] | 300 | SIGHUP 生效 |
polar_flashback_log_buffers | 闪回日志共享内存大小,单位为 8kB | [4, 262144] | 2048 (16MB) | 修改配置文件后重启生效 |
polar_flashback_logindex_mem_size | 闪回日志索引共享内存大小,单位为 MB | [3, 1073741823] | 64 | 修改配置文件后重启生效 |
polar_flashback_logindex_bloom_blocks | 闪回日志索引的布隆过滤器页面个数 | [8, 1073741823] | 512 | 修改配置文件后重启生效 |
polar_flashback_log_insert_locks | 闪回日志插入锁的个数 | [1, 2147483647] | 8 | 修改配置文件后重启生效 |
polar_workers_per_flashback_table | 闪回表并行 worker 的数量 | [0, 1024] (0 为关闭并行) | 5 | 即时生效 |
polar_flashback_log_bgwrite_delay | 闪回日志 bgwriter 进程的工作间隔周期,单位为 ms | [1, 10000] | 100 | SIGHUP 生效 |
polar_flashback_log_flush_max_size | 闪回日志 bgwriter 进程每次刷盘闪回日志的大小,单位为 kB | [0, 2097152] (0 为不限制) | 5120 | SIGHUP 生效 |
polar_flashback_log_insert_list_delay | 闪回日志 bginserter 进程的工作间隔周期,单位为 ms | [1, 10000] | 10 | SIGHUP 生效 |
学有
2022/11/25
20 min
PolarDB for PostgreSQL 的内存可以分为以下三部分:共享内存、进程间动态共享内存,以及进程私有内存。
进程间动态共享内存和进程私有内存是 动态分配 的,其使用量随着实例承载的业务运行情况而不断变化。过多使用动态内存,可能会导致内存使用量超过操作系统限制,触发内核内存限制机制,造成实例进程异常退出,实例重启,引发实例不可用的问题。
进程私有内存 MemoryContext 管理的内存可以分为两部分:
为了解决以上问题,PolarDB for PostgreSQL 增加了 Resource Manager 资源限制机制,能够在实例运行期间,周期性检测资源使用情况。对于超过资源限制阈值的进程,强制进行资源限制,降低实例不可用的风险。
Resource Manager 主要的限制资源有:
当前仅支持对内存资源进行限制。
内存限制依赖 Cgroup,如果不存在 Cgroup,则无法有效进行资源限制。Resource Manager 作为 PolarDB for PostgreSQL 一个后台辅助进程,周期性读取 Cgroup 的内存使用数据作为内存限制的依据。当发现存在进程超过内存限制阈值后,会读取内核的用户进程内存记账,按照内存大小排序,依次对内存使用量超过阈值的进程发送中断进程信号(SIGTERM)或取消操作信号(SIGINT)。
Resource Manager 守护进程会随着实例启动而建立,同时对 RW、RO 以及 Standby 节点起作用。可以通过修改参数改变 Resource Manager 的行为。
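例如,可以参考下面的配置片段调整 Resource Manager 的行为(取值仅作演示,各参数含义见下方说明;若这些参数属于 polar_resource_manager 插件,可能需要加上相应的参数前缀,请以实际版本为准):
# 在 postgresql.conf 中调整 Resource Manager 相关参数,并重新加载配置
cat >> $HOME/primary/postgresql.conf <<EOF
enable_resource_manager = on
stat_interval = 500
total_mem_limit_rate = 95
mem_release_policy = 'default'
EOF
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl reload -D $HOME/primary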
enable_resource_manager
:是否启动 Resource Manager,取值为 on
/ off
,默认值为 on
stat_interval
:资源使用量周期检测的间隔,单位为毫秒,取值范围为 10
-10000
,默认值为 500
total_mem_limit_rate
:限制实例内存使用的百分比,当实例内存使用超过该百分比后,开始强制对内存资源进行限制,默认值为 95
total_mem_limit_remain_size
:实例内存预留值,当实例空闲内存小于预留值后,开始强制对内存资源进行限制,单位为 kB,取值范围为 131072
-MAX_KILOBYTES
(整型数值最大值),默认值为 524288
mem_release_policy
:内存资源限制的策略 none
:无动作default
:缺省策略(默认值),优先中断空闲进程,然后中断活跃进程cancel_query
:中断活跃进程terminate_idle_backend
:中断空闲进程terminate_any_backend
:中断所有进程terminate_random_backend
:中断随机进程2022-11-28 14:07:56.929 UTC [18179] LOG: [polar_resource_manager] terminate process 13461 release memory 65434123 bytes
+2022-11-28 14:08:17.143 UTC [35472] FATAL: terminating connection due to out of memory
+2022-11-28 14:08:17.143 UTC [35472] BACKTRACE:
+ postgres: primary: postgres postgres [local] idle(ProcessInterrupts+0x34c) [0xae5fda]
+ postgres: primary: postgres postgres [local] idle(ProcessClientReadInterrupt+0x3a) [0xae1ad6]
+ postgres: primary: postgres postgres [local] idle(secure_read+0x209) [0x8c9070]
+ postgres: primary: postgres postgres [local] idle() [0x8d4565]
+ postgres: primary: postgres postgres [local] idle(pq_getbyte+0x30) [0x8d4613]
+ postgres: primary: postgres postgres [local] idle() [0xae1861]
+ postgres: primary: postgres postgres [local] idle() [0xae1a83]
+ postgres: primary: postgres postgres [local] idle(PostgresMain+0x8df) [0xae7949]
+ postgres: primary: postgres postgres [local] idle() [0x9f4c4c]
+ postgres: primary: postgres postgres [local] idle() [0x9f440c]
+ postgres: primary: postgres postgres [local] idle() [0x9ef963]
+ postgres: primary: postgres postgres [local] idle(PostmasterMain+0x1321) [0x9ef18a]
+ postgres: primary: postgres postgres [local] idle() [0x8dc1f6]
+ /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f888afff445]
+ postgres: primary: postgres postgres [local] idle() [0x49d209]
+
步真
2022/09/21
25 min
PolarDB for PostgreSQL 支持 ePQ 弹性跨机并行查询特性,通过利用集群中多个节点的计算能力,来实现跨节点的并行查询功能。ePQ 可以支持顺序扫描、索引扫描等多种物理算子的跨节点并行化。其中,对顺序扫描算子,ePQ 提供了两种扫描模式,分别为 自适应扫描模式 与 非自适应扫描模式。
非自适应扫描模式是 ePQ 顺序扫描算子(Sequential Scan)的默认扫描方式。每一个参与并行查询的 PX Worker 在执行过程中都会被分配一个唯一的 Worker ID。非自适应扫描模式将会依据 Worker ID 划分数据表在物理存储上的 Disk Unit ID,从而实现每个 PX Worker 可以均匀扫描数据表在共享存储上的存储单元,所有 PX Worker 的扫描结果最终汇总形成全量的数据。
在非自适应扫描模式下,扫描单元会均匀划分给每个 PX Worker。当存在个别只读节点计算资源不足的情况下,可能会导致扫描过程发生计算倾斜:用户发起的单次并行查询迟迟不能完成,查询受限于计算资源不足的节点长时间不能完成扫描任务。
ePQ 提供的自适应扫描模式可以解决这个问题。自适应扫描模式不再限定每个 PX Worker 扫描特定的 Disk Unit ID,而是采用 请求-响应(Request-Response)模式,通过 QC 进程与 PX Worker 进程之间的特定 RPC 通信机制,由 QC 进程负责告知每个 PX Worker 进程可以执行的扫描任务,从而消除计算倾斜的问题。
QC 进程在发起并行查询任务时,会为每个 PX Worker 进程分配固定的 Worker ID,每个 PX Worker 进程根据 Worker ID 对存储单元取模,只扫描其所属的特定的 Disk Unit。
QC 进程在发起并行查询任务时,会启动 自适应扫描线程,用于接收并处理来自 PX Worker 进程的请求消息。自适应扫描线程维护了当前查询扫描任务的进度,并根据每个 PX Worker 进程的工作进度,向 PX Worker 进程分派需要扫描的 Disk Unit ID。对于需要扫描的最后一个 Disk Unit,自适应扫描线程会唤醒处于空闲状态的 PX Worker,加速最后一块 Disk Unit 的扫描过程。
由于自适应扫描线程与各个 PX worker 进程之间的通信数据很少,频率不高,所以重用了已有的 QC 进程与 PX worker 进程之间的 libpq 连接进行报文通信。自适应扫描线程通过 poll 的方式在需要时同步轮询 PX Worker 进程的请求和响应。
PX Worker 进程在执行顺序扫描算子时,会首先向 QC 进程发起询问请求,将以下信息发送给 QC 端的自适应扫描线程:
自适应扫描线程在收到询问请求后,会创建扫描任务或更新扫描任务的进度。
为了减少请求带来的网络交互次数,ePQ 实现了可变的任务颗粒度。当扫描任务量剩余较多时,PX Worker 进程单次领取的扫描物理块数较多;当扫描任务量剩余较少时,PX Worker 进程单次领取的扫描物理块数相应减少。通过这种方法,可以平衡 网络开销 与 负载均衡 两者之间的关系。
自适应扫描模式将尽量保证每个节点在多次执行并行查询任务时,能够重用 Shared Buffer 缓存,避免缓存频繁更新 / 淘汰。在实现上,自适应扫描功能会根据 集群拓扑视图 配置的节点 IP 地址信息,采用缓存绑定策略,尽量让同一个物理 Page 被同一个节点复用。
PX Worker 请求报文:采用 libpq 的 'S'
协议进行通信,按照 key-value 的方式编码为字符串。
内容 | 描述 |
---|---|
task_id | 扫描任务编号 |
direction | 扫描方向 |
page_count | 需扫描的总物理块数 |
scan_start | 扫描起始物理块号 |
current_page | 当前扫描的物理块号 |
scan_round | 扫描的次数 |
自适应扫描线程回复报文
内容 | 描述 |
---|---|
success | 是否成功 |
page_start | 响应的起始物理块号 |
page_end | 响应的结束物理块号 |
创建测试表:
postgres=# CREATE TABLE t(id INT);
+CREATE TABLE
+postgres=# INSERT INTO t VALUES(generate_series(1,100));
+INSERT 0 100
+
开启 ePQ 并行查询功能,并设置单节点并发度为 3。通过 EXPLAIN
可以看到执行计划来自 PX 优化器。由于参与测试的只读节点有两个,所以从执行计划中可以看到整体并发度为 6。
postgres=# SET polar_enable_px = 1;
+SET
+postgres=# SET polar_px_dop_per_node = 3;
+SET
+postgres=# SHOW polar_px_enable_adps;
+ polar_px_enable_adps
+----------------------
+ off
+(1 row)
+
+postgres=# EXPLAIN SELECT * FROM t;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on t (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
+postgres=# SELECT COUNT(*) FROM t;
+ count
+-------
+ 100
+(1 row)
+
开启自适应扫描功能的开关后,通过 EXPLAIN ANALYZE
可以看到每个 PX Worker 进程扫描的物理块号。
postgres=# SET polar_enable_px = 1;
+SET
+postgres=# SET polar_px_dop_per_node = 3;
+SET
+postgres=# SET polar_px_enable_adps = 1;
+SET
+postgres=# SHOW polar_px_enable_adps;
+ polar_px_enable_adps
+----------------------
+ on
+(1 row)
+
+postgres=# SET polar_px_enable_adps_explain_analyze = 1;
+SET
+postgres=# SHOW polar_px_enable_adps_explain_analyze;
+ polar_px_enable_adps_explain_analyze
+--------------------------------------
+ on
+(1 row)
+
+postgres=# EXPLAIN ANALYZE SELECT * FROM t;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4) (actual time=0.968..0.982 rows=100 loops=1)
+ -> Partial Seq Scan on t (cost=0.00..431.00 rows=1 width=4) (actual time=0.380..0.435 rows=100 loops=1)
+ Dynamic Pages Per Worker: [1]
+ Planning Time: 5.571 ms
+ Optimizer: PolarDB PX Optimizer
+ (slice0) Executor memory: 23K bytes.
+ (slice1) Executor memory: 14K bytes avg x 6 workers, 14K bytes max (seg0).
+ Execution Time: 9.047 ms
+(8 rows)
+
+postgres=# SELECT COUNT(*) FROM t;
+ count
+-------
+ 100
+(1 row)
+
烛远
2022/09/20
20 min
PolarDB for PostgreSQL 的 ePQ 弹性跨机并行查询功能可以将一个大查询分散到多个节点上执行,从而加快查询速度。该功能会涉及到各个节点之间的通信,包括执行计划的分发、执行的控制、结果的获取等等。因此设计了 集群拓扑视图 功能,用于为 ePQ 组件收集并展示集群的拓扑信息,实现跨节点查询。
集群拓扑视图的维护是完全透明的,用户只需要按照部署文档搭建一写多读的集群,集群拓扑视图即可正确维护起来。关键在于需要搭建带有流复制槽的 Replica / Standby 节点。
使用以下接口可以获取集群拓扑视图(执行结果来自于 PolarDB for PostgreSQL 11):
postgres=# SELECT * FROM polar_cluster_info;
+ name | host | port | release_date | version | slot_name | type | state | cpu | cpu_quota | memory | memory_quota | iops | iops_quota | connection | connection_quota | px_connection | px_connection_quota | px_node
+-------+-----------+------+--------------+---------+-----------+---------+-------+-----+-----------+--------+--------------+------+------------+------------+------------------+---------------+---------------------+---------
+ node0 | 127.0.0.1 | 5432 | 20220930 | 1.1.27 | | RW | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | f
+ node1 | 127.0.0.1 | 5433 | 20220930 | 1.1.27 | replica1 | RO | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | t
+ node2 | 127.0.0.1 | 5434 | 20220930 | 1.1.27 | replica2 | RO | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | t
+ node3 | 127.0.0.1 | 5431 | 20220930 | 1.1.27 | standby1 | Standby | Ready | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | f
+(4 rows)
+
name
是节点的名称,是自动生成的。host
/ port
表示了节点的连接信息。在这里,都是本地地址。release_date
和 version
标识了 PolarDB 的版本信息。slot_name
是节点连接所使用的流复制槽,只有使用流复制槽连接上来的节点才会被统计在该视图中(除 Primary 节点外)。type
表示节点的类型,有三类:RW(读写节点)、RO(只读节点)、Standby(备库节点)。state
表示节点的状态。有 Offline / Going Offline / Disabled / Initialized / Pending / Ready / Unknown 这些状态,其中只有 Ready 才有可能参与 PX 计算,其他的都无法参与 PX 计算。px_node
表示是否参与 PX 计算。对于 ePQ 查询来说,默认只有 Replica 节点参与。可以通过参数控制使用 Primary 节点或者 Standby 节点参与计算:
-- 使 Primary 节点参与计算
+SET polar_px_use_master = ON;
+
+-- 使 Standby 节点参与计算
+SET polar_px_use_standby = ON;
+
提示
从 PolarDB for PostgreSQL 14 起,polar_px_use_master
参数改名为 polar_px_use_primary
。
还可以使用 polar_px_nodes
指定哪些节点参与 PX 计算。例如使用上述集群拓扑视图,可以执行如下命令,让 PX 查询只在 replica1 上执行。
SET polar_px_nodes = 'node1';
+
集群拓扑视图信息的采集是通过流复制来传递信息的。该功能对流复制协议增加了新的消息类型用于集群拓扑视图的传递。分为以下两个步骤:
集群拓扑视图并非定时更新与发送,因为视图并非一直变化。只有当节点刚启动时,或发生关键状态变化时再进行更新发送。
在具体实现上,Primary 节点收集的全局状态带有版本 generation,只有在接收到节点拓扑变化才会递增;当全局状态版本更新后,才会发送到其他节点,其他节点接收到后,设置到自己的节点上。
状态指标:
同 WAL Sender / WAL Receiver 的其他消息的做法,新增 'm'
和 'M'
消息类型,用于收集节点信息和广播集群拓扑视图。
提供接口获取 Replica 列表,提供 IP / port 等信息,用于 PX 查询。
预留了较多的负载接口,可以根据负载来实现动态调整并行度。(尚未接入)
同时增加了参数 polar_px_use_master
/ polar_px_use_standby
,将 Primary / Standby 加入到 PX 计算中,默认不打开(可能会有正确性问题,因为快照格式、Vacuum 等原因,快照有可能不可用)。
ePQ 会使用上述信息生成节点的连接信息并缓存下来,并在 ePQ 查询中使用该视图。当 generation 更新或者设置了 polar_px_nodes
/ polar_px_use_master
/ polar_px_use_standby
时,该缓存会被重置,并在下次使用时重新生成缓存。
通过 polar_monitor
插件提供视图,将上述集群拓扑视图提供出去,在任意节点均可获取。
棠羽
2023/09/20
20 min
在使用 PostgreSQL 时,如果想要在一张表中查询符合某个条件的行,默认情况下需要扫描整张表的数据,然后对每一行数据依次判断过滤条件。如果符合条件的行数非常少,而表的数据总量非常大,这显然是一个非常低效的操作。与阅读书籍类似,想要阅读某个特定的章节时,读者通常会通过书籍开头处的索引查询到对应章节的页码,然后直接从指定的页码开始阅读;在数据库中,通常会对被频繁查找的列创建索引,以避免进行开销极大的全表扫描:通过索引可以精确定位到被查找的数据位于哪些数据页面上。
PostgreSQL 支持创建多种类型的索引,其中使用得最多的是 B-Tree 索引,也是 PostgreSQL 默认创建的索引类型。在一张数据量较大的表上创建索引是一件非常耗时的事,因为其中涉及到的工作包含:
PostgreSQL 支持并行(多进程扫描/排序)和并发(不阻塞 DML)创建索引,但只能在创建索引的过程中使用单个计算节点的资源。
PolarDB-PG 的 ePQ 弹性跨机并行查询特性支持对 B-Tree 类型的索引创建进行加速。ePQ 能够利用多个计算节点的 I/O 带宽并行扫描全表数据,并利用多个计算节点的 CPU 和内存资源对每行数据在表中的物理位置按索引列值进行排序,构建索引元组。最终,将有序的索引元组归并到创建索引的进程中,写入索引页面,完成索引的创建。
创建一张包含三个列,数据量为 1000000 行的表:
CREATE TABLE t (id INT, age INT, msg TEXT);
+
+INSERT INTO t
+SELECT
+ random() * 1000000,
+ random() * 10000,
+ md5(random()::text)
+FROM generate_series(1, 1000000);
+
使用 ePQ 创建索引需要以下三个步骤:
polar_enable_px
为 ON
,打开 ePQ 的开关polar_px_dop_per_node
调整查询并行度px_build
属性为 ON
SET polar_enable_px TO ON;
+SET polar_px_dop_per_node TO 8;
+CREATE INDEX t_idx1 ON t(id, msg) WITH(px_build = ON);
+
在创建索引的过程中,数据库会对正在创建索引的表施加 ShareLock
锁。这个级别的锁将会阻塞其它进程对表的 DML 操作(INSERT
/ UPDATE
/ DELETE
)。
类似地,ePQ 支持并发创建索引,只需要在 CREATE INDEX
后加上 CONCURRENTLY
关键字即可:
SET polar_enable_px TO ON;
+SET polar_px_dop_per_node TO 8;
+CREATE INDEX CONCURRENTLY t_idx2 ON t(id, msg) WITH(px_build = ON);
+
在创建索引的过程中,数据库会对正在创建索引的表施加 ShareUpdateExclusiveLock
锁。这个级别的锁将不会阻塞其它进程对表的 DML 操作。
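并发创建完成后,如需确认索引是否构建成功并处于可用状态,可以参考如下查询(使用 PostgreSQL 系统目录 pg_index,并非 ePQ 特有功能,表名沿用上文示例):
-- indisvalid 为 true 表示索引已可用于查询
SELECT indexrelid::regclass AS index_name, indisvalid
FROM pg_index
WHERE indrelid = 't'::regclass;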
ePQ 加速创建索引暂不支持以下场景:
UNIQUE
索引INCLUDING
列TABLESPACE
WHERE
而成为部分索引(Partial Index)棠羽
2023/02/08
10 min
物化视图 (Materialized View) 是一个包含查询结果的数据库对象。与普通的视图不同,物化视图不仅保存视图的定义,还保存了 创建物化视图 时的数据副本。当物化视图的数据与视图定义中的数据不一致时,可以进行 物化视图刷新 (Refresh) 保持物化视图中的数据与视图定义一致。物化视图本质上是对视图定义中的查询做预计算,以便于在查询时复用。
CREATE TABLE AS
语法用于将一个查询所对应的数据构建为一个新的表,其表结构与查询的输出列完全相同。
SELECT INTO
语法用于建立一张新表,并将查询所对应的数据写入表中,而不是将查询到的数据返回给客户端。其表结构与查询的输出列完全相同。
对于物化视图的创建和刷新,以及 CREATE TABLE AS
/ SELECT INTO
语法,由于在数据库层面需要完成的工作步骤十分相似,因此 PostgreSQL 内核使用同一套代码逻辑来处理这几种语法。内核执行过程中的主要步骤包含:
CREATE TABLE AS
/ SELECT INTO
语法中定义的查询,扫描符合查询条件的数据PolarDB for PostgreSQL 对上述两个步骤分别引入了 ePQ 并行扫描和批量数据写入的优化。在需要扫描或写入的数据量较大时,能够显著提升上述 DDL 语法的性能,缩短执行时间:
将以下参数设置为 ON
即可启用 ePQ 并行扫描来加速上述语法中的查询过程,目前其默认值为 ON
。该参数生效的前置条件是 ePQ 特性的总开关 polar_enable_px
被打开。
SET polar_px_enable_create_table_as = ON;
+
由于 ePQ 特性的限制,该优化不支持 CREATE TABLE AS ... WITH OIDS
语法。对于该语法,处理流程将会回退为使用 PostgreSQL 内置优化器为 DDL 定义中的查询生成执行计划,并通过 PostgreSQL 的单机执行器完成查询。
将以下参数设置为 ON
即可启用批量写入来加速上述语法中的写入过程,目前其默认值为 ON
。
SET polar_enable_create_table_as_bulk_insert = ON;
+
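下面给出一个综合示例,演示在打开上述参数后创建物化视图与执行 CREATE TABLE AS(示例假设表 t 已存在,对象名 mv_t、t_copy 仅为演示):
SET polar_enable_px = ON;
SET polar_px_enable_create_table_as = ON;
SET polar_enable_create_table_as_bulk_insert = ON;

-- 物化视图:查询部分由 ePQ 并行执行,写入部分批量完成
CREATE MATERIALIZED VIEW mv_t AS SELECT * FROM t;

-- CREATE TABLE AS:同样可以享受并行扫描与批量写入的加速
CREATE TABLE t_copy AS SELECT * FROM t;

-- 刷新物化视图时同样复用上述优化
REFRESH MATERIALIZED VIEW mv_t;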
渊云、秦疏
2023/09/06
30 min
PostgreSQL 提供了 EXPLAIN
命令用于 SQL 语句的性能分析。它能够输出 SQL 对应的查询计划,以及在执行过程中的具体耗时、资源消耗等信息,可用于排查 SQL 的性能瓶颈。
EXPLAIN
命令原先只适用于单机执行的 SQL 性能分析。PolarDB-PG 的 ePQ 弹性跨机并行查询扩展了 EXPLAIN
的功能,使其可以打印 ePQ 的跨机并行执行计划,还能够统计 ePQ 执行计划在各个算子上的执行时间、数据扫描量、内存使用量等信息,并以统一的视角返回给客户端。
ePQ 的执行计划是分片的。每个计划分片(Slice)由计算节点上的虚拟执行单元(Segment)启动的一组进程(Gang)负责执行,完成 SQL 的一部分计算。ePQ 在执行计划中引入了 Motion 算子,用于在执行不同计划分片的进程组之间进行数据传递。因此,Motion 算子就是计划分片的边界。
ePQ 中总共引入了三种 Motion 算子:
PX Coordinator:源端数据发送到同一个目标端(汇聚)
PX Broadcast:源端数据发送到每一个目标端(广播)
PX Hash:源端数据经过哈希计算后发送到某一个目标端(重分布)
以一个简单查询作为例子:
=> CREATE TABLE t (id INT);
+=> SET polar_enable_px TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t LIMIT 1;
+ QUERY PLAN
+-------------------------------------------------
+ Limit
+ -> PX Coordinator 6:1 (slice1; segments: 6)
+ -> Partial Seq Scan on t
+ Optimizer: PolarDB PX Optimizer
+(4 rows)
+
以上执行计划以 Motion 算子为界,被分为了两个分片:一个是接收最终结果的分片 slice0
,一个是扫描数据的分片slice1
。对于 slice1
这个计划分片,ePQ 将使用六个执行单元(segments: 6
)分别启动一个进程来执行,这六个进程各自负责扫描表的一部分数据(Partial Seq Scan
),通过 Motion 算子将六个进程的数据汇聚到一个目标端(PX Coordinator 6:1
),传递给 Limit
算子。
如果查询逐渐复杂,则执行计划中的计划分片和 Motion 算子会越来越多:
=> CREATE TABLE t1 (a INT, b INT, c INT);
+=> SET polar_enable_px TO ON;
+=> EXPLAIN (COSTS OFF) SELECT SUM(b) FROM t1 GROUP BY a LIMIT 1;
+ QUERY PLAN
+------------------------------------------------------------
+ Limit
+ -> PX Coordinator 6:1 (slice1; segments: 6)
+ -> GroupAggregate
+ Group Key: a
+ -> Sort
+ Sort Key: a
+ -> PX Hash 6:6 (slice2; segments: 6)
+ Hash Key: a
+ -> Partial Seq Scan on t1
+ Optimizer: PolarDB PX Optimizer
+(10 rows)
+
以上执行计划中总共有三个计划分片。将会有六个进程(segments: 6
)负责执行 slice2
分片,分别扫描表的一部分数据,然后通过 Motion 算子(PX Hash 6:6
)将数据重分布到另外六个(segments: 6
)负责执行 slice1
分片的进程上,各自完成排序(Sort
)和聚合(GroupAggregate
),最终通过 Motion 算子(PX Coordinator 6:1
)将数据汇聚到结果分片 slice0
。
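作为补充,如果希望进一步查看各计划分片中算子的实际执行情况(耗时、行数等),可以结合 ANALYZE 选项使用,输出内容会因环境不同而有差异(以下仅为示例用法,沿用上文的表 t1):
SET polar_enable_px TO ON;
-- 在真实执行查询的同时输出各算子的实际统计信息
EXPLAIN (ANALYZE, COSTS OFF) SELECT SUM(b) FROM t1 GROUP BY a LIMIT 1;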
渊云
2023/09/06
20 min
PolarDB-PG 的 ePQ 弹性跨机并行查询特性提供了精细的粒度控制方法,可以合理使用集群内的计算资源。在最大程度利用闲置计算资源进行并行查询,提升资源利用率的同时,避免了对其它业务负载产生影响:
参数 polar_px_nodes
指定了参与 ePQ 的计算节点范围,默认值为空,表示所有只读节点都参与 ePQ 并行查询:
=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+
+(1 row)
+
如果希望读写节点也参与 ePQ 并行,则可以设置如下参数:
SET polar_px_use_primary TO ON;
+
如果部分只读节点负载较高,则可以通过修改 polar_px_nodes
参数设置仅特定几个而非所有只读节点参与 ePQ 并行查询。参数 polar_px_nodes
的合法格式是一个以英文逗号分隔的节点名称列表。获取节点名称需要安装 polar_monitor
插件:
CREATE EXTENSION IF NOT EXISTS polar_monitor;
+
通过 polar_monitor
插件提供的集群拓扑视图,可以查询到集群中所有计算节点的名称:
=> SELECT name,slot_name,type FROM polar_cluster_info;
+ name | slot_name | type
+-------+-----------+---------
+ node0 | | Primary
+ node1 | standby1 | Standby
+ node2 | replica1 | Replica
+ node3 | replica2 | Replica
+(4 rows)
+
其中:
Primary
表示读写节点Replica
表示只读节点Standby
表示备库节点通用的最佳实践是使用负载较低的只读节点参与 ePQ 并行查询:
=> SET polar_px_nodes = 'node2,node3';
+=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+ node2,node3
+(1 row)
+
参数 polar_px_dop_per_node
用于设置当前会话中的 ePQ 查询在每个计算节点上的执行单元(Segment)数量,每个执行单元会为其需要执行的每一个计划分片(Slice)启动一个进程。
该参数默认值为 3
,通用最佳实践值为当前计算节点 CPU 核心数的一半。如果计算节点的 CPU 负载较高,可以酌情调低该参数,将计算节点的 CPU 占用率控制在 80% 以下;如果查询性能不佳,可以酌情调高该参数,但同样需要保持计算节点的 CPU 水位不高于 80%,否则可能会拖慢其它的后台进程。
创建一张表:
CREATE TABLE test(id INT);
+
假设集群内有两个只读节点,polar_px_nodes
为空,此时 ePQ 将使用集群内的所有只读节点参与并行查询;参数 polar_px_dop_per_node
的值为 3
,表示每个计算节点上将会有三个执行单元。执行计划如下:
=> SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+
+(1 row)
+
+=> SHOW polar_px_dop_per_node;
+ polar_px_dop_per_node
+-----------------------
+ 3
+(1 row)
+
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
从执行计划中可以看出,两个只读节点上总计有六个执行单元(segments: 6
)将会执行这个计划中唯一的计划分片 slice1
。这意味着总计会有六个进程并行执行当前查询。
此时,调整 polar_px_dop_per_node
为 4
,再次执行查询,两个只读节点上总计会有八个执行单元参与当前查询。由于执行计划中只有一个计划分片 slice1
,这意味着总计会有八个进程并行执行当前查询:
=> SET polar_px_dop_per_node TO 4;
+SET
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 8:1 (slice1; segments: 8) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
此时,如果设置 polar_px_use_primary
参数,让读写节点也参与查询,那么读写节点上也将会有四个执行单元参与 ePQ 并行执行,集群内总计 12 个进程参与并行执行:
=> SET polar_px_use_primary TO ON;
+SET
+=> EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ PX Coordinator 12:1 (slice1; segments: 12) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
渊云
2023/09/06
20 min
随着数据量的不断增长,表的规模将会越来越大。为了方便管理和提高查询性能,比较好的实践是使用分区表,将大表拆分成多个子分区表。甚至每个子分区表还可以进一步拆成二级子分区表,从而形成了多级分区表。
PolarDB-PG 支持 ePQ 弹性跨机并行查询,能够利用集群中多个计算节点提升只读查询的性能。ePQ 不仅能够对普通表进行高效的跨机并行查询,对分区表也实现了跨机并行查询。
ePQ 对分区表的基础功能支持包含:对分区表进行跨机并行扫描,以及基于分区键过滤条件的静态分区裁剪。
此外,ePQ 还支持了部分与分区表相关的高级功能:Partition Wise Join,以及多级分区表的并行查询与静态分区裁剪。
ePQ 暂不支持对具有多列分区键的分区表进行并行查询。
创建一张分区策略为 Range 的分区表,并创建三个子分区:
CREATE TABLE t1 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t1_p1 PARTITION OF t1 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t1_p2 PARTITION OF t1 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t1_p3 PARTITION OF t1 FOR VALUES FROM (400) TO (600);
+
设置参数打开 ePQ 开关和 ePQ 分区表扫描功能的开关:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
查看对分区表进行全表扫描的执行计划:
=> EXPLAIN (COSTS OFF) SELECT * FROM t1;
+ QUERY PLAN
+-------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t1_p1
+ -> Partial Seq Scan on t1_p2
+ -> Partial Seq Scan on t1_p3
+ Optimizer: PolarDB PX Optimizer
+(6 rows)
+
ePQ 将会启动一组进程并行扫描分区表的每一个子表。每一个扫描进程都会通过 Append
算子依次扫描每一个子表的一部分数据(Partial Seq Scan
),并通过 Motion 算子(PX Coordinator
)将所有进程的扫描结果汇聚到发起查询的进程并返回。
当查询的过滤条件中包含分区键时,ePQ 优化器可以根据过滤条件对将要扫描的分区表进行裁剪,避免扫描不需要的子分区,节省系统资源,提升查询性能。以上述 t1
表为例,查看以下查询的执行计划:
=> EXPLAIN (COSTS OFF) SELECT * FROM t1 WHERE id < 100;
+ QUERY PLAN
+-------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t1_p1
+ Filter: (id < 100)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
由于查询的过滤条件 id < 100
包含分区键,因此 ePQ 优化器可以根据分区表的分区边界,在产生执行计划时去除不符合过滤条件的子分区(t1_p2
、t1_p3
),只保留符合过滤条件的子分区(t1_p1
)。
在进行分区表之间的连接操作时,如果分区策略和边界相同,并且连接条件为分区键时,ePQ 优化器可以产生以子分区为单位进行连接的执行计划,避免两张分区表进行笛卡尔积式的连接,节省系统资源,提升查询性能。
以两张 Range 分区表的连接为例。使用以下 SQL 创建两张分区策略和边界都相同的分区表 t2
和 t3
:
CREATE TABLE t2 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t2_p1 PARTITION OF t2 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t2_p2 PARTITION OF t2 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t2_p3 PARTITION OF t2 FOR VALUES FROM (400) TO (600);
+
+CREATE TABLE t3 (id INT) PARTITION BY RANGE(id);
+CREATE TABLE t3_p1 PARTITION OF t3 FOR VALUES FROM (0) TO (200);
+CREATE TABLE t3_p2 PARTITION OF t3 FOR VALUES FROM (200) TO (400);
+CREATE TABLE t3_p3 PARTITION OF t3 FOR VALUES FROM (400) TO (600);
+
打开以下参数启用 ePQ 对分区表的支持:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
当 Partition Wise join 关闭时,两表在分区键上等值连接的执行计划如下:
=> SET polar_px_enable_partitionwise_join TO OFF;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t2 JOIN t3 ON t2.id = t3.id;
+ QUERY PLAN
+-----------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Hash Join
+ Hash Cond: (t2_p1.id = t3_p1.id)
+ -> Append
+ -> Partial Seq Scan on t2_p1
+ -> Partial Seq Scan on t2_p2
+ -> Partial Seq Scan on t2_p3
+ -> Hash
+ -> PX Broadcast 6:6 (slice2; segments: 6)
+ -> Append
+ -> Partial Seq Scan on t3_p1
+ -> Partial Seq Scan on t3_p2
+ -> Partial Seq Scan on t3_p3
+ Optimizer: PolarDB PX Optimizer
+(14 rows)
+
从执行计划中可以看出,执行 slice1
计划分片的六个进程会分别通过 Append
算子依次扫描分区表 t2
每一个子分区的一部分数据,并通过 Motion 算子(PX Broadcast
)接收来自执行 slice2
的六个进程广播的 t3
全表数据,在本地完成哈希连接(Hash Join
)后,通过 Motion 算子(PX Coordinator
)汇聚结果并返回。本质上,分区表 t2
的每一行数据都与 t3
的每一行数据做了一次连接。
打开参数 polar_px_enable_partitionwise_join
启用 Partition Wise join 后,再次查看执行计划:
=> SET polar_px_enable_partitionwise_join TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM t2 JOIN t3 ON t2.id = t3.id;
+ QUERY PLAN
+------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Hash Join
+ Hash Cond: (t2_p1.id = t3_p1.id)
+ -> Partial Seq Scan on t2_p1
+ -> Hash
+ -> Full Seq Scan on t3_p1
+ -> Hash Join
+ Hash Cond: (t2_p2.id = t3_p2.id)
+ -> Partial Seq Scan on t2_p2
+ -> Hash
+ -> Full Seq Scan on t3_p2
+ -> Hash Join
+ Hash Cond: (t2_p3.id = t3_p3.id)
+ -> Partial Seq Scan on t2_p3
+ -> Hash
+ -> Full Seq Scan on t3_p3
+ Optimizer: PolarDB PX Optimizer
+(18 rows)
+
在上述执行计划中,执行 slice1
计划分片的六个进程将通过 Append
算子依次扫描分区表 t2
每个子分区中的一部分数据,以及分区表 t3
相对应子分区 的全部数据,将两份数据进行哈希连接(Hash Join
),最终通过 Motion 算子(PX Coordinator
)汇聚结果并返回。在上述执行过程中,分区表 t2
的每一个子分区 t2_p1
、t2_p2
、t2_p3
分别只与分区表 t3
对应的 t3_p1
、t3_p2
、t3_p3
做了连接,并没有与其它不相关的分区连接,节省了不必要的工作。
在多级分区表中,每级分区表的分区维度(分区键)可以不同:比如一级分区表按照时间维度分区,二级分区表按照地域维度分区。当查询 SQL 的过滤条件中包含每一级分区表中的分区键时,ePQ 优化器支持对多级分区表进行静态分区裁剪,从而过滤掉不需要被扫描的子分区。
以下图为例:当查询过滤条件 WHERE date = '202201' AND region = 'beijing'
中包含一级分区键 date
和二级分区键 region
时,ePQ 优化器能够裁剪掉所有不相关的分区,产生的执行计划中只包含符合条件的子分区。由此,执行器只对需要扫描的子分区进行扫描即可。
使用以下 SQL 为例,创建一张多级分区表:
CREATE TABLE r1 (a INT, b TIMESTAMP) PARTITION BY RANGE (b);
+
+CREATE TABLE r1_p1 PARTITION OF r1 FOR VALUES FROM ('2000-01-01') TO ('2010-01-01') PARTITION BY RANGE (a);
+CREATE TABLE r1_p1_p1 PARTITION OF r1_p1 FOR VALUES FROM (1) TO (1000000);
+CREATE TABLE r1_p1_p2 PARTITION OF r1_p1 FOR VALUES FROM (1000000) TO (2000000);
+
+CREATE TABLE r1_p2 PARTITION OF r1 FOR VALUES FROM ('2010-01-01') TO ('2020-01-01') PARTITION BY RANGE (a);
+CREATE TABLE r1_p2_p1 PARTITION OF r1_p2 FOR VALUES FROM (1) TO (1000000);
+CREATE TABLE r1_p2_p2 PARTITION OF r1_p2 FOR VALUES FROM (1000000) TO (2000000);
+
打开以下参数启用 ePQ 对分区表的支持:
SET polar_enable_px TO ON;
+SET polar_px_enable_partition TO ON;
+
执行一条以两级分区键作为过滤条件的 SQL,并关闭 ePQ 的多级分区扫描功能,将得到 PostgreSQL 内置优化器经过多级分区静态裁剪后的执行计划:
=> SET polar_px_optimizer_multilevel_partitioning TO OFF;
+=> EXPLAIN (COSTS OFF) SELECT * FROM r1 WHERE a < 1000000 AND b < '2009-01-01 00:00:00';
+ QUERY PLAN
+----------------------------------------------------------------------------------------
+ Seq Scan on r1_p1_p1 r1
+ Filter: ((a < 1000000) AND (b < '2009-01-01 00:00:00'::timestamp without time zone))
+(2 rows)
+
启用 ePQ 的多级分区扫描功能,再次查看执行计划:
=> SET polar_px_optimizer_multilevel_partitioning TO ON;
+=> EXPLAIN (COSTS OFF) SELECT * FROM r1 WHERE a < 1000000 AND b < '2009-01-01 00:00:00';
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------
+ PX Coordinator 6:1 (slice1; segments: 6)
+ -> Append
+ -> Partial Seq Scan on r1_p1_p1
+ Filter: ((a < 1000000) AND (b < '2009-01-01 00:00:00'::timestamp without time zone))
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
在上述计划中,ePQ 优化器进行了对多级分区表的静态裁剪。执行 slice1
计划分片的六个进程只需对符合过滤条件的子分区 r1_p1_p1
进行并行扫描(Partial Seq Scan
)即可,并将扫描到的数据通过 Motion 算子(PX Coordinator
)汇聚并返回。
渊云
2022/09/27
30 min
PolarDB-PG 支持 ePQ 弹性跨机并行查询,能够利用集群中多个计算节点提升只读查询的性能。此外,ePQ 也支持在读写节点上通过多进程并行写入,实现对 INSERT
语句的加速。
ePQ 的并行 INSERT
功能可以用于加速 INSERT INTO ... SELECT ...
这种读写兼备的 SQL。对于 SQL 中的 SELECT
部分,ePQ 将启动多个进程并行执行查询;对于 SQL 中的 INSERT
部分,ePQ 将在读写节点上启动多个进程并行执行写入。执行写入的进程与执行查询的进程之间通过 Motion 算子 进行数据传递。
能够支持并行 INSERT
的表类型有:
并行 INSERT
支持动态调整写入并行度(写入进程数量),在查询不成为瓶颈的条件下性能最高能提升三倍。
创建两张表 t1
和 t2
,向 t1
中插入一些数据:
CREATE TABLE t1 (id INT);
+CREATE TABLE t2 (id INT);
+INSERT INTO t1 SELECT generate_series(1,100000);
+
打开 ePQ 及并行 INSERT
的开关:
SET polar_enable_px TO ON;
+SET polar_px_enable_insert_select TO ON;
+
通过 INSERT
语句将 t1
表中的所有数据插入到 t2
表中。查看并行 INSERT
的执行计划:
=> EXPLAIN INSERT INTO t2 SELECT * FROM t1;
+ QUERY PLAN
+-----------------------------------------------------------------------------------------
+ Insert on t2 (cost=0.00..952.87 rows=33334 width=4)
+ -> Result (cost=0.00..0.00 rows=0 width=0)
+ -> PX Hash 6:3 (slice1; segments: 6) (cost=0.00..432.04 rows=100000 width=8)
+ -> Partial Seq Scan on t1 (cost=0.00..431.37 rows=16667 width=4)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
其中的 PX Hash 6:3
表示 6 个并行查询 t1
的进程通过 Motion 算子将数据传递给 3 个并行写入 t2
的进程。
通过参数 polar_px_insert_dop_num
可以动态调整写入并行度,比如:
=> SET polar_px_insert_dop_num TO 12;
+=> EXPLAIN INSERT INTO t2 SELECT * FROM t1;
+ QUERY PLAN
+------------------------------------------------------------------------------------------
+ Insert on t2 (cost=0.00..952.87 rows=8334 width=4)
+ -> Result (cost=0.00..0.00 rows=0 width=0)
+ -> PX Hash 6:12 (slice1; segments: 6) (cost=0.00..432.04 rows=100000 width=8)
+ -> Partial Seq Scan on t1 (cost=0.00..431.37 rows=16667 width=4)
+ Optimizer: PolarDB PX Optimizer
+(5 rows)
+
执行计划中的 PX Hash 6:12
显示,并行查询 t1
的进程数量不变,并行写入 t2
的进程数量变更为 12
。
调整 polar_px_dop_per_node
和 polar_px_insert_dop_num
可以分别修改 INSERT INTO ... SELECT ...
中查询和写入的并行度。
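例如,可以在同一个会话中分别调整查询与写入的并行度(取值仅为演示,请结合节点 CPU 资源设置,表名沿用上文示例):
SET polar_px_dop_per_node TO 8;     -- 每个计算节点上用于查询的执行单元数量
SET polar_px_insert_dop_num TO 6;   -- 用于写入的进程数量
EXPLAIN INSERT INTO t2 SELECT * FROM t1;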
ePQ 对并行 INSERT 的处理如下:多个查询进程并行扫描 SELECT 部分所需的数据,通过 Motion 算子将数据传递给读写节点上的多个写入进程,写入进程并行地将数据插入目标表。
并行查询和并行写入是以流水线的形式同时进行的。上述执行过程如图所示:
山现
2023/12/25
10 min
pgvector
作为一款高效的向量数据库插件,基于 PostgreSQL 的扩展机制,利用 C 语言实现了多种向量数据类型和运算算法,同时还能够高效存储与查询以向量表示的 AI Embedding。
pgvector
支持 IVFFlat 索引。IVFFlat 索引能够将向量空间分为若干个划分区域,每个区域都包含一些向量,并创建倒排索引,用于快速地查找与给定向量相似的向量。IVFFlat 是 IVFADC 索引的简化版本,适用于召回精度要求高,但对查询耗时要求不严格(100ms 级别)的场景。相比其他索引类型,IVFFlat 索引具有高召回率、高精度、算法和参数简单、空间占用小的优势。
pgvector
插件中 IVFFlat 索引的构建与查询流程大致如下:首先通过聚类将向量空间划分为若干个区域,每个区域由一个中心点代表;然后将每个向量分配到距离最近的中心点,建立倒排索引;查询时先找到与查询向量最近的若干个中心点,再在对应区域内精确计算距离,返回最相似的向量。
pgvector
可以顺序检索或索引检索高维向量,关于索引类型和更多参数介绍可以参考插件源代码的 README。
CREATE EXTENSION vector;
+
执行如下命令,创建一个含有向量字段的表:
CREATE TABLE t (val vector(3));
+
执行如下命令,可以插入向量数据:
INSERT INTO t (val) VALUES ('[0,0,0]'), ('[1,2,3]'), ('[1,1,1]'), (NULL);
+
创建 IVFFlat 类型的索引:
val vector_ip_ops
表示需要创建索引的列名为 val
,并且使用向量操作符 vector_ip_ops
来计算向量之间的相似度。vector_ip_ops 表示以内积(inner product)作为相似度度量;pgvector 还提供了基于余弦距离、欧几里得距离等度量的其它操作符类
表示使用的划分区域数量为 1,这意味着所有向量都将被分配到同一个区域中。在实际应用中,划分区域数量需要根据数据规模和查询性能进行调整CREATE INDEX ON t USING ivfflat (val vector_ip_ops) WITH (lists = 1);
+
计算近似向量:
=> SELECT * FROM t ORDER BY val <#> '[3,3,3]';
+ val
+---------
+ [1,2,3]
+ [1,1,1]
+ [0,0,0]
+
+(4 rows)
+
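作为补充,pgvector 还提供了其它距离操作符,并可以通过 ivfflat.probes 控制查询时探测的区域数量(以下示例仅作参考,操作符与参数的完整说明请见插件 README):
-- <-> 表示欧几里得距离,<=> 表示余弦距离,<#> 表示负内积
SELECT val, val <-> '[3,3,3]' AS l2_distance
FROM t
ORDER BY val <-> '[3,3,3]'
LIMIT 3;

-- 增大 probes 可以提高召回率,但会增加查询耗时
SET ivfflat.probes = 1;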
DROP EXTENSION vector;
+
棠羽
2022/10/05
10 min
对大规模的数据进行相似度计算在电商业务、搜索引擎中是一个很关键的技术问题。相对简易的相似度计算实现不仅运算速度慢,还十分消耗资源。smlar
是 PostgreSQL 的一款开源第三方插件,提供了可以在数据库内高效计算数据相似度的函数,并提供了支持 GiST 和 GIN 索引的相似度运算符。目前该插件已经支持 PostgreSQL 所有的内置数据类型。
注意
由于 smlar 插件的 %
操作符与 RUM 插件的 %
操作符冲突,因此 smlar 与 RUM 两个插件无法同时创建在同一 schema 中。
float4 smlar(anyarray, anyarray)
计算两个数组的相似度,数组的数据类型需要一致。
float4 smlar(anyarray, anyarray, bool useIntersect)
计算两个自定义复合类型数组的相似度,useIntersect
参数表示是仅让两个数组中重叠的元素参与运算,还是让全部元素参与运算;复合类型可由以下方式定义:
CREATE TYPE type_name AS (element_name anytype, weight_name FLOAT4);
+
float4 smlar(anyarray a, anyarray b, text formula);
使用参数给定的公式来计算两个数组的相似度,数组的数据类型需要一致;公式中可以使用的预定义变量有:
N.i
:两个数组中的相同元素个数(交集)N.a
:第一个数组中的唯一元素个数N.b
:第二个数组中的唯一元素个数SELECT smlar('{1,4,6}'::int[], '{5,4,6}', 'N.i / sqrt(N.a * N.b)');
+
anyarray % anyarray
该运算符的含义为,当两个数组的的相似度超过阈值时返回 TRUE
,否则返回 FALSE
。
text[] tsvector2textarray(tsvector)
将 tsvector
类型转换为字符串数组。
anyarray array_unique(anyarray)
对数组进行排序、去重。
float4 inarray(anyarray, anyelement)
如果元素出现在数组中,则返回 1.0
;否则返回 0
。
float4 inarray(anyarray, anyelement, float4, float4)
如果元素出现在数组中,则返回第三个参数;否则返回第四个参数。
smlar.threshold FLOAT
相似度阈值,用于给 %
运算符判断两个数组是否相似。
smlar.persistent_cache BOOL
全局统计信息的缓存是否存放在与事务无关的内存中。
smlar.type STRING
:相似度计算公式,可选的相似度类型包含:
smlar.stattable STRING
存储集合范围统计信息的表名,表定义如下:
CREATE TABLE table_name (
+ value data_type UNIQUE,
+ ndoc int4 (or bigint) NOT NULL CHECK (ndoc>0)
+);
+
smlar.tf_method STRING:计算词频(TF,Term Frequency)的方法,取值如下
n:简单计数(默认)
log:1 + log(n)
const:频率等于 1
smlar.idf_plus_one BOOL:计算逆文本频率指数(IDF,Inverse Document Frequency)的方法,取值如下
FALSE:log(d / df)(默认)
TRUE:log(1 + d / df)
CREATE EXTENSION smlar;
+
使用上述的函数计算两个数组的相似度:
SELECT smlar('{3,2}'::int[], '{3,2,1}');
+ smlar
+----------
+ 0.816497
+(1 row)
+
+SELECT smlar('{1,4,6}'::int[], '{5,4,6}', 'N.i / (N.a + N.b)' );
+ smlar
+----------
+ 0.333333
+(1 row)
+
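也可以结合上文介绍的 smlar.threshold 参数与 % 运算符,直接判断两个数组是否相似(阈值取值仅为演示):
SET smlar.threshold = 0.6;
-- 两个数组的余弦相似度约为 0.67,超过阈值,返回 true
SELECT '{1,4,6}'::int[] % '{5,4,6}'::int[];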
DROP EXTENSION smlar;
+
PGCon 2012 - Finding Similar: Effective similarity search in database (slides)
何柯文
2022/09/21
30 min
PolarDB for PostgreSQL(以下简称 PolarDB)底层使用 PolarFS(以下简称为 PFS)作为文件系统。不同于 ext4 等单机文件系统,PFS 在页扩展过程中,元数据更新开销较大;且 PFS 的最小页扩展粒度为 4MB。而 PostgreSQL 8kB 的页扩展粒度并不适合 PFS,将会导致写表或创建索引时性能下降;同时,PFS 在读取大块页面时 I/O 效率更高。为了适配上述特征,我们为 PolarDB 设计了堆表预读、堆表预扩展、索引创建预扩展的功能,使运行在 PFS 上的 PolarDB 能够获得更好的性能。
在 PostgreSQL 读取堆表的过程中,会以 8kB 页为单位通过文件系统读取页面至内存缓冲池(Buffer Pool)中。PFS 对于这种数据量较小的 I/O 操作并不是特别高效。所以,PolarDB 为了适配 PFS 而设计了 堆表批量预读。当读取的页数量大于 1 时,将会触发批量预读,一次 I/O 读取 128kB 数据至 Buffer Pool 中。预读对顺序扫描(Sequential Scan)、Vacuum 两种场景性能可以带来一倍左右的提升,在索引创建场景下可以带来 18% 的性能提升。
在 PostgreSQL 中,表空间的扩展过程中将会逐个申请并扩展 8kB 的页。即使是 PostgreSQL 支持的批量页扩展,进行一次 N 页扩展的流程中也包含了 N 次 I/O 操作。这种页扩展不符合 PFS 最小页扩展粒度为 4MB 的特性。为此,PolarDB 设计了堆表批量预扩展,在扩展堆表的过程中,一次 I/O 扩展 4MB 页。在写表频繁的场景下(如装载数据),能够带来一倍的性能提升。
索引创建预扩展与堆表预扩展的功能类似。索引创建预扩展特别针对 PFS 优化索引创建过程。在索引创建的页扩展过程中,一次 I/O 扩展 4MB 页。这种设计可以在创建索引的过程中带来 30% 的性能提升。
注意
当前索引创建预扩展只适配了 B-Tree 索引。其他索引类型暂未支持。
堆表预读的实现步骤主要分为四步:
palloc
在内存中申请一段大小为 N * 页大小
的空间,简称为 p
N * 页大小
的数据拷贝至 p
中p
中 N 个页的内容逐个拷贝至从 Buffer Pool 申请的 N 个 Buffer 中。后续的读取操作会直接命中 Buffer。数据流图如下所示:
预扩展的实现步骤主要分为三步:
索引创建预扩展的实现步骤与预扩展类似,但没有涉及 Buffer 的申请。步骤如下:
堆表预读的参数名为 polar_bulk_read_size
,功能默认开启,默认大小为 128kB。不建议用户自行修改该参数,128kB 是贴合 PFS 的最优值,自行调整并不会带来性能的提升。
关闭功能:
ALTER SYSTEM SET polar_bulk_read_size = 0;
+SELECT pg_reload_conf();
+
打开功能并设置预读大小为 128kB:
ALTER SYSTEM SET polar_bulk_read_size = '128kB';
+SELECT pg_reload_conf();
+
堆表预扩展的参数名为 polar_bulk_extend_size
,功能默认开启,预扩展的大小默认是 4MB。不建议用户自行修改该参数值,4MB 是贴合 PFS 的最优值。
关闭功能:
ALTER SYSTEM SET polar_bulk_extend_size = 0;
+SELECT pg_reload_conf();
+
打开功能并设置预扩展大小为 4MB:
ALTER SYSTEM SET polar_bulk_extend_size = '4MB';
+SELECT pg_reload_conf();
+
索引创建预扩展的参数名为 polar_index_create_bulk_extend_size
,功能默认开启。索引创建预扩展的大小默认是 4MB。不建议用户自行修改该参数值,4MB 是贴合 PFS 的最优值。
关闭功能:
ALTER SYSTEM SET polar_index_create_bulk_extend_size = 0;
+SELECT pg_reload_conf();
+
打开功能,并设置预扩展大小为 4MB:
ALTER SYSTEM SET polar_index_create_bulk_extend_size = 512;
+SELECT pg_reload_conf();
+
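修改完成后,可以通过 SHOW 命令确认各参数当前的生效值(示例):
SHOW polar_bulk_read_size;
SHOW polar_bulk_extend_size;
SHOW polar_index_create_bulk_extend_size;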
为了展示堆表预读、堆表预扩展、索引创建预扩展的性能提升效果,我们在 PolarDB for PostgreSQL 14 的实例上进行了测试。
400GB 表的 Vacuum 性能:
400GB 表的 SeqScan 性能:
结论:
400GB 表数据装载性能:
结论:
400GB 表创建索引性能:
结论:
步真
2022/11/14
50 min
在 SQL 执行的过程中,存在若干次对系统表和用户表的查询。PolarDB for PostgreSQL 通过文件系统的 lseek 系统调用来获取表大小。频繁执行 lseek 系统调用会严重影响数据库的执行性能,特别是对于存储计算分离架构的 PolarDB for PostgreSQL 来说,在 PolarFS 上的 PFS lseek 系统调用会带来更大的 RTO 时延。为了降低 lseek 系统调用的使用频率,PolarDB for PostgreSQL 在自身存储引擎上提供了一层表大小缓存接口,用于提升数据库的运行时性能。
PolarDB for PostgreSQL 为了实现 RSC(Relation Size Cache,表大小缓存),在 smgr 层进行了重新适配与设计。在整体上,RSC 是一个 缓存数组 + 两级索引 的结构设计:一级索引通过内存地址 + 引用计数来寻找共享内存 RSC 缓存中的一个缓存块;二级索引通过共享内存中的哈希表来索引得到一个 RSC 缓存块的数组下标,根据下标进一步访问 RSC 缓存,获取表大小信息。
在开启 RSC 缓存功能后,各个 smgr 层接口将会生效 RSC 缓存查询与更新的逻辑:
smgrnblocks
:获取表大小的实际入口,将会通过查询 RSC 一级或二级索引得到 RSC 缓存块地址,从而得到物理表大小。如果 RSC 缓存命中则直接返回缓存中的物理表大小;否则需要进行一次 lseek 系统调用,并将实际的物理表大小更新到 RSC 缓存中,并同步更新 RSC 一级与二级索引。smgrextend
:表文件扩展接口,将会把物理表文件扩展一个页,并更新对应表的 RSC 索引与缓存。smgrextendbatch
:表文件的预扩展接口,将会把物理表文件预扩展多个页,并更新对应表的 RSC 索引与缓存。smgrtruncate
:表文件的删除接口,将会把物理表文件删除,并清空对应表的 RSC 索引与缓存。在共享内存中,维护了一个数组形式的 RSC 缓存。数组中的每个元素是一个 RSC 缓存块,其中保存的关键信息包含:
generation
:表发生更新操作时,这个计数会自增对于每个执行用户操作的会话进程而言,其所需访问的表被维护在进程私有的 SmgrRelation
结构中,其中包含:
generation
计数当执行表访问操作时,如果引用计数与 RSC 缓存中的 generation
一致,则认为 RSC 缓存没有被更新过,可以直接通过指针得到 RSC 缓存,获得物理表的当前大小。RSC 一级索引整体上是一个共享引用计数 + 共享内存指针的设计,在对大多数特定表的读多写少场景中,这样的设计可以有效降低对 RSC 二级索引的并发访问。
当表大小发生更新(例如 INSERT
、UPDATE
、COPY
等触发表文件大小元信息变更的操作)时,会导致 RSC 一级索引失效(generation
计数不一致),会话进程会尝试访问 RSC 二级索引。RSC 二级索引的形式是一个共享内存哈希表:
通过待访问物理表的 OID,查找位于共享内存中的 RSC 二级索引:如果命中,则直接得到 RSC 缓存块,取得表大小,同时更新 RSC 一级索引;如果不命中,则使用 lseek 系统调用获取物理表的实际大小,并更新 RSC 缓存及其一二级索引。RSC 缓存更新的过程可能因缓存已满而触发缓存淘汰。
在 RSC 缓存被更新的过程中,可能会因为缓存总容量已满,进而触发缓存淘汰。RSC 实现了一个 SLRU 缓存淘汰算法,用于在缓存块满时选择一个旧缓存块进行淘汰。每一个 RSC 缓存块上都维护了一个引用计数器,缓存每被访问一次,计数器的值加 1;缓存被淘汰时计数器清 0。当缓存淘汰被触发时,将从 RSC 缓存数组上一次遍历到的位置开始向前遍历,递减每一个 RSC 缓存上的引用计数,直到找到一个引用计数为 0 的缓存块进行淘汰。遍历的长度可以通过 GUC 参数控制,默认为 8:当向前遍历 8 个块后仍未找到一个可以被淘汰的 RSC 缓存块时,将会随机选择一个缓存块进行淘汰。
PolarDB for PostgreSQL 的备节点分为两种,一种是提供只读服务的共享存储 Read Only 节点(RO),一种是提供跨数据中心高可用的 Standby 节点。对于 Standby 节点,由于其数据同步机制采用传统流复制 + WAL 日志回放的方式进行,故 RSC 缓存的使用与更新方式与 Read Write 节点(RW)无异。但对于 RO 节点,其数据是通过 PolarDB for PostgreSQL 实现的 LogIndex 机制实现同步的,故需要额外支持该机制下 RO 节点的 RSC 缓存同步方式。对于每种 WAL 日志类型,都需要根据当前是否存在 New Page 类型的日志,进行缓存更新与淘汰处理,保证 RO 节点下 RSC 缓存的一致性。
该功能默认生效。提供如下 GUC 参数控制:
polar_nblocks_cache_mode:是否开启 RSC 功能,取值为:
scan(默认值):仅在顺序查询(scan)场景下开启
on:在所有场景下全量开启 RSC
off:关闭 RSC
参数从 scan 或 on 设置为 off,可以直接通过 ALTER SYSTEM SET 进行设置,无需重启即可生效;参数从 off 设置为 scan / on,需要修改 postgresql.conf 配置文件并重启生效
:RO 节点是否开启 RSC 功能,默认为 on
。可配置为 on
/ off
。polar_enable_standby_use_smgr_cache
:Standby 节点是否开启 RSC 功能,默认为 on
。可配置为 on
/ off
。通过如下 Shell 脚本创建一个带有 1000 个子分区的分区表:
psql -c "CREATE TABLE hp(a INT) PARTITION BY HASH(a);"
+for ((i=1; i<1000; i++)); do
+ psql -c "CREATE TABLE hp$i PARTITION OF hp FOR VALUES WITH(modulus 1000, remainder $i);"
+done
+
此时分区子表无数据。接下来借助一条在所有子分区上的聚合查询,来验证打开或关闭 RSC 功能时,lseek 系统调用所带来的时间性能影响。
开启 RSC:
ALTER SYSTEM SET polar_nblocks_cache_mode = 'scan';
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_replica_use_smgr_cache = on;
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_standby_use_smgr_cache = on;
+ALTER SYSTEM
+
+SELECT pg_reload_conf();
+ pg_reload_conf
+----------------
+ t
+(1 row)
+
+SHOW polar_nblocks_cache_mode;
+ polar_nblocks_cache_mode
+--------------------------
+ scan
+(1 row)
+
+SHOW polar_enable_replica_use_smgr_cache ;
+ polar_enable_replica_use_smgr_cache
+--------------------------
+ on
+(1 row)
+
+SHOW polar_enable_standby_use_smgr_cache ;
+ polar_enable_standby_use_smgr_cache
+--------------------------
+ on
+(1 row)
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 97.658 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 108.672 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 93.678 ms
+
关闭 RSC:
ALTER SYSTEM SET polar_nblocks_cache_mode = 'off';
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_replica_use_smgr_cache = off;
+ALTER SYSTEM
+
+ALTER SYSTEM SET polar_enable_standby_use_smgr_cache = off;
+ALTER SYSTEM
+
+SELECT pg_reload_conf();
+ pg_reload_conf
+----------------
+ t
+(1 row)
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 164.772 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 147.255 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 177.039 ms
+
+SELECT COUNT(*) FROM hp;
+ count
+-------
+ 0
+(1 row)
+
+Time: 194.724 ms
+
严华
2022/11/25
20 min
原生 PostgreSQL 的连接调度方式是每一个进程对应一个连接 (One-Process-Per-Connection),这种调度方式适合低并发、长连接的业务场景。而在高并发或大量短连接的业务场景中,进程的大量创建、销毁以及上下文切换,会严重影响性能。同时,在业务容器化部署后,每个容器通过连接池向数据库发起连接,业务在高峰期会弹性扩展出很多容器,后端数据库的连接数会瞬间增高,影响数据库稳定性,导致 OOM 频发。
为了解决上述问题,业界在使用 PostgreSQL 时通常会配置连接池组件,比如部署在数据库侧的后置连接池 PgBouncer,部署在应用侧的前置连接池 Druid。但后置连接池无法支持保留用户连接私有信息(如 GUC 参数、Prepared Statement)的相关功能,在面临进程被污染的情况(如加载动态链接库、修改 role
参数)时也无法及时清理。前置连接池不仅无法解决后置连接池的缺陷,还无法根据应用规模扩展而实时调整配置,仍然会面临连接数膨胀的问题。
PolarDB for PostgreSQL 针对上述问题,从数据库内部提供了 Shared Server(后文简称 SS)内置连接池功能,采用共享内存 + Session Context + Dispatcher 转发 + Backend Pool 的架构,实现了用户连接与后端进程的解绑。后端进程具备了 Native、Shared、Dedicated 三种执行模式,并且在运行时可以根据实时负载和进程污染情况进行动态转换。负载调度算法充分吸收 AliSQL 对社区版 MySQL 线程池的缺陷改进,使用 Stall 机制弹性控制 Worker 数量,同时避免用户连接饿死。从根本上解决了高并发或者大量短连接带来的性能、稳定性问题。
在 PostgreSQL 原生的 One-Process-Per-Connection 连接调度策略中,用户发起的连接与后端进程一一绑定:这里不仅是生命周期的绑定,同时还是服务与被服务关系的绑定。
在 Shared Server 内置连接池中,通过提取出会话相关上下文 Session Context,将用户连接和后端进程进行了解绑,并且引入 Dispatcher 来进行代理转发:
<user, database, GUCs>
为 key,划分成不同的后端进程池。每个后端进程池都有自己独占的后端进程组,单个后端进程池内的后端进程数量随着负载增高而增多,随着负载降低而减少。在 Shared Server 中,后端进程有三种执行模式。进程执行模式在运行时会根据实时负载和进程污染情况进行动态转换:
polar_ss_dedicated_dbuser_names
黑名单范围内的数据库或用户DECLARE CURSOR
命令CURSOR WITH HOLD
操作Shared Server 主要应用于高并发或大量短连接的业务场景,因此这里使用 TPC-C 进行测试。
使用 104c 512GB 的物理机单机部署,测试 TPC-C 1000 仓下,并发数从 300 增大到 5000 时,不同配置下的分数对比。如下图所示:
从图中可以看出:
使用 104c 512GB 的物理机单机部署,利用 pgbench
分别测试以下配置中,并发短连接数从 1 到 128 的场景下的性能表现:
从图中可以看出,使用连接池后,对于短连接,PgBouncer 和 Shared Server 的性能均有所提升。但 PgBouncer 最高只能提升 14 倍性能,Shared Server 最高可以提升 42 倍性能。
业界典型的后置连接池 PgBouncer 具有多种模式。其中 session pooling 模式仅对短连接友好,一般不使用;transaction pooling 模式对短连接、长连接都友好,是默认推荐的模式。与 PgBouncer 相比,Shared Server 的差异化功能特点如下:
Feature | PgBouncer Session Pooling | PgBouncer Transaction Pooling | Shared Server |
---|---|---|---|
Startup parameters | 受限 | 受限 | 支持 |
SSL | 支持 | 支持 | 未来将支持 |
LISTEN/NOTIFY | 支持 | 不支持 | 支持 触发兜底 |
LOAD statement | 支持 | 不支持 | 支持 触发兜底 |
Session-level advisory locks | 支持 | 不支持 | 支持 触发兜底 |
SET/RESET GUC | 支持 | 不支持 | 支持 |
Protocol-level prepared plans | 支持 | 未来将支持 | 支持 |
PREPARE / DEALLOCATE | 支持 | 不支持 | 支持 |
Cached Plan Reset | 支持 | 支持 | 支持 |
WITHOUT HOLD CURSOR | 支持 | 支持 | 支持 |
WITH HOLD CURSOR | 支持 | 不支持 | 未来将支持 触发兜底 |
PRESERVE/DELETE ROWS temp | 支持 | 不支持 | 未来将支持 触发兜底 |
ON COMMIT DROP temp | 支持 | 支持 | 支持 |
注:PgBouncer 受限支持的启动参数包括:client_encoding、datestyle、timezone、standard_conforming_strings。
为了适应不同的环境,Shared Server 提供了丰富的参数配置:
Shared Server 的典型配置参数说明如下:
polar_enable_shm_aset
:是否开启全局共享内存,当前默认关闭,重启生效polar_ss_shared_memory_size
:Shared Server 全局共享内存的使用上限,单位 kB,为 0
时表示关闭,默认 1MB。重启生效。polar_ss_dispatcher_count
:Dispatcher 进程的最大个数,默认为 2
,最大为 CPU 核心数,建议配置与 CPU 核心数相同。重启生效。polar_enable_shared_server
:Shared Server 功能是否开启,默认关闭。polar_ss_backend_max_count
:后端进程的最大数量,默认为 -5
,表示为 max_connection
的 1/5;0
/ -1
表示与 max_connection
保持一致。建议设置为 CPU 核心数的 10 倍为佳。polar_ss_backend_idle_timeout
:后端进程的空闲退出时间,默认 3 分钟polar_ss_session_wait_timeout
:后端进程被用满时,用户连接等待被服务的最大时间,默认 60 秒polar_ss_dedicated_dbuser_names
:记录指定数据库/用户使用时进入 Native 模式,默认为空,格式为 d1/_,_/u1,d2/u2
,表示对使用数据库 d1
的任意连接、使用用户 u1
的任意连接、使用数据库 d2
且用户 u2
的任意连接,都会回退到 Native 模式
恒亦
2022/09/27
20 min
TDE(Transparent Data Encryption),即 透明数据加密。TDE 通过在数据库层执行透明的数据加密,阻止可能的攻击者绕过数据库直接从存储层读取敏感信息。经过数据库身份验证的用户可以 透明(不需要更改应用代码或配置)地访问数据,而尝试读取表空间文件中敏感数据的 OS 用户以及尝试读取磁盘或备份信息的不法之徒将不允许访问明文数据。在国内,为了保证互联网信息安全,国家要求相关服务开发商需要满足一些数据安全标准,例如:
在国际上,一些相关行业也有监管数据安全标准,例如:
为了满足保护用户数据安全的需求,我们在 PolarDB 中实现 TDE 功能。
数据加密密钥通过 pg_strong_random 随机生成,保存在内存中,作为实际加密数据的密钥。对于用户来说:
initdb
时增加 --cluster-passphrase-command 'xxx' -e aes-256
参数就会生成支持 TDE 的集群,其中 cluster-passphrase-command
参数为得到加密密钥的密钥的命令,-e
代表数据加密采用的加密算法,目前支持 AES-128、AES-256 和 SM4。
initdb --cluster-passphrase-command 'echo \"abc123\"' -e aes-256
+
在数据库运行过程中,只有超级用户可以执行如下命令得到对应的加密算法:
show polar_data_encryption_cipher;
+
在数据库运行过程中,可以创建插件 polar_tde_utils
来修改 TDE 的加密密钥或者查询 TDE 的一些执行状态,目前支持:
修改加密密钥,其中函数参数为获取加密密钥的方法(该方法保证只能在宿主机所在网络才可以获得),该函数执行后,kmgr
文件内容变更,等下次重启后生效。
select polar_tde_update_kmgr_file('echo \"abc123456\"');
+
得到当前的 kmgr 的 info 信息。
select * from polar_tde_kmgr_info_view();
+
检查 kmgr 文件的完整性。
select polar_tde_check_kmgr_file();
+
执行 pg_filedump
解析加密后的页面,用于一些极端情况下,做页面解析。
pg_filedump -e aes-128 -C 'echo \"abc123\"' -K global/kmgr base/14543/2608
+
采用 2 层密钥结构,即密钥加密密钥和表数据加密密钥。表数据加密密钥是实际对数据库数据进行加密的密钥。密钥加密密钥则是对表数据加密密钥进行进一步加密的密钥。两层密钥的详细介绍如下:
密钥加密密钥(KEK):通过执行 polar_cluster_passphrase_command 参数中的命令并计算 SHA-512 后得到 64 字节的数据,其中前 32 字节为顶层加密密钥 KEK,后 32 字节为 HMACK。KEK 和 HMACK 每次都是通过外部获取,例如 KMS,测试的时候可以直接 echo passphrase
得到。ENCMDEK 和 KEK_HMAC 需要保存在共享存储上,用来保证下次启动时 RW 和 RO 都可以读取该文件,获取真正的加密密钥。其数据结构如下:
typedef struct KmgrFileData
+{
+ /* version for kmgr file */
+ uint32 kmgr_version_no;
+
+ /* Are data pages encrypted? Zero if encryption is disabled */
+ uint32 data_encryption_cipher;
+
+ /*
+ * Wrapped Key information for data encryption.
+ */
+ WrappedEncKeyWithHmac tde_rdek;
+ WrappedEncKeyWithHmac tde_wdek;
+
+ /* CRC of all above ... MUST BE LAST! */
+ pg_crc32c crc;
+} KmgrFileData;
+
该文件当前是在 initdb
的时候产生,这样就可以保证 Standby 通过 pg_basebackup
获取到。
在实例运行状态下,TDE 相关的控制信息保存在进程的内存中,结构如下:
static keydata_t keyEncKey[TDE_KEK_SIZE];
+static keydata_t relEncKey[TDE_MAX_DEK_SIZE];
+static keydata_t walEncKey[TDE_MAX_DEK_SIZE];
+char *polar_cluster_passphrase_command = NULL;
+extern int data_encryption_cipher;
+
数据库初始化时需要生成密钥,过程示意图如下:
polar_cluster_passphrase_command
得到 64 字节的 KEK + HMACK,其中 KEK 长度为 32 字节,HMACK 长度为 32 字节。KmgrFileData
结构信息写入 global/kmgr
文件。当数据库崩溃或重新启动等情况下,需要通过有限的密文信息解密出对应的密钥,其过程如下:
global/kmgr
文件获取 ENCMDEK 和 KEK_HMAC。polar_cluster_passphrase_command
得到 64 字节的 KEK + HMACK。密钥更换的过程可以理解为先用旧的 KEK 还原密钥,然后再用新的 KEK 生成新的 kmgr 文件。其过程如下图:
global/kmgr
文件获取 ENCMDEK 和 KEK_HMAC。polar_cluster_passphrase_command
得到 64 字节的 KEK + HMACKpolar_cluster_passphrase_command
得到 64 字节新的 new_KEK + new_HMACK。KmgrFileData
结构信息写入 global/kmgr
文件。我们期望对所有的用户数据按照 Page 的粒度进行加密,加密方法采用 AES-128/256 加密算法(产品化默认使用 AES-256)。(page LSN,page number)
作为每个数据页加密的 IV,IV 是可以保证相同内容加密出不同结果的初始向量。
每个 Page 的头部数据结构如下:
typedef struct PageHeaderData
+{
+ /* XXX LSN is member of *any* block, not only page-organized ones */
+ PageXLogRecPtr pd_lsn; /* LSN: next byte after last byte of xlog
+ * record for last change to this page */
+ uint16 pd_checksum; /* checksum */
+ uint16 pd_flags; /* flag bits, see below */
+ LocationIndex pd_lower; /* offset to start of free space */
+ LocationIndex pd_upper; /* offset to end of free space */
+ LocationIndex pd_special; /* offset to start of special space */
+ uint16 pd_pagesize_version;
+ TransactionId pd_prune_xid; /* oldest prunable XID, or zero if none */
+ ItemIdData pd_linp[FLEXIBLE_ARRAY_MEMBER]; /* line pointer array */
+} PageHeaderData;
+
在上述结构中:
pd_lsn
不能加密:因为解密时需要使用 IV 来解密。pd_flags
增加是否加密的标志位 0x8000
,并且不加密:这样可以兼容明文 page 的读取,为增量实例打开 TDE 提供条件。pd_checksum
不加密:这样可以在密文条件下判断 Page 的校验和。当前加密含有用户数据的文件,比如数据目录中以下子目录中的文件:
base/
global/
pg_tblspc/
pg_replslot/
pg_stat/
pg_stat_tmp/
当前对于按照数据 Page 来进行组织的数据,将按照 Page 的粒度进行加密。Page 落盘之前必定需要计算校验和,即使校验和相关参数关闭,也会调用校验和相关的函数 PageSetChecksumCopy
、PageSetChecksumInplace
。所以,只需要在计算校验和之前加密 Page,即可保证用户数据在存储上是被加密的。
存储上的 Page 读入内存之前必定经过 checksum 校验,即使相关参数关闭,也会调用校验函数 PageIsVerified
。所以,只需要在校验和计算之后解密,即可保证内存中的数据已被解密。
慎追、棠羽
2023/01/11
30 min
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其备份恢复和 PostgreSQL 存在部分差异。本文将指导您如何对 PolarDB for PostgreSQL 进行备份,并通过备份来搭建 Replica 节点或 Standby 节点。
PostgreSQL 的备份流程可以总结为以下几步:
backup_label
文件,其中包含基础备份的起始点位置CHECKPOINT
backup_label
文件
备份 PostgreSQL 数据库最简便的方法是使用 pg_basebackup
工具。
PolarDB for PostgreSQL 采用基于共享存储的存算分离架构,其数据目录分为以下两类:各计算节点私有的本地数据目录,以及所有计算节点共享的共享数据目录。
由于本地数据目录中的目录和文件不涉及数据库的核心数据,因此在备份数据库时,备份本地数据目录是可选的。可以仅备份共享存储上的数据目录,然后使用 initdb
重新生成新的本地存储目录。但是计算节点的本地配置文件需要被手动备份,如 postgresql.conf
、pg_hba.conf
等文件。
通过以下 SQL 命令可以查看节点的本地数据目录:
postgres=# SHOW data_directory;
+ data_directory
+------------------------
+ /home/postgres/primary
+(1 row)
+
本地数据目录类似于 PostgreSQL 的数据目录,大多数目录和文件都是通过 initdb
生成的。随着数据库服务的运行,本地数据目录中会产生更多的本地文件,如临时文件、缓存文件、配置文件、日志文件等。其结构如下:
$ tree ./ -L 1
+./
+├── base
+├── current_logfiles
+├── global
+├── pg_commit_ts
+├── pg_csnlog
+├── pg_dynshmem
+├── pg_hba.conf
+├── pg_ident.conf
+├── pg_log
+├── pg_logical
+├── pg_logindex
+├── pg_multixact
+├── pg_notify
+├── pg_replslot
+├── pg_serial
+├── pg_snapshots
+├── pg_stat
+├── pg_stat_tmp
+├── pg_subtrans
+├── pg_tblspc
+├── PG_VERSION
+├── pg_xact
+├── polar_cache_trash
+├── polar_dma.conf
+├── polar_fullpage
+├── polar_node_static.conf
+├── polar_rel_size_cache
+├── polar_shmem
+├── polar_shmem_stat_file
+├── postgresql.auto.conf
+├── postgresql.conf
+├── postmaster.opts
+└── postmaster.pid
+
+21 directories, 12 files
+
通过以下 SQL 命令可以查看所有计算节点在共享存储上的共享数据目录:
postgres=# SHOW polar_datadir;
+ polar_datadir
+-----------------------
+ /nvme1n1/shared_data/
+(1 row)
+
共享数据目录中存放 PolarDB for PostgreSQL 的核心数据文件,如表文件、索引文件、WAL 日志、DMA、LogIndex、Flashback Log 等。这些文件被所有节点共享,因此必须被备份。其结构如下:
$ sudo pfs -C disk ls /nvme1n1/shared_data/
+ Dir 1 512 Wed Jan 11 09:34:01 2023 base
+ Dir 1 7424 Wed Jan 11 09:34:02 2023 global
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_tblspc
+ Dir 1 512 Wed Jan 11 09:35:05 2023 pg_wal
+ Dir 1 384 Wed Jan 11 09:35:01 2023 pg_logindex
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_twophase
+ Dir 1 128 Wed Jan 11 09:34:02 2023 pg_xact
+ Dir 1 0 Wed Jan 11 09:34:02 2023 pg_commit_ts
+ Dir 1 256 Wed Jan 11 09:34:03 2023 pg_multixact
+ Dir 1 0 Wed Jan 11 09:34:03 2023 pg_csnlog
+ Dir 1 256 Wed Jan 11 09:34:03 2023 polar_dma
+ Dir 1 512 Wed Jan 11 09:35:09 2023 polar_fullpage
+ File 1 32 Wed Jan 11 09:35:00 2023 RWID
+ Dir 1 256 Wed Jan 11 10:25:42 2023 pg_replslot
+ File 1 224 Wed Jan 11 10:19:37 2023 polar_non_exclusive_backup_label
+total 16384 (unit: 512Bytes)
+
PolarDB for PostgreSQL 的备份工具 polar_basebackup
,由 PostgreSQL 的 pg_basebackup
改造而来,完全兼容 pg_basebackup
,因此同样可以用于对 PostgreSQL 做备份恢复。polar_basebackup
的可执行文件位于 PolarDB for PostgreSQL 安装目录下的 bin/
目录中。
该工具的主要功能是将一个运行中的 PolarDB for PostgreSQL 数据库的数据目录(包括本地数据目录和共享数据目录)备份到目标目录中。
polar_basebackup takes a base backup of a running PostgreSQL server.
+
+Usage:
+ polar_basebackup [OPTION]...
+
+Options controlling the output:
+ -D, --pgdata=DIRECTORY receive base backup into directory
+ -F, --format=p|t output format (plain (default), tar)
+ -r, --max-rate=RATE maximum transfer rate to transfer data directory
+ (in kB/s, or use suffix "k" or "M")
+ -R, --write-recovery-conf
+ write recovery.conf for replication
+ -T, --tablespace-mapping=OLDDIR=NEWDIR
+ relocate tablespace in OLDDIR to NEWDIR
+ --waldir=WALDIR location for the write-ahead log directory
+ -X, --wal-method=none|fetch|stream
+ include required WAL files with specified method
+ -z, --gzip compress tar output
+ -Z, --compress=0-9 compress tar output with given compression level
+
+General options:
+ -c, --checkpoint=fast|spread
+ set fast or spread checkpointing
+ -C, --create-slot create replication slot
+ -l, --label=LABEL set backup label
+ -n, --no-clean do not clean up after errors
+ -N, --no-sync do not wait for changes to be written safely to disk
+ -P, --progress show progress information
+ -S, --slot=SLOTNAME replication slot to use
+ -v, --verbose output verbose messages
+ -V, --version output version information, then exit
+ --no-slot prevent creation of temporary replication slot
+ --no-verify-checksums
+ do not verify checksums
+ -?, --help show this help, then exit
+
+Connection options:
+ -d, --dbname=CONNSTR connection string
+ -h, --host=HOSTNAME database server host or socket directory
+ -p, --port=PORT database server port number
+ -s, --status-interval=INTERVAL
+ time between status packets sent to server (in seconds)
+ -U, --username=NAME connect as specified database user
+ -w, --no-password never prompt for password
+ -W, --password force password prompt (should happen automatically)
+ --polardata=datadir receive polar data backup into directory
+ --polar_disk_home=disk_home polar_disk_home for polar data backup
+ --polar_host_id=host_id polar_host_id for polar data backup
+ --polar_storage_cluster_name=cluster_name polar_storage_cluster_name for polar data backup
+
polar_basebackup
的参数及用法几乎和 pg_basebackup
一致,新增了以下与共享存储相关的参数:
--polar_disk_home
/ --polar_host_id
/ --polar_storage_cluster_name
:这三个参数指定了用于存放备份共享数据的共享存储节点--polardata
:该参数指定了备份共享存储节点上存放共享数据的路径;如不指定,则默认将共享数据备份到本地数据备份目录的 polar_shared_data/
路径下基础备份可用于搭建一个新的 Replica(RO)节点。如前文所述,一个正在运行中的 PolarDB for PostgreSQL 实例的数据文件分布在各计算节点的本地存储和存储节点的共享存储中。下面将说明如何使用 polar_basebackup
将实例的数据文件备份到一个本地磁盘上,并从这个备份上启动一个 Replica 节点。
首先,在将要部署 Replica 节点的机器上启动 PFSD 守护进程,挂载到正在运行中的共享存储的 PFS 文件系统上。后续启动的 Replica 节点将使用这个守护进程来访问共享存储。
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
运行如下命令,将实例 Primary 节点的本地数据和共享数据备份到用于部署 Replica 节点的本地存储路径 /home/postgres/replica1
下:
polar_basebackup \
+ --host=[Primary节点所在IP] \
+ --port=[Primary节点所在端口号] \
+ -D /home/postgres/replica1 \
+ -X stream --progress --write-recovery-conf -v
+
将看到如下输出:
polar_basebackup: initiating base backup, waiting for checkpoint to complete
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16ADD60 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_359"
+851371/851371 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16ADE30
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+
备份完成后,可以以这个备份目录作为本地数据目录,启动一个新的 Replica 节点。由于本地数据目录中不需要共享存储上已有的共享数据文件,所以删除掉本地数据目录中的 polar_shared_data/
目录:
rm -rf ~/replica1/polar_shared_data
+
重新编辑 Replica 节点的配置文件 ~/replica1/postgresql.conf
:
-polar_hostid=1
++polar_hostid=2
+-synchronous_standby_names='replica1'
+
重新编辑 Replica 节点的复制配置文件 ~/replica1/recovery.conf
:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_slot_name='replica1'
+primary_conninfo='host=[Primary节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+
启动 Replica 节点:
pg_ctl -D $HOME/replica1 start
+
在 Primary 节点上执行建表并插入数据,在 Replica 节点上可以查到 Primary 节点插入的数据:
$ psql -q \
+ -h [Primary节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \
+ -h [Replica节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
基础备份也可以用于搭建一个新的 Standby 节点。如下图所示,Standby 节点与 Primary / Replica 节点各自使用独立的共享存储,与 Primary 节点使用物理复制保持同步。Standby 节点可用于作为主共享存储的灾备。
假设此时用于部署 Standby 计算节点的机器已经准备好用于后备的共享存储 nvme2n1
:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:1 0 40G 0 disk
+└─nvme0n1p1 259:2 0 40G 0 part /etc/hosts
+nvme2n1 259:3 0 70G 0 disk
+nvme1n1 259:0 0 60G 0 disk
+
将这个共享存储格式化为 PFS 格式,并启动 PFSD 守护进程挂载到 PFS 文件系统:
sudo pfs -C disk mkfs nvme2n1
+sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme2n1 -w 2
+
在用于部署 Standby 节点的机器上执行备份,以 ~/standby
作为本地数据目录,以 /nvme2n1/shared_data
作为共享存储目录:
polar_basebackup \
+ --host=[Primary节点所在IP] \
+ --port=[Primary节点所在端口号] \
+ -D /home/postgres/standby \
+ --polardata=/nvme2n1/shared_data/ \
+ --polar_storage_cluster_name=disk \
+ --polar_disk_name=nvme2n1 \
+ --polar_host_id=3 \
+ -X stream --progress --write-recovery-conf -v
+
将会看到如下输出。其中,除了 polar_basebackup
的输出以外,还有 PFS 的输出日志:
[PFSD_SDK INF Jan 11 10:11:27.247112][99]pfs_mount_prepare 103: begin prepare mount cluster(disk), PBD(nvme2n1), hostid(3),flags(0x13)
+[PFSD_SDK INF Jan 11 10:11:27.247161][99]pfs_mount_prepare 165: pfs_mount_prepare success for nvme2n1 hostid 3
+[PFSD_SDK INF Jan 11 10:11:27.293900][99]chnl_connection_poll_shm 1238: ack data update s_mount_epoch 1
+[PFSD_SDK INF Jan 11 10:11:27.293912][99]chnl_connection_poll_shm 1266: connect and got ack data from svr, err = 0, mntid 0
+[PFSD_SDK INF Jan 11 10:11:27.293979][99]pfsd_sdk_init 191: pfsd_chnl_connect success
+[PFSD_SDK INF Jan 11 10:11:27.293987][99]pfs_mount_post 208: pfs_mount_post err : 0
+[PFSD_SDK ERR Jan 11 10:11:27.297257][99]pfsd_opendir 1437: opendir /nvme2n1/shared_data/ error: No such file or directory
+[PFSD_SDK INF Jan 11 10:11:27.297396][99]pfsd_mkdir 1320: mkdir /nvme2n1/shared_data
+polar_basebackup: initiating base backup, waiting for checkpoint to complete
+WARNING: a labelfile "/nvme1n1/shared_data//polar_non_exclusive_backup_label" is already on disk
+HINT: POLAR: we overwrite it
+polar_basebackup: checkpoint completed
+polar_basebackup: write-ahead log start point: 0/16C91F8 on timeline 1
+polar_basebackup: starting background WAL receiver
+polar_basebackup: created temporary replication slot "pg_basebackup_373"
+...
+[PFSD_SDK INF Jan 11 10:11:32.992005][99]pfsd_open 539: open /nvme2n1/shared_data/polar_non_exclusive_backup_label with inode 6325, fd 0
+[PFSD_SDK INF Jan 11 10:11:32.993074][99]pfsd_open 539: open /nvme2n1/shared_data/global/pg_control with inode 8373, fd 0
+851396/851396 kB (100%), 2/2 tablespaces
+polar_basebackup: write-ahead log end point: 0/16C9300
+polar_basebackup: waiting for background process to finish streaming ...
+polar_basebackup: base backup completed
+[PFSD_SDK INF Jan 11 10:11:52.378220][99]pfsd_umount_force 247: pbdname nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.378229][99]pfs_umount_prepare 269: pfs_umount_prepare. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404010][99]chnl_connection_release_shm 1164: client umount return : deleted /var/run/pfsd//nvme2n1/99.pid
+[PFSD_SDK INF Jan 11 10:11:52.404171][99]pfs_umount_post 281: pfs_umount_post. pbdname:nvme2n1
+[PFSD_SDK INF Jan 11 10:11:52.404174][99]pfsd_umount_force 261: umount success for nvme2n1
+
上述命令会在当前机器的本地存储上备份 Primary 节点的本地数据目录,在参数指定的共享存储目录上备份共享数据目录。
重新编辑 Standby 节点的配置文件 ~/standby/postgresql.conf
:
-polar_hostid=1
++polar_hostid=3
+-polar_disk_name='nvme1n1'
+-polar_datadir='/nvme1n1/shared_data/'
++polar_disk_name='nvme2n1'
++polar_datadir='/nvme2n1/shared_data/'
+-synchronous_standby_names='replica1'
+
在 Standby 节点的复制配置文件 ~/standby/recovery.conf
中添加:
+recovery_target_timeline = 'latest'
++primary_slot_name = 'standby1'
+
在 Primary 节点上创建用于与 Standby 进行物理复制的复制槽:
$ psql \
+ --host=[Primary节点所在IP] --port=5432 \
+ -d postgres \
+ -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"
+ slot_name | lsn
+-----------+-----
+ standby1 |
+(1 row)
+
启动 Standby 节点:
pg_ctl -D $HOME/standby start
+
在 Primary 节点上创建表并插入数据,在 Standby 节点上可以查询到数据:
$ psql -q \
+ -h [Primary节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t (t1 INT PRIMARY KEY, t2 INT); INSERT INTO t VALUES (1, 1),(2, 3),(3, 3);"
+
+$ psql -q \
+ -h [Standby节点所在IP] \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ t1 | t2
+----+----
+ 1 | 1
+ 2 | 3
+ 3 | 3
+(3 rows)
+
棠羽
2023/03/06
20 min
在 PolarDB for PostgreSQL 的使用过程中,可能会出现 CPU 使用率异常升高甚至达到满载的情况。本文将介绍造成这种情况的常见原因和排查方法,以及相应的解决方案。
当 CPU 使用率上升时,最有可能的情况是业务量的上涨导致数据库使用的计算资源增多。所以首先需要排查目前数据库的活跃连接数是否比平时高很多。如果数据库配备了监控系统,那么活跃连接数的变化情况可以通过图表的形式观察到;否则可以直接连接到数据库,执行如下 SQL 来获取当前活跃连接数:
SELECT COUNT(*) FROM pg_stat_activity WHERE state NOT LIKE 'idle';
+
pg_stat_activity
是 PostgreSQL 的内置系统视图,该视图返回的每一行都是一个正在运行中的 PostgreSQL 进程,state
列表示进程当前的状态。该列可能的取值为:
active
:进程正在执行查询idle
:进程空闲,正在等待新的客户端命令idle in transaction
:进程处于事务中,但目前暂未执行查询idle in transaction (aborted)
:进程处于事务中,且有一条语句发生过错误fastpath function call
:进程正在执行一个 fast-path 函数disabled
:进程的状态采集功能被关闭上述 SQL 能够查询到所有非空闲状态的进程数,即可能占用 CPU 的活跃连接数。如果活跃连接数较平时更多,则 CPU 使用率的上升是符合预期的。
如果 CPU 使用率上升,而活跃连接数的变化范围处在正常范围内,那么有可能出现了较多性能较差的慢查询。这些慢查询可能在很长一段时间里占用了较多的 CPU,导致 CPU 使用率上升。PostgreSQL 提供了慢查询日志的功能,执行时间高于 log_min_duration_statement
的 SQL 将会被记录到慢查询日志中。然而当 CPU 占用率接近满载时,将会导致整个系统的停滞,所有 SQL 的执行可能都会慢下来,所以慢查询日志中记录的信息可能非常多,并不容易排查。
pg_stat_statements
插件能够记录数据库服务器上所有 SQL 语句在优化和执行阶段的统计信息。由于该插件需要使用共享内存,因此插件名需要被配置在 shared_preload_libraries
参数中。
如果没有在当前数据库中创建过 pg_stat_statements
插件的话,首先需要创建这个插件。该过程将会注册好插件提供的函数及视图:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
+
该插件和数据库系统本身都会不断累积统计信息。为了排查 CPU 异常升高后这段时间内的问题,需要把数据库和插件中留存的统计信息做一次清空,然后开始收集从当前时刻开始的统计信息:
-- 清空当前数据库的统计信息
+SELECT pg_stat_reset();
+-- 清空 pg_stat_statements 插件截止目前收集的统计信息
+SELECT pg_stat_statements_reset();
+
接下来需要等待一段时间(1-2 分钟),使数据库和插件充分采集这段时间内的统计信息。
统计信息收集完毕后,参考使用如下 SQL 查询执行时间最长的 5 条 SQL:
-- < PostgreSQL 13
+SELECT * FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;
+-- >= PostgreSQL 13
+SELECT * FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;
+
当一张表缺少索引,而对该表的查询基本上都是点查时,数据库将不得不使用全表扫描,并在内存中进行过滤条件的判断,处理掉大量的无效记录,导致 CPU 使用率大幅提升。利用 pg_stat_statements
插件的统计信息,参考如下 SQL,可以列出截止目前读取 Buffer 数量最多的 5 条 SQL:
SELECT * FROM pg_stat_statements
+ORDER BY shared_blks_hit + shared_blks_read DESC
+LIMIT 5;
+
借助 PostgreSQL 内置系统视图 pg_stat_user_tables
中的统计信息,也可以统计出使用全表扫描的次数最多的表。参考如下 SQL,可以获取具备一定规模数据量(元组约为 10 万个)且使用全表扫描获取到的元组数量最多的 5 张表:
SELECT * FROM pg_stat_user_tables
+WHERE n_live_tup > 100000 AND seq_scan > 0
+ORDER BY seq_tup_read DESC
+LIMIT 5;
+
通过系统内置视图 pg_stat_activity
,可以查询出长时间执行不结束的 SQL,这些 SQL 有极大可能造成 CPU 使用率过高。参考以下 SQL 获取查询执行时间最长,且目前还未退出的 5 条 SQL:
SELECT
+ *,
+ extract(epoch FROM (NOW() - xact_start)) AS xact_stay,
+ extract(epoch FROM (NOW() - query_start)) AS query_stay
+FROM pg_stat_activity
+WHERE state NOT LIKE 'idle%'
+ORDER BY query_stay DESC
+LIMIT 5;
+
结合前一步中排查到的 使用全表扫描最多的表,参考如下 SQL 获取 在该表上 执行时间超过一定阈值(比如 10s)的慢查询:
SELECT * FROM pg_stat_activity
+WHERE
+ state NOT LIKE 'idle%' AND
+ query ILIKE '%表名%' AND
+ NOW() - query_start > interval '10s';
+
对于异常占用 CPU 较高的 SQL,如果仅有个别非预期 SQL,则可以通过给后端进程发送信号的方式,先让 SQL 执行中断,使 CPU 使用率恢复正常。参考如下 SQL,以慢查询执行所使用的进程 pid(pg_stat_activity
视图的 pid
列)作为参数,中止相应的进程的执行:
SELECT pg_cancel_backend(pid);
+SELECT pg_terminate_backend(pid);
+
如果执行较慢的 SQL 是业务上必要的 SQL,那么需要对它进行调优。
首先可以对 SQL 涉及到的表进行采样,更新其统计信息,使优化器能够产生更加准确的执行计划。采样需要占用一定的 CPU,最好在业务低谷期运行:
ANALYZE 表名;
+
对于全表扫描较多的表,可以在常用的过滤列上创建索引,以尽量使用索引扫描,减少全表扫描在内存中过滤不符合条件的记录所造成的 CPU 浪费。
棠羽
2022/10/12
15 min
在使用数据库时,随着数据量的逐渐增大,不可避免需要对数据库所使用的存储空间进行扩容。由于 PolarDB for PostgreSQL 基于共享存储与分布式文件系统 PFS 的架构设计,与安装部署时类似,在扩容时,需要在以下三个层面分别进行操作:块存储层、PFS 文件系统层、数据库实例层。
本文将指导您在以上三个层面上分别完成扩容操作,以实现不停止数据库实例的动态扩容。
首先需要进行的是块存储层面上的扩容。不管使用哪种类型的共享存储,存储层面扩容最终需要达成的目的是:在能够访问共享存储的主机上运行 lsblk
命令,显示存储块设备的物理空间变大。由于不同类型的共享存储有不同的扩容方式,本文以 阿里云 ECS + ESSD 云盘共享存储 为例演示如何进行存储层面的扩容。
另外,为保证后续扩容步骤的成功,请以 10GB 为单位进行扩容。
本示例中,在扩容之前,已有一个 20GB 的 ESSD 云盘多重挂载在两台 ECS 上。在这两台 ECS 上运行 lsblk
,可以看到 ESSD 云盘共享存储对应的块设备 nvme1n1
目前的物理空间为 20GB。
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 20G 0 disk
+
接下来对这块 ESSD 云盘进行扩容。在阿里云 ESSD 云盘的管理页面上,点击 云盘扩容:
进入到云盘扩容界面以后,可以看到该云盘已被两台 ECS 实例多重挂载。填写扩容后的容量,然后点击确认扩容,把 20GB 的云盘扩容为 40GB:
扩容成功后,将会看到如下提示:
此时,两台 ECS 上运行 lsblk
,可以看到 ESSD 对应块设备 nvme1n1
的物理空间已经变为 40GB:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 40G 0 disk
+
至此,块存储层面的扩容就完成了。
在物理块设备完成扩容以后,接下来需要使用 PFS 文件系统提供的工具,对块设备上扩大后的物理空间进行格式化,以完成文件系统层面的扩容。
在能够访问共享存储的 任意一台主机上 运行 PFS 的 growfs
命令,其中:
-o
表示共享存储扩容前的空间(以 10GB 为单位)-n
表示共享存储扩容后的空间(以 10GB 为单位)本例将共享存储从 20GB 扩容至 40GB,所以参数分别填写 2
和 4
:
$ sudo pfs -C disk growfs -o 2 -n 4 nvme1n1
+
+...
+
+Init chunk 2
+ metaset 2/1: sectbda 0x500001000, npage 80, objsize 128, nobj 2560, oid range [ 2000, 2a00)
+ metaset 2/2: sectbda 0x500051000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+ metaset 2/3: sectbda 0x500091000, npage 64, objsize 128, nobj 2048, oid range [ 1000, 1800)
+
+Init chunk 3
+ metaset 3/1: sectbda 0x780001000, npage 80, objsize 128, nobj 2560, oid range [ 3000, 3a00)
+ metaset 3/2: sectbda 0x780051000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+ metaset 3/3: sectbda 0x780091000, npage 64, objsize 128, nobj 2048, oid range [ 1800, 2000)
+
+pfs growfs succeeds!
+
如果看到上述输出,说明文件系统层面的扩容已经完成。
最后,在数据库实例层,扩容需要做的工作是执行 SQL 函数来通知每个实例上已经挂载到共享存储的 PFSD(PFS Daemon)守护进程,告知共享存储上的新空间已经可以被使用了。需要注意的是,数据库实例集群中的 所有 PFSD 都需要被通知到,并且需要 先通知所有 RO 节点上的 PFSD,最后通知 RW 节点上的 PFSD。这意味着我们需要在 每一个 PolarDB for PostgreSQL 节点上执行一次通知 PFSD 的 SQL 函数,并且 RO 节点在先,RW 节点在后。
数据库实例层通知 PFSD 的扩容函数实现在 PolarDB for PostgreSQL 的 polar_vfs
插件中,所以首先需要在 RW 节点 上加载 polar_vfs
插件。在加载插件的过程中,会在 RW 节点和所有 RO 节点上注册好 polar_vfs_disk_expansion
这个 SQL 函数。
CREATE EXTENSION IF NOT EXISTS polar_vfs;
+
接下来,依次 在所有的 RO 节点上,再到 RW 节点上 分别 执行这个 SQL 函数。其中函数的参数名为块设备名:
SELECT polar_vfs_disk_expansion('nvme1n1');
+
执行完毕后,数据库实例层面的扩容也就完成了。此时,新的存储空间已经能够被数据库使用了。
棠羽
2022/12/25
15 min
PolarDB for PostgreSQL 是一款存储与计算分离的云原生数据库,所有计算节点共享一份存储,并且对存储的访问具有 一写多读 的限制:所有计算节点可以对存储进行读取,但只有一个计算节点可以对存储进行写入。这种限制会带来一个问题:当读写节点因为宕机或网络故障而不可用时,集群中将没有能够写入存储的计算节点,应用业务中的增、删、改,以及 DDL 都将无法运行。
本文将指导您在 PolarDB for PostgreSQL 计算集群中的读写节点停止服务时,将任意一个只读节点在线提升为读写节点,从而使集群恢复对于共享存储的写入能力。
为方便起见,本示例使用基于本地磁盘的实例来进行演示。拉取如下镜像并启动容器,可以得到一个基于本地磁盘的 HTAP 实例:
docker pull polardb/polardb_pg_local_instance
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg_htap \
+ --shm-size=512m \
+ polardb/polardb_pg_local_instance \
+ bash
+
容器内的 5432
至 5434
端口分别运行着一个读写节点和两个只读节点。两个只读节点与读写节点共享同一份数据,并通过物理复制保持与读写节点的内存状态同步。
首先,连接到读写节点,创建一张表并插入一些数据:
psql -p5432
+
postgres=# CREATE TABLE t (id int);
+CREATE TABLE
+postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
然后连接到只读节点,并同样试图对表插入数据,将会发现无法进行插入操作:
psql -p5433
+
postgres=# INSERT INTO t SELECT generate_series(1,10);
+ERROR: cannot execute INSERT in a read-only transaction
+
此时,关闭读写节点,模拟出读写节点不可用的行为:
$ pg_ctl -D ~/tmp_master_dir_polardb_pg_1100_bld/ stop
+waiting for server to shut down.... done
+server stopped
+
此时,集群中没有任何节点可以写入存储了。这时,我们需要将一个只读节点提升为读写节点,恢复对存储的写入。
只有当读写节点停止写入后,才可以将只读节点提升为读写节点,否则将会出现集群内两个节点同时写入的情况。当数据库检测到出现多节点写入时,将会导致运行异常。
将运行在 5433
端口的只读节点提升为读写节点:
$ pg_ctl -D ~/tmp_replica_dir_polardb_pg_1100_bld1/ promote
+waiting for server to promote.... done
+server promoted
+
连接到已经完成 promote 的新读写节点上,再次尝试之前的 INSERT
操作:
postgres=# INSERT INTO t SELECT generate_series(1,10);
+INSERT 0 10
+
从上述结果中可以看到,新的读写节点能够成功对存储进行写入。这说明原先的只读节点已经被成功提升为读写节点了。
棠羽
2022/12/19
30 min
PolarDB for PostgreSQL 是一款存储与计算分离的数据库,所有计算节点共享存储,并可以按需要弹性增加或删减计算节点而无需做任何数据迁移。所以本教程将协助您在共享存储集群上添加或删除计算节点。
首先,在已经搭建完毕的共享存储集群上,初始化并启动第一个计算节点,即读写节点,该节点可以对共享存储进行读写。我们在下面的镜像中提供了已经编译完毕的 PolarDB for PostgreSQL 内核和周边工具的可执行文件:
$ docker pull polardb/polardb_pg_binary
+$ docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
+$ ls ~/tmp_basedir_polardb_pg_1100_bld/bin/
+clusterdb dropuser pg_basebackup pg_dump pg_resetwal pg_test_timing polar-initdb.sh psql
+createdb ecpg pgbench pg_dumpall pg_restore pg_upgrade polar-replica-initdb.sh reindexdb
+createuser initdb pg_config pg_isready pg_rewind pg_verify_checksums polar_tools vacuumdb
+dbatools.sql oid2name pg_controldata pg_receivewal pg_standby pg_waldump postgres vacuumlo
+dropdb pg_archivecleanup pg_ctl pg_recvlogical pg_test_fsync polar_basebackup postmaster
+
使用 lsblk
命令确认存储集群已经能够被当前机器访问到。比如,如下示例中的 nvme1n1
是将要使用的共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
此时,共享存储上没有任何内容。使用容器内的 PFS 工具将共享存储格式化为 PFS 文件系统的格式:
sudo pfs -C disk mkfs nvme1n1
+
格式化完成后,在当前容器内启动 PFS 守护进程,挂载到文件系统上。该守护进程后续将会被计算节点用于访问共享存储:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
使用 initdb
在节点本地存储的 ~/primary
路径上创建本地数据目录。本地数据目录中将会存放节点的配置、审计日志等节点私有的信息:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D $HOME/primary
+
使用 PFS 工具,在共享存储上创建一个共享数据目录;使用 polar-initdb.sh
脚本把将会被所有节点共享的数据文件拷贝到共享存储的数据目录中。将会被所有节点共享的文件包含所有的表文件、WAL 日志文件等:
sudo pfs -C disk mkdir /nvme1n1/shared_data
+
+sudo $HOME/tmp_basedir_polardb_pg_1100_bld/bin/polar-initdb.sh \
+ $HOME/primary/ /nvme1n1/shared_data/
+
对读写节点的配置文件 ~/primary/postgresql.conf
进行修改,使数据库以共享模式启动,并能够找到共享存储上的数据目录:
port=5432
+polar_hostid=1
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+synchronous_standby_names='replica1'
+
编辑读写节点的客户端认证文件 ~/primary/pg_hba.conf
,允许来自所有地址的客户端以 postgres
用户进行物理复制:
host replication postgres 0.0.0.0/0 trust
+
使用以下命令启动读写节点,并检查节点能否正常运行:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/primary start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
接下来,在已经有一个读写节点的计算集群中扩容一个新的计算节点。由于 PolarDB for PostgreSQL 是一写多读的架构,所以后续扩容的节点只可以对共享存储进行读取,但无法对共享存储进行写入。只读节点通过与读写节点进行物理复制来保持内存状态的同步。
类似地,在用于部署新计算节点的机器上,拉取镜像并启动带有可执行文件的容器:
docker pull polardb/polardb_pg_binary
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg \
+ --shm-size=512m \
+ polardb/polardb_pg_binary \
+ bash
+
确保部署只读节点的机器也可以访问到共享存储的块设备:
$ lsblk
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme0n1 259:0 0 40G 0 disk
+└─nvme0n1p1 259:1 0 40G 0 part /etc/hosts
+nvme1n1 259:2 0 100G 0 disk
+
由于此时共享存储已经被读写节点格式化为 PFS 格式了,因此这里无需再次进行格式化。只需要启动 PFS 守护进程完成挂载即可:
sudo /usr/local/polarstore/pfsd/bin/start_pfsd.sh -p nvme1n1 -w 2
+
在只读节点本地磁盘的 ~/replica1
路径上创建一个空目录,然后通过 polar-replica-initdb.sh
脚本使用共享存储上的数据目录来初始化只读节点的本地目录。初始化后的本地目录中没有默认配置文件,所以还需要使用 initdb
创建一个临时的本地目录模板,然后将所有的默认配置文件拷贝到只读节点的本地目录下:
mkdir -m 0700 $HOME/replica1
+sudo ~/tmp_basedir_polardb_pg_1100_bld/bin/polar-replica-initdb.sh \
+ /nvme1n1/shared_data/ $HOME/replica1/
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/initdb -D /tmp/replica1
+cp /tmp/replica1/*.conf $HOME/replica1/
+
编辑只读节点的配置文件 ~/replica1/postgresql.conf
,配置好只读节点的集群标识和监听端口,以及与读写节点相同的共享存储目录:
port=5432
+polar_hostid=2
+
+polar_enable_shared_storage_mode=on
+polar_disk_name='nvme1n1'
+polar_datadir='/nvme1n1/shared_data/'
+polar_vfs.localfs_mode=off
+shared_preload_libraries='$libdir/polar_vfs,$libdir/polar_worker'
+polar_storage_cluster_name='disk'
+
+logging_collector=on
+log_line_prefix='%p\t%r\t%u\t%m\t'
+log_directory='pg_log'
+listen_addresses='*'
+max_connections=1000
+
编辑只读节点的复制配置文件 ~/replica1/recovery.conf
,配置好当前节点的角色(只读),以及从读写节点进行物理复制的连接串和复制槽:
polar_replica='on'
+recovery_target_timeline='latest'
+primary_conninfo='host=[读写节点所在IP] port=5432 user=postgres dbname=postgres application_name=replica1'
+primary_slot_name='replica1'
+
由于读写节点上暂时还没有名为 replica1
的复制槽,所以需要连接到读写节点上,创建这个复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_create_physical_replication_slot('replica1');"
+ pg_create_physical_replication_slot
+-------------------------------------
+ (replica1,)
+(1 row)
+
完成上述步骤后,启动只读节点并验证:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 start
+
+$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c 'SELECT version();'
+ version
+--------------------------------
+ PostgreSQL 11.9 (POLARDB 11.9)
+(1 row)
+
连接到读写节点上,创建一个表并插入数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "CREATE TABLE t(id INT); INSERT INTO t SELECT generate_series(1,10);"
+
在只读节点上可以立刻查询到从读写节点上插入的数据:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM t;"
+ id
+----
+ 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+ 10
+(10 rows)
+
从读写节点上可以看到用于与只读节点进行物理复制的复制槽已经处于活跃状态:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql -q \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT * FROM pg_replication_slots;"
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
+-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
+ replica1 | | physical | | | f | t | 45 | | | 0/4079E8E8 |
+(1 rows)
+
依次类推,使用类似的方法还可以横向扩容更多的只读节点。
集群缩容的步骤较为简单:将只读节点停机即可。
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/pg_ctl -D $HOME/replica1 stop
+
在只读节点停机后,读写节点上的复制槽将变为非活跃状态。非活跃的复制槽将会阻止 WAL 日志的回收,所以需要及时清理。
在读写节点上执行如下命令,移除名为 replica1
的复制槽:
$HOME/tmp_basedir_polardb_pg_1100_bld/bin/psql \
+ -p 5432 \
+ -d postgres \
+ -c "SELECT pg_drop_replication_slot('replica1');"
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
棠羽
2023/04/11
15 min
本文将引导您对 PolarDB for PostgreSQL 进行 TPC-C 测试。
TPC(Transaction Processing Performance Council)组织制定了一系列事务处理和数据库基准测试规范,其中 TPC-C 是针对 OLTP 的基准测试模型。TPC-C 测试模型给基准测试提供了一种统一的测试标准,可以大体观察出数据库服务稳定性、性能以及系统性能等一系列问题。对数据库展开 TPC-C 基准性能测试,一方面可以衡量数据库的性能,另一方面可以衡量采用不同硬件软件系统的性价比,是被业内广泛应用并关注的一种测试模型。
参考如下教程部署 PolarDB for PostgreSQL:
BenchmarkSQL 依赖 Java 运行环境与 Maven 包管理工具,需要预先安装。拉取 BenchmarkSQL 工具源码并进入目录后,通过 mvn
编译工程:
$ git clone https://github.com/pgsql-io/benchmarksql.git
+$ cd benchmarksql
+$ mvn
+
编译出的工具位于如下目录中:
$ cd target/run
+
在编译完毕的工具目录下,将会存在面向不同数据库产品的示例配置:
$ ls | grep sample
+sample.firebird.properties
+sample.mariadb.properties
+sample.oracle.properties
+sample.postgresql.properties
+sample.transact-sql.properties
+
其中,sample.postgresql.properties
包含 PostgreSQL 系列数据库的模板参数,可以基于这个模板来修改并自定义配置。参考 BenchmarkSQL 工具的 文档 可以查看关于配置项的详细描述。
配置项包含的配置类型有:数据库连接信息(JDBC 连接串、用户名与密码)、测试规模(仓库数量)、并发终端数、测试时长等。
使用 runDatabaseBuild.sh
脚本,以配置文件作为参数,产生和导入测试数据:
./runDatabaseBuild.sh sample.postgresql.properties
+
通常,在正式测试前会进行一次数据预热:
./runBenchmark.sh sample.postgresql.properties
+
预热完毕后,再次运行同样的命令进行正式测试:
./runBenchmark.sh sample.postgresql.properties
+
_____ latency (seconds) _____
+ TransType count | mix % | mean max 90th% | rbk% errors
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+| NEW_ORDER | 635 | 44.593 | 0.006 | 0.012 | 0.008 | 1.102 | 0 |
+| PAYMENT | 628 | 44.101 | 0.001 | 0.006 | 0.002 | 0.000 | 0 |
+| ORDER_STATUS | 58 | 4.073 | 0.093 | 0.168 | 0.132 | 0.000 | 0 |
+| STOCK_LEVEL | 52 | 3.652 | 0.035 | 0.044 | 0.041 | 0.000 | 0 |
+| DELIVERY | 51 | 3.581 | 0.000 | 0.001 | 0.001 | 0.000 | 0 |
+| DELIVERY_BG | 51 | 0.000 | 0.018 | 0.023 | 0.020 | 0.000 | 0 |
++--------------+---------------+---------+---------+---------+---------+---------+---------------+
+
+Overall NOPM: 635 (98.76% of the theoretical maximum)
+Overall TPM: 1,424
+
另外也有 CSV 形式的结果被保存,从输出日志中可以找到结果存放目录。
棠羽
2023/04/12
20 min
本文将引导您对 PolarDB for PostgreSQL 进行 TPC-H 测试。
TPC-H 是专门测试数据库分析型场景性能的数据集。
使用 Docker 快速拉起一个基于本地存储的 PolarDB for PostgreSQL 集群:
docker pull polardb/polardb_pg_local_instance
+docker run -it \
+ --cap-add=SYS_PTRACE \
+ --privileged=true \
+ --name polardb_pg_htap \
+ --shm-size=512m \
+ polardb/polardb_pg_local_instance \
+ bash
+
或者参考 进阶部署 部署一个基于共享存储的 PolarDB for PostgreSQL 集群。
通过 tpch-dbgen 工具来生成测试数据。
$ git clone https://github.com/ApsaraDB/tpch-dbgen.git
+$ cd tpch-dbgen
+$ ./build.sh --help
+
+ 1) Use default configuration to build
+ ./build.sh
+ 2) Use limited configuration to build
+ ./build.sh --user=postgres --db=postgres --host=localhost --port=5432 --scale=1
+ 3) Run the test case
+ ./build.sh --run
+ 4) Run the target test case
+ ./build.sh --run=3. run the 3rd case.
+ 5) Run the target test case with option
+ ./build.sh --run --option="set polar_enable_px = on;"
+ 6) Clean the test data. This step will drop the database or tables, remove csv
+ and tbl files
+ ./build.sh --clean
+ 7) Quick build TPC-H with 100MB scale of data
+ ./build.sh --scale=0.1
+
通过设置不同的参数,可以定制化地创建不同规模的 TPC-H 数据集。build.sh
脚本中各个参数的含义如下:
--user:数据库用户名
--db:数据库名
--host:数据库主机地址
--port:数据库服务端口
--run:执行所有 TPC-H 查询,或执行某条特定的 TPC-H 查询
--option:额外指定 GUC 参数
--scale:生成 TPC-H 数据集的规模,单位为 GB

该脚本没有提供输入数据库密码的参数,需要通过设置 PGPASSWORD
为数据库用户的数据库密码来完成认证:
export PGPASSWORD=<your password>
+
生成并导入 100MB 规模的 TPC-H 数据:
./build.sh --scale=0.1
+
生成并导入 1GB 规模的 TPC-H 数据:
./build.sh
+
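导入完成后,可以连接数据库粗略确认各张 TPC-H 表的数据量,行数取决于 --scale 的取值,以下查询仅为示意:
SELECT relname, n_live_tup
+  FROM pg_stat_user_tables
+ ORDER BY relname;
+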
以 TPC-H 的 Q18 为例,执行 PostgreSQL 的单机并行查询,并观测查询速度。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
-- 打开计时
+\timing on
+
+-- 设置单机并行度
+SET max_parallel_workers_per_gather = 2;
+
+-- 查看 Q18 的执行计划
+\i finals/18.explain.sql
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Sort (cost=3450834.75..3450835.42 rows=268 width=81)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate
+ -> GroupAggregate (cost=3450817.91..3450823.94 rows=268 width=81)
+ Group Key: customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=3450817.91..3450818.58 rows=268 width=67)
+ Sort Key: customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=1501454.20..3450807.10 rows=268 width=67)
+ Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
+ -> Seq Scan on lineitem (cost=0.00..1724402.52 rows=59986052 width=22)
+ -> Hash (cost=1501453.37..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.85..1501453.37 rows=67 width=53)
+ -> Nested Loop (cost=1500465.43..1501084.65 rows=67 width=34)
+ -> Finalize GroupAggregate (cost=1500464.99..1500517.66 rows=67 width=4)
+ Group Key: lineitem_1.l_orderkey
+ Filter: (sum(lineitem_1.l_quantity) > '314'::numeric)
+ -> Gather Merge (cost=1500464.99..1500511.66 rows=400 width=36)
+ Workers Planned: 2
+ -> Sort (cost=1499464.97..1499465.47 rows=200 width=36)
+ Sort Key: lineitem_1.l_orderkey
+ -> Partial HashAggregate (cost=1499454.82..1499457.32 rows=200 width=36)
+ Group Key: lineitem_1.l_orderkey
+ -> Parallel Seq Scan on lineitem lineitem_1 (cost=0.00..1374483.88 rows=24994188 width=22)
+ -> Index Scan using orders_pkey on orders (cost=0.43..8.45 rows=1 width=30)
+ Index Cond: (o_orderkey = lineitem_1.l_orderkey)
+ -> Index Scan using customer_pkey on customer (cost=0.43..5.50 rows=1 width=23)
+ Index Cond: (c_custkey = orders.o_custkey)
+(26 rows)
+
+Time: 3.965 ms
+
+-- 执行 Q18
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 80150.449 ms (01:20.150)
+
PolarDB for PostgreSQL 提供了弹性跨机并行查询(ePQ)的能力,非常适合进行分析型查询。下面的步骤将引导您在一台主机上使用 ePQ 并行执行 TPC-H 查询。
在 tpch-dbgen/
目录下通过 psql
连接到数据库:
cd tpch-dbgen
+psql
+
首先需要对 TPC-H 产生的八张表设置 ePQ 的最大查询并行度:
ALTER TABLE nation SET (px_workers = 100);
+ALTER TABLE region SET (px_workers = 100);
+ALTER TABLE supplier SET (px_workers = 100);
+ALTER TABLE part SET (px_workers = 100);
+ALTER TABLE partsupp SET (px_workers = 100);
+ALTER TABLE customer SET (px_workers = 100);
+ALTER TABLE orders SET (px_workers = 100);
+ALTER TABLE lineitem SET (px_workers = 100);
+
以 Q18 为例,执行查询:
-- 打开计时
+\timing on
+
+-- 打开 ePQ 功能的开关
+SET polar_enable_px = ON;
+-- 设置每个节点的 ePQ 并行度为 1
+SET polar_px_dop_per_node = 1;
+
+-- 查看 Q18 的执行计划
+\i finals/18.explain.sql
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..257526.21 rows=59986052 width=47)
+ Merge Key: orders.o_totalprice, orders.o_orderdate
+ -> GroupAggregate (cost=0.00..243457.68 rows=29993026 width=47)
+ Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Sort (cost=0.00..241257.18 rows=29993026 width=47)
+ Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
+ -> Hash Join (cost=0.00..42729.99 rows=29993026 width=47)
+ Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
+ -> PX Hash 2:2 (slice2; segments: 2) (cost=0.00..15959.71 rows=7500000 width=39)
+ Hash Key: orders.o_orderkey
+ -> Hash Join (cost=0.00..15044.19 rows=7500000 width=39)
+ Hash Cond: (orders.o_custkey = customer.c_custkey)
+ -> PX Hash 2:2 (slice3; segments: 2) (cost=0.00..11561.51 rows=7500000 width=20)
+ Hash Key: orders.o_custkey
+ -> Hash Semi Join (cost=0.00..11092.01 rows=7500000 width=20)
+ Hash Cond: (orders.o_orderkey = lineitem.l_orderkey)
+ -> Partial Seq Scan on orders (cost=0.00..1132.25 rows=7500000 width=20)
+ -> Hash (cost=7760.84..7760.84 rows=400 width=4)
+ -> PX Broadcast 2:2 (slice4; segments: 2) (cost=0.00..7760.84 rows=400 width=4)
+ -> Result (cost=0.00..7760.80 rows=200 width=4)
+ Filter: ((sum(lineitem.l_quantity)) > '314'::numeric)
+ -> Finalize HashAggregate (cost=0.00..7760.78 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> PX Hash 2:2 (slice5; segments: 2) (cost=0.00..7760.72 rows=500 width=12)
+ Hash Key: lineitem.l_orderkey
+ -> Partial HashAggregate (cost=0.00..7760.70 rows=500 width=12)
+ Group Key: lineitem.l_orderkey
+ -> Partial Seq Scan on lineitem (cost=0.00..3350.82 rows=29993026 width=12)
+ -> Hash (cost=597.51..597.51 rows=749979 width=23)
+ -> PX Hash 2:2 (slice6; segments: 2) (cost=0.00..597.51 rows=749979 width=23)
+ Hash Key: customer.c_custkey
+ -> Partial Seq Scan on customer (cost=0.00..511.44 rows=749979 width=23)
+ -> Hash (cost=5146.80..5146.80 rows=29993026 width=12)
+ -> PX Hash 2:2 (slice7; segments: 2) (cost=0.00..5146.80 rows=29993026 width=12)
+ Hash Key: lineitem_1.l_orderkey
+ -> Partial Seq Scan on lineitem lineitem_1 (cost=0.00..3350.82 rows=29993026 width=12)
+ Optimizer: PolarDB PX Optimizer
+(37 rows)
+
+Time: 216.672 ms
+
+-- 执行 Q18
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 59113.965 ms (00:59.114)
+
可以看到比 PostgreSQL 的单机并行执行的时间略短。加大 ePQ 功能的节点并行度,查询性能将会有更明显的提升:
SET polar_px_dop_per_node = 2;
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 42400.500 ms (00:42.401)
+
+SET polar_px_dop_per_node = 4;
+\i finals/18.sql
+
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 19892.603 ms (00:19.893)
+
+SET polar_px_dop_per_node = 8;
+\i finals/18.sql
+ c_name | c_custkey | o_orderkey | o_orderdate | o_totalprice | sum
+--------------------+-----------+------------+-------------+--------------+--------
+ Customer#001287812 | 1287812 | 42290181 | 1997-11-26 | 558289.17 | 318.00
+ Customer#001172513 | 1172513 | 36667107 | 1997-06-06 | 550142.18 | 322.00
+ ...
+ Customer#001288183 | 1288183 | 48943904 | 1996-07-22 | 398081.59 | 325.00
+ Customer#000114613 | 114613 | 59930883 | 1997-05-17 | 394335.49 | 319.00
+(84 rows)
+
+Time: 10944.402 ms (00:10.944)
+
使用 ePQ 执行 Q17 和 Q18 时可能会出现 OOM。需要设置以下参数防止用尽内存:
SET polar_px_optimizer_enable_hashagg = 0;
+
在上面的例子中,出于简单考虑,PolarDB for PostgreSQL 的多个计算节点被部署在同一台主机上。在这种场景下使用 ePQ 时,由于所有的计算节点都使用了同一台主机的 CPU、内存、I/O 带宽,因此本质上是基于单台主机的并行执行。实际上,PolarDB for PostgreSQL 的计算节点可以被部署在能够共享存储节点的多台机器上。此时使用 ePQ 功能将进行真正的跨机器分布式并行查询,能够充分利用多台机器上的计算资源。
参考 进阶部署 可以搭建起不同形态的 PolarDB for PostgreSQL 集群。集群搭建成功后,使用 ePQ 的方式与单机 ePQ 完全相同。
如果遇到如下错误:
psql:queries/q01.analyze.sql:24: WARNING: interconnect may encountered a network error, please check your network
+DETAIL: Failed to send packet (seq 1) to 192.168.1.8:57871 (pid 17766 cid 0) after 100 retries.
+
可以尝试统一修改每台机器的 MTU 为 9000:
ifconfig <网卡名> mtu 9000
+
棠羽
2022/06/20
15 min
PostgreSQL 在优化器中为一个查询树输出一个执行效率最高的物理计划树。其中,执行效率高低的衡量是通过代价估算实现的。比如通过估算查询返回元组的条数,和元组的宽度,就可以计算出 I/O 开销;也可以根据将要执行的物理操作估算出可能需要消耗的 CPU 代价。优化器通过系统表 pg_statistic
获得这些在代价估算过程需要使用到的关键统计信息,而 pg_statistic
系统表中的统计信息又是通过自动或手动的 ANALYZE
操作(或 VACUUM
)计算得到的。ANALYZE
将会扫描表中的数据并按列进行分析,将得到的诸如每列的数据分布、最常见值、频率等统计信息写入系统表。
本文从源码的角度分析一下 ANALYZE
操作的实现机制。源码使用目前 PostgreSQL 最新的稳定版本 PostgreSQL 14。
首先,我们应当搞明白分析操作的输出是什么。所以我们可以看一看 pg_statistic
中有哪些列,每个列的含义是什么。这个系统表中的每一行表示其它数据表中 每一列的统计信息。
postgres=# \d+ pg_statistic
+ Table "pg_catalog.pg_statistic"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+-------------+----------+-----------+----------+---------+----------+--------------+-------------
+ starelid | oid | | not null | | plain | |
+ staattnum | smallint | | not null | | plain | |
+ stainherit | boolean | | not null | | plain | |
+ stanullfrac | real | | not null | | plain | |
+ stawidth | integer | | not null | | plain | |
+ stadistinct | real | | not null | | plain | |
+ stakind1 | smallint | | not null | | plain | |
+ stakind2 | smallint | | not null | | plain | |
+ stakind3 | smallint | | not null | | plain | |
+ stakind4 | smallint | | not null | | plain | |
+ stakind5 | smallint | | not null | | plain | |
+ staop1 | oid | | not null | | plain | |
+ staop2 | oid | | not null | | plain | |
+ staop3 | oid | | not null | | plain | |
+ staop4 | oid | | not null | | plain | |
+ staop5 | oid | | not null | | plain | |
+ stanumbers1 | real[] | | | | extended | |
+ stanumbers2 | real[] | | | | extended | |
+ stanumbers3 | real[] | | | | extended | |
+ stanumbers4 | real[] | | | | extended | |
+ stanumbers5 | real[] | | | | extended | |
+ stavalues1 | anyarray | | | | extended | |
+ stavalues2 | anyarray | | | | extended | |
+ stavalues3 | anyarray | | | | extended | |
+ stavalues4 | anyarray | | | | extended | |
+ stavalues5 | anyarray | | | | extended | |
+Indexes:
+ "pg_statistic_relid_att_inh_index" UNIQUE, btree (starelid, staattnum, stainherit)
+
/* ----------------
+ * pg_statistic definition. cpp turns this into
+ * typedef struct FormData_pg_statistic
+ * ----------------
+ */
+CATALOG(pg_statistic,2619,StatisticRelationId)
+{
+ /* These fields form the unique key for the entry: */
+ Oid starelid BKI_LOOKUP(pg_class); /* relation containing
+ * attribute */
+ int16 staattnum; /* attribute (column) stats are for */
+ bool stainherit; /* true if inheritance children are included */
+
+ /* the fraction of the column's entries that are NULL: */
+ float4 stanullfrac;
+
+ /*
+ * stawidth is the average width in bytes of non-null entries. For
+ * fixed-width datatypes this is of course the same as the typlen, but for
+ * var-width types it is more useful. Note that this is the average width
+ * of the data as actually stored, post-TOASTing (eg, for a
+ * moved-out-of-line value, only the size of the pointer object is
+ * counted). This is the appropriate definition for the primary use of
+ * the statistic, which is to estimate sizes of in-memory hash tables of
+ * tuples.
+ */
+ int32 stawidth;
+
+ /* ----------------
+ * stadistinct indicates the (approximate) number of distinct non-null
+ * data values in the column. The interpretation is:
+ * 0 unknown or not computed
+ * > 0 actual number of distinct values
+ * < 0 negative of multiplier for number of rows
+ * The special negative case allows us to cope with columns that are
+ * unique (stadistinct = -1) or nearly so (for example, a column in which
+ * non-null values appear about twice on the average could be represented
+ * by stadistinct = -0.5 if there are no nulls, or -0.4 if 20% of the
+ * column is nulls). Because the number-of-rows statistic in pg_class may
+ * be updated more frequently than pg_statistic is, it's important to be
+ * able to describe such situations as a multiple of the number of rows,
+ * rather than a fixed number of distinct values. But in other cases a
+ * fixed number is correct (eg, a boolean column).
+ * ----------------
+ */
+ float4 stadistinct;
+
+ /* ----------------
+ * To allow keeping statistics on different kinds of datatypes,
+ * we do not hard-wire any particular meaning for the remaining
+ * statistical fields. Instead, we provide several "slots" in which
+ * statistical data can be placed. Each slot includes:
+ * kind integer code identifying kind of data (see below)
+ * op OID of associated operator, if needed
+ * coll OID of relevant collation, or 0 if none
+ * numbers float4 array (for statistical values)
+ * values anyarray (for representations of data values)
+ * The ID, operator, and collation fields are never NULL; they are zeroes
+ * in an unused slot. The numbers and values fields are NULL in an
+ * unused slot, and might also be NULL in a used slot if the slot kind
+ * has no need for one or the other.
+ * ----------------
+ */
+
+ int16 stakind1;
+ int16 stakind2;
+ int16 stakind3;
+ int16 stakind4;
+ int16 stakind5;
+
+ Oid staop1 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop2 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop3 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop4 BKI_LOOKUP_OPT(pg_operator);
+ Oid staop5 BKI_LOOKUP_OPT(pg_operator);
+
+ Oid stacoll1 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll2 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll3 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll4 BKI_LOOKUP_OPT(pg_collation);
+ Oid stacoll5 BKI_LOOKUP_OPT(pg_collation);
+
+#ifdef CATALOG_VARLEN /* variable-length fields start here */
+ float4 stanumbers1[1];
+ float4 stanumbers2[1];
+ float4 stanumbers3[1];
+ float4 stanumbers4[1];
+ float4 stanumbers5[1];
+
+ /*
+ * Values in these arrays are values of the column's data type, or of some
+ * related type such as an array element type. We presently have to cheat
+ * quite a bit to allow polymorphic arrays of this kind, but perhaps
+ * someday it'll be a less bogus facility.
+ */
+ anyarray stavalues1;
+ anyarray stavalues2;
+ anyarray stavalues3;
+ anyarray stavalues4;
+ anyarray stavalues5;
+#endif
+} FormData_pg_statistic;
+
从数据库命令行的角度和内核 C 代码的角度来看,统计信息的内容都是一致的。所有的属性都以 sta
开头。其中:
starelid:表示当前列所属的表或索引
staattnum:表示本行统计信息属于上述表或索引中的第几列
stainherit:表示统计信息是否包含子列
stanullfrac:表示该列中值为 NULL 的行数比例
stawidth:表示该列非空值的平均宽度
stadistinct:表示列中非空值的唯一值数量,其中 0 表示未知或未计算,> 0 表示唯一值的实际数量,< 0 表示唯一值数量相对于总行数的负数倍率(例如 -1 表示该列的非空值全部唯一)

由于不同数据类型所能够被计算的统计信息可能会有一些细微的差别,在接下来的部分中,PostgreSQL 预留了一些存放统计信息的 槽(slots)。目前的内核里暂时预留了五个槽:
#define STATISTIC_NUM_SLOTS 5
+
每一种特定的统计信息可以使用一个槽,具体在槽里放什么完全由这种统计信息的定义自由决定。每一个槽的可用空间包含这么几个部分(其中的 N
表示槽的编号,取值为 1
到 5
):
stakindN:标识这种统计信息的整数编号
staopN:用于计算或使用统计信息的运算符 OID
stacollN:排序规则 OID
stanumbersN:浮点数数组
stavaluesN:任意值数组

PostgreSQL 内核中规定,统计信息的编号 1
至 99
被保留给 PostgreSQL 核心统计信息使用,其它部分的编号安排如内核注释所示:
/*
+ * The present allocation of "kind" codes is:
+ *
+ * 1-99: reserved for assignment by the core PostgreSQL project
+ * (values in this range will be documented in this file)
+ * 100-199: reserved for assignment by the PostGIS project
+ * (values to be documented in PostGIS documentation)
+ * 200-299: reserved for assignment by the ESRI ST_Geometry project
+ * (values to be documented in ESRI ST_Geometry documentation)
+ * 300-9999: reserved for future public assignments
+ *
+ * For private use you may choose a "kind" code at random in the range
+ * 10000-30000. However, for code that is to be widely disseminated it is
+ * better to obtain a publicly defined "kind" code by request from the
+ * PostgreSQL Global Development Group.
+ */
+
目前可以在内核代码中看到的 PostgreSQL 核心统计信息有 7 个,编号分别从 1
到 7
。我们可以看看这 7 种统计信息分别如何使用上述的槽。
/*
+ * In a "most common values" slot, staop is the OID of the "=" operator
+ * used to decide whether values are the same or not, and stacoll is the
+ * collation used (same as column's collation). stavalues contains
+ * the K most common non-null values appearing in the column, and stanumbers
+ * contains their frequencies (fractions of total row count). The values
+ * shall be ordered in decreasing frequency. Note that since the arrays are
+ * variable-size, K may be chosen by the statistics collector. Values should
+ * not appear in MCV unless they have been observed to occur more than once;
+ * a unique column will have no MCV slot.
+ */
+#define STATISTIC_KIND_MCV 1
+
对于一个列中的 最常见值,在 staop
中保存 =
运算符来决定一个值是否等于一个最常见值。在 stavalues
中保存了该列中最常见的 K 个非空值,stanumbers
中分别保存了这 K 个值出现的频率。
/*
+ * A "histogram" slot describes the distribution of scalar data. staop is
+ * the OID of the "<" operator that describes the sort ordering, and stacoll
+ * is the relevant collation. (In theory more than one histogram could appear,
+ * if a datatype has more than one useful sort operator or we care about more
+ * than one collation. Currently the collation will always be that of the
+ * underlying column.) stavalues contains M (>=2) non-null values that
+ * divide the non-null column data values into M-1 bins of approximately equal
+ * population. The first stavalues item is the MIN and the last is the MAX.
+ * stanumbers is not used and should be NULL. IMPORTANT POINT: if an MCV
+ * slot is also provided, then the histogram describes the data distribution
+ * *after removing the values listed in MCV* (thus, it's a "compressed
+ * histogram" in the technical parlance). This allows a more accurate
+ * representation of the distribution of a column with some very-common
+ * values. In a column with only a few distinct values, it's possible that
+ * the MCV list describes the entire data population; in this case the
+ * histogram reduces to empty and should be omitted.
+ */
+#define STATISTIC_KIND_HISTOGRAM 2
+
表示一个(数值)列的数据分布直方图。staop
保存 <
运算符用于决定数据分布的排序顺序。stavalues
包含了能够将该列的非空值划分到 M - 1 个容量接近的桶中的 M 个非空值。如果该列中已经有了 MCV 的槽,那么数据分布直方图中将不包含 MCV 中的值,以获得更精确的数据分布。
/*
+ * A "correlation" slot describes the correlation between the physical order
+ * of table tuples and the ordering of data values of this column, as seen
+ * by the "<" operator identified by staop with the collation identified by
+ * stacoll. (As with the histogram, more than one entry could theoretically
+ * appear.) stavalues is not used and should be NULL. stanumbers contains
+ * a single entry, the correlation coefficient between the sequence of data
+ * values and the sequence of their actual tuple positions. The coefficient
+ * ranges from +1 to -1.
+ */
+#define STATISTIC_KIND_CORRELATION 3
+
在 stanumbers
中保存数据值和它们的实际元组位置的相关系数。
/*
+ * A "most common elements" slot is similar to a "most common values" slot,
+ * except that it stores the most common non-null *elements* of the column
+ * values. This is useful when the column datatype is an array or some other
+ * type with identifiable elements (for instance, tsvector). staop contains
+ * the equality operator appropriate to the element type, and stacoll
+ * contains the collation to use with it. stavalues contains
+ * the most common element values, and stanumbers their frequencies. Unlike
+ * MCV slots, frequencies are measured as the fraction of non-null rows the
+ * element value appears in, not the frequency of all rows. Also unlike
+ * MCV slots, the values are sorted into the element type's default order
+ * (to support binary search for a particular value). Since this puts the
+ * minimum and maximum frequencies at unpredictable spots in stanumbers,
+ * there are two extra members of stanumbers, holding copies of the minimum
+ * and maximum frequencies. Optionally, there can be a third extra member,
+ * which holds the frequency of null elements (expressed in the same terms:
+ * the fraction of non-null rows that contain at least one null element). If
+ * this member is omitted, the column is presumed to contain no null elements.
+ *
+ * Note: in current usage for tsvector columns, the stavalues elements are of
+ * type text, even though their representation within tsvector is not
+ * exactly text.
+ */
+#define STATISTIC_KIND_MCELEM 4
+
与 MCV 类似,但是保存的是列中的 最常见元素,主要用于数组等类型。同样,在 staop
中保存了等值运算符用于判断元素出现的频率高低。但与 MCV 不同的是这里的频率计算的分母是非空的行,而不是所有的行。另外,所有的常见元素使用元素对应数据类型的默认顺序进行排序,以便二分查找。
/*
+ * A "distinct elements count histogram" slot describes the distribution of
+ * the number of distinct element values present in each row of an array-type
+ * column. Only non-null rows are considered, and only non-null elements.
+ * staop contains the equality operator appropriate to the element type,
+ * and stacoll contains the collation to use with it.
+ * stavalues is not used and should be NULL. The last member of stanumbers is
+ * the average count of distinct element values over all non-null rows. The
+ * preceding M (>=2) members form a histogram that divides the population of
+ * distinct-elements counts into M-1 bins of approximately equal population.
+ * The first of these is the minimum observed count, and the last the maximum.
+ */
+#define STATISTIC_KIND_DECHIST 5
+
表示数组类型列中,每一行包含的唯一非空元素个数的分布直方图。stanumbers
数组的前 M 个元素是将各行的唯一元素个数大致均分到 M - 1 个桶中的边界值,第一个值为观测到的最小个数,最后一个值为最大个数;数组最后还跟上一个所有非空行的唯一元素个数的平均值。这个统计信息应该会被用于计算 选择率。
/*
+ * A "length histogram" slot describes the distribution of range lengths in
+ * rows of a range-type column. stanumbers contains a single entry, the
+ * fraction of empty ranges. stavalues is a histogram of non-empty lengths, in
+ * a format similar to STATISTIC_KIND_HISTOGRAM: it contains M (>=2) range
+ * values that divide the column data values into M-1 bins of approximately
+ * equal population. The lengths are stored as float8s, as measured by the
+ * range type's subdiff function. Only non-null rows are considered.
+ */
+#define STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM 6
+
长度直方图描述了一个范围类型的列的范围长度分布。同样也是一个长度为 M 的直方图,保存在 stanumbers
中。
/*
+ * A "bounds histogram" slot is similar to STATISTIC_KIND_HISTOGRAM, but for
+ * a range-type column. stavalues contains M (>=2) range values that divide
+ * the column data values into M-1 bins of approximately equal population.
+ * Unlike a regular scalar histogram, this is actually two histograms combined
+ * into a single array, with the lower bounds of each value forming a
+ * histogram of lower bounds, and the upper bounds a histogram of upper
+ * bounds. Only non-NULL, non-empty ranges are included.
+ */
+#define STATISTIC_KIND_BOUNDS_HISTOGRAM 7
+
边界直方图同样也被用于范围类型,与数据分布直方图类似。stavalues
中保存了使该列数值大致均分到 M - 1 个桶中的 M 个范围边界值。只考虑非空行。
知道 pg_statistic
最终需要保存哪些信息以后,再来看看内核如何收集和计算这些信息。让我们进入 PostgreSQL 内核的执行器代码中。对于 ANALYZE
这种工具性质的指令,执行器代码通过 standard_ProcessUtility()
函数中的 switch case 将每一种指令路由到实现相应功能的函数中。
/*
+ * standard_ProcessUtility itself deals only with utility commands for
+ * which we do not provide event trigger support. Commands that do have
+ * such support are passed down to ProcessUtilitySlow, which contains the
+ * necessary infrastructure for such triggers.
+ *
+ * This division is not just for performance: it's critical that the
+ * event trigger code not be invoked when doing START TRANSACTION for
+ * example, because we might need to refresh the event trigger cache,
+ * which requires being in a valid transaction.
+ */
+void
+standard_ProcessUtility(PlannedStmt *pstmt,
+ const char *queryString,
+ bool readOnlyTree,
+ ProcessUtilityContext context,
+ ParamListInfo params,
+ QueryEnvironment *queryEnv,
+ DestReceiver *dest,
+ QueryCompletion *qc)
+{
+ // ...
+
+ switch (nodeTag(parsetree))
+ {
+ // ...
+
+ case T_VacuumStmt:
+ ExecVacuum(pstate, (VacuumStmt *) parsetree, isTopLevel);
+ break;
+
+ // ...
+ }
+
+ // ...
+}
+
ANALYZE
的处理逻辑入口和 VACUUM
一致,进入 ExecVacuum()
函数。
/*
+ * Primary entry point for manual VACUUM and ANALYZE commands
+ *
+ * This is mainly a preparation wrapper for the real operations that will
+ * happen in vacuum().
+ */
+void
+ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
+{
+ // ...
+
+ /* Now go through the common routine */
+ vacuum(vacstmt->rels, ¶ms, NULL, isTopLevel);
+}
+
在 parse 了一大堆 option 之后,进入了 vacuum()
函数。在这里,内核代码将会首先明确一下要分析哪些表。因为 ANALYZE
命令在使用上可以:
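作为补充,这里给出 ANALYZE 几种典型的使用方式(表名、列名均为假设的示例):
ANALYZE;                  -- 不指定表,分析当前数据库中的所有表
+ANALYZE stat_demo;        -- 只分析指定的表
+ANALYZE stat_demo (tag);  -- 只分析指定表中的某些列
+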
在明确要分析哪些表以后,依次将每一个表传入 analyze_rel()
函数:
if (params->options & VACOPT_ANALYZE)
+{
+ // ...
+
+ analyze_rel(vrel->oid, vrel->relation, params,
+ vrel->va_cols, in_outer_xact, vac_strategy);
+
+ // ...
+}
+
进入 analyze_rel()
函数以后,内核代码将会对将要被分析的表加 ShareUpdateExclusiveLock
锁,以防止两个并发进行的 ANALYZE
。然后根据待分析表的类型来决定具体的处理方式(比如分析一个 FDW 外表就应该直接调用 FDW routine 中提供的 ANALYZE 功能了)。接下来,将这个表传入 do_analyze_rel()
函数中。
/*
+ * analyze_rel() -- analyze one relation
+ *
+ * relid identifies the relation to analyze. If relation is supplied, use
+ * the name therein for reporting any failure to open/lock the rel; do not
+ * use it once we've successfully opened the rel, since it might be stale.
+ */
+void
+analyze_rel(Oid relid, RangeVar *relation,
+ VacuumParams *params, List *va_cols, bool in_outer_xact,
+ BufferAccessStrategy bstrategy)
+{
+ // ...
+
+ /*
+ * Do the normal non-recursive ANALYZE. We can skip this for partitioned
+ * tables, which don't contain any rows.
+ */
+ if (onerel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ do_analyze_rel(onerel, params, va_cols, acquirefunc,
+ relpages, false, in_outer_xact, elevel);
+
+ // ...
+}
+
进入 do_analyze_rel()
函数后,内核代码将进一步明确要分析一个表中的哪些列:用户可能指定只分析表中的某几个列——被频繁访问的列才更有被分析的价值。然后还要打开待分析表的所有索引,看看是否有可以被分析的列。
为了得到每一列的统计信息,显然我们需要把每一列的数据从磁盘上读起来再去做计算。这里就有一个比较关键的问题了:到底扫描多少行数据呢?理论上,分析尽可能多的数据,最好是全部的数据,肯定能够得到最精确的统计数据;但是对一张很大的表来说,我们没有办法在内存中放下所有的数据,并且分析的阻塞时间也是不可接受的。所以用户可以指定要采样的最大行数,从而在运行开销和统计信息准确性上达成一个妥协:
/*
+ * Determine how many rows we need to sample, using the worst case from
+ * all analyzable columns. We use a lower bound of 100 rows to avoid
+ * possible overflow in Vitter's algorithm. (Note: that will also be the
+ * target in the corner case where there are no analyzable columns.)
+ */
+targrows = 100;
+for (i = 0; i < attr_cnt; i++)
+{
+ if (targrows < vacattrstats[i]->minrows)
+ targrows = vacattrstats[i]->minrows;
+}
+for (ind = 0; ind < nindexes; ind++)
+{
+ AnlIndexData *thisdata = &indexdata[ind];
+
+ for (i = 0; i < thisdata->attr_cnt; i++)
+ {
+ if (targrows < thisdata->vacattrstats[i]->minrows)
+ targrows = thisdata->vacattrstats[i]->minrows;
+ }
+}
+
+/*
+ * Look at extended statistics objects too, as those may define custom
+ * statistics target. So we may need to sample more rows and then build
+ * the statistics with enough detail.
+ */
+minrows = ComputeExtStatisticsRows(onerel, attr_cnt, vacattrstats);
+
+if (targrows < minrows)
+ targrows = minrows;
+
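上面代码中各列的 minrows 来自列的统计目标(见后文 std_typanalyze() 中的 300 * attstattarget)。从用户角度,可以按如下方式查看或调整统计目标,从而间接控制采样行数(表名为示例):
SHOW default_statistics_target;  -- 默认为 100,对应约 300 * 100 = 30000 行采样
+
+-- 提高单个列的统计目标,下次 ANALYZE 将为该列采样更多行
+ALTER TABLE stat_demo ALTER COLUMN tag SET STATISTICS 500;
+ANALYZE stat_demo;
+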
在确定需要采样多少行数据后,内核代码分配了一块相应长度的元组数组,然后开始使用 acquirefunc
函数指针采样数据:
/*
+ * Acquire the sample rows
+ */
+rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+ inh ? PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :
+ PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);
+if (inh)
+ numrows = acquire_inherited_sample_rows(onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+else
+ numrows = (*acquirefunc) (onerel, elevel,
+ rows, targrows,
+ &totalrows, &totaldeadrows);
+
这个函数指针指向的是 analyze_rel()
函数中设置好的 acquire_sample_rows()
函数。该函数使用两阶段模式对表中的数据进行采样:
第一阶段随机选取至多 targrows 个物理块(若表的总块数不足则全部选取);第二阶段扫描这些物理块,并使用 Vitter 算法从中随机采样至多 targrows 行元组。两阶段同时进行。在采样完成后,被采样到的元组应该已经被放置在元组数组中了。对这个元组数组按照元组的位置进行快速排序,并使用这些采样到的数据估算整个表中的存活元组与死元组的个数:
/*
+ * acquire_sample_rows -- acquire a random sample of rows from the table
+ *
+ * Selected rows are returned in the caller-allocated array rows[], which
+ * must have at least targrows entries.
+ * The actual number of rows selected is returned as the function result.
+ * We also estimate the total numbers of live and dead rows in the table,
+ * and return them into *totalrows and *totaldeadrows, respectively.
+ *
+ * The returned list of tuples is in order by physical position in the table.
+ * (We will rely on this later to derive correlation estimates.)
+ *
+ * As of May 2004 we use a new two-stage method: Stage one selects up
+ * to targrows random blocks (or all blocks, if there aren't so many).
+ * Stage two scans these blocks and uses the Vitter algorithm to create
+ * a random sample of targrows rows (or less, if there are less in the
+ * sample of blocks). The two stages are executed simultaneously: each
+ * block is processed as soon as stage one returns its number and while
+ * the rows are read stage two controls which ones are to be inserted
+ * into the sample.
+ *
+ * Although every row has an equal chance of ending up in the final
+ * sample, this sampling method is not perfect: not every possible
+ * sample has an equal chance of being selected. For large relations
+ * the number of different blocks represented by the sample tends to be
+ * too small. We can live with that for now. Improvements are welcome.
+ *
+ * An important property of this sampling method is that because we do
+ * look at a statistically unbiased set of blocks, we should get
+ * unbiased estimates of the average numbers of live and dead rows per
+ * block. The previous sampling method put too much credence in the row
+ * density near the start of the table.
+ */
+static int
+acquire_sample_rows(Relation onerel, int elevel,
+ HeapTuple *rows, int targrows,
+ double *totalrows, double *totaldeadrows)
+{
+ // ...
+
+ /* Outer loop over blocks to sample */
+ while (BlockSampler_HasMore(&bs))
+ {
+ bool block_accepted;
+ BlockNumber targblock = BlockSampler_Next(&bs);
+ // ...
+ }
+
+ // ...
+
+ /*
+ * If we didn't find as many tuples as we wanted then we're done. No sort
+ * is needed, since they're already in order.
+ *
+ * Otherwise we need to sort the collected tuples by position
+ * (itempointer). It's not worth worrying about corner cases where the
+ * tuples are already sorted.
+ */
+ if (numrows == targrows)
+ qsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);
+
+ /*
+ * Estimate total numbers of live and dead rows in relation, extrapolating
+ * on the assumption that the average tuple density in pages we didn't
+ * scan is the same as in the pages we did scan. Since what we scanned is
+ * a random sample of the pages in the relation, this should be a good
+ * assumption.
+ */
+ if (bs.m > 0)
+ {
+ *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);
+ *totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+ }
+ else
+ {
+ *totalrows = 0.0;
+ *totaldeadrows = 0.0;
+ }
+
+ // ...
+}
+
回到 do_analyze_rel()
函数。采样到数据以后,对于要分析的每一个列,分别计算统计数据,然后更新 pg_statistic
系统表:
/*
+ * Compute the statistics. Temporary results during the calculations for
+ * each column are stored in a child context. The calc routines are
+ * responsible to make sure that whatever they store into the VacAttrStats
+ * structure is allocated in anl_context.
+ */
+if (numrows > 0)
+{
+ // ...
+
+ for (i = 0; i < attr_cnt; i++)
+ {
+ VacAttrStats *stats = vacattrstats[i];
+ AttributeOpts *aopt;
+
+ stats->rows = rows;
+ stats->tupDesc = onerel->rd_att;
+ stats->compute_stats(stats,
+ std_fetch_func,
+ numrows,
+ totalrows);
+
+ // ...
+ }
+
+ // ...
+
+ /*
+ * Emit the completed stats rows into pg_statistic, replacing any
+ * previous statistics for the target columns. (If there are stats in
+ * pg_statistic for columns we didn't process, we leave them alone.)
+ */
+ update_attstats(RelationGetRelid(onerel), inh,
+ attr_cnt, vacattrstats);
+
+ // ...
+}
+
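统计信息写入 pg_statistic 之后,从用户视角可以简单确认分析动作已经完成(表名为示例):
SELECT relname, last_analyze, last_autoanalyze
+  FROM pg_stat_user_tables
+ WHERE relname = 'stat_demo';
+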
显然,对于不同类型的列,其 compute_stats
函数指针指向的计算函数肯定不太一样。所以我们不妨看看给这个函数指针赋值的地方:
/*
+ * std_typanalyze -- the default type-specific typanalyze function
+ */
+bool
+std_typanalyze(VacAttrStats *stats)
+{
+ // ...
+
+ /*
+ * Determine which standard statistics algorithm to use
+ */
+ if (OidIsValid(eqopr) && OidIsValid(ltopr))
+ {
+ /* Seems to be a scalar datatype */
+ stats->compute_stats = compute_scalar_stats;
+ /*--------------------
+ * The following choice of minrows is based on the paper
+ * "Random sampling for histogram construction: how much is enough?"
+ * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in
+ * Proceedings of ACM SIGMOD International Conference on Management
+ * of Data, 1998, Pages 436-447. Their Corollary 1 to Theorem 5
+ * says that for table size n, histogram size k, maximum relative
+ * error in bin size f, and error probability gamma, the minimum
+ * random sample size is
+ * r = 4 * k * ln(2*n/gamma) / f^2
+ * Taking f = 0.5, gamma = 0.01, n = 10^6 rows, we obtain
+ * r = 305.82 * k
+ * Note that because of the log function, the dependence on n is
+ * quite weak; even at n = 10^12, a 300*k sample gives <= 0.66
+ * bin size error with probability 0.99. So there's no real need to
+ * scale for n, which is a good thing because we don't necessarily
+ * know it at this point.
+ *--------------------
+ */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else if (OidIsValid(eqopr))
+ {
+ /* We can still recognize distinct values */
+ stats->compute_stats = compute_distinct_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+ else
+ {
+ /* Can't do much but the trivial stuff */
+ stats->compute_stats = compute_trivial_stats;
+ /* Might as well use the same minrows as above */
+ stats->minrows = 300 * attr->attstattarget;
+ }
+
+ // ...
+}
+
这个条件判断语句可以被解读为:
如果该列的数据类型同时支持 =(eqopr:equals operator)和 <(ltopr:less than operator)运算符,那么这个列应该是一个数值类型,可以使用 compute_scalar_stats() 函数进行分析
如果该列的数据类型只支持 = 运算符,那么依旧还可以使用 compute_distinct_stats 进行唯一值的统计分析
如果两种运算符都不支持,那么只能使用 compute_trivial_stats 进行一些简单的分析

我们可以分别看看这三个分析函数里做了啥,但我不准备深入每一个分析函数解读其中的逻辑了。因为其中的思想基于一些很古早的统计学论文,古早到连 PDF 上的字母都快看不清了。在代码上没有特别大的可读性,因为基本是参照论文中的公式实现的,不看论文根本没法理解变量和公式的含义。
如果某个列的数据类型不支持等值运算符和比较运算符,那么就只能进行一些简单的分析,比如:非空行的比例、非空值的平均宽度。
这些可以通过对采样后的元组数组进行循环遍历后轻松得到。
/*
+ * compute_trivial_stats() -- compute very basic column statistics
+ *
+ * We use this when we cannot find a hash "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows and the average datum width.
+ */
+static void
+compute_trivial_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
如果某个列只支持等值运算符,也就是说我们只能知道一个数值是什么,但不能和其它数值比大小,所以无法分析数值在大小范围上的分布,只能分析数值在出现频率上的分布。该函数分析的统计数据包含:非空行的比例、平均宽度、最常见值(MCV)及其出现频率,以及(估算的)唯一值个数。
/*
+ * compute_distinct_stats() -- compute column statistics including ndistinct
+ *
+ * We use this when we can find only an "=" operator for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the
+ * most common values, and the (estimated) number of distinct values.
+ *
+ * The most common values are determined by brute force: we keep a list
+ * of previously seen values, ordered by number of times seen, as we scan
+ * the samples. A newly seen value is inserted just after the last
+ * multiply-seen value, causing the bottommost (oldest) singly-seen value
+ * to drop off the list. The accuracy of this method, and also its cost,
+ * depend mainly on the length of the list we are willing to keep.
+ */
+static void
+compute_distinct_stats(VacAttrStatsP stats,
+ AnalyzeAttrFetchFunc fetchfunc,
+ int samplerows,
+ double totalrows)
+{}
+
如果一个列的数据类型同时支持等值运算符和比较运算符,那么可以进行最详尽的分析。分析目标包含:非空行的比例、平均宽度、最常见值(MCV)及其频率、(估算的)唯一值个数、数据分布直方图,以及物理存储顺序与逻辑顺序的相关系数。
/*
+ * compute_scalar_stats() -- compute column statistics
+ *
+ * We use this when we can find "=" and "<" operators for the datatype.
+ *
+ * We determine the fraction of non-null rows, the average width, the
+ * most common values, the (estimated) number of distinct values, the
+ * distribution histogram, and the correlation of physical to logical order.
+ *
+ * The desired stats can be determined fairly easily after sorting the
+ * data values into order.
+ */
+static void
+compute_scalar_stats(VacAttrStatsP stats,
+                     AnalyzeAttrFetchFunc fetchfunc,
+                     int samplerows,
+                     double totalrows)
+{}
+
本文以 PostgreSQL 优化器需要的统计信息为切入点,分析了 ANALYZE
命令的大致执行流程。出于简洁性,在流程分析上没有覆盖各种 corner case 和相关的处理逻辑。
PostgreSQL 14 Documentation: ANALYZE
PostgreSQL 14 Documentation: 25.1. Routine Vacuuming
PostgreSQL 14 Documentation: 14.2. Statistics Used by the Planner
严华
2022/09/10
35 min
很多 PolarDB PG 的用户都有 TP (Transactional Processing) 和 AP (Analytical Processing) 共用的需求。他们期望数据库在白天处理高并发的 TP 请求,在夜间 TP 流量下降、机器负载空闲时进行 AP 的报表分析。但是即使这样,依然没有最大化利用空闲机器的资源。原先的 PolarDB PG 数据库在处理复杂的 AP 查询时会遇到两大挑战:单个计算节点无法利用多个节点的计算资源来加速大查询,也无法发挥出底层共享存储池的大 I/O 带宽优势。
为了解决用户实际使用中的痛点,PolarDB 实现了 HTAP 特性。当前业界 HTAP 的解决方案主要有以下三种:
基于 PolarDB 的存储计算分离架构,我们研发了分布式 MPP 执行引擎,提供了跨机并行执行、弹性计算弹性扩展的保证,使得 PolarDB 初步具备了 HTAP 的能力:
PolarDB HTAP 的核心是分布式 MPP 执行引擎,它是典型的火山模型引擎。以 A、B 两张表先做 join 再做聚合输出为例,这也是 PostgreSQL 单机执行引擎的执行流程。
在传统的 MPP 执行引擎中,数据被打散到不同的节点上,不同节点上的数据可能具有不同的分布属性,比如哈希分布、随机分布、复制分布等。传统的 MPP 执行引擎会针对不同表的数据分布特点,在执行计划中插入算子来保证上层算子对数据的分布属性无感知。
不同的是,PolarDB 是共享存储架构,存储上的数据可以被所有计算节点全量访问。如果使用传统的 MPP 执行引擎,每个计算节点 Worker 都会扫描全量数据,从而得到重复的数据;同时,也没有起到扫描时分治加速的效果,并不能称得上是真正意义上的 MPP 引擎。
因此,在 PolarDB 分布式 MPP 执行引擎中,我们借鉴了火山模型论文中的思想,对所有扫描算子进行并发处理,引入了 PxScan 算子来屏蔽共享存储。PxScan 算子将 shared-storage 的数据映射为 shared-nothing 的数据,通过 Worker 之间的协调,将目标表划分为多个虚拟分区数据块,每个 Worker 扫描各自的虚拟分区数据块,从而实现了跨机分布式并行扫描。
PxScan 算子扫描出来的数据会通过 Shuffle 算子来重分布。重分布后的数据在每个 Worker 上如同单机执行一样,按照火山模型来执行。
传统 MPP 只能在指定节点发起 MPP 查询,因此每个节点上都只能有单个 Worker 扫描一张表。为了支持云原生下 serverless 弹性扩展的需求,我们引入了分布式事务一致性保证。
任意选择一个节点作为 Coordinator 节点,它的 ReadLSN 会作为约定的 LSN,从所有 MPP 节点的快照版本号中选择最小的版本号作为全局约定的快照版本号。通过 LSN 的回放等待和 Global Snapshot 同步机制,确保在任何一个节点发起 MPP 查询时,数据和快照均能达到一致可用的状态。
为了实现 serverless 的弹性扩展,我们从共享存储的特点出发,将 Coordinator 节点全链路上各个模块需要的外部依赖全部放至共享存储上。各个 Worker 节点运行时需要的参数也会通过控制链路从 Coordinator 节点同步过来,从而使 Coordinator 节点和 Worker 节点全链路 无状态化 (Stateless)。
基于以上两点设计,PolarDB 的弹性扩展具备了以下几大优势:
倾斜是传统 MPP 固有的问题,其根本原因主要是数据分布倾斜和数据计算倾斜:
倾斜会导致传统 MPP 在执行时出现木桶效应,执行完成时间受制于执行最慢的子任务。
PolarDB 设计并实现了 自适应扫描机制。如上图所示,采用 Coordinator 节点来协调 Worker 节点的工作模式。在扫描数据时,Coordinator 节点会在内存中创建一个任务管理器,根据扫描任务对 Worker 节点进行调度。Coordinator 节点内部分为两个线程:
扫描进度较快的 Worker 能够扫描多个数据块,实现能者多劳。比如上图中 RO1 与 RO3 的 Worker 各自扫描了 4 个数据块, RO2 由于计算倾斜可以扫描更多数据块,因此它最终扫描了 6 个数据块。
PolarDB HTAP 的自适应扫描机制还充分考虑了 PostgreSQL 的 Buffer Pool 亲和性,保证每个 Worker 尽可能扫描固定的数据块,从而最大化命中 Buffer Pool 的概率,降低 I/O 开销。
我们使用 256 GB 内存的 16 个 PolarDB PG 实例作为 RO 节点,搭建了 1 TB 的 TPC-H 环境进行对比测试。相较于单机并行,分布式 MPP 并行充分利用了所有 RO 节点的计算资源和底层共享存储的 I/O 带宽,从根本上解决了前文提及的 HTAP 诸多挑战。在 TPC-H 的 22 条 SQL 中,有 3 条 SQL 加速了 60 多倍,19 条 SQL 加速了 10 多倍,平均加速 23 倍。
此外,我们也测试了弹性扩展计算资源带来的性能变化。通过增加 CPU 的总核心数,从 16 核增加到 128 核,TPC-H 的总体性能呈线性提升,每条 SQL 的执行速度也呈线性提升,这也验证了 PolarDB HTAP serverless 弹性扩展的特点。
在测试中发现,当 CPU 的总核数增加到 256 核时,性能提升不再明显。原因是此时 PolarDB 共享存储的 I/O 带宽已经打满,成为了瓶颈。
我们将 PolarDB 的分布式 MPP 执行引擎与传统数据库的 MPP 执行引擎进行了对比,同样使用了 256 GB 内存的 16 个节点。
在 1 TB 的 TPC-H 数据上,当保持与传统 MPP 数据库相同单机并行度的情况下(多机单进程),PolarDB 的性能是传统 MPP 数据库的 90%。其中最本质的原因是传统 MPP 数据库的数据默认是哈希分布的,当两张表的 join key 是各自的分布键时,可以不用 shuffle 直接进行本地的 Wise Join。而 PolarDB 的底层是共享存储池,PxScan 算子并行扫描出来的数据等价于随机分布,必须进行 shuffle 重分布以后才能像传统 MPP 数据库一样进行后续的处理。因此,TPC-H 涉及到表连接时,PolarDB 相比传统 MPP 数据库多了一次网络 shuffle 的开销。
PolarDB 分布式 MPP 执行引擎能够进行弹性扩展,数据无需重分布。因此,在有限的 16 台机器上执行 MPP 时,PolarDB 还可以继续扩展单机并行度,充分利用每台机器的资源:当 PolarDB 的单机并行度为 8 时,它的性能是传统 MPP 数据库的 5-6 倍;当 PolarDB 的单机并行度呈线性增加时,PolarDB 的总体性能也呈线性增加。只需要修改配置参数,就可以即时生效。
经过持续迭代的研发,目前 PolarDB HTAP 在 Parallel Query 上支持的功能特性主要有五大部分:
基于 PolarDB 读写分离架构和 HTAP serverless 弹性扩展的设计, PolarDB Parallel DML 支持一写多读、多写多读两种特性。
不同的特性适用不同的场景,用户可以根据自己的业务特点来选择不同的 PDML 功能特性。
PolarDB 分布式 MPP 执行引擎,不仅可以用于只读查询和 DML,还可以用于 索引构建加速。OLTP 业务中有大量的索引,而 B-Tree 索引创建的过程大约有 80% 的时间消耗在排序和构建索引页上,20% 消耗在写入索引页上。如下图所示,PolarDB 利用 RO 节点对数据进行分布式 MPP 加速排序,采用流水化的技术来构建索引页,同时使用批量写入技术来提升索引页的写入速度。
在目前索引构建加速这一特性中,PolarDB 已经对 B-Tree 索引的普通创建以及 B-Tree 索引的在线创建 (Concurrently) 两种功能进行了支持。
PolarDB HTAP 适用于日常业务中的 轻分析类业务,例如:对账业务,报表业务。
PolarDB PG 引擎默认不开启 MPP 功能。若您需要使用此功能,请使用如下参数:
polar_enable_px:指定是否开启 MPP 功能。默认为 OFF,即不开启。
polar_px_max_workers_number:设置单个节点上的最大 MPP Worker 进程数,默认为 30。该参数限制了单个节点上的最大并行度,节点上所有会话的 MPP Worker 进程数不能超过该参数大小。
polar_px_dop_per_node:设置当前会话并行查询的并行度,默认为 1,推荐值为当前 CPU 总核数。若设置该参数为 N,则一个会话在每个节点上将会启用 N 个 MPP Worker 进程,用于处理当前的 MPP 逻辑。
polar_px_nodes:指定参与 MPP 的只读节点。默认为空,表示所有只读节点都参与。可配置为指定节点参与 MPP,以逗号分隔。
px_workers:指定 MPP 是否对特定表生效。默认不生效。MPP 功能比较消耗集群计算节点的资源,因此只有对设置了 px_workers 的表才使用该功能。例如:ALTER TABLE t1 SET(px_workers=1) 表示 t1 表允许 MPP;ALTER TABLE t1 SET(px_workers=-1) 表示 t1 表禁止 MPP;ALTER TABLE t1 SET(px_workers=0) 表示 t1 表忽略 MPP(默认状态)。

本示例以简单的单表查询操作,来描述 MPP 的功能是否有效。
-- 创建 test 表并插入基础数据。
+CREATE TABLE test(id int);
+INSERT INTO test SELECT generate_series(1,1000000);
+
+-- 默认情况下 MPP 功能不开启,单表查询执行计划为 PG 原生的 Seq Scan
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+--------------------------------------------------------
+ Seq Scan on test (cost=0.00..35.50 rows=2550 width=4)
+(1 row)
+
开启并使用 MPP 功能:
-- 对 test 表启用 MPP 功能
+ALTER TABLE test SET (px_workers=1);
+
+-- 开启 MPP 功能
+SET polar_enable_px = on;
+
+EXPLAIN SELECT * FROM test;
+
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 2:1 (slice1; segments: 2) (cost=0.00..431.00 rows=1 width=4)
+ -> Seq Scan on test (scan partial) (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
配置参与 MPP 的计算节点范围:
-- 查询当前所有只读节点的名称
+CREATE EXTENSION polar_monitor;
+
+SELECT name,host,port FROM polar_cluster_info WHERE px_node='t';
+ name | host | port
+-------+-----------+------
+ node1 | 127.0.0.1 | 5433
+ node2 | 127.0.0.1 | 5434
+(2 rows)
+
+-- 当前集群有 2 个只读节点,名称分别为:node1,node2
+
+-- 指定 node1 只读节点参与 MPP
+SET polar_px_nodes = 'node1';
+
+-- 查询参与并行查询的节点
+SHOW polar_px_nodes;
+ polar_px_nodes
+----------------
+ node1
+(1 row)
+
+EXPLAIN SELECT * FROM test;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ PX Coordinator 1:1 (slice1; segments: 1) (cost=0.00..431.00 rows=1 width=4)
+ -> Partial Seq Scan on test (cost=0.00..431.00 rows=1 width=4)
+ Optimizer: PolarDB PX Optimizer
+(3 rows)
+
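若要恢复为所有只读节点都参与 MPP,可以将该参数重新置空(示意):
SET polar_px_nodes = '';
+SHOW polar_px_nodes;
+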
当前 MPP 对分区表支持的功能如下所示:
--分区表 MPP 功能默认关闭,需要先开启 MPP 功能
+SET polar_enable_px = ON;
+
+-- 执行以下语句,开启分区表 MPP 功能
+SET polar_px_enable_partition = true;
+
+-- 执行以下语句,开启多级分区表 MPP 功能
+SET polar_px_optimizer_multilevel_partitioning = true;
+
当前仅支持对 B-Tree 索引的构建,且暂不支持 INCLUDE
等索引构建语法,暂不支持表达式等索引列类型。
如果需要使用 MPP 功能加速创建索引,请使用如下参数:
polar_px_dop_per_node:指定通过 MPP 加速构建索引的并行度。默认为 1。
polar_px_enable_replay_wait:当使用 MPP 加速索引构建时,当前会话内无需手动开启该参数,该参数将自动生效,以保证最近更新的数据表项可以被创建到索引中,保证索引表的完整性。索引创建完成后,该参数将会被重置为数据库默认值。
polar_px_enable_btbuild:是否开启使用 MPP 加速创建索引。取值为 OFF 时不开启(默认),取值为 ON 时开启。
polar_bt_write_page_buffer_size:指定索引构建过程中的写 I/O 策略。该参数默认值为 0(不开启),单位为块,最大值可设置为 8192,推荐设置为 4096。开启后会申请一块 polar_bt_write_page_buffer_size 大小的 buffer,对于需要写盘的索引页,会通过该 buffer 进行 I/O 合并再统一写盘,避免了频繁调度 I/O 带来的性能开销。该参数会额外提升 20% 的索引创建性能。

-- 开启使用 MPP 加速创建索引功能。
+SET polar_px_enable_btbuild = on;
+
+-- 使用如下语法创建索引
+CREATE INDEX t ON test(id) WITH(px_build = ON);
+
+-- 查询表结构
+\d test
+ Table "public.test"
+ Column | Type | Collation | Nullable | Default
+--------+---------+-----------+----------+---------
+ id | integer | | |
+ id2 | integer | | |
+Indexes:
+ "t" btree (id) WITH (px_build=finish)
+
北侠
2021/08/24
35 min
PolarDB for PostgreSQL(以下简称 PolarDB)是一款阿里云自主研发的企业级数据库产品,采用计算存储分离架构,100% 兼容 PostgreSQL。PolarDB 的存储与计算能力均可横向扩展,具有高可靠、高可用、弹性扩展等企业级数据库特性。同时,PolarDB 具有大规模并行计算能力,可以应对 OLTP 与 OLAP 混合负载;还具有时空、向量、搜索、图谱等多模创新特性,可以满足企业对数据处理日新月异的新需求。
PolarDB 支持多种部署形态:存储计算分离部署、X-Paxos 三节点部署、本地盘部署。
随着用户业务数据量越来越大,业务越来越复杂,传统数据库系统面临巨大挑战,如:
针对上述传统数据库的问题,阿里云研发了 PolarDB 云原生数据库。采用了自主研发的计算集群和存储集群分离的架构。具备如下优势:
下面会从两个方面来解读 PolarDB 的架构,分别是:存储计算分离架构、HTAP 架构。
PolarDB 是存储计算分离的设计,存储集群和计算集群可以分别独立扩展:
基于 Shared-Storage 后,主节点和多个只读节点共享一份存储数据,主节点刷脏不能再像传统的刷脏方式了,否则:
对于第一个问题,我们需要有页面多版本能力;对于第二个问题,我们需要主库控制脏页的刷脏速度。
读写分离后,单个计算节点无法发挥出存储侧大 IO 带宽的优势,也无法通过增加计算资源来加速大的查询。我们研发了基于 Shared-Storage 的 MPP 分布式并行执行,来加速在 OLTP 场景下 OLAP 查询。 PolarDB 支持一套 OLTP 场景型的数据在如下两种计算引擎下使用:
在使用相同的硬件资源时性能达到了传统 MPP 数据库的 90%,同时具备了 SQL 级别的弹性:在计算能力不足时,可随时增加参与 OLAP 分析查询的 CPU,而数据无需重分布。
基于 Shared-Storage 之后,数据库由传统的 share nothing,转变成了 shared storage 架构。需要解决如下问题:
首先来看下基于 Shared-Storage 的 PolarDB 的架构原理。
传统 share nothing 的数据库,主节点和只读节点都有自己的内存和存储,只需要从主节点复制 WAL 日志到只读节点,并在只读节点上依次回放日志即可,这也是复制状态机的基本原理。
前面讲到过存储计算分离后,Shared-Storage 上读取到的页面是一致的,内存状态是通过从 Shared-Storage 上读取最新的 WAL 并回放得来,如下图:
上述流程中,只读节点中基于日志回放出来的页面会被淘汰掉,此后需要再次从存储上读取页面,会出现读取的页面是之前的老页面,称为“过去页面”。如下图:
只读节点在任意时刻读取页面时,需要找到对应的 Base 页面和对应起点的日志,依次回放。如下图:
通过上述分析,需要维护每个 Page 到日志的“倒排”索引,而只读节点的内存是有限的,因此这个 Page 到日志的索引需要持久化,PolarDB 设计了一个可持久化的索引结构 - LogIndex。LogIndex 本质是一个可持久化的 hash 数据结构。
通过 LogIndex 解决了刷脏依赖“过去页面”的问题,也使得只读节点的回放转变成了 Lazy 的回放:只需要回放日志的 meta 信息即可。
在存储计算分离后,刷脏依赖还存在“未来页面”的问题。如下图所示:
“未来页面”的原因是主节点刷脏的速度超过了任一只读节点的回放速度(虽然只读节点的 Lazy 回放已经很快了)。因此,解法就是对主节点刷脏进度时做控制:不能超过最慢的只读节点的回放位点。如下图所示:
如下图所示:
可以看到,整个链路是很长的,只读节点延迟高,影响用户业务读写分离负载均衡。
因为底层是 Shared-Storage,只读节点可直接从 Shared-Storage 上读取所需要的 WAL 数据。因此主节点只把 WAL 日志的元数据(去掉 Payload)复制到只读节点,这样网络传输量小,减少关键路径上的 IO。如下图所示:
通过上述优化,能显著减少主节点和只读节点间的网络传输量。从下图可以看到网络传输量减少了 98%。
在传统 DB 中日志回放的过程中会读取大量的 Page 并逐个日志 Apply,然后落盘。该流程在用户读 IO 的关键路径上,借助存储计算分离可以做到:如果只读节点上 Page 不在 BufferPool 中,不产生任何 IO,仅仅记录 LogIndex 即可。
可以将回放进程中的如下 IO 操作 offload 到 session 进程中:
如下图所示,在只读节点上的回放进程中,在 Apply 一条 WAL 的 meta 时:
通过上述优化,能显著减少回放的延迟,比 AWS Aurora 快 30 倍。
在主节点执行 DDL 时,比如:drop table,需要在所有节点上都对表上排他锁,这样能保证表文件不会在只读节点上读取时被主节点删除掉了(因为文件在 Shared-Storage 上只有一份)。在所有只读节点上对表上排他锁是通过 WAL 复制到所有的只读节点,只读节点回放 DDL 锁来完成。而回放进程在回放 DDL 锁时,对表上锁可能会阻塞很久,因此可以通过把 DDL 锁也 offload 到其他进程上来优化回放进程的关键路径。
通过上述优化,能够使回放进程一直处于平滑的状态,不会因为等待 DDL 锁而阻塞回放的关键路径。
上述 3 个优化之后,极大的降低了复制延迟,能够带来如下优势:
数据库 OOM、Crash 等场景恢复时间长,本质上是日志回放慢,在共享存储 Direct-IO 模型下问题更加突出。
前面讲到过通过 LogIndex 我们在只读节点上做到了 Lazy 的回放,那么在主节点重启后的 recovery 过程中,本质也是在回放日志,那么我们可以借助 Lazy 回放来加速 recovery 的过程:
优化之后(回放 500MB 日志量):
上述方案优化了在 recovery 的重启速度,但是在重启之后,session 进程通过读取 WAL 日志来回放想要的 page。表现就是在 recovery 之后会有短暂的响应慢的问题。优化的办法为在数据库重启时 BufferPool 并不销毁,如下图所示:crash 和 restart 期间 BufferPool 不销毁。
内核中的共享内存分成 2 部分:
而 BufferPool 中并不是所有的 Page 都是可以复用的,比如:在重启前,某进程对 Page 上 X 锁,随后 crash 了,该 X 锁就没有进程来释放了。因此,在 crash 和 restart 之后需要把所有的 BufferPool 遍历一遍,剔除掉不能被复用的 Page。另外,BufferPool 的回收依赖 k8s。该优化之后,使得重启前后性能平稳。
PolarDB 读写分离后,由于底层是存储池,理论上 IO 吞吐是无限大的。而大查询只能在单个计算节点上执行,单个计算节点的 CPU/MEM/IO 是有限的,因此单个计算节点无法发挥出存储侧的大 IO 带宽的优势,也无法通过增加计算资源来加速大的查询。我们研发了基于 Shared-Storage 的 MPP 分布式并行执行,来加速在 OLTP 场景下 OLAP 查询。
PolarDB 底层存储在不同节点上是共享的,因此不能直接像传统 MPP 一样去扫描表。我们在原来单机执行引擎上支持了 MPP 分布式并行执行,同时对 Shared-Storage 进行了优化。 基于 Shared-Storage 的 MPP 是业界首创,它的原理是:
如图所示:
基于社区的 GPORCA 优化器扩展了能感知共享存储特性的 Transformation Rules。使得能够探索共享存储下特有的 Plan 空间,比如:对于一个表在 PolarDB 中既可以全量的扫描,也可以分区域扫描,这个是和传统 MPP 的本质区别。图中,上面灰色部分是 PolarDB 内核与 GPORCA 优化器的适配部分。下半部分是 ORCA 内核,灰色模块是我们在 ORCA 内核中对共享存储特性所做的扩展。
PolarDB 中有 4 类算子需要并行化,下面介绍一个具有代表性的 Seqscan 的算子的并行化。为了最大限度的利用存储的大 IO 带宽,在顺序扫描时,按照 4MB 为单位做逻辑切分,将 IO 尽量打散到不同的盘上,达到所有的盘同时提供读服务的效果。这样做还有一个优势,就是每个只读节点只扫描部分表文件,那么最终能缓存的表大小是所有只读节点的 BufferPool 总和。
倾斜是传统 MPP 固有的问题:
以上两点会导致分布执行时存在长尾进程。
需要注意的是:尽管是动态分配,仍会尽量维护 Buffer 的亲和性;另外,每个算子的上下文存储在 Worker 的私有内存中,Coordinator 不存储具体表的信息。
下面表格中,当出现大对象时,静态切分出现数据倾斜,而动态扫描仍然能够线性提升。
那我们利用数据共享的特点,还可以支持云原生下极致弹性的要求:把 Coordinator 全链路上各个模块所需要的外部依赖存在共享存储上,同时 worker 全链路上需要的运行时参数通过控制链路从 Coordinator 同步过来,使 Coordinator 和 worker 无状态化。
因此:
多个计算节点数据一致性通过等待回放和 globalsnapshot 机制来完成。等待回放保证所有 worker 能看到所需要的数据版本,而 globalsnapshot 保证了选出一个统一的版本。
我们使用 1TB 的 TPC-H 进行了测试,首先对比了 PolarDB 新的分布式并行和单机并行的性能:有 3 个 SQL 提速 60 倍,19 个 SQL 提速 10 倍以上;
另外,使用分布式执行引擎,测试增加 CPU 时的性能变化。可以看到,从 16 核增加到 128 核时,性能呈线性提升;单看 22 条 SQL,通过增加 CPU,每条 SQL 的性能也呈线性提升。
与传统 MPP 数据库相比,同样使用 16 个节点,PolarDB 的性能是传统 MPP 数据库的 90%。
前面讲到我们给 PolarDB 的分布式引擎做到了弹性扩展,数据无需重分布,当 dop = 8 时,性能是传统 MPP 数据库的 5.6 倍。
OLTP 业务中会建大量的索引,经分析建索引过程中:80%是在排序和构建索引页,20%在写索引页。通过使用分布式并行来加速排序过程,同时流水化批量写入。
上述优化能够使得创建索引有 4~5 倍的提升。
PolarDB 是一款多模数据库,支持时空数据。时空数据的处理是计算密集型和 IO 密集型的,可以借助分布式执行来加速。我们针对共享存储开发了扫描共享 RTREE 索引的功能。
本文从架构层面分析了 PolarDB 的技术要点:
后续文章将具体讨论更多的技术细节,比如:如何基于 Shared-Storage 的查询优化器,LogIndex 如何做到高性能,如何闪回到任意时间点,如何在 Shared-Storage 上支持 MPP,如何和 X-Paxos 结合构建高可用等等,敬请期待。
传统数据库的主备架构,主备有各自的存储,备节点回放 WAL 日志并读写自己的存储,主备节点在存储层没有耦合。PolarDB 的实现是基于共享存储的一写多读架构,主备使用共享存储中的一份数据。读写节点,也称为主节点或 Primary 节点,可以读写共享存储中的数据;只读节点,也称为备节点或 Replica 节点,仅能各自通过回放日志,从共享存储中读取数据,而不能写入。基本架构图如下所示:
一写多读架构下,只读节点可能从共享存储中读到两类数据页:
未来页:数据页中包含只读节点尚未回放到的数据,比如只读节点回放到 LSN 为 200 的 WAL 日志,但数据页中已经包含 LSN 为 300 的 WAL 日志对应的改动。此类数据页被称为“未来页”。
过去页:数据页中未包含所有回放位点之前的改动,比如只读节点将数据页回放到 LSN 为 200 的 WAL 日志,但该数据页在从 Buffer Pool 淘汰之后,再次从共享存储中读取的数据页中没有包含 LSN 为 200 的 WAL 日志的改动,此类数据页被称为“过去页”。
对于只读节点而言,只需要访问与其回放位点相对应的数据页。如果读取到如上所述的“未来页”和“过去页”应该如何处理呢?
除此之外,Buffer 管理还需要维护一致性位点,对于某个数据页,只读节点仅需回放一致性位点和当前回放位点之间的 WAL 日志即可,从而加速回放效率。
为避免只读节点读取到“未来页”,PolarDB 引入刷脏控制功能,即在主节点要将数据页写入共享存储时,判断所有只读节点是否均已回放到该数据页最近一次修改对应的 WAL 日志。
主节点 Buffer Pool 中的数据页,根据是否包含“未来数据”(即只读节点的回放位点之后新产生的数据),可以分为两类:可以写入存储的和不能写入存储的。该判断依赖两个位点:
刷脏控制判断规则如下:
if buffer latest lsn <= oldest apply lsn
+ flush buffer
+else
+ do not flush buffer
+
为将数据页回放到指定的 LSN 位点,只读节点会维护数据页与该页上的 LSN 的映射关系,这种映射关系保存在 LogIndex 中。LogIndex 可以理解为是一种可以持久化存储的 HashTable。访问数据页时,会从该映射关系中获取数据页需要回放的所有 LSN,依次回放对应的 WAL 日志,最终生成需要使用的数据页。
可见,数据页上的修改越多,其对应的 LSN 也越多,回放所需耗时也越长。为了尽量减少数据页需要回放的 LSN 数量,PolarDB 中引入了一致性位点的概念。
一致性位点表示该位点之前的所有 WAL 日志修改的数据页均已经持久化到存储。主备之间,主节点向备节点发送当前 WAL 日志的写入位点和一致性位点,备节点向主节点反馈当前回放的位点和当前使用的最小 WAL 日志位点。由于一致性位点之前的 WAL 修改都已经写入共享存储,备节点从存储上读取新的数据页面时,无需再回放该位点之前的 WAL 日志,但是备节点回放 Buffer Pool 中的被标记为 Outdate 的数据页面时,有可能需要回放该位点之前的 WAL 日志。因此,主库节点可以根据备节点传回的‘当前使用的最小 WAL 日志位点’和一致性位点,将 LogIndex 中所有小于两个位点的 LSN 清理掉,既加速回放效率,同时还能减少 LogIndex 占用的空间。
为维护一致性位点,PolarDB 为每个 Buffer 引入了一个内存状态,即第一次修改该 Buffer 对应的 LSN,称之为 oldest LSN,所有 Buffer 中最小的 oldest LSN 即为一致性位点。
一种获取一致性位点的方法是遍历 Buffer Pool 中所有 Buffer,找到最小值,但遍历代价较大,CPU 开销和耗时都不能接受。为高效获取一致性位点,PolarDB 引入 FlushList 机制,将 Buffer Pool 中所有脏页按照 oldest LSN 从小到大排序。借助 FlushList,获取一致性位点的时间复杂度可以达到 O(1)。
第一次修改 Buffer 并将其标记为脏时,将该 Buffer 插入到 FlushList 中,并设置其 oldest LSN。Buffer 被写入存储时,将该内存中的标记清除。
为高效推进一致性位点,PolarDB 的后台刷脏进程(bgwriter)采用“先被修改的 Buffer 先落盘”的刷脏策略,即 bgwriter 会从前往后遍历 FlushList,逐个刷脏,一旦有脏页写入存储,一致性位点就可以向前推进。以上图为例,如果 oldest LSN 为 10 的 Buffer 落盘,一致性位点就可以推进到 30。
为进一步提升一致性位点的推进效率,PolarDB 实现了并行刷脏。每个后台刷脏进程会从 FlushList 中获取一批数据页进行刷脏。
引入刷脏控制之后,仅满足刷脏条件的 Buffer 才能写入存储,假如某个 Buffer 修改非常频繁,可能导致 Buffer Latest LSN 总是大于 Oldest Apply LSN,该 Buffer 始终无法满足刷脏条件,此类 Buffer 我们称之为热点页。热点页会导致一致性位点无法推进,为解决热点页的刷脏问题,PolarDB 引入了 Copy Buffer 机制。
Copy Buffer 机制会将特定的、不满足刷脏条件的 Buffer 从 Buffer Pool 中拷贝至新增的 Copy Buffer Pool 中,Copy Buffer Pool 中的 Buffer 不会再被修改,其对应的 Latest LSN 也不会更新,随着 Oldest Apply LSN 的推进,Copy Buffer 会逐步满足刷脏条件,从而可以将 Copy Buffer 落盘。
引入 Copy Buffer 机制后,刷脏的流程如下:
如下图中,[oldest LSN, latest LSN]
为 [30, 500]
的 Buffer 被认为是热点页,将当前 Buffer 拷贝至 Copy Buffer Pool 中,随后该数据页再次被修改,假设修改对应的 LSN 为 600,则设置其 Oldest LSN 为 600,并将其从 FlushList 中删除,然后追加至 FlushList 末尾。此时,Copy Buffer 中数据页不会再修改,其 Latest LSN 始终为 500,若满足刷脏条件,则可以将 Copy Buffer 写入存储。
需要注意的是,引入 Copy Buffer 之后,一致性位点的计算方法有所改变。FlushList 中的 Oldest LSN 不再是最小的 Oldest LSN,Copy Buffer Pool 中可能存在更小的 oldest LSN。因此,除考虑 FlushList 中的 Oldest LSN 之外,还需要遍历 Copy Buffer Pool,找到 Copy Buffer Pool 中最小的 Oldest LSN,取两者的最小值即为一致性位点。
PolarDB 引入的一致性位点概念,与 checkpoint 的概念类似。PolarDB 中 checkpoint 位点表示该位点之前的所有数据都已经落盘,数据库 Crash Recovery 时可以从 checkpoint 位点开始恢复,提升恢复效率。普通的 checkpoint 会将所有 Buffer Pool 中的脏页以及其他内存数据落盘,这个过程可能耗时较长且在此期间 I/O 吞吐较大,可能会对正常的业务请求产生影响。
借助一致性位点,PolarDB 中引入了一种特殊的 checkpoint:Lazy Checkpoint。之所以称之为 Lazy(懒惰的),是与普通的 checkpoint 相比,lazy checkpoint 不会把 Buffer Pool 中所有的脏页落盘,而是直接使用当前的一致性位点作为 checkpoint 位点,极大地提升了 checkpoint 的执行效率。
Lazy Checkpoint 的整体思路是将普通 checkpoint 一次性刷大量脏页落盘的逻辑转换为后台刷脏进程持续不断落盘并维护一致性位点的逻辑。需要注意的是,Lazy Checkpoint 与 PolarDB 中 Full Page Write 的功能有冲突,开启 Full Page Write 之后会自动关闭该功能。
在共享存储一写多读的架构下,数据文件实际上只有一份。得益于多版本机制,不同节点的读写实际上并不会冲突。但是有一些数据操作不具有多版本机制,其中比较有代表性的就是文件操作。
多版本机制仅限于文件内的元组,但不包括文件本身。对文件进行创建、删除等操作实际上会对全集群立即可见,这会导致 RO 在读取文件时出现文件消失的情况,因此需要做一些同步操作,来防止此类情况。
对文件进行操作通常使用 DDL,因此对于 DDL 操作,PolarDB 提供了一种同步机制,来防止并发的文件操作的出现。除了同步机制外,DDL 的其他逻辑和单机执行逻辑并无区别。
同步 DDL 机制利用 AccessExclusiveLock(后文简称 DDL 锁)来进行 RW / RO 的 DDL 操作同步。
图 1:DDL 锁和 WAL 日志的关系
DDL 锁是数据库中最高级的表锁,对其他所有的锁级别都互斥,会伴随着 WAL 日志同步到 RO 节点上,并且可以获取到该锁在 WAL 日志的写入位点。当 RO 回放超过 Lock LSN 位点时,就可以认为在 RO 中已经获取了这把锁。DDL 锁会伴随着事务的结束而释放。
如图 1 所示,当回放到 ApplyLSN1 时,表示未获取到 DDL 锁;当回放到 ApplyLSN2 时,表示获取到了该锁;当回放到 ApplyLSN3 时,已经释放了 DDL 锁。
图 2:DDL 锁的获取条件
当所有 RO 都回放超过了 Lock LSN 这个位点时(如图 2 所示),可以认为 RW 的事务在集群级别获取到了这把锁。获取到这把锁就意味着 RW / RO 中没有其他的会话能够访问这张表,此时 RW 就可以对这张表做各种文件相关的操作。
说明:Standby 有独立的文件存储,获取锁时不会出现上述情况。
图 3:同步 DDL 流程图
图 3 所示流程说明如下:
DDL 锁是 PostgreSQL 数据库最高级别的锁,当对一个表进行 DROP / ALTER / LOCK / VACUUM (FULL) table 等操作时,需要先获取到 DDL 锁。RW 是通过用户的主动操作来获取锁,获取锁成功时会写入到日志中,RO 则通过回放日志获取锁。
当以下操作的对象都是某张表,<
表示时间先后顺序时,同步 DDL 的执行逻辑如下:
结合以上执行逻辑可以得到以下操作的先后顺序:各个 RW / RO 查询操作结束 < RW 获取全局 DDL 锁 < RW 写数据 < RW 释放全局 DDL 锁 < RW / RO 新增查询操作。
可以看到在写共享存储的数据时,RW / RO 上都不会存在查询,因此不会造成正确性问题。在整个操作的过程中,都是遵循 2PL 协议的,因此对于多个表,也可以保证正确性。
上述机制中存在一个问题,就是锁同步发生在主备同步的主路径中,当 RO 的锁同步被阻塞时,会造成 RO 的数据同步阻塞(如图 1 所示,回放进程的 3、4 阶段在等待本地查询会话结束后才能获取锁)。PolarDB 默认设置的同步超时时间为 30s,如果 RW 压力过大,有可能造成较大的数据延迟。
RO 中回放的 DDL 锁还会出现叠加效果,例如 RW 在 1s 内写下了 10 个 DDL 锁日志,在 RO 却需要 300s 才能回放完毕。数据延迟对于 PolarDB 是十分危险的,它会造成 RW 无法及时刷脏、及时做检查点,如果此时发生崩溃,恢复系统会需要更长的时间,这会导致极大的稳定性风险。
针对此问题,PolarDB 对 RO 锁回放进行了优化。
图 4:RO 异步 DDL 锁回放
优化思路:设计一个异步进程来回放这些锁,从而不阻塞主回放进程的工作。
整体流程如图 4 所示,和图 3 不同的是,回放进程会将锁获取的操作卸载到锁回放进程中进行,并且立刻回到主回放流程中,从而不受锁回放阻塞的影响。
锁回放冲突并不是一个常见的情况,因此主回放进程并非将所有的锁都卸载到锁回放进程中进行,它会尝试获取锁,如果获取成功了,就不需要卸载到锁回放进程中进行,这样可以有效减少进程间的同步开销。
该功能在 PolarDB 中默认启用,能够有效的减少回放冲突造成的回放延迟,以及衍生出来的稳定性问题。在 AWS Aurora 中不具备该特性,当发生冲突时会严重增加延迟。
在异步回放的模式下,仅仅是获取锁的操作者变了,但是执行逻辑并未发生变化,依旧能够保证 RW 获取到全局 DDL 锁、写数据、释放全局 DDL 锁这期间不会存在任何查询,因此不会存在正确性问题。
PolarDB 采用了共享存储一写多读架构,读写节点 RW 和多个只读节点 RO 共享同一份存储,读写节点可以读写共享存储中的数据;只读节点仅能各自通过回放日志,从共享存储中读取数据,而不能写入,只读节点 RO 通过内存同步来维护数据的一致性。此外,只读节点可同时对外提供服务用于实现读写分离与负载均衡,在读写节点异常 crash 时,可将只读节点提升为读写节点,保证集群的高可用。基本架构图如下所示:
传统 share nothing 的架构下,只读节点 RO 有自己的内存及存储,只需要接收 RW 节点的 WAL 日志进行回放即可。如下图所示,如果需要回放的数据页不在 Buffer Pool 中,需将其从存储文件中读至 Buffer Pool 中进行回放,从而带来 CacheMiss 的成本,且持续性的回放会带来较频繁的 Buffer Pool 淘汰问题。
此外,RW 节点多个事务之间可并行执行,RO 节点则需依照 WAL 日志的顺序依次进行串行回放,导致 RO 回放速度较慢,与 RW 节点的延迟逐步增大。
与传统 share nothing 架构不同,共享存储一写多读架构下 RO 节点可直接从共享存储上获取需要回放的 WAL 日志。若共享存储上的数据页是最新的,那么 RO 可直接读取数据页而不需要再进行回放操作。基于此,PolarDB 设计了 LogIndex 来加速 RO 节点的日志回放。
LogIndex 中保存了数据页与修改该数据页的所有 LSN 的映射关系,基于 LogIndex 可快速获取到修改某个数据页的所有 LSN,从而可将该数据页对应日志的回放操作延迟到真正访问该数据页的时刻进行。LogIndex 机制下 RO 内存同步的架构如下图所示。
RW / RO 的相关流程相较传统 share nothing 架构下有如下区别:
PolarDB 通过仅传输 WAL Meta 降低 RW 与 RO 之间的延迟,通过 LogIndex 实现 WAL 日志的延迟回放 + 并行回放以加速 RO 的回放速度,以下则对这两点进行详细介绍。
WAL 日志又称为 XLOG Record,如下图,每个 XLOG Record 由两部分组成:
共享存储模式下,读写节点 RW 与只读节点 RO 之间无需传输完整的 WAL 日志,仅传输 WAL Meta 数据,WAL Meta 即为上图中的 general header portion + header part + main data,RO 节点可基于 WAL Meta 从共享存储上读取完整的 WAL 日志内容。该机制下,RW 与 RO 之间传输 WAL Meta 的流程如下:
RW 与 RO 节点的流复制不传输具体的 payload 数据,减少了网络数据传输量;此外,RW 节点的 WalSender 进程从内存中的 WAL Meta queue 中获取 WAL Meta 信息,RO 节点的 WalReceiver 进程接收到 WAL Meta 后也同样将其保存至内存的 WAL Meta queue 中,相较于传统主备模式减少了日志发送及接收的磁盘 I/O 过程,从而提升传输速度,降低 RW 与 RO 之间的延迟。
LogIndex 实质为一个 HashTable 结构,其 key 为 PageTag,可标识一个具体数据页,其 value 即为修改该 page 的所有 LSN。LogIndex 的内存数据结构如下图所示,除了 Memtable ID、Memtable 保存的最大 LSN、最小 LSN 等信息,LogIndex Memtable 中还包含了三个数组,分别为:
内存中保存的 LogIndex Memtable 又可分为 Active LogIndex Memtable 和 Inactive LogIndex Memtable。如下图所示,基于 WAL Meta 数据生成的 LogIndex 记录会写入 Active LogIndex Memtable,Active LogIndex Memtable 写满后会转为 Inactive LogIndex Memtable,并重新申请一个新的 Active LogIndex Memtable,Inactive LogIndex Memtable 可直接落盘,落盘后的 Inactive LogIndex Memtable 可再次转为 Active LogIndex Memtable。
磁盘上保存了若干个 LogIndex Table,LogIndex Table 与 LogIndex Memtable 结构类似,一个 LogIndex Table 可包含 64 个 LogIndex Memtable,Inactive LogIndex Memtable 落盘的同时会生成其对应的 Bloom Filter。如下图所示,单个 Bloom Filter 的大小为 4096 字节,Bloom Filter 记录了该 Inactive LogIndex Memtable 的相关信息,如保存的最小 LSN、最大 LSN、该 Memtable 中所有 Page 在 bloom filter bit array 中的映射值等。通过 Bloom Filter 可快速判断某个 Page 是否存在于对应的 LogIndex Table 中,从而可忽略无需扫描的 LogIndex Table 以加速检索。
When an Inactive LogIndex Memtable has been flushed successfully, the LogIndex Meta file is updated as well; this file guarantees the atomicity of LogIndex Memtable file I/O. As shown below, the LogIndex Meta file records information about the smallest LogIndex Table and the largest LogIndex Memtable currently on disk, and its Start LSN records the largest LSN among all LogIndex Memtables that have been flushed. If a partial write occurs while flushing a LogIndex Memtable, the system re-parses WAL from the Start LSN recorded in LogIndex Meta, so the LogIndex records lost to the partial write are regenerated, which guarantees the atomicity of the I/O.
As explained in Buffer Management, all pages modified by WAL before the consistency LSN have already been persisted to shared storage, and each RO node reports its current replay LSN and the minimum WAL LSN still in use back to the RW node over streaming replication. Entries in LogIndex Tables whose LSNs are below both of these points can therefore be removed. RW uses this to truncate LogIndex Tables on storage that are no longer needed, which improves RO replay efficiency and reduces the space LogIndex Tables occupy.
Under the LogIndex mechanism, the Startup process on an RO node generates LogIndex entries from the received WAL Meta, marks the pages referenced by that WAL Meta that are already in the buffer pool as Outdated, and can then advance the replay LSN. The Startup process itself does not replay the WAL; replay is carried out by the background replay process and by the backend processes that actually access the pages. The replay flow is shown in the figure below.
To reduce the cost of reading WAL from disk during replay, an XLOG Buffer was added to cache the WAL being read. As shown in the figure below, WAL used to be read directly from the WAL segment files on disk; with the XLog Page Buffer, the read first goes to the XLog Buffer. If the required WAL is not there, the corresponding WAL page is read from disk into the buffer and then copied into the readBuf of XLogReaderState; if it is already in the buffer, it is copied into the readBuf directly. This reduces the number of I/O operations during replay and further speeds up WAL replay.
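A simplified sketch of that read path is given below; the buffer layout and helper names are assumptions rather than the actual PolarDB code, but the hit / miss logic mirrors the description above.
#include <stdint.h>
#include <string.h>

#define XLOG_PAGE_SIZE 8192

typedef uint64_t XLogRecPtr;

typedef struct XLogPageBuffer
{
    XLogRecPtr page_start;                      /* which WAL page is cached */
    char       data[XLOG_PAGE_SIZE];
} XLogPageBuffer;

/* Stand-in for reading one WAL page from the segment file on shared storage. */
static void
read_wal_page_from_disk(XLogRecPtr page_start, char *dst)
{
    (void) page_start;
    (void) dst;                                 /* real I/O omitted */
}

/* Serve the request from the in-memory XLog buffer when possible; otherwise
 * read the page from disk into the buffer first, then copy it into the
 * reader's readBuf, so a later request for the same page needs no I/O. */
static void
xlog_read_page(XLogPageBuffer *buf, XLogRecPtr page_start, char *readBuf)
{
    if (buf->page_start != page_start)          /* cache miss */
    {
        read_wal_page_from_disk(page_start, buf->data);
        buf->page_start = page_start;
    }
    memcpy(readBuf, buf->data, XLOG_PAGE_SIZE); /* hit, or freshly filled */
}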
Unlike replay in the traditional share-nothing architecture, under LogIndex the Startup process, which parses WAL Meta and generates LogIndex entries, runs in parallel with the backend processes that replay pages based on LogIndex, and each backend replays only the pages it needs to access. A single XLOG Record may modify several pages: an index split, for example, modifies Page_0 and Page_1, and those modifications are one atomic operation, i.e. either all of them are visible or none are. A mini transaction lock mechanism was therefore designed to keep in-memory data structures consistent while backends replay pages.
As shown in the figure below, without the mini transaction lock the Startup process parses WAL Meta and appends the current LSN to each page's LSN list in order. If the Startup process has updated Page_0's LSN list but not yet Page_1's when Backend_0 and Backend_1 access Page_0 and Page_1 respectively, each backend replays its page from that page's LSN list: Page_0 is replayed up to LSN_N + 1 while Page_1 is replayed only up to LSN_N. The two pages in the buffer pool are then at different versions, leaving the corresponding in-memory data structures inconsistent.
With the mini transaction lock mechanism, updating the LSN lists of Page_0 and Page_1 is treated as one mini transaction. Before the Startup process updates a page's LSN list it must take that page's mini transaction lock; here it takes Page_0's mtr lock first, acquiring page mtr locks in the same order used during replay, and it releases Page_0's mtr lock only after both Page_0's and Page_1's LSN lists have been updated. When a backend replays a page based on LogIndex and that page is still inside a mini transaction in the Startup process, the backend likewise has to take the page's mtr lock before replaying. So if the Startup process has updated Page_0's LSN list but not yet Page_1's when Backend_0 and Backend_1 access the two pages, Backend_0 has to wait until the LSN lists are fully updated and Page_0's mtr lock is released before it can replay; and by the time Page_0's mtr lock is released, Page_1's LSN list has already been updated, so the in-memory data structures are modified atomically.
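The ordering guarantee can be sketched as follows; the pthread mutexes and function names are illustrative stand-ins (PolarDB uses its own lock primitives), and mutex initialization and the LSN list storage are omitted.
#include <pthread.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Per-page LSN list plus its "mini transaction" lock. */
typedef struct PageLsnList
{
    pthread_mutex_t mtr_lock;
    /* ... LSN list storage elided ... */
} PageLsnList;

/* Startup process: take the mtr locks of every page touched by the record,
 * in the same order a backend would replay them, update both LSN lists, and
 * release only after every page has been updated. */
void
startup_insert_record(PageLsnList *page0, PageLsnList *page1, XLogRecPtr lsn)
{
    pthread_mutex_lock(&page0->mtr_lock);
    pthread_mutex_lock(&page1->mtr_lock);

    /* append lsn to page0's LSN list ... */
    /* append lsn to page1's LSN list ... */
    (void) lsn;

    pthread_mutex_unlock(&page1->mtr_lock);
    pthread_mutex_unlock(&page0->mtr_lock);
}

/* Backend process: take the page's mtr lock before replaying, so it never
 * observes Page_0 updated while Page_1 is not yet updated. */
void
backend_replay_page(PageLsnList *page)
{
    pthread_mutex_lock(&page->mtr_lock);
    /* replay the page using its (now complete) LSN list ... */
    pthread_mutex_unlock(&page->mtr_lock);
}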
Building on the fact that the RW node and RO nodes share storage, PolarDB designed the LogIndex mechanism to speed up memory synchronization on RO nodes, reduce the lag between RO and RW, and guarantee the consistency and availability of RO nodes. This article analyzed the background of LogIndex, the RO memory synchronization architecture built on it, and its implementation details. Beyond RO memory synchronization, LogIndex also enables Online Promote of an RO node, which shortens the time needed to promote an RO node to RW when the RW node crashes, providing high availability of the compute nodes and fast service recovery.
羁鸟
2022/08/22
30 min
As a special table-level object in a database, a Sequence produces a series of regular integers according to user-defined properties, acting as a number generator.
In terms of usage, a never-repeating Sequence can serve as a table's primary key, and several tables can share one Sequence to count the total number of rows inserted across them. According to the ANSI standard, a Sequence object in a database must exhibit a set of well-defined characteristics.
To illustrate these characteristics, we define two sequences, a and b, and use them to show the concrete behavior:
CREATE SEQUENCE a start with 5 minvalue -1 increment -2;
+CREATE SEQUENCE b start with 2 minvalue 1 maxvalue 4 cycle;
+
The values produced by the two Sequence objects, as more and more values are requested, evolve as shown below.
PostgreSQL | Oracle | SQLSERVER | MySQL | MariaDB | DB2 | Sybase | Hive |
---|---|---|---|---|---|---|---|
Supported | Supported | Supported | Auto-increment columns only | Supported | Supported | Auto-increment columns only | Not supported |
To understand the Sequence object in PostgreSQL better, we first look at how Sequences are used and, from that usage, work out the design principles behind them.
PostgreSQL provides a rich set of Sequence interfaces, as well as ways to combine them, to fully support developers' varied needs.
PostgreSQL offers table-like access to Sequence objects, namely DQL, DML and DDL. The figure below gives an overview of the SQL interfaces exposed.
These interfaces are introduced one by one below.
This interface returns the value most recently obtained from the specified Sequence in the current session.
postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# select currval('seq');
+ currval
+---------
+ 2
+(1 row)
+
Note that this interface can only be used after nextval has been called at least once; otherwise it reports that currval for the target Sequence is not yet defined in this session.
postgres=# select currval('seq');
+ERROR: currval of sequence "seq" is not yet defined in this session
+
This interface returns the value of the Sequence most recently used in the current session.
postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
+postgres=# select lastval();
+ lastval
+---------
+ 3
+(1 row)
+
Likewise, to know which Sequence was used last, nextval('seq') has to be called once so that the session records the most recently used Sequence in a session-level global variable. lastval and currval differ only in their arguments: currval must be told which previously accessed Sequence to read, while lastval takes no argument and always refers to the most recently used Sequence.
This interface fetches the next value of a Sequence object.
Calling nextval makes the database take the Sequence object's current value, return a value advanced by one increment, and store the advanced value as the Sequence object's new current value.
postgres=# CREATE SEQUENCE seq start with 1 increment 2;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 3
+(1 row)
+
increment is called the step of the Sequence object: every nextval request advances the sequence by one step. Also note that after a Sequence object is created, the first value obtained is the one defined by its start value. If no start value is given, PostgreSQL defaults it to minvalue for an ascending sequence and to maxvalue for a descending one.
In addition, nextval is a special kind of DML that is not protected by transactions: once a sequence value has been handed out, it is never rolled back.
postgres=# BEGIN;
+BEGIN
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
To obtain good concurrency for Sequence objects, PostgreSQL does not update them with multi-versioning; instead, Sequence objects are updated in place. This transaction-free approach is common practice in almost every RDBMS that supports Sequence objects, and it is also what makes the Sequence a special kind of table-level object.
This interface sets the Sequence object's value.
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
This method sets the Sequence object's value to the given position and, at the same time, treats that first value as already fetched. If you do not want it to be treated as fetched, pass the additional false argument.
postgres=# select nextval('seq');
+ nextval
+---------
+ 4
+(1 row)
+
+postgres=# select setval('seq', 1, false);
+ setval
+--------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
After setval sets the Sequence object's value, it also sets the object's is_called attribute. nextval then uses is_called to decide whether to return the value that was just set: if is_called is false, nextval sets is_called to true and returns the stored value as-is instead of performing an increment.
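The effect of is_called on nextval can be summarized by the simplified sketch below; it ignores caching, bounds and cycling, so it is only an illustration of the rule just described, not the real nextval_internal.
#include <stdbool.h>
#include <stdint.h>

/* Simplified view of the sequence tuple (its full layout appears later). */
typedef struct SeqTuple
{
    int64_t last_value;
    bool    is_called;
} SeqTuple;

/* If setval(..., false) left is_called unset, the stored last_value itself
 * is returned and is_called is flipped to true; otherwise the value is
 * advanced by one increment before being returned. */
static int64_t
nextval_simplified(SeqTuple *seq, int64_t increment)
{
    if (!seq->is_called)
    {
        seq->is_called = true;
        return seq->last_value;
    }

    seq->last_value += increment;
    return seq->last_value;
}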
CREATE SEQUENCE and ALTER SEQUENCE create and modify Sequence objects; Sequence attributes are also set through these two interfaces. Some attributes were briefly introduced above, and the specific attributes are described in detail below.
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+ALTER SEQUENCE [ IF EXISTS ] name
+ [ AS data_type ]
+ [ INCREMENT [ BY ] increment ]
+ [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
+ [ START [ WITH ] start ]
+ [ RESTART [ [ WITH ] restart ] ]
+ [ CACHE cache ] [ [ NO ] CYCLE ]
+ [ OWNED BY { table_name.column_name | NONE } ]
+
AS: sets the Sequence's data type, which can only be smallint, int, or bigint; this also bounds the allowed range of minvalue and maxvalue. The default is bigint (note that the type only bounds the range rather than setting it; the configured range must not exceed the data type's range).
INCREMENT: the step, i.e. the amount by which nextval advances the sequence on each request; the default is 1.
MINVALUE / NO MINVALUE: sets (or leaves unset) the Sequence's minimum value. If unset, the bound falls back to the data type's range; for a descending bigint sequence, for example, the minimum becomes PG_INT64_MIN (-9223372036854775808).
MAXVALUE / NO MAXVALUE: sets (or leaves unset) the Sequence's maximum value; if unset, the default follows the same rule as above.
START: the Sequence's initial value, which must lie between MINVALUE and MAXVALUE.
RESTART: in ALTER SEQUENCE, resets the sequence value; by default it is reset to the start value.
CACHE: sets the size of the cache used by the Sequence object; if not specified, the default is 1.
OWNED BY: ties the Sequence to a column of a table; when that column is dropped, the Sequence is dropped as well.
The following describes a scenario in which the sequence value gets rolled back.
CREATE SEQUENCE
+postgres=# BEGIN;
+BEGIN
+postgres=# ALTER SEQUENCE seq maxvalue 10;
+ALTER SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 2
+(1 row)
+
+postgres=# ROLLBACK;
+ROLLBACK
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
Unlike what was described earlier, here the Sequence object is protected by the transaction and the sequence value was rolled back. In fact, what the transaction protects here is the ALTER SEQUENCE (DDL), not nextval (DML): the rollback restores the Sequence object to its state before the ALTER SEQUENCE, which is why the sequence value appears to roll back.
DROP SEQUENCE, as the name implies, removes the Sequence object from the database. TRUNCATE, to be precise, restarts a Sequence by way of TRUNCATE TABLE ... RESTART IDENTITY.
。postgres=# CREATE TABLE tbl_iden (i INTEGER, j int GENERATED ALWAYS AS IDENTITY);
+CREATE TABLE
+postgres=# insert into tbl_iden values (100);
+INSERT 0 1
+postgres=# insert into tbl_iden values (1000);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 100 | 1
+ 1000 | 2
+(2 rows)
+
+postgres=# TRUNCATE TABLE tbl_iden RESTART IDENTITY;
+TRUNCATE TABLE
+postgres=# insert into tbl_iden values (1234);
+INSERT 0 1
+postgres=# select * from tbl_iden;
+ i | j
+------+---
+ 1234 | 1
+(1 row)
+
This is equivalent to executing ALTER SEQUENCE ... RESTART while truncating the table.
Besides being used as a standalone object, a Sequence can also be combined with other PostgreSQL components. Several common usage scenarios are summarized below.
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY);
+INSERT INTO tbl (i) VALUES (nextval('seq'));
+SELECT * FROM tbl ORDER BY 1 DESC;
+ i
+---------
+ 1
+(1 row)
+
CREATE SEQUENCE seq;
+CREATE TABLE tbl (i INTEGER PRIMARY KEY, j INTEGER);
+CREATE FUNCTION f()
+RETURNS TRIGGER AS
+$$
+BEGIN
+NEW.i := nextval('seq');
+RETURN NEW;
+END;
+$$
+LANGUAGE 'plpgsql';
+
+CREATE TRIGGER tg
+BEFORE INSERT ON tbl
+FOR EACH ROW
+EXECUTE PROCEDURE f();
+
+INSERT INTO tbl (j) VALUES (4);
+
+SELECT * FROM tbl;
+ i | j
+---+---
+ 1 | 4
+(1 row)
+
Explicit DEFAULT invocation:
CREATE SEQUENCE seq;
+CREATE TABLE tbl(i INTEGER DEFAULT nextval('seq') PRIMARY KEY, j INTEGER);
+
+INSERT INTO tbl (i,j) VALUES (DEFAULT,11);
+INSERT INTO tbl(j) VALUES (321);
+INSERT INTO tbl (i,j) VALUES (nextval('seq'),1);
+
+SELECT * FROM tbl;
+ i | j
+---+-----
+ 2 | 321
+ 1 | 11
+ 3 | 1
+(3 rows)
+
SERIAL invocation:
CREATE TABLE tbl (i SERIAL PRIMARY KEY, j INTEGER);
+INSERT INTO tbl (i,j) VALUES (DEFAULT,42);
+
+INSERT INTO tbl (j) VALUES (25);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 42
+ 2 | 25
+(2 rows)
+
Note that SERIAL is not a type; it is just another form of the DEFAULT invocation, except that SERIAL automatically creates the Sequence used by the DEFAULT constraint.
CREATE TABLE tbl (i int GENERATED ALWAYS AS IDENTITY,
+ j INTEGER);
+INSERT INTO tbl(i,j) VALUES (DEFAULT,32);
+
+INSERT INTO tbl(j) VALUES (23);
+
+SELECT * FROM tbl;
+ i | j
+---+----
+ 1 | 32
+ 2 | 23
+(2 rows)
+
The AUTO_INC (identity) invocation attaches an auto-increment constraint to the column. Unlike a default constraint, the auto-increment constraint locates the Sequence associated with the column through a dependency lookup, whereas the default invocation merely sets the column's default value to a nextval expression.
PostgreSQL has a system catalog dedicated to recording Sequence information, pg_sequence, whose structure is as follows:
postgres=# \d pg_sequence
+ Table "pg_catalog.pg_sequence"
+ Column | Type | Collation | Nullable | Default
+--------------+---------+-----------+----------+---------
+ seqrelid | oid | | not null |
+ seqtypid | oid | | not null |
+ seqstart | bigint | | not null |
+ seqincrement | bigint | | not null |
+ seqmax | bigint | | not null |
+ seqmin | bigint | | not null |
+ seqcache | bigint | | not null |
+ seqcycle | boolean | | not null |
+Indexes:
+ "pg_sequence_seqrelid_index" PRIMARY KEY, btree (seqrelid)
+
It is easy to see that pg_sequence records all of a Sequence's attributes, which are set through CREATE/ALTER SEQUENCE; nextval and setval routinely open this catalog and follow the rules it records.
The sequence data itself is implemented on top of a heap table with exactly three columns, whose structure is as follows:
typedef struct FormData_pg_sequence_data
+{
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} FormData_pg_sequence_data;
+
last_value records the Sequence's current value, which we call the page value (to distinguish it from the cached value introduced later).
log_cnt records how many extra fetches were pre-logged into WAL when nextval was called; this is covered in detail in the analysis of the value allocation mechanism.
is_called marks whether the Sequence's last_value has already been handed out; for example, setval can set the is_called field:
-- setval false
+postgres=# select setval('seq', 10, false);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | f
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 10
+(1 row)
+
+-- setval true
+postgres=# select setval('seq', 10, true);
+ setval
+--------
+ 10
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 10 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 11
+(1 row)
+
Whenever a user creates a Sequence object, PostgreSQL creates a heap table with the structure above to store the Sequence's data. When the Sequence's value changes because of nextval or setval, PostgreSQL updates the three fields of this single row in place.
Taking setval as an example, the logic below shows the concrete in-place update process.
static void
+do_setval(Oid relid, int64 next, bool iscalled)
+{
+
+ /* Open the sequence's heap relation and lock it */
+ init_sequence(relid, &elm, &seqrel);
+
+ ...
+
+ /* Lock the buffer and extract the tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ ...
+
+ /* Update the tuple in place */
+ seq->last_value = next; /* last fetched number */
+ seq->is_called = iscalled;
+ seq->log_cnt = 0;
+
+ ...
+
+ /* Release the buffer lock and the relation lock */
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);
+}
+
As can be seen, do_setval directly modifies this tuple in the Sequence's heap table instead of using the delete-plus-insert style of update applied to ordinary heap tables. nextval works similarly, except that last_value has to be computed rather than supplied by the user.
Having explained how a Sequence object exists inside the kernel, we now need to explain how a sequence value is handed out, i.e. the nextval method. Its concrete implementation is the nextval_internal function in sequence.c, whose core job is to compute last_value and log_cnt.
The relationship between last_value and log_cnt is shown in the figure below:
Here log_cnt is a reserved number of fetches. Its default is 32, determined by the macro below:
/*
+ * We don't want to log each fetching of a value from a sequence,
+ * so we pre-log a few fetches in advance. In the event of
+ * crash we can lose (skip over) as many values as we pre-logged.
+ */
+#define SEQ_LOG_VALS 32
+
Each time last_value is advanced by one increment, log_cnt is decremented by 1.
When log_cnt reaches 0, or after a checkpoint has occurred, a WAL record is written: the page value stored in the WAL record is set according to the formula below, and log_cnt is reset to SEQ_LOG_VALS.
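The formula can be sketched as follows; this is a simplification that assumes a cache of 1, but it is consistent with the crash example further below, where recovery resumes at 33 = 1 + 32 * 1.
#include <stdint.h>

#define SEQ_LOG_VALS 32

/* Simplified view of the sequence page (cf. FormData_pg_sequence_data). */
typedef struct SeqPage
{
    int64_t last_value;     /* value most recently handed out */
    int64_t log_cnt;        /* fetches still covered by the last WAL record */
} SeqPage;

/* Value written into the WAL record when log_cnt runs out or a checkpoint
 * has occurred: it pre-logs SEQ_LOG_VALS future fetches, so after a crash
 * recovery resumes from this value rather than from last_value. */
static int64_t
wal_logged_value(const SeqPage *seq, int64_t increment)
{
    return seq->last_value + SEQ_LOG_VALS * increment;
}
Between two WAL writes the logged value stays ahead of the page value by log_cnt increments: handing out one value advances last_value by one increment and decrements log_cnt by one, so the invariant holds until the next WAL record is written.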
In this way PostgreSQL does not have to write WAL every time nextval modifies last_value in the page. In the case where nextval has to modify the page value on every call, this optimization reduces the WAL write frequency by a factor of 32. The cost is that if a crash occurs before a checkpoint has been taken, a stretch of sequence values is lost, as shown below:
postgres=# create sequence seq;
+CREATE SEQUENCE
+postgres=# select nextval('seq');
+ nextval
+---------
+ 1
+(1 row)
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 1 | 32 | t
+(1 row)
+
+-- crash and restart
+
+postgres=# select * from seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 33 | 0 | t
+(1 row)
+
+postgres=# select nextval('seq');
+ nextval
+---------
+ 34
+(1 row)
+
Clearly, after the crash the Sequence has a gap from 2 to 33, but this cost is acceptable because uniqueness is not violated, and in the right scenarios the WAL write frequency is greatly reduced.
From the description above it is clear that every sequence fetch has to take a buffer lock to modify the page, which means the Sequence's concurrency is rather poor.
To address this, PostgreSQL uses a per-session cache (Session Cache) for Sequences to pre-allocate a batch of values and improve concurrency, as shown below:
The Sequence Session Cache is implemented as a hash table with a fixed number of 16 entries, keyed by the Sequence's OID, which is used to look up the already cached sequence values. The structure of a cached entry is as follows:
typedef struct SeqTableData
+{
+ Oid relid; /* Sequence OID(hash key) */
+ int64 last; /* value last returned by nextval */
+ int64 cached; /* last value already cached for nextval */
+ int64 increment; /* copy of sequence's increment field */
+} SeqTableData;
+
Here last is the Sequence's current value within the session (current_value), cached is the cached value within the session (cached_value), and increment records the step; these three values are enough to support Sequence caching.
The relationship between the Sequence Session Cache and the page value is shown in the figure below:
Similar to log_cnt, cache_cnt is the CACHE size the user specifies when defining the Sequence, with a minimum of 1. Only when the values in the cache domain are exhausted does the session take the buffer lock and modify the Sequence's page value. The adjustment process is shown below:
For example, with CACHE set to 20, once the cache is used up the session tries to lock the buffer, adjusts the page value, and fetches another 20 increments into the cache. For the figure above, the quantities are related as follows:
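One way these quantities fit together is sketched below; the code is a simplification that follows the SeqTableData fields shown earlier, ignores initialization, bounds checking and WAL logging, and is not the actual PostgreSQL implementation. The point is that only the refill path touches the sequence page (and hence needs the buffer lock), while the fast path stays entirely in session memory.
#include <stdint.h>

/* Mirrors the SeqTableData fields shown above (simplified). */
typedef struct SeqCacheEntry
{
    int64_t last;           /* current_value: last value handed out here */
    int64_t cached;         /* cached_value: last value pre-allocated here */
    int64_t increment;      /* copy of the sequence's step */
} SeqCacheEntry;

/* Cached nextval path, assuming cache_cnt values per refill (e.g. CACHE 20). */
static int64_t
nextval_cached(SeqCacheEntry *e, int64_t *page_last_value, int64_t cache_cnt)
{
    if (e->last != e->cached)                   /* fast path: cache not empty */
    {
        e->last += e->increment;
        return e->last;
    }

    /* Slow path: take the buffer lock (elided) and fetch a whole batch. */
    e->last   = *page_last_value + e->increment;            /* first of batch */
    e->cached = e->last + (cache_cnt - 1) * e->increment;   /* last of batch */
    *page_last_value = e->cached;               /* page jumps to batch end */
    return e->last;
}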
With the Sequence Session Cache in place, the concurrency of nextval improves dramatically; below is a comparison of pgbench stress-test results.
In PostgreSQL the Sequence is a special kind of table-level object that offers a simple yet rich SQL interface, making it easy for users to create and use customized sequence objects. Beyond that, Sequences combine with many other kernel components, which greatly extends the scenarios in which they are used.
This article described the design of the Sequence object in the PostgreSQL kernel in detail, starting from the object's metadata and its data to explain how a Sequence is composed. It then covered the most essential SQL interface, nextval, in three aspects: how the sequence value is computed, how it is updated in place, and how WAL writes are reduced. Finally, it explained the Sequence Session Cache: how sequence values are computed and kept aligned between the cache and the page once the cache is introduced, and how nextval performs before and after the cache in single-sequence and multi-sequence concurrency scenarios.