On Linux we set swappiness to 0 and disable transparent huge pages (THP). ext4 with no tuning is the most common file system.
We make no changes to Windows settings apart from disabling scheduled disk defragmentation (also called "Disk optimization").
We also enable parallel bucket and view compaction. The fragmentation threshold differs from the default settings and depends on the test case and workload.
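A minimal sketch of the Linux tuning described above, assuming root privileges and the standard procfs/sysfs paths (this is an illustration, not part of any test harness):

```python
# Sketch: apply the Linux settings used in the test environment.
# Assumes root privileges; sysfs paths may differ slightly between kernels.

def tune_linux():
    # Swappiness 0: the kernel avoids swapping out Couchbase memory.
    with open('/proc/sys/vm/swappiness', 'w') as f:
        f.write('0')

    # Disable transparent huge pages (THP).
    for knob in ('/sys/kernel/mm/transparent_hugepage/enabled',
                 '/sys/kernel/mm/transparent_hugepage/defrag'):
        try:
            with open(knob, 'w') as f:
                f.write('never')
        except IOError:
            pass  # e.g. RHEL 6 uses redhat_transparent_hugepage instead

if __name__ == '__main__':
    tune_linux()
```

The test cases below list the individual procedures and how each reported metric is measured.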
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Mutate (update) all items in dataset. Wait for persistence and TAP replication.
- Trigger bucket compaction, report total compaction throughput (MBytes/sec) measured as:
(data_disk_size_before_compaction - data_disk_size_after_compaction) / total_compaction_time
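A sketch of this throughput calculation; the callables are hypothetical hooks supplied by the test harness, and in practice the disk size comes from the bucket stats (e.g. `couch_docs_actual_disk_size`):

```python
import time

def measure_compaction_throughput(get_data_disk_size, trigger_compaction,
                                  wait_for_compaction):
    """Return bucket compaction throughput in MBytes/sec.

    get_data_disk_size, trigger_compaction and wait_for_compaction are
    hypothetical callables provided by the test harness.
    """
    size_before = get_data_disk_size()        # bytes on disk before compaction
    t0 = time.time()
    trigger_compaction()
    wait_for_compaction()                     # block until compaction finishes
    total_compaction_time = time.time() - t0  # seconds
    size_after = get_data_disk_size()

    reclaimed_mb = (size_before - size_after) / 1024.0 / 1024.0
    return reclaimed_mb / total_compaction_time
```

The index compaction test below reports the same metric with the views disk size substituted for the data disk size.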
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Mutate all items in dataset. Wait for persistence and TAP replication.
- Trigger index build, wait for indexing to finish.
- Trigger index compaction, report total compaction throughput (MBytes/sec) measured as:
(views_disk_size_before_compaction - views_disk_size_after_compaction) / total_compaction_time
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, report total indexing time in minutes.
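Waiting for indexing and reporting the total time can be done by polling the ns_server task list; a rough sketch (`/pools/default/tasks` is the standard REST endpoint, everything else is an assumption):

```python
import time
import requests

def measure_indexing_time(host, user, password, poll_interval=5):
    """Wait until no view indexer task is running and return the elapsed
    time in minutes. Assumes the index build has just been triggered."""
    t0 = time.time()
    while True:
        tasks = requests.get(
            'http://{}:8091/pools/default/tasks'.format(host),
            auth=(user, password)).json()
        if not any(task.get('type') == 'indexer' for task in tasks):
            break
        time.sleep(poll_interval)
    return (time.time() - t0) / 60.0  # minutes
```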
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Double the dataset, wait for persistence and TAP replication.
- Compact bucket.
- Trigger index build, report total indexing time in minutes.
- Disable auto-compaction.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start read-heavy (70/30) front-end workload with high cache miss ratio (~40%).
- Run workload for predefined time (e.g., 1 hour), report average `ep_bg_fetched` per node.
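The per-node average can be collected with `cbstats`; a sketch, where the sampling interval, duration and the way the raw counter is turned into the reported number are assumptions:

```python
import subprocess
import time

def sample_stat(host, bucket, stat='ep_bg_fetched', interval=10, samples=360):
    """Sample one ep-engine stat on a single node via cbstats and return
    the average of the sampled values (360 x 10 s ~= 1 hour)."""
    values = []
    for _ in range(samples):
        out = subprocess.check_output(
            ['cbstats', '{}:11210'.format(host), 'all', '-b', bucket])
        for line in out.decode().splitlines():
            key, _, value = line.partition(':')
            if key.strip() == stat:
                values.append(float(value))
        time.sleep(interval)
    return sum(values) / len(values)
```

The drain rate test below uses the same approach with `ep_diskqueue_drain`.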
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (90/10) front-end workload.
- Run workload for predefined time (e.g., 30 minutes), report average `ep_diskqueue_drain` per node.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (80/20) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of OBSERVE latency.
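Latency percentiles here and in the following tests are computed from the per-operation samples collected during the run; a trivial sketch using the nearest-rank method:

```python
import math

def percentile(samples, p):
    """Return the p-th percentile (0 < p <= 100) of latency samples
    using the nearest-rank method."""
    if not samples:
        raise ValueError('no samples collected')
    ordered = sorted(samples)
    rank = int(math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

# e.g. p95 = percentile(observe_latencies, 95)
```

The same calculation is used for the 95th percentile SET/GET latency, the 95th percentile GET latency and the 80th percentile query latency reported below.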
- Create initial non-DGM dataset, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of SET and GET latency.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload with high cache miss ratio (~30%).
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of GET latency.
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour).
- Restart all nodes, report master's `ep_warmup_time` in minutes.
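A sketch of reading the warmup time from the master node after restart (`cbstats warmup` and the `ep_warmup_thread`/`ep_warmup_time` stats are standard; the microsecond unit is an assumption and should be checked against the server version):

```python
import subprocess
import time

def report_warmup_time(master, bucket, poll_interval=10):
    """Poll the master node's warmup stats until warmup completes and
    return ep_warmup_time converted to minutes (assuming microseconds)."""
    while True:
        out = subprocess.check_output(
            ['cbstats', '{}:11210'.format(master), 'warmup', '-b', bucket])
        stats = {}
        for line in out.decode().splitlines():
            key, _, value = line.partition(':')
            stats[key.strip()] = value.strip()
        if stats.get('ep_warmup_thread') == 'complete':
            return float(stats['ep_warmup_time']) / 1e6 / 60
        time.sleep(poll_interval)
```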
- Single node, single bucket.
- Create initial dataset, wait for persistence and TAP replication.
- Read all data via TAP or UPR protocol, report average throughput (items/sec) measured as:
total_items / total_time
- Create initial non-DGM or DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start read-heavy (80/20) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 80th percentile of query latency ("stale=update_after" or "stale=false").
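A sketch of timing a single query with the stale parameter (the bucket, design document and view names are placeholders; port 8092 is the standard view query API):

```python
import time
import requests

def time_query(host, bucket, ddoc, view, stale='update_after', limit=20):
    """Issue one view query and return its latency in milliseconds.

    stale=false forces the index to be brought up to date before results
    are returned; stale=update_after returns current results and updates
    the index afterwards.
    """
    url = 'http://{}:8092/{}/_design/{}/_view/{}'.format(
        host, bucket, ddoc, view)
    t0 = time.time()
    resp = requests.get(url, params={'stale': stale, 'limit': limit})
    resp.raise_for_status()
    return (time.time() - t0) * 1000
```

The latencies collected this way feed the same percentile calculation shown earlier.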
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start front-end workload.
- Trigger cluster rebalance (IN/OUT/SWAP), report total rebalance time. Note that there is a delay before and after the rebalance.
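Total rebalance time can be measured by polling ns_server; a sketch using the standard `/pools/default/rebalanceProgress` endpoint (credentials and the `start_rebalance` callable are assumptions):

```python
import time
import requests

def measure_rebalance_time(host, user, password, start_rebalance,
                           poll_interval=5):
    """start_rebalance is a hypothetical callable that kicks off the
    IN/OUT/SWAP rebalance via the REST API or couchbase-cli."""
    t0 = time.time()
    start_rebalance()
    while True:
        time.sleep(poll_interval)
        progress = requests.get(
            'http://{}:8091/pools/default/rebalanceProgress'.format(host),
            auth=(user, password)).json()
        if progress.get('status') != 'running':
            break
    return time.time() - t0  # seconds, excluding the pre/post delays
```

The rebalance-with-views test below reports the same metric.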
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start front-end workload with view queries.
- Trigger cluster rebalance (IN/OUT), report total rebalance time. Note that there is a delay before and after the rebalance.
- Disable auto-compaction.
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Compact bucket.
- Initialize remote replication, report average replication rate.
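The average initial replication rate can be derived from the destination bucket's item count; a sketch (the `itemCount` field in the bucket's basicStats is standard, the rest is assumed):

```python
import time
import requests

def measure_init_replication_rate(dest_host, bucket, user, password,
                                  total_items, poll_interval=10):
    """Wait until the destination bucket holds all replicated items and
    return the average replication rate in items/sec."""
    t0 = time.time()
    while True:
        info = requests.get(
            'http://{}:8091/pools/default/buckets/{}'.format(dest_host, bucket),
            auth=(user, password)).json()
        if info['basicStats']['itemCount'] >= total_items:
            break
        time.sleep(poll_interval)
    return total_items / (time.time() - t0)
```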
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Initialize remote replication, wait for initial replication, wait for persistence and TAP replication.
- Compact bucket.
- Run front-end workload, report maximum XDCR lag and maximum XDCR queue length.
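One common way to measure XDCR lag is to write a probe document to the source bucket and poll the destination until it appears; a sketch with the client SDK wrapped behind hypothetical callables:

```python
import time
import uuid

def measure_xdcr_lag(set_on_source, get_on_destination, timeout=300):
    """set_on_source and get_on_destination are hypothetical callables
    wrapping the client SDK for the source and destination clusters.

    Returns the time in milliseconds between writing a probe document to
    the source bucket and observing it on the destination bucket.
    """
    key = 'xdcr_probe_{}'.format(uuid.uuid4().hex)
    t0 = time.time()
    set_on_source(key, {'ts': t0})
    while time.time() - t0 < timeout:
        if get_on_destination(key) is not None:
            return (time.time() - t0) * 1000
        time.sleep(0.05)
    raise RuntimeError('probe document did not replicate within timeout')
```

The maximum XDCR queue length is taken from the replication stats (e.g. `replication_changes_left` in the bucket stats).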