On Linux we set swappiness to 0 and disable transparent huge pages (THP). ext4 with no tuning is the most common file system.
We make no changes to Windows settings apart from disabling scheduled disk defragmentation (also called "Disk optimization").
We also enable parallel bucket and view compaction. The fragmentation threshold differs from the default settings and depends on the test case and workload.
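As an illustration, these compaction settings can be applied through the cluster-wide auto-compaction REST endpoint; this is a minimal sketch, where the host, credentials, and 30% thresholds are placeholder values rather than the exact settings used in any given test:

```python
import requests  # third-party HTTP client

# Minimal sketch: enable parallel database/view compaction and set
# fragmentation thresholds via /controller/setAutoCompaction.
# Host, credentials and thresholds below are placeholders.
resp = requests.post(
    'http://127.0.0.1:8091/controller/setAutoCompaction',
    auth=('Administrator', 'password'),
    data={
        'parallelDBAndViewCompaction': 'true',
        'databaseFragmentationThreshold[percentage]': 30,
        'viewFragmentationThreshold[percentage]': 30,
    },
)
resp.raise_for_status()
```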
Changing internal settings:
```
curl -XPOST -u user:password http://host:port/internalSettings -d 'maxBucketCount=30'
```
Passing alternative "swt":
```
ERL_AFLAGS="+swt low" /etc/init.d/couchbase-server restart
```
Changing number of vbuckets per bucket:
```
COUCHBASE_NUM_VBUCKETS=64 /etc/init.d/couchbase-server restart
```
Changing memcached settings in 3.0:
```
curl -XPOST -u user:password http://host:port/diag/eval -d 'ns_config:set({node, node(), {memcached, extra_args}}, ["-t12", "-c5000"]).'
```
Changing number of shards:
```
wget -O- --user=Administrator --password=password --post-data='ns_bucket:update_bucket_props("default", [{extra_config_string, "max_num_shards=1"}]).' http://127.0.0.1:9000/diag/eval
```
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Mutate (update) all items in dataset. Wait for persistence and TAP replication.
- Trigger bucket compaction, report total compaction throughput (MBytes/sec) measured as:
(data_disk_size_before_compaction - data_disk_size_after_compaction) / total_compaction_time
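In other words, the reported throughput is the disk space reclaimed by compaction divided by the time compaction took. A minimal sketch of the calculation, with hypothetical sizes and duration:

```python
def compaction_throughput_mb_per_sec(disk_size_before: int,
                                     disk_size_after: int,
                                     total_compaction_time: float) -> float:
    """Reclaimed bytes divided by compaction time, reported in MBytes/sec."""
    reclaimed_bytes = disk_size_before - disk_size_after
    return reclaimed_bytes / (1024 * 1024) / total_compaction_time

# Hypothetical numbers: 20 GB before, 12 GB after, 10 minutes of compaction.
print(compaction_throughput_mb_per_sec(20 * 1024**3, 12 * 1024**3, 600))  # ~13.7 MB/s
```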
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Mutate all items in dataset. Wait for persistence and TAP replication.
- Trigger index build, wait for indexing to finish.
- Trigger index compaction, report total compaction throughput (MBytes/sec) measured as:
(views_disk_actual_size_before_compaction - views_disk_actual_size_after_compaction) / total_compaction_time
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, report total indexing time in minutes.
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Double dataset. Wait for persistence and TAP replication.
- Compact bucket.
- Trigger index build, report total indexing time in minutes.
- Disable auto-compaction.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start read-heavy (70/30) front-end workload with high cache miss ratio (~40%).
- Run workload for predefined time (e.g., 1 hour), report average `ep_bg_fetched` per node.
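One way to sample a per-node stat like this is to read the memcached stats of every node and average the values. A minimal sketch, assuming the stat is collected with `cbstats`; the node list, bucket name, and binary path are placeholders:

```python
import subprocess

# Placeholder node list and cbstats location.
NODES = ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4']
CBSTATS = '/opt/couchbase/bin/cbstats'

def ep_bg_fetched(node: str, bucket: str = 'default') -> int:
    """Read the ep_bg_fetched counter from one node's memcached stats."""
    out = subprocess.check_output(
        [CBSTATS, f'{node}:11210', '-b', bucket, 'all'], text=True)
    for line in out.splitlines():
        key, _, value = line.partition(':')
        if key.strip() == 'ep_bg_fetched':
            return int(value.strip())
    raise KeyError('ep_bg_fetched not found')

values = [ep_bg_fetched(node) for node in NODES]
print('average ep_bg_fetched per node:', sum(values) / len(values))
```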
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (90/10) front-end workload.
- Run workload for predefined time (e.g., 30 minutes), report average `ep_diskqueue_drain` per node.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (80/20) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of OBSERVE latency.
- Create initial non-DGM dataset, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of SET and GET latency.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload with high cache miss ratio (~30%).
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of GET latency.
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour).
- Restart all nodes, report master's `ep_warmup_time` in minutes.
- Single node, single bucket.
- Create initial dataset, wait for persistence and TAP replication.
- Read all data via TAP or UPR protocol, report average throughput (items/sec) measured as:
total_items / total_time
- Create initial non-DGM or DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start read-heavy (80/20) front-end workload with bounded view queries.
- Run workload for predefined time (e.g., 1 hour), report 80th percentile of query latency (`stale=update_after` or `stale=false`).
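The `stale` parameter is part of the regular view query interface. A minimal sketch of timing one bounded query against the view REST port; the host, bucket, design document, and view names are placeholders:

```python
import time
import requests

# Placeholder host, bucket, design document and view names (view port 8092).
VIEW_URL = 'http://10.0.0.1:8092/bucket-1/_design/ddoc1/_view/by_category'

def timed_query(stale: str = 'update_after', limit: int = 20) -> float:
    """Issue one bounded view query and return its latency in seconds."""
    t0 = time.time()
    resp = requests.get(VIEW_URL, params={'stale': stale, 'limit': limit})
    resp.raise_for_status()
    return time.time() - t0

latencies = sorted(timed_query('false') for _ in range(100))
print('80th percentile:', latencies[int(0.8 * len(latencies))])
```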
- Create initial non-DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start read-heavy (80/20) front-end workload with unbounded view queries.
- Run workload for predefined time (e.g., 1 hour), report average `couch_view_ops`.
- Disable auto-compaction.
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Compact bucket.
- Initialize remote replication, report average replication rate (items/sec).
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Initialize remote replication, wait for initial replication, wait for persistence and TAP replication.
- Compact buckets.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 3 hours), report 90th percentile of XDCR lag.
The way lag is measured is based on the following timeline:
t0 - client performs SET operation for key X on the source cluster.
t1 - client receives response from the source cluster.
t2 - client starts repeating GET requests for key X (with a progressive polling interval) on the destination cluster.
t3 - client receives a successful response from the destination cluster.
The lag is calculated as t3 - t2.
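A minimal sketch of that measurement loop, assuming hypothetical `src` and `dst` bucket clients that expose `set()` and `get()`; the polling intervals are illustrative, not the exact schedule used by the harness:

```python
import time

def xdcr_lag(src, dst, key: str, value: bytes, max_wait: float = 60.0) -> float:
    """Measure XDCR lag for one key as t3 - t2 from the timeline above.

    `src` and `dst` are hypothetical bucket clients with set()/get();
    any SDK offering equivalent calls would work the same way.
    """
    src.set(key, value)          # t0: SET on the source cluster
    t2 = time.time()             # t1/t2: SET acknowledged, start polling
    interval = 0.001             # progressive polling interval
    while time.time() - t2 < max_wait:
        try:
            if dst.get(key) == value:
                return time.time() - t2   # t3 - t2
        except KeyError:
            pass                  # key has not been replicated yet
        time.sleep(interval)
        interval = min(interval * 2, 0.25)
    raise TimeoutError(f'{key} not replicated within {max_wait}s')
```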
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove/swap nodes and trigger cluster rebalance.
- Terminate front-end workload, report total rebalance time in minutes.
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start mixed (50/50) front-end workload with bounded view queries.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove/swap nodes and trigger cluster rebalance.
- Terminate front-end workload, report total rebalance time in minutes.
- Create initial dataset, wait for persistence and TAP replication.
- Initialize remote replication, wait for initial replication, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact buckets.
- Start mixed (50/50) front-end workload.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove nodes on source and destination sides, trigger cluster rebalance.
- Terminate front-end workload, report total rebalance time in minutes.