On Linux we set swappiness to 0 and disable transparent huge pages (THP). ext4 with no tuning is the most common file system.
We make no changes to Windows settings apart from disabling scheduled disk defragmentation (also called "Disk optimization").
We also enable parallel bucket and view compaction. The fragmentation threshold differs from the default setting and depends on the test case and workload (example commands below).
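For reference, a minimal sketch of how these Linux settings are usually applied (exact paths and persistence mechanisms vary by distribution, so treat this as an illustration rather than our provisioning scripts):

# Minimize swapping (persist via /etc/sysctl.conf if needed)
sysctl -w vm.swappiness=0
# Disable transparent huge pages until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

Parallel compaction and the fragmentation thresholds can be set through the auto-compaction REST API; the percentages below are placeholders, not the values used in every test:

curl -XPOST -u Administrator:password http://host:8091/controller/setAutoCompaction -d 'parallelDBAndViewCompaction=true' -d 'databaseFragmentationThreshold[percentage]=30' -d 'viewFragmentationThreshold[percentage]=30'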
Changing internal settings:
curl -XPOST -u Administrator:password http://host:port/internalSettings -d 'maxBucketCount=30'
Passing alternative "swt":
ERL_AFLAGS="+swt low" /etc/init.d/couchbase-server restart
Changing number of vbuckets per bucket:
COUCHBASE_NUM_VBUCKETS=64 /etc/init.d/couchbase-server restart
Changing memcached settings in 3.0:
curl -XPOST -u Administrator:password -d 'ns_config:set({node, node(), {memcached, extra_args}}, ["-t12", "-c5000"]).' http://127.0.0.1:8091/diag/eval
Changing number of shards:
curl -XPOST -u Administrator:password -d 'ns_bucket:update_bucket_props("default", [{extra_config_string, "max_num_shards=1"}]).' http://127.0.0.1:8091/diag/eval
Monitoring Erlang run queues:
wget -O- -q --user=Administrator --password=password --post-data 'erlang:statistics(run_queues).' http://127.0.0.1:8091/diag/eval
Emulating WAN conditions (delay, jitter, loss, duplication, corruption) with tc:
# Reset any existing queueing discipline on em1
tc qdisc del dev em1 root
tc qdisc add dev em1 handle 1: root htb
tc class add dev em1 parent 1: classid 1:1 htb rate 1gbit
tc class add dev em1 parent 1:1 classid 1:11 htb rate 1gbit
# Add 100ms +/- 5ms delay plus light loss, duplication and corruption
tc qdisc add dev em1 parent 1:11 handle 10: netem delay 100ms 5ms loss 0.01% 50% duplicate 0.005% corrupt 0.005%
# Apply the impairment only to traffic destined for 172.23.100.19
tc filter add dev em1 protocol ip prio 1 u32 match ip dst 172.23.100.19 flowid 1:11
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Mutate (update) all items in the dataset. Wait for persistence and TAP replication.
- Trigger bucket compaction (see the REST sketch after this list), report total compaction throughput (MBytes/sec) measured as:
(data_disk_size_before_compaction - data_disk_size_after_compaction) / total_compaction_time
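The compaction step above can be driven through the REST API; a minimal sketch, assuming the bucket is named "default" and the usual admin credentials (both are placeholders):

# Trigger full compaction of the bucket
curl -XPOST -u Administrator:password http://host:8091/pools/default/buckets/default/controller/compactBucket
# Follow compaction progress via the tasks endpoint
curl -s -u Administrator:password http://host:8091/pools/default/tasks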
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Mutate all items in the dataset. Wait for persistence and TAP replication.
- Trigger index build, wait for indexing to finish.
- Trigger index compaction (see the sketch after this list), report total compaction throughput (MBytes/sec) measured as:
(views_disk_actual_size_before_compaction - views_disk_actual_size_after_compaction) / total_compaction_time
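Index (view) compaction is triggered per design document; a sketch with placeholder bucket and design document names (the design document name is URL-encoded in the path):

# Compact the index files of one design document
curl -XPOST -u Administrator:password http://host:8091/pools/default/buckets/default/ddocs/_design%2Fddoc1/controller/compactView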
- Disable auto-compaction.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, report total indexing time in minutes (see the sketch after this list).
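Defining design documents and forcing a full index build go through the view engine on port 8092; the design document, view name, and map function below are placeholders, and a password-less "default" bucket is assumed:

# Define a design document with a single view
curl -XPUT -H 'Content-Type: application/json' http://host:8092/default/_design/ddoc1 -d '{"views": {"by_field": {"map": "function (doc, meta) { emit(doc.field, null); }"}}}'
# Trigger a full index build by querying with stale=false
curl -s 'http://host:8092/default/_design/ddoc1/_view/by_field?stale=false&limit=1' > /dev/null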
- Disable auto-compaction, disable automatic index updates.
- Create initial dataset, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Double dataset. Wait for persistence and TAP replication.
- Compact bucket.
- Trigger index build, report total indexing time in minutes.
- Disable auto-compaction.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start read-heavy (70/30) front-end workload with high cache miss ratio (~40%).
- Run workload for predefined time (e.g., 1 hour), report average `ep_bg_fetched` per node (see the cbstats sketch after this list).
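`ep_bg_fetched` and the other per-node ep_* counters used below can be sampled with cbstats on each node (default installation path, memcached port, and a password-less "default" bucket are assumed):

# Sample the background fetch counter on one node
/opt/couchbase/bin/cbstats localhost:11210 all | grep ep_bg_fetched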
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (90/10) front-end workload.
- Run workload for predefined time (e.g., 30 minutes), report average `ep_diskqueue_drain` per node.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start write-heavy (80/20) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of OBSERVE latency.
- Create initial non-DGM dataset, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of SET and GET latency.
- Create initial DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload with high cache miss ratio (~30%).
- Run workload for predefined time (e.g., 1 hour), report 95th percentile of GET latency.
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 1 hour).
- Restart all nodes, report master's `ep_warmup_time` in minutes (see the cbstats sketch after this list).
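Warmup progress and timing are exposed through the warmup group of cbstats on the restarted node; a minimal sketch, assuming the default port and a password-less bucket:

# Check warmup state and total warmup time
/opt/couchbase/bin/cbstats localhost:11210 warmup | grep -E 'ep_warmup_state|ep_warmup_time'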
- Single node, single bucket.
- Create initial dataset, wait for persistence and TAP replication.
- Read all data via TAP or UPR protocol, report average throughput (items/sec) measured as:
total_items / total_time
- Keys look like 'AB_972518218995_0'.
- Values consist of several fields like this one: '{"pn": "972516875596", "nam": "XxxxPhone_i5qbqg7iqugeg96v"}'.
- Single node, single bucket.
- Load 5M items, 700-1400 bytes, average 1KB (11-22 fields).
- Append data:
  1. Mark first 80% of items as working set.
  2. Randomly update 75% of items in working set by adding 1 field at a time (62 bytes).
  3. Mark first 40% of items as working set.
  4. Randomly update 75% of items in working set by adding 1 field at a time (62 bytes).
  5. Mark first 20% of items as working set.
  6. Randomly update 75% of items in working set by adding 1 field at a time (62 bytes).
- Repeat step #5 5 times.
- Create initial non-DGM or DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start read-heavy (80/20) front-end workload with bounded view queries.
- Run workload for predefined time (e.g., 1 hour), report 80th percentile of query latency (`stale=update_after` or `stale=false`); see the example query after this list.
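A bounded view query restricts the key range and page size; a sketch against the hypothetical design document shown earlier (bucket, keys, and limit are placeholders):

# Bounded query: limited key range and page size, index updated after the response is returned
curl -s 'http://host:8092/default/_design/ddoc1/_view/by_field?stale=update_after&startkey="A"&endkey="B"&limit=20'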
- Create initial non-DGM dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start read-heavy (80/20) front-end workload with unbounded view queries.
- Run workload for predefined time (e.g., 1 hour), report average `couch_view_ops` (see the stats call after this list).
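`couch_view_ops` is one of the per-bucket statistics exposed by ns_server; it can be pulled from the bucket stats REST endpoint (bucket name is a placeholder):

# Per-bucket stats samples, including couch_view_ops, for the last minute
curl -s -u Administrator:password 'http://host:8091/pools/default/buckets/default/stats?zoom=minute'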
- Disable auto-compaction.
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Compact bucket.
- Initialize remote replication, report average replication rate (items/sec); see the REST sketch after this list.
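Remote replication is initialized via the REST API: first a remote cluster reference, then a replication definition. Hostnames, cluster reference name, and bucket names below are placeholders:

# Register the destination cluster on the source cluster
curl -XPOST -u Administrator:password http://source_host:8091/pools/default/remoteClusters -d 'name=east' -d 'hostname=dest_host:8091' -d 'username=Administrator' -d 'password=password'
# Start continuous replication from the source bucket to the destination bucket
curl -XPOST -u Administrator:password http://source_host:8091/controller/createReplication -d 'fromBucket=default' -d 'toCluster=east' -d 'toBucket=default' -d 'replicationType=continuous'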
- Create initial dataset (source cluster), wait for persistence and TAP replication.
- Initialize remote replication, wait for initial replication, wait for persistence and TAP replication.
- Compact buckets.
- Start mixed (50/50) front-end workload.
- Run workload for predefined time (e.g., 3 hours), report 90th percentile of XDCR lag.
The lag is measured based on the following timeline:
t0 - client performs SET operation for key X on source cluster.
t1 - client receives response from source cluster.
t2 - client starts repeating GET requests for key X (with progressive polling interval) on destination.
t3 - client receives successful response from destination cluster.
The lag is calculated as t3 - t2.
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove/swap nodes (see the CLI sketch after this list).
- Trigger cluster rebalance, wait for rebalance to finish.
- Wait for predefined time (e.g., 20 minutes).
- Terminate front-end workload, report total rebalance time in minutes.
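Adding a node and rebalancing can be scripted with couchbase-cli; flags differ slightly between releases, and the IP addresses below are placeholders, so treat this as a sketch:

# Add a node and rebalance the cluster
/opt/couchbase/bin/couchbase-cli rebalance -c 172.23.100.11:8091 -u Administrator -p password --server-add=172.23.100.12:8091 --server-add-username=Administrator --server-add-password=password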
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Start mixed (50/50) front-end workload.
- Wait for predefined time (e.g., 20 minutes).
- "Failover" one node (see the CLI sketch after this list).
- Add it back.
- Wait for predefined time (e.g., 10 minutes).
- Trigger cluster rebalance, wait for rebalance to finish.
- Wait for predefined time (e.g., 20 minutes).
- Terminate front-end workload, report total rebalance time in minutes.
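Failover and re-adding a node can also be driven with couchbase-cli (again a sketch: exact sub-commands vary by version, and the IP addresses are placeholders):

# Hard failover one node
/opt/couchbase/bin/couchbase-cli failover -c 172.23.100.11:8091 -u Administrator -p password --server-failover=172.23.100.12:8091
# Add the node back, then rebalance
/opt/couchbase/bin/couchbase-cli server-readd -c 172.23.100.11:8091 -u Administrator -p password --server-add=172.23.100.12:8091
/opt/couchbase/bin/couchbase-cli rebalance -c 172.23.100.11:8091 -u Administrator -p password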
- Create initial dataset, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact bucket.
- Define design documents.
- Trigger index build, wait for indexing to finish.
- Start mixed (50/50) front-end workload with bounded view queries.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove/swap nodes and trigger cluster rebalance.
- Wait for rebalance to finish.
- Wait for predefined time (e.g., 20 minutes).
- Terminate front-end workload, report total rebalance time in minutes.
- Create initial dataset, wait for persistence and TAP replication.
- Initialize remote replication, wait for initial replication, wait for persistence and TAP replication.
- Create working set via update or read operations, wait for persistence and TAP replication.
- Compact buckets.
- Start mixed (50/50) front-end workload.
- Wait for predefined time (e.g., 20 minutes).
- Add/remove nodes on the source and destination sides, trigger cluster rebalance.
- Terminate front-end workload, report total rebalance time in minutes.