
WIP: segcache: use mutex and jemalloc #459

Draft · brayniac wants to merge 13 commits into master
Conversation

brayniac (Contributor) commented Sep 16, 2022

Draft PR, needs performance evaluation

Changes the threading model to use shared storage protected by a mutex. This
removes the overhead of cross-thread signaling and communication on the
request path in favor of coarse-grained locking.

Sets the global allocator to jemalloc for pelikan_segcache_rs.
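Switching a Rust binary's global allocator to jemalloc is typically done with the `#[global_allocator]` attribute. The sketch below assumes the `tikv-jemallocator` crate; the PR may use a different jemalloc binding:

```rust
// Cargo.toml (assumption — crate choice may differ in the actual PR):
// [dependencies]
// tikv-jemallocator = "0.5"

// Route every heap allocation in this binary through jemalloc.
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

fn main() {
    // Allocations below now go through jemalloc instead of the system allocator.
    let v: Vec<u64> = (0..1024).collect();
    println!("allocated {} elements", v.len());
}
```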

brayniac added 4 commits July 11, 2022 15:41
Replaces the queue between the workers and storage thread with a
locking implementation.
@brayniac brayniac marked this pull request as draft September 16, 2022 19:16
@brayniac brayniac changed the title WIP: segcache: use jemalloc WIP: segcache: use mutex and jemalloc Sep 19, 2022
brayniac (Contributor, Author) commented Sep 19, 2022

For a 9-core configuration on:
- Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz (dual-socket, 72 cores, 144 threads, SMT enabled)
- 100GbE NIC (E810-C), 72 queue pairs, 1 per physical core, NIC local to socket 0
- backend threads pinned to a set of cores on socket 0, avoiding NIC IRQs

With a workload:
- 80:20 Read:Write
- 4-byte keys, 128-byte values
- 1024 connections to the backend

With a backend config:
- 1GB heap, 128KB segment, RandomFifo
- 7 workers + storage thread as baseline
- 8 workers (no storage thread) as experiment

This change raises the redline from 728 kqps to 830 kqps.
