Tracking issue: improve VTOrc topo+backend efficiency #17330
Labels:
- Component: VTOrc (Vitess Orchestrator integration)
- Type: Enhancement (logical improvement, somewhere between a bug and feature)
- Type: Performance
Summary
This issue tracks some efficiency improvements to VTOrc, covering its topology calls, its backend storage, and the `vtorc` process itself.

Areas to Improve: Topology Calls
Number of Calls 🟡 (ongoing)
Every time VTOrc tries to fetch a subset of tablets, for example just a single shard, it is really fetching ALL tablets from ALL cells behind the scenes. This isn't unique to VTOrc: VTGate does the same when `--keyspaces_to_watch` is set, for example.

Compounding this, the current logic loads shards concurrently, meaning we're pulling `(N x tablets) x keyspace/shards` concurrently. With 10,000 tablets and 3,000 shards in 3 cells, loading all shards will trigger 9,000 concurrent `List` calls (3,000 shards x 3 cells), each returning about 3,333 tablets. In total that is about 29 million topo records returned in 9,000 `List` calls 😬. Loading a single shard (usually a handful of tablets) causes 3 x `List` calls that each return about 3,333 tablets (10k total), and 99%+ of those records will be thrown away.

In other areas of the code, state that could be cached/shared is frequently re-fetched from the topo. This PR hopes to address one case of this: #17319
Concurrency 🟡 (ongoing)
In many places in the VTOrc logic (and other parts of Vitess), the `--topo_read_concurrency` limit is not respected. The majority of cases are due to an unbounded `for` loop launching goroutines under `sync.WaitGroup`s.

The following PR started to address the unbounded goroutines (and thus topo connections) being used in VTOrc topo calls: #17071

The following PR hopes to address this more generally by adding global read limits to the topo: #17276
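For context, a minimal sketch of the bounded pattern (a buffered channel as a semaphore under a `sync.WaitGroup`). The function names are hypothetical, not VTOrc's actual code; the peak-concurrency tracking is only there to demonstrate that the cap holds.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runBounded launches one goroutine per task but caps how many run at
// once, using a buffered channel as a semaphore. It returns the peak
// observed concurrency, which can never exceed the limit.
func runBounded(tasks int, limit int64, work func(int)) int64 {
	var (
		wg      sync.WaitGroup
		current atomic.Int64
		peak    atomic.Int64
	)
	sem := make(chan struct{}, limit) // capacity = concurrency limit

	for i := 0; i < tasks; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem <- struct{}{} // blocks once `limit` goroutines hold a slot
			defer func() { <-sem }()

			// Track peak concurrency (CAS loop keeps it monotonic).
			c := current.Add(1)
			for {
				p := peak.Load()
				if c <= p || peak.CompareAndSwap(p, c) {
					break
				}
			}
			defer current.Add(-1)

			work(i)
		}(i)
	}
	wg.Wait()
	return peak.Load()
}

func main() {
	peak := runBounded(100, 8, func(i int) {})
	fmt.Printf("peak concurrency: %d (limit 8)\n", peak)
}
```

`golang.org/x/sync/errgroup` with `SetLimit` achieves the same bound with less boilerplate, and is another candidate for these call sites.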
Topo Schema 🔴 (deferred)
As discussed in #16761, the topo schema is not ideal for calls that only want a subset of tablets. This issue discusses new index-like functionality for the topo that would allow efficient filtering of tablets without the need to "fetch the world".
Poll Model 🔴 (deferred)
Topo updates are polled on a fixed ticker loop. To reduce delays in picking up topo updates, the ticker interval is fairly short (default `15s`). This is somewhat inefficient because polls occur regardless of whether anything changed.

Using the (newer) `WatchRecursive` feature of the Vitess topo would let VTOrc know when changes occur, allowing it to fetch all tablets much less frequently. I suggest a fixed ticker poll still exists if this model is used, but it could run at a much lower frequency.
Areas to Improve: Backend

Regex ✅ (done)
It was found that the sqlite3 backend code was using regexes to translate large SQL query text from ANSI SQL syntax to sqlite3. This was resolved in #17124.

On a deployment with a small number of tablets (< 100) this reduced CPU usage by roughly 50%. On deployments with more tablets, the high overhead of topo calls seems to reduce the wins.
CPU reduction post #17124 + #17071 (graph omitted).
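I haven't verified the exact mechanics of the #17124 fix, but a common remedy for repeated dialect translation is memoizing the result per distinct query text, as sketched below (the translation rule shown is invented for illustration and does not match `sqlutils`' real rules).

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// translateDialect stands in for an expensive regex-based rewrite of
// ANSI SQL into sqlite3 syntax. This toy version only swaps one keyword.
func translateDialect(ansi string) string {
	return strings.ReplaceAll(ansi, "NOW()", "datetime('now')")
}

var translateCache sync.Map // ansi SQL text -> translated SQL text

// cachedTranslate memoizes translations so each distinct query text is
// rewritten only once, instead of on every execution.
func cachedTranslate(ansi string) string {
	if v, ok := translateCache.Load(ansi); ok {
		return v.(string)
	}
	out := translateDialect(ansi)
	translateCache.Store(ansi, out)
	return out
}

func main() {
	q := "SELECT NOW()"
	fmt.Println(cachedTranslate(q)) // first call translates
	fmt.Println(cachedTranslate(q)) // second call hits the cache
}
```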
Queries without Index 🟡 (ongoing)
`FullStatus` RPCs used by VTOrc do not include the alias of the current replication source. Due to this, VTOrc needs to perform unindexed queries on the backend database using `source_host` and `source_port`.

This issue proposes that `FullStatus` include a `source_alias` field so that this lookup can use the `alias` primary key: #17297

An alternative that was considered is indexing `source_host` and `source_port`, but I feel using the primary key and exposing the `source_alias` is more useful.
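The difference between the two lookup paths can be illustrated with in-memory stand-ins: a keyed map models a primary-key lookup, while a slice scan models the current unindexed `source_host`/`source_port` query. The `instance` type below is hypothetical and much simpler than VTOrc's real schema.

```go
package main

import "fmt"

// instance loosely (and hypothetically) mirrors a per-tablet row.
type instance struct {
	alias      string
	sourceHost string
	sourcePort int
}

// byAlias models a primary-key lookup: a single keyed access.
func byAlias(idx map[string]*instance, alias string) *instance {
	return idx[alias]
}

// bySourceAddr models the current unindexed query: a full scan that
// compares source_host/source_port on every row until a match is found.
func bySourceAddr(all []*instance, host string, port int) (*instance, int) {
	scanned := 0
	for _, in := range all {
		scanned++
		if in.sourceHost == host && in.sourcePort == port {
			return in, scanned
		}
	}
	return nil, scanned
}

func main() {
	all := []*instance{
		{alias: "zone1-100"},
		{alias: "zone1-101", sourceHost: "host-a", sourcePort: 3306},
		{alias: "zone1-102", sourceHost: "host-b", sourcePort: 3306},
	}
	idx := map[string]*instance{}
	for _, in := range all {
		idx[in.alias] = in
	}

	found, scanned := bySourceAddr(all, "host-b", 3306)
	fmt.Printf("scan: found %s after %d rows\n", found.alias, scanned)
	fmt.Printf("pk:   found %s in one lookup\n", byAlias(idx, "zone1-102").alias)
}
```

With a `source_alias` in `FullStatus`, every such lookup becomes the `byAlias` shape regardless of table size.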
Library 🔴 (paused/deferred)

The `modernc.org/sqlite` sqlite3 library used by VTOrc is one of the most inefficient implementations of sqlite3 in golang.

I question whether VTOrc really needs a full ACID/MVCC-capable RDBMS for such simple data storage. I believe the use of an RDBMS is mostly legacy from the `openark/orchestrator` days. I plan to review the suitability of simpler alternatives (for example, in-memory Go `struct`s).

This issue discusses moving the backend to an interface that would allow a change in storage. In this repo I POC'd what a badger-based backend might look like: https://github.com/timvaillancourt/vtorc-badger-store
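A minimal sketch of what such a storage interface might look like, assuming a simple key-value shape (the `Store` interface and `memStore` names are hypothetical; the real refactor would need richer query support than this):

```go
package main

import (
	"fmt"
	"sync"
)

// Store is a hypothetical minimal interface the VTOrc backend could be
// programmed against, so sqlite3, badger, or plain structs become
// interchangeable implementations.
type Store interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, bool)
	Delete(key string) error
}

// memStore is a trivial struct-backed implementation.
type memStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func newMemStore() *memStore { return &memStore{data: map[string][]byte{}} }

func (m *memStore) Put(key string, value []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = value
	return nil
}

func (m *memStore) Get(key string) ([]byte, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	v, ok := m.data[key]
	return v, ok
}

func (m *memStore) Delete(key string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.data, key)
	return nil
}

func main() {
	var s Store = newMemStore()
	s.Put("tablet/zone1-100", []byte(`{"state":"primary"}`))
	if v, ok := s.Get("tablet/zone1-100"); ok {
		fmt.Println(string(v))
	}
}
```

A badger-backed implementation (as in the POC repo above) would satisfy the same interface, which is what makes the swap possible without touching callers.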
Areas to Improve: Code Structure 🔴 (deferred)
The VTOrc code frequently uses plain `func`s vs `struct`s. Code-style gripes aside, this sometimes leads to state that could be shared being re-fetched, or to clunky global vars (plus a dedicated mutex) being used to share the state. These global vars lead to a lot of boilerplate, one example being tests that need to clear the global vars in `defer`s.

I feel a `struct`-method-based approach would allow more opportunities to optimize the logic. It would also be more consistent with the rest of the Vitess codebase.
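The struct-method shape described above could look like the sketch below (names are invented, not VTOrc's actual variables). State and its lock live together, and tests construct a fresh instance instead of clearing package globals in `defer`s.

```go
package main

import (
	"fmt"
	"sync"
)

// Today-style (sketched): package-level state plus a dedicated mutex,
// which every test must reset:
//
//   var shardsMu sync.Mutex
//   var shards   []string
//
// Struct-based alternative: the cache owns its state and lock.
type shardCache struct {
	mu     sync.RWMutex
	shards []string
}

func (c *shardCache) Set(shards []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.shards = append([]string(nil), shards...)
}

func (c *shardCache) Len() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return len(c.shards)
}

func main() {
	c := &shardCache{} // each caller/test gets independent state
	c.Set([]string{"-80", "80-"})
	fmt.Println(c.Len())
}
```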
PRs:
- `vtorc` topo calls #17071
- `vtorc`: make SQL formatting consistent #17154
- `--topo_read_concurrency` #17276
- `vtorc`: fetch shard names only every `--topo-information-refresh-duration` #17319
- `vtorc`: fetch all tablets from cells once + filter during refresh #17388
- `go/vt/topo` (to be used in tablet-indexing topo POC)

Discussions:
- `go/vt/discovery` and `topo` #16761
- `vtorc` to support `--topo_read_concurrency` limit #17073
- `sqlutils.ToSqlite3Dialect` for queries + DML/DDLs #17114
- `vtorc` more consistent #17164
- `vtorc` #17190
- `FullStatus` RPC response #17297