Process Repeaters, Part 1 #35033
Conversation
Indexes on boolean fields typically don't get used, but a useful approach can be to use a partial index that matches the query you are trying to optimize. In this case I think it may work to create a partial index on next_attempt_at with a condition of is_paused=False.
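A minimal sketch of what that could look like as a Django partial index. The field names are taken from this discussion; the index name and the exact model definition are illustrative, not necessarily what the PR does:

from django.db import models

class Repeater(models.Model):
    is_paused = models.BooleanField(default=False)
    next_attempt_at = models.DateTimeField(null=True, blank=True)

    class Meta:
        indexes = [
            # Partial index: only rows with is_paused=False are indexed, so the
            # "ready to send" query can use it even though a plain index on the
            # boolean alone would rarely be chosen by the planner.
            models.Index(
                fields=["next_attempt_at"],
                condition=models.Q(is_paused=False),
                name="next_attempt_not_paused_idx",
            ),
        ]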
Like so? d8d9642
Yes. Still worth testing that to make sure the query uses it.
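One way to check could be to inspect the query plan from a Django shell. The queryset below is only illustrative (the real query is whatever all_ready() builds), and the import path is the repo's models module:

from django.utils import timezone
from corehq.motech.repeaters.models import Repeater

qs = Repeater.objects.filter(is_paused=False, next_attempt_at__lte=timezone.now())
# On PostgreSQL, look for an Index Scan (or bitmap scan) on the partial index
# rather than a sequential scan over the whole table.
print(qs.explain(analyze=True))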
First pass, didn't dig into tests much.
corehq/motech/repeaters/tasks.py
Outdated
for domain, repeater_id, lock_token in iter_ready_repeater_ids_forever():
    process_repeater.delay(domain, repeater_id, lock_token)
Do we need to be checking how long this task has been running for and exit if we are about to exceed the process_repeater_lock timeout value? I assume it is very unlikely that there are always repeaters that we can iterate over, since once we kick them off they will no longer be returned in this query, but is it safe to still have a check of some sort?
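For illustration, the kind of guard being asked about might look something like this. The timeout constant and the five-minute margin are hypothetical; iter_ready_repeater_ids_forever() and process_repeater are the names used in the snippet above:

from datetime import datetime, timedelta

PROCESS_REPEATER_LOCK_TIMEOUT = timedelta(hours=24)  # assumed value for this sketch

started_at = datetime.utcnow()
for domain, repeater_id, lock_token in iter_ready_repeater_ids_forever():
    if datetime.utcnow() - started_at > PROCESS_REPEATER_LOCK_TIMEOUT - timedelta(minutes=5):
        # Stop spawning new work before the coordinating lock can expire.
        break
    process_repeater.delay(domain, repeater_id, lock_token)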
Do we need to be checking how long this task has been running for and exit if we are about to exceed the process_repeater_lock timeout value?

process_repeater() should take a maximum of five minutes, if that repeater's requests are timing out. The lock timeout is three hours.

I expect we would only exceed the timeout if Celery was killed and the update_repeater() task was never called. So the timeout is intended to prevent a repeater from being locked forever.

I think it could be useful to use Datadog to monitor how long the task takes, but probably only to identify problematic endpoints, not to exit the task.
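For example, the duration could be bucketed into a tag on a counter, in the same style as the wait-time buckets mentioned later in this thread. The metric name, the import path, and the process_one_repeater() call are assumptions made for this sketch:

import time
from corehq.util.metrics import metrics_counter

def _duration_bucket(seconds):
    for limit in (60, 300, 900, 3600):
        if seconds < limit:
            return f'lt_{limit}s'
    return 'over_3600s'

start = time.monotonic()
try:
    process_one_repeater(domain, repeater_id)  # placeholder for the real work
finally:
    metrics_counter(
        'commcare.repeaters.process_repeater.duration',
        tags={'domain': domain, 'duration': _duration_bucket(time.monotonic() - start)},
    )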
I may not be understanding, but I'm referring to the process_repeaters task and the process_repeater_lock with the 24-hour timeout.
Sorry for my confusion. I didn't read your question properly.
I wondered about that timeout too, because there is no problem if this loop just keeps going for more than 24 hours as long as new repeat records are being sent in a reasonable time.
What do you think of not having a timeout?
# repeater can use to send repeat records at the same time. This is a
# guardrail to prevent one repeater from hogging repeat_record_queue
# workers and to ensure that repeaters are iterated fairly.
MAX_REPEATER_WORKERS = 144
This is a function of # of celery workers right? It is out of scope of this PR, but it would be pretty cool if we could figure out how many workers celery has dedicated to a specific queue and set this to some percentage of that (looks like that is 50% at the moment).
Yeah, I agree. It would be excellent to be able to calculate this value.
(It is 144 here because that was the number of workers at the time of writing. It was recently doubled.)
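For what it's worth, something along these lines might be possible with Celery's inspect API, though it would need to be verified against how HQ's workers are actually deployed. The queue name and the 50% ratio below are illustrative only:

from celery import current_app

def queue_concurrency(queue_name='repeat_record_queue'):
    """Sum the pool sizes of all workers consuming from queue_name."""
    inspect = current_app.control.inspect()
    active_queues = inspect.active_queues() or {}
    stats = inspect.stats() or {}
    total = 0
    for worker, queues in active_queues.items():
        if any(q['name'] == queue_name for q in queues):
            total += stats.get(worker, {}).get('pool', {}).get('max-concurrency', 0)
    return total

# e.g. allow one repeater to use at most half of the queue's workers
MAX_REPEATER_WORKERS = max(1, queue_concurrency() // 2)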
corehq/motech/repeaters/tasks.py
Outdated
if lock.acquire(blocking=False, token=lock_token):
    yielded = True
    yield domain, repeater_id, lock_token
else:
    metrics_counter(
        'commcare.repeaters.process_repeaters.repeater_locked',
        tags={'domain': domain},
    )
@millerdev @gherceg I wonder whether you can help me to understand what is happening here on Staging.
Take a look at the "Repeater locked by domain" chart on Datadog for a few hours last night.
Staging has one Celery machine, and the repeat records queue has 4 gevent workers. I created 10 repeaters in each of 5 domains, and gave them 1000 repeat records each.
What I expected: for domain, repeater_id in iter_ready_repeater_ids_once() would loop through the 50 repeaters. For each repeater, this loop would lock the repeater, then the workers would send 7 payloads, and then update_repeater() would unlock the repeater (8 tasks across 4 workers). So by the time the outer while True loop got back to the same repeater, it would have been unlocked.

(That is how it works when testing locally with only one worker and CELERY_TASK_ALWAYS_EAGER = True.)
But Datadog shows that that is not what is happening on Staging. I am really scratching my head to understand why all the repeaters are still locked when the outer loop comes back around.
When this function returns, and then gets called again 5 minutes later by the process_repeaters() task, the repeaters are unlocked.

Is update_repeater() never getting called, maybe, and the locks are timing out every ten minutes, so only half the repeaters are processed every five minutes? 🤔 Maybe. But if that is what is happening, then why does update_repeater() get called locally (and in unit tests) but not in practice on Staging?
Is there something about Celery or Staging that I'm missing?
Or can you spot something in the code? Although I wasn't logging repeater lock-out at the time, I am pretty sure it was this commit that changed this behavior: 9c8d9fb -- Prior to returning only distinct repeater IDs, the repeaters did get unlocked, and the outer loop did yield them again.
... Is it simply that Celery chord takes a while?
I've figured out what I'm missing:
The process_repeaters() task (note the final "s") spawns an asynchronous process_repeater() task for every repeater. This takes next to no time at all. Then it loops again, and of course all the repeaters are still locked, because not a single sent payload has got a reply yet. So it exits and waits a minute.
Findings

While working on this change, I found a more performant way to loop through repeaters. I've outlined the approach in the module docstring.

The following Datadog screenshot illustrates how this process works.

The "Repeat records ready" graph shows that about 1,000 repeat records are being processed, then a few hundred new repeat records are added, and then they are all processed.

The "Repeat records by domain" graph shows that at first all repeat records are from the "nhooper" domain. The new repeat records have been added to four other domains. They are processed in a round-robin way where repeat records are pulled from all five domains fairly. Once the repeat records from the four domains have all been sent, the remainder of the repeat records are sent from the "nhooper" domain.

The "Iterated all repeaters once" graph shows this from the perspective of looping over the repeaters with repeat records ready to be sent. Initially, only one repeater has repeat records ready to be sent, and so the loop is run through over and over rapidly. After new repeat records have been added to many repeaters across five domains, the loop takes much longer to loop through all of them. And after the new repeat records have been sent, the loop once again runs through rapidly.

The "Repeat record wait times" graph shows the age of the repeat records that are being sent. The first repeater, in the "nhooper" domain, has the oldest repeat records ("lt_466560s" is less than 5 days).

Lastly, the gaps in the "Repeat records by domain" and "Repeat record wait times" graphs are interesting. They appear to be when the repeat record queue workers are being used by the old process.

Are we there yet?

QA is complete. Metrics have been added for visibility into the process. Further testing built off the work that QA had done, in order to improve how this new process deals with backing off. And then more testing resulted in an improvement to the way in which we keep processing repeaters until all repeat records have been processed.

I think we're there.

Follow-ups
A note to reviewers

If you have reviewed this PR before, and you have a rough idea of what this new process does, then the important changes since you read this last are probably just the last three commits:
corehq/motech/repeaters/tasks.py
Outdated
tasks.append(process_repeater_chord(domain, repeater_id))

if tasks:
    result = group(*tasks).apply_async()
What is the maximum size of tasks? Does group work well with very large lists of tasks?
The maximum possible size is the number of repeaters in the environment if all of the repeaters are in use. In practice, though, the number of repeaters in use at any moment will be a fraction of that.
On Staging I tested with 60 repeaters concurrently in use, across 4 repeat record queue workers.
I think Prod currently has 144 or 288 repeat record queue workers. I'm not sure of the average number of repeaters in use at any moment.
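If the list ever did get very large, one simple mitigation (illustrative only, not something this PR does) would be to enqueue the group in slices rather than all at once:

from celery import group

CHUNK_SIZE = 100  # hypothetical

results = []
for i in range(0, len(tasks), CHUNK_SIZE):
    results.append(group(tasks[i:i + CHUNK_SIZE]).apply_async())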
corehq/motech/repeaters/tasks.py
Outdated
# https://docs.celeryq.dev/en/stable/userguide/tasks.html#avoid-launching-synchronous-subtasks
# but in this situation it is safe because the
# `process_repeater()` tasks use a different queue.
result.get(disable_sync_subtasks=False)
Would it be possible to restructure this to avoid waiting on this result? From the link above
Having a task wait for the result of another task is really inefficient
This wait for result means that the next task cannot be started until all tasks in the current group have completed, which is indeed inefficient. The slowest task will throttle the throughput of the entire queue. For example, if on each pass 100 tasks are enqueued, and 99 of them complete very quickly while one takes 60s, then this loop will execute at most once per minute when throughput could be much higher.
It seems like it would be better if the next task for a slow repeater was delayed (cannot start until the last in-flight task for that repeater has completed) while all remaining queue capacity is used to process tasks as fast as possible. Could this be accomplished by having update_repeater enqueue the next batch of tasks for the repeater if it does not need to back off?
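A very rough sketch of that idea, with the caveat that the task signature, the repeat_records_ready property, and the wiring here are hypothetical rather than the PR's actual API (set_backoff/reset_backoff and process_repeater do appear elsewhere in this PR):

from celery import shared_task

@shared_task(queue='repeat_record_queue')
def update_repeater(repeat_record_states, domain, repeater_id):
    # Repeater and process_repeater are assumed to be the PR's existing model and task.
    repeater = Repeater.objects.get(id=repeater_id)
    if not any(repeat_record_states):
        # Every payload in the batch failed: back off before trying again.
        repeater.set_backoff()
        return
    repeater.reset_backoff()
    if repeater.repeat_records_ready.exists():  # hypothetical property
        # Re-enqueue the next batch for this repeater only, so a slow endpoint
        # never blocks the coordinating task or other repeaters' workers.
        process_repeater.delay(domain, repeater_id)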
Would it be possible to restructure this to avoid waiting on this result?
I could not think of a way, but I would be delighted if anyone has suggestions. (Copilot was no help either, but maybe I was asking wrong.)
What I was hoping for was something similar to Executor.map() that could lazily map repeater IDs or process_repeater() tasks onto Celery workers as the workers free up. The missing piece in the Celery puzzle is that I couldn't find a way to wait for a worker to free up in order to pass it the next repeater ID or task.
So what you end up with is the problem that I encountered with my first approach: all of the repeaters are locked as the first iteration of iter_ready_repeater_ids() is consumed. I couldn't find a way to wait for a worker to free up in order to pass it the next ID or task, and so the function that is spawning the tasks will exit.
It is faster to wait for all of the workers to complete (my second approach) than to exit and wait for process_repeaters() to run again (my first approach).
A third approach could be to reintroduce the repeater lock, but to block on acquiring the lock instead of skipping locked repeaters. I expect that performance would be the same as waiting for tasks to finish, because you'll still end up waiting for the slowest repeater.
What do you think of 03197dc?
Co-authored-by: Daniel Miller <[email protected]>
Walkthrough

The pull request introduces a comprehensive set of changes to the repeater and lock management systems. The modifications span multiple files and focus on enhancing the functionality of repeaters, introducing a new feature flag for processing repeaters, and improving lock management.

The modifications aim to provide more robust and flexible handling of repeat records, with enhanced error management, backoff strategies, and parallel processing capabilities. The changes also introduce more granular control over worker allocation and repeater processing.
Actionable comments posted: 3
🧹 Nitpick comments (16)
corehq/util/metrics/tests/test_lockmeter.py (1)
163-168: Consider mocking Redis connection for faster tests.

While the test correctly verifies the behavior, using a real Redis connection makes the test slower and dependent on external services. Consider using a mock Redis connection instead.

@patch('django_redis.get_redis_connection')
def test_local(self, mock_redis):
    mock_lock = Mock(spec=Lock)
    mock_lock.local = threading.local()
    mock_redis.return_value = Mock(spec=['lock'])
    mock_redis.return_value.lock.return_value = mock_lock
    name = uuid1().hex
    with Lock(mock_redis(), name, timeout=5) as redis_lock:
        lock = MeteredLock(redis_lock, name)
        self.assertEqual(type(lock.local), threading.local)

corehq/motech/repeaters/tasks.py (4)
317-321: Consider Setting a Timeout for process_repeaters_lock

Currently, the lock process_repeaters_lock is acquired with timeout=None, which means it will never expire if not released properly. In cases where the process_repeaters() task is killed unexpectedly without releasing the lock, the lock remains indefinitely, requiring manual intervention via the management command expire_process_repeaters_lock.

Setting a reasonable timeout (e.g., a few hours) for the lock can prevent potential deadlocks and reduce the need for manual intervention.
351-355: Clarify the Use of a Random Task Selection

In the loop, a random task is selected for immediate execution:

random_task_num = random.randrange(len(tasks))
random_task = tasks.pop(random_task_num)

It's unclear why a random task is chosen instead of, for example, always selecting the first task. Clarifying the rationale in a comment can improve code readability. If the intention is to prevent bias towards any particular repeater or to simulate fairness, explicitly stating that would be helpful.
374-397: Optimize Iteration by Avoiding Modification of the Dictionary During Iteration

Modifying repeater_ids_by_domain while iterating over it using list(repeater_ids_by_domain.keys()) can lead to unexpected behavior. A better approach is to iterate over a copy of the keys:

for domain in list(repeater_ids_by_domain):
    # existing code

This ensures that deleting keys within the loop doesn't affect the iteration.
453-476: Handle Exceptions More Specifically

Catching the broad Exception class can make debugging more difficult and may mask other issues. Consider catching more specific exceptions where possible, or re-raising exceptions after logging to ensure that unexpected issues are not silently ignored.

corehq/motech/repeaters/models.py (4)
417-420: Ensure num_workers Does Not Fall Below Minimum Threshold

While you cap num_workers at MAX_REPEATER_WORKERS, consider ensuring that it doesn't fall below a minimum threshold (e.g., 1) to prevent scenarios where no workers are assigned due to misconfiguration:

return max(1, min(num_workers, settings.MAX_REPEATER_WORKERS))

This ensures at least one worker is always available for processing.
435-444: Clarify the Reset Logic in reset_backoff

The method reset_backoff sets last_attempt_at and next_attempt_at to None:

self.last_attempt_at = None
self.next_attempt_at = None

It might be clearer to set last_attempt_at to the current time to reflect the last attempt accurately. If resetting to None is intentional to trigger immediate retry logic, consider adding a comment to explain this behavior for future maintainability.
1011-1019: Potential Inefficiency in count_all_ready Method

The count_all_ready method performs two separate counts and adds them:

return (
    Repeater.objects.get_all_ready_next_attempt_null().count()
    + Repeater.objects.get_all_ready_next_attempt_now().count()
)

This could be inefficient. Consider combining the counts into a single query with an OR condition, similar to the suggestion made earlier, to improve performance.
1474-1496: Combine domain_can_forward and domain_can_forward_now Functions

The functions domain_can_forward and domain_can_forward_now serve similar purposes, with the latter adding an extra check for paused data forwarding:

def domain_can_forward_now(domain):
    return (
        domain_can_forward(domain)
        and not toggles.PAUSE_DATA_FORWARDING.enabled(domain)
    )

Consider combining them into a single function with an optional parameter to check for paused forwarding, or at least adding docstrings to clarify their differences. This can reduce redundancy and potential confusion.
corehq/ex-submodules/dimagi/utils/couch/tests/test_redis_lock.py (1)
11-30: Test could be more robust with additional assertions and cleanup.

The test verifies token-based Redis lock functionality but could be enhanced:
- Add cleanup in case of test failure
- Verify lock state after release
- Test negative scenarios (e.g., wrong token)
Consider enhancing the test with a context manager for cleanup and additional assertions:
 def test_get_redis_lock_with_token():
     lock_name = 'test-1'
-    metered_lock = get_redis_lock(key=lock_name, name=lock_name, timeout=1)
-    assert isinstance(metered_lock, MeteredLock)
-    # metered_lock.lock is a TestLock instance
-    test_lock = metered_lock.lock
-    assert isinstance(test_lock, TestLock)
-    redis_lock = test_lock.lock
-    assert isinstance(redis_lock, RedisLock)
+    try:
+        metered_lock = get_redis_lock(key=lock_name, name=lock_name, timeout=1)
+        assert isinstance(metered_lock, MeteredLock)
+        # metered_lock.lock is a TestLock instance
+        test_lock = metered_lock.lock
+        assert isinstance(test_lock, TestLock)
+        redis_lock = test_lock.lock
+        assert isinstance(redis_lock, RedisLock)
-    token = uuid.uuid1().hex
-    acquired = redis_lock.acquire(blocking=False, token=token)
-    assert acquired
+        token = uuid.uuid1().hex
+        acquired = redis_lock.acquire(blocking=False, token=token)
+        assert acquired
+        assert redis_lock.local.token == token
-    # What we want to be able to do in a separate process:
-    metered_lock_2 = get_redis_lock(key=lock_name, name=lock_name, timeout=1)
-    redis_lock_2 = metered_lock_2.lock.lock
-    redis_lock_2.local.token = token
-    # Does not raise LockNotOwnedError:
-    redis_lock_2.release()
+        # What we want to be able to do in a separate process:
+        metered_lock_2 = get_redis_lock(key=lock_name, name=lock_name, timeout=1)
+        redis_lock_2 = metered_lock_2.lock.lock
+        redis_lock_2.local.token = token
+        # Does not raise LockNotOwnedError:
+        redis_lock_2.release()
+
+        # Verify lock is released
+        assert redis_lock.acquire(blocking=False, token=token)
+    finally:
+        redis_lock.release()
+
+    # Test negative scenario
+    wrong_token = uuid.uuid1().hex
+    with pytest.raises(LockNotOwnedError):
+        redis_lock_2.local.token = wrong_token
+        redis_lock_2.release()

corehq/motech/repeaters/const.py (2)
9-9: Consider documenting the rationale for the 5-minute retry wait.

The comment "Repeaters back off slower" could be more descriptive about why 5 minutes was chosen as the minimum retry wait time.

-MIN_REPEATER_RETRY_WAIT = timedelta(minutes=5)  # Repeaters back off slower
+MIN_REPEATER_RETRY_WAIT = timedelta(minutes=5)  # Minimum 5-minute delay between retries to prevent overwhelming external systems

23-25: Convert TODO into a tracked issue.

The TODO comment about removing MAX_BACKOFF_ATTEMPTS should be tracked in the issue system.
Would you like me to create a GitHub issue to track the removal of MAX_BACKOFF_ATTEMPTS since it's being conflated with MAX_ATTEMPTS and is redundant with MAX_RETRY_WAIT?
corehq/motech/repeaters/tests/test_tasks.py (3)
234-262: Test could be more explicit about round-robin behavior.

The test verifies the round-robin distribution of repeater IDs across domains but could be more explicit in its documentation and assertions.

 def test_iter_ready_repeater_ids():
+    """Test that iter_ready_repeater_ids yields repeater IDs in a round-robin fashion across domains.
+
+    Expected behavior:
+    1. First round: One repeater from each domain (domain1/id3, domain2/id5, domain3/id6)
+    2. Second round: One repeater from remaining domains (domain1/id2, domain2/id4)
+    3. Third round: Last repeater from domain1 (domain1/id1)
+    """
     with (
         patch(
             'corehq.motech.repeaters.tasks.Repeater.objects.get_all_ready_ids_by_domain',
Line range hint 581-598: Add test for edge cases with PROCESS_REPEATERS flag.

The tests verify basic functionality with the PROCESS_REPEATERS flag but could cover more edge cases.
Consider adding tests for:
- Domain not in enabled domains
- Flag enabled globally but disabled for domain
- Transition cases when flag is toggled
Line range hint 813-820: Test could verify count with mixed states.

The test only verifies count with PENDING state records.

 def test_count(self):
+    """Test that count_all_ready returns correct count for records in different states."""
     with (
         make_repeat_record(self.repeater, RECORD_PENDING_STATE),
         make_repeat_record(self.repeater, RECORD_PENDING_STATE),
-        make_repeat_record(self.repeater, RECORD_PENDING_STATE),
+        make_repeat_record(self.repeater, RECORD_FAILURE_STATE),
+        make_repeat_record(self.repeater, RECORD_SUCCESS_STATE),  # Should not be counted
     ):
         count = RepeatRecord.objects.count_all_ready()
-        self.assertEqual(count, 3)
+        self.assertEqual(count, 3, "Should count PENDING and FAILURE states only")

settings.py (1)
628-635: Consider making worker counts configurable via environment variables.

While the default values are reasonable, consider:
- Making these values configurable through environment variables with the current values as defaults
- Documenting how these values were determined
- Adding validation to ensure MAX_REPEATER_WORKERS is greater than DEFAULT_REPEATER_WORKERS
📒 Files selected for processing (13)

corehq/ex-submodules/dimagi/utils/couch/tests/test_redis_lock.py (1 hunks)
corehq/motech/repeaters/const.py (1 hunks)
corehq/motech/repeaters/management/commands/expire_process_repeaters_lock.py (1 hunks)
corehq/motech/repeaters/models.py (9 hunks)
corehq/motech/repeaters/tasks.py (4 hunks)
corehq/motech/repeaters/templates/repeaters/partials/repeater_row.html (1 hunks)
corehq/motech/repeaters/tests/test_models.py (4 hunks)
corehq/motech/repeaters/tests/test_repeater.py (7 hunks)
corehq/motech/repeaters/tests/test_tasks.py (2 hunks)
corehq/toggles/__init__.py (1 hunks)
corehq/util/metrics/lockmeter.py (1 hunks)
corehq/util/metrics/tests/test_lockmeter.py (2 hunks)
settings.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
corehq/util/metrics/tests/test_lockmeter.py
174-174: Found useless expression. Either assign it to a variable or remove it.
(B018)
🔇 Additional comments (7)
corehq/util/metrics/lockmeter.py (1)
59-63
: LGTM! Clear and well-documented property implementation.

The property implementation correctly delegates to the underlying lock's local attribute and includes clear documentation about the AttributeError behavior.
corehq/motech/repeaters/tasks.py (1)
1-73
: Excellent and Comprehensive Module Documentation

The added module-level docstring provides a clear and thorough explanation of the
check_repeaters()
andprocess_repeaters()
tasks, their workflows, limitations, and the improvements introduced. This enhances code maintainability and helps new developers understand the system's architecture.corehq/motech/repeaters/management/commands/expire_process_repeaters_lock.py (1)
8-16
: LGTM!The management command is straightforward and correctly expires the
PROCESS_REPEATERS_KEY
lock to allowprocess_repeaters()
to start when necessary.corehq/motech/repeaters/tests/test_models.py (1)
Line range hint
363-372
: Address potential race condition in timing-sensitive test.

As noted in a previous review, there could be a race condition if garbage collection or other operations delay execution between record creation and duration calculation.
Use
freezegun.freeze_time
to ensure consistent timing:

+@freeze_time('2025-01-01 12:00:00')
 def test_repeat_record_no_attempts(self):
-    five_minutes_ago = datetime.utcnow() - timedelta(minutes=5)
+    five_minutes_ago = datetime.utcnow() - timedelta(minutes=5)
     repeat_record = RepeatRecord.objects.create(
         repeater=self.repeater,
         domain=DOMAIN,
         payload_id='abc123',
         registered_at=five_minutes_ago,
     )
     wait_duration = _get_wait_duration_seconds(repeat_record)
     self.assertEqual(wait_duration, 300)

corehq/motech/repeaters/tests/test_repeater.py (1)
Line range hint
1337-1399
: LGTM! Comprehensive test coverage for repeater backoff behavior.

The test class thoroughly verifies:
- Race condition handling between backoff and pause operations
- Initial backoff interval using MIN_REPEATER_RETRY_WAIT
- Exponential backoff with interval doubling
- Maximum backoff capped at MAX_RETRY_WAIT
corehq/toggles/__init__.py (1)
2011-2021
: LGTM! Well-structured feature toggle for repeater processing.

The toggle is:
- Properly documented with clear description
- Correctly scoped as internal feature
- Has designated owner for accountability
corehq/motech/repeaters/templates/repeaters/partials/repeater_row.html (1)
8-10
: LGTM! Clear presentation of next attempt timing.

The template change:
- Correctly checks all required conditions
- Uses clear date/time format
- Is properly feature flagged
TLDR: On the whole, I think the (new) new approach is better. Here is the breakdown:

Before

In the example above, if you look from about 15:48 to 15:53, the round-robin approach works well when all 5 domains have repeat records ready to be sent. But there are times when workers are idle, and the downward gradient on "Repeat records ready" is not as steep as below.

After

In this example, it takes longer to start processing the repeaters for the "nhooper" and "repeat-records-2" domains, because we check every 5 minutes instead of every minute. When we do start processing them, their tasks dominate for a little while (18:33 to 18:34) and then Celery flips back to the tasks for the first three domains, and it cycles like that until about 18:37, when the first three domains run out of repeat records. However, there are no idle workers, and the gradient on "Repeat records ready" is steeper. The process is also fair enough that no domain is neglected and no domain hogs resources.

It seems clear that this is a better approach.
Technical Summary
At the moment, repeat records are processed independently, and this prevents us from making better decisions about the APIs that we are sending data to. e.g. If a remote server is offline, we will keep trying to send new payloads to it even if all the previous payloads failed.
The approach taken in this PR intends to process repeaters smarter, and also to share repeat record queue workers more fairly across domains. It honors rate limiting by skipping repeaters or domains when they are rate-limited.
Context:
check_repeaters() task iterates repeaters #34946

This branch implements the feedback given in the draft PR #34946 with commits squashed and rebased. 🐟 🐠 🐬
Almost all new functionality is behind a feature flag. The one change that is not is that up to once a minute (less often if there are repeaters being processed) we loop through the results of RepeaterManager.all_ready(). (That happens here.)

Enabling the "process_repeaters" feature flag will switch a domain from the current approach to the new approach. The domain can be switched back to the current approach without consequences.
There are two migrations in this PR. I have kept them separate to make it easier to modify the second one. I am looking for confirmation that the indexes in "Index fields used by RepeaterManager.all_ready()" are correct.

Currently there are no new Datadog metrics in this PR. I expect that as we test this more / switch domains over, we will add metrics in this PR and follow-up PRs.
Feature Flag
process_repeaters
Safety Assurance
Safety story
Automated test coverage
Includes coverage. Please highlight any gaps.
QA Plan
QA ticket: QA-7038
Migrations
Rollback instructions
Labels & Review