I will preface this by saying that our pod was running with a fairly low memory limit of 128 MB, but this hadn't caused any issues before.
What happened was that in our main database, which the CDC service has access to, we ran an update on more than 10,000 rows, and this caused the CDC service to shut down repeatedly until we raised the memory limit to a sufficient level. I think the problem is that we wouldn't expect an update on a non-CDC-related table to affect our CDC service at all. Maybe there are some improvements that could be made to the WAL consumption logic (we're using PostgreSQL).
Why update the saga_instance table at all? We wanted to add some custom additional data there for some ad-hoc work. But this led us to wonder: could the same issue happen with an update on any table? We would expect such updates to be ignored entirely.
Oh dear. It looks like I only replied in my head. Sorry about the delay.
By default, the CDC has to 'process' all updates to the database, including those for tables other than the MESSAGE table.
However, if the CDC is the only WAL consumer, you could use the add-tables property to ignore the other tables.
I suspect that the OOM problem is due to how the wal2json plugin is configured.
It's likely to be using format version 1, which results in a single JSON object per transaction.
As a result, a transaction that updates a large number of rows generates a large JSON object, which probably causes the OOM error.
The solution would be to use format version 2 (the format-version property), which emits a (smaller) JSON object per change. Can you try this and let me know?
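I'm not certain how these options are exposed through the CDC service's own configuration, but as a rough sketch of what they do at the protocol level, here is how a standalone WAL consumer could pass them using the pgjdbc replication API. The connection URL, slot name, and table name are placeholders, and the slot is assumed to have already been created with the wal2json output plugin:

```java
import java.nio.ByteBuffer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import org.postgresql.PGConnection;
import org.postgresql.PGProperty;
import org.postgresql.replication.PGReplicationStream;

public class Wal2JsonFormatV2Example {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        PGProperty.USER.set(props, "postgres");
        PGProperty.PASSWORD.set(props, "postgres");
        // The streaming replication protocol requires a replication connection.
        PGProperty.REPLICATION.set(props, "database");
        PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
        PGProperty.PREFER_QUERY_MODE.set(props, "simple");

        try (Connection conn =
                 DriverManager.getConnection("jdbc:postgresql://localhost:5432/eventuate", props)) {
            PGConnection pgConn = conn.unwrap(PGConnection.class);

            // The slot ("eventuate_slot" here is hypothetical) must already exist
            // and have been created with the wal2json plugin.
            PGReplicationStream stream = pgConn.getReplicationAPI()
                .replicationStream()
                .logical()
                .withSlotName("eventuate_slot")
                .withSlotOption("format-version", 2)            // one JSON object per change, not per transaction
                .withSlotOption("add-tables", "public.message")  // only stream changes for the MESSAGE table
                .start();

            while (true) {
                // readPending() returns null when no change is available yet.
                ByteBuffer msg = stream.readPending();
                if (msg == null) {
                    Thread.sleep(10);
                    continue;
                }
                String change = new String(msg.array(), msg.arrayOffset(), msg.remaining());
                System.out.println(change);
                // Acknowledge the WAL position so the server can recycle older segments.
                stream.setAppliedLSN(stream.getLastReceiveLSN());
                stream.setFlushedLSN(stream.getLastReceiveLSN());
            }
        }
    }
}
```

With format-version=2 each change arrives as its own small JSON object, so a 10,000-row update no longer has to be materialized as one giant per-transaction document, and add-tables additionally drops changes to unrelated tables at the plugin level before they ever reach the consumer.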