We are having problems with our Postgres integrations whenever a single transaction generates more than 1 GB of data on our replication slot. When this happens, we have to stop the integration and restart it.

As far as I understand, this is a limitation of the Postgres implementation: if a transaction generates more than 1 GB of data, the `wal2json` plugin cannot handle it. For this reason, `wal2json` provides configuration parameters such as `format-version` and `write-in-chunks`.
Using `format-version=2` and `write-in-chunks=true` looks promising. Is there a way to pass these parameters to `wal2json`, or any other way to handle large transactions in the Postgres implementation?
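For reference, here is a minimal sketch of how such plugin options can be passed when consuming the slot directly with psycopg2's logical replication support; the DSN, slot name, and table contents are placeholders, not part of the original setup:

```python
# Minimal sketch: stream changes from a wal2json slot with psycopg2,
# passing the plugin options discussed above. DSN and slot name are
# hypothetical -- adjust to your environment.
import psycopg2
import psycopg2.extras

conn = psycopg2.connect(
    "dbname=mydb user=replicator",  # hypothetical DSN
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()

try:
    # format-version=2 makes wal2json emit one JSON object per change,
    # so a large transaction is streamed piecewise instead of being
    # serialized as a single >1 GB JSON document.
    cur.start_replication(
        slot_name="my_slot",  # hypothetical slot name
        decode=True,
        options={"format-version": "2"},
    )
except psycopg2.ProgrammingError:
    # Slot does not exist yet: create it with the wal2json plugin and retry.
    cur.create_replication_slot("my_slot", output_plugin="wal2json")
    cur.start_replication(
        slot_name="my_slot",
        decode=True,
        options={"format-version": "2"},
    )

def consume(msg):
    # In v2 format, each payload is a single change (or a begin/commit marker).
    print(msg.payload)
    # Acknowledge the LSN so Postgres can recycle WAL and the slot does not bloat.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(consume)
```

Note that, per the wal2json README, `write-in-chunks` only takes effect with `format-version=1` (where the equivalent options would be `{"format-version": "1", "write-in-chunks": "1"}`); version 2 output is already emitted one change at a time.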