Hi, I use Simple Monitor with Redis and MySQL.
During the night, while doing nothing (no tasks at all), it goes wild and fills my disk with error logs; total load sits around 20%, with 2-3 cores at 100%.
When I restart the Simple Monitor container (only that one), everything works fine again for some time (one or more hours), then it happens again.
I limited the maximum log size, but even so, during such a spike the app seems to be working (it displays all previous processes and jobs). However, when I start a new process instance, Zeebe executes it just fine, but it does not appear in Simple Monitor until I restart it.
Then it appears, so I suspect Simple Monitor stops pulling data from Redis entirely during the incident.
I run Redis and Simple Monitor with default settings, plus ZEEBE_REDIS_MAX_TIME_TO_LIVE_IN_SECONDS=300 and ZEEBE_REDIS_DELETE_AFTER_ACKNOWLEDGE=true.
Nothing else.
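For context on how those two settings can relate to the NOGROUP error in the log below: in Redis, deleting a stream key also deletes every consumer group attached to it, and XREADGROUP against a missing key or group fails with NOGROUP. So if the TTL/delete-after-acknowledge cleanup ever removes the zeebe:ERROR stream key outright, the simple-monitor group disappears with it. A minimal Lettuce sketch reproducing that sequence (assuming a local Redis on the default port; the stream and group names are taken from the log):

```java
import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisCommandExecutionException;
import io.lettuce.core.XGroupCreateArgs;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class NoGroupDemo {
    public static void main(String[] args) {
        // Assumes a local Redis on the default port.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Create the stream key and a consumer group
        // (MKSTREAM creates the key if it does not exist yet).
        redis.xgroupCreate(XReadArgs.StreamOffset.from("zeebe:ERROR", "0"),
                "simple-monitor", XGroupCreateArgs.Builder.mkstream());

        // Deleting the stream key silently deletes all its consumer groups too.
        redis.del("zeebe:ERROR");

        // Reading via the now-missing group reproduces the error from the log.
        try {
            redis.xreadgroup(Consumer.from("simple-monitor", "demo-consumer"),
                    XReadArgs.StreamOffset.lastConsumed("zeebe:ERROR"));
        } catch (RedisCommandExecutionException e) {
            System.out.println(e.getMessage()); // NOGROUP No such key 'zeebe:ERROR' ...
        }

        client.shutdown();
    }
}
```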
So even though the log claims that some keys don't exist, how is it that it recovers just fine after a Simple Monitor restart, without touching Redis at all?
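One hedged explanation for why a restart helps: if the connector recreates its consumer groups on startup (for example via XGROUP CREATE ... MKSTREAM), restarting repairs the missing group as a side effect, while the long-running consumer only keeps retrying XREADGROUP and hits NOGROUP forever. A sketch of that defensive pattern, again with Lettuce; the helper name and structure are hypothetical, not the connector's actual code:

```java
import io.lettuce.core.Consumer;
import io.lettuce.core.RedisCommandExecutionException;
import io.lettuce.core.XGroupCreateArgs;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.sync.RedisCommands;

class GroupRecovery {
    // Hypothetical helper, not the connector's actual code: attempt one read
    // and, on NOGROUP, recreate the group instead of retrying in a tight loop.
    static void readOnceWithRecovery(RedisCommands<String, String> redis,
                                     String stream, String group, String consumer) {
        try {
            redis.xreadgroup(Consumer.from(group, consumer),
                    XReadArgs.StreamOffset.lastConsumed(stream));
        } catch (RedisCommandExecutionException e) {
            if (e.getMessage() != null && e.getMessage().startsWith("NOGROUP")) {
                // MKSTREAM recreates the key together with the group, so the
                // next read succeeds even though the stream was deleted.
                redis.xgroupCreate(XReadArgs.StreamOffset.from(stream, "0"),
                        group, XGroupCreateArgs.Builder.mkstream());
            } else {
                throw e;
            }
        }
    }
}
```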
As a dirty workaround I will probably restart the container once per hour, but I would really like to find out what causes this and have it fixed.
Thanks.
The log says:
2024-04-16T22:18:35.053Z ERROR 1 --- [pool-6-thread-1] io.zeebe.redis.connect.java.ZeebeRedis : Consumer[group=simple-monitor, id=04b269b6-20a6-453d-b6f9-b3a8ee06564f] failed to read from streams 'zeebe:*'
java.util.concurrent.ExecutionException: io.lettuce.core.RedisCommandExecutionException: NOGROUP No such key 'zeebe:ERROR' or consumer group 'simple-monitor' in XREADGROUP with GROUP option
at java.base/java.util.concurrent.CompletableFuture.reportGet(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source) ~[na:na]
at io.zeebe.redis.connect.java.ZeebeRedis.readNext(ZeebeRedis.java:289) ~[zeebe-redis-connector-0.9.10.jar:0.9.10]
at io.zeebe.redis.connect.java.ZeebeRedis.readFromStream(ZeebeRedis.java:263) ~[zeebe-redis-connector-0.9.10.jar:0.9.10]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[na:na]
at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
Caused by: io.lettuce.core.RedisCommandExecutionException: NOGROUP No such key 'zeebe:ERROR' or consumer group 'simple-monitor' in XREADGROUP with GROUP option
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:147) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:116) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:63) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:745) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:680) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:597) ~[lettuce-core-6.3.1.RELEASE.jar:6.3.1.RELEASE/12e6995]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.105.Final.jar:4.1.105.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.105.Final.jar:4.1.105.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.105.Final.jar:4.1.105.Final]