excess load leads to reboots #1317
Hi, is it OK if we log in remotely to analyse? Then I need the VRM URL + you need to enable Remote Support.
Interesting thing is, it's now been working for six hours with no restart, with no change from my side.
Where can I send the URL and other info privately?
Hm, crashed again, at 7-minute intervals. Seeing interesting logs.
The conversation is in the email, but here are some screenshots.

top on v3.33, GUI not on the moving-ants page: [screenshot]

top on v3.33, after disabling dbus-generator (not in use), and GUI not on the moving-ants page: [screenshot] The variation is mostly vrmlogger (normal, it has its cycle) and dbus-modbus-cli.

top on v3.22 (mosquitto instead of flashmq), dbus_generator also running even though not needed; so running it always seems not to be a regression (but still an opportunity to not run it always): [screenshot]

D-Bus round-trip time over the last 6 months: [graph] A normal common max is 200 ms, and what the graph shows is that as of May 22nd it first started being higher. Here is a close-up: [graph] But it has been too high for a long time already; for example, here at July 13th it is already too high: [graph]
D-Bus round-trip time while running the older version, v3.22: [graph]
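How the round-trip numbers above are produced isn't shown in the thread. A minimal sketch of one way to sample them from a shell on the GX device, by timing a `Peer.Ping` to the bus daemon with the stock `dbus-send` CLI (the helper names and the 200 ms threshold pulled from the comments above are my assumptions, not the actual VRM measurement):

```python
import statistics
import subprocess
import time

# Standard D-Bus daemon ping; every conforming daemon implements
# org.freedesktop.DBus.Peer.Ping, so timing it approximates one bus
# round trip. Requires a running system bus (as on a GX device).
PING_CMD = [
    "dbus-send", "--system", "--print-reply",
    "--dest=org.freedesktop.DBus",
    "/org/freedesktop/DBus",
    "org.freedesktop.DBus.Peer.Ping",
]

def time_call(fn):
    """Return elapsed wall-clock time of fn() in milliseconds."""
    start = time.monotonic()
    fn()
    return (time.monotonic() - start) * 1000.0

def collect_samples(n=10):
    """Time n pings; only call this where a system bus exists."""
    return [
        time_call(lambda: subprocess.run(
            PING_CMD, check=True, capture_output=True))
        for _ in range(n)
    ]

def summarize(samples_ms, threshold_ms=200.0):
    """Summarize samples against the ~200 ms 'normal max' from the
    thread (threshold is an assumption, not a hard spec)."""
    return {
        "max_ms": max(samples_ms),
        "median_ms": statistics.median(samples_ms),
        "over_threshold": sum(1 for s in samples_ms if s > threshold_ms),
    }
```

Note this times a daemon ping only; the VRM graph may measure a full service round trip, which would include Python-side dispatch and be slower.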
In May I installed a grid meter (EM540), then in June installed an EV charger.
CPU load by the energy meter is mostly caused by it sending multiple measurement updates per second, which costs a lot of CPU in each process receiving those. CPU load by the EV Charging Station will, I think, mostly be in the dbus-modbus-cli spike every x seconds. Here is the number of D-Bus messages per second, made with dbus_signal_cntr.py: [graph]
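The dbus_signal_cntr.py script itself isn't shown in the thread. A rough stand-in for counting bus signals per second could eavesdrop with the stock `dbus-monitor` tool and bucket arrivals into one-second bins (parsing, function names, and the whole approach are assumptions, not the script's actual internals):

```python
import collections
import math
import subprocess
import time

def per_second_counts(timestamps):
    """Bucket event timestamps (seconds, floats) into whole-second
    bins; returns {second: count}, i.e. messages per second."""
    return dict(collections.Counter(math.floor(t) for t in timestamps))

def count_signals(duration_s=5.0, bus_flag="--system"):
    """Live capture: run dbus-monitor for duration_s and record the
    arrival time of each signal header line. Requires a running bus;
    the 'signal ' prefix match is an approximation of dbus-monitor's
    text output format."""
    proc = subprocess.Popen(
        ["dbus-monitor", bus_flag, "type='signal'"],
        stdout=subprocess.PIPE, text=True)
    deadline = time.monotonic() + duration_s
    stamps = []
    try:
        for line in proc.stdout:
            if line.startswith("signal "):
                stamps.append(time.monotonic())
            if time.monotonic() >= deadline:
                break
    finally:
        proc.terminate()
    return per_second_counts(stamps)
```

On a loaded GX device the interesting number is the sustained rate: every Python service attached to the bus pays dispatch cost for each of those signals, which matches the per-process CPU observation above.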
That is by design. The new Victron meter sends even more. The problem with that is mostly the CPU load used by all the Python processes in receiving, i.e. processing, the updates.
hw: ccgx
Some time after upgrading to 3.40 I'm experiencing random reboots, data loss, or incorrect data displayed/sent to VRM. I tried downgrading to 3.33; the issue persisted. Tried the latest beta as well, no change. The CCGX worked for two and a half years with no issue.
After SSHing into the CCGX I found this in the log: [log excerpt]

top says: [output]

Anything I can check/modify? Thanks