High CPU usage #116
Hey @bbashy, how many messages are you sending across the connections and what sort of server are you running? Also, are you running Reverb alongside another application or standalone?
Hi. Using it as a presence channel for an online user list. It hits 180 users at certain times. It's used on most pages across the site, so people will be leaving/joining on every page change, and people visit a few pages every few seconds. Using a Hetzner cloud server with an AMD EPYC (8 cores, 16 GB RAM). No other applications; this server is just for this site. Was using Soketi (no CPU issues), but it doesn't work on Node v20, which is why I switched to Reverb after the Laravel upgrade.
In channels.php it's just a return of user data: username, country, link to profile. It's been fine since I last deployed - 24/03/2024 22:00 UTC.
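For context, here's a minimal sketch of what such a channel authorization callback can look like. The channel name ('online'), the field names, and the 'profile.show' route are illustrative assumptions for this example, not details taken from the actual site:

// routes/channels.php - illustrative sketch; channel name, fields, and
// route name are assumptions, not from the thread
use Illuminate\Support\Facades\Broadcast;

Broadcast::channel('online', function ($user) {
    // Returning an array (rather than a boolean) makes this a presence
    // channel and shares this payload with the other members.
    return [
        'id' => $user->id,
        'username' => $user->username,
        'country' => $user->country,
        'profile_url' => route('profile.show', $user),
    ];
});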
Thanks for the reply. Going to close this for now, but please mention me in this thread if you see the issue again and I will reopen.
@joedixon Started happening again. If there's any debugging I can do, let me know.
@bbashy Would be good to get a better picture of your environment and use case. Would you mind answering the following and sharing the output of these calls:
// Connections
app(\Illuminate\Broadcasting\BroadcastManager::class)
->getPusher()
->get('/connections');
// Channels
app(\Illuminate\Broadcasting\BroadcastManager::class)
->getPusher()
    ->get('/channels', ['info' => 'subscription_count']);
No need to install Pulse; I just wanted to rule it out as a factor. It would be interesting to run with debug mode enabled for a little while (which will itself increase CPU) to see if anything untoward is happening when the CPU spikes.
Running with debug now, and I'm running a script to gather the CPU % every 2 seconds. Will report back here with data.
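For anyone wanting to do the same, a rough sketch of a sampler along those lines. This is a hypothetical script, not the one used in the thread; it assumes a Linux ps and that the process command line contains 'reverb:start':

<?php
// cpu-watch.php - hypothetical CPU sampler, not the actual script from
// the thread; polls the reverb:start process every 2 seconds
while (true) {
    // '[r]everb' keeps grep from matching its own process entry
    $cpu = trim((string) shell_exec("ps -eo %cpu,args | grep '[r]everb:start' | awk '{print $1}'"));
    echo date('H:i:s'), ' ', ($cpu !== '' ? $cpu : 'n/a'), "%\n";
    sleep(2);
}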
Gone up to 100% again; here's the CPU usage. It's pretty much instant, not gradual. Running in debug for 24 hours.
The Reverb logs don't show much, apart from maybe this? It appears quite regularly throughout the log, even when usage was low.
Then, after ~170 lines of that, it logs loads of:
@bbashy The logs look normal, but I'm interested to see whether the issue is related to the pinging of connections, which Reverb defaults to running every 60 seconds. Can you try increasing the ping interval to see if the spikes follow?
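For reference, here's roughly where that setting lives, based on the default published config/reverb.php; verify the structure against your installed version:

// config/reverb.php (excerpt, as in the default published config)
'apps' => [
    'provider' => 'config',
    'apps' => [
        [
            // ... app_id, key, secret, etc. ...
            // Seconds between server pings; raising this reduces the
            // per-connection ping work, at the cost of slower detection
            // of dead connections.
            'ping_interval' => env('REVERB_APP_PING_INTERVAL', 60),
        ],
    ],
],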
I've set it higher. Must be something to do with the number of pings combined with the connection subscribes/unsubscribes. I use it as an online list and it's included on most pages, so a page change means a disconnect/connect.
Right, but I would say that's pretty standard usage. Perhaps the difference is the online list, as that will cause events to be broadcast to all connected clients on every connection/disconnection. Pusher actually limits this functionality to 100 connections.
Yep, I saw that, and Soketi limits it too. I increased the limit on Soketi to 500 and the CPU usage was still below 10%, even after a month of uptime. The online list is only on the homepage, though; other pages only have a counter, but I guess it still updates all clients.
Gone up again. Shall I try a higher ping interval? It's weird that it goes straight from 0 to 100% in 4 seconds.
If it's not something others have had and we can't resolve it, I'll look into changing the online user list to something WebSocket-message based.
It would be good to try increasing the ping interval to see if it has any impact on when the CPU spikes. Agreed, it's strange that it ramps up so quickly. Are you in a position to temporarily disable the online user list so we can categorically determine whether it is actually the issue?
@bbashy Sudden CPU spikes are definitely not to be expected, and if you've encountered the exact same error in both the original Laravel Websockets package and now Reverb, this might in fact indicate a problem in the underlying Ratchet/ReactPHP components. Similar reports have been posted in the past and we have some ideas about what could cause this, but we've never been able to reproduce it locally; see ratchetphp/Ratchet#939 (comment) / ratchetphp/Ratchet#939 (comment). If this matches your issue and you can reproduce it locally, please reach out and I'm happy to take a look.
@clue Looks very similar, and it relates to my use case of high connect/disconnect volume. In theory, all we'd need to do to reproduce it is simulate a lot of users (100+) connecting and disconnecting.
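As a starting point, something like the following could generate that churn. This is a hypothetical sketch assuming the ratchet/pawl WebSocket client (composer require ratchet/pawl); the host, port, and app key are placeholders:

<?php
// churn.php - hypothetical load generator; opens many short-lived
// connections to simulate users navigating between pages
require __DIR__.'/vendor/autoload.php';

use Ratchet\Client\WebSocket;

for ($i = 0; $i < 200; $i++) {
    \Ratchet\Client\connect('ws://127.0.0.1:8080/app/your-app-key')->then(
        function (WebSocket $conn) {
            // Close immediately to mimic a page change
            $conn->close();
        },
        function (\Throwable $e) {
            echo "Connection failed: {$e->getMessage()}\n";
        }
    );
}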
@bbashy The solution turned out to be to add an extension, as indicated in the documentation, in the Dockerfile. I also adjusted the supervisor settings on the host system (supervisor.conf).
Might have fixed an issue for you, but I believe mine was slightly different. I have no problems when running it under Soketi on the same machine/ulimit.
I believe our answers can help other developers deal with this problem.
This is still on my radar; I just haven't had the chance to take a deep dive on it yet.
Yesterday I experimented with installing the package; it needs to be enabled in php.ini.
Did you install the ev extension?
In my case, the problem was solved after installing the ev extension.
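For anyone else landing here, a quick way to check which of these extensions PHP has loaded. ReactPHP prefers ext-uv, then ext-ev, then ext-event, falling back to stream_select when none are available (worth verifying against your installed react/event-loop version); ext-ev itself is typically installed via pecl install ev and enabled with extension=ev.so in php.ini:

<?php
// Reports which event-loop extensions are available for ReactPHP to use
var_dump(extension_loaded('uv'));    // ext-uv
var_dump(extension_loaded('ev'));    // ext-ev
var_dump(extension_loaded('event')); // ext-event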
I faced the same problem. I installed it on Laravel Forge using a load balancer with 3 servers.
Reverb will automatically switch to an ext-uv powered event loop when available. Is ext-event still supported as well?
Reverb Version
v1.0.0-beta4
Laravel Version
11.0.8
PHP Version
8.2.17
Description
After a day or so, I'm getting 100% CPU usage on the reverb:start process. If I restart Reverb, it drops back to 1% CPU.
It's currently handling around 160 users online in a presence channel, and that's about it. Had the same problem with this package: beyondcode/laravel-websockets#379
Steps To Reproduce
Have a Reverb server running via reverb:start for a day with many presence connections (150+).