Bug: MaxListenersExceededWarning eventually causes high CPU and never recovers #538

Open · bradrf opened this issue Feb 22, 2024 · 0 comments
Labels: bug (Something isn't working)

bradrf commented Feb 22, 2024
Describe the bug

On xmtp-js version 11.2.1 and Node version 18.16.0 (Linux), we have an XMTP client streaming messages from each conversation with conversation.streamMessages() and listening for new conversations with client.streamConversations(). If egress network connectivity is lost for about 2-5 minutes (long enough for those connections to start timing out and for XMTP to attempt reconnects), the following warning is reported multiple times:

warn: MaxListenersExceededWarning: Possible EventTarget memory leak detected. 113 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at EventTarget.[kNewListener] (node:internal/event_target:516:17)
    at EventTarget.[kNewListener] (node:internal/abort_controller:189:24)
    at EventTarget.addEventListener (node:internal/event_target:625:23)
    at new Request (node:internal/deps/undici/undici:7182:20)
    at fetch2 (node:internal/deps/undici/undici:10598:25)
    at Object.fetch (node:internal/deps/undici/undici:11455:18)
    at fetch (node:internal/process/pre_execution:230:25)
    at Object.<anonymous> (/workspaces/enum/lessor/node_modules/@xmtp/proto/ts/fetch.pb.ts:142:24)
    at Generator.next (<anonymous>)
    at /workspaces/enum/lessor/node_modules/@xmtp/proto/ts/dist/cjs/fetch.pb.js:13:71

If left in this state while the client is managing over 6,000 conversations, the process climbs to 100% CPU with steadily growing memory usage, and rarely recovers without a restart.
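For context, a minimal sketch of the streaming setup described above. The throwaway wallet and client options are assumptions, and client.conversations.stream() is used here as the conversation stream this report refers to as client.streamConversations():

```ts
import { Client } from '@xmtp/xmtp-js'
import { Wallet } from 'ethers'

async function main() {
  // Assumption: a throwaway wallet stands in for the real signer.
  const wallet = Wallet.createRandom()
  const client = await Client.create(wallet, { env: 'production' })

  // Listen for new conversations (the stream this report describes).
  for await (const conversation of await client.conversations.stream()) {
    // Stream messages for each conversation concurrently.
    ;(async () => {
      for await (const message of await conversation.streamMessages()) {
        console.log(conversation.topic, message.content)
      }
    })().catch(console.error)
  }
}

main().catch(console.error)
```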

Expected behavior

Whatever registers listeners on the abort signal should also clean them up, removing stale listeners once each request settles rather than piling new ones onto the same signal on every retry.
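A minimal sketch of that pattern in plain Node, assuming the listener is attached around a fetch call (the function and handler names here are illustrative, not the actual @xmtp/proto fetch wrapper):

```ts
// Illustrative pattern: remove the abort listener once the request settles,
// so repeated retries against the same signal do not accumulate listeners.
async function fetchWithCleanup(url: string, signal: AbortSignal): Promise<Response> {
  const onAbort = () => {
    // e.g. tear down any per-request resources here
  }
  signal.addEventListener('abort', onAbort, { once: true })
  try {
    return await fetch(url, { signal })
  } finally {
    // Without this, each retry leaves a stale 'abort' listener behind.
    signal.removeEventListener('abort', onAbort)
  }
}
```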

Steps to reproduce the bug

For an easy local repro, set up an XMTP client as described above (even with just a handful of conversations), then disconnect the upstream network (e.g. I ran it in a Docker container and used docker network disconnect ...) and let it sit in that mode for about 2-5 minutes. Eventually it should start reporting that warning. To see the full stack, I also added process.on('warning', (e) => console.warn(e.stack));. A sketch of the repro follows.
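Putting the repro together (the network and container names are placeholders):

```ts
// 1. Run the client from the sketch above inside a container.
// 2. Sever egress, e.g.: docker network disconnect <network> <container>
// 3. Wait ~2-5 minutes for the reconnect timeouts to pile up.

// Surface full stack traces for process warnings such as
// MaxListenersExceededWarning:
process.on('warning', (e) => console.warn(e.stack))
```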
