
fix: call type detection util bug fix #375

Merged: 1 commit merged into malijs:master on May 7, 2024

Conversation

ScripterSugar
Contributor

Resolves issue #374.

@bojand Can you take a look at this PR? Is this repository still maintained?
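
(For context: "call type detection" here means classifying an RPC from the streaming flags on its method definition. A minimal illustrative sketch of that kind of detection, not the actual patch in this PR, using the requestStream/responseStream booleans that @grpc/grpc-js exposes on a MethodDefinition:)

// Illustrative only; not the code changed by this PR.
// A gRPC method definition carries two booleans that together
// determine the call type of an RPC.
function getCallType ({ requestStream, responseStream }) {
  if (requestStream && responseStream) return 'duplex'
  if (requestStream) return 'client-stream'
  if (responseStream) return 'server-stream'
  return 'unary'
}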

@kurayama

@anonrig any chance of a release with this fix?
We're hitting issues with grpc-js >= 1.10 that are resolved by this branch.

@diogotorres97

@anonrig @bojand can you take a look at this?

@diogotorres97

Ping 😢

@fabiosangregorio

Ping, we need this fix as well 🙏🏻

anonrig merged commit 3859690 into malijs:master on May 7, 2024
0 of 2 checks passed
@andres06-hub

andres06-hub commented May 7, 2024

Ping!
Hopefully the tag will be published as soon as possible.

@andres06-hub

@ScripterSugar You should look into why the tests failed.

@n0v1
Contributor

n0v1 commented May 8, 2024

I tried to reproduce the test failures locally, and I think the tests are flaky in general. Most of the time they run fine, but from time to time one or two randomly fail. I tested with Node.js 18.20.2 (as used in the CI run for the master branch).

With the latest master commit (3859690) which includes this PR I saw the following tests fail (failure reason in parentheses):

  • app.context › should merge properties (Rejected promise returned by test -> No address added out of total 1 resolved)
  • create › should dynamically create a named service from defition with multiple services (Rejected promise returned by test -> No address added out of total 1 resolved)
  • metadata › res stream: header and trailer metadata set (t.end() was never called)
  • metadata › duplex: trailer metadata (t.end() was never called)

With the previous master commit (479ef62), i.e. before this PR was merged, I got similar test failures:

  • app › should start multipe servers from same application and handle requests (t.end() was never called)
  • create › should dynamically create service (Rejected promise returned by test -> No address added out of total 1 resolved)
  • metadata › req/res: header metadata ctx.sendMetadata and then set new metadata, should get first (t.end() was never called)
  • metadata › req/res: header metadata set even if error occurred (t.end() was never called)

Again: most of the time all 188 tests pass, but when retrying often enough, one or two fail with either t.end() was never called or Rejected promise returned by test. It seems the tests are more likely to fail after a "clean start":

rm -rf node_modules/
npm cache clean --force
npm ci
npm test

So my conclusion is that the failing tests are not related to the change here but were already failing randomly before.

@ScripterSugar
Contributor Author

ScripterSugar commented May 9, 2024

@andres06-hub I didn't touch the tests since the failures were not related to my changes (at least in my judgment); I confirmed that the same or similar tests were failing without my changes.

Thank you for the detailed analysis @n0v1 😄

@fabiosangregorio

Hopefully @anonrig is able to make the CI pass and ship this 🙏🏻 Almost there! 🤏

@ScripterSugar
Contributor Author

ScripterSugar commented May 10, 2024

I think the reason the tests randomly fail is the randomness of the getPort() function: since the tests run concurrently, there's a chance that ports collide, or the chosen port might already be in use by another process or the system.
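
For illustration, a hypothetical random-port helper of the kind described above (the real getPort() in the test utilities may differ):

// Hypothetical sketch: pick a random port in the dynamic range.
// Nothing guarantees it is free, and two tests running
// concurrently can pick the same number.
function getPort () {
  return Math.floor(Math.random() * 10000) + 50000
}

// Even a helper that probes the port first would still race:
// between the "is it free?" check and the server binding to it,
// another concurrent test can grab the same port.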

I'm not sure which approach is best to ensure the chosen port is free to use. I've looked at some third-party libraries like node-portfinder, but that feels like overkill. Since they return the port in promise form, we would have to modify all getPort() and getHost() calls if we decided to use one, and I don't really like that idea. A sketch of the ripple effect follows below.
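
To make that concrete, a sketch with assumed call-site shapes (getPort() and getHost() are the test helpers named above; their exact signatures may differ):

const portfinder = require('portfinder')

// Current style: synchronous, so it can be used inline anywhere.
const host = getHost(getPort())

// With node-portfinder's promise-based getPortPromise(), every
// such call site would have to move into an async context:
const port = await portfinder.getPortPromise()
const hostAsync = getHost(port)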

Any suggestions?

@fabiosangregorio

How about running them sequentially instead of concurrently? There aren't many of them, and they should be fast, right?
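
Assuming the suite uses AVA (the t.end() was never called and Rejected promise returned by test messages above are AVA's), a rough sketch of what that could look like:

const test = require('ava')

// test.serial() makes tests within a file run one after another
// instead of concurrently:
test.serial('starts a server on a free port', async t => {
  // ...existing test body unchanged...
  t.pass()
})

// Alternatively, AVA's CLI can do it without touching the tests:
// `npx ava --serial` runs the tests in each file serially, and
// `npx ava --concurrency 1` limits how many test files run at once.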

n0v1 mentioned this pull request on May 14, 2024
@n0v1
Contributor

n0v1 commented May 14, 2024

I created #377 to address the randomly failing tests.
