# Turborepo hanging in CI #8281
Hey, thanks for the issue. Could you try running in verbose mode (pass `-vvv`)?
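For anyone following along, the shape of that invocation might look like the following (the `test` task name and the log redirection are assumptions for illustration):
```sh
# Run with maximum verbosity and capture everything to a file
turbo run test -vvv > turbo.log 2>&1
```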
I'm sorry, I cannot modify the workflows again, but I have logs from last week; after that, I just used an older version of Turborepo after reading issues like #7382 or #5773, because the logs show only that Turborepo waits for the task, but don't say why. The task itself was codegen for EdgeDB, but then I replaced it with other commands (I don't remember what these were), and sometimes it worked, sometimes it didn't (but consistently for each specific variation of the task). I'm guessing the issue will be really weird and I'm not sure my debugging it will help. Anyway, here are the logs:
…
I'm getting some timeouts on GitHub Actions as well... here are my verbose logs if helpful:
…
Same issue over here inside a GitHub Action, just stuck. Logs are on …
Same issue here with v2.0.2.
Very similar issue here when trying to run a `test:script` task. I've shortened the logs down to after the script was invoked:
```
@react2go/fulfillmentvu2:test:script:
@react2go/fulfillmentvu2:test:script: > @react2go/[email protected] test:script /builds/ware2go/engineering/frontend/react2go/apps/my-app
@react2go/fulfillmentvu2:test:script: > ./cicd/prepare-build.sh
@react2go/fulfillmentvu2:test:script:
@react2go/fulfillmentvu2:test:script: Preparing App...
@react2go/fulfillmentvu2:test:script: Script is going
@react2go/fulfillmentvu2:test:script: Script is going and is in the else block
@react2go/fulfillmentvu2:test:script: Start: Setting up App in non-production mode.
@react2go/fulfillmentvu2:test:script:
2024-06-07T15:07:05.012+0000 [DEBUG] turborepo_telemetry: Starting telemetry event queue flush (num_events=4)
2024-06-07T15:07:05.012+0000 [DEBUG] turborepo_telemetry: Done telemetry event queue flush
2024-06-07T15:07:05.013+0000 [TRACE] log: signal: Want
2024-06-07T15:07:05.013+0000 [TRACE] log: signal: Want
2024-06-07T15:07:05.117+0000 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-06-07T15:07:05.117+0000 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
2024-06-07T15:07:05.117+0000 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-06-07T15:07:05.117+0000 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
2024-06-07T15:07:05.117+0000 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-06-07T15:10:03.671+0000 [TRACE] log: signal: Closed
2024-06-07T15:10:03.672+0000 [DEBUG] log: Sending warning alert CloseNotify
2024-06-07T15:10:03.672+0000 [TRACE] log: deregistering event source from poller
```
We're using pnpm on a self-hosted GitLab CI runner, and we're running on k8s for all of this. The same script works fine on the v1 version available right now.
I believe this is related to environment variables being set to strict mode by default in the latest version: https://turbo.build/repo/docs/crafting-your-repository/using-environment-variables#strict-mode
In Vitest, if the watch option is not set, it defaults based on the CI environment variable: https://vitest.dev/config/#watch
If that variable is not available to the test task, it will enable watch mode... adding the env variable to the test task worked for me:
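Based on that comment, a minimal sketch of the `turbo.json` change (assuming Turborepo 2.x, where tasks live under the top-level `tasks` key; in 1.x this was `pipeline`):
```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "test": {
      "env": ["CI"]
    }
  }
}
```
With `CI` declared on the task, strict mode passes the variable through to Vitest, so it no longer falls back to watch mode in CI.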
### Description
By convention, CI systems will set a `CI` environment variable. Given the ubiquity of this variable, some tools rely on it, which is what I think we're seeing in #8281.
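If many tasks depend on the variable, a repo-wide alternative is a sketch like the following; note that `globalEnv` feeds into the hash of every task, so the task-level `env` shown above is usually the narrower choice:
```json
{
  "$schema": "https://turbo.build/schema.json",
  "globalEnv": ["CI"]
}
```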
Folks, we've shipped a few changes in the latest canary. Please do try it out and report back so we know we squashed it. If not, we would appreciate that feedback as well. 🙏
that fixed it for me! thank you :)
@anthonyshew Sadly, Turborepo still hangs: https://github.com/jakubmazanec/apps/actions/runs/9453810487/job/26039943627?pr=21
@jakubmazanec If you re-run it with …
Doesn't seem to have worked. I will retry once I have credits again for running GitHub Actions (this bug cost me 1'000 min of compute 🫠)
2.0.4-canary.0 fixed this for me, thank you 👍
My build was hanging, but it turns out it was because Storybook telemetry was timing out in CI. I used env variables to disable it and it worked. The env vars to disable telemetry don't seem to work for Turborepo, though, since it still prints many logs with -vvv saying it's doing telemetry.
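For reference, these are the documented opt-outs being described (though, per the comment above, the Turborepo one may not silence the telemetry lines that `-vvv` prints):
```sh
# Disable Storybook telemetry
export STORYBOOK_DISABLE_TELEMETRY=1
# Disable Turborepo telemetry
export TURBO_TELEMETRY_DISABLED=1
```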
Does setting the `TURBO_UI=false` environment variable help?
Setting `TURBO_UI=false` in front of the command didn't help. EDIT: turbo 2.0.7-canary.0, same thing.
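For anyone trying the same workaround, the two equivalent shapes being discussed (the `test` task name is an assumption):
```sh
# Environment-variable form, as tried above
TURBO_UI=false turbo run test
# Flag form selecting the non-TUI, streamed output
turbo run test --ui=stream
```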
Also, please remember that - at least for me - the problem started with version 1.13.0. In my original comment I used v1.10.3, and now I have also tested versions 1.11.1 and 1.12.5; both work fine.
2.0.4-canary.0 seems to have fixed this error for me as well! Thank you :) (Was using turbo 2.0.3 before)
@jakubmazanec noticing the same thing... we are trying to upgrade to v2.
I have a case where it hangs too: https://github.com/repobuddy/repobuddy/tree/turbo-2-hang. It hangs when running the test script. When running the tests directly, they produce these messages, but the script passes and terminates properly:
```
[45390:0726/003734.893877:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /run/user/1000/bus: No such file or directory
[45390:0726/003734.893902:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /run/user/1000/bus: No such file or directory
```
This happens on both CI (ubuntu) and WSL (ubuntu).
@unional can you confirm that they also hang with …
Still hangs.
The UI bounces between the two test tasks:
```
electron-ts:test:
electron-ts:test: > electron-ts@ test /home/unional/code/repobuddy/repobuddy/testcases/electron-ts
electron-ts:test: > cross-env NODE_NO_WARNINGS=1 jest
electron-ts:test:
electron-renderer-ts:test:
electron-renderer-ts:test: > electron-renderer-ts@ test /home/unional/code/repobuddy/repobuddy/testcases/electron-renderer-ts
electron-renderer-ts:test: > cross-env NODE_NO_WARNINGS=1 jest
electron-renderer-ts:test:
electron-ts:test: RUNS src/feature.spec.ts
electron-ts:test: RUNS src/feature.spec.ts
electron-ts:test: [14460:0726/213008.778683:ERROR:ozone_platform_x11.cc(244)] Missing X server or $DISPLAY
electron-ts:test: [14460:0726/213008.778736:ERROR:env.cc(257)] The platform failed to initialize. Exiting.
electron-renderer-ts:test: [14475:0726/213008.804590:ERROR:ozone_platform_x11.cc(244)] Missing X server or $DISPLAY
electron-renderer-ts:test: [14475:0726/213008.804636:ERROR:env.cc(257)] The platform failed to initialize. Exiting.
2024-07-26T21:30:08.881-0700 [DEBUG] turborepo_telemetry: Starting telemetry event queue flush (num_events=3)
2024-07-26T21:30:08.881-0700 [DEBUG] turborepo_telemetry: Done telemetry event queue flush
2024-07-26T21:30:08.882-0700 [TRACE] log: signal: Want
2024-07-26T21:30:08.882-0700 [TRACE] log: signal: Want
2024-07-26T21:30:09.089-0700 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-07-26T21:30:09.089-0700 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
2024-07-26T21:30:09.089-0700 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-07-26T21:30:09.089-0700 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
electron-ts:test: RUNS src/feature.spec.ts
^C2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: stopping child process
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: starting shutdown
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: sending SIGINT to child 14303
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: waiting for child 14303
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process: waiting for 2 processes to exit
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: stopping child process
> 2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: starting shutdown
2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: sending SIGINT to child 14342
> ...Finishing writing to cache... 2024-07-26T21:30:30.380-0700 [DEBUG] turborepo_lib::process::child: waiting for child 14342
ELIFECYCLE Test failed. See above for more details.
2024-07-26T21:30:30.410-0700 [DEBUG] turborepo_lib::process::child: child process stopped
2024-07-26T21:30:30.410-0700 [TRACE] turborepo_lib::process: process exited: Ok(Some(Killed))
2024-07-26T21:30:30.410-0700 [DEBUG] turborepo_lib::process: waiting for 2 processes to exit
2024-07-26T21:30:30.410-0700 [TRACE] turborepo_lib::process: process exited: Ok(Some(Killed))
2024-07-26T21:30:30.412-0700 [DEBUG] turborepo_lib::process::child: child process stopped
2024-07-26T21:30:30.412-0700 [TRACE] turborepo_lib::process: process exited: Ok(Some(Killed))
2024-07-26T21:30:30.412-0700 [TRACE] turborepo_lib::process: process exited: Ok(Some(Killed))
2024-07-26T21:30:30.412-0700 [DEBUG] turborepo_lib::process: waiting for 0 processes to exit
cyberuni zsh repobuddy main ≡ ~4 9
ERROR run failed: command exited (1)
2024-07-26T21:30:30.418-0700 [TRACE] tower::buffer::worker: worker polling for next message
2024-07-26T21:30:30.418-0700 [TRACE] tower::buffer::worker: buffer already closed
2024-07-26T21:30:30.418-0700 [DEBUG] turborepo_telemetry: Starting telemetry event queue flush (num_events=2)
2024-07-26T21:30:30.418-0700 [DEBUG] turborepo_telemetry: Done telemetry event queue flush
2024-07-26T21:30:30.418-0700 [TRACE] log: signal: Closed
2024-07-26T21:30:30.418-0700 [TRACE] log: deregistering event source from poller
2024-07-26T21:30:30.418-0700 [TRACE] log: signal: Want
2024-07-26T21:30:30.418-0700 [TRACE] log: signal: Want
2024-07-26T21:30:30.629-0700 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-07-26T21:30:30.629-0700 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
2024-07-26T21:30:30.629-0700 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-07-26T21:30:30.629-0700 [TRACE] tokio_util::codec::framed_impl: frame decoded from buffer
2024-07-26T21:30:30.629-0700 [TRACE] tokio_util::codec::framed_impl: attempting to decode a frame
2024-07-26T21:30:30.630-0700 [TRACE] log: signal: Closed
2024-07-26T21:30:30.630-0700 [DEBUG] turborepo_telemetry: telemetry handle closed
2024-07-26T21:30:30.630-0700 [DEBUG] log: Sending warning alert CloseNotify
2024-07-26T21:30:30.630-0700 [TRACE] log: deregistering event source from poller
```
fwiw, I have been experiencing this issue with my Electron apps on WSL2 since upgrading to Turborepo v2. Edit: this flag is also working on …
I was also running into this issue, and found success after upgrading to 2.0.10 and setting …
@R-Bower For us, with 2.0.11 and that setting, it still hangs. The log output before it hangs is the following:
…
Just tried with 2.0.14 on repobuddy. The issue still exists.
Having the same issue in 2.1.3, tested with … These are the last lines of logs before it hangs:
…
We are still having the issue - but only with the lint task. Is there anything we can do from a config perspective to make it work?
@petergrau Can you try forcibly closing …? Just double-checking a few things:
…
In the interest of casting a wider net for more info, if folks can supply us with reproductions for these cases, it would be a huge help. 🙏 The earlier requests for …
@chris-olszewski thanks for following up.
- Tried that and seeing the same behaviour.
- Yes - locally the tasks run fine.
- The workflow is using …
- eslint
- Nothing like that in the config.
Very sorry for not recommending this earlier, but can you add …
@chris-olszewski no problem - do you want me to combine that with …
I have noticed that after upgrading to 2.1.3 the CI job fails much quicker. Before 2.1.3 it would just stall and the job would time out after around 15 minutes, but now it seems to fail after just a couple of minutes.
FYI, I see that you are using …
This has caused freeze issues in both yarn and npm already. I suggest you check out npm/npm-install-checks#120, which is a viable fix for npm: basically, the getReport call has to run before anything else in the program (like fetch for telemetry) and not be used after that. Though I'm not sure this is directly the cause of this particular freeze issue.
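As a sketch of the ordering fix that issue describes (this concerns Node-based tooling such as npm, not Turborepo's Rust binary; the helper name below is hypothetical):
```js
// Hypothetical sketch: capture the diagnostic report once, at startup,
// before telemetry or any other networking has been initialized.
const cachedReport = process.report ? process.report.getReport() : undefined;

function detectLibcFamily() {
  // glibcVersionRuntime appears in the report header on glibc Linux builds;
  // reuse the cached report instead of calling getReport() again later.
  const header = cachedReport && cachedReport.header;
  return header && header.glibcVersionRuntime ? "glibc" : "musl";
}
```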
@Tofandel From what we can tell at this point, this is happening because folks' tasks aren't closing. The program isn't freezing. Rather, it's continuing to run when we don't want it to. However, that's only what we can intuit from the reports so far. We need dependable reproductions to look at. We've tried to make some ourselves but haven't been able to.
It's a bit difficult to create a reproduction for this; would it be helpful if I invited someone from the team to our repository? They would have to sign an NDA according to company policy, but other than that it should be all clear.
@anthonyshew I saw that you removed the `owned-by: turborepo` label - does that mean work on this will be discontinued?
I was doing some repository cleanup. Those labels are from when Turbopack was also in this repository, so they aren't meaningful anymore. We still need to figure out how to resolve this.
I have an inconsistent repro (is that an oxymoron?). I was in the process of getting self-signed certs working with Playwright + GitHub Actions when I got "The operation was canceled". Here's a timeline:
…
I'm unsure if this is a GitHub Actions issue or a Turbo one... but searching for "The operation was canceled" yielded this thread, so here's my case report.
While it is not a minimal repro, I can reproduce this locally in https://github.com/repobuddy/repobuddy/tree/t8281:
```
pnpm i
pnpm test
```
### Verify canary release
### Link to code that reproduces this issue
jakubmazanec/apps#21
### What package manager are you using / does the bug impact?
npm
### What operating system are you using?
Windows
### Which canary version will you have in your reproduction?
2.0.0-canary.2
### Describe the Bug
After updating from 1.10.3 to 1.13.3, Turborepo freezes on a `test` task during a GitHub workflow run (which uses Linux; https://github.com/jakubmazanec/apps/actions/runs/9357742231/job/25758162977). I was unable to figure out why, nor find any further details, since it works locally on my Windows laptop. Also, I couldn't actually check the canary version, since it has breaking changes and I cannot modify my config.
### Expected Behavior
It works, like it does on version 1.10.3.
### To Reproduce
In other issues I see mentions that Turborepo currently randomly freezes on Linux, so maybe it's related?
### Additional context
No response