
Changes to deploy output? #3079

Open
gdman opened this issue Oct 24, 2024 · 17 comments
Labels
bug (Issue or pull request that identifies or fixes a bug)

Comments

@gdman

gdman commented Oct 24, 2024

Summary

Have there been recent changes to the deploy console output again?

Since the most recent build, the output is much less clear than it was in the previous version.

Before - clear output that shows how many components and tests:
[Screenshot: OutputBefore]

After - output that probably looks nice and pretty in an interactive terminal but is much harder to read in CI logs (where deploy is most commonly run):
[Screenshot: OutputAfter]

Version: 2.63.7

@gdman gdman added the "investigating" (We're actively investigating this issue) label on Oct 24, 2024

Hello @gdman 👋 It looks like you didn't include the full Salesforce CLI version information in your issue.
Please provide the output of version --verbose --json for the CLI you're using (sf or sfdx).

A few more things to check:

  • Make sure you've provided detailed steps to reproduce your issue.
    • A repository that clearly demonstrates the bug is ideal.
  • Make sure you've installed the latest version of Salesforce CLI. (docs)
    • Better yet, try the rc or nightly versions. (docs)
  • Try running the doctor command to diagnose common issues.
  • Search GitHub for existing related issues.

Thank you!


Thank you for filing this issue. We appreciate your feedback and will review the issue as soon as possible. Remember, however, that GitHub isn't a mechanism for receiving support under any agreement or SLA. If you require immediate assistance, contact Salesforce Customer Support.

@github-actions github-actions bot added the "more information required" (Issue requires more information or a response from the customer) label and removed the "investigating" (We're actively investigating this issue) label on Oct 24, 2024
@smithmd

smithmd commented Oct 24, 2024

I'd also like to add that while this change looks nice in a normal terminal, it makes my CI output completely unreadable and the page is a mile long. I was hoping that SF_USE_PROGRESS_BAR would help, but that env var seems to do nothing now.

@mdonnalley mdonnalley added the "bug" (Issue or pull request that identifies or fixes a bug) label and removed the "more information required" (Issue requires more information or a response from the customer) label on Oct 24, 2024

git2gus bot commented Oct 24, 2024

This issue has been linked to a new work item: W-17060727

@VivekMChawla

@smithmd and @gdman - Thank you for your feedback. We're working on a fix now.

Once fixed, the CI output will look like this:

► Deploying Metadata...
   Components: 21/30 (70%)
   Components: 28/30 (93%)
   Components: 29/30 (97%)
   Components: 30/30 (100%)
► Running Tests...
   Tests: 4/1377 (0%)
   Tests: 8/1377 (1%)
   Tests: 70/1377 (5%)

And will follow these rules:

  1. Polling for status updates will occur every five seconds
    • This interval can be modified via an ENV variable.
  2. Updates, like Components: 21/30 (70%), will only print when information changes, preventing long pages of output.
  3. If five minutes pass with no updates, the last update will be printed again.
    • This serves as a "heartbeat" for folks with "no output timeout" configurations in their CI system. See #1126 for additional context.
    • This "heartbeat" interval can be modified via an ENV variable.
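
To make those rules concrete, here is a rough sketch of the change-only printing and heartbeat logic. This is an editorial illustration only, not the plugin's actual code: getStatusLine is a hypothetical helper that resolves to the current progress line (for example "Components: 21/30 (70%)"), or null once the deploy finishes, and the two interval constants stand in for the ENV-configurable values mentioned in rules 1 and 3, whose real variable names aren't specified in this thread.

// Illustrative sketch only; not the plugin's implementation.
const POLL_INTERVAL_MS = 5 * 1000;           // rule 1: poll every five seconds
const HEARTBEAT_INTERVAL_MS = 5 * 60 * 1000; // rule 3: re-print after five quiet minutes

async function printDeployProgress(getStatusLine) {
  let lastLine = '';
  let lastPrintedAt = Date.now();

  for (;;) {
    const line = await getStatusLine();
    if (line === null) return; // deploy finished

    const now = Date.now();
    if (line !== lastLine) {
      console.log(line);       // rule 2: print only when the information changes
      lastLine = line;
      lastPrintedAt = now;
    } else if (now - lastPrintedAt >= HEARTBEAT_INTERVAL_MS) {
      console.log(line);       // rule 3: heartbeat so CI "no output" timeouts don't trip
      lastPrintedAt = now;
    }

    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
}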

Feedback or questions?

Please let us know if you think the proposed fixes will work or if we've missed anything.

Thank you!

@gdman
Author

gdman commented Oct 24, 2024

That sounds great to me. Thanks for the fast response!

@gdman
Author

gdman commented Oct 25, 2024

@VivekMChawla

> Feedback or questions?
>
> Please let us know if you think the proposed fixes will work or if we've missed anything.
>
> Thank you!

I had one thought overnight. How are component/test failures handled? Is there immediate feedback in the build output, or does the user need to wait until the end?

There have been a few iterations of deploy logging over the years: some have provided instant logging of a component or test failure, whereas others say there is an error but not what it is, or show nothing until the end. Deploy processes can be long (test runs can take hours), and the sooner the developer knows about any failure, the better. For context, the developers don't necessarily have access to the org where a build is being validated, so the only thing they have to go on is the logs.

@avesolovksyy

@VivekMChawla
Is it possible to keep the old output as the default behaviour, but introduce a switch for those who want this "better", more detailed output?
With changes like this to the default behaviour of the sf CLI (not just output formatting), you are making the CLI less stable and predictable across releases.

Regarding this particular change: after upgrading to the latest CLI version, I got ~2 million records of output in my CI/CD logs for metadata retrieve, validate, and deploy jobs. In some cases the browser tab simply crashes when I try to open jobs with such big logs.

Moreover, we've found that some of our fairly heavy metadata retrieve jobs for big orgs are crashing with heap out of memory errors. I got all this trouble because of a "better" output that I never asked for.

@pgumiela-wbd

+1, we've also unexpectedly hit JavaScript heap out of memory errors. Changes like this should be introduced as optional beta features and tested thoroughly.

@VivekMChawla

@avesolovksyy - The fix described in my previous message will result in fewer output lines in CI/CD logs. I apologize for the manner in which this feature was delivered. The fix should be in the nightly build early next week.

We are also investigating the root cause of the out of memory issue and will address that as well.

@pgumiela-wbd - Regarding your JavaScript heap out of memory issue, can you share additional details? Specifically:

  • Which command (deploy or retrieve) was running?
  • Was this in a CI environment or an interactive terminal?

Getting out a fix for this is my top priority. Thank you both for your patience.

@janmaj-ibm

janmaj-ibm commented Oct 28, 2024

Hello @VivekMChawla, we also started to experience the JavaScript heap out of memory issue after this change in output format.

Command: sf project deploy start --dry-run -w 180
Environment: CI (Azure pipelines)

A single output block is quite long, and it is repeated every 5 seconds.

@mdonnalley
Contributor

@janmaj-ibm @pgumiela-wbd @avesolovksyy @gdman @smithmd

Apologies for the inconvenience this caused you.

We decided to downgrade @salesforce/plugin-deploy-retrieve to 3.13.1 (the last version before these changes were introduced). Updating to 2.63.9 will get you back to the output you're used to.

Going forward, I have two PRs that address the issues/concerns you all brought up.

oclif/multi-stage-output#52 will implement the changes that @VivekMChawla described in the comment above.

oclif/table#32 will address the out of memory issue that a couple of you are experiencing.

Those changes will go into our nightly build as soon as they're reviewed and tested, which will likely happen in the next couple of days. I will post back here once they're available in nightly so you can test it out if you'd like.

@JonnyPower

Yup, we are also now experiencing the heap limit issue in CI.

@mdonnalley
Contributor

The latest nightly (2.65.8) now has all the changes we described above.

@geoffswift

geoffswift commented Oct 31, 2024

We hit an issue with some code that called out to sf like so, as a kind of "fire and forget" thing (we do not read or flush the STDOUT buffers).

const { execSync } = require('node:child_process');
execSync(`sf project deploy start --source-dir ${tempDeployDir}`);

We didn't even do anything with the output from the process; we just checked the exit code. Presumably because the code was run from a tty, the timer was updating very frequently, on the assumption that someone would be watching the clock update.

As I said, we never used the output from the command, so eventually the buffer just overflowed: a huge amount of data was being sent via STDOUT, which eventually crashed our code like so (I have truncated the log):

Error: Command failed: sf project deploy start --source-dir /var/folders/wn/g33972tn78g_53vkvvfmwn9c0000gq/T/deploySTzVh8
    at genericNodeError (node:internal/errors:984:15)
    at wrappedFn (node:internal/errors:538:14)
    at checkExecSyncError (node:child_process:890:11)
    at execSync (node:child_process:962:15)
    at updateHistoryTracking (/Users/[email protected]/src/cpm/bin/updateHistoryTracking.js:142:5)
    at Object.<anonymous> (/Users/[email protected]/src/cpm/bin/updateHistoryTracking.js:158:2)
    at Module._compile (node:internal/modules/cjs/loader:1369:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1427:10)
    at Module.load (node:internal/modules/cjs/loader:1206:32)
    at Module._load (node:internal/modules/cjs/loader:1022:12) {
  status: 1,
  signal: null,
  output: [
    null,
    '\n' +
      ' ─────────────── Deploying Metadata ───────────────\n' +
      '\n' +
      '\n' +
      ' ◼ Preparing\n' +
      ' ◼ Waiting for the org to respond\n' +
      ' ◼ Deploying Metadata\n' +
      ' ◼ Running Tests\n' +
      ' ◼ Updating Source Tracking\n' +
      ' ◼ Done\n' +
      '\n' +
      ' Status: ◜\n' +
      ' Elapsed Time: 0ms\n' +
      '\n' +
      '\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
      ' ─────────────── Deploying Metadata ───────────────\n' +
      '\n' +
      ' Deploying v61.0 metadata to [email protected] using the v62.0 SOAP\n' +
      ' API.\n' +
      '\n' +
      ' ⣾ Preparing 17ms\n' +
      ' ◼ Waiting for the org to respond\n' +
      ' ◼ Deploying Metadata\n' +
      ' ◼ Running Tests\n' +
      ' ◼ Updating Source Tracking\n' +
      ' ◼ Done\n' +
      '\n' +
      ' Status: ◜\n' +
      ' Deploy ID: 0AfD600001ZhBLzKAN\n' +
      ' Target Org: [email protected]\n' +
      ' Elapsed Time: 36ms\n' +
      '\n' +
      '\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
      ' ─────────────── Deploying Metadata ───────────────\n' +
      '\n' +
      ' Deploying v61.0 metadata to [email protected] using the v62.0 SOAP\n' +
      ' API.\n' +
      '\n' +
      ' ⣾ Preparing 39ms\n' +
      ' ◼ Waiting for the org to respond\n' +
      ' ◼ Deploying Metadata\n' +
      ' ◼ Running Tests\n' +
      ' ◼ Updating Source Tracking\n' +
      ' ◼ Done\n' +
      '\n' +
      ' Status: ◜\n' +
      ' Deploy ID: 0AfD600001ZhBLzKAN\n' +
      ' Target Org: [email protected]\n' +
      ' Elapsed Time: 65ms\n' +
      '\n' +
      '\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
<SNIP>
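
An aside not raised in this thread: for a true fire-and-forget call where only the exit code matters, Node's child_process can be told to discard the child's stdout instead of buffering it. A rough sketch, with a placeholder directory path:

const { execSync } = require('node:child_process');

const tempDeployDir = '/tmp/deploy-example'; // placeholder path

// stdin and stdout are discarded, stderr still reaches the terminal.
// Nothing is buffered in the parent process, and a non-zero exit code
// still makes execSync throw, so the exit-code check is preserved.
execSync(`sf project deploy start --source-dir ${tempDeployDir}`, {
  stdio: ['ignore', 'ignore', 'inherit'],
});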

@VivekMChawla

@geoffswift - Thank you for sharing the output of your logs. What you shared is extremely helpful.

Two quick questions:

  1. Is the script you're calling execSync() from running on someone's local machine or in a CI environment?
  2. Would it be acceptable to add the --json flag to your deploy start command? That should stop the human-readable output from clogging up the STDIO buffer.

FYI: @mdonnalley

@geoffswift

The problem I reported was indeed on someone's local machine. I did use --json as the workaround to solve this issue.
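
For anyone scripting deploys the same way, a minimal sketch of that --json workaround, using Node's built-in child_process (the directory path below is a placeholder), might look like this:

const { execSync } = require('node:child_process');

// With --json, the command emits a single JSON document instead of the
// animated human-readable output, so the captured stdout stays small.
const tempDeployDir = '/tmp/deploy-example'; // placeholder path
const raw = execSync(
  `sf project deploy start --source-dir ${tempDeployDir} --json`,
  { encoding: 'utf8' } // return a string rather than a Buffer
);

// execSync throws on a non-zero exit code; the JSON envelope also carries
// a top-level "status" field that can be checked defensively.
const result = JSON.parse(raw);
if (result.status !== 0) {
  throw new Error(`Deploy failed: ${raw}`);
}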
