
W-14455221-rtm-autoscaling #706

Open · wants to merge 9 commits into base: latest

Conversation

luanamulesoft (Contributor)

@luanamulesoft luanamulesoft commented Nov 8, 2023

Writer's Quality Checklist

Before merging your PR, did you:

  • Run spell checker
  • Run link checker to check for broken xrefs
  • Check for orphan files
  • Perform a local build and do a final visual check of your content, including checking for:
    • Broken images
    • Dead links
    • Correct rendering of partials if they are used in your content
    • Formatting issues, such as:
      • Misnumbered ordered lists (steps) or incorrectly nested unordered lists
      • Messed up tables
      • Proper indentation
      • Correct header levels
  • Receive final review and signoff from:
    • Technical SME
    • Product Manager
    • Editor or peer reviewer
    • Reporter, if this content is in response to a reported issue (internal or external feedback)
  • If applicable, verify that the software was actually released

@luanamulesoft luanamulesoft self-assigned this Nov 8, 2023
@luanamulesoft luanamulesoft requested a review from a team as a code owner November 8, 2023 18:28

@rithishapadmanabh rithishapadmanabh left a comment


I would change the organization of this section.

  1. The title can be "CPU-based Horizontal Autoscaling" to be very specific about the feature we have.
  2. The description of HPA is really the same for both CH2 and RTF, so I would move it above the table, and in the table (or as bullets) just link out to the CH2 and RTF sections specifically:
  • Horizontal autoscaling makes Mule applications deployed to CloudHub 2.0 and RTF responsive to CPU usage by automatically scaling replica capacity up or down as needed. In Kubernetes, a Horizontal Pod Autoscaler (HPA) automatically updates a workload resource to match demand.

|===
|Deployment Option |Implementation |More Information
|CloudHub 2.0 |Horizontal autoscaling makes Mule applications deployed to CloudHub 2.0 responsive to resource usage by automatically scaling replica capacity up or down as needed. In Kubernetes, a Horizontal Pod Autoscaler (HPA) automatically updates a workload resource to match demand. |xref:cloudhub-2::ch2-configure-horizontal-autoscaling.adoc[]
|Runtime Fabric |Runtime Fabric instances support horizontal autoscaling of Mule application deployments by initiating additional replicas in response to the configured signals. In Kubernetes, a Horizontal Pod Autoscaler (HPA) automatically updates a workload resource to match demand. |xref:runtime-fabric::configure-horizontal-autoscaling.adoc[]
|===


Slightly misleading: customers may perceive that they can configure signals. In reality, MuleSoft configures the signals (70%, etc.) out of the box, so they are not controllable by the customer, and the only signal we support is CPU, so customers cannot configure the type of signal either.
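For context, a Kubernetes HPA with a 70% CPU utilization target looks roughly like this. This is a hedged sketch only: the resource names, replica bounds, and threshold here are illustrative, not MuleSoft's actual managed configuration, which customers do not control.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-mule-app          # illustrative name, not an actual MuleSoft resource
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-mule-app        # the workload the HPA scales
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # the "70%" CPU signal mentioned above
```

When average CPU utilization across replicas exceeds the target, the HPA adds replicas (up to `maxReplicas`); when it drops, replicas are removed (down to `minReplicas`).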


I'd change it to "in response to CPU usage by the application".

luanamulesoft (Contributor, Author)


I added the same text for both CH2 and RTF, but maintained in separate rows, to respect the structure of the page.

3 participants