W-16132496-CH2-Networking-Guide-LDS #227

Open. Wants to merge 33 commits into base: latest
Conversation

luanamulesoft (Contributor)

No description provided.

@luanamulesoft luanamulesoft self-assigned this Aug 21, 2024
@luanamulesoft luanamulesoft requested review from sarathecoubian and a team as code owners August 21, 2024 20:02
@luanamulesoft luanamulesoft requested a review from xuanshi August 30, 2024 14:28
@@ -0,0 +1,189 @@
= CloudHub 2.0 Networking Guide

Contributor:
Let's avoid using "guide" in the content.
Maybe "Networking in CloudHub 2.0" or simply "CloudHub 2.0 Networking"

@@ -0,0 +1,189 @@
= CloudHub 2.0 Networking Guide

This guide covers cloud deployments using CloudHub 2.0. See xref:runtime-manager::deployment-strategies.adoc[Deployment Options] for information about different deployment scenarios.

Contributor:
The short description should summarize (rather than introduce) the topic.

Contributor:
Remove hard-coded link text?


== Basic Networking Architecture Components

CloudHub 2.0 is MuleSoft's fully managed, containerized integration platform as a service (iPaaS) where you can deploy APIs and integrations as lightweight containers in the cloud so they are maintainable, secure, and scalable. The basic components of CloudHub 2.0 networking architecture are: Load Balancer, Private Spaces, and the Mule replica DNS records.

Contributor:
I sort of expect the next level (H3) to cover these three things.

Contributor:
Is this the only H2-level heading? If so, suggest removing this level because you have a bunch of information in H4s. If you remove this level and put this content in the short description, you can elevate those H4s to H3s.

Contributor:
Do you have an outline (just the headings) of this topic that I can look at?

Comment:
We should show where TGW attachments and / VPN sits as well

Comment:
We also need a diagram for shared spaces.


=== Upstream HTTPS Traffic

By default, the load balancing service forwards traffic deployed application over HTTP. To use HTTPS:

Comment:
sentence seems incomplete - traffic "to the" deployed app?

@ahuynhms (Oct 3, 2024):
Yes, that sentence needs to be reworded.
Example:
"forwards traffic to a deployed"
"forwards traffic to deployed applications"

cloudhub-2/modules/ROOT/pages/ch2-networking-guide.adoc (outdated; resolved)

If you use the internal DNS name, traffic remains within the private space network. You can delete or omit the externally exposed endpoint when deploying an application to a private space. In that case, you can use the application's internal endpoint for internal traffic.

If you use the cluster local endpoint, the traffic doesn't leave the cluster. However, the cluster local endpoint isn't highly available. During some cluster operations such as disaster recovery, the endpoint can be unreachable. The cluster local endpoint allows traffic within the same environment only.

Comment:
What are we referring to when we say 'cluster' here? Can we replace with 'private space' - to avoid exposing details that private space == cluster


By default, applications can make outbound connections to any destination and ports. You can change this behavior to restrict egress traffic.
+
You can remove all ingress and egress rules from and to the internet. In this case, the cluster still functions normally because of the following control measures:

Comment:
Again - should we say 'private space' or 'space' instead of cluster?


The CloudHub 2.0 load balancing service performs a round-robin load distribution across application replicas, which provides horizontal scalability. The load balancing service also provides transparent switchover when an application is upgraded. See xref:ch2-update-apps.adoc[].

Each application deployed to CloudHub 2.0 has both default public and internal DNS records that refer to the load balancer: `<app>.<space-shard>.region.cloudhub.io.` and `<app>.internal-<space-shard>.region.cloudhub.io.`.

Contributor:
Add brackets to region as well? Like `<region>.cloudhub.io`?

@ahuynhms (Oct 3, 2024):
Yes, the region should be in brackets, like `<region>.cloudhub.io`.

@ahuynhms (Oct 3, 2024):
Do we want to elaborate on what <app> means? According to https://docs.mulesoft.com/cloudhub-2/ch2-deploy-private-space#app-name-reqs, it is <app-name>-<unique-id>.

@ahuynhms (Oct 3, 2024):
  1. Should app2 in node1 be "app2/replica1" instead of just "app2"? To be consistent with app1/replica1.
  2. For the Client (Customer Network) to Internal NLB, maybe add an image to show it uses Anypoint VPN or Transit Gateway Attachment.
  3. A suggestion: should we have a 2nd more detailed diagram to show where the PS FW comes into the picture and the traffic flow for PS CIDR versus "Local Private Network"? We've had support cases where customer's app to app traffic is failing because they only allowed "PS CIDR" in the PS FW, but didn't allow "Local Private Network".


Each application deployed to CloudHub 2.0 has both default public and internal DNS records that refer to the load balancer: `<app>.<space-shard>.region.cloudhub.io.` and `<app>.internal-<space-shard>.region.cloudhub.io.`.

The CloudHub 2.0 load balancing service accepts public traffic on the standard HTTPS port: `443`. You can also choose to accept HTTP traffic on the standard HTTP port: `80`.

Comment:
I believe this comment is exclusive to Private Space. For Shared Space, customer cannot choose to accept port 80.

Contributor (author):
Add clarification.


=== Mule Application Ports

Applications must listen on host `0.0.0.0` and port via the reserved property `${http.port}`. You can dynamically allocate the value of this property, but you can't hard-code it.

@ahuynhms (Oct 3, 2024):
What do we mean by "You can dynamically allocate the value of this property, but you can't hard-code it"? Can we reword to clarify? What are we allowing the customer to dynamically allocate?

From my understanding, there are 3 options (2 options when using the HTTP Load Balancer).
Option 1: customer can use http.port property.
Option 2: customer can hard code the value 8081 instead of using the property.
Option 3 is not relevant here: can expose TCP port directly, bypassing HTTP Load Balancer.

If customer uses any other port, the request via HTTP Load Balancer will fail.

Contributor (author):
MuleSoft allocates the value of this property (do not mention hard-coding).
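For context on this thread, a minimal sketch of a Mule 4 HTTP listener that uses the reserved property might look like the following (the config name is arbitrary and for illustration only):

```xml
<http:listener-config name="http-listener-config">
    <!-- Listen on all interfaces; CloudHub 2.0 supplies the port via ${http.port} -->
    <http:listener-connection host="0.0.0.0" port="${http.port}"/>
</http:listener-config>
```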

By default, the load balancing service forwards traffic deployed application over HTTP. To use HTTPS:

* Enable the *Last-Mile Security* setting in the *Ingress* tab of the xref:ch2-deploy-private-space.adoc#configure-endpoint-path[deployment settings].
* Configure the application to listen on HTTPS by providing a certificate in the xref:ps-config-domains.adoc[TLS context] of your private space.

@ahuynhms (Oct 3, 2024):
I think this statement is wrong regarding the hyperlink we have for TLS context. The hyperlink points to the TLS context configuration for external-client-to-private-space traffic, whereas this sentence is talking about last-mile security, which is something the customer needs to configure themselves within the Mule app.

Contributor (author):
We'll get rid of the link.
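For the last-mile-security case being discussed, a hedged sketch of an HTTPS listener configured inside the Mule app itself (keystore path and password properties are placeholders; supply your own certificate):

```xml
<http:listener-config name="https-listener-config">
    <http:listener-connection host="0.0.0.0" port="${http.port}" protocol="HTTPS">
        <tls:context>
            <!-- Placeholder keystore; the app, not the private space, terminates TLS here -->
            <tls:key-store type="jks" path="keystore.jks"
                           keyPassword="${key.password}" password="${store.password}"/>
        </tls:context>
    </http:listener-connection>
</http:listener-config>
```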


=== Load Balancer Connections

For each request a client makes through the CloudHub 2.0 load balancer `(<app>.<space-id>.region.cloudhub.io)`, the load balancer maintains two connections: one connection between the client and the load balancer, and another connection between the load balancer and the application. For each connection, the load balancer manages a default idle timeout of 300 seconds that is triggered when no data is sent over either connection. If no data is sent or received during this duration, the load balancer closes both connections.

Comment:
Here we say `<space-id>` but previously we said `<space-shard>`. Let's make them consistent. Also, the same deal here with region: it needs to be in brackets.
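The idle-timeout behavior described in the excerpt can be illustrated with a small sketch (assuming the stated 300-second default; the function name is invented for illustration):

```python
IDLE_TIMEOUT_SECONDS = 300  # default idle timeout stated in the excerpt


def connections_closed(last_data_at: float, now: float) -> bool:
    """Both connections (client-to-LB and LB-to-app) close once no data
    has flowed over either connection for longer than the idle timeout."""
    return (now - last_data_at) > IDLE_TIMEOUT_SECONDS
```

A client that sends or receives data at least once every 300 seconds keeps both connections open.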

|`*.<space-id>.<region>.cloudhub.io`|Applications running inside a private space.
|`<space-id>.<region>.cloudhub.io`|Record pointing to the load balancer. Use this record as the target of a custom domain CNAME.
|`*.internal-<space-id>.<region>.cloudhub.io`|Applications running inside a private space. The IP addresses for this DNS record are accessible only within your xref:ch2-private-space-about.adoc[private spaces]. They cannot be accessed within xref:ch2-shared-space-about.adoc[shared spaces].
|`internal-<space-id>.<region>.cloudhub.io`|Internal load balancer. Use this record as the target of a custom domain CNAME. The IP addresses for this DNS record are accessible only within your private spaces. They cannot be accessed within shared spaces.

Comment:
The "Use this record as the target of a custom domain CNAME" comment for the last entry (internal load balancer) is wrong. We currently don't support custom domain CNAME for the internal LB. Is that correct?

@xuanshi (Oct 28, 2024):
We don't support a vanity domain for private endpoints. But we can still put the internal domain as a target, right?

Contributor (author):
SME confirmed that it is correct as it is now.

@ahuynhms (Oct 28, 2024):
Are we planning to have vanity domains for private endpoints released, before we publish this doc?

For the public endpoint scenario:

  1. Customer configures example.com
  2. They create a CNAME for example.com, which points to the public endpoint for purposes of DNS resolution, to get the IP.
  3. When CH2 LB receives the request, it will match the host header example.com against the configured vanity domain (example.com), and it will match.

For the private endpoint scenario:

  1. Let's say the customer sends a request to internal.com
  2. They have CNAME for internal.com, that points to the private endpoint, and DNS resolution works.
  3. But the problem is when CH2 LB receives the request, it will see the host header as internal.com, and it will be unable to match this against any certificate on the CH2 LB, because (currently) we don't support vanity domains for the private endpoint.

Based on my above examples, I don't understand how that statement is valid when we don't support vanity domains for the private endpoint.


If you use the internal DNS name, traffic remains within the private space network. You can delete or omit the externally exposed endpoint when deploying an application to a private space. In that case, you can use the application's internal endpoint for internal traffic.

If you use the cluster local endpoint, the traffic doesn't leave the cluster. However, the cluster local endpoint isn't highly available. During some cluster operations such as disaster recovery, the endpoint can be unreachable. The cluster local endpoint allows traffic within the same environment only.

@ahuynhms (Oct 4, 2024):
"However, the cluster local endpoint isn’t highly available. During some cluster operations such as disaster recovery, the endpoint can be unreachable."

How is this different from using the internal DNS name? In this statement we are suggesting that using the internal DNS name is better than using the cluster local endpoint, so we will surely get customers asking what the specific differences are so they can make their decision. My personal understanding was that using the cluster local endpoint is better (fewer hops), and customers just lose out on a common log-aggregation point (the ingress LB) for audit purposes.

How is using the cluster local endpoint less highly available compared to using the internal DNS name? Isn't using the internal DNS name the same, because the ingress will be forwarding traffic to the cluster local endpoint anyway.

* The default public DNS name: `app.sxjsip.aus-s1.cloudhub.io`
* The default internal DNS name (in private spaces only): `app.internal-sxjsip.aus-s1.cloudhub.io`
* The custom domain name (if configured): `acme.example.com`
* The cluster local DNS: `app` or `app.envid.svc.cluster.local`

Comment:
Should we include the port number here too "app.envid.svc.cluster.local"?
In the other methods we don't need to include the port because it uses the default 80/443. But when using the cluster local DNS, we need to include the port reference in the request (8081).

Contributor (author):
Change to: app.envid.svc.cluster.local:8081
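The four addressing options listed in this excerpt can be summarized with a small helper (a sketch assuming the naming scheme shown above; the function name is invented, and the cluster local port follows the author's `:8081` change):

```python
def app_endpoints(app: str, space_id: str, region: str,
                  env_id: str, port: int = 8081) -> dict:
    """Build the DNS names an app answers on, per the naming scheme in this thread."""
    return {
        # Public record, resolvable from the internet
        "public": f"{app}.{space_id}.{region}.cloudhub.io",
        # Internal record, resolvable only within the private space network
        "internal": f"{app}.internal-{space_id}.{region}.cloudhub.io",
        # Cluster local name; traffic stays in the cluster, port must be explicit
        "cluster_local": f"{app}.{env_id}.svc.cluster.local:{port}",
    }
```

For example, `app_endpoints("app", "sxjsip", "aus-s1", "envid")` reproduces the names shown in the excerpt above.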


You can apply custom certificates to your private space. CloudHub 2.0 parses the CN and SAN list from the certificate and makes those domains available when deploying applications.

Configure either in the public or the internal DNS record to CNAME. For example:

Comment:
I thought we don't support custom domain name for internal endpoint/DNS record (at least not yet?).

Contributor (author):
SME confirmed it is correct as it is

Contributor:
we don't support private custom domain. but we can still do the CNAME to the internal DNS record, right?
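For the public-endpoint case, which the table earlier in the diff does document as a supported CNAME target, a hedged BIND-style zone-file sketch (hostnames reuse the examples from this review; the TTL is arbitrary):

```
; Custom domain pointing at the public load balancer record
acme.example.com.   300  IN  CNAME  sxjsip.aus-s1.cloudhub.io.
```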

@Cristian-Venticinque (Contributor) left a comment:
Approved with some minor suggestions.

6 participants