diff --git a/docs/initial_setup_of_rseries_platform_layer.rst b/docs/initial_setup_of_rseries_platform_layer.rst index fd5abfb..c97665f 100644 --- a/docs/initial_setup_of_rseries_platform_layer.rst +++ b/docs/initial_setup_of_rseries_platform_layer.rst @@ -338,7 +338,7 @@ To set System Time settings, use the following API call as an example. This will PATCH https://{{rseries_appliance1_ip}}:8888/restconf/data/ -Below is the body of the API call contianing the desired configuration: +Below is the body of the API call containing the desired configuration: .. code-block:: json @@ -1105,7 +1105,7 @@ Then send the Base Reg Key in the body of the get-dossier API call below: POST https://{{rseries_appliance1_ip}}:8888/restconf/data/openconfig-system:system/f5-system-licensing:licensing/f5-system-licensing-install:get-dossier -Within the body of API call, enter your registation-key. Note, in the example below the actual Registration Key has been obfuscated with XXXX's. +Within the body of API call, enter your registration-key. Note, in the example below the actual Registration Key has been obfuscated with XXXX's. .. code-block:: json @@ -1653,7 +1653,7 @@ Once the qkview is generated, you can click the checkbox next to it, and then se :align: center :scale: 70% -If you would like to store iHealth credentials within the configuration you may do so via the CLI. Enter config mode, and then use the system diagnostics ihealth config command to configure a username and password. +If you would like to store iHealth credentials within the configuration you may do so via the CLI. Enter config mode, and then use the **system diagnostics ihealth config** command to configure a username and password. .. code-block:: bash diff --git a/docs/rseries_deploying_a_bigip_next_tenant.rst b/docs/rseries_deploying_a_bigip_next_tenant.rst index bd0406e..21618fc 100644 --- a/docs/rseries_deploying_a_bigip_next_tenant.rst +++ b/docs/rseries_deploying_a_bigip_next_tenant.rst @@ -98,13 +98,13 @@ If you need instructions on installing Central Manager, or general BIG-IP Next d Setting up an rSeries Provider in Central Manager ------------------------------------------------- -After logging into Central Manager you can setup an rSeries Provider by going to the **Manage Instances** button on the main home screen, or by using the drop down in the upper right hand corner of the webUI ans selecting +After logging into Central Manager you can setup an rSeries Provider by going to the **Manage Instances** button on the main home screen. .. image:: images/rseries_deploying_a_bigip_next_tenant/central-manager-home.png :align: center :scale: 70% -Alternatively, select the **Infrastructure** option. +Alternatively, select the **Infrastructure** option by using the drop-down in the upper left-hand corner of the webUI. .. image:: images/rseries_deploying_a_bigip_next_tenant/infrastructure.png :align: center @@ -116,7 +116,7 @@ Once on the Infrastructure page, select **Providers**, and then select the **Sta :align: center :scale: 70% -From the drop down menu, select **rSeries**. +From the drop-down menu, select **rSeries**. .. image:: images/rseries_deploying_a_bigip_next_tenant/add-an-instance-provider.png :align: center @@ -167,7 +167,7 @@ Review the requirements of what you'll need before proceeding, then click **Next :align: center :scale: 70% -Enter a hostname for the BIG-IP Next instance, and an optional description. Then, in the drop down box select **rSeries Standalone**, and then click the **Start Creating** button. 
+Enter a hostname for the BIG-IP Next instance, and an optional description. Then, in the drop-down box select **rSeries Standalone**, and then click the **Start Creating** button. .. image:: images/rseries_deploying_a_bigip_next_tenant/start-creating.png :align: center :scale: 70% @@ -206,7 +206,7 @@ For VELOS and rSeries r5000 and higher appliances only a single data interface ( :scale: 70% -Below is an example of an r10900 device. Click on **L1 Networks**, and note that the **DefaultL1Network** already exists and is mapped to the internal interface 1.1. Also note that it has zero VLANs assigned. +Below is an example of an r10900 device. Click on **L1 Networks** and note that the **DefaultL1Network** already exists and is mapped to the internal interface 1.1. Also note that it has zero VLANs assigned. .. image:: images/rseries_deploying_a_bigip_next_tenant/l1networks.png :align: center @@ -218,7 +218,7 @@ Click on **VLANs** and note that the VLANs you previously assigned to the instan :align: center :scale: 70% -In the drop-down box for L1 Networks select the **DefaultL1Network** for all of your VLANs, and then click **Next**. +In the drop-down box for L1 Networks select the **DefaultL1Network** for all your VLANs, and then click **Next**. .. image:: images/rseries_deploying_a_bigip_next_tenant/default-l1network-pick.png :align: center @@ -236,7 +236,7 @@ You'll need to add an IP address in format for each VLAN before you :align: center :scale: 70% -In the **Troubleshooting** section you will setup a new local username and password for the Next instance that you can utilize for direct troubleshooting access. The default username and password will no longer work. Note that one an instance is under central management all configuration should be done though Central Manager, and not direct to the Next instance. Click **Next**. +In the **Troubleshooting** section you will set up a new local username and password for the Next instance that you can utilize for direct troubleshooting access. The default username and password will no longer work. Note that once an instance is under central management all configurations should be done through Central Manager, and not directly to the Next instance. Click **Next**. .. image:: images/rseries_deploying_a_bigip_next_tenant/admin-cm.png :align: center @@ -263,7 +263,7 @@ Current Limitations and Caveats - Currently Link Aggregation Groups (LAGs) are not supported on the r2k / r4k when using BIg-IP Next tenants/instances. - For HA configurations the control plane HA link must be a dedicated link, and it must be the first "up" interface on that rSeries platform. -- When configuring standalone instances from Central Manager, both instance must be configured with the exact same name if they will be joined in an HA pair. +- When configuring standalone instances from Central Manager, both instances must be configured with the exact same name if they will be joined in an HA pair. - VLAN naming must be configured identically on any r2k/r4k platforms that will have tenants/instances in an HA pair. - Within Central Manager, interfaces for L1 Networks must use L1 Network style numbering (1.1, 1.2, 1.3 etc..) instead of the physical interface numbering (1.0, 2.0, 3.0 etc...) - When configuring a standalone instance from Central Manager, all VLAN naming between nodes in an HA cluster must be identical. 
@@ -290,7 +290,7 @@ Review the requirements of what you'll need before proceeding, then click **Next :align: center :scale: 70% -Enter a hostname for the BIG-IP Next instance, and an optional description. Then, in the drop down box select **rSeries Standalone**, and then click the **Start Creating** button. From the **rSeries Provider** section select the rSeries device that you added previously. Then click **Next**. +Enter a hostname for the BIG-IP Next instance, and an optional description. Then, in the drop-down box select **rSeries Standalone**, and then click the **Start Creating** button. From the **rSeries Provider** section select the rSeries device that you added previously. Then click **Next**. .. Note:: In the current F5OS-A 1.8.0 and BIG-IP Next releases the hostname must be exactly the same for any standalone nodes that wil be later joined as part of an HA cluster. @@ -298,7 +298,7 @@ Enter a hostname for the BIG-IP Next instance, and an optional description. Then :align: center :scale: 70% -Next configure the rSeries Properties, which includes **Disk Size**, **CPU Cores**, **Tenant Image Name**, **Tenant Deployment File**, and **VLAN IDs**. You ill need one or more in-band VLANs for client/server traffic, and one VLAN for data plane HA traffic, and another for control plane HA traffic. When finished, click the **Done** button. Enter the out-of-band **Management IP address**, **Network Prefix Length**, and **Gateway IP Address** and then click **Next**. +Next configure the rSeries Properties, which includes **Disk Size**, **CPU Cores**, **Tenant Image Name**, **Tenant Deployment File**, and **VLAN IDs**. You will need one or more in-band VLANs for client/server traffic, and one VLAN for data plane HA traffic, and another for control plane HA traffic. When finished, click the **Done** button. Enter the out-of-band **Management IP address**, **Network Prefix Length**, and **Gateway IP Address** and then click **Next**. .. Note:: The appropriate BIG-IP Next tenant image file should be loaded on the rSeries platform so that the Tenant Image Name and Tenant Deployment File can be selected in this screen. Currently there is no way to upload the image from Central Manager. @@ -320,13 +320,14 @@ In the EA release the following restrictions apply to the r2000/r4000 appliances - LAGs are not supported with BIG-IP Next - For HA configurations the Control Plane VLAN must run on a dedicated physical interface, and it must be the lowest numbered "up" interface. -Both of these restrictions will be addressed in future releases. + +Both restrictions will be addressed in future releases. In order to understand how to configure the networking when onboarding a BIG-IP Next tenant it is important to understand the mapping of physical interface numbering on the r2000/r4000 platforms and how they map to internal BIG-IP Next L1 Networking interfaces. In the diagram below, you can see that F5OS physical interface numbering follows the format of: - 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 -Inside the BIG-IP Next instance/tenant these physical interfaces have to be mapped to L1 Network interfaces manually. You only need to create L1 Networks for ports that you are actually using, unused ports do not need L1 networks created. In the diagram below, you can see that Next L1 Networking interface numbering follows the format of: +Inside the BIG-IP Next instance/tenant these physical interfaces must be mapped to L1 Network interfaces manually. 
You only need to create L1 Networks for ports that you are actually using, unused ports do not need L1 networks created. In the diagram below, you can see that Next L1 Networking interface numbering follows the format of: - 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8 @@ -334,7 +335,7 @@ Inside the BIG-IP Next instance/tenant these physical interfaces have to be mapp :align: center :scale: 70% -Unlike the r5000 and higher rSeries models, L1 Networks are not automatically created. You will need to create an L1 Network for each physical interface you intend to use. You are free to name the L1 Networks however you wish but for the sake of simplicity we recommend using naming as seen below, and remember that they must be identical names and interfaces on both instances in an HA cluster. For this example the following L1 network names and interface mappings are used. +Unlike the r5000 and higher rSeries models, L1 Networks are not automatically created. You will need to create an L1 Network for each physical interface you intend to use. You are free to name the L1 Networks however you wish but for the sake of simplicity we recommend using naming as seen below. Remember that they must be identical names and interfaces on both instances in an HA cluster. For this example, the following L1 network names and interface mappings are used. +------------------+-----------------------------+ | L1 Network Name | L1 Network Interface Number | @@ -353,14 +354,14 @@ Unlike the r5000 and higher rSeries models, L1 Networks are not automatically cr :scale: 70% -Below is an example of an r4800 device. Note there are no default L1 Networks defined. Click on **Create**, and create multiple **L1Networks**. Ideally, you should create one for each physical interface that is going to be used. In this case we will have 3 total. Give Each L1 Network a descriptive name (and it must be the same names between members of an HA cluster), and then map it to the L1 Network interface that maps to the F5OS physical interface you are using. Be sure to use the L1 Networking numbering format. i.e. 1.1, 1.2, 1.3 etc... +Below is an example of an r4800 device. Note there are no default L1 Networks defined. Click on **Create** and create multiple **L1Networks**. Ideally, you should create one for each physical interface that is going to be used. In this case we will have 3 total. Give Each L1 Network a descriptive name (and it must be the same names between members of an HA cluster), and then map it to the L1 Network interface that maps to the F5OS physical interface you are using. Be sure to use the L1 Networking numbering format. i.e. 1.1, 1.2, 1.3 etc... .. image:: images/rseries_deploying_a_bigip_next_tenant/create-3-times.png :align: center :scale: 70% -Click on **VLANs** and note that the VLANs you previously assigned to the instance are listed, however they are not mapped to any L1 Networks yet. In the drop-down box for L1 Networks select the proper L1 Network for all of your VLANs, and then click **Next**. +Click on **VLANs** and note that the VLANs you previously assigned to the instance are listed, however they are not mapped to any L1 Networks yet. In the drop-down box for L1 Networks select the proper L1 Network for all your VLANs, and then click **Next**. .. image:: images/rseries_deploying_a_bigip_next_tenant/next-vlans-4k.png :align: center @@ -374,7 +375,7 @@ Finally, you must assign an IP addresses to each VLAN. 
Click on **IP Addresses** :scale: 70% -In the **Troubleshooting** section you will setup a new local username and password for the Next instance that you can utilize for direct troubleshooting access. The default username and password will no longer work. Note that one an instance is under central management all configuration should be done though Central Manager, and not direct to the Next instance. Click **Next**. +In the **Troubleshooting** section you will set up a new local username and password for the Next instance that you can utilize for direct troubleshooting access. The default username and password will no longer work. Note that once an instance is under central management all configurations should be done through Central Manager, and not directly to the Next instance. Click **Next**. .. image:: images/rseries_deploying_a_bigip_next_tenant/admin-cm.png :align: center @@ -392,7 +393,7 @@ You can then monitor the status of the instance being created. It will take some :align: center :scale: 70% -You can then begin creating the second standalone instance on your other rSeries device. The **Hostname** must be identical to the first Next instance that was created. (This is a temporary restriction that will be addressed in a subsequent release). Select the provider for the second rseries device. +You can then begin creating the second standalone instance on your other rSeries device. The **Hostname** must be identical to the first Next instance that was created. (This is a temporary restriction that will be addressed in a subsequent release). Select the provider for the second rSeries device. .. image:: images/rseries_deploying_a_bigip_next_tenant/create-second-instance.png :align: center @@ -465,7 +466,7 @@ On the first line change the drop-down box to **Active Node IP Address**, on the :align: center :scale: 70% -Finally, review the configuration and click the **Deploy HA** button. In the my instances screen, eventually the two standalone instances will merge into one instance with the **Mode** set to **HA**. Central Manager will now manage the HA cluster as one entity via the floating management IP address. There is no need to manage the nodes individually, or worry about synchronizing configurations as is the case with BIG-IP. This shows the simplified HA management provided by Central Manager. +Finally, review the configuration and click the **Deploy HA** button. In the **My Instances** screen, eventually the two standalone instances will merge into one instance with the **Mode** set to **HA**. Central Manager will now manage the HA cluster as one entity via the floating management IP address. There is no need to manage the nodes individually or worry about synchronizing configurations as is the case with BIG-IP. This shows the simplified HA management provided by Central Manager. .. image:: images/rseries_deploying_a_bigip_next_tenant/deploy-ha.png @@ -480,7 +481,7 @@ Uploading a BIG-IP Next Tenant Image via CLI BIG-IP Next tenant software images are loaded directly into the F5OS platform layer in the same manner as BIG-IP tenant images. For the initial release of BIG-IP Next on rSeries, supported tenant versions are v20.1 and later. -Before deploying any BIG-IP Next tenant, you must ensure you have a proper tenant software release loaded into the F5OS platform layer. If an HTTPS/SCP/SFTP server is not available, you may upload a BIG-IP Next tenant image using scp directly to the F5OS platform layer.
Simply SCP an image to the out-of-band management IP address using the admin account and a path of **IMAGES**. There are also other upload options available in the webUI (Upload from Browser) or API (HTTPS/SCP/SFTP). Below is an example of using SCP from a remote client. Note, in releases prior to F5OS-A 1.8.0 you can only upload tenant images using SCP vai the root account. In F5OS-A 1.8.0 and later the admin account will be used to SCP tenant images, and root will not longer be required. +Before deploying any BIG-IP Next tenant, you must ensure you have a proper tenant software release loaded into the F5OS platform layer. If an HTTPS/SCP/SFTP server is not available, you may upload a BIG-IP Next tenant image using SCP directly to the F5OS platform layer. Simply SCP an image to the out-of-band management IP address using the admin account and a path of **IMAGES**. There are also other upload options available in the webUI (Upload from Browser) or API (HTTPS/SCP/SFTP). Below is an example of using SCP from a remote client. Note, in releases prior to F5OS-A 1.8.0 you can only upload tenant images using SCP via the root account. In F5OS-A 1.8.0 and later the admin account will be used to SCP tenant images, and root access will no longer be required. .. code-block:: bash @@ -697,7 +698,7 @@ The second option is to click the **Upload** button to select an image file that :align: center :scale: 70% -After the image is uploaded, you need to wait until it shows **Verified** status before deploying a tenant. The second option in the webUI to upload files is via the **System Settings > File Utilities** page. In the drop down for the **Base Directory** select **images/tenant**, and here you will see all the available tenant images on the system. You can use the same **Import** and **Upload** options as outlined in the previous example. +After the image is uploaded, you need to wait until it shows **Verified** status before deploying a tenant. The second option in the webUI to upload files is via the **System Settings > File Utilities** page. In the drop-down for the **Base Directory** select **images/tenant**, and here you will see all the available tenant images on the system. You can use the same **Import** and **Upload** options as outlined in the previous example. .. image:: images/rseries_deploying_a_bigip_next_tenant/image50.png :align: center diff --git a/docs/rseries_deploying_a_tenant.rst b/docs/rseries_deploying_a_tenant.rst index 4adc1d1..24a636a 100644 --- a/docs/rseries_deploying_a_tenant.rst +++ b/docs/rseries_deploying_a_tenant.rst @@ -36,9 +36,9 @@ The **T1-F5OS** image type should be used with extreme caution. It is the smalle :align: center :scale: 70% -The remaining images (T2-F5OS, ALL-F5OS, T4-F5OS) all support in-place upgrades (they support multiple boot locations); however, they each default to different consumption of disk space that can be used by the tenant. No matter which image you chose you can always expand tenant disk space later using the **Virtual Disk Size** parameter in the tenant deployment options. This will require an outage. Although you can expand the virtual disk, you cannot shrink it, so it is best to not over estimate the image type you need. +The remaining images (T2-F5OS, ALL-F5OS, T4-F5OS) all support in-place upgrades (they support multiple boot locations); however, they each default to different consumption of disk space that can be used by the tenant. 
No matter which image you choose you can always expand tenant disk space later using the **Virtual Disk Size** parameter in the tenant deployment options. This will require an outage. Although you can expand the virtual disk, you cannot shrink it, so it is best not to overestimate the image type you need. -The **T2-F5OS** image is intended for a tenant that will run LTM and or DNS only, it is not suitable for tenants needing other modules provisioned (AVR may be an exception). This type of image is best suited in a high-density tenant environment where the number of tenants is going to be high per appliance and using minimum CPU resources (1 or 2 vCPUs per tenant). You may want to limit the amount of disk space each tenant can use as a means of ensuring the filesystem on the appliance does not become full. As an example, there is 1TB of disk space per r5000 and r10000 appliance, and 36 tenants each using the 142GB T4-F5OS image would lead to an over-provisioning situation. Because tenants are deployed in sparse mode which allows over-provisioning, this may not be an issue initially, but could become a problem later in the tenant’s lifespan as it writes more data to the disk. To keep the tenants in check, you can deploy smaller T2-F5OS images which can consume 45GB each. LTM/DNS deployments use much less disk space than other BIG-IP modules, which do extensive local logging and utilize databases on disk. +The **T2-F5OS** image is intended for a tenant that will run LTM and/or DNS only; it is not suitable for tenants needing other modules provisioned (AVR may be an exception). This type of image is best suited in a high-density tenant environment where the number of tenants is going to be high per appliance and using minimum CPU resources (1 or 2 vCPUs per tenant). You may want to limit the amount of disk space each tenant can use as a means of ensuring the filesystem on the appliance does not become full. As an example, there is 1TB of disk space per r5000 and r10000 appliance, and 36 tenants each using the 142GB T4-F5OS image would lead to an over-provisioning situation. Because tenants are deployed in sparse mode which allows over-provisioning, this may not be an issue initially but could become a problem later in the tenant’s lifespan as it writes more data to the disk. To keep the tenants in check, you can deploy smaller T2-F5OS images which can consume 45GB each. LTM/DNS deployments use much less disk space than other BIG-IP modules, which do extensive local logging and utilize databases on disk. The **All-F5OS** image is suitable for any module configuration and supports a default of 77GB or 82GB (depending on the TMOS version) for the tenant. It is expected that the number of tenants per blade would be much less, as the module combinations that drive the need for more disk space typically require more CPU/memory which will artificially reduce the tenant count per appliance. Having a handful of 76GB or 156GB images per appliance should not lead to an out of space condition. There are some environments where some tenants may need more disk space, and the T4-F5OS image can provide for that. Now that Virtual Disk expansion utilities are available you can always grow the disk consumption later so starting small and expanding later is a good approach; it may be best to default using the T4-F5OS image (if tenant density is not too dense) as that is essentially the default size for vCMP deployments today.
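To make the disk sizing discussion above concrete, here is a minimal sketch in bash using only the figures quoted in this section (roughly 1TB of tenant disk on an r5000/r10000 appliance, 142GB T4-F5OS images, and 45GB T2-F5OS images); the 1000GB capacity value is an approximation of "1TB", not an exact published number. It compares the worst-case allocated space of a 36-tenant deployment against the physical capacity. Because virtual disks are thinly provisioned the actual usage will be lower, but the comparison shows how much smaller the over-provisioning exposure is with the T2-F5OS image.

.. code-block:: bash

   # Worst-case (fully written) virtual disk allocation for a high-density deployment,
   # using the example figures from this section.
   TENANTS=36
   T4_IMAGE_GB=142      # T4-F5OS virtual disk size used in the example above
   T2_IMAGE_GB=45       # T2-F5OS virtual disk size used in the example above
   TENANT_DISK_GB=1000  # ~1TB of tenant disk per r5000/r10000 appliance (approximation)

   echo "T4-F5OS worst case: $(( TENANTS * T4_IMAGE_GB )) GB allocated vs ${TENANT_DISK_GB} GB available"
   echo "T2-F5OS worst case: $(( TENANTS * T2_IMAGE_GB )) GB allocated vs ${TENANT_DISK_GB} GB available"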
@@ -73,7 +73,7 @@ The Dashboard in the webUI has been enhanced in F5OS-A 1.8.0 to provide more vis :align: center :scale: 70% -There is also more granularity showing **Storage Utilization**. In the below example, you can see that F5OS has utilized 60% of the 109.7GB of disk it has dedicated. You can also see that there is 448.6GB available for **F5OS Tenant Disks** (BIG-IP Tenant) virtual disks, and that currently only 5% is used. This is the space shared by all BIG-IP Tenants virtual disks. It is important to remember that TMOS based BIG-IP virtual disks utilize thin provisioning, so the TMOS tenant may think it has more storage but in reality it is using much less capacity on the physical disk. You can see this by the **BIG-IP Tenant** utilizations. In the output below, there are two BIG-IP tenants (fix-ll & test-tenant). One has been allocated 80GB of disk while the other has been allocated 82GB of disk, however the actual size on disk is much lower (~5-7GB each). Lastly, there is a single BIG-IP Next tenant that has 25GB allocated to it, but is currently utilizing 7% of that space. +There is also more granularity showing **Storage Utilization**. In the below example, you can see that F5OS has utilized 60% of the 109.7GB of disk it has dedicated. You can also see that there is 448.6GB available for **F5OS Tenant Disks** (BIG-IP Tenant) virtual disks, and that currently only 5% is used. This is the space shared by all BIG-IP Tenants virtual disks. It is important to remember that TMOS based BIG-IP virtual disks utilize thin provisioning, so the TMOS tenant may think it has more storage but in reality, it is using much less capacity on the physical disk. You can see this by the **BIG-IP Tenant** utilizations. In the output below, there are two BIG-IP tenants (fix-ll & test-tenant). One has been allocated 80GB of disk while the other has been allocated 82GB of disk, however the actual size on disk is much lower (~5-7GB each). Lastly, there is a single BIG-IP Next tenant that has 25GB allocated to it but is currently utilizing 7% of that space. .. NOTE:: Storage utilization and allocation may be different on various rSeries platforms. @@ -249,7 +249,7 @@ Tenant software images are loaded directly into the F5OS platform layer. For the No other TMOS versions are supported other than hotfixes or rollups based on those versions of software, and upgrades to newer versions of TMOS happen within the tenant itself, not in the F5OS layer. The images inside F5OS are for initial deployment only. rSeries tenants do not support versions 16.0, 16.0 or 17.0, you can run either the minimum 15.1.x release or later for a given platform or any versions 17.1.x and later. -Before deploying any tenant, you must ensure you have a proper tenant software release loaded into the F5OS platform layer. If an HTTPS/SCP/SFTP server is not available, you may upload a tenant image using scp directly to the F5OS platform layer. Simply SCP an image to the out-of-band management IP address using the admin account and a path of **IMAGES**. There are also other upload options available in the webUI (Upload from Browser) or API (HTTPS/SCP/SFTP). Below is an example of using SCP from a remote client. +Before deploying any tenant, you must ensure you have a proper tenant software release loaded into the F5OS platform layer. If an HTTPS/SCP/SFTP server is not available, you may upload a tenant image using SCP directly to the F5OS platform layer. 
Simply SCP an image to the out-of-band management IP address using the admin account and a path of **IMAGES**. There are also other upload options available in the webUI (Upload from Browser) or API (HTTPS/SCP/SFTP). Below is an example of using SCP from a remote client. .. code-block:: bash @@ -335,7 +335,7 @@ Tenant lifecycle can be fully managed via the CLI using the **tenants** command Value for 'config gateway' (): 10.255.0.1 Boston-r10900-1(config-tenant-tenant2)# -**NOTE: The nodes value is currently required in the interactive CLI mode to remain consistent with VELOS, but should be set for 1 for rSeries tenant deployments.** +**NOTE: The nodes value is currently required in the interactive CLI mode to remain consistent with VELOS but should be set for 1 for rSeries tenant deployments.** When inside the tenant config mode, you can enter each configuration item one line at a time using tab completion and question mark for help. Type **config ?** to see all the available options. @@ -471,7 +471,7 @@ Uploading Tenant Images via webUI Before deploying any tenant, you must ensure you have a proper tenant software release loaded into F5OS. Under **Tenant Management** there is a page for uploading tenant software images. There are TMOS images specifically for rSeries. Only supported rSeries TMOS releases should be loaded into this system. Do not attempt to load older or even newer images unless there are officially supported on rSeries. -You can upload a tenant image via the webUI in two different places. The first is by going to the **Tenant Management > Tenant Images** page. There are two options on this page; you can click the **Import** button and you will receive a pop-up asking for the URL of a remote HTTPS server with optional credentials, and the ability to ignore certificate warnings. +You can upload a tenant image via the webUI in two different places. The first is by going to the **Tenant Management > Tenant Images** page. There are two options on this page; you can click the **Import** button, and you will receive a pop-up asking for the URL of a remote HTTPS server with optional credentials, and the ability to ignore certificate warnings. .. image:: images/rseries_deploying_a_tenant/image71.png :align: center diff --git a/docs/rseries_high_availability.rst b/docs/rseries_high_availability.rst index cf4d1f0..c94dbfd 100644 --- a/docs/rseries_high_availability.rst +++ b/docs/rseries_high_availability.rst @@ -26,7 +26,7 @@ In some customer environments they may not want to run the HA VLANs over a dedic :align: center :scale: 40% -If VPC style interconnects are not used, then the same concepts from above are used but slightly altered. In the first case LAGs are not dual homed due to lack of VPC support and instead are configured as point to point LAGs between one rSeries device and one upstream layer2 switch. Again, a dedicated HA link is optional but preferred. +If VPC style interconnects are not used, then the same concepts from above are used but slightly altered. In the first case LAGs are not dual homed due to lack of VPC support and instead are configured as point-to-point LAGs between one rSeries device and one upstream layer2 switch. Again, a dedicated HA link is optional but preferred. .. 
image:: images/rseries_high_availability/image4.png :align: center diff --git a/docs/rseries_multitenancy.rst b/docs/rseries_multitenancy.rst index 2f6c875..00df26a 100644 --- a/docs/rseries_multitenancy.rst +++ b/docs/rseries_multitenancy.rst @@ -283,7 +283,7 @@ Since all r2000 models are running on the same hardware appliance, you can easil Tenant Sizing ============= -Single vCPU (Skinny) tenants are supported on the r5000, r10000, and r12000 appliances, but that option is hidden under **Advanced** mode.This would allow for 60 single vCPU tenants per r12900 appliance, 52 tenants for the r12800, and 44 tenants for the r12600.This would allow for 36 single vCPU tenants per r10900 appliance, 28 tenants for the r10800, and 24 tenants for the r10600. For the r5000 platforms this would allow for 26 single vCPU tenants per r5900 appliance, 18 tenants for the r5800, however the r5600 supports a max of 8 tenants. While single vCPU tenants are supported, they are not recommended for most environments. This is because a single vCPU tenant is running on a single hyperthread, and performance of a single thread can be influenced by other services running on the other hyperthread of a CPU. Since this can lead to unpredictable behavior only a very lightly loaded LTM/DNS only type tenant should be considered for this option and ideally for non-production environments. As always proper sizing should be done to ensure the tenant has enough resources. +Single vCPU (Skinny) tenants are supported on the r5000, r10000, and r12000 appliances, but that option is hidden under **Advanced** mode. This would allow for 60 single vCPU tenants per r12900 appliance, 52 tenants for the r12800, and 44 tenants for the r12600. This would allow for 36 single vCPU tenants per r10900 appliance, 28 tenants for the r10800, and 24 tenants for the r10600. For the r5000 platforms this would allow for 26 single vCPU tenants per r5900 appliance, 18 tenants for the r5800; however, the r5600 supports a max of 8 tenants. While single vCPU tenants are supported, they are not recommended for most environments. This is because a single vCPU tenant is running on a single hyperthread, and performance of a single thread can be influenced by other services running on the other hyperthread of a CPU. Since this can lead to unpredictable behavior, only a very lightly loaded LTM/DNS only type tenant should be considered for this option and ideally for non-production environments. As always, proper sizing should be done to ensure the tenant has enough resources. Tenant States ============= diff --git a/docs/rseries_performance_and_sizing.rst b/docs/rseries_performance_and_sizing.rst index 4972a59..163994f 100644 --- a/docs/rseries_performance_and_sizing.rst +++ b/docs/rseries_performance_and_sizing.rst @@ -204,7 +204,7 @@ To see how this translates into real performance, it is good to look at a Layer7 :align: center :scale: 90% -Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for Layer7 RPS (Inf-Inf) but you could use the same math for any metric. Note the graph below is not derived from a per vCPU test, it is taking a published appliance metric and dividing it by the number of vCPUs (minus the platform vCPUs) to come up with a per vCPU metric. As mentioned above using the rSeries metric which is (minus the platform CPUs) is the most realistic.
As you will note below, migrating from an i5600 to an r5600 will have better per VCPU performance. This is also the case when migrating from an i7600 to an i5900. There are two cases where the per vCPU performance is lower. When going from an i5800 to an r5800 or when going from and i7800 to an r5900 the per vCPU metrics are lower on iSeries. The per vCPU metrics are lower on rSeries **even though the aggregate performance is higher for the entire appliance**. This is due to the speed of the processors, but since there are more processors, the aggregate performance is higher. +Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for Layer7 RPS (Inf-Inf), but you could use the same math for any metric. Note the graph below is not derived from a per vCPU test; it is taking a published appliance metric and dividing it by the number of vCPUs (minus the platform vCPUs) to come up with a per vCPU metric. As mentioned above, using the rSeries metric (minus the platform CPUs) is the most realistic. As you will note below, migrating from an i5600 to an r5600 will have better per vCPU performance. This is also the case when migrating from an i7600 to an r5900. There are two cases where the per vCPU performance is lower. When going from an i5800 to an r5800 or when going from an i7800 to an r5900, the per vCPU metrics are lower on rSeries **even though the aggregate performance is higher for the entire appliance**. This is due to the speed of the processors, but since there are more processors, the aggregate performance is higher. .. image:: images/rseries_performance_and_sizing/image15d.png :align: center @@ -218,7 +218,7 @@ Also consider that these extrapolations for the iSeries appliances are for bare :align: center :scale: 90% -In the cases where there are gaps/decreases in per vCPU performance when migrating to the rSeries, as the number of vCPUs in a tenant grows the gap will widen as seen in the chart below (this is not normalized for vCMP overhead). This will require more focus on tenant sizing when moving to rSeries for these specific scenarios.
As an example, if you wanted to migrate an i5800 appliance into a tenant on an rSeries r5800 appliance you may assume that since the i5800 has 8 vCPUs that you can just migrate it into a 8 vCPU tenant. While this may be possible depending on how utilized the i5800 is, it is better to be conservative in sizing and allocate more vCPUs on the r5800 to bring the performance in line with what an i5800 can support for performance. In the example below, to match the i5800 data sheet performance of 1.8M Layer7 RPS, you would need to allocate and additional 2 vCPUs to that tenant on an r5800. The good news is that the r5800 supports up to 18 vCPUs for tenants so more vCPUs can be allocated if needed. The numbers below are an extrapolation and not based on real world environments, so results may vary. +In the cases where there are gaps/decreases in per vCPU performance when migrating to the rSeries, as the number of vCPUs in a tenant grows the gap will widen as seen in the chart below (this is not normalized for vCMP overhead). This will require more focus on tenant sizing when moving to rSeries for these specific scenarios. As an example, if you wanted to migrate an i5800 appliance into a tenant on an rSeries r5800 appliance you may assume that since the i5800 has 8 vCPUs that you can just migrate it into an 8 vCPU tenant. While this may be possible depending on how utilized the i5800 is, it is better to be conservative in sizing and allocate more vCPUs on the r5800 to bring the performance in line with what an i5800 can support for performance. In the example below, to match the i5800 data sheet performance of 1.8M Layer7 RPS, you would need to allocate an additional 2 vCPUs to that tenant on an r5800. The good news is that the r5800 supports up to 18 vCPUs for tenants so more vCPUs can be allocated if needed. The numbers below are an extrapolation and not based on real world environments, so results may vary. .. image:: images/rseries_performance_and_sizing/image15f.png :align: center @@ -293,7 +293,7 @@ To see how this translates into real performance, it is better to look at a publ :align: center :scale: 100% -Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for the published Layer7 RPS (Inf-Inf) but you could use the same math for any metric. Note: the graph below is not derived from a per vCPU test, it is taking a published appliance metric and dividing it by the number of vCPUs (or CPUs in the case of the r2000/r4000) to come up with a per vCPU metric. For some rSeries models, (rx600) some CPUs are disabled so they are not included in the equation. As you will note below, migrating from an i2600 to an r2600 will have better per VCPU performance. When going from an i2800 to an r2800 the per vCPU metrics are lower on rSeries. This is due to a combination of the type of processors being used on the rSeries appliances, as well as the CPU Ghz being throttled on the ix600 iSeries models. The i2600 has a throttled CPU running at 1.2Ghz, while the r2600 is not throttled and runs at 2.2 Ghz, so the per vCPU performance is better when migrating from i2600 to r 2600. +Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for the published Layer7 RPS (Inf-Inf), but you could use the same math for any metric. Note: the graph below is not derived from a per vCPU test; it is taking a published appliance metric and dividing it by the number of vCPUs (or CPUs in the case of the r2000/r4000) to come up with a per vCPU metric. For some rSeries models (rx600), some CPUs are disabled so they are not included in the equation. As you will note below, migrating from an i2600 to an r2600 will have better per vCPU performance. When going from an i2800 to an r2800 the per vCPU metrics are lower on rSeries. This is due to a combination of the type of processors being used on the rSeries appliances, as well as the CPU GHz being throttled on the ix600 iSeries models. The i2600 has a throttled CPU running at 1.2 GHz, while the r2600 is not throttled and runs at 2.2 GHz, so the per vCPU performance is better when migrating from i2600 to r2600. This is not the case with the migration from i2800 to r2800. The per vCPU performance is lower on the r2800, but in aggregate it makes up for this by having more vCPUs (8 vs. 4). This is seen in the overall numbers for the appliances in the link above.
Since the r2000 appliances only support one tenant, it is less important what a single vCPU/ CPU can do as all the available resources will be used by the single tenant. Where this may make a difference, is understanding the control plane performance between iSeries and rSeries, since the control plane will run on a single vCPU in iSeries or CPU on rSeries. The i2600 to r2600 should see an increase in control plane performance, while the i2800 to r2800 could see a drop in control plane performance based on extrapolations below. @@ -307,7 +307,7 @@ To see how this translates into real performance, we'll repeat the same exercise :align: center :scale: 70% -Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for Layer7 RPS (Inf-Inf) but you could use the same math for any metric. Note: the graph below is not derived from a per vCPU test, it is taking a published appliance metric and dividing it by the number of vCPUs (or CPUs in the case of the r2000/r4000) to come up with a per vCPU metric. For some rSeries models (rx600) some CPUs are disabled so they are not included in the equation. As you will note below, migrating from an i4600 to an r4600 will have better per vCPU performance. When going from an i4800 to an r4800 the per vCPU metrics are lower on rSeries. This is due to a combination of the type of processors being used on the rSeries appliances, as well as the CPU Ghz being throttled on the ix600 iSeries models. The i4600 has a throttled CPU running at 1.2Ghz, while the r4600 is only throttled .1Ghz and runs at 2.1 Ghz, so the per vCPU performance is better when migrating from i4600 to r 4600. +Because each appliance has a different number of CPUs, a common sizing exercise is to look at the per vCPU performance by using the formulas above to come up with a per vCPU metric. In the graph below it is done for Layer7 RPS (Inf-Inf), but you could use the same math for any metric. Note: the graph below is not derived from a per vCPU test; it is taking a published appliance metric and dividing it by the number of vCPUs (or CPUs in the case of the r2000/r4000) to come up with a per vCPU metric. For some rSeries models (rx600), some CPUs are disabled so they are not included in the equation. As you will note below, migrating from an i4600 to an r4600 will have better per vCPU performance. When going from an i4800 to an r4800 the per vCPU metrics are lower on rSeries. This is due to a combination of the type of processors being used on the rSeries appliances, as well as the CPU GHz being throttled on the ix600 iSeries models. The i4600 has a throttled CPU running at 1.2 GHz, while the r4600 is only throttled 0.1 GHz and runs at 2.1 GHz, so the per vCPU performance is better when migrating from i4600 to r4600. This is not the case with the migration from i4800 to r4800. The per vCPU performance is lower on the r4800, but in aggregate it makes up for this by having more vCPUs (16 vs. 8). This is seen in the overall numbers for the appliances in the link above. Since the r4000 appliances support more than one tenant, it is important to know the performance of a single vCPU/CPU so that extrapolations can be made for various tenant sizes. It will also make a difference in understanding the control plane performance between iSeries and rSeries since the control plane will run on a single vCPU in iSeries or CPU on rSeries.
The i4600 to r4600 should see an increase in control plane performance, while the i4800 to r4800 could see a drop in control plane performance based on extrapolations below. @@ -315,7 +315,7 @@ This is not the case with the migration from i4800 to r4800. The per vCPU perfor :align: center :scale: 70% -In the cases where there are gaps/decreases when migrating to the rSeries as the number of vCPUs in a tenant grows, the gap will widen as seen in the chart below. This will require more focus on tenant sizing when moving to rSeries for these specific scenarios. As an example, if you wanted to migrate an i4800 appliance into a tenant on an rSeries 4800 appliance, you may assume that since the i4800 has 8 vCPUs that you can just migrate it into a 8 vCPU tenant. While this may be possible depending on how utilized the i4800 is, it is better to be conservative in sizing an allocate more vCPU's on the r4800 to bring the performance in line with what an i4800 can support for performance. In the example below to match the i4800 data sheet performance of 1.1M Layer7 RPS, you would need to allocate and additional 2 vCPUs (CPUs on the r4000) to that tenant. The good news is that the r4800 supports up to 16 vCPUs for tenants so more vCPUs can be allocated if needed, but the supported tenant sizes are 4, 8, 12, and 16. This means that you would have to go to the next supported vCPU allocation for a tenant which is 12. The numbers below are an extrapolation and not based on real world environments, so results may vary. +In the cases where there are gaps/decreases when migrating to the rSeries as the number of vCPUs in a tenant grows, the gap will widen as seen in the chart below. This will require more focus on tenant sizing when moving to rSeries for these specific scenarios. As an example, if you wanted to migrate an i4800 appliance into a tenant on an rSeries r4800 appliance, you may assume that since the i4800 has 8 vCPUs that you can just migrate it into an 8 vCPU tenant. While this may be possible depending on how utilized the i4800 is, it is better to be conservative in sizing and allocate more vCPUs on the r4800 to bring the performance in line with what an i4800 can support for performance. In the example below, to match the i4800 data sheet performance of 1.1M Layer7 RPS, you would need to allocate an additional 2 vCPUs (CPUs on the r4000) to that tenant. The good news is that the r4800 supports up to 16 vCPUs for tenants so more vCPUs can be allocated if needed, but the supported tenant sizes are 4, 8, 12, and 16. This means that you would have to go to the next supported vCPU allocation for a tenant, which is 12. The numbers below are an extrapolation and not based on real world environments, so results may vary. .. image:: images/rseries_performance_and_sizing/image19g.png :align: center :scale: 70%
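The sizing arithmetic described above can be sketched out as a quick calculation. The bash snippet below is only an illustration of that extrapolation, not an official sizing tool: the 1.1M Layer7 RPS figure is the i4800 data sheet number quoted above, while the per vCPU estimate for an r4800 tenant is a placeholder chosen to be consistent with the "additional 2 vCPUs" conclusion above; substitute your own data sheet values when sizing real tenants.

.. code-block:: bash

   # Illustrative extrapolation only - the per vCPU figure is a placeholder, not a published metric.
   I4800_L7_RPS=1100000          # i4800 data sheet Layer7 RPS (1.1M)
   R4800_RPS_PER_VCPU=110000     # assumed per vCPU estimate for an r4800 tenant

   # vCPUs needed on the r4800 tenant to match the i4800, rounded up
   NEEDED=$(( (I4800_L7_RPS + R4800_RPS_PER_VCPU - 1) / R4800_RPS_PER_VCPU ))

   # r4000 tenants can only be sized 4, 8, 12, or 16 vCPUs, so round up to a multiple of 4
   SUPPORTED=$(( ((NEEDED + 3) / 4) * 4 ))

   echo "vCPUs required: ${NEEDED} -> next supported tenant size: ${SUPPORTED}"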