Update `vpc.max.networks` setting & settings to limit the number of NICs for each hypervisor #8654
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files
@@ Coverage Diff @@
## main #8654 +/- ##
============================================
- Coverage 31.04% 30.83% -0.21%
+ Complexity 33902 33647 -255
============================================
Files 5404 5405 +1
Lines 380305 380371 +66
Branches 55506 55519 +13
============================================
- Hits 118056 117298 -758
- Misses 246496 247468 +972
+ Partials 15753 15605 -148
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
I agree this setting should be dynamic, and it would be better as a zone-level setting. But a VPC does not belong to a cluster, and VMs in a VPC might run on different hypervisors, in different clusters. IMHO, it does not make sense for this to be a cluster-level setting.
Since the maximum number of networks per VPC depends on the hypervisor where the VR is deployed (as pointed out in the description), I don't think it makes sense to change it to the zone level, as the cluster is the highest structure at which a hypervisor can be defined.
The setting value is defined by the cluster in which the VPC's VR is running, using the lowest value found if the VPC has more than one VR and they are in different clusters (https://github.com/apache/cloudstack/pull/8654/files#diff-07bd71d9f58832d4429d7743f4887188a96aacc913dc48e0101470147ce42032R1893-R1922). Regarding …
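For context, a dynamic, cluster-scoped setting in CloudStack is declared as a `ConfigKey`. A minimal sketch, assuming the standard constructor from the config framework; the default value and description below are illustrative, not the PR's actual code:

```java
// A minimal sketch of a dynamic, cluster-scoped ConfigKey. The default value
// and description are illustrative, not the PR's actual declaration.
public static final ConfigKey<Integer> VpcMaxNetworks = new ConfigKey<>(
        "Advanced",                // category
        Integer.class,             // value type
        "vpc.max.networks",        // setting name
        "3",                       // default value (illustrative)
        "Maximum number of networks (tiers) per VPC.",
        true,                      // dynamic: changes apply without a restart
        ConfigKey.Scope.Cluster);  // resolvable per cluster via valueIn(clusterId)
```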
I can understand your code.
@weizhouapache Each hypervisor can support a different number of network adapters. Comparing KVM and VMware: VMware defines a limited number of NICs for each ESXi machine (https://configmax.esp.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%207.0&categories=1-0), while the number of tiers that can be allocated on KVM depends on the number of available PCI slots. For example, KVM provides 32 PCI slots, which are used to connect several devices (e.g., CD-ROM, keyboard). Every ACS VR already consumes 9 of the 32 available slots; thus, on KVM we have 23 slots left for new tiers. Therefore, in an environment with KVM and VMware clusters under the same zone, applying the VMware limit to KVM is not ideal, as a VPC on KVM supports far more tiers than on VMware. I will update the PR's description to make this clearer.
@hsato03 However ...
I would suggest you create a setting for each hypervisor, e.g. …
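The concrete examples from this comment were not preserved. A hypothetical sketch of what per-hypervisor settings could look like; all names and defaults below are invented for illustration and are not the PR's actual settings:

```java
// Hypothetical per-hypervisor NIC limits along the lines of the suggestion
// above; names and defaults are invented for illustration.
public static final ConfigKey<Integer> RouterMaxNicsKvm = new ConfigKey<>(
        "Advanced", Integer.class, "router.max.nics.kvm", "23",
        "Maximum number of NICs a virtual router can have on KVM.",
        true, ConfigKey.Scope.Global);

public static final ConfigKey<Integer> RouterMaxNicsVmware = new ConfigKey<>(
        "Advanced", Integer.class, "router.max.nics.vmware", "10",
        "Maximum number of NICs a virtual router can have on VMware.",
        true, ConfigKey.Scope.Global);
```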
@hsato03 …
@weizhouapache Thanks for your suggestion. I agree that this situation should include VMs and VRs, but the …
Agree.
It looks good. Looking forward to your changes.
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
Changed the title from "Update `vpc.max.networks` setting" to "Update `vpc.max.networks` setting & settings to limit the number of NICs for each hypervisor".
@blueorangutan package
@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 9570
@blueorangutan package
@hsato03 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 9751
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
I thought it might be a merge problem, but note that the GHA build also fails with this error: https://github.com/apache/cloudstack/actions/runs/8935283845/job/24543531187?pr=8654#step:7:18232
@blueorangutan package
@hsato03 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10329
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10457
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 10643
Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 10662
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
@blueorangutan package
@hsato03 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11368
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
[SF] Trillian test result (tid-11710) …
@hsato03 these errors seem related to the change, can you have a look?
Description
Each hypervisor can support a different number of network adapters. Comparing KVM and VMware: VMware defines a limited number of NICs for each ESXi machine (https://configmax.esp.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%207.0&categories=1-0), while the number of tiers that can be allocated on KVM depends on the number of available PCI slots. For example, KVM provides 32 PCI slots, which are used to connect several devices (e.g., CD-ROM, keyboard). Every ACS VR already consumes 9 of the 32 available slots; thus, on KVM we have 23 slots left for new tiers.
This PR updates the `vpc.max.networks` setting to a `ConfigKey` and changes its scope to the cluster level, as the maximum number of networks per VPC depends on the hypervisor where the VR is deployed. The setting value is defined based on the cluster in which the VPC's VR is running, using the lowest value found if the VPC has more than one VR and they are in different clusters. If the VPC does not have a VR, the value defined in the global setting is used.
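A simplified sketch of the resolution described in the previous paragraph; the DAO call and the cluster-lookup helper are illustrative stand-ins, and the actual implementation is in the diff linked earlier in the conversation:

```java
// Simplified sketch of the value resolution: take the lowest cluster-level
// value across the VPC's VRs, falling back to the global value when no VR exists.
private int getVpcMaxNetworks(long vpcId) {
    List<DomainRouterVO> routers = routerDao.listByVpcId(vpcId); // VRs of the VPC
    if (routers == null || routers.isEmpty()) {
        return VpcMaxNetworks.value(); // no VR: use the global setting value
    }
    int max = Integer.MAX_VALUE;
    for (DomainRouterVO router : routers) {
        Long clusterId = findClusterIdOfHost(router.getHostId()); // illustrative helper
        max = Math.min(max, VpcMaxNetworks.valueIn(clusterId));   // keep the lowest value
    }
    return max;
}
```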
How Has This Been Tested?
I changed the `vpc.max.networks` value in the following resources:
- `cluster-test`: 7;
- `cluster-test2`: 5.

I created 3 VPCs:
- one without a VR;
- one with its VR in `cluster-test`;
- one with VRs in `cluster-test` and `cluster-test2`.

Then, I verified that the `vpc.max.networks` setting value was: …
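Assuming the cluster-scoped values were set through the updateConfiguration API, the steps above could be reproduced via CloudMonkey roughly as follows; the cluster ids are placeholders:

```
# Placeholder ids; with this PR the setting accepts a cluster scope.
update configuration name=vpc.max.networks value=7 clusterid=<cluster-test-uuid>
update configuration name=vpc.max.networks value=5 clusterid=<cluster-test2-uuid>
list configurations name=vpc.max.networks clusterid=<cluster-test-uuid>
```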