Cluster with vSphere Storage and maxStorageNodesPerZone #815

Open
mueller-tobias opened this issue Nov 4, 2022 · 0 comments
**BUG REPORT**:

What happened:
I am trying to deploy a Portworx cluster with vSphere cloud storage and maxStorageNodesPerZone: 2 set, for some tests. I have an RKE2 Kubernetes cluster with 9 workers, split across 3 zones and 2 regions:

  • region-a
    • zone-1
    • zone-2
  • region-b
    • zone-3

Instead of a Portworx cluster with 9 nodes (3 compute-only nodes and 6 storage nodes, 2 per zone), I get a cluster with 7 compute-only nodes and only 2 storage nodes.
pxctl status on one of the storage nodes shows the storage pool on that node, and the pool reports the correct zone and region. The nodes also get the correct topology.portworx.io labels.

Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: e9fa0d76-63dd-46c0-8f40-917fd183515f
        IP: 10.10.220.198 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           150 GiB 9.5 GiB Online  dc3     duesseldorf
        Local Storage Devices: 1 device
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/sdb        STORAGE_MEDIUM_MAGNETIC 150 GiB         04 Nov 22 14:14 UTC
        total                   -                       150 GiB
        Cache Devices:
         * No cache devices
        Kvdb Device:
        Device Path     Size
        /dev/sdc        32 GiB
         * Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
        Cluster ID: px-test
        Cluster UUID: 40db7504-8e09-4d42-9ba0-f265d942d362
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online), 7 node(s) without storage (7 online)
        IP              ID                                      SchedulerNodeName               Auth            StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel         OS
        10.10.220.198   e9fa0d76-63dd-46c0-8f40-917fd183515f    tm-rke2-pool4-9f251d72-8f6s5    Disabled        Yes             9.5 GiB 150 GiB         Online  Up (This node)  2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.197   d052a82a-2be4-47c2-889a-aa1a986c1551    tm-rke2-pool3-81fad623-bxghr    Disabled        Yes             9.5 GiB 150 GiB         Online  Up              2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.193   f510b9a2-62bd-4c23-a97a-cc557780d6b6    tm-rke2-pool4-9f251d72-6fn96    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.143   c8c5cd9a-0224-4c84-9c0f-dd1ad46d03ff    tm-rke2-pool2-c2693cf3-549sk    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.196   c563d40b-9fd8-48b2-874a-44762369b9e4    tm-rke2-pool3-81fad623-prz8b    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.195   a787fa15-c59f-4cf6-9758-a7c4a9365c9e    tm-rke2-pool2-c2693cf3-297ct    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.200   6be720d4-ad73-4b25-a97b-a81a5c8b5a5e    tm-rke2-pool4-9f251d72-8pc95    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.134   17961994-c523-4e7e-82b2-67fde1cf08ad    tm-rke2-pool2-c2693cf3-cjrp7    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
        10.10.220.194   15848061-71e7-47ef-82fc-645b2a005e36    tm-rke2-pool3-81fad623-xsbg9    Disabled        No              0 B     0 B             Online  No Storage      2.12.0-02bd5b0  5.4.0-131-generic       Ubuntu 20.04.5 LTS
Global Storage Pool
        Total Used      :  19 GiB
        Total Capacity  :  300 GiB
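
For completeness, the zone and region labels on the nodes can be listed like this; this assumes the standard topology.kubernetes.io keys alongside the topology.portworx.io label prefix mentioned above (the exact Portworx key names are my assumption):

```sh
# List every node with its region/zone labels as seen by Kubernetes and by Portworx
# (topology.portworx.io/region and topology.portworx.io/zone are assumed key names)
kubectl get nodes \
  -L topology.kubernetes.io/region \
  -L topology.kubernetes.io/zone \
  -L topology.portworx.io/region \
  -L topology.portworx.io/zone
```

Each node shows the expected region/zone values, matching the pool output above.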

What you expected to happen:

A Portworx cluster with 9 nodes, of which 6 are storage nodes: with maxStorageNodesPerZone: 2 and 3 zones, that is 3 × 2 = 6 storage nodes, 2 in each zone, leaving the remaining 3 nodes compute-only.

How to reproduce it (as minimally and precisely as possible):

I use Portworx Operator 1.10.0 with the StorageCluster spec below:

kind: StorageCluster
apiVersion: core.libopenstorage.org/v1
metadata:
  name: px-test
  namespace: portworx
spec:
  deleteStrategy:
    type: UninstallAndWipe
  image: portworx/oci-monitor:2.12.0
  imagePullPolicy: Always
  kvdb:
    internal: true
  cloudStorage:
    provider: vsphere
    deviceSpecs:
      - type=thin,size=150
    kvdbDeviceSpec: type=thin,size=32
    maxStorageNodesPerZone: 2
  secretsProvider: k8s
  stork:
    enabled: true
    args:
      webhook-controller: "true"
  autopilot:
    enabled: true
  csi:
    enabled: true
  monitoring:
    prometheus:
      enabled: true
      exportMetrics: true
  env:
    - name: VSPHERE_INSECURE
      value: "true"
    - name: VSPHERE_USER
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_USER
    - name: VSPHERE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_PASSWORD
    - name: VSPHERE_VCENTER
      value: "10.10.211.102"
    - name: VSPHERE_VCENTER_PORT
      value: "443"
    - name: VSPHERE_DATASTORE_PREFIX
      value: "default-container-80975640325584"
    - name: VSPHERE_INSTALL_MODE
      value: "shared"

Anything else we need to know?:
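
The px-vsphere-secret referenced in the env section was created along these lines (a sketch; the placeholder values stand in for the real vCenter credentials):

```sh
# Create the credentials secret the StorageCluster env entries reference
kubectl -n portworx create secret generic px-vsphere-secret \
  --from-literal=VSPHERE_USER='<vcenter-user>' \
  --from-literal=VSPHERE_PASSWORD='<vcenter-password>'
```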

Environment:

  • Container Orchestrator and version: v1.24.4+rke2r1
  • Cloud provider or hardware configuration: vSphere VMs
  • OS (e.g. from /etc/os-release): Ubuntu 20.04
  • Kernel (e.g. uname -a): 5.4.0-131-generic
  • Install tools:
  • Others: