
Commit

Merge branch '1.0' into resource-supervisor-additions
MuneebAijaz authored Oct 22, 2024
2 parents e9461e9 + 1448443 commit 90befa1
Showing 41 changed files with 910 additions and 78 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/closed_pr.yaml
@@ -7,6 +7,6 @@ on:

 jobs:
   push:
-    uses: stakater/.github/.github/workflows/mkdocs_pull_request_closed[email protected].79
+    uses: stakater/.github/.github/workflows/pull_request_closed[email protected].91
     secrets:
-      GH_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      GH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
4 changes: 2 additions & 2 deletions .github/workflows/delete_branch.yaml
@@ -4,6 +4,6 @@ on: delete

 jobs:
   delete:
-    uses: stakater/.github/.github/workflows/mkdocs_branch_deleted[email protected].79
+    uses: stakater/.github/.github/workflows/branch_deleted[email protected].91
     secrets:
-      GH_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      GH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
14 changes: 7 additions & 7 deletions .github/workflows/pull_request.yaml
@@ -8,25 +8,25 @@ on:

 jobs:
   doc_qa:
-    uses: stakater/.github/.github/workflows/[email protected].79
+    uses: stakater/.github/.github/workflows/[email protected].91
     with:
       MD_CONFIG: .github/md_config.json
       DOC_SRC: content
       MD_LINT_CONFIG: .markdownlint.yaml
   build_container:
     if: ${{ github.base_ref == 'main' }}
-    uses: stakater/.github/.github/workflows/[email protected].79
+    uses: stakater/.github/.github/workflows/[email protected].91
     with:
       DOCKER_BUILD_CONTEXTS: content=https://github.com/stakater/mto-docs.git#pull-request-deployments
       DOCKER_FILE_PATH: Dockerfile
     secrets:
       CONTAINER_REGISTRY_URL: ghcr.io/stakater
-      CONTAINER_REGISTRY_USERNAME: stakater-user
-      CONTAINER_REGISTRY_PASSWORD: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      CONTAINER_REGISTRY_USERNAME: ${{ github.actor }}
+      CONTAINER_REGISTRY_PASSWORD: ${{ secrets.GHCR_TOKEN }}
       SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
-      DOCKER_SECRETS: GIT_AUTH_TOKEN=${{ secrets.STAKATER_GITHUB_TOKEN }}
+      DOCKER_SECRETS: GIT_AUTH_TOKEN=${{ secrets.PUBLISH_TOKEN }}

   deploy_doc:
-    uses: stakater/.github/.github/workflows/mkdocs_pull_request_versioned_doc[email protected].79
+    uses: stakater/.github/.github/workflows/pull_request_versioned_doc[email protected].91
     secrets:
-      GH_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      GH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
4 changes: 2 additions & 2 deletions .github/workflows/push.yaml
@@ -8,6 +8,6 @@ on:

 jobs:
   push:
-    uses: stakater/.github/.github/workflows/mkdocs_push_versioned_doc[email protected].79
+    uses: stakater/.github/.github/workflows/push_versioned_doc[email protected].91
     secrets:
-      GH_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      GH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
12 changes: 6 additions & 6 deletions .github/workflows/release.yaml
@@ -7,18 +7,18 @@ on:

 jobs:
   create_release:
-    uses: stakater/.github/.github/workflows/[email protected].79
+    uses: stakater/.github/.github/workflows/[email protected].91
     secrets:
       SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
   build_container:
-    uses: stakater/.github/.github/workflows/[email protected].79
+    uses: stakater/.github/.github/workflows/[email protected].91
     with:
       DOCKER_BUILD_CONTEXTS: content=https://github.com/stakater/mto-docs.git#gh-pages
       DOCKER_FILE_PATH: Dockerfile
     secrets:
       CONTAINER_REGISTRY_URL: ghcr.io/stakater
-      CONTAINER_REGISTRY_USERNAME: stakater-user
-      CONTAINER_REGISTRY_PASSWORD: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+      CONTAINER_REGISTRY_USERNAME: ${{ github.actor }}
+      CONTAINER_REGISTRY_PASSWORD: ${{ secrets.GHCR_TOKEN }}
       SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
-      GH_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
-      DOCKER_SECRETS: GIT_AUTH_TOKEN=${{ secrets.STAKATER_GITHUB_TOKEN }}
+      GH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
+      DOCKER_SECRETS: GIT_AUTH_TOKEN=${{ secrets.PUBLISH_TOKEN }}
2 changes: 1 addition & 1 deletion .vale.ini
@@ -1,7 +1,7 @@
 StylesPath = styles
 MinAlertLevel = warning

-Packages = https://github.com/stakater/vale-package/releases/download/v0.0.25/Stakater.zip
+Packages = https://github.com/stakater/vale-package/releases/download/v0.0.37/Stakater.zip
 Vocab = Stakater

 # Only check MarkDown files
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,5 +1,5 @@
 # syntax=docker/dockerfile:1
-FROM nginxinc/nginx-unprivileged:1.26-alpine
+FROM nginxinc/nginx-unprivileged:1.27-alpine
 WORKDIR /usr/share/nginx/html/

 # copy the entire application
2 changes: 1 addition & 1 deletion DockerfileLocal
@@ -26,7 +26,7 @@ RUN mkdocs build
 # remove the build theme because it is not needed after site is build.
 RUN rm -rf dist

-FROM nginxinc/nginx-unprivileged:1.26-alpine as deploy
+FROM nginxinc/nginx-unprivileged:1.27-alpine as deploy
 COPY --from=builder $HOME/application/site/ /usr/share/nginx/html/
 COPY default.conf /etc/nginx/conf.d/
6 changes: 3 additions & 3 deletions content/changelog.md
@@ -34,7 +34,7 @@
 #### Enhanced

 - Updated Tenant CR to v1beta3, more details in [Tenant CRD](./crds-api-reference/tenant.md)
-- Added custom pricing support for Opencost, more details in [Opencost](./crds-api-reference/integration-config.md#Custom-Pricing-Model)
+- Added custom pricing support for Opencost, more details in [Opencost](./crds-api-reference/integration-config.md#custom-pricing-model)

 #### Fix

@@ -237,7 +237,7 @@

 ### v0.5.0

-- feat: Add support for tenant namespaces off-boarding. For more details check out [onDelete](./tutorials/tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted)
+- feat: Add support for tenant namespaces off-boarding.
 - feat: Add tenant webhook for spec validation

 - fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector

@@ -460,7 +460,7 @@
 ### v0.2.32

 - refactor: Restructure integration config spec, more details in [relevant docs][def]
-- feat: Allow users to input custom regex in certain fields inside of integration config, more details in [relevant docs](./crds-api-reference/integration-config.md#openshift)
+- feat: Allow users to input custom regex in certain fields inside of integration config, more details in [relevant docs](./crds-api-reference/integration-config.md)

 ### v0.2.31
42 changes: 42 additions & 0 deletions content/explanation/console.md
@@ -46,6 +46,48 @@ Here, admins have a bird's-eye view of all tenants, with the ability to delve in

 ![tenants](../images/tenants.png)

+### Tenants/Quota
+
+#### Viewing Quota in the Tenant Console
+
+In this view, users can access a dedicated tab to review the quota utilization for their Tenants. Within this tab, users have the option to toggle between two different views: **Aggregated Quota** and **Namespace Quota**.
+
+#### Aggregated Quota View
+
+![tenants](../images/tenantQuotaAggregatedView.png)
+This view provides users with an overview of the combined resource allocation and usage across all namespaces within their tenant. It offers a comprehensive look at the total limits and usage of resources such as CPU, memory, and other defined quotas. Users can easily monitor and manage resource distribution across their entire tenant environment from this aggregated perspective.
+
+#### Namespace Quota View
+
+![tenants](../images/tenantQuotaNamespaceView.png)
+Alternatively, users can opt to view quota settings on a per-namespace basis. This view allows users to focus specifically on the resource allocation and usage within individual namespaces. By selecting this option, users gain granular insights into the resource constraints and utilization for each namespace, facilitating more targeted management and optimization of resources at the namespace level.
+
+### Tenants/Utilization
+
+In the **Utilization** tab of the tenant console, users are presented with a detailed table listing all namespaces within their tenant. This table provides essential metrics for each namespace, including CPU and memory utilization. The metrics shown include:
+
+- **Cost:** The cost associated with CPU and memory utilization.
+- **Request Average:** The average amount of CPU and memory resources requested.
+- **Usage Average:** The average amount of CPU and memory resources used.
+- **Max:** The maximum value between CPU and memory requests and used resources, calculated every 30 seconds and averaged over the selected running minutes.
+
+Users can adjust the interval window using the provided selector to customize the time frame for the displayed data. This table allows users to quickly assess resource utilization across all namespaces, facilitating efficient resource management and cost tracking.
+
+![tenants](../images/tenantUtilizationNamespaces.png)
+
+Upon selecting a specific namespace from the utilization table, users are directed to a detailed view that includes CPU and memory utilization graphs along with a workload table. This detailed view provides:
+
+- **CPU and Memory Graphs:** Visual representations of the namespace's CPU and memory usage over time, enabling users to identify trends and potential issues at a glance.
+- **Workload Table:** A comprehensive list of all workloads within the selected namespace, including pods, deployments, and stateful-sets. The table displays key metrics for each workload, including:
+    - **Cost:** The cost associated with the workload's CPU and memory utilization.
+    - **Request Average:** The average amount of CPU and memory resources requested by the workload.
+    - **Usage Average:** The average amount of CPU and memory resources used by the workload.
+    - **Max:** The maximum value between CPU and memory requests and used resources, calculated every 30 seconds and averaged over the running minutes.
+
+This detailed view provides users with in-depth insights into resource utilization at the workload level, enabling precise monitoring and optimization of resource allocation within the selected namespace.
+
+![tenants](../images/tenantUtilizationNamespaceStats.png)
+
 ### Namespaces

 Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
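The "Max" column described in the diff above (per-sample maximum of request vs. usage, sampled every 30 seconds, then averaged over the selected window) can be sketched as follows. This is an illustrative reading of the description, not MTO's actual implementation:

```python
def max_metric(samples):
    """Illustrative sketch of the 'Max' metric described in the console docs.

    Each sample is a (requested, used) pair taken every 30 seconds.
    Keep the larger of the two per sample, then average those maxima
    over the selected window of running minutes.
    """
    maxima = [max(requested, used) for requested, used in samples]
    return sum(maxima) / len(maxima)

# Three hypothetical 30-second CPU samples of (request, usage):
print(max_metric([(1.0, 0.5), (1.0, 1.2), (1.0, 0.8)]))
```

How request and usage are combined into a single value is an assumption here; the console may weight or aggregate the samples differently.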
6 changes: 1 addition & 5 deletions content/explanation/multi-tenancy-vault.md
@@ -20,11 +20,7 @@ The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.

 This requires a running `RHSSO(RedHat Single Sign On)` instance integrated with Vault over [OIDC](https://developer.hashicorp.com/vault/docs/auth/jwt) login method.

-MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
-
-Once both integrations are set up with [IntegrationConfig CR](../crds-api-reference/integration-config.md#rhsso-red-hat-single-sign-on), MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
-
-After that, MTO creates specific policies in Vault for its tenant users.
+MTO creates specific policies in Vault for its tenant users.

 Mapping of tenant roles to Vault is shown below
@@ -29,7 +29,7 @@ resources:
 ```
 Once the template has been created, Bill has to edit the `Tenant` to add unique label to namespaces in which the secret has to be deployed.
-For this, he can use the support for [common](../tutorials/tenant/assigning-metadata.md#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) and [specific](../tutorials/tenant/assigning-metadata.md#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) labels across namespaces.
+For this, he can use the support for [common](../tutorials/tenant/assigning-metadata.md#distributing-common-labels-and-annotations) and [specific](../tutorials/tenant/assigning-metadata.md#distributing-specific-labels-and-annotations) labels across namespaces.

 Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
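The Tenant edit described in this hunk might look like the sketch below. This is hypothetical: the tenant name, the label key, and the field path under `spec` are assumptions for illustration, not taken from this commit:

```yaml
# Hypothetical sketch only: field paths below are assumed, not confirmed by this diff
apiVersion: tenantoperator.stakater.com/v1beta3  # CR version mentioned in the changelog above
kind: Tenant
metadata:
  name: bluesky  # hypothetical tenant
spec:
  namespaces:
    metadata:
      common:
        labels:
          distribute-secret: "true"  # unique label the Template deployment selects on
```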
6 changes: 4 additions & 2 deletions content/how-to-guides/offboarding/uninstalling.md
@@ -2,9 +2,11 @@

 You can uninstall MTO by following these steps:

-* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. For more details check out [onDelete](../../tutorials/tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted)
+* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not.
+For more details check out [onDeletePurgeNamespaces](../../tutorials/tenant/deleting-tenant.md#configuration-for-retaining-resources)
+[onDeletePurgeAppProject](../../crds-api-reference/extensions.md#configuring-argocd-integration)

-* In case you have enabled console, you will have to disable it first by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and set `spec.provision.console` and `spec.provision.showback` to `false`.
+* In case you have enabled console and showback, you will have to disable it first by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and set `spec.components.console` and `spec.components.showback` to `false`.

 * Remove IntegrationConfig CR from the cluster by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and select `Delete` from actions dropdown.
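The console/showback step in this hunk can be sketched as an IntegrationConfig fragment. Only the `spec.components.console` and `spec.components.showback` paths and the `tenant-operator-config` name come from the diff; the apiVersion shown is an assumption:

```yaml
apiVersion: tenantoperator.stakater.com/v1beta1  # assumed; check the installed CRD version
kind: IntegrationConfig
metadata:
  name: tenant-operator-config
spec:
  components:
    console: false   # disable the console before uninstalling
    showback: false  # disable showback before uninstalling
```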
Binary file modified content/images/dashboard.png
Binary file added content/images/eks-access-config.png
Binary file added content/images/eks-access-entry.png
Binary file added content/images/eks-denied-ns-access.png
Binary file added content/images/eks-nodegroup.png
Binary file modified content/images/namespaces.png
Binary file modified content/images/quotas.png
Binary file modified content/images/showback.png
Binary file modified content/images/templateGroupInstances.png
Binary file modified content/images/templates.png
Binary file added content/images/tenantQuotaAggregatedView.png
Binary file added content/images/tenantQuotaNamespaceView.png
Binary file added content/images/tenantUtilizationNamespaces.png
Binary file modified content/images/tenants.png
16 changes: 8 additions & 8 deletions content/index.md
@@ -9,20 +9,20 @@ head:

 [//]: # ( introduction.md, features.md)

-Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its "Secure by default" concepts but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
+Kubernetes is designed to support a single tenant platform; Managed Kubernetes Services (such as AKS, EKS, GKE and OpenShift) brings some improvements with their "Secure by default" concepts, but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single Kubernetes cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster's internal resources among different tenants. Kubernetes and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of the respective tool.

-This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy.
+This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around Kubernetes resources (depending on the version) to provide a higher level of abstraction to users. With MTO, admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy.
 MTO supports initializing new tenants using GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.

 The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.

-MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:
+MTO enables cluster admins to host multiple tenants in a single Kubernetes Cluster, i.e.:

-* Share an **OpenShift cluster** with multiple tenants
+* Share a **Kubernetes cluster** with multiple tenants
 * Share **managed applications** with multiple tenants
 * Configure and manage tenants and their sandboxes

-MTO is also [OpenShift certified](https://catalog.redhat.com/software/operators/detail/618fa05e3adfdfc43f73b126)
+MTO is also [RedHat certified](https://catalog.redhat.com/software/operators/detail/618fa05e3adfdfc43f73b126)

 ## Features

@@ -34,7 +34,7 @@ RBAC is one of the most complicated and error-prone parts of Kubernetes. With Mu

 Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

-Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
+Multi Tenant Operator is also able to leverage existing groups in Kubernetes and OpenShift, or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

 ## HashiCorp Vault Multitenancy

@@ -44,7 +44,7 @@ More details on [Vault Multitenancy](./how-to-guides/enabling-multi-tenancy-vaul

 ## ArgoCD Multitenancy

-Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
+Multi Tenant Operator is not only providing strong Multi Tenancy for the Kubernetes internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.

 More details on [ArgoCD Multitenancy](./how-to-guides/enabling-multi-tenancy-argocd.md)

@@ -114,7 +114,7 @@ Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can

 ## Everything as Code/GitOps Ready

-Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
+Multi Tenant Operator is designed and built to be 100% Kubernetes-native, and to be configured and managed the same familiar way as native Kubernetes resources so it's perfect for modern companies that are dedicated to GitOps as it is fully configurable using Custom Resources.

 ## Preventing Clusters Sprawl
29 changes: 0 additions & 29 deletions content/installation/basic-vs-enterprise-tier.md

This file was deleted.

