This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Documentation/binderhub #112

Merged: 25 commits merged on May 17, 2020
104 changes: 104 additions & 0 deletions doc/RANCHER_PROXMOX.md
@@ -0,0 +1,104 @@
Deploy FADI with Rancher and Proxmox
=============

* [1. Upload ISO on Proxmox Node](#1-upload-iso-on-proxmox-node)
* [2. Install Rancher](#2-install-rancher)
* [3. Add docker-machine driver](#3-add-docker-machine-driver)
* [4. Create Cluster With Rancher](#4-create-cluster-with-rancher)
    * [Create Node Template](#create-node-template)
    * [Create Cluster](#create-cluster)
    * [Create The Nodes](#create-the-nodes)
* [5. Manage the provisioning of the persistent volumes](#5-manage-the-provisioning-of-the-persistent-volumes)
* [6. Control Cluster from Local PC](#6-control-cluster-from-local-pc)

This page explains how to create a Kubernetes cluster and how to deploy FADI using [Rancher](https://rancher.com/) and [Proxmox](https://www.proxmox.com/en/).

## 1. Upload ISO on Proxmox Node

<a href="https://www.proxmox.com/" alt="ProxMox"> <img src="images/logos/Proxmox.png" width="150px" /></a>

> "Proxmox VE is a complete open-source platform for enterprise virtualization. With the built-in web interface you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools on a single solution."

First, download the `rancheros-proxmoxve-autoformat.iso` image from the [RancherOS releases page](https://github.com/rancher/os/releases/download/v1.5.5/rancheros-proxmoxve-autoformat.iso).

Once downloaded, upload this ISO to your Proxmox node.
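The upload can be done through the Proxmox web UI (`Storage` view) or from the command line. A minimal sketch, assuming SSH access to the node and the default Proxmox ISO storage path (`/var/lib/vz/template/iso`, which may differ on your setup; `root@proxmox-node` is a placeholder):

```shell
# Download the RancherOS ISO and copy it to the Proxmox node.
ISO_URL="https://github.com/rancher/os/releases/download/v1.5.5/rancheros-proxmoxve-autoformat.iso"
ISO_NAME="${ISO_URL##*/}"   # derive the file name from the URL

# Network steps are commented out so this sketch is side-effect free:
# wget -O "$ISO_NAME" "$ISO_URL"
# scp "$ISO_NAME" root@proxmox-node:/var/lib/vz/template/iso/
echo "$ISO_NAME"
```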

## 2. Install Rancher

This documentation assumes that you have already deployed Rancher. If not, here are the instructions we followed to deploy our Rancher server: [https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/).
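For reference, the single-node install linked above boils down to one `docker run` command. A sketch, assuming Docker is installed on the host; the version tag is an example, not a requirement of this guide:

```shell
# Single-node Rancher server. The actual run is commented out so this
# sketch is side-effect free; pin a version tag that suits your needs.
RANCHER_IMAGE="rancher/rancher:v2.4.5"
# docker run -d --restart=unless-stopped \
#   -p 80:80 -p 443:443 \
#   "$RANCHER_IMAGE"
echo "would run: docker run -d --restart=unless-stopped -p 80:80 -p 443:443 $RANCHER_IMAGE"
```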

## 3. Add docker-machine driver

Then, you need to allow Rancher to access Proxmox. We have contributed to upgrading an existing [docker-machine driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to make it compatible with Rancher.

To add this driver to your Rancher instance, follow these steps:

![Proxmox driver](images/installation/proxmoxdriver.gif)

Driver URL:
```
https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64
```

## 4. Create Cluster With Rancher

<a href="https://rancher.com/" alt="Rancher"> <img src="images/logos/rancher.png" width="150px" /></a>

> "Rancher is open source software that combines everything an organization needs to adopt and run containers in production. Built on Kubernetes, Rancher makes it easy for DevOps teams to test, deploy and manage their applications."

After connecting to Rancher, you can follow these steps:

### Create Node Template

This is where you define the templates to use for your nodes (both master and worker nodes). To do so, go to: `profile (top right corner)` > `Node templates` > `Add Template`:

Choose `Proxmoxve`:
![Proxmoxve](images/installation/Proxmoxve.png)
Then fill in the remaining fields: the IP address of the Proxmox server (`proxmoxHost`), the username/password (`proxmoxUserName`, `proxmoxUserPassword`), the storage location of the image file (`vmImageFile`), which in our case is `local:iso/rancheros-proxmoxve-autoformat.iso`, and finally the resources you want to allocate to your node (`nodevmCpuCores`, `vmMemory`, `vmStorageSize`).
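As a recap, the template fields look like the following (the values are hypothetical examples; adjust them to your own Proxmox setup and check the units the driver expects):

```
proxmoxHost:          192.168.1.50          # IP of the Proxmox server (example)
proxmoxUserName:      root@pam              # Proxmox user (example realm)
proxmoxUserPassword:  <your password>
vmImageFile:          local:iso/rancheros-proxmoxve-autoformat.iso
nodevmCpuCores:       4                     # CPU cores for the node (example)
vmMemory:             8                     # memory (example value)
vmStorageSize:        40                    # disk size (example value)
```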

### Create Cluster

To create your cluster:

`Cluster` > `Add Cluster` > `Proxmoxve`

You will need to give a name to your cluster, then define the nodes in the cluster. At first, start with **one master node**: give it a name, choose one of the templates created earlier for that node, tick all 3 boxes for `etcd`, `Control Plane` and `Worker`, then choose the Kubernetes version and click `create`.

> You will have to wait for the VM creation, the RancherOS installation and the IP address retrieval, which might take a while.

Once the master node gets its IP address, go to `Cluster` > `Edit Cluster` and add another worker node: untick the `Worker` box for the master node and tick it for the new worker node. It should look something like this:
![Proxmoxve](images/installation/workernode.png)

If one or more additional nodes (master or worker) are needed, you can either add another one with a different template by following the same procedure, or add as many nodes as you want using the same template by simply going to `YourCluster (not global)` > `Nodes` > `+`, which adds another node of the same kind:

![Proxmoxve](images/installation/addnode.png)

## 5. Manage the provisioning of the persistent volumes

#### StorageOS

Once all your nodes are up and running, it is time to deploy your services. But first, you need to set the default storage class for the persistent volumes. To do so, deploy the `StorageOS` volume plugin: go to `YourCluster (not global)` > `System` > `Apps` > `Launch` and search for `StorageOS`. Make sure all the fields are filled in correctly, as in the following screenshot:

![Proxmoxve](images/installation/StorageOS.png)

and now, launch it 🚀.

> Launching apps usually takes several minutes, so you will need to wait a while before the app becomes available.

Be careful: with the basic license, this service allows you to allocate a maximum of 50Gi.

![Proxmoxve](images/installation/StorageOS_limits.png)
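Once StorageOS is deployed, its StorageClass can be marked as the cluster default so that PVCs bind to it automatically. A sketch, assuming the class created by the app is named `fast` (check the actual name and provisioner with `kubectl get storageclass`); the claim below stays within the 50Gi basic-license limit:

```
# Mark the StorageOS class as default (the name "fast" is an assumption):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/storageos
---
# A claim using the default class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```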
#### Manually

TBD

### Deploy FADI

In the PVC selector, set the StorageClass as the default:

![defaultpvc](images/installation/defaultpvc.png)
#### Longhorn

1. Edit the FADI `values.yaml`
2. Run FADI

## 6. Control Cluster from Local PC
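One common way to do this is to download the cluster's kubeconfig file from the Rancher UI (`Cluster` > `Kubeconfig File`) and point `kubectl` at it. A sketch, with a hypothetical file path:

```shell
# The path is a placeholder: use wherever you saved the kubeconfig
# file downloaded from the Rancher UI.
export KUBECONFIG="$HOME/.kube/fadi-cluster.yaml"
# Regular kubectl commands then target the Rancher-managed cluster:
# kubectl get nodes
echo "$KUBECONFIG"
```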
7 changes: 5 additions & 2 deletions doc/README.md
@@ -6,5 +6,8 @@ FADI Documentation
* [Reverse proxy](REVERSEPROXY.md) - Traefik reverse proxy configuration
* [Security](SECURITY.md) - SSL setup
* [TSimulus](TSIMULUS.md) - how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus)

* [Sample self-hosted infrastructure](RANCHER_PROXMOX.md) - How to install FADI on a self-hosted infrastructure using:
* [Proxmox](https://www.proxmox.com/en/) as a self-hosted private cloud (IaaS) provider. It provides virtual machines for the various Kubernetes nodes.
* [Rancher](https://rancher.com/what-is-rancher/what-rancher-adds-to-kubernetes/) to manage (install, provision, maintain, upgrade, ...) several Kubernetes clusters, e.g. when needing several environments or several well separated tenant installations.

For tutorials and examples, see the [examples section](../examples/README.md)
Binary file added doc/images/installation/Proxmoxve.png
Binary file added doc/images/installation/StorageOS.png
Binary file added doc/images/installation/StorageOS_limits.png
Binary file added doc/images/installation/addnode.png
Binary file added doc/images/installation/defaultpvc.png
Binary file added doc/images/installation/proxmoxdriver.gif
Binary file added doc/images/installation/workernode.png
Binary file added doc/images/logos/Proxmox.png
Binary file added doc/images/logos/binderhub.png
Binary file added doc/images/logos/rancher.png
81 changes: 81 additions & 0 deletions examples/binderhub/README.md
@@ -0,0 +1,81 @@
<a href="https://binderhub.readthedocs.io/en/latest/" alt="BinderHub"><img src="images/binderhub.png" width="200px"/></a>
# What is BinderHub?

> *The primary goal of BinderHub is creating custom computing environments that can be used by many remote users. BinderHub enables an end user to easily specify a desired computing environment from a Git repo. BinderHub then serves the custom computing environment at a URL which users can access remotely.*

> *BinderHub will build Docker images out of Git repositories, and then push them to a Docker registry so that JupyterHub can launch user servers based on these images.*

# Add BinderHub around FADI

This documentation assumes that you already have a Kubernetes cluster deployed. If not, you can follow our [Installation guide]() up to point 1.2.2.

You will also need to have a valid Docker account.

Follow these steps to install FADI with BinderHub on your cluster:

1. Clone this repository and go to the binderhub example folder:
```bash
git clone https://github.com/cetic/fadi.git fadi
cd fadi/examples/binderhub
```

2. Edit the **config.yaml** file to set your Docker credentials and the name of your project for the following inputs. `<DOCKER_ID>` is your Docker Hub username and `<PROJECT_NAME>` is a name of your choice: together they form the prefix of the images that BinderHub builds and pushes.
```yaml
config:
  BinderHub:
    use_registry: true
    image_prefix: <DOCKER_ID>/<PROJECT_NAME>-
registry:
  username: <DOCKER_ID>
  password: <DOCKER_PASSWORD>
```

3. Launch the Helm script; this will deploy all the FADI services and BinderHub on the cluster (and may take some time).
```bash
./deploy.sh
# see deploy.log for connection information to the various services
```

# Basic example of a BinderHub workflow
This example tests the deployment of BinderHub with a project using a `requirements.txt` file.

## Input
The first step is to access the BinderHub page.

If your Kubernetes cluster is deployed with **minikube**, the command `minikube service list` will give you the address to copy/paste into your browser. In the case of a bare-metal cluster, the command `kubectl get svc -n binderhub` will give you the service port.
As this service is of type NodePort, you can use the IP address of any node to reach the BinderHub home page.
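The resulting URL can be sketched as follows (the node IP below is a placeholder; the port matches the `nodePorts.http` value set in `config.yaml`):

```shell
# Hypothetical values: substitute the real ones from your cluster.
NODE_IP="192.168.1.10"   # IP of any cluster node (or `minikube ip`)
NODE_PORT="30902"        # NodePort set in config.yaml
echo "BinderHub is reachable at http://${NODE_IP}:${NODE_PORT}"
```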

Once the BinderHub page is opened, simply fill in the fields with the following inputs (as in the figure below):
- **github repository name or url:** `https://github.com/binder-examples/requirements`
- **branch:** `master`

Finally, click on the `launch` button:

![images/1_input.png](images/1_input.png)

## Building

From now on, everything is automated: BinderHub creates a container image from the contents of the Git repository.

![images/2_building.png](images/2_building.png)

## Pushing

Now that the image is built, it is saved in your Docker registry, e.g. [hub.docker.com](https://hub.docker.com). It will no longer be necessary to go through the build stages again to access this project. The duration of this step depends on your internet connection: the image size is almost 600 MB.

![images/3_pushing.png](images/3_pushing.png)

## Server

The project will automatically be launched in JupyterHub once all these steps are completed.

![images/4_jupyter.png](images/4_jupyter.png)

## Launch

You can now enjoy your work environment!

![images/5_notebook.png](images/5_notebook.png)

# References
- [https://binderhub.readthedocs.io/en/latest/](https://binderhub.readthedocs.io/en/latest/)
29 changes: 29 additions & 0 deletions examples/binderhub/config.yaml
@@ -0,0 +1,29 @@
config:
  BinderHub:
    use_registry: true
    image_prefix: <DOCKER_ID>/<PROJECT_NAME>-
registry:
  username: <DOCKER_ID>
  password: <DOCKER_PASSWORD>

dind:
  enabled: true
  daemonset:
    image:
      name: docker
      tag: 18.09.2-dind

jupyterhub:
  hub:
    services:
      binder:
        apiToken: 8675d9b1ff09ff2502886dfd4f0f36fd45c916372536aa09613cc9c5563d8d1d
    db:
      type: sqlite-memory
  proxy:
    secretToken: 613e0ace7628f92bab45478873451f00e65977ca6a61d2f9255667b7bbd71d0e
    service:
      type: NodePort
      nodePorts:
        http: 30902
53 changes: 53 additions & 0 deletions examples/binderhub/deploy.sh
@@ -0,0 +1,53 @@
#!/usr/bin/env bash

# this script will deploy the various FADI services on a Kubernetes cluster using Helm and kubectl
# usage: ./deploy.sh [namespace]
set -o errexit

LOG_FILE="deploy.log"
[ -e ${LOG_FILE} ] && rm ${LOG_FILE}
exec > >(tee -a ${LOG_FILE} )
exec 2> >(tee -a ${LOG_FILE} >&2)

# default namespace is fadi
NAMESPACE=${1:-fadi}

printf "\n\nCreating namespaces...\n"

kubectl get namespace ${NAMESPACE} 2> /dev/null || kubectl create namespace ${NAMESPACE}


printf "\n\nHelm all the things!...\n"
# add stable repo
helm repo add stable https://kubernetes-charts.storage.googleapis.com


# add cetic helm repo
helm repo add cetic https://cetic.github.io/helm-charts/
helm repo update

# create clusterrole for traefik
kubectl get clusterrole traefik-ingress-controller 2> /dev/null || kubectl create -f ./traefik/rbac-config.yaml

# install/upgrade traefik
helm upgrade --install traefik stable/traefik -f ./traefik/values.yaml --namespace kube-system
# install/upgrade FADI
helm upgrade --install ${NAMESPACE} cetic/fadi -f ./values.yaml --namespace ${NAMESPACE}

printf "\n\nFADI is deployed. Now Helm will install BinderHub around FADI...\n"
# install/binderhub around FADI
kubectl get namespace binderhub 2> /dev/null || kubectl create namespace binderhub
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --install binderhub jupyterhub/binderhub --version=0.2.0-n132.h1a8ce62 -f ./config.yaml --namespace binderhub

sleep 5s
# Get the node IP where jupyterhub is deployed
nodeIP=$(kubectl get po -n binderhub -o wide | sed -n '/proxy/p' | awk '{ print $7 }')
if [ "$nodeIP" = "minikube" ]; then
    nodeIP=$(minikube ip)
fi

# Upgrade the binderhub release with hub_url
printf "\n\nFound jupyterhub deployed at $nodeIP\n"
helm upgrade binderhub jupyterhub/binderhub --version=0.2.0-n132.h1a8ce62 -f ./config.yaml --set config.BinderHub.hub_url=http://$nodeIP:30902 --namespace binderhub
printf "\n\nInstallation successful!\n"
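The node IP detection in `deploy.sh` works by filtering the wide pod listing for the proxy pod and taking the 7th column (the `NODE` column). The same extraction can be sketched on canned output (the pod listing below is made up for illustration):

```shell
# Simulated `kubectl get po -n binderhub -o wide` output: header + one pod.
pods="NAME                 READY  STATUS   RESTARTS  AGE  IP          NODE      NOMINATED NODE
proxy-7d4f9c-abcde   1/1    Running  0         5m   10.42.0.12  worker-1  <none>"

# Same pipeline as in deploy.sh: keep the proxy line, print column 7.
nodeIP=$(printf '%s\n' "$pods" | sed -n '/proxy/p' | awk '{ print $7 }')
echo "$nodeIP"   # -> worker-1
```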
Binary file added examples/binderhub/images/1_input.png
Binary file added examples/binderhub/images/2_building.png
Binary file added examples/binderhub/images/3_pushing.png
Binary file added examples/binderhub/images/4_jupyter.png
Binary file added examples/binderhub/images/5_notebook.png
Binary file added examples/binderhub/images/binderhub.png
38 changes: 38 additions & 0 deletions examples/binderhub/traefik/rbac-config.yaml
@@ -0,0 +1,38 @@
---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system

7 changes: 7 additions & 0 deletions examples/binderhub/traefik/values.yaml
@@ -0,0 +1,7 @@
#loadBalancerIP: "your CloudProvider LoadBalancer IP"
dashboard:
  enabled: true
  serviceType: NodePort
  ingress:
    annotations: {}
  domain: fadi.minikube
8 changes: 8 additions & 0 deletions examples/binderhub/values.yaml
@@ -0,0 +1,8 @@
---
# Default values for FADI are defined here: https://github.com/cetic/helm-fadi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Values you define here will overwrite default values from helm-fadi.

jupyterhub:
  enabled: false