diff --git a/.all-contributorsrc b/.all-contributorsrc
index d92496d..83de567 100644
--- a/.all-contributorsrc
+++ b/.all-contributorsrc
@@ -118,7 +118,18 @@
"contributions": [
"review"
]
- }
+ },
+ {
+ "login": "zakaria2905",
+ "name": "zakaria.hajja",
+ "avatar_url": "https://avatars.githubusercontent.com/u/48456087?v=4",
+ "profile": "https://github.com/zakaria2905",
+ "contributions": [
+ "code",
+ "doc"
+ ]
+    }
],
"contributorsPerLine": 6,
"projectName": "fadi",
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index e085888..90e27e4 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -21,7 +21,7 @@ A clear and concise description of what the bug is.
Provide the environment in which the bug has happened (minikube on a workstation, full-fledged Kubernetes cluster, ...)
* **OS** (e.g. from `/etc/os-release`)
-* **VM driver** (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName)
+* **VM driver** (e.g. `cat ~/.minikube/machines/minikube/config.json | grep DriverName`)
* **Minikube version** (e.g. `minikube version`)
**What happened**:
@@ -34,6 +34,14 @@ Provide the environment in which the bug has happened (minikube on a workstation
**Output of `minikube logs` (if applicable)**:
+**Output of `kubectl` for pods and events**
+
+```bash
+kubectl get events --all-namespaces
+kubectl get events -n fadi
+kubectl get pods -n fadi
+kubectl logs fadi-nifi-xxxxx -n fadi
+```
**Anything else we need to know**:
diff --git a/.gitignore b/.gitignore
index 61b929a..919205a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -48,4 +48,6 @@ teardown.log
*.tgz
# https://github.com/ekalinin/github-markdown-toc
-gh-md-toc
\ No newline at end of file
+gh-md-toc
+
+.vscode
\ No newline at end of file
diff --git a/.gitlab-ci.sample.yml b/.gitlab-ci.sample.yml
index bdef78d..f6f5783 100644
--- a/.gitlab-ci.sample.yml
+++ b/.gitlab-ci.sample.yml
@@ -9,6 +9,7 @@ stages:
- tf_plan
- tf_apply
- deployWithHelm
+- test
variables:
KUBECONFIG: /etc/deploy/config
@@ -132,3 +133,14 @@ deployWithHelm:
url: http://$PROJECT
only:
- master
+
+test:
+ stage: test
+ image: ceticasbl/puppeteer-jest
+ script:
+ - cd tests/
+ - npm run test
+ tags:
+ - docker
+ only:
+ - develop
diff --git a/FAQ.md b/FAQ.md
index 70a881a..e898906 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -10,9 +10,13 @@ FAQ - Frequently asked questions
In case you encounter an issue with FADI, have a feature request or any other question, feel free to [open an issue](https://github.com/cetic/fadi/issues/new/choose).
+## How can I extend FADI?
+
+FADI relies on Helm to integrate the various services. To add another service to the stack, you can package it inside a [Helm chart](https://helm.sh/docs/howto/) and [add it to your own FADI chart](helm/README.md), as sketched below.
+
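+For example, a minimal sketch assuming the new service's chart is published in a public Helm repository (the repository name and URL below are illustrative):
+
+```bash
+# Add the repository hosting the new service's chart
+helm repo add myrepo https://charts.example.org
+# Declare the chart as a dependency in your FADI Chart.yaml, then refresh:
+helm dependency update helm/
+# Redeploy FADI with the updated chart
+helm upgrade --install fadi helm/ -n fadi -f values.yaml
+```
+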
## Why "FADI"?
-FADI is the acronym for "Framework d'Analyse de Données Industrielles" ("A Framework for Industrial Data Analysis")
+FADI is the acronym for "Framework for Automating the Deployment and orchestration of container-based Infrastructures"
## FADI is not working
@@ -22,7 +26,15 @@ Please make sure the following steps have been taken beforehand:
* update Minikube to the latest version
* update Helm to the latest version
-* check the logs (`minikube logs`) for any suspicious error message
+* check the logs for any suspicious error message:
+
+```bash
+minikube logs
+kubectl get events --all-namespaces
+kubectl get events -n fadi
+kubectl get pods -n fadi
+kubectl logs fadi-nifi-xxxxx -n fadi
+```
## OSx - slow installation
@@ -30,7 +42,7 @@ Please make sure the following steps have been taken beforehand:
## Windows Installation
-This is still not totally supported, some guidelines here #55
+Windows support for the Minikube installation should work but is not tested frequently.
## How to configure external access to the deployed services?
@@ -38,7 +50,7 @@ When deploying on a generic Kubernetes cluster, you will want to make the servic
See
-* https://github.com/cetic/fadi/blob/feature/documentation/doc/REVERSEPROXY.md for the reverse proxy configuration guide
+* [doc/REVERSEPROXY.md](doc/REVERSEPROXY.md) for the reverse proxy configuration guide
* https://github.com/cetic/fadi/issues/81 for port forwarding instructions
In a Minikube setting, make sure the ingress plugin is enabled (`minikube addons enable ingress`), and populate your `/etc/hosts` file accordingly.
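+
+For example, a minimal sketch (the hostnames are illustrative and must match your ingress rules):
+
+```bash
+minikube addons enable ingress
+# Map the Minikube IP to the hostnames declared in the ingress rules
+echo "$(minikube ip) nifi.fadi.local grafana.fadi.local" | sudo tee -a /etc/hosts
+```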
diff --git a/INSTALL.md b/INSTALL.md
index 593edf9..f0e9868 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -26,8 +26,6 @@ The deployment of the FADI stack is achieved with:
* [Helm v3](https://helm.sh/).
* [Kubernetes](https://kubernetes.io/).
-![](doc/images/architecture/helm-architecture.png)
-
## 1. Local installation
This type of installation provides a quick way to test the platform, and also to adapt it to your needs.
@@ -70,7 +68,7 @@ To get the Kubernetes dashboard, type:
minikube dashboard
```
-This will open a browser window with the [Kubernetes Dashboard](http://127.0.0.1:40053/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/), it should look like this:
+This will open a browser window with the Kubernetes Dashboard:
![Minikube initial dashboard](doc/images/installation/minikube_dashboard.png)
@@ -195,7 +193,16 @@ It is also possible to create the Kubernetes cluster in command line, see: https
## 4. Troubleshooting
* Installation logs are located in the `helm/deploy.log` file.
-* Enable local monitoring in minikube: `minikube addons enable metrics-server`
+* Check the Minikube and Kubernetes logs:
+```bash
+minikube logs
+kubectl get events --all-namespaces
+kubectl get events -n fadi
+kubectl get pods -n fadi
+kubectl logs fadi-nifi-xxxxx -n fadi
+```
+* Enable [metrics server](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server) in minikube: `minikube addons enable metrics-server`
+* The [FAQ](FAQ.md) provides some guidance on common issues
* For Windows users, please refer to the following [issue](https://github.com/cetic/fadi/issues/55).
## 5. Continuous integration (CI) and deployment (CD)
diff --git a/USERGUIDE.md b/USERGUIDE.md
index 02887d5..3dfb576 100644
--- a/USERGUIDE.md
+++ b/USERGUIDE.md
@@ -348,7 +348,9 @@ Choose `Minimal environment` and click on `Spawn`.
![Jupyter processing](examples/basic/images/spark_results.png)
-For more information on how to use Superset, see the [official Jupyter documentation](https://jupyter.readthedocs.io/en/latest/)
+
+For more information on how to use Jupyter, see the [official Jupyter documentation](https://jupyter.readthedocs.io/en/latest/)
+
## 7. Summary
diff --git a/doc/MONITORING.md b/doc/MONITORING.md
index 30c5a07..758f000 100644
--- a/doc/MONITORING.md
+++ b/doc/MONITORING.md
@@ -1,9 +1,9 @@
-Montoring
-==========
+Monitoring
+=======
diff --git a/doc/RANCHER_PROXMOX.md b/doc/RANCHER_PROXMOX.md
new file mode 100644
index 0000000..2e3df0c
--- /dev/null
+++ b/doc/RANCHER_PROXMOX.md
@@ -0,0 +1,174 @@
+Deploy FADI with Rancher and Proxmox
+=============
+
+* [0. Prerequisites](#0-prerequisites)
+* [1. Upload RancherOS ISO on Proxmox](#1-upload-rancheros-iso-on-proxmox)
+* [2. Add Proxmox docker-machine driver to Rancher](#2-add-proxmox-docker-machine-driver-to-rancher)
+* [3. Create the Kubernetes cluster with Rancher](#3-create-the-kubernetes-cluster-with-rancher)
+  * [Create Node Template](#create-node-template)
+  * [Create the Kubernetes Cluster](#create-the-kubernetes-cluster)
+  * [Adding Nodes to the Cluster](#adding-nodes-to-the-cluster)
+* [4. Persistent storage configuration](#4-persistent-storage-configuration)
+  * [StorageOS](#storageos)
+  * [Longhorn](#longhorn)
+  * [NFS Server Provisioner](#nfs-server-provisioner)
+  * [Manually](#manually)
+* [5. Control Cluster from local workstation](#5-control-cluster-from-local-workstation)
+
+
+This page provides information on how to create a Kubernetes cluster on the [Proxmox](https://www.proxmox.com/en/) IaaS provider using [Rancher](https://rancher.com/).
+
+![Proxmox](images/logos/Proxmox.png)
+
+> "Proxmox VE is a complete open-source platform for enterprise virtualization. With the built-in web interface you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools on a single solution."
+
+![Rancher](images/logos/rancher.png)
+
+> "Rancher is open source software that combines everything an organization needs to adopt and run containers in production. Built on Kubernetes, Rancher makes it easy for DevOps teams to test, deploy and manage their applications."
+
+
+## 0. Prerequisites
+
+This documentation assumes the following prerequisites are met:
+
+* a [Proxmox installation](https://pve.proxmox.com/wiki/Installation) on your self hosted infrastructure
+* a [Rancher installation](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/) that can access the Proxmox cluster, this can be done in another Kubernetes cluster to provide high availability, or simply in a virtual machine somewhere on your infrastructure.
+
+## 1. Upload RancherOS ISO on Proxmox
+
+First, download the [rancheros-proxmoxve-autoformat.iso](https://github.com/rancher/os/releases/latest) image and upload it to one of your Proxmox nodes.
+
+## 2. Add Proxmox docker-machine driver to Rancher
+
+Then, you need to allow Rancher to access Proxmox. We contributed an upgrade to an existing [docker-machine driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to make it compatible with Rancher.
+
+To add [this driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to your Rancher installation, follow these steps:
+
+![Proxmox driver](images/installation/proxmoxdriver.gif)
+
+## 3. Create the Kubernetes cluster with Rancher
+
+After connecting to Rancher, follow these steps:
+
+### Create Node Template
+
+This is where you define the templates to use for the nodes (both master and worker nodes). To do so, go to `profile (top right corner)` > `Node templates` > `Add Template`:
+
+Choose `Proxmoxve`
+![Proxmoxve](images/installation/Proxmoxve.png)
+
+and then fill in the remaining fields:
+
+* IP of the Proxmox host: `proxmoxHost`,
+* username/password: `proxmoxUserName`, `proxmoxUserPassword`,
+* storage of the image file: `vmImageFile`, which is in our case `local:iso/rancheros-proxmoxve-autoformat.iso`,
+* resources you want to allocate to your node: `vmCpuCores`, `vmMemory`, `vmStorageSize`.
+
+### Create the Kubernetes Cluster
+
+To create your cluster of virtual machines on Proxmox, go to:
+
+ `Cluster` > `Add Cluster` > `Proxmoxve`
+
+You will need to give a name to your cluster, then specify the nodes in the cluster:
+
+* at first, you may want to start with **one master node**,
+* give it a name,
+* choose the template created earlier for that node,
+* tick all 3 boxes for `etcd`, `Control Plane` and `Worker`,
+* choose the Kubernetes version,
+* and finally click `create`.
+
+> "you will have to wait for the `VM creation`, the `RancherOS install` and the `IP address retrieving` steps, that might take a while."
+
+### Adding Nodes to the Cluster
+
+Once the master node gets its IP address, go to `Cluster` > `Edit Cluster` and add another worker node, untick the worker box on the master node and tick it on the new worker node. It should look something like this:
+ ![Proxmoxve](images/installation/workernode.png)
+
+If one or more additional nodes (master or worker) are needed, you can add them with a different template by following the same steps. You can also add as many nodes as you want using the same template by simply going to `YourCluster (not global)` > `nodes` > `+`, which will add another node of the same kind:
+
+ ![Proxmoxve](images/installation/addnode.png)
+
+## 4. Persistent storage configuration
+
+Once all your nodes are up and running it is time to deploy your services, but before you do, you need to configure a default StorageClass for provisioning the persistent volumes.
+
+Several approaches are possible to manage persistent storage. We will describe three of them, and leave it to you to choose the method that best meets your requirements.
+
+### StorageOS
+
+![StorageOS](images/logos/storageos.svg)
+
+> *StorageOS is a cloud native storage solution that delivers persistent container storage for your stateful applications in production.
+Dynamically provision highly available persistent volumes by simply deploying StorageOS anywhere with a single container.*
+
+To deploy the `StorageOS` volume plugin, go to `YourCluster (not global)` > `system` > `apps` > `launch` and search for `StorageOS`. Make sure all the fields are filled in correctly, as in the following screenshot:
+
+![StorageOSConfig](images/installation/StorageOS.png)
+
+Now, launch it 🚀.
+
+The following animation recaps all these steps:
+
+![StorageOSGuide](images/installation/StorageOSGuide.gif)
+
+Launching apps usually takes several minutes, so allow some time for the deployment to complete.
+
+StorageOS is a very good turnkey solution. However, the basic license only allows allocating a maximum of 50Gi.
+
+![StorageOS limits](images/installation/StorageOS_limits.png)
+
+Finally, all that remains is to set the **StorageOS** StorageClass as the default one. To do this, go to `Storage` > `StorageClass`, click on the menu (the three dots on the right side), then click on `Set as Default`.
+
+This procedure is shown in the animation below:
+
+![StorageClass](images/installation/StorageClassDefault.gif)
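+
+Alternatively, the default StorageClass can be set with `kubectl` (assuming the StorageOS class is named `fast`; check the actual name under `Storage` > `StorageClass`):
+
+```bash
+# Mark the StorageOS StorageClass as the cluster default
+kubectl patch storageclass fast \
+  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
+```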
+
+### Longhorn
+
+![Longhorn](images/logos/longhorn.png)
+
+> *Longhorn is a distributed block storage system for Kubernetes. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.*
+
+This tool is very powerful and based on iSCSI technology. Unfortunately, it is not yet supported by RancherOS (the operating system used in this example).
+
+We reported the bugs and problems encountered in two open GitHub issues:
+
+[https://github.com/rancher/os/issues/2937](https://github.com/rancher/os/issues/2937)
+[https://github.com/longhorn/longhorn/issues/828](https://github.com/longhorn/longhorn/issues/828)
+
+### NFS Server Provisioner
+
+![NFS](images/logos/nfs.jpg)
+
+>*The Network File System (NFS) is a client/server application that lets a computer user view and optionally store and update files on a remote computer as though they were on the user's own computer.
+NFS Server Provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.*
+
+This solution is very easy to deploy and set up; a basic installation does not require any particular configuration. This plugin supports both the deployment of the NFS server and the management of persistent volumes.
+
+One caveat is that the NFS server is attached to a single node: if that node crashes, data may be lost.
+
+To add this plugin to your cluster go to `Apps` and click on `Launch`. On the `Search bar`, put `nfs-provisioner`.
+
+![images/installation/nfsapp.png](images/installation/nfsapp.png)
+
+Select the plugin and click the `Launch` button 🚀.
+
+### Manually
+
+It is also possible to create the persistent volumes manually. This approach offers complete control over the volumes but is less flexible. If you choose this option, please refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
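+
+As an illustration, a minimal static volume could be created as follows (all names, sizes and paths are placeholders; see the Kubernetes documentation for production-grade volume types):
+
+```bash
+# Create a minimal hostPath PersistentVolume (illustrative values only)
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-pv
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:
+    path: /data/example-pv
+EOF
+```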
+
+## 5. Control Cluster from local workstation
+
+There are two ways to interact with your cluster using the `kubectl` command line tool.
+
+First, **Rancher** offers a restricted terminal where only this tool is available. To access it, go to the monitoring page of your cluster and click on the `Launch kubectl` button.
+
+![images/installation/ranchermonitoring.png](images/installation/ranchermonitoring.png)
+
+![images/installation/rancherkubectl.png](images/installation/rancherkubectl.png)
+
+The second approach is to use the `kubectl` tool on your own machine. To do so, go to the monitoring page of your cluster again and click on `Kubeconfig File`. Copy and paste all of the information into the `~/.kube/config` file on your machine.
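+
+For example (the local file name is illustrative):
+
+```bash
+# Paste the Kubeconfig content from the Rancher UI into ~/.kube/config,
+# then check that the cluster is reachable
+mkdir -p ~/.kube
+cp rancher-cluster.yaml ~/.kube/config
+kubectl get nodes
+```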
+
+> **You can now use your cluster created with Rancher and deployed on Proxmox, enjoy!**
diff --git a/doc/README.md b/doc/README.md
index c5279c3..e7230bb 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -5,6 +5,11 @@ FADI Documentation
* [Users management](USERMANAGEMENT.md) - user identification and authorization (LDAP, RBAC, ...)
* [Reverse proxy](REVERSEPROXY.md) - Traefik reverse proxy configuration
* [Security](SECURITY.md) - SSL setup
+* [Testing](/tests/README.md) - tests for the FADI framework
* [TSimulus](TSIMULUS.md) - how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus)
-
+* [Machine learning models management](SELDON.md) - how to package and score machine learning models using [Seldon Core](https://www.seldon.io/tech/products/core/)
+* [Sample self-hosted infrastructure](RANCHER_PROXMOX.md) - How to install FADI on a self-hosted infrastructure using
+ * [Proxmox](https://www.proxmox.com/en/) as a self-hosted private cloud (IaaS) provider. It provides virtual machines for the various Kubernetes nodes.
+ * [Rancher](https://rancher.com/what-is-rancher/what-rancher-adds-to-kubernetes/) to manage (install, provision, maintain, upgrade, ...) several Kubernetes clusters, e.g. when needing several environments on various IaaS providers or several well separated tenant installations, or doing airgapped installations on premises.
+
For tutorials and examples, see the [examples section](../examples/README.md)
\ No newline at end of file
diff --git a/doc/SELDON.md b/doc/SELDON.md
new file mode 100644
index 0000000..ac9d32d
--- /dev/null
+++ b/doc/SELDON.md
@@ -0,0 +1,98 @@
+Manage machine learning models with Seldon Core
+==========
+
+* [Install Seldon Core service](#install-seldon-core-service)
+* [Deploy your model](#deploy-your-model)
+ * [1. Package your model](#1-package-your-model)
+ * [2. Create your inference graph](#2-create-your-inference-graph)
+ * [3. Deploy the model to the Kubernetes cluster](#3-deploy-the-model-to-the-kubernetes-cluster)
+
+![Seldon](images/logos/seldon_logo.jpg)
+
+[Seldon Core](https://www.seldon.io/tech/products/core/) is an open source platform for deploying machine learning models on a Kubernetes cluster. It extends Kubernetes with **its own custom resource `SeldonDeployment`** where you can define your runtime inference graph made up of models and other components that Seldon will manage.
+
+## Install Seldon Core service
+
+To deploy the Seldon Core service inside your FADI installation, set `seldon-core-operator.enabled` option to `true` in your FADI `values.yaml` configuration file and reapply the chart:
+
+```yaml
+seldon-core-operator:
+ enabled: true
+ usageMetrics:
+ enabled: false
+```
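+
+Assuming FADI was installed with Helm as a release named `fadi` in the `fadi` namespace (release, chart and namespace names are illustrative), reapplying the chart could look like this:
+
+```bash
+helm upgrade fadi cetic/fadi -n fadi -f values.yaml
+```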
+
+## Deploy your model
+
+### 1. Package your model
+
+To allow your component (model, router, etc.) to be managed by Seldon Core, it needs to be built into a **Docker container** and to expose the appropriate [microservice APIs over REST or gRPC](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/internal-api.html).
+
+To wrap your model follow the [official Seldon instructions](https://docs.seldon.io/projects/seldon-core/en/v1.1.0/python/index.html).
+
+NB: currently only Python is ready for production use, but other languages ([Java, R, Go, ...](https://docs.seldon.io/projects/seldon-core/en/latest/wrappers/language_wrappers.html)) are compatible.
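+
+As an illustration, wrapping a Python model with [s2i](https://github.com/openshift/source-to-image) could look like this (the builder image tag and registry name are assumptions; check the Seldon wrapping documentation for current values):
+
+```bash
+# Build a Docker image from the model directory and push it to a registry
+s2i build . seldonio/seldon-core-s2i-python3:1.1.0 my-registry/my-model:0.1
+docker push my-registry/my-model:0.1
+```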
+
+### 2. Create your inference graph
+
+Seldon Core extends Kubernetes with its own custom resource `SeldonDeployment` where you can define your runtime [inference graph](https://docs.seldon.io/projects/seldon-core/en/latest/graph/inference-graph.html) made up of models and other components that Seldon will manage.
+
+A `SeldonDeployment` is a JSON or YAML file that allows you to define your graph of component images and the resources each of those images will need to run (using a Kubernetes PodTemplateSpec). Below is a minimal example for a single model, in YAML:
+
+```yaml
+apiVersion: machinelearning.seldon.io/v1alpha2
+kind: SeldonDeployment
+metadata:
+ name: seldon-model
+spec:
+ name: test-deployment
+ predictors:
+ - componentSpecs:
+ - spec:
+ containers:
+ - name: classifier
+ image: seldonio/mock_classifier:1.0
+ graph:
+ children: []
+ endpoint:
+ type: REST
+ name: classifier
+ type: MODEL
+ name: example
+ replicas: 1
+```
+
+[ref](https://docs.seldon.io/projects/seldon-core/en/v1.1.0/graph/inference-graph.html)
+
+The key components are:
+
+* A list of **`predictors`**, each with a specification for the number of replicas.
+ * Each predictor defines a graph and its set of deployments. Having multiple predictors is useful when you want to split traffic between a main graph and a [canary](https://martinfowler.com/bliki/CanaryRelease.html), or for other production rollout scenarios.
+* For each predictor, a **list of `componentSpecs`**. Each `componentSpec` is a Kubernetes `PodTemplateSpec` that Seldon will build into a Kubernetes Deployment. Place here the images from your graph and their requirements, e.g. `Volumes`, `ImagePullSecrets`, Resources Requests, etc.
+* A **`graph`** specification that describes how the components are joined together.
+
+To understand the inference graph definition in detail, see the [Seldon Deployment reference](https://docs.seldon.io/projects/seldon-core/en/latest/reference/seldon-deployment.html).
+
+### 3. Deploy the model to the Kubernetes cluster
+
+Once the inference graph is created as a JSON or YAML Seldon Deployment resource, you can deploy it to the Kubernetes cluster:
+
+```bash
+kubectl apply -f my_deployment.yaml
+```
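+
+Once the deployment is running, the model can be queried over REST. A sketch, assuming an ingress or gateway exposes the Seldon endpoint (host and namespace are illustrative):
+
+```bash
+curl -s -X POST http://<ingress-host>/seldon/fadi/seldon-model/api/v1.0/predictions \
+  -H 'Content-Type: application/json' \
+  -d '{"data": {"ndarray": [[1.0, 2.0, 5.0]]}}'
+```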
+
+To delete (or manage) your `SeldonDeployment` you can use `kubectl` with the custom resource `SeldonDeployment`, for example to see if there are any models deployed:
+
+```bash
+kubectl get seldondeployment
+```
+
+To delete the model `seldon-model`:
+
+```bash
+kubectl delete seldondeployment seldon-model
+```
diff --git a/doc/USERMANAGEMENT.md b/doc/USERMANAGEMENT.md
index ccad531..1c99b32 100644
--- a/doc/USERMANAGEMENT.md
+++ b/doc/USERMANAGEMENT.md
@@ -7,34 +7,43 @@ User Management
* [JupyterHub](#jupyterhub)
* [Superset](#superset)
* [PostgreSQL](#postgresql)
+  * [NiFi](#nifi)
+    * [Configuration](#configuration)
+    * [Sign in](#sign-in)
+    * [Authorizers & Initial Admin Identity](#authorizers--initial-admin-identity)
+    * [Adding users](#adding-users)
+    * [Multi-Tenancy](#multi-tenancy)
* [3. Manage your LDAP server](#3-manage-your-ldap-server)
* [Adding a user](#adding-a-user)
-
+* [4. Creating groups](#4-creating-groups)
+  * [1. PostgreSQL](#1-postgresql)
+  * [2. Grafana](#2-grafana)
+  * [3. JupyterHub](#3-jupyterhub)
+
This page provides information on how to configure FADI user authentication and authorization (LDAP, RBAC, ...).
-For user management FADI uses [OpenLDAP](https://www.openldap.org) to ensure the [LDAP user authentication](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol) for the platform services.
+For user management, FADI uses [OpenLDAP](https://www.openldap.org) to ensure the [LDAP user authentication](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol) for the platform services.
## 1. Create the LDAP server
-
+
> "OpenLDAP Software is an open source implementation of the Lightweight Directory Access Protocol."
-The **OpenLDAP** service creates an empty LDAP server for the company `Example Inc.` and the domain `example.org` by default, which we will overwrite via the environment variables in the Helm chart.
+The **OpenLDAP** service creates an empty LDAP server for the company `Example Inc.` and the domain `example.org` by default, which we will overwrite via the environment variables in the Helm chart.
-The first entry that will be created is for the administrator user ; to initially connect to any of the services you can use the following credentials:
+The first entry that will be created is for the administrator user. To initially connect to any of the services, use the following credentials:
* Username: `admin`
* Password: `password1`
-Once created we either add the users/groups manually through the phpLDAPadmin web interface, or you can pass a [LDIF file](https://en.wikipedia.org/wiki/LDAP_Data_Interchange_Format) (see the [sample ldif file](/examples/basic/example.ldif)).
+Once created, you can either add the users/groups manually through the phpLDAPadmin web interface, or pass an [LDIF file](https://en.wikipedia.org/wiki/LDAP_Data_Interchange_Format) (see the [sample ldif file](/examples/basic/example.ldif)) to the chart.
-## 2. Configure the various services
+## 2. Configure services
### Grafana
-Grafana has 3 roles by default: **Admin** , **Editor** and **Viewer**. To assign these roles to the different groups of LDAP users, you need to pass that in the configuration. Let's assume you have a group of developers in your LDAP server with the entry `cn=developers,ou=groups,dc=ldap,dc=cetic,dc=be` that you want to give the role of **Editor**. You can add these 3 lines of configuration under the default LDAP configuration that FADI already provides:
+Grafana has 3 roles by default: **Admin** , **Editor** and **Viewer**. To assign these roles to the different groups of LDAP users, we need to pass that information through the configuration. Let's assume we have a group of developers in the LDAP server with the entry `cn=developers,ou=groups,dc=ldap,dc=cetic,dc=be` that we want to give the role of **Editor**. We can add these 3 lines of configuration under the default LDAP configuration that FADI already provides:
```
[[servers.group_mappings]]
@@ -42,66 +51,165 @@ group_dn = "cn=developers,ou=groups,dc=ldap,dc=cetic,dc=be"
org_role = "Editor"
```
-For more information [grafana LDAP configuration](https://grafana.com/docs/auth/ldap/#configuration-examples) is very well documented.
+For more information, see [Grafana LDAP documentation](https://grafana.com/docs/auth/ldap/#configuration-examples).
### JupyterHub
-JupyterHub configuration allows you to give access to users/groups through templates, the templates usually follow this syntax:
+The JupyterHub configuration allows giving access to users/groups through templates, which usually follow this syntax:
* `uid={username},cn=admin,dc=ldap,dc=cetic,dc=be`
* `uid={username},ou=developers,dc=ldap,dc=cetic,dc=be`
-where `{username}` will be overwrought by the value the user passes as username in the authentication screen. Let's suppose we only have those two templates, when the user david passes his name for authentication, for him to successfully sign on, his entry should be one of the following:
+where `{username}` will be replaced by the value the user provides as username on the authentication screen. Let's suppose we only have those two templates, and the user David provides his name for authentication. For him to successfully sign in, his entry should be one of the following:
* `uid=david,ou=admins,dc=ldap,dc=cetic,dc=be`
* `uid=david,ou=developers,dc=ldap,dc=cetic,dc=be`
-which means if david isn't in the developers group or the admins group, he will not be able to sign in.
+which means that if David is not in the `developers` or `admins` groups, he will not be able to sign in.
A sample configuration can be found in the `jupyterhub:auth` section of the default FADI [`values.yaml` file](https://github.com/cetic/helm-fadi/blob/master/values.yaml)
-More details on using LDAP with JupyterHub in the [Jupyter documentation](https://z2jh.jupyter.org/en/stable/authentication.html#authenticating-with-ldap),
+More details on using LDAP with JupyterHub in the [Jupyter documentation](https://z2jh.jupyter.org/en/stable/authentication.html#authenticating-with-ldap).
### Superset
Superset uses **Flask-AppBuilder** Security for LDAP authentication; to activate it, we need to pass the configuration in the Python config file `configFile.py`.
-For more information on how to configure Superset with LDAP: the official documentation for the [flask-appbuilder authentication-ldap](https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap).
+For more information on how to configure Superset with LDAP, follow the official documentation for the [flask-appbuilder authentication-ldap](https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap).
-For more information about the different options you can use to configure your Superset LDAP authentication: the official documentation for the [Base Configuration](https://flask-appbuilder.readthedocs.io/en/latest/config.html).
+For more information about the different options you can use to configure your Superset LDAP authentication, follow the official documentation for the [Base Configuration](https://flask-appbuilder.readthedocs.io/en/latest/config.html).
### PostgreSQL
-LDAP authentication method in PostgreSQL uses LDAP as the password verification method. LDAP is used only to validate the username/password pairs. Therefore there's a Cron job that executes the tool [pg-ldap-sync](https://github.com/larskanis/pg-ldap-sync) to synchronise the users between the LDAP server and the database.
-
-Client authentication is controlled by a configuration file called `pg_hba.conf`, you can pass your authentication config through the variable `pghba` in the `values.yaml` file.
+LDAP authentication method in PostgreSQL uses LDAP as the password verification method. LDAP is used only to validate the username/password pairs. Therefore there is a Cron job that executes the tool [pg-ldap-sync](https://github.com/larskanis/pg-ldap-sync) to synchronise the users between the LDAP server and the database management system.
-The most common formats of authentication configuration are :
+Client authentication is controlled by a configuration file called [`pg_hba.conf`](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html), authentication configuration can be provided through the variable `pghba` in the `values.yaml` file.
+The configuration lines for the most common authentication methods are:
```
local database user auth-method [auth-options]
host database user address auth-method [auth-options]
```
-For example, to use LDAP authentication for local users, your configuration should look something like this :
+For example, to use LDAP authentication for local users, the configuration should look like this:
```
local all all ldap ldapserver=example.com ldapport=389 [other-ldap-options]
```
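+
+Once this configuration is applied, an LDAP-backed login can be tested with `psql` (the user name is illustrative and must exist in both LDAP and PostgreSQL):
+
+```bash
+psql -U john -d postgres
+```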
-For more information about how to add LDAP authentication to PostgreSQL: [LDAP authentication in PostgreSQL](https://www.postgresql.org/docs/11/auth-ldap.html)
+For more information about how to add LDAP authentication to PostgreSQL, follow [LDAP authentication in PostgreSQL](https://www.postgresql.org/docs/11/auth-ldap.html).
For more information about pg-ldap-sync: [Use LDAP permissions in PostgreSQL](https://github.com/larskanis/pg-ldap-sync)
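+
+A synchronisation run can also be triggered manually for debugging (the config file path is illustrative; the Cron job normally takes care of this):
+
+```bash
+pg_ldap_sync -c /etc/pg-ldap-sync/config.yaml -vv
+```
+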
+### NiFi
+
+#### Configuration
+
+To secure NiFi with LDAP, we need to enable SSL for NiFi. As part of enabling SSL, NiFi automatically requires authentication, for which we will use LDAP. For a detailed description of the whole process, see [Apache NiFi - Authorization and Multi-Tenancy](https://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy).
+
+To configure LDAP in the Helm chart, we first enable it by setting the variable `auth.ldap.enabled` to `true`, then configure the rest of the variables. Here is an example for the default FADI LDAP:
+
+
+```yaml
+auth:
+ ldap:
+ enabled: true
+ host: ldap://fadi-openldap:389
+ searchBase: cn=admin,dc=ldap,dc=cetic,dc=be
+ admin: cn=admin,dc=ldap,dc=cetic,dc=be
+ pass: password1
+ searchFilter: (objectClass=*)
+ userIdentityAttribute: cn
+```
+
+Then we make sure to pre-set the `nodePort`. Let's say we want the node port to be 34567; our service configuration should then look like this:
+
+```yaml
+service:
+ type: NodePort
+ nodePort: 34567
+```
+
+Then we set the properties as follows. The `nifi.properties.webProxyHost` variable should contain the exact URL and port that will be used to access NiFi later. If our DNS name is nifi.example.cetic.be and/or the IP address is 10.10.10.10, the configuration should look like this:
+
+
+```yaml
+ properties:
+ externalSecure: false
+ isNode: false
+ httpPort: null
+ httpsPort: 9443
+ webProxyHost: nifi.example.cetic.be:34567, 10.10.10.10:34567
+ clusterPort: 6007
+ clusterSecure: true
+```
+
+#### Sign in
+
+
+When accessing NiFi, a sign-in screen will show up. Sign in using the LDAP username/password as usual (not the full DN).
+
+![Sign in screen](images/installation/sign-in.png)
+
+**Important note:** only one user can connect the first time: the user with the **Initial Admin Identity**. In FADI, this is the default LDAP admin user (admin / password1). Once connected, the user interface will look like this:
+
+![NiFi user interface](images/installation/grey.png)
+
+Everything is disabled because there are no users registered and no policies (permissions) assigned yet, but the Initial Admin has all the rights needed to create policies and grant permissions.
+
+#### Authorizers & Initial Admin Identity
+
+We can see that the initial admin has READ/WRITE access to /flow, /tenants, /policies, and /controller, and the cluster nodes have READ/WRITE access to /proxy. This allows the initial admin to get into the UI (/flow) and to create new users/groups (/tenants), and to create new policies (/policies). There are no policies granting access to the root process group, or any other components. This is why all of the actions in the toolbar are grayed-out.
+
+We can create a policy for the root process group by clicking the key icon in the operate palette on the left:
+
+
+
+After creating the necessary policies on the root process group, namely “view the component” (which is essentially the READ action) and “modify the component” (the WRITE policy), the admin user will have the appropriate authorizations.
+
+Note that the same can be done for the rest of the policies.
+
+#### Adding users
+
+Adding the LDAP users has to be done manually through the user interface as **there is no syncing mechanism to automatically add LDAP users/groups into NiFi**.
+
+When connected with the initial admin account (using the individual certificate), go into users to add users, and then into policies to grant access and rights to the users. In this example we are going to add the user John who already exists in the LDAP server (to add the user we should use the full DN: `cn=john,cn=admin,dc=ldap,dc=cetic,dc=be`).
+
+![Adding users](images/installation/users.gif)
+
+After adding the user **do not forget to assign the policies** for that user to give the needed permissions: by default the new user cannot even access the user interface.
+
+#### Multi-Tenancy
+
+This is adapted from this great article: [Apache NiFi - Authorization and Multi-Tenancy](https://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy).
+
+> The policy structure is hierarchical, such that when access is checked for a component, if no policy is found then it checks the parent, and if no policy is found for the parent, it checks the parent’s parent, and so on, until reaching the root process group. This means that by giving ourselves READ/WRITE to the root group we now have READ/WRITE for all sub-components **until a more restrictive policy is created.**
+
+Let's simulate how two development teams might share a single NiFi instance by creating two process groups:
+
+![Two process groups](images/installation/Teams.png)
+
+Let's pretend we are a member of Team 1 so we should have full access to the first process group, but we should not be able to know anything about what Team 2 is doing, and should not be able to modify their flow. We can simulate this by creating a more restrictive policy on the "Team 2" process group that does not include the current user (the initial admin).
+
+When selecting the "Team 2" process group, the palette on the left changes and says "Team 2". This palette always operates in the context of the selected component, so if we click the key icon while "Team 2" is selected, we are editing the policies for the "Team 2" process group. If we click the "Override this policy" link for "view component" and create a new policy without adding users, we should get the following:
+
+![Team 2 policy override](images/installation/Team1.png)
+
+If we do the same thing for "modify the component" and then return to the main canvas we should see the following on the next refresh:
+
+![Team 2 restricted view](images/installation/Team2.png)
+
+We can no longer see the name of the group, and we now have a more restrictive context menu that prevents us from configuring the group.
+
## 3. Manage your LDAP server
-> " phpLDAPadmin is a web app for administering Lightweight Directory Access Protocol (LDAP) servers.."
+> "phpLDAPadmin is a web app for administering Lightweight Directory Access Protocol (LDAP) servers."
-In order to use [phpLDAPadmin](http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page) you have to pass the configuration for your LDAP server through the environmental variable `_PHPLDAPADMIN_LDAP_HOSTS_`. To connect this service with the OpenLDAP server, you need to pass **the name of the service** (`fadi-openldap`). To connect to the web application, simply run the following command:
+In order to use [phpLDAPadmin](http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page), pass the configuration for the LDAP server through the environmental variable `_PHPLDAPADMIN_LDAP_HOSTS_`. To connect this service with the OpenLDAP server, pass **the name of the service** (`fadi-openldap`). To connect to the web application, run the following command:
```bash
minikube service fadi-phpldapadmin -n fadi
@@ -109,14 +217,14 @@ minikube service fadi-phpldapadmin -n fadi
The main page for phpLDAPadmin will open in your default browser where you can connect to your LDAP server and manage it.
-
+
The first entry that will be created is for the administrator and the password is initialized to `password1` which makes the credentials to use to connect to this server in phpLDAPadmin the following:
* Login DN: `cn=admin,dc=ldap,dc=cetic,dc=be`
* Password: `password1`
-For more information on how to use phpLDAPadmin, see the [phpLDAPadmin documentation](http://phpldapadmin.sourceforge.net/function-ref/1.2/)
+For more information on how to use phpLDAPadmin, see the [phpLDAPadmin documentation](http://phpldapadmin.sourceforge.net/function-ref/1.2/).
### Adding a user
@@ -126,16 +234,16 @@ This section provides an example on how to add a user through phpLDAPadmin and a
-Access your phpLDAPadmin service and connect using the admin Login DN & password, the default Login DN & password are:
+Access your phpLDAPadmin service and connect using the admin Login DN and password, defaults are:
* Login DN: `cn=admin,dc=ldap,dc=cetic,dc=be`
* Password: `password1`
-
+
-#### 2. Add the user
+#### 2. Add users
-To add users, there are two ways: using a tempalte and manually.
+To add users, there are two ways: using a template and manually.
#### Import the user using a template
@@ -153,7 +261,7 @@ uid: John Doe
userpassword: Johnpassword
```
-Change the user name and other misc info ( mail, etc.) and copy/paste it in the import field, here is an example of a modified template for a user called `Luke Skywalker`.
+Change the user name and other misc info (mail, etc.) and copy/paste it in the import field, here is an example of a modified template for a user called `Luke Skywalker`:
```
dn: cn=Luke,cn=admin,dc=ldap,dc=cetic,dc=be
@@ -167,26 +275,194 @@ uid: Luke Skywalker
userpassword: ThereIsNoTry
```
-Now you can go to `import`, paste that template and click `proceed` and the user will be added.
+Now go to `import`, paste that template and click `proceed` and the user will be added.
#### Add the user manually
-You can add a user manually through phpLDAPadmin, after connecting go to `⭐️Create new entry here` :
+You can add a user manually through phpLDAPadmin: after connecting, go to `Create new entry here`:
You can for example create a user in the default admin group `cn=admin,dc=ldap,dc=cetic,dc=be`, or create a new group in which you can create new users.
-In this example we are going to create a simple user under the default admin user (which is also a group).
+In this example, we are going to create a simple user under the default admin user (which is also a group).
When you click on `⭐️Create new entry here`, a new window called `Select a template for the creation process` will show up with all the different entries you can create:
Go to `Generic: User Account` and a list of fields will show up. Enter the information about the user you want to create and click `Create Object`.
+
+## 4. Creating groups
+
+The LDAP protocol does not define how programs function on the server or the client side; it only defines the messages exchanged between an LDAP server and an LDAP client.
+To manage your users, you need to know how to create users/groups in the LDAP server, and then assign every user/group to the right service or application **through the application's configuration in the `values.yaml` file**.
+
+We are going to create a group called **devs** and a group called **admins**, add a user to each group, and then **configure each service** to authenticate the newly created users/groups.
+
+### Create groups in OpenLDAP
+
+Here is a simple LDIF file to import, which will create:
+
+* An Organizational Unit `OU=people`
+* A group called **admins** under `ou=people,dc=ldap,dc=cetic,dc=be` so the dn will be `cn=admins,ou=people,dc=ldap,dc=cetic,dc=be`
+* A user called `John` under `cn=admins,ou=people,dc=ldap,dc=cetic,dc=be`, so the dn will be `cn=john,cn=admins,ou=people,dc=ldap,dc=cetic,dc=be`, with the password `john123`
+* A group called **devs** under `ou=people,dc=ldap,dc=cetic,dc=be` so the dn will be `cn=devs,ou=people,dc=ldap,dc=cetic,dc=be`
+* A user called `Luke` under `cn=devs,ou=people,dc=ldap,dc=cetic,dc=be`, so the dn will be `cn=luke,cn=devs,ou=people,dc=ldap,dc=cetic,dc=be`, with the password `luke123`
+
+```
+dn: ou=people,dc=ldap,dc=cetic,dc=be
+ou: people
+objectClass: organizationalUnit
+
+dn: cn=admins,ou=people,dc=ldap,dc=cetic,dc=be
+cn: admins
+gidnumber: 501
+objectclass: posixGroup
+objectclass: top
+
+dn: cn=devs,ou=people,dc=ldap,dc=cetic,dc=be
+cn: devs
+gidnumber: 500
+objectclass: posixGroup
+objectclass: top
+
+dn: cn=luke,cn=devs,ou=people,dc=ldap,dc=cetic,dc=be
+cn: luke
+gidnumber: 500
+givenname: luke
+homedirectory: /home/users/lskywalker
+loginshell: /bin/sh
+objectclass: inetOrgPerson
+objectclass: posixAccount
+objectclass: top
+sn: skywalker
+uid: luke
+uidnumber: 1000
+userpassword: {MD5}hSQr2UGesHOpB9f3VrX43Q==
+
+dn: cn=john,cn=admins,ou=people,dc=ldap,dc=cetic,dc=be
+cn: john
+gidnumber: 501
+givenname: john
+homedirectory: /home/users/John
+loginshell: /bin/sh
+objectclass: inetOrgPerson
+objectclass: posixAccount
+objectclass: top
+sn: Doe
+uid: john
+uidnumber: 1001
+userpassword: john123
+```
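+
+This LDIF can be imported through the phpLDAPadmin `import` page, or with the standard `ldapadd` tool. A sketch, assuming the default FADI OpenLDAP service and admin credentials:
+
+```bash
+# Expose the LDAP service locally, then import the LDIF above
+kubectl port-forward svc/fadi-openldap 3890:389 -n fadi &
+ldapadd -x -H ldap://localhost:3890 \
+  -D "cn=admin,dc=ldap,dc=cetic,dc=be" -w password1 \
+  -f groups.ldif
+```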
+
+### 1. PostgreSQL
+
+To copy the groups/users into PostgreSQL, we need to configure the Cron job that executes [pg-ldap-sync](https://github.com/larskanis/pg-ldap-sync) to synchronise the users between the LDAP server and the database; here we configure pg-ldap-sync to add the users of our groups.
+
+In the `values.yaml` file, head to the variable `postgresql.ldap.pgldapconfig` and make sure the `ldap_users` section looks like this:
+
+```
+ldap_users:
+base: DC=ldap,DC=cetic,DC=be
+# LDAP filter (according to RFC 2254)
+# defines to users in LDAP to be synchronized
+filter: (!(cn=admin))
+# this attribute is used as PG role name
+name_attribute: uid
+# lowercase name for use as PG role name
+lowercase_name: true
+```
+
+And the `ldap_groups` section looks like this:
+
+
+```
+ldap_groups:
+base: DC=ldap,DC=cetic,DC=be
+filter: (|(cn=devs)(ou=people)(cn=admins))
+# this attribute is used as PG role name
+name_attribute: cn
+# this attribute must reference all member DNs of the given group
+member_attribute: member
+```
+The main change here is the **filter `filter: (|(cn=devs)(ou=people)(cn=admins))`**, in which we add the names of the groups we want to be added to PostgreSQL. For example, if our filter is `filter: (|(cn=devs)(ou=people))`, the group **admins** will not be added.
+
+### 2. Grafana
+
+For Grafana, head to the variable `grafana.ldap.config` and make sure it looks like this:
+
+```
+ config: |-
+ verbose_logging = true
+ [[servers]]
+ host = "fadi-openldap"
+ port = 389
+ use_ssl = false
+ start_tls = false
+ ssl_skip_verify = false
+ bind_dn = "cn=admin,DC=ldap,DC=cetic,DC=be"
+ bind_password = 'password1'
+ search_filter = "(|(cn=%s)(uid=%s))"
+ search_base_dns = ["dc=ldap,dc=cetic,dc=be"]
+ group_search_base_dns = ["ou=people,dc=ldap,dc=cetic,dc=be"]
+
+ [servers.attributes]
+ name = "givenName"
+ surname = "sn"
+ username = "cn"
+ member_of = "memberOf"
+ email = "email"
+
+ [[servers.group_mappings]]
+ group_dn = "cn=admins,ou=people,dc=ldap,dc=cetic,dc=be"
+ org_role = "Admin"
+ grafana_admin = true
+
+ [[servers.group_mappings]]
+ group_dn = "*"
+ org_role = "Viewer"
+```
+
+The main change here is `group_search_base_dns = ["ou=people,dc=ldap,dc=cetic,dc=be"]`, in which we add the Organizational Unit `ou=people` so that the newly created groups **devs** and **admins** can be found. Then, to manage the access (and/or roles), follow the [documentation](https://grafana.com/docs/grafana/latest/auth/ldap/). Adding the following configuration sample will give the **admins** group the **admin rights**, while others will receive the **Viewer rights**.
+
+
+```
+ [[servers.group_mappings]]
+ group_dn = "cn=admins,ou=people,dc=ldap,dc=cetic,dc=be"
+ org_role = "Admin"
+ grafana_admin = true
+
+ [[servers.group_mappings]]
+ group_dn = "*"
+ org_role = "Viewer"
+```
+
+The **admin rights** make the user a Super Admin. This means they can access the Server Admin views, where all users and organizations can be administered, in addition of course to creating/editing dashboards, data sources, etc. The **Viewer rights** only allow the users to **see** the created dashboards.
+For more information, see the [Grafana permissions overview](https://grafana.com/docs/grafana/latest/permissions/overview/).
+
+### 3. JupyterHub
+
+For JupyterHub, the variable `jupyterhub.auth.ldap.dn.templates` is a list of DNs to be accepted.
+If we want to add the **group devs** and give them access, we add the line `cn={username},cn=devs,ou=people,dc=ldap,dc=cetic,dc=be`, where `{username}` is the username corresponding to the user.
+Here we are not adding `cn={username},cn=admins,ou=people,dc=ldap,dc=cetic,dc=be`, so the group **admins** will not have access through that template. The list should look like this:
+
+```yaml
+ auth:
+ type: ldap
+ ldap:
+ server:
+ address: fadi-openldap
+ dn:
+ templates:
+ - 'cn={username},cn=admin,dc=ldap,dc=cetic,dc=be'
+ - 'uid={username},cn=admins,ou=people,dc=ldap,dc=cetic,dc=be'
+ - 'cn={username},dc=ldap,dc=cetic,dc=be'
+ - 'cn={username},cn=devs,ou=people,dc=ldap,dc=cetic,dc=be'
+```
diff --git a/doc/deployment/GKE.md b/doc/deployment/GKE.md
deleted file mode 100644
index e69de29..0000000
diff --git a/doc/deployment/openshift.md b/doc/deployment/openshift.md
deleted file mode 100644
index e69de29..0000000
diff --git a/doc/images/architecture/helm-architecture.png b/doc/images/architecture/helm-architecture.png
deleted file mode 100644
index 7ce80fb..0000000
Binary files a/doc/images/architecture/helm-architecture.png and /dev/null differ
diff --git a/doc/images/installation/AddNodefix.gif b/doc/images/installation/AddNodefix.gif
new file mode 100644
index 0000000..889ea2b
Binary files /dev/null and b/doc/images/installation/AddNodefix.gif differ
diff --git a/doc/images/installation/CreateCluster.gif b/doc/images/installation/CreateCluster.gif
new file mode 100644
index 0000000..57f5a91
Binary files /dev/null and b/doc/images/installation/CreateCluster.gif differ
diff --git a/doc/images/installation/Proxmoxve.png b/doc/images/installation/Proxmoxve.png
new file mode 100644
index 0000000..d7b4141
Binary files /dev/null and b/doc/images/installation/Proxmoxve.png differ
diff --git a/doc/images/installation/StorageClassDefault.gif b/doc/images/installation/StorageClassDefault.gif
new file mode 100644
index 0000000..07b22fb
Binary files /dev/null and b/doc/images/installation/StorageClassDefault.gif differ
diff --git a/doc/images/installation/StorageOS.png b/doc/images/installation/StorageOS.png
new file mode 100644
index 0000000..19fa4d8
Binary files /dev/null and b/doc/images/installation/StorageOS.png differ
diff --git a/doc/images/installation/StorageOSGuide.gif b/doc/images/installation/StorageOSGuide.gif
new file mode 100644
index 0000000..a89258f
Binary files /dev/null and b/doc/images/installation/StorageOSGuide.gif differ
diff --git a/doc/images/installation/StorageOS_limits.png b/doc/images/installation/StorageOS_limits.png
new file mode 100644
index 0000000..cab4bec
Binary files /dev/null and b/doc/images/installation/StorageOS_limits.png differ
diff --git a/doc/images/installation/Team1.png b/doc/images/installation/Team1.png
new file mode 100644
index 0000000..6659a13
Binary files /dev/null and b/doc/images/installation/Team1.png differ
diff --git a/doc/images/installation/Team2.png b/doc/images/installation/Team2.png
new file mode 100644
index 0000000..29a2fb8
Binary files /dev/null and b/doc/images/installation/Team2.png differ
diff --git a/doc/images/installation/Teams.png b/doc/images/installation/Teams.png
new file mode 100644
index 0000000..5c6327b
Binary files /dev/null and b/doc/images/installation/Teams.png differ
diff --git a/doc/images/installation/addnode.png b/doc/images/installation/addnode.png
new file mode 100644
index 0000000..dffe883
Binary files /dev/null and b/doc/images/installation/addnode.png differ
diff --git a/doc/images/installation/create.gif b/doc/images/installation/create.gif
new file mode 100644
index 0000000..5260013
Binary files /dev/null and b/doc/images/installation/create.gif differ
diff --git a/doc/images/installation/defaultpvc.png b/doc/images/installation/defaultpvc.png
new file mode 100644
index 0000000..744baeb
Binary files /dev/null and b/doc/images/installation/defaultpvc.png differ
diff --git a/doc/images/installation/grey.png b/doc/images/installation/grey.png
new file mode 100644
index 0000000..9a077f1
Binary files /dev/null and b/doc/images/installation/grey.png differ
diff --git a/doc/images/installation/nfsapp.png b/doc/images/installation/nfsapp.png
new file mode 100644
index 0000000..f08ba93
Binary files /dev/null and b/doc/images/installation/nfsapp.png differ
diff --git a/doc/images/installation/proxmoxdriver.gif b/doc/images/installation/proxmoxdriver.gif
new file mode 100644
index 0000000..7bbcf1d
Binary files /dev/null and b/doc/images/installation/proxmoxdriver.gif differ
diff --git a/doc/images/installation/rancherkubectl.png b/doc/images/installation/rancherkubectl.png
new file mode 100644
index 0000000..e57916d
Binary files /dev/null and b/doc/images/installation/rancherkubectl.png differ
diff --git a/doc/images/installation/ranchermonitoring.png b/doc/images/installation/ranchermonitoring.png
new file mode 100644
index 0000000..7aa5fc2
Binary files /dev/null and b/doc/images/installation/ranchermonitoring.png differ
diff --git a/doc/images/installation/sign-in.png b/doc/images/installation/sign-in.png
new file mode 100644
index 0000000..50690c7
Binary files /dev/null and b/doc/images/installation/sign-in.png differ
diff --git a/doc/images/installation/users.gif b/doc/images/installation/users.gif
new file mode 100644
index 0000000..4c2c395
Binary files /dev/null and b/doc/images/installation/users.gif differ
diff --git a/doc/images/installation/workernode.png b/doc/images/installation/workernode.png
new file mode 100644
index 0000000..dcd1e0e
Binary files /dev/null and b/doc/images/installation/workernode.png differ
diff --git a/doc/images/logos/Proxmox.png b/doc/images/logos/Proxmox.png
new file mode 100644
index 0000000..ab33daa
Binary files /dev/null and b/doc/images/logos/Proxmox.png differ
diff --git a/doc/images/logos/binderhub.png b/doc/images/logos/binderhub.png
new file mode 100644
index 0000000..9df01b1
Binary files /dev/null and b/doc/images/logos/binderhub.png differ
diff --git a/doc/images/logos/longhorn.png b/doc/images/logos/longhorn.png
new file mode 100644
index 0000000..70bd854
Binary files /dev/null and b/doc/images/logos/longhorn.png differ
diff --git a/doc/images/logos/nfs.jpg b/doc/images/logos/nfs.jpg
new file mode 100644
index 0000000..27ad247
Binary files /dev/null and b/doc/images/logos/nfs.jpg differ
diff --git a/doc/images/logos/rancher.png b/doc/images/logos/rancher.png
new file mode 100644
index 0000000..02ab6c9
Binary files /dev/null and b/doc/images/logos/rancher.png differ
diff --git a/doc/images/logos/seldon_logo.jpg b/doc/images/logos/seldon_logo.jpg
new file mode 100644
index 0000000..ed325e3
Binary files /dev/null and b/doc/images/logos/seldon_logo.jpg differ
diff --git a/doc/images/logos/storageos.svg b/doc/images/logos/storageos.svg
new file mode 100644
index 0000000..1bc8005
--- /dev/null
+++ b/doc/images/logos/storageos.svg
@@ -0,0 +1,83 @@
+
+
+
+
diff --git a/doc/images/logos/tensorflow.png b/doc/images/logos/tensorflow.png
new file mode 100644
index 0000000..4ab0678
Binary files /dev/null and b/doc/images/logos/tensorflow.png differ
diff --git a/examples/README.md b/examples/README.md
index aedf11b..6f548b5 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -4,4 +4,6 @@ FADI examples
This section contains various usage examples for FADI:
* [basic example](/USERGUIDE.md) with batch ingestion
-* [streaming ingestion](examples/kafka/README.md) with streaming ingestion with the help of the [Apache Kafka](https://kafka.apache.org) message broker
\ No newline at end of file
+* [streaming ingestion](/examples/kafka/README.md) with the help of the [Apache Kafka](https://kafka.apache.org) message broker
+* [on-demand compute environments](/examples/binderhub/README.md) with [BinderHub](https://binderhub.readthedocs.io/en/latest/)
+* [Tensorflow example](/examples/tensorflow/README.md) for image classification
\ No newline at end of file
diff --git a/examples/basic/Tensorflow_usecase.md b/examples/basic/Tensorflow_usecase.md
new file mode 100644
index 0000000..9f81af9
--- /dev/null
+++ b/examples/basic/Tensorflow_usecase.md
@@ -0,0 +1,96 @@
+# Tensorflow simple use case
+
+This is a simple TensorFlow use case: first we authenticate to JupyterHub, then we choose the TensorFlow environment, click `Spawn` and follow the steps below.
+
+
+![Jupyter web interface](./images/Tensorflow.png)
+
+
+* Import TensorFlow:
+
+```python
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+
+import tensorflow as tf
+```
+* Load and prepare the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Convert the samples from integers to floating-point numbers:
+
+```python
+mnist = tf.keras.datasets.mnist
+
+(x_train, y_train), (x_test, y_test) = mnist.load_data()
+x_train, x_test = x_train / 255.0, x_test / 255.0
+```
+* Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training:
+
+```python
+model = tf.keras.models.Sequential([
+ tf.keras.layers.Flatten(input_shape=(28, 28)),
+ tf.keras.layers.Dense(128, activation='relu'),
+ tf.keras.layers.Dropout(0.2),
+ tf.keras.layers.Dense(10)
+])
+```
+* For each example the model returns a vector of "[logits](https://developers.google.com/machine-learning/glossary#logits)" or "[log-odds](https://developers.google.com/machine-learning/glossary#log-odds)" scores, one for each class:
+
+```python
+predictions = model(x_train[:1]).numpy()
+predictions
+```
+![Jupyter web interface](./images/Tensorflowusecase.png)
+
+* The `tf.nn.softmax` function converts these logits to "probabilities" for each class:
+
+```python
+tf.nn.softmax(predictions).numpy()
+```
+![Jupyter web interface](./images/tensor2.png)
+
+* The `losses.SparseCategoricalCrossentropy` loss takes a vector of logits and a True index and returns a scalar loss for each example.
+
+```python
+loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
+```
+* This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class.
+
+This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to `-tf.math.log(1/10) ~= 2.3`.
+
+```python
+loss_fn(y_train[:1], predictions).numpy()
+```
+![Jupyter web interface](./images/tensor3.png)
+
+* Before training, configure and compile the model with an optimizer, the loss function and a metric:
+
+```python
+model.compile(optimizer='adam',
+ loss=loss_fn,
+ metrics=['accuracy'])
+```
+* The `Model.fit` method adjusts the model parameters to minimize the loss:
+
+```python
+model.fit(x_train, y_train, epochs=5)
+```
+
+![Jupyter web interface](./images/tensor4.png)
+
+* The `Model.evaluate` method checks the model's performance, usually on a "[validation set](https://developers.google.com/machine-learning/glossary#validation-set)".
+
+```python
+model.evaluate(x_test, y_test, verbose=2)
+```
+![Jupyter web interface](./images/tensor5.png)
+
+* The image classifier is now trained to ~98% accuracy on this dataset. If you want your model to return a probability, you can wrap the trained model and attach the softmax to it:
+
+```python
+probability_model = tf.keras.Sequential([
+ model,
+ tf.keras.layers.Softmax()
+])
+```
+
+```python
+probability_model(x_test[:5])
+```
+![Jupyter web interface](./images/tensor6.png)
\ No newline at end of file
diff --git a/examples/basic/images/Tensorflow.png b/examples/basic/images/Tensorflow.png
new file mode 100644
index 0000000..bf7d9a6
Binary files /dev/null and b/examples/basic/images/Tensorflow.png differ
diff --git a/examples/basic/images/Tensorflowusecase.png b/examples/basic/images/Tensorflowusecase.png
new file mode 100644
index 0000000..4b02269
Binary files /dev/null and b/examples/basic/images/Tensorflowusecase.png differ
diff --git a/examples/basic/images/tensor2.png b/examples/basic/images/tensor2.png
new file mode 100644
index 0000000..f543412
Binary files /dev/null and b/examples/basic/images/tensor2.png differ
diff --git a/examples/basic/images/tensor3.png b/examples/basic/images/tensor3.png
new file mode 100644
index 0000000..b7a1231
Binary files /dev/null and b/examples/basic/images/tensor3.png differ
diff --git a/examples/basic/images/tensor4.png b/examples/basic/images/tensor4.png
new file mode 100644
index 0000000..c6ebb9c
Binary files /dev/null and b/examples/basic/images/tensor4.png differ
diff --git a/examples/basic/images/tensor5.png b/examples/basic/images/tensor5.png
new file mode 100644
index 0000000..c3cd244
Binary files /dev/null and b/examples/basic/images/tensor5.png differ
diff --git a/examples/basic/images/tensor6.png b/examples/basic/images/tensor6.png
new file mode 100644
index 0000000..1d36e00
Binary files /dev/null and b/examples/basic/images/tensor6.png differ
diff --git a/examples/binderhub/README.md b/examples/binderhub/README.md
new file mode 100644
index 0000000..7ca79e7
--- /dev/null
+++ b/examples/binderhub/README.md
@@ -0,0 +1,108 @@
+BinderHub
+===========
+
+* [What is BinderHub](#what-is-binderhub)
+* [Prerequisites](#prerequisites)
+* [Add BinderHub to FADI](#add-binderhub-to-fadi)
+* [Basic example of BinderHub workflow](#basic-example-of-binderhub-workflow)
+  * [BinderHub configuration](#binderhub-configuration)
+  * [Build the image](#build-the-image)
+  * [Publish the image](#publish-the-image)
+  * [View the project in JupyterHub](#view-the-project-in-jupyterhub)
+  * [Launch the project](#launch-the-project)
+* [References](#references)
+
+## What is BinderHub
+
+![BinderHub](images/binderhub.png)
+
+> *The primary goal of BinderHub is creating custom computing environments that can be used by many remote users. BinderHub enables an end user to easily specify a desired computing environment from a Git repo. BinderHub then serves the custom computing environment at a URL which users can access remotely.*
+
+> *BinderHub will build Docker images out of Git repositories, and then push them to a Docker registry so that JupyterHub can launch user servers based on these images.*
+
+## Prerequisites
+
+We assume in this documentation that:
+
+* you already have a Kubernetes cluster deployed. If not, you can follow our [installation guide](https://github.com/cetic/fadi/blob/master/INSTALL.md) to install a local minikube cluster;
+* you have a [valid Docker account](https://hub.docker.com/signup/).
+
+## Add BinderHub to FADI
+
+Follow these steps to install FADI with BinderHub on your cluster:
+
+1. Clone this repository and go to the [BinderHub example folder](/examples/binderhub):
+
+```bash
+git clone https://github.com/cetic/fadi.git fadi
+cd fadi/examples/binderhub
+```
+
+2. Edit the [`config.yaml`](/examples/binderhub/config.yaml) file to set your Docker credentials (you need a Docker account because the containerized notebook will be stored on the official Docker registry - [Docker Hub](https://hub.docker.com/signup/)) and the name of your project:
+
+```yaml
+config:
+  BinderHub:
+    use_registry: true
+    image_prefix: <username>/<project-name>-
+registry:
+  username: <username>
+  password: <password>
+```
+
+> The prefix of an image is always built the same way when you want to store a container image on Docker Hub: first the *username*, then a *slash* (`/`), then the *name of the project*. In our case, BinderHub will take care of adding a tag to version the container image.
+
+3. Launch the Helm scripts; these will deploy all the FADI services and BinderHub on the cluster (this may take some time):
+```bash
+../helm/deploy.sh
+./deploy_binderhub.sh
+# see deploy.log for connection information to the various services
+```
+> The first script deploys FADI. It is important to run it from the **fadi/examples/binderhub** folder so that the **values.yaml** file is taken into account and the FADI chart does not deploy its own JupyterHub (BinderHub ships one). The second script will deploy BinderHub.
+
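+To check that the deployment went through, you can for instance watch the pods in the two namespaces used above:
+
+```bash
+# FADI services (deployed by ../helm/deploy.sh, in the fadi namespace by default)
+kubectl get pods -n fadi
+# BinderHub and its JupyterHub (deployed by ./deploy_binderhub.sh)
+kubectl get pods -n binderhub
+```
+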
+## Basic example of BinderHub workflow
+
+This example is used to test the deployment of BinderHub with a project using a `requirements.txt` file.
+
+### BinderHub configuration
+
+The first step is to access the BinderHub page.
+
+If your Kubernetes cluster is deployed with **minikube**, the command `minikube service list` will give you the address to copy/paste into your browser. On a bare-metal cluster, the command `kubectl get svc -n binderhub` will give you the service port.
+As this service is of the `NodePort` type, you can use the IP address of any node to reach the BinderHub home page.
+
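+For example (service and namespace names may differ in your deployment):
+
+```bash
+# with minikube: list the URLs of all exposed services
+minikube service list
+# on a bare-metal cluster: find the NodePort of the BinderHub service
+kubectl get svc -n binderhub
+# the web UI is then reachable at http://<any-node-ip>:<node-port>
+```
+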
+Once the BinderHub page is opened, simply fill in the fields with the following inputs:
+
+- **GitHub repository name or URL:** `https://github.com/binder-examples/requirements`
+- **branch:** `master`
+
+Finally, click on the `launch` button:
+
+![images/1_input.png](images/1_input.png)
+
+### Build the image
+
+From now on, everything is automated. BinderHub will create a container image based on what resides in the git project.
+
+![images/2_building.png](images/2_building.png)
+
+### Publish the image
+
+The image is now built and will be saved in your Docker registry (e.g. [hub.docker.com](https://hub.docker.com)). It will no longer be necessary to go through the build stages again to access this project. The duration of this step depends on your internet connection (the image is almost 600MB).
+
+![images/3_pushing.png](images/3_pushing.png)
+
+### View the project in JupyterHub
+
+The project will automatically be launched in JupyterHub once all the previous steps are completed.
+
+![images/4_jupyter.png](images/4_jupyter.png)
+
+### Launch the project
+
+You can now enjoy your work environment!
+
+![images/5_notebook.png](images/5_notebook.png)
+
+## References
+
+- [https://binderhub.readthedocs.io/en/latest/](https://binderhub.readthedocs.io/en/latest/)
\ No newline at end of file
diff --git a/examples/binderhub/config.yaml b/examples/binderhub/config.yaml
new file mode 100644
index 0000000..a92ebb9
--- /dev/null
+++ b/examples/binderhub/config.yaml
@@ -0,0 +1,29 @@
+config:
+  BinderHub:
+    use_registry: true
+    image_prefix: <username>/<project-name>-
+registry:
+  username: <username>
+  password: <password>
+
+dind:
+  enabled: true
+  daemonset:
+    image:
+      name: docker
+      tag: 18.09.2-dind
+
+jupyterhub:
+  hub:
+    services:
+      binder:
+        apiToken: 8675d9b1ff09ff2502886dfd4f0f36fd45c916372536aa09613cc9c5563d8d1d
+    db:
+      type: sqlite-memory
+  proxy:
+    secretToken: 613e0ace7628f92bab45478873451f00e65977ca6a61d2f9255667b7bbd71d0e
+    service:
+      type: NodePort
+      nodePorts:
+        http: 30902
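+# NB: the apiToken and secretToken above are sample 32-byte hex secrets;
+# regenerate them for your own deployment, e.g. with `openssl rand -hex 32`.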
diff --git a/examples/binderhub/deploy_binderhub.sh b/examples/binderhub/deploy_binderhub.sh
new file mode 100755
index 0000000..0c9281a
--- /dev/null
+++ b/examples/binderhub/deploy_binderhub.sh
@@ -0,0 +1,29 @@
+#!/usr/bin/env bash
+# This script will deploy the various FADI services on a Kubernetes cluster using Helm and kubectl
+# See https://github.com/cetic/fadi/examples/binderhub/ for usage documentation
+# Usage: ./deploy_binderhub.sh
+set -o errexit
+
+LOG_FILE="deploy.log"
+[ -e ${LOG_FILE} ] && rm ${LOG_FILE}
+exec > >(tee -a ${LOG_FILE} )
+exec 2> >(tee -a ${LOG_FILE} >&2)
+
+printf "\n\nFADI is deployed... Now Helm will install BinderHub in FADI...\n"
+# Install BinderHub in FADI
+kubectl get namespace binderhub 2> /dev/null || kubectl create namespace binderhub
+helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
+helm repo update
+helm upgrade --install binderhub jupyterhub/binderhub --version=0.2.0-n132.h1a8ce62 -f ./config.yaml --namespace binderhub
+
+sleep 5s
+# Get the node IP where JupyterHub is deployed
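+# (column 7 of 'kubectl get po -o wide' is the NODE column; on minikube this is simply "minikube")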
+nodeIP=$(kubectl get po -n binderhub -o wide | sed -n '/proxy/p' | awk '{ print $7 }')
+if [ "$nodeIP" = "minikube" ]; then
+  nodeIP=$(minikube ip)
+fi
+
+# Upgrade the BinderHub release with hub_url
+printf "\n\n Found JupyterHub deployed at $nodeIP\n"
+helm upgrade binderhub jupyterhub/binderhub --version=0.2.0-n132.h1a8ce62 -f ./config.yaml --set config.BinderHub.hub_url=http://$nodeIP:30902 --namespace binderhub
+printf "\n\nInstallation successful!\n"
diff --git a/examples/binderhub/images/1_input.png b/examples/binderhub/images/1_input.png
new file mode 100644
index 0000000..14c8d64
Binary files /dev/null and b/examples/binderhub/images/1_input.png differ
diff --git a/examples/binderhub/images/2_building.png b/examples/binderhub/images/2_building.png
new file mode 100644
index 0000000..7644331
Binary files /dev/null and b/examples/binderhub/images/2_building.png differ
diff --git a/examples/binderhub/images/3_pushing.png b/examples/binderhub/images/3_pushing.png
new file mode 100644
index 0000000..25174a2
Binary files /dev/null and b/examples/binderhub/images/3_pushing.png differ
diff --git a/examples/binderhub/images/4_jupyter.png b/examples/binderhub/images/4_jupyter.png
new file mode 100644
index 0000000..75dda75
Binary files /dev/null and b/examples/binderhub/images/4_jupyter.png differ
diff --git a/examples/binderhub/images/5_notebook.png b/examples/binderhub/images/5_notebook.png
new file mode 100644
index 0000000..03d58f4
Binary files /dev/null and b/examples/binderhub/images/5_notebook.png differ
diff --git a/examples/binderhub/images/binderhub.png b/examples/binderhub/images/binderhub.png
new file mode 100644
index 0000000..9df01b1
Binary files /dev/null and b/examples/binderhub/images/binderhub.png differ
diff --git a/examples/binderhub/values.yaml b/examples/binderhub/values.yaml
new file mode 100644
index 0000000..9cd8317
--- /dev/null
+++ b/examples/binderhub/values.yaml
@@ -0,0 +1,8 @@
+---
+# Default values for FADI are defined here: https://github.com/cetic/helm-fadi.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+# Values you define here will overwrite default values from helm-fadi.
+
+jupyterhub:
+ enabled: false
diff --git a/examples/tensorflow/README.md b/examples/tensorflow/README.md
new file mode 100644
index 0000000..c02d69e
--- /dev/null
+++ b/examples/tensorflow/README.md
@@ -0,0 +1,101 @@
+# Tensorflow simple use case
+
+This is a simple [Tensorflow](https://www.tensorflow.org/) and [Jupyter](https://jupyter.readthedocs.io/en/latest/) use case with FADI for image classification using the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
+
+
+Before starting, change the environment:
+ * Click on `Control panel`
+ * Click on `Stop my server`
+ * Finally, click on `Start server`, choose the `tensorflow` environment and click on `Spawn`.
+
+![Jupyter web interface](./images/Tensorflow.png)
+
+
+* Import TensorFlow:
+
+```python
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+
+import tensorflow as tf
+```
+* Load and prepare the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Convert the samples from integers to floating-point numbers:
+
+```python
+mnist = tf.keras.datasets.mnist
+
+(x_train, y_train), (x_test, y_test) = mnist.load_data()
+x_train, x_test = x_train / 255.0, x_test / 255.0
+```
+* Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training:
+
+```python
+model = tf.keras.models.Sequential([
+ tf.keras.layers.Flatten(input_shape=(28, 28)),
+ tf.keras.layers.Dense(128, activation='relu'),
+ tf.keras.layers.Dropout(0.2),
+ tf.keras.layers.Dense(10)
+])
+```
+* For each example the model returns a vector of "[logits](https://developers.google.com/machine-learning/glossary#logits)" or "[log-odds](https://developers.google.com/machine-learning/glossary#log-odds)" scores, one for each class:
+
+```python
+predictions = model(x_train[:1]).numpy()
+predictions
+```
+![Jupyter web interface](./images/Tensorflowusecase.png)
+
+* The `tf.nn.softmax` function converts these logits to "probabilities" for each class:
+
+```python
+tf.nn.softmax(predictions).numpy()
+```
+![Jupyter web interface](./images/tensor2.png)
+
+* The `losses.SparseCategoricalCrossentropy` loss takes a vector of logits and a True index and returns a scalar loss for each example.
+
+```python
+loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
+```
+* This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class.
+
+This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to `-tf.math.log(1/10) ~= 2.3`.
+
+```python
+loss_fn(y_train[:1], predictions).numpy()
+```
+![Jupyter web interface](./images/tensor3.png)
+
+* Before training, configure and compile the model with an optimizer, the loss function and a metric:
+
+```python
+model.compile(optimizer='adam',
+ loss=loss_fn,
+ metrics=['accuracy'])
+```
+* The `Model.fit` method adjusts the model parameters to minimize the loss:
+
+```python
+model.fit(x_train, y_train, epochs=5)
+```
+
+![Jupyter web interface](./images/tensor4.png)
+
+* The `Model.evaluate` method checks the model's performance, usually on a "[validation set](https://developers.google.com/machine-learning/glossary#validation-set)".
+
+```python
+model.evaluate(x_test, y_test, verbose=2)
+```
+![Jupyter web interface](./images/tensor5.png)
+
+* The image classifier is now trained to ~98% accuracy on this dataset. If you want your model to return a probability, you can wrap the trained model and attach the softmax to it:
+
+```python
+probability_model = tf.keras.Sequential([
+ model,
+ tf.keras.layers.Softmax()
+])
+```
+
+```python
+probability_model(x_test[:5])
+```
+![Jupyter web interface](./images/tensor6.png)
diff --git a/examples/tensorflow/images/Tensorflow.png b/examples/tensorflow/images/Tensorflow.png
new file mode 100644
index 0000000..bf7d9a6
Binary files /dev/null and b/examples/tensorflow/images/Tensorflow.png differ
diff --git a/examples/tensorflow/images/Tensorflowusecase.png b/examples/tensorflow/images/Tensorflowusecase.png
new file mode 100644
index 0000000..4b02269
Binary files /dev/null and b/examples/tensorflow/images/Tensorflowusecase.png differ
diff --git a/examples/tensorflow/images/tensor2.png b/examples/tensorflow/images/tensor2.png
new file mode 100644
index 0000000..f543412
Binary files /dev/null and b/examples/tensorflow/images/tensor2.png differ
diff --git a/examples/tensorflow/images/tensor3.png b/examples/tensorflow/images/tensor3.png
new file mode 100644
index 0000000..b7a1231
Binary files /dev/null and b/examples/tensorflow/images/tensor3.png differ
diff --git a/examples/tensorflow/images/tensor4.png b/examples/tensorflow/images/tensor4.png
new file mode 100644
index 0000000..c6ebb9c
Binary files /dev/null and b/examples/tensorflow/images/tensor4.png differ
diff --git a/examples/tensorflow/images/tensor5.png b/examples/tensorflow/images/tensor5.png
new file mode 100644
index 0000000..c3cd244
Binary files /dev/null and b/examples/tensorflow/images/tensor5.png differ
diff --git a/examples/tensorflow/images/tensor6.png b/examples/tensorflow/images/tensor6.png
new file mode 100644
index 0000000..1d36e00
Binary files /dev/null and b/examples/tensorflow/images/tensor6.png differ
diff --git a/helm/deploy.sh b/helm/deploy.sh
index 9451500..96702f6 100755
--- a/helm/deploy.sh
+++ b/helm/deploy.sh
@@ -18,7 +18,7 @@ kubectl get namespace ${NAMESPACE} 2> /dev/null || kubectl create namespace ${N
printf "\n\nHelm all the things!...\n"
# add cetic helm repo
-helm repo add cetic https://cetic.github.io/helm-charts/
+helm repo add cetic https://cetic.github.io/helm-charts/ --force-update
helm repo update
# install/upgrade FADI
diff --git a/helm/tiller/rbac-config.yaml b/helm/tiller/rbac-config.yaml
deleted file mode 100644
index 5b21b67..0000000
--- a/helm/tiller/rbac-config.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-# To update, See https://github.com/helm/helm/blob/master/docs/rbac.md
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: tiller
- namespace: tiller
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
- name: tiller
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: cluster-admin
-subjects:
- - kind: ServiceAccount
- name: tiller
- namespace: tiller
diff --git a/tests/.prettierrc.json b/tests/.prettierrc.json
new file mode 100644
index 0000000..8150522
--- /dev/null
+++ b/tests/.prettierrc.json
@@ -0,0 +1,9 @@
+{
+ "semi": false,
+ "singleQuote": true,
+ "useTabs": true,
+ "tabWidth": 2,
+ "bracketSpacing": true,
+ "arrowParens": "avoid",
+ "trailingComma": "es5"
+}
\ No newline at end of file
diff --git a/tests/README.md b/tests/README.md
new file mode 100644
index 0000000..191d8a2
--- /dev/null
+++ b/tests/README.md
@@ -0,0 +1,106 @@
+# Testing the FADI framework
+
+
+
+[![implemented with puppeteer](https://img.shields.io/badge/implemented%20with-puppeteer-%2300D8A2)](https://pptr.dev) [![tested with jest](https://img.shields.io/badge/tested_with-jest-99424f.svg)](https://github.com/facebook/jest)
+
+* [Introduction](#introduction)
+* [Quick start](#quick-start)
+* [Examples](#examples)
+* [Documentation](#documentation)
+* [Continuous integration](#continuous-integration)
+* [References](#references)
+
+## Introduction
+
+The FADI framework is tested using Puppeteer and Jest.
+
+[Puppeteer](https://pptr.dev) is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.
+
+[Jest](https://jestjs.io) is a JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!
+
+## Quick start
+
+To test the FADI framework, follow these instructions:
+
+1. Install the FADI framework, referring to [the INSTALL section](../INSTALL.md).
+2. Create a Docker container using [the Puppeteer-Jest Docker image](https://hub.docker.com/repository/docker/fzalila/docker-puppeteer-jest) by running the following command:
+
+```bash
+docker container run --name testing-fadi fzalila/docker-puppeteer-jest:latest
+```
+
+3. Inside the created container, clone the FADI repository:
+
+```bash
+git clone https://github.com/cetic/fadi.git
+```
+
+4. Configure the URLs and paths of the different FADI platform services in [`lib/config.js`](./lib/config.js).
+
+5. Go to the `tests` folder and launch the tests:
+
+```bash
+cd fadi/tests
+npm run test
+```
+
+If tests pass, you should obtain the following results:
+
+
+
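+While debugging, you can also run a single suite by invoking Jest directly (assuming the `jest` binary installed with the test dependencies):
+
+```bash
+cd fadi/tests
+# run only the Adminer suite, serially
+npx jest __tests__/1-adminer.test.js --runInBand
+```
+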
+## Examples
+
+The following example checks the creation of an `example_basic` table in the `postgres` database.
+
+```js
+it('should create a table', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+
+ // Click on SQL query button
+ await click(page, '.ltr > #menu > .links > a:nth-child(1)')
+
+ // type the query
+ await typeText(page, 'CREATE TABLE example_basic (measure_ts TIMESTAMP NOT NULL,temperature FLOAT (50));', '.ltr > #content > #form > p > .jush')
+
+ // Execute the table creation query
+ await click(page, '.ltr > #content > #form > p > input:nth-child(1)')
+
+ // Check the creation of the table
+ await shouldExist(page, '#content > p.message')
+})
+```
+
+More examples are available in the [test-scripts folder](doc/test-scripts/).
+
+## Documentation
+
+Test cases for the FADI framework are specified using Cockburn's [[1](#references)] template; they are available [here](doc/Cockburns-specification.md).
+
+Test scripts specifications are available [here](doc/Test-scripts-specifications.md).
+
+Two templates are available in order to define a new [test case](doc/cockburns/TC-template.md) and a new [test script](doc/test-scripts/TS-template.md).
+
+## Continuous integration
+
+To automate testing inside a continuous integration process, you can for example add a `test` stage to a Gitlab-CI pipeline by editing the [`.gitlab-ci.yml`](../.gitlab-ci.sample.yml) configuration:
+
+```yaml
+stages:
+- deployWithHelm
+- test
+
+deployWithHelm:
+[...]
+
+test:
+  stage: test
+  image: ceticasbl/puppeteer-jest
+  script:
+    - cd tests/
+    - npm run test
+```
+
+## References
+
+[1] Alistair Cockburn. 2000. Writing Effective Use Cases (1st. ed.). Addison-Wesley Longman Publishing Co., Inc., USA.
\ No newline at end of file
diff --git a/tests/TestSequencer.js b/tests/TestSequencer.js
new file mode 100644
index 0000000..8bcdd69
--- /dev/null
+++ b/tests/TestSequencer.js
@@ -0,0 +1,9 @@
+const TestSequencer = require('@jest/test-sequencer').default;
+
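+// Sorts the test files alphabetically so that the numbered suites
+// (1-adminer, 2-nifi, 3-grafana, ...) run in a deterministic order;
+// meant to be referenced through Jest's `testSequencer` configuration option.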
+class CustomSequencer extends TestSequencer {
+  sort(tests) {
+    const copyTests = Array.from(tests);
+    return copyTests.sort((testA, testB) => (testA.path > testB.path ? 1 : -1));
+  }
+}
+
+module.exports = CustomSequencer;
diff --git a/tests/__tests__/1-adminer.test.js b/tests/__tests__/1-adminer.test.js
new file mode 100644
index 0000000..9d0deed
--- /dev/null
+++ b/tests/__tests__/1-adminer.test.js
@@ -0,0 +1,96 @@
+const puppeteer = require('puppeteer')
+const jestpkg = require('jest')
+const config = require('../lib/config')
+const click = require('../lib/helpers').click
+const typeText = require('../lib/helpers').typeText
+const loadUrl = require('../lib/helpers').loadUrl
+const waitForText = require('../lib/helpers').waitForText
+const pressKey = require('../lib/helpers').pressKey
+const shouldExist = require('../lib/helpers').shouldExist
+const shouldNotExist = require('../lib/helpers').shouldNotExist
+const dragAndDrop = require('../lib/helpers').dragAndDrop
+const Sequencer = require('@jest/test-sequencer').default
+
+const url = config.AdminerUrl
+//const utils = require('../lib/utils')
+
+describe('Test the authentication to the Adminer service', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+ it('should complete the authentication process', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+
+ // Click on the DBMS list
+ await click(page, '.layout > tbody > tr > td > select')
+
+ // Select the postgresql DBMS
+ await page.select('.layout > tbody > tr > td > select', 'pgsql')
+
+ // Empty the server value content (usually 'db')
+ await page.evaluate(() => document.querySelector(".layout > tbody > tr:nth-child(2) > td > input").value = "")
+
+ // Insert the server name fadi-postgresql
+ await typeText(page, 'fadi-postgresql', ".layout > tbody > tr:nth-child(2) > td > input")
+
+ // Insert the user
+ await typeText(page, 'admin', '.layout #username')
+
+ // Insert password
+ await typeText(page, 'password1', '.layout > tbody > tr:nth-child(4) > td > input')
+
+ // insert the db name
+ await typeText(page, 'postgres', '.layout > tbody > tr:nth-child(5) > td > input')
+
+ // Click on the login button
+ await click(page, '.ltr > #content > form > p > input')
+
+ await page.waitFor(6000)
+
+ // Check that the login succeeded (the page lists the 'public' schema)
+ const selector = await page.$('#content > h2')
+ const text = await page.evaluate(selector => selector.textContent, selector);
+ const result = text.includes('public')
+ //console.log(text.includes('public'))
+ expect(result).toBe(true)
+ })
+
+ it('should create a table', async () => {
+ // Go to the indicated page
+ //await page.goto(url)
+
+ // Click on SQL query button
+ await click(page, '.ltr > #menu > .links > a:nth-child(1)')
+
+ // type the query
+ await typeText(page, 'CREATE TABLE example_basic (measure_ts TIMESTAMP NOT NULL,temperature FLOAT (50));', '.ltr > #content > #form > p > .jush')
+
+ // Execute the table creation query
+ await click(page, '.ltr > #content > #form > p > input:nth-child(1)')
+
+ // Check the creation of the table
+ await shouldExist(page, '#content > p.message')
+ })
+})
\ No newline at end of file
diff --git a/tests/__tests__/2-nifi.test.js b/tests/__tests__/2-nifi.test.js
new file mode 100644
index 0000000..dd399cb
--- /dev/null
+++ b/tests/__tests__/2-nifi.test.js
@@ -0,0 +1,212 @@
+const puppeteer = require('puppeteer')
+const jestpkg = require('jest')
+const config = require('../lib/config')
+const click = require('../lib/helpers').click
+const typeText = require('../lib/helpers').typeText
+const loadUrl = require('../lib/helpers').loadUrl
+const waitForText = require('../lib/helpers').waitForText
+const pressKey = require('../lib/helpers').pressKey
+const shouldExist = require('../lib/helpers').shouldExist
+const shouldNotExist = require('../lib/helpers').shouldNotExist
+const dragAndDrop = require('../lib/helpers').dragAndDrop
+const Sequencer = require('@jest/test-sequencer').default
+
+const url = config.NifiUrl
+const template_path = config.NifiTemplatePath
+
+//const utils = require('../lib/utils')
+
+describe('Test the upload template feature of Apache Nifi service', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+ it('should show the Nifi dashboard', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+ // Check that the nifi logo appears
+ await shouldExist(page, '#nifi-logo')
+ })
+
+ it('click the template uploader', async () => {
+ // check that the Operate frame appears
+ await shouldExist(page, '#operation-control')
+ // click on the 'Upload Template' button
+ await click(page, '#operate-template-upload')
+ })
+
+ it('select the template to upload', async () => {
+ // check that the upload template dialog appears
+ await shouldExist(page, '#upload-template-dialog')
+
+ // select the appropriate template
+ const [fileChooser] = await Promise.all([
+ page.waitForFileChooser(),
+ page.click('#select-template-button'), // some button that triggers file selection
+ ]);
+ await fileChooser.accept([template_path]);
+ })
+
+ it('upload the chosen template', async () => {
+ // click on the 'upload' button
+ await page.click('#canvas-body > #upload-template-dialog > .dialog-buttons > .button:nth-child(1)')
+ })
+ //TODO verify the button "unable to upload" is enabled - see https://github.com/cetic/fadi/issues/125
+
+ // Click on the upload button
+ it('confirm the upload of the template', async () => {
+ // check if the success upload template dialog appears
+ await shouldExist(page, "#nf-ok-dialog")
+
+ // check that the success message appears
+ const shownMessage = await page.evaluate(() => {
+ const string = 'Success';
+ const selector = '#canvas-body > #nf-ok-dialog > .dialog-header > .dialog-header-text';
+ return document.querySelector(selector).innerText.includes(string);
+ });
+ expect(shownMessage).toBe(true);
+ })
+})
+
+describe('Test instantiating template of Apache Nifi service', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+ it('should show the Nifi dashboard', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+ // Check that the nifi logo appears
+ await shouldExist(page, '#nifi-logo')
+ })
+
+ it('drag and drop a template', async () => {
+ // Upload the template
+
+ // TODO choose the appropriate one - see https://github.com/cetic/fadi/issues/126
+ await shouldExist(page, '#template-component')
+ await dragAndDrop(page, '#template-component')
+ await shouldExist(page, '#instantiate-template-dialog')
+ await click(page, '#instantiate-template-dialog > .dialog-buttons > .button:nth-child(1) > span')
+ await page.waitFor(5000)
+
+ // Unselect the template to active the configure button
+ // await page.mouse.down();
+ // await page.mouse.up();
+ await page.mouse.click(700, 200, {
+ button : 'right'
+ });
+ })
+
+ it('configure a template', async () => {
+ // click on the configure button
+ await click(page, '#operate-configure')
+
+ // click on Controller services
+ await shouldExist(page,'.settings-container > div > #process-group-configuration-tabs > .tab-pane > .tab:nth-child(2)')
+ await click(page, '.settings-container > div > #process-group-configuration-tabs > .tab-pane > .tab:nth-child(2)')
+
+
+ // click on 'configure' button of DBCP controller
+ await shouldExist(page,'.settings-container > #process-group-configuration-tabs-content > #process-group-controller-services-tab-content > #process-group-controller-services-table > .slick-viewport > .grid-canvas > .ui-widget-content:nth-child(2) > .l6 > .pointer:nth-child(1)')
+ await click(page, '.settings-container > #process-group-configuration-tabs-content > #process-group-controller-services-tab-content > #process-group-controller-services-table > .slick-viewport > .grid-canvas > .ui-widget-content:nth-child(2) > .l6 > .pointer:nth-child(1)')
+
+ // Click on Properties tab
+ await shouldExist(page,'#controller-service-configuration > .controller-service-configuration-tab-container > #controller-service-configuration-tabs > .tab-pane > .tab:nth-child(2)')
+ await click(page, '#controller-service-configuration > .controller-service-configuration-tab-container > #controller-service-configuration-tabs > .tab-pane > .tab:nth-child(2)')
+
+ // Click on Password tab
+ await shouldExist(page,'.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(6) > .slick-cell > .unset')
+ await click(page, '.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(6) > .slick-cell > .unset')
+
+ // Define the password
+ await typeText(page, 'password1', '.slickgrid-nf-editor > .nf-editor > .CodeMirror > div > textarea')
+
+ // Click Ok on the password tab
+ await shouldExist(page,'#canvas-body > .slickgrid-nf-editor > div > .button')
+ await click(page, '#canvas-body > .slickgrid-nf-editor > div > .button')
+
+ // Click 'Apply' on the Configure Controller Service
+ await shouldExist(page,'#canvas-body > #controller-service-configuration > .dialog-buttons > .button:nth-child(1) > span')
+ await click(page, '#canvas-body > #controller-service-configuration > .dialog-buttons > .button:nth-child(1) > span')
+
+ // Click on Enable icon of the first service controller
+ await shouldExist(page,'.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(1) > .l6 > .pointer:nth-child(2)')
+ await click(page, '.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(1) > .l6 > .pointer:nth-child(2)')
+
+ // Click on Enable button of the first service controller
+ await shouldExist(page,'#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button:nth-child(1)')
+ await click(page, '#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button:nth-child(1)')
+
+ // await page.waitForFunction(
+ // 'document.querySelector("#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button > span").innerText.includes("Close")'
+ // );
+
+ // Click on Close button of the first service controller
+ await shouldExist(page,'#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button > span')
+ await page.waitFor(5000)
+ await click(page, '#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button > span')
+
+ // Click on Enable icon of the second service controller
+ await shouldExist(page,'.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(1) > .l6 > .pointer:nth-child(2)')
+ await click(page, '.slick-viewport > .grid-canvas > .ui-widget-content:nth-child(1) > .l6 > .pointer:nth-child(2)')
+
+ // Click on Enable button of the second service controller
+ await shouldExist(page,'#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button:nth-child(1)')
+ await click(page, '#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button:nth-child(1)')
+
+ // Click on Close button of the second service controller
+ await shouldExist(page,'#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button > span')
+ await page.waitFor(5000)
+ await click(page, '#canvas-body > #enable-controller-service-dialog > .dialog-buttons > .button > span')
+
+ // Close configuration window
+ await shouldExist(page,'#shell-dialog > #shell-container > #shell-close-container > #shell-close-button > .fa')
+ await click(page, '#shell-dialog > #shell-container > #shell-close-container > #shell-close-button > .fa')
+ })
+
+ it('start a template', async () => {
+ // click on the start button
+ await click(page, '#operate-start')
+ })
+})
\ No newline at end of file
diff --git a/tests/__tests__/3-grafana.test.js b/tests/__tests__/3-grafana.test.js
new file mode 100644
index 0000000..ffcf883
--- /dev/null
+++ b/tests/__tests__/3-grafana.test.js
@@ -0,0 +1,123 @@
+const puppeteer = require('puppeteer')
+const jestpkg = require('jest')
+const config = require('../lib/config')
+const click = require('../lib/helpers').click
+const typeText = require('../lib/helpers').typeText
+const loadUrl = require('../lib/helpers').loadUrl
+const waitForText = require('../lib/helpers').waitForText
+const pressKey = require('../lib/helpers').pressKey
+const shouldExist = require('../lib/helpers').shouldExist
+const shouldNotExist = require('../lib/helpers').shouldNotExist
+const dragAndDrop = require('../lib/helpers').dragAndDrop
+const Sequencer = require('@jest/test-sequencer').default
+
+const url = config.GrafanaUrl
+const dashboard_path = config.GrafanaDashboardPath
+//const utils = require('../lib/utils')
+
+describe('Test the authentication to the Grafana service', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+ it('should authenticate to the Grafana index page', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+ await shouldExist(page, '.login-content')
+
+ // Login to the Grafana service
+ await typeText(page, 'admin', '[name=user]')
+ await typeText(page, 'password1', '[name=password]')
+ await click(page, '[type=submit]')
+ await shouldExist(page, '.sidemenu__logo')
+ })
+
+ it('should create a data source configuration', async () => {
+ // Click on Configuration
+ await click(page, '.sidemenu__top > .sidemenu-item:nth-child(5)')
+
+ // Click on Data sources
+ // await page.waitForSelector('.sidemenu__top > .sidemenu-item:nth-child(5) > .dropdown-menu > li:nth-child(2) > a')
+ // await page.click('.sidemenu__top > .sidemenu-item:nth-child(5) > .dropdown-menu > li:nth-child(2) > a')
+
+ //Click on Add data source
+ await click(page, '.css-eb113e-button')
+ //await page.waitFor(5000)
+
+ // Click on Postgresql option
+ await click(page, "[aria-label='PostgreSQL datasource plugin']")
+ //await page.waitFor(5000)
+
+ // Insert the Host
+ await typeText(page, 'fadi-postgresql:5432', "[placeholder='localhost:5432']")
+
+ // Insert the database name
+ await typeText(page, 'postgres', "[placeholder='database name']")
+
+ //Insert the user name
+ await typeText(page, 'admin', "[placeholder='user']")
+
+ //Insert the password
+ await typeText(page, 'password1', "[placeholder='Password']")
+
+ // Disable SSL mode
+ await click(page, 'ds-config-postgres > .gf-form-group > .gf-form > .max-width-15 > .gf-form-input')
+ await page.select('ds-config-postgres > .gf-form-group > .gf-form > .max-width-15 > .gf-form-input', 'string:disable')
+
+ // Choose the version
+ await click(page, 'ds-config-postgres > .gf-form-group > .gf-form > .gf-form-select-wrapper > .gf-size-auto')
+ await page.select('ds-config-postgres > .gf-form-group > .gf-form > .gf-form-select-wrapper > .gf-size-auto', 'number:1000')
+
+ // Click on Save and test
+ await click(page, '.page-container > div > form > .gf-form-button-row > .btn-primary')
+
+ })
+
+ it('should import a Grafana dashboard', async () => {
+
+ // Click on dashboard menu
+ await click(page, '.sidemenu__top > .sidemenu-item:nth-child(2)')
+
+ //click on Manage dashboard
+ await click(page, '.sidemenu__top > .sidemenu-item:nth-child(2) > .dropdown-menu > li:nth-child(4) > a')
+
+ //Click on Import Dashboard
+ await click(page, '.page-container > manage-dashboards > .dashboard-list > .page-action-bar > .btn:nth-child(5)')
+
+ // select the appropriate template
+ const [fileChooser] = await Promise.all([
+ page.waitForFileChooser(),
+ page.click('.page-container > div > .page-action-bar > dash-upload > .btn'), // some button that triggers file selection
+ ]);
+ await fileChooser.accept([dashboard_path]);
+
+ // Confirm the Import
+ await click(page, 'div > .page-container > div > .gf-form-button-row > .btn-primary')
+ await page.waitFor(10000)
+
+ await page.screenshot({
+ path: './files/Grafana_screenshot.jpg',
+ fullPage: true
+ });
+ })
+})
\ No newline at end of file
diff --git a/tests/__tests__/5-adminer-delete.test.js b/tests/__tests__/5-adminer-delete.test.js
new file mode 100644
index 0000000..1770aa7
--- /dev/null
+++ b/tests/__tests__/5-adminer-delete.test.js
@@ -0,0 +1,94 @@
+const puppeteer = require('puppeteer')
+const jestpkg = require('jest')
+const config = require('../lib/config')
+const click = require('../lib/helpers').click
+const typeText = require('../lib/helpers').typeText
+const loadUrl = require('../lib/helpers').loadUrl
+const waitForText = require('../lib/helpers').waitForText
+const pressKey = require('../lib/helpers').pressKey
+const shouldExist = require('../lib/helpers').shouldExist
+const shouldNotExist = require('../lib/helpers').shouldNotExist
+const dragAndDrop = require('../lib/helpers').dragAndDrop
+const Sequencer = require('@jest/test-sequencer').default
+
+const url = config.AdminerUrl
+//const utils = require('../lib/utils')
+
+describe('Test the deletion of the table', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+
+ it('should complete the authentication process', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+
+ // Click on the DBMS list
+ await click(page, '.layout > tbody > tr > td > select')
+
+ // Select the postgresql DBMS
+ await page.select('.layout > tbody > tr > td > select', 'pgsql')
+
+ // Empty the server value content (usually 'db')
+ await page.evaluate(() => document.querySelector(".layout > tbody > tr:nth-child(2) > td > input").value = "")
+
+ // Insert the server name fadi-postgresql
+ await typeText(page, 'fadi-postgresql', ".layout > tbody > tr:nth-child(2) > td > input")
+
+ // Insert the user
+ await typeText(page, 'admin', '.layout #username')
+
+ // Insert password
+ await typeText(page, 'password1', '.layout > tbody > tr:nth-child(4) > td > input')
+
+ // insert the db name
+ await typeText(page, 'postgres', '.layout > tbody > tr:nth-child(5) > td > input')
+
+ // Click on the login button
+ await click(page, '.ltr > #content > form > p > input')
+
+ await page.waitFor(6000)
+
+ // Check that the login succeeded (the page lists the 'public' schema)
+ const selector = await page.$('#content > h2')
+ const text = await page.evaluate(selector => selector.textContent, selector);
+ const result = text.includes('public')
+ //console.log(text.includes('public'))
+ expect(result).toBe(true)
+ })
+
+ it('should delete the table', async () => {
+ // Click on SQL query button
+ await click(page, '.ltr > #menu > .links > a:nth-child(1)')
+
+ // type the delete query
+ await typeText(page, 'DROP TABLE example_basic;', '.ltr > #content > #form > p > .jush')
+
+ // Execute the table deletion query
+ await click(page, '.ltr > #content > #form > p > input:nth-child(1)')
+
+ // Check the deletion of the table
+ await shouldExist(page, '#content > p.message')
+ })
+})
\ No newline at end of file
diff --git a/tests/__tests__/6-grafana-delete.test.js b/tests/__tests__/6-grafana-delete.test.js
new file mode 100644
index 0000000..1e90a9e
--- /dev/null
+++ b/tests/__tests__/6-grafana-delete.test.js
@@ -0,0 +1,88 @@
+const puppeteer = require('puppeteer')
+const jestpkg = require('jest')
+const config = require('../lib/config')
+const click = require('../lib/helpers').click
+const typeText = require('../lib/helpers').typeText
+const loadUrl = require('../lib/helpers').loadUrl
+const waitForText = require('../lib/helpers').waitForText
+const pressKey = require('../lib/helpers').pressKey
+const shouldExist = require('../lib/helpers').shouldExist
+const shouldNotExist = require('../lib/helpers').shouldNotExist
+const dragAndDrop = require('../lib/helpers').dragAndDrop
+const Sequencer = require('@jest/test-sequencer').default
+
+const url = config.GrafanaUrl
+//const utils = require('../lib/utils')
+
+describe('Test the authentication to the Grafana service', () => {
+ /** @type {puppeteer.Browser} */
+ let browser
+
+ /** @type {puppeteer.Page} */
+ let page
+
+ beforeAll(async function () {
+ browser = await puppeteer.launch({
+ headless: config.isHeadless,
+ slowMo: config.slowMo,
+ devtools: config.isDevtools,
+ timeout: config.launchTimeout,
+ args: ['--no-sandbox']
+ })
+ page = await browser.newPage()
+ await page.setDefaultNavigationTimeout(config.waitingTimeout) //10 seconds is the industry standard
+ await page.setViewport({
+ width: config.viewportWidth,
+ height: config.viewportHeight
+ })
+ })
+ afterAll(async function () {
+ await browser.close()
+ })
+ it('should authenticate to the Grafana index page', async () => {
+ // Go to the indicated page
+ await page.goto(url)
+ await shouldExist(page, '.login-content')
+
+ // Login to the Grafana service
+ await typeText(page, 'admin', '[name=user]')
+ await typeText(page, 'password1', '[name=password]')
+ await click(page, '[type=submit]')
+ await shouldExist(page, '.sidemenu__logo')
+ })
+
+ it('should delete a configuration of data source', async () => {
+ // Click on the configuration menu
+ await click(page, '.sidemenu__top > .sidemenu-item > .sidemenu-link > .icon-circle > .gicon-cog')
+
+ // Select the postgresql data source
+ await click(page, '.card-section > .card-list > .card-item-wrapper > .card-item > .card-item-body')
+
+ // Click on the delete button
+ await click(page, '.page-container > div > form > .gf-form-button-row > .btn-danger')
+
+ // Confirm the delete
+ await click(page, '.modal-body > .modal-content > .confirm-modal-buttons > .btn-danger')
+
+ })
+
+ it('should delete a Grafana dashboard', async () => {
+
+ // Click on Dashboard menu
+ await click(page, '.sidemenu > .sidemenu__top > .sidemenu-item:nth-child(2) > .sidemenu-link > .icon-circle')
+
+ // Click on Manage section
+ await click(page, '.sidemenu__top > .sidemenu-item:nth-child(2) > .dropdown-menu > li:nth-child(4) > a')
+
+ // Check the dashboard to delete
+ await shouldExist(page, '.search-item > .center-vh > gf-form-checkbox > .gf-form-switch-container > .gf-form-checkbox > .gf-form-switch__checkbox')
+ await click(page, '.search-item > .center-vh > gf-form-checkbox > .gf-form-switch-container > .gf-form-checkbox > .gf-form-switch__checkbox')
+
+ // Click on the delete button
+ await click(page, '.search-results > .search-results-filter-row > .search-results-filter-row__filters > .gf-form-button-row > .btn-danger')
+
+ // Confirm the delete
+ await click(page, '.modal-body > .modal-content > .confirm-modal-buttons > .btn-danger')
+
+ })
+})
\ No newline at end of file
diff --git a/tests/doc/Cockburns-specification.md b/tests/doc/Cockburns-specification.md
new file mode 100644
index 0000000..96adfb2
--- /dev/null
+++ b/tests/doc/Cockburns-specification.md
@@ -0,0 +1,288 @@
+Test cases specifications
+================
+
+* [General definitions](#general-definitions)
+* [Abbreviations list](#abbreviations-list)
+* [Actors](#actors)
+* [Test cases list](#test-cases-list)
+
+## General definitions
+
+In this section, the main concepts and technologies are introduced in order to ease the understanding of the different test use cases.
+
+
+
+* **FADI platform** is a cloud-native platform for Big Data based on mature open source tools. The FADI project is dedicated to making the deployment of Big Data tools simple, portable and scalable. The goal is to provide a straightforward way to deploy open-source systems for Big Data to various infrastructures (private and public clouds).
+* **FADI dashboard** is the Kubernetes dashboard, which gives an overview of the status of the Kubernetes pods; it is opened with the `minikube dashboard` command.
+* **Adminer** is an open source graphical administration tool for relational databases. It is used in FADI to ease the management of the PostgreSQL databases.
+* **Apache Nifi** is an open source tool designed to automate the flow of data between software systems. It is used in the FADI platform to collect, extract and transform data, and to store it in the appropriate data store.
+* **PostgreSQL** is an open source relational database management system; it is used to store the data in the FADI platform.
+* **Grafana** is an open source tool for visualizing and formatting metrics data coming from different types of databases. It can play the role of the FADI dashboard.
+* **Apache Superset** is an open source tool to visualize big data; it can also play the role of the FADI dashboard.
+* **Spark** is an open source analytics engine for large-scale data processing. It is used in the FADI platform to analyse data.
+* **Jupyter** is a notebook environment that provides an easy interface to the Spark processing engine running on the cluster. It is used in the FADI platform to work with Spark and explore data.
+
+
+## Abbreviations list
+
+| Abbreviation | Meaning |
+| ------------ | ------- |
+## Actors
+
+### The client side
+
+* **Operator (co)** (milling machine operator / maintenance manager): operator on a machine. He must understand the information given by the analysis dashboards to do his work.
+* **Factory Leader (cfl)**: manages the factory (tenant) and the Operators.
+* **Business Analyst (cba)**: builds BI analysis dashboards based on the data warehouse.
+* **Business Leader (cbl)**: manages the business of the factory (tenant) and the Business Analysts.
+* **Data Scientist (cds)**: defines algorithms to analyse data and identify trends. He defines the data warehouse structure and content.
+* **Data Engineer (cde)**: configures the resources the Data Scientist needs and the ingestion process from the factories to the data lake.
+* **Tenant Admin (cta)**: the user in charge of the tenant.
+
+### The backend side
+
+* **Platform Admin (hpa)**: manages tenants.
+* **System Admin (hsa)**: installs and maintains the platform.
+* **Stakeholders (hs)**: board of directors.
+* **ICT Manager (hm)**: manages the project, the Platform Admin and the System Admin.
diff --git a/tests/doc/README.md b/tests/doc/README.md
new file mode 100644
index 0000000..7362f61
--- /dev/null
+++ b/tests/doc/README.md
@@ -0,0 +1,5 @@
+# Test specification
+
+The test cases specification, using the Cockburn template, is available [here](./Cockburns-specification.md).
+
+The test scripts specification is available [here](./Test-scripts-specifications.md).
\ No newline at end of file
diff --git a/tests/doc/Test-scripts-specifications.md b/tests/doc/Test-scripts-specifications.md
new file mode 100644
index 0000000..87e31a0
--- /dev/null
+++ b/tests/doc/Test-scripts-specifications.md
@@ -0,0 +1,122 @@
+Test scripts specifications
+============
+
+* [Actors abbreviations list](#actors-abbreviations-list)
+* [Description of the content of the table](#description-of-the-content-of-the-table)
+* [Test scripts list](#test-scripts-list)
+
+## Actors abbreviations list
+
+| Abbreviation | Meaning |
+| ------------ | ------- |
+| JEST | The test runner |
+| PUP | The API controlling operations on Google Chrome |
+
+## Description of the content of the table
+
+| Column | Meaning |
+| ------ | ------- |
+| Test script ID | The identifier of the test script. It contains two links, *spec* and *impl*, to consult the specification and the implementation of the test script respectively. |
diff --git a/tests/doc/cockburns/TC-1.md b/tests/doc/cockburns/TC-1.md
new file mode 100644
index 0000000..c9cbfa5
--- /dev/null
+++ b/tests/doc/cockburns/TC-1.md
@@ -0,0 +1,110 @@
+## TC-1: Authentication to a given tool via LDAP
+
+| Field | Description |
+| ----- | ----------- |
+| Use case ID | TC-1 |
+| Use case name | Authentication to a given tool via LDAP |
+| Actors | All actors |
+| Trigger | The actor wants to use one service of the FADI platform for the first time. |
+| Short description | This use case denotes the process of authentication to the FADI platform. |
+| Pre-conditions | 1. The actor's credentials exist in the LDAP repository. 2. The FADI platform is already installed. 3. The actor knows the URL address of the desired service (e.g. Grafana, pgAdmin, etc.). |
+| Post-conditions | The actor is authenticated. |
+| Steps | 1. The actor accesses a given service via its URL address. 2. The actor enters his/her credentials. 3. The actor is authenticated. |
+| Exceptions | 1. The actor's credentials do not exist in the LDAP repository. 2. The actor makes an error when entering his/her credentials. |
+| Frequency | Every time the user is not logged in to the FADI platform. |
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-10.md b/tests/doc/cockburns/TC-10.md
new file mode 100644
index 0000000..0006fba
--- /dev/null
+++ b/tests/doc/cockburns/TC-10.md
@@ -0,0 +1,146 @@
+## TC-10: Configuring a data source in the Grafana dashboard
+
+| Field | Description |
+| ----- | ----------- |
+| Use case ID | TC-10 |
+| Use case name | Configuring a data source in the Grafana dashboard |
+| Actors | Business Analyst, Business Leader |
+| Trigger | Visualizing results coming from a new database; connecting Grafana to a database |
+| Short description | The FADI platform enables the visualization of data and results in various types of dashboards (e.g. curves, heatmaps, etc.), either by directly querying the databases or by collecting stream data. In this use case, the Grafana tool is used and the way to connect this tool to a database is defined. |
+| Pre-conditions | 1. The actor knows either the URL address or the command to access the Grafana service. 2. The actor knows the credentials to connect to the Grafana service. 3. The actor knows the required credentials to connect to the database (i.e. host, database, user, password, SSL mode, version). |
+| Post-conditions | The Grafana tool is connected to the database. |
+| Steps | 1. Access the Grafana service. 2. Authenticate to the Grafana service. 3. Add a data source and choose the type "PostgreSQL". 4. Configure the following elements: host `fadi-postgresql:5432`, database `postgres`, user `admin`, password `password1`, SSL mode `disable`, version `10`. 5. Check that the data source is correctly created. |
+| Exceptions | An error in the credentials. |
+| Frequency | Every time a new database should be connected to the Grafana tool. |
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-11.md b/tests/doc/cockburns/TC-11.md
new file mode 100644
index 0000000..765a9b5
--- /dev/null
+++ b/tests/doc/cockburns/TC-11.md
@@ -0,0 +1,151 @@
+## TC-11: Defining dashboards based on the analyzed data using Grafana
+
+| Field | Description |
+| ----- | ----------- |
+| Use case ID | TC-11 |
+| Use case name | Defining dashboards based on the analyzed data using Grafana |
+| Actors | Business Analyst, Business Leader |
+| Trigger | Visualizing results in dashboards; getting a global overview of the results |
+| Short description | The FADI platform enables the visualization of data and results in various types of dashboards (e.g. curves, heatmaps, etc.), either by directly querying the databases or by collecting stream data. In this use case, the Grafana tool is used. |
+| Pre-conditions | 1. The actor knows either the URL address or the command to access the Grafana service. 2. The actor knows the credentials to connect to the Grafana service. 3. The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.). |
+| Post-conditions | The actor visualizes dashboards. |
+| Steps | 1. Access the Grafana service. 2. Authenticate to the Grafana service. 3. Choose the source of data. 4. Edit a new dashboard. 5. Select the type of dashboard from the Visualization dropdown list (e.g. graph, heatmap, etc.). 6. Edit the query. 7. Configure the time frame (if needed). 8. Press the Query inspector button to execute the query and visualize the dashboard. |
+| Exceptions | An error in the query. |
+| Frequency | Every time the actor wants to visualize his/her results. |
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-12.md b/tests/doc/cockburns/TC-12.md
new file mode 100644
index 0000000..c563392
--- /dev/null
+++ b/tests/doc/cockburns/TC-12.md
@@ -0,0 +1,183 @@
+## TC-12: Defining alerts using Grafana
+
+| Field | Description |
+| ----- | ----------- |
+| Use case ID | TC-12 |
+| Use case name | Defining alerts using Grafana |
+| Actors | Business Analyst, Business Leader |
+| Trigger | Controlling visualized data; detecting anomalies |
+| Short description | The FADI platform enables the visualization of data and results in various types of dashboards (e.g. curves, heatmaps, etc.). In addition, it allows users to define alerts and rules in order to automatically detect misbehaviours, anomalies and errors when analyzing data. In this use case, the Grafana tool is used. |
+| Pre-conditions | 1. The actor knows either the URL address or the command to access the Grafana service. 2. The actor knows the credentials to connect to the Grafana service. 3. The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.). |
+| Post-conditions | The actor visualizes the defined alert. |
+| Steps | 1. Access the Grafana service. 2. Authenticate to the Grafana service. 3. Add a data source and choose the type "PostgreSQL". 4. Configure the following elements: host `fadi-postgresql:5432`, database `postgres`, user `admin`, password `password1`, SSL mode `disable`, version `10`. 5. Edit a new dashboard. 6. Select the type of dashboard from the Visualization dropdown list (e.g. graph, heatmap, etc.). 7. Configure the query. 8. Configure the time frame (if needed). 9. Press the Query inspector button to execute the query and visualize the dashboard. 10. Go to the "Alert" tab. 11. Create a new alert by specifying the alert threshold. 12. Visualize the alert in the dashboard. |
+| Exceptions | N/A |
+| Frequency | Every time the actor wants to define alerts. |
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-13.md b/tests/doc/cockburns/TC-13.md
new file mode 100644
index 0000000..9a4fd92
--- /dev/null
+++ b/tests/doc/cockburns/TC-13.md
@@ -0,0 +1,127 @@
+## TC-13: Configuring a database in Superset
+
+**Use case ID**: TC-13
+
+**Use case name**: Configuring a database in Superset
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Visualize results coming from a new database
+* Visualize results for the first time
+
+**Short Description**:
+The FADI platform enables users to visualize the results of the data analysis and to export these results in reports. In this context, the Superset tool is used. The first thing to do is to link Superset to a data source.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Superset service
+* The actor knows the credentials to connect to the Superset service
+* The actor knows the credentials of the database (i.e. the database name, SQLAlchemy URI)
+
+**Post-Conditions**:
+* The database is linked to Superset
+
+**Steps**:
+1. Access the Superset service
+2. Authenticate to the Superset service
+3. Create a new database by entering the following information: the database name and the SQLAlchemy URI
+4. Confirm the creation by clicking on the “Test Connection” button to check the connection to the database
+
+**Exceptions**:
+* An error in the entered information
+* The connection to the database fails
+
+**Frequency**: Every time an actor wants to visualize data coming from a database which is not yet connected to Superset
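+
+For reference, the SQLAlchemy URI for the default FADI PostgreSQL instance would look as follows (a sketch based on the credentials used in the other test cases):
+
+```bash
+# SQLAlchemy URI format: postgresql://<user>:<password>@<host>:<port>/<database>
+postgresql://admin:password1@fadi-postgresql:5432/postgres
+```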
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-14.md b/tests/doc/cockburns/TC-14.md
new file mode 100644
index 0000000..45750ef
--- /dev/null
+++ b/tests/doc/cockburns/TC-14.md
@@ -0,0 +1,143 @@
+## TC-14: Configuring a table in Superset
+
+**Use case ID**: TC-14
+
+**Use case name**: Configuring a table in Superset
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Visualize results coming from a new table
+* Visualize results for the first time
+
+**Short Description**:
+The FADI platform enables users to visualize the results of the data analysis and to export these results in reports. In this context, the Superset tool is used to configure a table in a given database.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Superset service
+* The actor knows the credentials to connect to the Superset service
+* The actor knows the schema of the database
+* The actor knows the name of the table
+
+**Post-Conditions**:
+* The table is configured
+
+**Steps**:
+1. Access the Superset service
+2. Authenticate to the Superset service
+3. Create a new table by editing the following information: the database name and the table name
+4. Save the edited values
+5. Edit the columns of the created table
+6. Save the edited information
+7. Check that the table and the columns are correctly configured
+
+**Exceptions**:
+* An error in the entered information
+
+**Frequency**: Every time an actor wants to visualize data coming from a given table
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-15.md b/tests/doc/cockburns/TC-15.md
new file mode 100644
index 0000000..f4b938c
--- /dev/null
+++ b/tests/doc/cockburns/TC-15.md
@@ -0,0 +1,161 @@
+## TC-15: Creating a chart in Superset
+
+**Use case ID**: TC-15
+
+**Use case name**: Creating a chart in Superset
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Visualize results in a chart
+
+**Short Description**:
+The FADI platform enables users to visualize the results of the data analysis and to export these results in reports. In this use case, the Superset tool is used to create a chart in order to visualize the results.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Superset service
+* The actor knows the credentials to connect to the Superset service
+* The actor knows the schema of the database
+
+**Post-Conditions**:
+* A chart is created
+
+**Steps**:
+1. Access the Superset service
+2. Authenticate to the Superset service
+3. On the top menu, click on “Chart”
+4. Add a new record
+5. Choose the datasource
+6. Choose the visualization type
+7. Click “Create new chart”
+8. Configure the chart by defining the time requirements and the query
+9. Click “Run query” to fetch the data from the database
+10. Check that the chart is correctly created
+
+**Exceptions**:
+* An error during the authentication
+* An error in the edited information
+* An error in the query
+
+**Frequency**: Every time the user wants to create a chart
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-16.md b/tests/doc/cockburns/TC-16.md
new file mode 100644
index 0000000..3f8772a
--- /dev/null
+++ b/tests/doc/cockburns/TC-16.md
@@ -0,0 +1,144 @@
+## TC-16: Saving a dashboard in Superset
+
+**Use case ID**: TC-16
+
+**Use case name**: Saving a dashboard in Superset
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Save a new dashboard
+* Save a dashboard after a modification
+
+**Short Description**:
+The FADI platform enables users to visualize the results of the data analysis and to export these results in reports. In this use case, the Superset tool is used to create and save a dashboard.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Superset service
+* The actor knows the credentials to connect to the Superset service
+* The actor knows the schema of the database
+
+**Post-Conditions**:
+* The dashboard is saved
+
+**Steps**:
+1. Access the Superset service
+2. Authenticate to the Superset service
+3. Create/modify a dashboard
+4. Click on “Save” and edit the following information:
+    * Save as: Basic example
+    * Add to new dashboard: Basic example dashboard
+5. Click on “Save & go to dashboard”
+6. Visualize the saved dashboard
+
+**Exceptions**:
+* An error during the authentication
+* An error when editing the information
+
+**Frequency**: Every time the user creates or modifies a dashboard
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-17.md b/tests/doc/cockburns/TC-17.md
new file mode 100644
index 0000000..3796ece
--- /dev/null
+++ b/tests/doc/cockburns/TC-17.md
@@ -0,0 +1,141 @@
+## TC-17: Preparing reports using Superset
+
+**Use case ID**: TC-17
+
+**Use case name**: Preparing reports using Superset
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Visualize and report the results
+
+**Short Description**:
+The FADI platform enables users to visualize the results of the data analysis and to export these results in reports. In this context, the Superset tool is used.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Superset service
+* The actor knows the credentials to connect to the Superset service
+* The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.)
+
+**Post-Conditions**:
+* A report is generated
+
+**Steps**:
+1. Access the Superset service
+2. Authenticate to the Superset service
+3. Create a new database
+4. Create a new table in the database
+5. Create and configure a new chart using the created database and table
+6. Visualize the resulting chart
+7. Export the resulting chart in a report
+
+**Exceptions**:
+* An error when creating the database
+* An error when creating the table
+
+**Frequency**: Every time an actor wants to visualize data and export the results in reports
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-18.md b/tests/doc/cockburns/TC-18.md
new file mode 100644
index 0000000..f11b5e9
--- /dev/null
+++ b/tests/doc/cockburns/TC-18.md
@@ -0,0 +1,133 @@
+## TC-18: Loading data in Jupyter
+
+**Use case ID**: TC-18
+
+**Use case name**: Loading data in Jupyter
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Using data for the first time
+* Having a new database
+
+**Short Description**:
+The FADI platform enables users to run analyses on the collected data. Simple analyses can be performed using the Jupyter tool. In this use case, the loading of the scripts is checked.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Jupyter service
+* The actor knows the credentials to connect to the Jupyter service
+* The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.)
+* The actor possesses the two code scripts to upload
+
+**Post-Conditions**:
+* The data is loaded
+
+**Steps**:
+1. Access the Jupyter service
+2. Authenticate to the Jupyter service
+3. Choose the “Minimal environment” option and click on “Spawn”
+4. Import the script(s)
+5. Check that the script is correctly loaded
+
+**Exceptions**:
+* The connection to the database fails
+* The authentication fails
+
+**Frequency**: Every time the user wants to load scripts in Jupyter
diff --git a/tests/doc/cockburns/TC-19.md b/tests/doc/cockburns/TC-19.md
new file mode 100644
index 0000000..8c37e0d
--- /dev/null
+++ b/tests/doc/cockburns/TC-19.md
@@ -0,0 +1,137 @@
+## TC-19: Analyzing data using Jupyter
+
+**Use case ID**: TC-19
+
+**Use case name**: Analyzing data using Jupyter
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* The actor wants to analyse data
+
+**Short Description**:
+The FADI platform enables users to run analyses on the collected data. Simple analyses can be performed using the Jupyter tool.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Jupyter service
+* The actor knows the credentials to connect to the Jupyter service
+* The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.)
+* The actor possesses the two code scripts to upload
+
+**Post-Conditions**:
+* The data is analyzed
+
+**Steps**:
+1. Access the Jupyter service
+2. Authenticate to the Jupyter service
+3. In the “Minimal environment” option, go to the “Files” tab
+4. Select the loaded script (i.e. module)
+5. Run the scripts to configure the connection to the database and visualize the temperature curve as a function of the date
+6. Check that the scripts are executed successfully
+
+**Exceptions**:
+* The connection to the database fails
+* The authentication fails
+
+**Frequency**: Every time the user wants to run analyses on data
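+
+When automating this check, the uploaded notebook can also be executed headlessly from a terminal inside the Jupyter environment. A sketch, assuming the notebook is named analyze_temperature.ipynb (hypothetical name):
+
+```bash
+# Execute the notebook in place; nbconvert aborts on the first cell that errors
+jupyter nbconvert --to notebook --execute --inplace analyze_temperature.ipynb
+```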
diff --git a/tests/doc/cockburns/TC-2.md b/tests/doc/cockburns/TC-2.md
new file mode 100644
index 0000000..d04c229
--- /dev/null
+++ b/tests/doc/cockburns/TC-2.md
@@ -0,0 +1,118 @@
+## TC-2: Access to the FADI dashboard
+
+**Use case ID**: TC-2
+
+**Use case name**: Access to the FADI dashboard
+
+**Actors**:
+* Platform Admin
+* System Admin
+
+**Stakeholders**:
+* ICT manager
+
+**Trigger**:
+* Check the state of a given pod
+* Check the installation of the FADI framework
+
+**Short Description**:
+The FADI dashboard gives the actor access to a web interface in order to check the status of the installed services (also called pods).
+
+**Pre-Conditions**:
+* FADI is already installed
+* The actor knows the namespace of FADI (e.g. fadi)
+
+**Post-Conditions**:
+* The actor checks the status of a service
+
+**Steps**:
+1. Access the dashboard of the FADI platform
+2. Select the appropriate dashboard
+3. Check that dashboards and information about the installed services are available
+
+**Exceptions**:
+* The actor does not find the service
+
+**Frequency**: Every time an actor wants to check the status of a given service
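+
+The same status check can be performed from the command line. A minimal sketch, assuming the default fadi namespace and a Minikube-based installation:
+
+```bash
+# List the FADI pods together with their status and the node they run on
+kubectl get pods -n fadi -o wide
+# Open the Kubernetes dashboard in a browser when running on Minikube
+minikube dashboard
+```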
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-20.md b/tests/doc/cockburns/TC-20.md
new file mode 100644
index 0000000..b482798
--- /dev/null
+++ b/tests/doc/cockburns/TC-20.md
@@ -0,0 +1,133 @@
+## TC-20: Loading data in Spark
+
+**Use case ID**: TC-20
+
+**Use case name**: Loading data in Spark
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* Using data for the first time
+* Having a new database
+
+**Short Description**:
+The FADI platform enables users to run analyses on the collected data. Complex analyses can be performed using the Spark tool. In this use case, the script loading is described.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Spark service
+* The actor knows the credentials to connect to the Spark service
+* The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.)
+* The actor possesses the scripts to upload
+
+**Post-Conditions**:
+* The data is analyzed
+
+**Steps**:
+1. Access the Spark service
+2. Authenticate to the Spark service
+3. Choose the “Spark environment” option and click the “Spawn” button
+4. Upload the code script to explore Spark
+5. Check that the script is correctly uploaded
+
+**Exceptions**:
+* The connection to the database fails
+* The authentication fails
+
+**Frequency**: Every time the user wants to load scripts in Spark
diff --git a/tests/doc/cockburns/TC-21.md b/tests/doc/cockburns/TC-21.md
new file mode 100644
index 0000000..a95bb45
--- /dev/null
+++ b/tests/doc/cockburns/TC-21.md
@@ -0,0 +1,131 @@
+## TC-21: Analyzing data using Spark
+
+**Use case ID**: TC-21
+
+**Use case name**: Analyzing data using Spark
+
+**Actors**:
+* Business Analyst
+* Business Leader
+
+**Trigger**:
+* The actor wants to analyse data
+
+**Short Description**:
+The FADI platform enables users to run analyses on the collected data. Complex analyses can be performed using the Spark tool.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Spark service
+* The actor knows the credentials to connect to the Spark service
+* The actor knows the schema of the target database (i.e. names of tables, names of attributes, etc.)
+* The actor possesses the scripts to upload
+
+**Post-Conditions**:
+* The data is analyzed
+
+**Steps**:
+1. Access the Spark service
+2. Authenticate to the Spark service
+3. Choose the “Spark environment” option and click the “Spawn” button
+4. Launch the uploaded scripts to analyze data
+5. Check that the scripts are executed successfully
+
+**Exceptions**:
+* The connection to the database fails
+* The authentication fails
+
+**Frequency**: Every time the actor wants to run analyses on data
diff --git a/tests/doc/cockburns/TC-3.md b/tests/doc/cockburns/TC-3.md
new file mode 100644
index 0000000..fcc3972
--- /dev/null
+++ b/tests/doc/cockburns/TC-3.md
@@ -0,0 +1,130 @@
+## TC-3: Defining the Nifi workflow
+
+**Use case ID**: TC-3
+
+**Use case name**: Defining the Nifi workflow
+
+**Actors**:
+* Data Scientist
+* Data Engineer
+
+**Trigger**:
+* Adding a new industrial partner
+* Adding a new data type
+* Adding a new database
+
+**Short Description**:
+The FADI platform enables the actors to define a workflow describing how to collect and store stream/batch data. For this purpose, it integrates Nifi, an open source tool to automate the flow of data between software systems.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Nifi service
+* The actor is already authenticated in the FADI platform
+* The actor knows the authentication credentials of the target database (e.g. the username and the password of the PostgreSQL database)
+* The actor knows:
+    * the name of the target database and the table
+    * the database connection URL, the database driver class name, the database driver location, etc.
+* The actor knows the source sending the data (e.g. a data stream, a CSV file, etc.)
+
+**Post-Conditions**:
+* The Nifi workflow is created and successfully launched
+
+**Steps**:
+1. Access the Nifi interface
+2. Create the desired Nifi workflow
+3. Launch the created workflow to start storing the data in the target database
+
+**Exceptions**:
+
+**Frequency**: Every time the actor has new data coming into the FADI framework
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-4.md b/tests/doc/cockburns/TC-4.md
new file mode 100644
index 0000000..9acf182
--- /dev/null
+++ b/tests/doc/cockburns/TC-4.md
@@ -0,0 +1,151 @@
+## TC-4: Defining the Nifi workflow by uploading a template
+
+**Use case ID**: TC-4
+
+**Use case name**: Defining the Nifi workflow by uploading a template
+
+**Actors**:
+* Data Scientist
+* Data Engineer
+
+**Trigger**:
+* Adding a new industrial partner
+* Adding a new data type
+* Adding a new database
+
+**Short Description**:
+The FADI platform enables the actors to define a workflow describing how to collect and store stream/batch data. For this purpose, it integrates Nifi, an open source tool to automate the flow of data between software systems. In this use case, the actor uploads an existing template to create the Nifi workflow.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Nifi service
+* The actor is already authenticated in the FADI platform
+* The actor knows the source sending the data (e.g. a data stream, a CSV file, etc.)
+* The actor has the Nifi template
+
+**Post-Conditions**:
+* The Nifi workflow is created and successfully launched
+
+**Steps**:
+1. Access the Nifi interface
+2. Upload the Nifi template
+3. Drag and drop the imported template
+4. Configure the password to connect to the database
+5. Enable the two required controller services
+6. Select the whole workflow and start the process
+7. Launch the created workflow to start storing the data in the target database
+8. Check that the data is correctly stored
+
+**Exceptions**:
+* An error when uploading the Nifi template
+
+**Frequency**: Every time the actor has new data coming into the FADI framework
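+
+Template upload can also be scripted against the Nifi REST API. A sketch, assuming Nifi is reachable on localhost:8080 and <group-id> stands for the identifier of the target process group:
+
+```bash
+# Upload a workflow template into a process group (sketch; replace <group-id>)
+curl -X POST "http://localhost:8080/nifi-api/process-groups/<group-id>/templates/upload" \
+  -F template=@my-workflow-template.xml
+```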
diff --git a/tests/doc/cockburns/TC-5.md b/tests/doc/cockburns/TC-5.md
new file mode 100644
index 0000000..3a6a918
--- /dev/null
+++ b/tests/doc/cockburns/TC-5.md
@@ -0,0 +1,140 @@
+## TC-5: Creating a database server in pgAdmin
+
+**Use case ID**: TC-5
+
+**Use case name**: Creating a database server in pgAdmin
+
+**Actors**:
+* Data Engineer
+
+**Trigger**:
+* Adding a new industrial partner
+* Adding a new data type
+
+**Short Description**:
+The FADI platform makes it possible to create a database server using the pgAdmin tool. This use case tests whether the database server creation succeeds.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the pgAdmin service
+* The actor is connected to pgAdmin
+* The actor knows the credentials of PostgreSQL:
+    * Host name: fadi-postgresql
+    * Port: 5432
+    * Maintenance database: postgres
+    * Username: admin
+    * Password: password1
+
+**Post-Conditions**:
+* The database server is created
+
+**Steps**:
+1. Access the interface to create a database server
+2. Enter the name of the database server in the “General” tab
+3. Enter the PostgreSQL credentials in the “Connection” tab
+4. Save the edited information
+5. Check the existence of the new database server
+
+**Exceptions**:
+* The PostgreSQL credentials are not correct
+
+**Frequency**: Every time a new database server needs to be created
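+
+The connection parameters can also be verified from a shell before registering the server in pgAdmin. A sketch, run from a pod inside the cluster (or through a port-forward):
+
+```bash
+# Verify the PostgreSQL credentials used by pgAdmin (password: password1)
+psql -h fadi-postgresql -p 5432 -U admin -d postgres -c '\conninfo'
+```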
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-6.md b/tests/doc/cockburns/TC-6.md
new file mode 100644
index 0000000..39f9554
--- /dev/null
+++ b/tests/doc/cockburns/TC-6.md
@@ -0,0 +1,123 @@
+## TC-6: Creating a table in Adminer
+
+**Use case ID**: TC-6
+
+**Use case name**: Creating a table in Adminer
+
+**Actors**:
+* Data Engineer
+
+**Trigger**:
+* Creating a new table in the PostgreSQL database
+* Adding a new industrial partner
+* Adding a new data type
+
+**Short Description**:
+The FADI platform makes it possible to create a new table via the interface of the Adminer tool. This use case tests whether the table creation succeeds.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Adminer service
+* The database in which the table will be created already exists
+* The actor is connected to Adminer
+* The actor knows the name of the table to be created
+* The actor knows the name of the target database
+
+**Post-Conditions**:
+* The table is created
+
+**Steps**:
+1. Access the interface to create a table
+2. Execute the creation query
+3. Check the existence of the new table
+
+**Exceptions**:
+* The table already exists
+* There is an error in the creation query
+
+**Frequency**: Every time the actor wants to create a new table
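+
+An example creation query, executed here through psql for reproducibility (a sketch: the example_measurements table name and its columns are illustrative only):
+
+```bash
+psql -h fadi-postgresql -p 5432 -U admin -d postgres <<'SQL'
+-- Illustrative table for sensor measurements
+CREATE TABLE example_measurements (
+    measure_ts  TIMESTAMP,
+    temperature NUMERIC
+);
+SQL
+```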
diff --git a/tests/doc/cockburns/TC-7.md b/tests/doc/cockburns/TC-7.md
new file mode 100644
index 0000000..715e099
--- /dev/null
+++ b/tests/doc/cockburns/TC-7.md
@@ -0,0 +1,121 @@
+## TC-7: Deleting a table in pgAdmin
+
+**Use case ID**: TC-7
+
+**Use case name**: Deleting a table in pgAdmin
+
+**Actors**:
+* Data Engineer
+
+**Trigger**:
+* Removing a table from the PostgreSQL database
+
+**Short Description**:
+The FADI platform makes it possible to delete a table via the interface of the pgAdmin tool. This use case tests whether the table deletion succeeds.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the pgAdmin service
+* The actor is connected to pgAdmin
+* The table to delete already exists
+* The database from which the table will be deleted already exists
+* The actor knows the name of the table to be deleted
+* The actor knows the name of the database containing the table
+
+**Post-Conditions**:
+* The table is deleted
+
+**Steps**:
+1. Access the interface to delete a table
+2. Execute the deletion query
+3. Check the existence of the table (i.e. it should be deleted)
+
+**Exceptions**:
+* The table does not exist
+* There is an error in the deletion query
+
+**Frequency**: Every time the actor wants to delete a table
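+
+An example deletion query (a sketch; example_measurements is the illustrative table from TC-6):
+
+```bash
+# Drop the table created in TC-6
+psql -h fadi-postgresql -p 5432 -U admin -d postgres -c 'DROP TABLE example_measurements;'
+```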
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-8.md b/tests/doc/cockburns/TC-8.md
new file mode 100644
index 0000000..2f63844
--- /dev/null
+++ b/tests/doc/cockburns/TC-8.md
@@ -0,0 +1,117 @@
+## TC-8: Deleting a database in pgAdmin
+
+**Use case ID**: TC-8
+
+**Use case name**: Deleting a database in pgAdmin
+
+**Actors**:
+* Data Engineer
+
+**Trigger**:
+* Removing a database from PostgreSQL
+
+**Short Description**:
+The FADI platform makes it possible to delete a database via the interface of the pgAdmin tool. This use case tests whether the database deletion succeeds.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the pgAdmin service
+* The actor is connected to pgAdmin
+* The database to delete already exists
+* The actor knows the name of the database to be deleted
+
+**Post-Conditions**:
+* The database is deleted
+
+**Steps**:
+1. Access the interface to delete a database
+2. Execute the deletion query
+3. Check the existence of the database (i.e. it should be deleted)
+
+**Exceptions**:
+* The database does not exist
+* There is an error in the deletion query
+
+**Frequency**: Every time the actor wants to delete a database
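+
+An example deletion query (a sketch; example_db is an illustrative database name, and the command is issued while connected to the postgres maintenance database):
+
+```bash
+psql -h fadi-postgresql -p 5432 -U admin -d postgres -c 'DROP DATABASE example_db;'
+```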
\ No newline at end of file
diff --git a/tests/doc/cockburns/TC-9.md b/tests/doc/cockburns/TC-9.md
new file mode 100644
index 0000000..1502f8e
--- /dev/null
+++ b/tests/doc/cockburns/TC-9.md
@@ -0,0 +1,153 @@
+## TC-9: Inserting data in a given table
+
+**Short Description**:
+The FADI platform enables users to collect data and to store it in a given database, either via the Adminer interface or by defining a data workflow using Nifi. This use case tests whether the data insertion succeeds.
+In this use case, data insertion using Nifi is implemented.
+
+**Pre-Conditions**:
+* The actor knows either the URL address or the command to access the Nifi service
+* The actor is connected to the FADI platform
+* The target table already exists
+* The actor knows the name of the target table
+* The actor knows the name of the database containing the table
+
+**Post-Conditions**:
+* The data is correctly inserted
+
+**Steps**:
+1. Access the Nifi service
+2. Upload a Nifi template
+3. Drag and drop the imported template
+4. Configure the password to connect to the database
+5. Enable the two required controller services
+6. Select the whole workflow and start the process
+7. Check that the data is correctly inserted in the database by verifying that the connection succeeds
+
+**Exceptions**:
+* An error in the password of the database
+* The controller services are not enabled
+* The data is not stored
+
+**Frequency**: Every time data (stream or batch) is collected by the FADI platform
diff --git a/tests/doc/cockburns/TC-template.md b/tests/doc/cockburns/TC-template.md
new file mode 100644
index 0000000..bb063e9
--- /dev/null
+++ b/tests/doc/cockburns/TC-template.md
@@ -0,0 +1,89 @@
+## TC-X: Title of the test case
+
+**Use case ID**:
+
+**Use case name**:
+
+**Actors**:
+*
+
+**Trigger**:
+*
+
+**Short Description**:
+
+**Pre-Conditions**:
+*
+
+**Post-Conditions**:
+*
+
+**Steps**:
+1.
+2. A template to be completed
+3.
+
+**Exceptions**:
+*
+
+**Frequency**:
diff --git a/tests/doc/images/test_results.png b/tests/doc/images/test_results.png
new file mode 100644
index 0000000..0434208
Binary files /dev/null and b/tests/doc/images/test_results.png differ
diff --git a/tests/doc/test-scripts/TS-11.md b/tests/doc/test-scripts/TS-11.md
new file mode 100644
index 0000000..a0037e9
--- /dev/null
+++ b/tests/doc/test-scripts/TS-11.md
@@ -0,0 +1,94 @@
+## TS-11: Defining dashboards based on the analyzed data using Grafana
+### User story
+* **As a** Business Analyst/Leader **I want to** define a dashboard **So I can** visualize the results of the analyzed data
+* **As a** Business Analyst/Leader **I want to** define a dashboard **So I can** realize a global overview about the results of the analyzed data
+### Initial data/state:
+* The FADI platform is installed
+* The actor is authenticated to the Grafana service
+### TS dependencies:
+* TS-10 : Configuring a data source in the Grafana dashboard
+
+| Test script ID | Sequence | Actor | Action | Automatic/Manual | Assertion |
+| --- | --- | --- | --- | --- | --- |
+| TS-11 | 1 | JEST | Launch the Grafana page | automatic | Exit from Grafana |
+| | 2 | PUP | Choose the data source | automatic | |
+| | 3 | PUP | Edit the dashboard | automatic | |
+| | 4 | PUP | Choose the type of the dashboard | automatic | |
+| | 5 | PUP | Edit the query | automatic | |
+| | 6 | PUP | Press on the Query inspector button to execute the query and visualize the dashboard | automatic | |
\ No newline at end of file
diff --git a/tests/doc/test-scripts/TS-15.md b/tests/doc/test-scripts/TS-15.md
new file mode 100644
index 0000000..f533b22
--- /dev/null
+++ b/tests/doc/test-scripts/TS-15.md
@@ -0,0 +1,114 @@
+## TS-15: Creating a chart in Superset
+### User story
+* **As a** Business Analyst/Leader **I want to** create a chart in Superset **So I can** visualize the results of the analyzed data
+### Initial data/state:
+* The FADI platform is installed
+* The actor is authenticated to the Superset service
+### TS dependencies:
+* TS-13 : Configuring a database in Superset
+* TS-14 : Configuring a table in Superset
+
+| Test script ID | Sequence | Actor | Action | Automatic/Manual | Assertion |
+| --- | --- | --- | --- | --- | --- |
+| TS-15 | 1 | JEST | Launch the Superset page | automatic | Exit from Superset |
+| | 2 | PUP | Choose the Chart option menu | automatic | |
+| | 3 | PUP | Add a new record | automatic | |
+| | 4 | PUP | Choose the datasource | automatic | |
+| | 5 | PUP | Choose the visualization type | automatic | |
+| | 6 | PUP | Click “Create new chart” | automatic | |
+| | 7 | PUP | Configure the chart by defining the time requirements and the query | automatic | |
+| | 8 | PUP | Run the query | automatic | |
\ No newline at end of file
diff --git a/tests/doc/test-scripts/TS-4.md b/tests/doc/test-scripts/TS-4.md
new file mode 100644
index 0000000..509554b
--- /dev/null
+++ b/tests/doc/test-scripts/TS-4.md
@@ -0,0 +1,96 @@
+## TS-4: Defining the Nifi workflow by uploading a template
+### User story
+* **As a** data scientist/engineer **I want to** define a data management workflow **So I can** take into account data coming from a new data source
+* **As a** data scientist/engineer **I want to** define a data management workflow **So I can** support a new data type
+* **As a** data scientist/engineer **I want to** define a data management workflow **So I can** integrate a new industrial partner
+### Initial data/state:
+* The FADI platform is installed
+* The actor is already authenticated in the FADI platform
+* The database is already created
+### TS dependencies:
+* No dependencies
+
diff --git a/tests/doc/test-scripts/TS-6.md b/tests/doc/test-scripts/TS-6.md
new file mode 100644
index 0000000..49ba811
--- /dev/null
+++ b/tests/doc/test-scripts/TS-6.md
@@ -0,0 +1,83 @@
+## TS-6: Creating a table in Adminer
+### User story
+* **As a** data engineer **I want to** create a table **So I can** store the collected data
+* **As a** data engineer **I want to** create a table **So I can** structure the collected data
+### Initial data/state:
+* The FADI platform is installed
+* The actor is authenticated to the Adminer service
+### TS dependencies:
+* No dependencies
+
+| Test script ID | Sequence | Actor | Action | Automatic/Manual | Assertion |
+| --- | --- | --- | --- | --- | --- |
+| TS-6 | 1 | JEST | Launch the Adminer page | automatic | Exit from Adminer |
+| | 2 | PUP | Authenticate the user to Adminer | automatic | |
+| | 3 | PUP | Connect to the appropriate database | automatic | |
+| | 4 | PUP | Launch the query to create the table | automatic | |
+| | 5 | PUP | Check the creation of the table | automatic | |
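+
+These scripts run with the same Puppeteer/Jest image used in the CI pipeline. A sketch for running this suite locally (the -t filter pattern is illustrative and assumes the npm test script forwards arguments to jest):
+
+```bash
+cd tests/
+npm install
+# Run only the Adminer test by title (jest -t filters tests by name)
+npm run test -- -t "Adminer"
+```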
diff --git a/tests/doc/test-scripts/TS-template.md b/tests/doc/test-scripts/TS-template.md
new file mode 100644
index 0000000..1f7250a
--- /dev/null
+++ b/tests/doc/test-scripts/TS-template.md
@@ -0,0 +1,62 @@
+## TS-X: Title of the test script
+### User story
+* **As a** ..... **I want to** ..... **So I can**
+### Initial data/state:
+*
+### TS dependencies:
+*
+