This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Merge branch 'develop' of github.com:cetic/fadi into develop

Sebastien Dupont committed May 17, 2020
2 parents c3edf57 + b8b1f3a commit 9dbd643
Showing 30 changed files with 793 additions and 468 deletions.
12 changes: 6 additions & 6 deletions USERGUIDE.md
@@ -50,11 +50,11 @@ minikube service -n fadi fadi-adminer

* Access the adminer service and the PostgreSQL database using the following credentials:

* System: PostgreSQL
* Server: fadi-postgresql
* Username: admin
* Password: passowrd1
* Database: postgres
* System: `PostgreSQL`
* Server: `fadi-postgresql`
* Username: `admin`
* Password: `password1`
* Database: `postgres`

* In the adminer Browser, launch the Query tool by clicking "SQL command".
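
As a quick sanity check, a simple query such as the following can be run in the Query tool (an illustrative example; any SQL supported by PostgreSQL will work):

```
SELECT version();
```

This should return the version string of the PostgreSQL server backing FADI.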

@@ -356,4 +356,4 @@ In this use case, we have demonstrated a simple configuration for FADI, where we

You can find the various resources for this sample use case (Nifi flowfile, Grafana dashboards, ...) in the [examples folder](examples/basic)

The examples section contains other more specific examples (e.g. [Kafka streaming ingestion](examples/kafka/README.md))
4 changes: 1 addition & 3 deletions doc/LOGGING.md
@@ -43,9 +43,7 @@ To create the index pattern and monitor the logs, follow these simple steps:

For more details you can always visit the [Elastic-stack official documentation](https://www.elastic.co/guide/index.html).


### LDAP Authentication

Kibana is not compatible with LDAP, which means it cannot be linked to it directly. To authenticate against the LDAP server before accessing Kibana, we use [nginx-ldap-auth](https://github.com/nginxinc/nginx-ldap-auth).
> The nginx-ldap-auth software is a reference implementation of a method for authenticating users who request protected resources from servers proxied by NGINX Plus. It includes a daemon (ldap-auth) that communicates with an authentication server which is in this case OpenLDAP.
@@ -55,5 +53,5 @@ The Kibana service isn't accessible directly; to reach it, access the nginx proxy:
```
minikube service fadi-nginx-ldapauth-proxy
```
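
For reference, the proxy's behaviour can be sketched with a minimal nginx configuration in the spirit of the nginx-ldap-auth reference setup (an illustrative sketch, not FADI's actual shipped configuration; the Kibana service name and ports are assumptions):

```
http {
    upstream kibana {
        server fadi-kibana:5601;       # assumed Kibana service name and port
    }
    server {
        listen 80;
        location / {
            auth_request /auth-proxy;  # every request must pass LDAP auth first
            proxy_pass http://kibana;
        }
        location = /auth-proxy {
            internal;
            # the ldap-auth daemon, which talks to OpenLDAP on the proxy's behalf
            proxy_pass http://127.0.0.1:8888;
        }
    }
}
```

Requests that fail the LDAP check receive a 401 and never reach Kibana.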
For more information, see this blog post: [nginx plus authenticate users](https://www.nginx.com/blog/nginx-plus-authenticate-users/).
64 changes: 64 additions & 0 deletions doc/MONITORING.md
@@ -0,0 +1,64 @@
Monitoring
==========

<p align="left">
<a href="https://www.zabbix.com" alt="zabbix">
<img src="images/logos/zabbix_logo.png" align="center" alt="Zabbix logo" width="200px" />
</a>
</p>

**[Zabbix](https://www.zabbix.com)** is an open-source monitoring tool for diverse IT components, including networks, servers, virtual machines and cloud services. Zabbix provides monitoring metrics such as network utilization, CPU load and disk space consumption.

## Zabbix components

### Zabbix Server

Zabbix server is the central process of Zabbix software.

The server performs the polling and trapping of data, calculates triggers, and sends notifications to users. It is the central component to which Zabbix agents and proxies report data on the availability and integrity of systems. The server can itself remotely check networked services (such as web servers and mail servers) using simple service checks.

### Zabbix Agent

Zabbix agent is deployed on a monitoring target to actively monitor local resources and applications (hard drives, memory, processor statistics, etc.).

### Zabbix Web (frontend)

Zabbix web interface is a part of Zabbix software. It is used to manage resources under monitoring and view monitoring statistics.

### Zabbix Proxy

Zabbix proxy is a process that may collect monitoring data from one or more monitored devices and send the information to the Zabbix server, essentially working on behalf of the server. All collected data is buffered locally and then transferred to the Zabbix server the proxy belongs to.

## How to use

Make sure to enable Zabbix in the `values.yaml` file, then access the front-end with the following command:

```
minikube service fadi-zabbix-web
```
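
The `values.yaml` toggle mentioned above might look like this (a sketch; the exact key name depends on the version of the FADI chart you are using, so check the chart's default values):

```
zabbix:
  enabled: true
```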

The default username/password is `Admin`/`zabbix`. Once connected, check on the global view that the zabbix-server is running. To see the received metrics (or graphs), head to the `Latest data` tab under `Monitoring`, click select on **host groups** and **hosts**, and choose your `Zabbix servers`.

![zabbix](images/carousel/zabbix.gif)

## LDAP Authentication


By default, internal [Zabbix authentication](https://www.zabbix.com/documentation/4.0/manual/web_interface/frontend_sections/administration/authentication) is used globally. To switch to LDAP, select LDAP as the default authentication method and enter the **authentication details** in the LDAP settings tab.

The default **authentication details** for FADI are:

```
LDAP host: fadi-openldap
Port: 389
Base DN: dc=ldap,dc=cetic,dc=be
Search attribute: cn
Bind DN: cn=admin,dc=ldap,dc=cetic,dc=be
Bind password: password1
User password: password1
```
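
Before switching the default authentication, it can be useful to verify these credentials directly against OpenLDAP, for example with `ldapsearch` run from a pod inside the cluster (a sketch; it assumes the `ldap-utils` package is available and the `fadi-openldap` service is reachable):

```
ldapsearch -x -H ldap://fadi-openldap:389 \
  -D "cn=admin,dc=ldap,dc=cetic,dc=be" -w password1 \
  -b "dc=ldap,dc=cetic,dc=be" "(cn=*)"
```

A successful bind lists the directory entries; an `Invalid credentials (49)` error means the bind DN or password does not match.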


![zabbix](images/carousel/zabbix-auth.gif)

For more details, see the [Zabbix 4.0 documentation](https://www.zabbix.com/documentation/4.0/manual/introduction).
174 changes: 174 additions & 0 deletions doc/RANCHER_PROXMOX.md
@@ -0,0 +1,174 @@
Deploy FADI with Rancher and Proxmox
=============

* [0. Prerequisites](#0-prerequisites)
* [1. Upload RancherOS ISO on Proxmox](#1-upload-rancheros-iso-on-proxmox)
* [2. Add Proxmox docker-machine driver to Rancher](#2-add-proxmox-docker-machine-driver-to-rancher)
* [3. Create the Kubernetes cluster with Rancher](#3-create-the-kubernetes-cluster-with-rancher)
    * [Create Node Template](#create-node-template)
    * [Create the Kubernetes Cluster](#create-the-kubernetes-cluster)
    * [Adding Nodes to the Cluster](#adding-nodes-to-the-cluster)
* [4. Persistent storage configuration](#4-persistent-storage-configuration)
    * [StorageOS](#storageos)
    * [Longhorn](#longhorn)
    * [NFS Server Provisioner](#nfs-server-provisioner)
    * [Manually](#manually)
* [5. Control Cluster from local workstation](#5-control-cluster-from-local-workstation)


This page provides information on how to create a Kubernetes cluster on the [Proxmox](https://www.proxmox.com/en/) IaaS provider using [Rancher](https://rancher.com/).

<a href="https://www.proxmox.com/" title="ProxMox"> <img src="images/logos/Proxmox.png" width="150px" alt="Proxmox" /></a>

> "Proxmox VE is a complete open-source platform for enterprise virtualization. With the built-in web interface you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools on a single solution."
<a href="https://rancher.com/" title="Rancher"> <img src="images/logos/rancher.png" width="150px" alt="Rancher" /></a>

> "Rancher is open source software that combines everything an organization needs to adopt and run containers in production. Built on Kubernetes, Rancher makes it easy for DevOps teams to test, deploy and manage their applications."

## 0. Prerequisites

This documentation assumes the following prerequisites are met:

* a [Proxmox installation](https://pve.proxmox.com/wiki/Installation) on your self-hosted infrastructure
* a [Rancher installation](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/) that can access the Proxmox cluster. This can run in another Kubernetes cluster to provide high availability, or simply in a virtual machine somewhere on your infrastructure.

## 1. Upload RancherOS ISO on Proxmox

First, download the [rancheros-proxmoxve-autoformat.iso](https://github.com/rancher/os/releases/latest) image and upload it to one of your Proxmox nodes.
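
From a machine with SSH access to the node, this can be done on the command line (a sketch; `pve1` is a hypothetical node name, and `/var/lib/vz/template/iso/` is the default ISO directory of the `local` storage on a Proxmox node):

```
# download the latest RancherOS ISO for Proxmox VE
wget https://github.com/rancher/os/releases/latest/download/rancheros-proxmoxve-autoformat.iso
# copy it to the node's local ISO storage
scp rancheros-proxmoxve-autoformat.iso root@pve1:/var/lib/vz/template/iso/
```

Alternatively, the ISO can be uploaded through the Proxmox web interface under the storage's `Content` tab.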

## 2. Add Proxmox docker-machine driver to Rancher

Then, you need to allow Rancher to access Proxmox. We have contributed upgrades to an existing [docker-machine driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to make it compatible with Rancher.

To add [this driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) in your Rancher, follow these steps:

![Proxmox driver](images/installation/proxmoxdriver.gif)

## 3. Create the Kubernetes cluster with Rancher

After connecting to Rancher, follow these steps:

### Create Node Template

This is where you have to define the templates to use for the nodes (both master and worker nodes). To do so, go to: `profile (top right corner)` > `Node templates` > `Add Template`:

Choose `Proxmoxve`
![Proxmoxve](images/installation/Proxmoxve.png)

and then fill in the rest of the fields:

* the IP of the Proxmox host: `proxmoxHost`,
* the username/password: `proxmoxUserName`, `proxmoxUserPassword`,
* the storage location of the image file: `vmImageFile`, in our case `local:iso/rancheros-proxmoxve-autoformat.iso`,
* the resources you want to allocate to your node: `nodevmCpuCores`, `vmMemory`, `vmStorageSize`.

### Create the Kubernetes Cluster

To create your cluster of virtual machines on Proxmox:

`Cluster` > `Add Cluster` > `Proxmoxve`

You will need to give a name to your cluster, then specify the nodes in the cluster:

* at first, you may want to start with **one master node**,
* give it a name,
* choose the template created earlier for that node,
* tick all 3 boxes for `etcd`, `Control Plane` and `Worker`,
* choose the Kubernetes version,
* and finally click `create`.

> You will have to wait for the `VM creation`, the `RancherOS install` and the `IP address retrieval` steps, which might take a while.

### Adding Nodes to the Cluster

Once the master node gets its IP address, go to `Cluster` > `Edit Cluster` and add another worker node. Untick the worker box for the master node and tick it for the new worker node. It should look something like this:
![Proxmoxve](images/installation/workernode.png)

If a second (or more) node (master or worker) is needed, you can add another one with a different template by following the same steps. You can also add as many nodes as you want using the same template by simply going to `YourCluster (not global)` > `nodes` > `+`, which will add another node of the same kind:

![Proxmoxve](images/installation/addnode.png)

## 4. Persistent storage configuration

Once all your nodes are up and running, it is time to deploy your services. Before you do, you need to set the default storage class used to provision the persistent volumes.

There are several ways to manage persistent storage. We will describe three of them and leave it to you to choose the method that best meets your requirements.

### StorageOS

<a href="https://www.storageos.com/" title="storageos"> <img src="images/logos/storageos.svg" width="150px" alt="storageos" /></a>

> *StorageOS is a cloud native storage solution that delivers persistent container storage for your stateful applications in production.
Dynamically provision highly available persistent volumes by simply deploying StorageOS anywhere with a single container.*

To deploy the volume plugin `StorageOS`, go to `YourCluster (not global)` > `system` > `apps` > `launch` and search for `StorageOS`. Make sure all the fields are filled in correctly, as in the following screenshot:

![StorageOSConfig](images/installation/StorageOS.png)

Now, launch it 🚀.

A short animation recaps all these steps:

![StorageOSGuide](images/installation/StorageOSGuide.gif)

Launching apps usually takes several minutes, so you may need to wait a while.

StorageOS is a very good turnkey solution. However, the basic license only allows allocating a maximum of 50Gi.

![StorageOS limits](images/installation/StorageOS_limits.png)

Finally, all that remains is to define the **StorageOS** StorageClass as the default. To do this, go to `Storage` > `StorageClass`, click on the menu (the three dots on the right side), then click `Set as Default`.

This procedure is shown in the animation below:

![StorageClass](images/installation/StorageClassDefault.gif)
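
The same default can also be set with `kubectl` instead of the UI (a sketch assuming the StorageOS class is named `fast`; check the actual name with `kubectl get storageclass`):

```
kubectl patch storageclass fast -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

The `is-default-class` annotation is what Kubernetes consults when a PersistentVolumeClaim does not name a storage class explicitly.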

### Longhorn

<a href="https://github.com/longhorn/longhorn" title="longhorn"> <img src="images/logos/longhorn.png" width="150px" alt="longhorn" /></a>

> *Longhorn is a distributed block storage system for Kubernetes. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.*

This tool is very powerful and based on iSCSI technology. Unfortunately, it is not yet supported by RancherOS (the operating system used in this example).

We reported the bugs and problems we encountered in two open GitHub issues:

* [https://github.com/rancher/os/issues/2937](https://github.com/rancher/os/issues/2937)
* [https://github.com/longhorn/longhorn/issues/828](https://github.com/longhorn/longhorn/issues/828)

### NFS Server Provisioner

<a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/" title="nfs"> <img src="images/logos/nfs.jpg" width="150px" alt="nfs" /></a>

>*The Network File System (NFS) is a client/server application that lets a computer user view and optionally store and update files on a remote computer as though they were on the user's own computer.
NFS Server Provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.*

This solution is very easy to deploy and set up; a basic installation does not require any particular configuration. This plugin supports both the deployment of the NFS server and the management of persistent volumes.

One caveat is that the NFS server is attached to a single node: if that node crashes, the data may be lost.

To add this plugin to your cluster, go to `Apps` and click `Launch`. In the `Search bar`, type `nfs-provisioner`.

![images/installation/nfsapp.png](images/installation/nfsapp.png)

Select the plugin and click the `Launch` button 🚀.

### Manually

It is also possible to create the persistent volumes manually. This approach offers complete control over the volumes but is less flexible. If you choose this option, please refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
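
For illustration, a minimal manually created volume could look like this (a `hostPath` sketch suitable for testing only; the name, size and path are arbitrary):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv-example
```

Apply it with `kubectl apply -f pv-example.yaml`; a matching PersistentVolumeClaim can then bind to it.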

## 5. Control Cluster from local workstation

There are two ways to interact with your cluster using the `kubectl` command-line tool.

First, **Rancher** offers a restricted terminal where only this tool is available. To access it, go to the monitoring page of your cluster and click the `Launch kubectl` button.

![images/installation/ranchermonitoring.png](images/installation/ranchermonitoring.png)

![images/installation/rancherkubectl.png](images/installation/rancherkubectl.png)

The second approach is to use the `kubectl` tool on your own machine. To do so, go to the monitoring page of your cluster again and click `Kubeconfig File`. Copy and paste all of the information into the `~/.kube/config` file on your machine.
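
A typical sequence on the workstation might be (a sketch; it assumes the kubeconfig content from Rancher has been copied to the clipboard):

```
mkdir -p ~/.kube
# paste the content of the Kubeconfig File into ~/.kube/config, then verify:
kubectl get nodes
```

If the nodes of your Proxmox cluster are listed, the local `kubectl` is correctly talking to the Rancher-managed cluster.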

> **You can now use your cluster created with Rancher and deployed on Proxmox, enjoy!**
7 changes: 5 additions & 2 deletions doc/README.md
@@ -6,5 +6,8 @@ FADI Documentation
* [Reverse proxy](REVERSEPROXY.md) - Traefik reverse proxy configuration
* [Security](SECURITY.md) - SSL setup
* [TSimulus](TSIMULUS.md) - how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus)

* [Sample self-hosted infrastructure](RANCHER_PROXMOX.md) - How to install FADI on a self-hosted infrastructure using
* [Proxmox](https://www.proxmox.com/en/) as a self-hosted private cloud (IaaS) provider. It provides virtual machines for the various Kubernetes nodes.
* [Rancher](https://rancher.com/what-is-rancher/what-rancher-adds-to-kubernetes/) to manage (install, provision, maintain, upgrade, ...) several Kubernetes clusters, e.g. when needing several environments on various IaaS providers or several well separated tenant installations, or doing airgapped installations on premises.

For tutorials and examples, see the [examples section](../examples/README.md)
6 changes: 3 additions & 3 deletions doc/REVERSEPROXY.md
@@ -34,10 +34,10 @@ kubectl get clusterrole traefik-ingress-controller 2> /dev/null || kubectl creat

Take a look at the [sample file](/helm/traefik/rbac-config.yaml).

Then, you can install Traefik with Helm: (If you want further information, you can follow this [tutorial](https://docs.traefik.io/v1.3/user-guide/kubernetes/#deploy-trfik-using-helm-chart))
Then, you can install Traefik with Helm: (If you want further information, you can follow this [tutorial](https://docs.traefik.io/v1.7/user-guide/kubernetes/#deploy-traefik-using-helm-chart))

```
helm upgrade --install traefik stable/traefik -f ./traefik/values.yaml --namespace kube-system --tiller-namespace tiller
helm upgrade --install traefik stable/traefik -f ./traefik/values.yaml --namespace kube-system
```

The values file can be found [here](/helm/traefik/values.yaml).
@@ -81,4 +81,4 @@ grafana:

You should now be able to access Grafana through the domain name you have chosen: `http(s)://grafana.yourdomain.com`

Next you will also want to configure SSL access to your services. For that, have a look at the [security documentation](/doc/SECURITY.md).
Binary file added doc/images/carousel/zabbix-auth.gif
Binary file added doc/images/carousel/zabbix.gif
Binary file added doc/images/installation/AddNodefix.gif
Binary file added doc/images/installation/CreateCluster.gif
Binary file added doc/images/installation/Proxmoxve.png
Binary file added doc/images/installation/StorageClassDefault.gif
Binary file added doc/images/installation/StorageOS.png
Binary file added doc/images/installation/StorageOSGuide.gif
Binary file added doc/images/installation/StorageOS_limits.png
Binary file added doc/images/installation/addnode.png
Binary file added doc/images/installation/defaultpvc.png
Binary file added doc/images/installation/nfsapp.png
Binary file added doc/images/installation/proxmoxdriver.gif
Binary file added doc/images/installation/rancherkubectl.png
Binary file added doc/images/installation/ranchermonitoring.png
Binary file added doc/images/installation/workernode.png
Binary file added doc/images/logos/Proxmox.png
Binary file added doc/images/logos/longhorn.png
Binary file added doc/images/logos/nfs.jpg
Binary file added doc/images/logos/rancher.png
