This repository has been archived by the owner on May 27, 2024. It is now read-only.

Merge pull request #1007 from deNBI/staging
Staging
dweinholz authored Mar 15, 2022
2 parents 2833eef + 17bc547 commit e0b44a7
Showing 45 changed files with 1,310 additions and 1,117 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/blacked.yml
@@ -24,7 +24,7 @@ jobs:
title: Automated Blacked Linting
body: |
New Linting
- Fixed Linting Errors
- Auto-generated by [create-pull-request][1]
[1]: https://github.com/peter-evans/create-pull-request
2 changes: 1 addition & 1 deletion .github/workflows/build_image.yml
@@ -1,6 +1,6 @@
name: build-image
on: pull_request
jobs:
build-test:
runs-on: ubuntu-latest
steps:
2 changes: 1 addition & 1 deletion .github/workflows/codeql-analysis.yml
@@ -42,7 +42,7 @@ jobs:
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
queries: +security-extended, security-and-quality
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
2 changes: 0 additions & 2 deletions .github/workflows/master-protection.yml
@@ -20,5 +20,3 @@ jobs:
run: |
stringContain() { [ -z "$1" ] || { [ -z "${2##*$1*}" ] && [ -n "$2" ];};}
if stringContain ${{github.head_ref}} 'staging' || stringContain ${{github.head_ref}} 'hotfix'; then exit 0; else exit 1; fi
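Pulled out of the workflow, the `stringContain` helper is a pure-POSIX substring test and can be tried on its own; it succeeds when its second argument contains its first (the example strings below are illustrative, not from the workflow):

```shell
#!/bin/sh
# stringContain NEEDLE HAYSTACK -> exit 0 iff HAYSTACK contains NEEDLE.
# ${2##*$1*} deletes the longest prefix of $2 matching *NEEDLE*; the result
# is empty exactly when HAYSTACK matches the pattern, i.e. contains NEEDLE.
stringContain() { [ -z "$1" ] || { [ -z "${2##*$1*}" ] && [ -n "$2" ];};}

stringContain tag staging && echo yes    # "staging" contains "tag"
stringContain main staging || echo no    # "staging" does not contain "main"
```

Note that the workflow above passes the head ref as the *first* argument, so as written it checks whether the literal word `staging` (or `hotfix`) contains the branch name, not the other way around.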
2 changes: 0 additions & 2 deletions .github/workflows/publish_docker.yml
@@ -34,5 +34,3 @@ jobs:
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: Dockerfile
tags: ${{ steps.tag.outputs.TAG }}


48 changes: 48 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,48 @@
default_stages: [ commit ]

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.1.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml

- repo: https://github.com/psf/black
rev: 22.1.0
hooks:
- id: black
- repo: https://github.com/sondrelg/pep585-upgrade
rev: 'v1' # Use the sha / tag you want to point at
hooks:
- id: upgrade-type-hints

- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
- id: isort
- repo: https://github.com/neutrinoceros/flynt/
rev: ''
hooks:
- id: flynt

- repo: https://github.com/myint/autoflake
rev: v1.4
hooks:
- id: autoflake
args:
- --in-place
- --remove-all-unused-imports

- repo: https://github.com/PyCQA/flake8
rev: 4.0.1
hooks:
- id: flake8
args: [ "--config=setup.cfg" ]
additional_dependencies: [ flake8-isort ]

# sets up .pre-commit-ci.yaml to ensure pre-commit dependencies stay up to date
ci:
autoupdate_schedule: weekly
skip: [ ]
submodules: false
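Most of the whitespace-only hunks elsewhere in this commit are the work of the first two hooks. Their effect can be approximated with standard tools (a simplified model on a scratch file, not the hooks' actual implementation):

```shell
#!/bin/sh
# Approximate trailing-whitespace + end-of-file-fixer on a scratch file.
tmp=$(mktemp)
printf 'jobs:   \n  build-test:\n\n\n' > "$tmp"   # trailing blanks + extra newlines

sed -i 's/[[:blank:]]*$//' "$tmp"        # trailing-whitespace: strip spaces/tabs at EOL
printf '%s\n' "$(cat "$tmp")" > "$tmp"   # end-of-file-fixer: exactly one final newline

cat "$tmp"
rm -f "$tmp"
```

The `$(cat …)` substitution is expanded before the redirection truncates the file, so rewriting in place this way is safe for a small scratch file.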
10 changes: 5 additions & 5 deletions CHANGELOG.md
@@ -23,8 +23,8 @@

#### Features

* **Volume:** added resizing
* **mosh:** playbook for installing mosh

## (2020-02-27)

@@ -206,7 +206,7 @@

* **delete-vm:**
* deletes all security groups of server with the same name

### Features

* **bioconda:** init .bashrc and create alias for environment (#141)
@@ -283,6 +283,6 @@
### Features

* **PR_TEMPLATE:**
* updated with changelog"
* added comment checks
* **pep:** set line max length to 100
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,5 +1,5 @@
FROM python:3.10.1-buster
RUN apt-get update -y
RUN apt-get install -y build-essential
WORKDIR /code
ADD requirements.txt /code
8 changes: 4 additions & 4 deletions Makefile
@@ -19,10 +19,10 @@ thrift_py: ## Builds python code from thrift file
cp -a gen-py/VirtualMachineService/. VirtualMachineService
rm -rf gen-py
@echo Remember to fix the imports: for pip relative imports are needed, for others absolute imports

dev-build: ## Build and Start the docker-compose.dev.yml
docker-compose -f docker-compose.dev.yml up --build

dev-d: ## Build and Start the docker-compose.dev.yml
docker-compose -f docker-compose.dev.yml up -d

@@ -43,13 +43,13 @@ dev-build-bibigrid-d: ## Build and Start the docker-compose.dev.yml with bibigri

dev-bibigrid-d: ## Build and Start the docker-compose.dev.yml with bibigrid
docker-compose -f docker-compose.dev.bibigrid.yml up -d

production: ## Build Release from .env
docker-compose -f docker-compose.yml up --build -d

production-bibigrid: ## Build Release from .env and with bibigrid
docker-compose -f docker-compose.bibigrid.yml up --build -d

client_logs: ## Logs from Client
docker logs client_portal-client_1

4 changes: 2 additions & 2 deletions PULL_REQUEST_TEMPLATE.md
@@ -7,6 +7,6 @@ Try to fulfill the following points before the Pull Request is merged:

For releases only:

- [ ] If the review of this PR is approved and the PR is followed by a release, then the .env file
in the cloud-portal repo should also be updated.
- [ ] If you are making a release then please sum up the changes since the last release on the release page using the [clog](https://github.com/clog-tool/clog-cli) tool with `clog -F`
20 changes: 10 additions & 10 deletions ProjectGateway.md
@@ -1,26 +1,26 @@
# Project Gateway

The single VM feature of the de.NBI portal allows a registered user to start a virtual machine without having a project located at a specific cloud location. VMs are instantiated in a project associated with the portal. The association between the user and the virtual machine is managed by, and known only to, the de.NBI portal.

The started VMs can be accessed using ssh (or any technology on top of it, e.g. x2go). However, this requires a publicly available IP address (floating IP) for each running instance. If no IP addresses are available (IPv4 addresses are scarce), we have to think of another solution.

A relatively simple solution is to create an ssh gateway for the portal project with a fixed mapping between ports and local IP addresses. Linux can easily be configured to act as a gateway/router between networks; a lot of commercial routers use this property of Linux.

The tutorial was tested on Ubuntu 16.04 LTS, but it should work on any modern Linux OS since nothing Ubuntu-specific is used.

## Assumptions

- portal project with at least one portal user
- fully configured project network (router, network/subnet, e.g. 192.168.0.0/24)
- one public ip address available (e.g. XX.XX.XX.XX)
- accessible and contiguous port range (e.g. 30000-30255), at least one for each local ip address


## Step by Step

The step-by-step documentation configures one instance to be the ssh gateway for another instance in the same network (192.168.0.0/24).

- **Create two instances** (192.168.0.10, 192.168.0.11).
- **Associate a floating ip** (XX.XX.XX.XX) to the first instance (192.168.0.10). This instance will be the ssh gateway for the second instance.
- **Login into** the floating ip instance (XX.XX.XX.XX) and enable ip forwarding (as root).

@@ -36,7 +36,7 @@ iptables -t nat -A PREROUTING -i ens3 -p tcp -m tcp --dport 30011 -j DNAT --to-d
iptables -t nat -A POSTROUTING -d 192.168.0.11/32 -p tcp -m tcp --dport 22 -j SNAT --to-source 192.168.0.10
```

- **Add an OpenStack security group rule** to allow incoming tcp traffic on port 30011.

- **Login** to the instance (192.168.0.11) is now possible without assigning a floating IP.

@@ -46,13 +46,13 @@ ssh -i my_cloud_key [email protected] -p 30011

## Configuration using user data

Configuring a project gateway manually is a bit tedious. However, since we have a fixed mapping between ports and local IP addresses, we can automate this step by writing a small script and providing it as user data at instance start. The script should perform the following steps:

1. wait for metadata server to be available
2. get the CIDR mask from the metadata service
3. enable ip forwarding
4. add a forwarding rule for ssh (port 22) for each available IP address (2 ... 254)
5. create a new security group that allows incoming tcp connections from port 30002 to port 30254 and associate it with the gateway instance
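Steps 3 and 4 reduce to a loop over the host part of the subnet: external port 30000+N forwards to 192.168.0.N:22. A sketch that only *prints* the iptables commands, so they can be reviewed before running (interface `ens3`, subnet 192.168.0.0/24 and gateway 192.168.0.10 are taken from the example above):

```shell
#!/bin/sh
# Print (rather than execute) the DNAT/SNAT rules for hosts .2 .. .254.
print_rules() {
  gateway_ip=192.168.0.10
  i=2
  while [ "$i" -le 254 ]; do
    port=$((30000 + i))
    echo "iptables -t nat -A PREROUTING -i ens3 -p tcp -m tcp --dport $port -j DNAT --to-destination 192.168.0.$i:22"
    echo "iptables -t nat -A POSTROUTING -d 192.168.0.$i/32 -p tcp -m tcp --dport 22 -j SNAT --to-source $gateway_ip"
    i=$((i + 1))
  done
}

print_rules | head -n 2
```

Piping the full output into `sh` (as root, with ip forwarding already enabled per step 3) would apply the rules; on a real gateway you would also persist them across reboots.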

The full script could look like the following:

44 changes: 22 additions & 22 deletions README.md
@@ -23,19 +23,19 @@ source NameOfRcFile.sh
~~~

#### Configuration
You can view (almost) all existing parameters in the [yaml file](VirtualMachineService/config/config.yml).
For local development:
Please copy this file, rename it to `config_local.yml`, and fill in the missing parameters.
For staging/production setup:
Please copy this file, rename it to `config_YOUR_LOCATION.yml`, and fill in the missing parameters.
You also need to provide the path to your config file as the first parameter when starting a server.

Furthermore, there are some parameters you must set in the .env file. Copy the [.env.in](.env.in) to .env and
fill in the missing parameters.
When starting from the command line you will need to export some of them manually.

#### Security Groups
The config file contains a name for the default SimpleVM security group.
It can be configured via the `default_simple_vm_security_group_name` key.
The client will set this group for every SimpleVM machine.

@@ -44,7 +44,7 @@ The client will set this group for every SimpleVM machine.
The client can use a gateway for starting and stopping machines, which makes it possible to use just one floating IP instead of one floating IP per machine.
You can read [here](ProjectGateway.md) how to set up a gateway on an OpenStack instance.
You can also find complete scripts in the [gateway](gateway) folder.
The client will provide all images with at least one tag, which will be filtered for in the cloud-api.
The client also provides all flavors, which are likewise filtered in the cloud-api.

_**Attention**_: If you are also using the machine where you run the client as a gateway, it is very important to configure the iptables before installing and using docker, otherwise docker could destroy the rules!
@@ -141,7 +141,7 @@ This command will generate python code from the thrift file.

In order for the cloud-api to use the new/changed methods, [VirtualMachineService.py](VirtualMachineService/VirtualMachineService.py), [ttypes.py](VirtualMachineService/ttypes.py) and [constants.py](VirtualMachineService/constants.py) must be copied over.

Because docker can't use relative imports, you also need to change the import of [ttypes.py](VirtualMachineService/ttypes.py) in [constants.py](VirtualMachineService/constants.py) and [VirtualMachineService.py](VirtualMachineService/VirtualMachineService.py):

```python
from .ttypes import *
```
@@ -156,7 +156,7 @@ _**Attention**_: The cloud-api needs the files with the relative imports (from .

A detailed instruction on how to write a thrift file can be found at this link: [thrift](http://thrift-tutorial.readthedocs.io/en/latest/usage-example.html#generating-code-with-thrift)
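The relative-to-absolute import rewrite described above can be scripted; a sketch with GNU sed, demonstrated here on a throwaway file rather than the real sources (verify the result before applying it to VirtualMachineService.py, ttypes.py and constants.py):

```shell
#!/bin/sh
# Show the relative -> absolute import rewrite on a scratch file.
tmp=$(mktemp)
printf 'from .ttypes import *\n' > "$tmp"

# Drop the leading dot so the Docker build can resolve the import.
sed -i 's/^from \.ttypes import/from ttypes import/' "$tmp"

cat "$tmp"   # from ttypes import *
rm -f "$tmp"
```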

To use the methods declared in the thrift file you need to write a handler which implements the Iface from the VirtualMachineService.
The handler contains the logic for the methods. Then you can start a server which uses your handler.
Example python code for the server:
```python
```
@@ -188,9 +188,9 @@ REMOTE_IP ansible_user=ubuntu ansible_ssh_private_key_file=PATH_TO_SSH_FILE ansi
~~~

where

* REMOTE_IP is the IP of your staging machine

* PATH_TO_SSH_FILE is the path to the ssh key of the virtual machine

#### 2. Set SSH key forwarding
@@ -199,7 +199,7 @@ In order to check out the GitHub project you will have to enable
SSH key forwarding in your `~/.ssh/config` file.

~~~BASH
Host IP
ForwardAgent yes
~~~

@@ -219,12 +219,12 @@ ansible-galaxy install -r ansible_requirements.yml

#### 6.Set all variables

Set all variables that can be found in the `.env` and `VirtualMachineService/config/config.yml` files.
You can have more than one `.env` file (`.env` and `.env_*` are not tracked by git) and specify which one to copy
by using the `env_file` variable.
You can have more than one `VirtualMachineService/config/config.yml` file (`VirtualMachineService/config/config_*` are
not tracked by git) and specify which one to copy by using the `client_config` variable.
These options are useful when maintaining multiple client sites.

#### 7.Run the playbook

@@ -234,10 +234,10 @@ You can run the playbook using the following command:
ansible-playbook -i inventory_openstack site.yml
~~~

where

* inventory_openstack is your inventory file which you created in the first step.

* If you also want to start bibigrid, use the tag "bibigrid".
**Choose different files**

