Dev (#384)
* Documentation nfs (#364)

* Added nfs documentation

* Lengthened the explanation

* Renaming bibigrid2 to bibigrid (#361)

* Removed all occurrences of bibigrid2.

* Removed all occurrences of bibigrid2.

* ignore configurations

* Testing if tests are ignored by pylint now

* removed test

* add apt.bi.denbi.de as package source

* update slurm tasks (now uses self-built Slurm packages -> v22.05.7, restructure slurm files)

* add documentation to build newer Slurm package

* fixes

* slurmrestd uses openapi/v0.0.38

* Added check_nfs as a non-fatal evaluation (#366)

* Added "." and "-" cases for cid. This allows further rescuing and gives info messages. (#365)

* fix slurmrestd configuration

* update task order (slurm-server)

* fix default user chown settings

* Add an additional mariadb repository for Ubuntu 20.04. Zabbix 7.2 needs at least MariaDB 10.5 or higher and Focal comes with MariaDB 10.3.

* Extend slurm documentation.

* Extends documentation that BiBiGrid now supports Ubuntu 20.04/22.04 and Debian 11 (fixes #348).

* cleanup

* fix typos in documentation

* fix typos in documentation

* add workflow-job to lint python/ansible

* add more output

* add more output

* update runner working directory

* make ansible_lint happy

* rewrite linting workflow
add linting dependencies

* fix a typo

* fix pylintrc -> remove ignore-pattern=test/ (not needed, since pylint currently lints bibigrid folder)
make pylint happy

* Fix program crash when image is not active (#382)

* Fixed function missing call

* Fixed linter that wasn't troubled before

---------

Co-authored-by: Jan Krüger <[email protected]>
XaverStiensmeier and jkrue authored Feb 21, 2023
1 parent 564e282 commit 655b958
Showing 69 changed files with 699 additions and 501 deletions.
20 changes: 20 additions & 0 deletions .github/workflows/linting.yml
@@ -0,0 +1,20 @@
name: linting
on: [push]
jobs:
linting-job:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3.8
uses: actions/setup-python@v4
with:
python-version: 3.8
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: ansible_lint
run: ansible-lint resources/playbook/roles/bibigrid/tasks/main.yml
- name: pylint_lint
run: pylint bibigrid
30 changes: 0 additions & 30 deletions .github/workflows/publish_docker_dev.yml

This file was deleted.

33 changes: 0 additions & 33 deletions .github/workflows/publish_docker_release.yml

This file was deleted.

1 change: 1 addition & 0 deletions .gitignore
@@ -1,5 +1,6 @@
# complete idea
.idea/
.run/

# variable resources
resources/playbook/site.yml
2 changes: 1 addition & 1 deletion .pylintrc
@@ -82,7 +82,7 @@ persistent=no

# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.10
#py-version=3.10

# Discover python modules and packages in the file system subtree.
recursive=no
48 changes: 28 additions & 20 deletions README.md
@@ -1,5 +1,13 @@
# BiBiGrid
BiBiGrid is a cloud cluster creation and management framework for OpenStack (and more providers in the future).
BiBiGrid is a cloud cluster creation and management framework for OpenStack
(and more providers in the future).

BiBiGrid uses Ansible to configure standard Ubuntu 20.04/22.04 LTS as
well as Debian 11 cloud images. Depending on your configuration, BiBiGrid
can set up an HPC cluster for grid computing (Slurm Workload Manager),
a shared filesystem (on local discs and attached volumes),
a cloud IDE for writing, running and debugging (Theia Web IDE), and more.


> **Note**
> The latest version is currently work in progress. Future changes are likely.
@@ -15,7 +23,7 @@ However, if you are already quite experienced with *OpenStack* and the previous
might be just what you need.

<details>
<summary> Brief, technical BiBiGrid2 overview </summary>
<summary> Brief, technical BiBiGrid overview </summary>

### How to configure a cluster?
#### Configuration File: bibigrid.yml
@@ -33,7 +41,7 @@ The configuration template [bibigrid.yml](bibigrid.yml) contains many helpful co
To access the cloud, authentication information is required.
You can download your `clouds.yaml` from OpenStack.

Your `clouds.yaml` is to be placed in `~/.config/bibigrid/` and will be loaded by BiBiGrid2 on execution.
Your `clouds.yaml` is to be placed in `~/.config/bibigrid/` and will be loaded by BiBiGrid on execution.

[You need more details?](documentation/markdown/features/cloud_specification_data.md)

@@ -49,30 +57,30 @@ a region, an availability zone, an sshUser (most likely ubuntu) and a subnet.
You probably also want at least one worker with a valid type, image and count.
4. If your cloud provider runs post-launch services, you need to set the `waitForServices`
key appropriately which expects a list of services to wait for.
5. Create a virtual environment from `bibigrid2/requirements.txt`.
5. Create a virtual environment from `bibigrid/requirements.txt`.
See [here](https://www.akamai.com/blog/developers/how-building-virtual-python-environment) for more detailed info.
6. Take a look at [First execution](#first-execution)

#### First execution
Before your first execution, follow the steps described at [Preparation](#preparation).

After cloning the repository navigate to `bibigrid2`.
In order to execute BiBiGrid2 source the virtual environment created during [preparation](#preparation).
Take a look at BiBiGrid2's [Command Line Interface](documentation/markdown/features/CLI.md)
After cloning the repository navigate to `bibigrid`.
In order to execute BiBiGrid source the virtual environment created during [preparation](#preparation).
Take a look at BiBiGrid's [Command Line Interface](documentation/markdown/features/CLI.md)
if you want to explore for yourself.

A first execution run-through could be:
1. `./bibigrid.sh -i [path-to-bibigrid.yml] -ch`: checks the configuration
2. `./bibigrid.sh -i [path-to-bibigrid.yml] -c`: creates the cluster (execute only if the check was successful)
3. Use **BiBiGrid2's create output** to investigate the created cluster further. Especially connecting to the ide might be helpful.
3. Use **BiBiGrid's create output** to investigate the created cluster further. Especially connecting to the IDE might be helpful.
Otherwise, connect using ssh.
4. While connected via SSH, try `sinfo` to print node info.
5. Run `srun -x $(hostname) hostname` to power up a worker and get its hostname.
6. Run `sinfo` again to see the node powering up. After a while it will be terminated again.
7. Use the terminate command from **BiBiGrid2's create output** to shut down the cluster again.
7. Use the terminate command from **BiBiGrid's create output** to shut down the cluster again.
All floating-ips used will be released.

Great! You've just started and terminated your first cluster using BiBiGrid2!
Great! You've just started and terminated your first cluster using BiBiGrid!

</details>

@@ -87,21 +95,21 @@ Some quotas can currently not be checked by bibigrid.
**Whenever you contact a developer, please send your logfile along.**

# Documentation
If you would like to learn more about BiBiGrid2 please follow a fitting link:
- [BiBiGrid2 Features](documentation/markdown/bibigrid_feature_list.md)
- [Software used by BiBiGrid2](documentation/markdown/bibigrid_software_list.md)
If you would like to learn more about BiBiGrid please follow a fitting link:
- [BiBiGrid Features](documentation/markdown/bibigrid_feature_list.md)
- [Software used by BiBiGrid](documentation/markdown/bibigrid_software_list.md)

<details>
<summary> Differences to BiBiGrid1 </summary>
<summary> Differences to old Java BiBiGrid </summary>

* BiBiGrid2 no longer uses RC- but cloud.yaml-files for cloud-specification data. Environment variables are no longer used (or supported).
* BiBiGrid no longer uses RC files but cloud.yaml files for cloud-specification data. Environment variables are no longer used (or supported).
See [Cloud Specification Data](documentation/markdown/features/cloud_specification_data.md).
* BiBiGrid2 has a largely reworked configurations file, because BiBiGrid2 core supports multiple providers this step was necessary.
* BiBiGrid has a largely reworked configuration file; this was necessary because the BiBiGrid core supports multiple providers.
See [Configuration](documentation/markdown/features/configuration.md)
* BiBiGrid2 currently only implements the provider OpenStack.
* BiBiGrid2 only starts the master and will dynamically start workers using slurm when they are needed.
* BiBiGrid currently only implements the provider OpenStack.
* BiBiGrid only starts the master and will dynamically start workers using Slurm when they are needed.
Workers are powered down once they are not used for a longer period.
* BiBiGrid2 lays the foundation for clusters that are spread over multiple providers, but Hybrid Clouds aren't fully implemented yet.
* BiBiGrid lays the foundation for clusters that are spread over multiple providers, but Hybrid Clouds aren't fully implemented yet.
</details>

# Development
@@ -112,5 +120,5 @@ Workers are powered down once they are not used for a longer period.
## On implementing concrete providers
New concrete providers can be implemented very easily. Just copy the `provider.py` file and implement all methods for
your cloud provider. Also inherit from the `provider` class. After that, add your provider to the providerHandler lists, giving it an associated name for the
configuration files. By that, your provider is automatically added to BiBiGrid2's tests and regular execution. By testing
configuration files. By that, your provider is automatically added to BiBiGrid's tests and regular execution. By testing
your provider first, you will see whether all provider methods are implemented as expected.
2 changes: 1 addition & 1 deletion bibigrid.sh
@@ -1 +1 @@
python3 -m bibigrid2.core.startup "$@"
python3 -m bibigrid.core.startup "$@"
@@ -1,14 +1,14 @@
"""
Module that acts as a wrapper and uses validateConfiguration to validate given configuration
Module that acts as a wrapper and uses validate_configuration to validate given configuration
"""
import logging
from bibigrid2.core.utility import validate_configuration
from bibigrid.core.utility import validate_configuration

LOG = logging.getLogger("bibigrid")

def check(configurations, providers):
"""
Uses validateConfiguration to validate given configuration.
Uses validate_configuration to validate given configuration.
:param configurations: list of configurations (dicts)
:param providers: list of providers
:return:
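The wrapper pattern in this module can be sketched in isolation. Here `ValidateConfiguration` is a simplified stand-in for the real class in `bibigrid.core.utility.validate_configuration`, whose actual checks (quotas, images, networks, ...) are out of scope:

```python
import logging

LOG = logging.getLogger("bibigrid")


class ValidateConfiguration:
    """Simplified stand-in: the real validator inspects quotas, images, networks, ..."""

    def __init__(self, configurations, providers):
        self.configurations = configurations
        self.providers = providers

    def validate(self):
        # Toy rule for illustration: every configuration must at least be a dict.
        return all(isinstance(c, dict) for c in self.configurations)


def check(configurations, providers):
    """Wrapper: validates the given configurations and returns an exit code."""
    success = ValidateConfiguration(configurations, providers).validate()
    LOG.info("Validation %s.", "succeeded" if success else "failed")
    return 0 if success else 1
```

The wrapper keeps the CLI entry point thin: all validation logic stays in one class, and the action only maps its boolean result to an exit code.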
Expand Up @@ -12,15 +12,15 @@
import paramiko
import yaml

from bibigrid2.core.actions import terminate_cluster
from bibigrid2.core.utility import ansible_configurator
from bibigrid2.core.utility import id_generation
from bibigrid2.core.utility.handler import ssh_handler
from bibigrid2.core.utility.paths import ansible_resources_path as aRP
from bibigrid2.core.utility.paths import bin_path as biRP
from bibigrid2.models import exceptions
from bibigrid2.models import return_threading
from bibigrid2.models.exceptions import ExecutionException
from bibigrid.core.actions import terminate_cluster
from bibigrid.core.utility import ansible_configurator
from bibigrid.core.utility import id_generation
from bibigrid.core.utility.handler import ssh_handler
from bibigrid.core.utility.paths import ansible_resources_path as aRP
from bibigrid.core.utility.paths import bin_path as biRP
from bibigrid.models import exceptions
from bibigrid.models import return_threading
from bibigrid.models.exceptions import ExecutionException

PREFIX = "bibigrid"
SEPARATOR = "-"
@@ -350,7 +350,7 @@ def print_cluster_start_info(self):
"""
Prints helpful cluster-info:
SSH: How to connect to master via SSH
Terminate: What bibigrid2 command is needed to terminate the created cluster
Terminate: What bibigrid command is needed to terminate the created cluster
Detailed cluster info: How to print detailed info about the created cluster
:return:
"""
Expand Up @@ -12,7 +12,7 @@
import webbrowser
import sshtunnel

from bibigrid2.core.utility.handler import cluster_ssh_handler
from bibigrid.core.utility.handler import cluster_ssh_handler

DEFAULT_IDE_WORKSPACE = "${HOME}"
REMOTE_BIND_ADDRESS = 8181
Expand Up @@ -7,7 +7,7 @@
import pprint
import re

from bibigrid2.core.actions import create
from bibigrid.core.actions import create

SERVER_REGEX = re.compile(r"^bibigrid-((master)-([a-zA-Z0-9]+)|(worker|vpnwkr)\d+-([a-zA-Z0-9]+)-\d+)$")
LOG = logging.getLogger("bibigrid")
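As a quick sanity check, the naming pattern matched by `SERVER_REGEX` can be exercised directly. This is a sketch; the cluster IDs shown are made up:

```python
import re

# Same pattern as in the module above: matches master, worker and vpnwkr servers.
SERVER_REGEX = re.compile(
    r"^bibigrid-((master)-([a-zA-Z0-9]+)|(worker|vpnwkr)\d+-([a-zA-Z0-9]+)-\d+)$")


def is_bibigrid_server(name):
    """Return True if the server name follows the BiBiGrid naming scheme."""
    return SERVER_REGEX.match(name) is not None


print(is_bibigrid_server("bibigrid-master-1a2b3c"))     # master node
print(is_bibigrid_server("bibigrid-worker0-1a2b3c-0"))  # first worker instance
print(is_bibigrid_server("unrelated-server"))           # does not match
```

Note that master names carry only the cluster ID, while worker and vpnwkr names additionally carry a group index and an instance counter.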
Expand Up @@ -7,7 +7,7 @@
import os
import re

from bibigrid2.core.actions import create
from bibigrid.core.actions import create
LOG = logging.getLogger("bibigrid")

def terminate_cluster(cluster_id, providers, debug=False):
@@ -83,7 +83,7 @@ def delete_keypairs(provider, tmp_keyname):
"""
Deletes the keypair from the given provider
@param provider: provider to delete keypair from
@param tmp_keyname: BiBiGrid2 keyname
@param tmp_keyname: BiBiGrid keyname
@return: True if keypair was deleted
"""
LOG.info("Deleting Keypair on provider %s...", provider.cloud_specification['identifier'])
@@ -98,7 +98,7 @@ def delete_local_keypairs(tmp_keyname):
def delete_local_keypairs(tmp_keyname):
"""
Deletes local keypairs of a cluster
@param tmp_keyname: BiBiGrid2 keyname
@param tmp_keyname: BiBiGrid keyname
@return: Returns true if at least one local keyfile (pub or private) was found
"""
success = False
Expand Up @@ -4,11 +4,11 @@

import logging

from bibigrid2.core.utility import ansible_commands as aC
from bibigrid2.core.utility.handler import ssh_handler
from bibigrid2.core.utility.paths import ansible_resources_path as aRP
from bibigrid2.core.utility.paths import bin_path as biRP
from bibigrid2.core.utility.handler import cluster_ssh_handler
from bibigrid.core.utility import ansible_commands as aC
from bibigrid.core.utility.handler import ssh_handler
from bibigrid.core.utility.paths import ansible_resources_path as aRP
from bibigrid.core.utility.paths import bin_path as biRP
from bibigrid.core.utility.handler import cluster_ssh_handler

LOG = logging.getLogger("bibigrid")

File renamed without changes.
File renamed without changes.
15 changes: 7 additions & 8 deletions bibigrid2/core/startup.py → bibigrid/core/startup.py
@@ -10,11 +10,11 @@

import yaml

from bibigrid2.core.actions import check, create, ide, list_clusters, terminate_cluster, update, version
from bibigrid2.core.utility import command_line_interpreter
from bibigrid2.core.utility.handler import configuration_handler, provider_handler
from bibigrid.core.actions import check, create, ide, list_clusters, terminate_cluster, update, version
from bibigrid.core.utility import command_line_interpreter
from bibigrid.core.utility.handler import configuration_handler, provider_handler

LOGGING_HANDLER_LIST = [logging.StreamHandler(), logging.FileHandler("bibigrid2.log")] # stdout and to file
LOGGING_HANDLER_LIST = [logging.StreamHandler(), logging.FileHandler("bibigrid.log")] # stdout and to file
VERBOSITY_LIST = [logging.WARNING, logging.INFO, logging.DEBUG]
LOGGER_FORMAT = "%(asctime)s [%(levelname)s] %(message)s"

@@ -38,7 +38,7 @@ def get_cluster_id_from_mem():
return None


def set_logger(verbosity):
def set_logger_verbosity(verbosity):
"""
Sets verbosity, format and handler.
:param verbosity: level of verbosity
@@ -48,7 +48,6 @@ def set_logger(verbosity):
capped_verbosity = min(verbosity, len(VERBOSITY_LIST) - 1)
# LOG.basicConfig(format=LOGGER_FORMAT, level=VERBOSITY_LIST[capped_verbosity],
# handlers=LOGGING_HANDLER_LIST)
logging.basicConfig(format=LOGGER_FORMAT, handlers=LOGGING_HANDLER_LIST)

log = logging.getLogger("bibigrid")
log.setLevel(VERBOSITY_LIST[capped_verbosity])
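The verbosity handling above can be illustrated in isolation: the `-v` flag count is capped to the highest defined level, so passing extra `-v` flags never raises an IndexError. A sketch using the same three-level list:

```python
import logging

# Mirrors startup.py: index 0 = default, each -v flag raises verbosity by one.
VERBOSITY_LIST = [logging.WARNING, logging.INFO, logging.DEBUG]


def pick_level(verbosity):
    """Map a -v count to a logging level, capping at the most verbose entry."""
    capped_verbosity = min(verbosity, len(VERBOSITY_LIST) - 1)
    return VERBOSITY_LIST[capped_verbosity]


print(logging.getLevelName(pick_level(0)))  # WARNING
print(logging.getLevelName(pick_level(2)))  # DEBUG
print(logging.getLevelName(pick_level(9)))  # still DEBUG, capped
```

Capping with `min` keeps the CLI forgiving: `-vvvvv` simply behaves like `-vv` instead of crashing.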
@@ -126,9 +125,9 @@ def main():
Interprets command line, sets logger, reads configuration and runs selected action. Then exits.
:return:
"""

logging.basicConfig(format=LOGGER_FORMAT, handlers=LOGGING_HANDLER_LIST)
args = command_line_interpreter.interpret_command_line()
set_logger(args.verbose)
set_logger_verbosity(args.verbose)
configurations = configuration_handler.read_configuration(args.config_input)
if configurations:
sys.exit(run_action(args, configurations, args.config_input))
Expand Up @@ -3,7 +3,7 @@
"""

import os
import bibigrid2.core.utility.paths.ansible_resources_path as aRP
import bibigrid.core.utility.paths.ansible_resources_path as aRP

#TO_LOG = "| sudo tee -a /var/log/ansible.log"
#AIY = "apt-get -y install"
@@ -45,7 +45,8 @@

# Execute
PLAYBOOK_HOME = ("sudo mkdir -p /opt/playbook", "Create playbook home.")
PLAYBOOK_HOME_RIGHTS = ("sudo chown ubuntu:ubuntu /opt/playbook", "Adjust playbook home permission.")
PLAYBOOK_HOME_RIGHTS = ("uid=$(id -u); gid=$(id -g); sudo chown ${uid}:${gid} /opt/playbook",
"Adjust playbook home permission.")
MV_ANSIBLE_CONFIG = (
"sudo install -D /opt/playbook/ansible.cfg /etc/ansible/ansible.cfg", "Move ansible configuration.")
EXECUTE = (f"ansible-playbook {os.path.join(aRP.PLAYBOOK_PATH_REMOTE, aRP.SITE_YML)} -i "
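The reworked `PLAYBOOK_HOME_RIGHTS` command replaces the hardcoded `ubuntu:ubuntu` owner with the caller's own uid/gid, so it also works on images whose default user differs (e.g. Debian). The same resolution in a Python sketch (the helper name is made up for illustration):

```python
import os


def playbook_chown_command(path="/opt/playbook"):
    """Build the chown command from the current uid/gid instead of assuming 'ubuntu'."""
    uid = os.getuid()
    gid = os.getgid()
    return f"sudo chown {uid}:{gid} {path}"


print(playbook_chown_command())
```

Resolving ownership at runtime is the same idea as the shell's `$(id -u)`/`$(id -g)`: the command adapts to whichever default user the cloud image provides.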
