Dev (#494)
* fixed rule setting for security groups

* fixed errors caused by multiple networks now being a list.

* trying to figure out why applying routes only works once.

* Added more echoes for better debugging.

* updated most tests

* fixed validate_configuration.py tests.

* Updated tests for startup.py

* fixed bug in terminate that caused assume_yes to work as assume_no

* updated terminate_cluster tests.

* fixed formatting, improved pylint score

* adapted tests

* updated return threading test

* updated provider_handler

* tests not finished yet

* Fixed server regex issue

* test list clusters updated

* fixed too open cluster_id regex

* added missing "to"

* fixed id_generation tests

* renamed configuration handler to please linter

* removed unnecessary tests and updated remaining

* fixed remaining "subnet list gets handled as a single subnet" bug and finalized multiple routes handling.

* updated tests not finished yet

* improved code style

* fixed tests further. One to fix left.

* fixed additional tests

* fixed all tests for ansible configurator

* fixed comment

* fixed multiple tests

* fixed a few tests

* Fixed create

* fixed some issues regarding

* fixing test_provider.py

* removed infrastructure_cloud.yml

* minor fixes

* fixed all tests

* removed print

* changed prints to log

* removed log

* fixed None bug where [] is expected when no sshPublicKeyFile is given.

* removed master from compute if use master as compute is false

* restructured role additional in order to make it easier to include. Added quotes for consistency.

* Updated all tests (#448)

* Introduced yaml lock (#464)

* removed unnecessary close

* simplified update_hosts

* updated logging to separate folder and file based on creation date

* many small changes and introducing locks

* restructured log files again. Removed outdated key warnings from bibigrid.yml

* added a few logs

* further improved logging hierarchy

* Added specific folder places for temporary job storage. This might solve the "SlurmSpoolDir full" bug.

* Improved logging

* Tried to fix temps and tried updating to 23.11, but it has errors, so that part is commented out

* added initial space

* added deletion of an already existing worker on worker startup, since no worker would have been started if Slurm had known about the existing worker. This is not the best solution. (#468)

* made waitForServices a cloud specific key (#465)

* Improved log messages in validate_configuration.py to make fixing your configuration easier when using a hybrid-/multi-cloud setup (#466)

* removed unnecessary line in provider.py and added cloud information to every log in validate_configuration.py for easier fixing.

* track resources for providers separately to make quota checking precise

* switched from the low-level cinder call to the high-level block_storage.get_limits() (see the sketch below)
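
A minimal, hedged sketch of the high-level call mentioned above — plain openstacksdk usage, not BiBiGrid's actual quota-checking code; the cloud name `openstack` is an assumption:

```python
# Hedged sketch: read block-storage quota limits via openstacksdk's high-level API.
# Assumes a clouds.yaml entry named "openstack"; not BiBiGrid's implementation.
import openstack

conn = openstack.connect(cloud="openstack")
limits = conn.block_storage.get_limits()
# limits.absolute carries the project's absolute quota values,
# e.g. the total volume gigabytes and how much of that is already used.
print(limits.absolute)
```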

* added keyword for ssh_timeout and improved argument passing for ssh.

* Update issue templates

* fixed a missing LOG

* removed overwritten variable instantiation

* Update bug_report.md

* removed trailing whitespaces

* added comment about sshTimeout key

* Create dependabot.yml (#479)

* Code cleanup and minor improvement (#482)

* fixed :param and :return to @param and @return

* many spelling mistakes fixed

* added bibigrid_version to common configuration

* added timeout to common_configuration

* removed debug verbosity and improved log message wording

* fixed is_active structure

* fixed pip dependabot.yml

* added documentation. Changed the timeout to 2**(2+attempts) to decrease the number of attempts that are unlikely to succeed (illustrated below)
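
A quick, hedged illustration of the backoff formula named above — a standalone sketch, not the surrounding BiBiGrid code; attempt counting starting at 1 is an assumption:

```python
# Hedged sketch of the timeout formula 2**(2 + attempts).
def retry_timeout(attempts: int) -> int:
    """Seconds to wait before the next attempt."""
    return 2 ** (2 + attempts)

# Attempts 1..5 wait 8, 16, 32, 64 and 128 seconds respectively.
print([retry_timeout(attempt) for attempt in range(1, 6)])
```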

* 474 allow non-on-demand (permanent) workers (#487)

* added worker server start without anything else

* added host entry for permanent workers

* added state unknown for permanent nodes

* added on_demand key for groups and instances for ansible templating

* fixed wording

* temporary solution for custom execute list

* added documentation for onDemand

* added ansible.cfg replacement

* fixed path. Added ansible.cfg to the gitignore

* updated default creation and gitignore. Fixed non-vital bug that didn't reset hosts for new cluster start.

* Code cleanup (#490)

* fixed :param and :return to @param and @return

* many spelling mistakes fixed

* added bibigrid_version to common configuration

* attempted zabbix linting fix. Needs testing.

* fixed double import

* Slurm upgrade fixes (#473)

* removed slurm errors

* added bibilog to show the output log of the most recent worker start. Tried fixing the Slurm 23.11 bug.

* fixed a few vpnwkr -> vpngtw remnants. Excluded vpngtw from slurm setup

* improved comments regarding changes and versions

* removed cgroupautomount as it is defunct

* Moved explicit slurm start to avoid errors caused by resume and suspend programs not being copied to their final location yet

* added word for clarification

* Fixed non-fatal bug that led to non-zero exits on runs without any error.

* changed slurm apt package to slurm-bibigrid

* set version to 23.11.*

* added a few more checks to make sure everything is set up before installing packages

* Added configuration pinning

* changed ignore_error to failed_when false

* fixed or ignored lint fatals

* Update tests (#493)

* updated tests

* removed print

* updated tests

* updated tests

* fixed too loose condition

* updated tests

* added cloudScheduling and userRoles in bibigrid.yml

* added userRoles in documentation

* added varsFiles and comments

* added folder path in documentation

* fixed naming

* added that vars are optional

* polished userRoles documentation

* 439 additional ansible roles (#495)

* added roles structure

* updated roles_path

* fixed upper lower case

* improved customRole implementation

* minor fixes regarding role_paths

* improved variable naming of user_roles

* added documentation for other configurations

* added new feature keys

* fixed template files not being j2

* added helpful comments and removed no longer used roles/additional/

* userRoles crashes if no role is set

* fixed ansible.cfg path '"'

* implemented partition system

* added keys customAnsibleCfg and customSlurmConf as keys that stop the automatic copying

* improved spacing

* added logging

* updated documentation

* updated tests. Improved formatting

* fix for service being too fast for startup

* fixed remote src

* changed RESUME to POWER_DOWN and removed delete call which is now handled via Slurm that calls terminate.sh (#503)

* Update check (#499)

* updated validate_configuration.py in order to provide schema validation. Moved the cloud_identifier setting even closer to program start to allow better logging when performing actions other than create.

* small log change and fix of schema key vpnInstance

* updated tests

* removed no longer relevant test

* added schema validation tests

* fixed ftype. Errors with multiple volumes.

* made automount bound to defined mountPoints and therefore customizable

* added empty line and updated bibigrid.yml

* fixed nfsshare regex error and updated check to fit to the new name mountpoint pattern

* hotfix: folder creation now before accessing hosts.yml

* fixed tests

* moved dnsmasq installation in front of /etc/resolv removal

* fixed tests

* fixed nfs exports by removing unnecessary "/" at the beginning

* fixed master running slurmd but not being listed in slurm.conf. Now set to drained.

* improved logging

* increased timeout. Corrected comment in slurm.j2

* updated info regarding timeouts (changed from 4 to 5).

* added SuspendTimeout as optional to elastic_scheduling

* updated documentation

* permission fix

* fixes #394

* fixes #394 (also for hybrid cluster)

* increased ResumeTimeout by 5 minutes. Changed yml to yaml.

* changed all yml to yaml (as preferred by yaml)

* updated timeouts. updated tests

* fixes #394 - remove host from zabbix when terminated

* zabbix api no longer used when not set in configuration

* pleased linting by using false instead of no

* added logging of traceroute when the error is not known, even if the debug flag is not set. Added a few other logs

* Update action 515 (#516)

* configuration update possible 515

* added experimental

* fixed indentation

* fixed missing newline at EOF. Summarized restarts.

* added check for running workers

* fixed multiple workers due to faulty update

* updated tests and removed done todos

* updated documentation

* removed print

* Added apt-reactivate-auto-update to reactivate updates at the end of the playbook run (#518)

* changed theia to 900. Added apt-reactivate-auto-update as new 999.

* added new line at end of file

* changed list representation

* added multiple configuration keys for boot volume handling

* updated documentation

* updated documentation for new volumes and for usually ignored keys

* updated and added tests

---------

Co-authored-by: Jan Krueger <[email protected]>
XaverStiensmeier and jkrue authored Sep 6, 2024
1 parent 44c6d0c commit acb581a
Showing 100 changed files with 2,552 additions and 1,364 deletions.
33 changes: 33 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,33 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behaviour:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behaviour**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Setup (please complete the following information):**
- OS: [e.g. Ubuntu 22.04]
- Cloud Location: [e.g. Bielefeld]
- Configuration
- BiBiGrid Version

**Additional context**
Add any other context about the problem here.
20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
13 changes: 13 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,13 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"
open-pull-requests-limit: 10
versioning-strategy: "auto"
2 changes: 1 addition & 1 deletion .github/workflows/linting.yml
@@ -15,6 +15,6 @@ jobs:
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: ansible_lint
run: ansible-lint resources/playbook/roles/bibigrid/tasks/main.yml
run: ansible-lint resources/playbook/roles/bibigrid/tasks/main.yaml
- name: pylint_lint
run: pylint bibigrid
15 changes: 11 additions & 4 deletions .gitignore
@@ -3,14 +3,21 @@
.run/

# variable resources
resources/playbook/site.yml
resources/playbook/ansible.cfg
resources/playbook/roles/bibigrid/templates/slurm/slurm.j2
resources/playbook/site.yaml
resources/playbook/ansible_hosts
resources/playbook/vars/
resources/playbook/host_vars/
resources/playbook/group_vars/
tests/resources/*
!test/resources/test_configuration.yml

resources/tests/bibigrid_test.yaml

# Roles
resources/playbook/roles_galaxy/*
!resources/playbook/roles_galaxy/README
resources/playbook/roles_user/*
!resources/playbook/roles_user/README
!resources/playbook/roles_user/resistance_nextflow
# any log files
*.log
log/
66 changes: 34 additions & 32 deletions README.md
@@ -1,6 +1,6 @@
# BiBiGrid
BiBiGrid is a cloud cluster creation and management framework for OpenStack
(and more providers in the future).
BiBiGrid is a framework for creating and managing cloud clusters, currently supporting OpenStack.
Future versions will support additional cloud providers.

BiBiGrid uses Ansible to configure standard Ubuntu 20.04/22.04 LTS as
well as Debian 11 cloud images. Depending on your configuration BiBiGrid
@@ -26,22 +26,23 @@ might be just what you need.
<summary> Brief, technical BiBiGrid overview </summary>

### How to configure a cluster?
#### Configuration File: bibigrid.yml
A [template](bibigrid.yml) file is included in the repository ([bibigrid.yml](bibigrid.yml)).
#### Configuration File: bibigrid.yaml
A [template](bibigrid.yaml) file is included in the repository ([bibigrid.yaml](bibigrid.yaml)).

The cluster configuration file consists of a list of configurations. Every configuration describes the provider specific configuration.
The first configuration additionally contains all the keys that apply to the entire cluster (roles for example).
Currently only clusters with one provider are possible, so focus only on the first configuration in the list.

The configuration template [bibigrid.yml](bibigrid.yml) contains many helpful comments, making completing it easier for you.
The cluster configuration file, `bibigrid.yaml`, consists of a list of configurations.
Each configuration describes provider-specific settings.
The first configuration in the list also contains keys that apply to the entire cluster (e.g., roles).

The configuration template [bibigrid.yaml](bibigrid.yaml) contains many helpful comments, making completing it easier for you.

[You need more details?](documentation/markdown/features/configuration.md)

#### Cloud Specification Data: clouds.yml
#### Cloud Specification Data: clouds.yaml
To access the cloud, authentication information is required.
You can download your `clouds.yaml` from OpenStack.

Your `clouds.yaml` is to be placed in `~/.config/bibigrid/` and will be loaded by BiBiGrid on execution.
Place the `clouds.yaml` file in the `~/.config/bibigrid/` directory. BiBiGrid will load this file during execution.

[You need more details?](documentation/markdown/features/cloud_specification_data.md)

@@ -50,28 +51,27 @@ If you haven't used BiBiGrid1 in the past or are unfamiliar with OpenStack, we h
[tutorial](https://github.com/deNBI/bibigrid_clum2022) instead.

#### Preparation
1. Download (or create) the `clouds.yaml` (and optionally `clouds-public.yaml`) file as described [above](#cloud-specification-data-cloudsyml).
1. Download (or create) your `clouds.yaml` file (and optionally `clouds-public.yaml`) as described [above](#cloud-specification-data-cloudsyaml).
2. Place the `clouds.yaml` into `~/.config/bibigrid`
3. Fill the configuration, `bibigrid.yml`, with your specifics. At least you need: A master instance with valid type and image,
a region, an availability zone, an sshUser (most likely ubuntu) and a subnet.
You probably also want at least one worker with a valid type, image and count.
3. Fill in the `bibigrid.yaml` configuration file with your specifics. At a minimum you need to specify: a master instance with valid type and image,
an sshUser (most likely ubuntu) and a subnet.
You will likely also want to specify at least one worker instance with a valid type, image, and count.
4. If your cloud provider runs post-launch services, you need to set the `waitForServices`
key, which expects a list of services to wait for.
5. Create a virtual environment from `bibigrid/requirements.txt`.
See [here](https://www.akamai.com/blog/developers/how-building-virtual-python-environment) for more detailed info.
6. Take a look at [First execution](#first-execution)

#### First execution
Before follow the steps described at [Preparation](#preparation).
Before proceeding, ensure you have completed the steps described in the [Preparation section](#preparation).

After cloning the repository navigate to `bibigrid`.
In order to execute BiBiGrid source the virtual environment created during [preparation](#preparation).
Take a look at BiBiGrid's [Command Line Interface](documentation/markdown/features/CLI.md)
if you want to explore for yourself.
After cloning the repository, navigate to the bibigrid directory.
Source the virtual environment created during [preparation](#preparation) to execute BiBiGrid.
Refer to BiBiGrid's [Command Line Interface documentation](documentation/markdown/features/CLI.md) if you want to explore additional options.

A first execution run through could be:
1. `./bibigrid.sh -i [path-to-bibigrid.yml] -ch`: checks the configuration
2. `./bibigrid.sh -i 'bibigrid.yml -i [path-to-bibigrid.yml] -c'`: creates the cluster (execute only if check was successful)
1. `./bibigrid.sh -i [path-to-bibigrid.yaml] -ch`: checks the configuration
2. `./bibigrid.sh -i [path-to-bibigrid.yaml] -c`: creates the cluster (execute only if the check was successful)
3. Use **BiBiGrid's create output** to investigate the created cluster further. Connecting to the IDE might be especially helpful.
Otherwise, connect using ssh.
4. While connected via ssh, try `sinfo` to print node info
@@ -85,17 +85,17 @@ Great! You've just started and terminated your first cluster using BiBiGrid!
</details>

### Troubleshooting
If your cluster doesn't start up, please first make sure your configurations file is valid (`-ch`).
If it is not, try to modify the configurations file to make it valid. Use `-v` or `-vv` to get a more verbose output,
so you can find the issue faster. Also double check if you have sufficient permissions to access the project.
If you can't make your configurations file valid, please contact a developer.
If that's the case, please contact a developer and/or manually check if your quotas are exceeded.
Some quotas can currently not be checked by bibigrid.
If your cluster doesn't start up, first ensure your configuration file is valid using the `-ch` option.
If the configuration is invalid, modify the file as needed.
Use the `-v` or `-vv` options for more verbose output to help identify the issue faster.
Also, double-check that you have sufficient permissions to access the project.
If you cannot make your configuration file valid, please contact a developer.
Additionally, manually check if your quotas are exceeded, as some quotas cannot currently be checked by BiBiGrid.

**Whenever you contact a developer, please send your logfile along.**

# Documentation
If you would like to learn more about BiBiGrid please follow a fitting link:
For more information about BiBiGrid, please visit the following links:
- [BiBiGrid Features](documentation/markdown/bibigrid_feature_list.md)
- [Software used by BiBiGrid](documentation/markdown/bibigrid_software_list.md)

@@ -118,7 +118,9 @@ Workers are powered down once they are not used for a longer period.
[https://github.com/BiBiServ/Development-Guidelines](https://github.com/BiBiServ/Development-Guidelines)

## On implementing concrete providers
New concrete providers can be implemented very easily. Just copy the `provider.py` file and implement all methods for
your cloud-provider. Also inherit from the `provider` class. After that add your provider to the providerHandler lists; giving it a associated name for the
configuration files. By that, your provider is automatically added to BiBiGrid's tests and regular execution. By testing
your provider first, you will see whether all provider methods are implemented as expected.
Implementing new cloud providers is straightforward.
Copy the `provider.py` file and implement all necessary methods for your cloud provider.
Inherit from the `provider` class.
Add your provider to the `providerHandler` lists and assign it an associated name for the configuration files.
This will automatically include your provider in BiBiGrid's tests and regular execution.
Test your provider to ensure all methods are implemented correctly.
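
The pattern described above can be sketched as follows. This is a hedged, self-contained illustration: the abstract class actually lives in BiBiGrid's `provider.py`, and the real providerHandler registration differs from the stand-ins used here.

```python
# Hedged sketch of adding a concrete provider; class and method names are assumptions.
from abc import ABC, abstractmethod


class Provider(ABC):
    """Stand-in for the abstract provider class from provider.py."""

    @abstractmethod
    def create_server(self, name, flavor, image, network, **kwargs): ...

    @abstractmethod
    def delete_server(self, name_or_id): ...


class MyCloudProvider(Provider):
    """Implements every provider method against your cloud's API."""
    NAME = "MyCloud"  # the associated name used in the configuration files

    def create_server(self, name, flavor, image, network, **kwargs):
        raise NotImplementedError("call your cloud's API here")

    def delete_server(self, name_or_id):
        raise NotImplementedError("call your cloud's API here")


# Registration sketch: in BiBiGrid you would add the class to the providerHandler
# lists instead, so tests and regular execution pick it up automatically.
PROVIDER_NAME_DICT = {"MyCloud": MyCloudProvider}
```
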
96 changes: 47 additions & 49 deletions bibigrid.yml → bibigrid.yaml
@@ -7,101 +7,99 @@
cloud: openstack # name of clouds.yaml cloud-specification key (which is value to top level key clouds)

# -- BEGIN: GENERAL CLUSTER INFORMATION --
# sshTimeout: 5 # number of attempts to connect to instances during startup with delay in between
# cloudScheduling:
# sshTimeout: 5 # like sshTimeout but during the on demand scheduling on the running cluster

## sshPublicKeyFiles listed here will be added to access the cluster. A temporary key is created by bibigrid itself.
#sshPublicKeyFiles:
# - [public key one]

## Volumes and snapshots that will be mounted to master
# autoMount: False # WARNING: will overwrite unidentified filesystems
#masterMounts: # KEY NOT FULLY IMPLEMENTED YET
# - [mount one]
#masterMounts: (optional) # WARNING: will overwrite unidentified filesystems
# - name: [volume name]
# mountPoint: [where to mount to] # (optional)

#nfsShares: # KEY NOT FULLY IMPLEMENTED YET; /vol/spool/ is automatically created as a nfs
#nfsShares: /vol/spool/ is automatically created as a nfs
# - [nfsShare one]

## Ansible (Galaxy) roles can be added for execution # KEY NOT IMPLEMENTED YET
#ansibleRoles:
# - file: SomeFile
# hosts: SomeHosts
# name: SomeName
# vars: SomeVars
# vars_file: SomeVarsFile

#ansibleGalaxyRoles: # KEY NOT IMPLEMENTED YET
# - hosts: SomeHost
# name: SomeName
# galaxy: SomeGalaxy
# git: SomeGit
# url: SomeURL
# vars: SomeVars
# vars_file: SomeVarsFile
# userRoles: # see ansible_hosts for all options
# - hosts:
# - "master"
# roles: # roles placed in resources/playbook/roles_user
# - name: "resistance_nextflow"
# varsFiles: # (optional)
# - [...]

## Uncomment if you don't want to assign a public ip to the master; for internal clusters (Tuebingen).
#useMasterWithPublicIp: False # defaults True if False no public-ip (floating-ip) will be allocated
# useMasterWithPublicIp: False # defaults True if False no public-ip (floating-ip) will be allocated
# gateway: # if you want to use a gateway for create.
# ip: # IP of gateway to use
# portFunction: 30000 + oct4 # variables are called: oct1.oct2.oct3.oct4

# deleteTmpKeypairAfter: False
# dontUploadCredentials: False

# Other keys - default False
#localFS: True
#localDNSlookup: True
# Other keys - these are default False
# Usually Ignored
##localFS: True
##localDNSlookup: True

#zabbix: True
#nfs: True
#ide: True # A nice way to view your cluster as if you were using Visual Studio Code

useMasterAsCompute: True # Currently ignored by slurm
useMasterAsCompute: True

# bootFromVolume: False
# terminateBootVolume: True
# volumeSize: 50

#waitForServices: # existing service name that runs after an instance is launched. BiBiGrid's playbook will wait until service is "stopped" to avoid issues
# waitForServices: # existing service name that runs after an instance is launched. BiBiGrid's playbook will wait until service is "stopped" to avoid issues
# - de.NBI_Bielefeld_environment.service # uncomment for cloud site Bielefeld

# master configuration
masterInstance:
type: # existing type/flavor on your cloud. See launch instance>flavor for options
image: # existing active image on your cloud. Consider using regex to prevent image updates from breaking your running cluster
# features: # list
# partitions: # list
# bootVolume: None
# bootFromVolume: True
# terminateBootVolume: True
# volumeSize: 50

# -- END: GENERAL CLUSTER INFORMATION --

# fallbackOnOtherImage: False # if True, most similar image by name will be picked. A regex can also be given instead.

# worker configuration
#workerInstances:
# workerInstances:
# - type: # existing type/flavor on your cloud. See launch instance>flavor for options
# image: # same as master. Consider using regex to prevent image updates from breaking your running cluster
# count: # any number of workers you would like to create with set type, image combination
# # features: # list
# # partitions: # list
# # bootVolume: None
# # bootFromVolume: True
# # terminateBootVolume: True
# # volumeSize: 50

# Depends on cloud image
sshUser: # for example ubuntu

# Depends on cloud site:
# Berlin : regionOne
# Bielefeld : bielefeld
# DKFZ : regionOne
# Giessen : RegionOne
# Heidelberg : RegionOne
# Tuebingen : RegionOne
region: Bielefeld

# Depends on cloud site:
# Berlin : nova
# Bielefeld : default
# DKFZ : nova
# Giessen : nova
# Heidelberg : nova
# Tuebingen : nova
availabilityZone: default

# Depends on cloud site and project
subnet: # existing subnet on your cloud. See https://openstack.cebitec.uni-bielefeld.de/project/networks/
# or network:
# gateway: # if you want to use a gateway for create.
# ip: # IP of gateway to use
# portFunction: 30000 + oct4 # variables are called: oct1.oct2.oct3.oct4

# Uncomment if no full DNS service for started instances is available.
# Currently, the case in Berlin, DKFZ, Heidelberg and Tuebingen.
#localDNSLookup: True

#features: # list

#- [next configurations] # KEY NOT IMPLEMENTED YET
# elastic_scheduling: # for large or slow clusters increasing these timeouts might be necessary to avoid failures
# SuspendTimeout: 60 # after SuspendTimeout seconds, slurm allows to power up the node again
# ResumeTimeout: 1200 # if a node doesn't start in ResumeTimeout seconds, the start is considered failed.

#- [next configurations]
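
As a side note on the gateway key above: the `portFunction` (e.g. `30000 + oct4`) computes the port used to reach an instance through the gateway from the octets of its private IP. A hedged, standalone sketch, not BiBiGrid's implementation:

```python
# Hedged sketch: evaluate the portFunction "30000 + oct4" for a private IPv4 address.
# oct1..oct4 are the address octets; only oct4 is used by this particular function.
def gateway_port(private_ip: str, base: int = 30000) -> int:
    oct1, oct2, oct3, oct4 = (int(octet) for octet in private_ip.split("."))
    return base + oct4


print(gateway_port("10.0.0.17"))  # -> 30017
```
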
8 changes: 4 additions & 4 deletions bibigrid/core/actions/check.py
@@ -7,10 +7,10 @@
def check(configurations, providers, log):
"""
Uses validate_configuration to validate given configuration.
:param configurations: list of configurations (dicts)
:param providers: list of providers
:param log:
:return:
@param configurations: list of configurations (dicts)
@param providers: list of providers
@param log:
@return:
"""
success = validate_configuration.ValidateConfiguration(configurations, providers, log).validate()
check_result = "succeeded! Cluster is ready to start." if success else "failed!"