423 rest api prototype (#437)
* added exact versions for openstacksdk and python-openstackclient (#413)

* Keep master updated (#401)

* add apt.bi.denbi.de as package source

* update slurm tasks (now uses self-built Slurm packages -> v22.05.7, restructure slurm files)

* add documentation to build newer Slurm package

* fixes

* slurmrestd uses openapi/v0.0.38

* Added check_nfs as a non-fatal evaluation (#366)

* Added "." and "-" cases for cid. This allows further rescuing and gives info messages. (#365)

* Added identifier for when no profile is defined to have a distinct identifier.

* Activated vpn setup

* Fixed example command

* Added logging info for file push and commands

* fix slurmrestd configuration

* Implementing wireguard

* update task order (slurm-server)

* fix default user chown settings

* Add an additional mariadb repository for Ubuntu 20.04. Zabbix 7.2 needs at least MariaDB 10.5 or higher and Focal comes with MariaDB 10.3.

* Extend slurm documentation.

* Extends documentation to note that BiBiGrid now supports Ubuntu 20.04/22.04 and Debian 11 (fixes #348).

* cleanup

* fix typos in documentation

* Updated wg0

* fix typos in documentation

* add workflow-job to lint python/ansible

* add more output

* add more output

* update runner working directory

* make ansible_lint happy

* rewrite linting workflow
add linting dependencies

* fix a typo

* fix pylintrc -> remove ignore-pattern=test/ (not needed, since pylint currently lints bibigrid folder)
make pylint happy

* fixing jinja

* changed jinja

* Fixed wrong when clause

* Removed unnecessary comments and added index implementation

* this_peer is now used

* Added configuration reload if necessary

* Moved restart to handlers

* Added missing handler

* Changed to systemd setup

* Fixed nfs

* Fixed a few bugs more to come

* added some defaults

* Added vpn wkr without ip

* removed unnecessary print and fixed typo

* added vpn counter

* debugging bug

* debugging vpnwkr naming is wrong

* Commenting out worker creation

* Fixed bug making first worker numberless

* fixed number order in deletion

* vpn workers added to instances.yml

* Added key generator for wireguard keys
Fixed minor bugs and added wireguard vpn support except subnets
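
For context: a WireGuard keypair is just a Curve25519 keypair, base64-encoded. The generator added here isn't shown in this excerpt; a minimal sketch of the idea, assuming the `cryptography` package rather than whatever BiBiGrid actually invokes (possibly just `wg genkey`):

```python
import base64

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey


def generate_wireguard_keypair():
    """Return a (private, public) WireGuard keypair as base64 strings."""
    private_key = X25519PrivateKey.generate()
    private_bytes = private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption())
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw)
    return (base64.b64encode(private_bytes).decode(),
            base64.b64encode(public_bytes).decode())
```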

* Added subnet cidr

* Fixing default value bugs

* added identifier

* added identifier as variable and changed providers to access all flavors

* reformatted

* slurm

* fixed ip assigning

* foreign workers are now included in compute nodes

* Added vpnwkrs to playbook start

* Fixed formatting. Added identifier instead of "Test" for wireguard configuration to improve debugging

* Larger rework of instances file

* fixing bugs caused by aforementioned rework

* fixing bugs caused by aforementioned rework

* fixing bugs caused by aforementioned rework

* fixing bugs caused by aforementioned rework

* cluster_dict no longer needed for ansible configuration

* Changed instances_yml so it allows grouping by cloud

* Renamed to match jinja extension of other files

* instances.master

* instances.master

* removed master from instances list and fixed minor bugs.

* Fixed slicing

* Removed empty vpnworkers list as there can be only one

* Removed no longer needed import

* minor reference fixes regarding master and vpn

* Changed ip to cidr as it should be in nfs exports

* removed faulty space in nfs export entry

* added vpnwkrs to list of nodes to run ansible-playbook on

* added missing vpnwkr

* Set default partition

* Removed default partition as this key doesn't exist

* default if cloud fits

* all credentials will now be stored. Not compatible with save script yet.

* fixed wrong parameter type due to ac handling multiple providers now instead of just one

* Fixed cidr bug

* changed cloud_specification to use identifier

* Fixed master not being filtered out due to buggy detection

* create is now cloud structured but badly implemented (needs asynchronous implementation)

* Removed master = none

* removed faulty bracket.

* Worker start follows cloud structure now

* fixed badly placed assignment of ac_cloud_yaml

* replaced no-longer-fitting regex with an actual exact check using slurm's hostname resolution
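
For context: Slurm can expand a hostlist expression into concrete hostnames via `scontrol show hostnames`, which allows an exact membership test instead of a regex. How BiBiGrid wires this up is not shown in this excerpt; a hedged sketch (the hostlist value is a made-up example):

```python
import subprocess


def resolve_hostlist(hostlist_expression):
    """Expand a Slurm hostlist expression into individual hostnames,
    e.g. 'host-[0-2]' -> ['host-0', 'host-1', 'host-2']."""
    output = subprocess.check_output(
        ["scontrol", "show", "hostnames", hostlist_expression], text=True)
    return output.splitlines()


def is_cluster_node(hostname, hostlist_expression):
    # exact membership test instead of a fuzzy regex match
    return hostname in resolve_hostlist(hostlist_expression)
```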

* fixed old variable name leading to hiccups

* Changed nfs exports to add all subnets. Currently not very nice looking, but working.

* Added comments and improved variable names.

* Added delete_server.py routine and connected it to fail.sh (untested).

* Further grouped code and simplified logging.

* fixed minor bugs and added a little bit of logging.

* patch for wait for post-launch services to stop

* Added private_v4 to configuration implementation. Bit dirty.

* Changed nfs for workers back to private_v4. Will crash with vpnwkr as long as security groups are not set correctly.

* Added missing instances

* add dnsmasq support ( #372 ) (#380)

* add dnsmasq support ( #372 )

* extend dnsmasq support ( #372 )

* bugfixes dnsmasq support ( #372 )

* fix ansible syntax
add all vpnworker to dnsmasq.hosts ( #372 )
change order of copying clouds.yaml
many changes

* Added wireguard_ip

* wireguard_ip increased by 1 to ignore master

* Added a print for private_v4 to symbolize the start of dns entry creation

* Add support for additional vars file : hosts.yml
Extend hosts.j2 template to support worker entries

* - extends instances configuration
- add worker_userdata template

* - remove unused wireguard-worker.yml
- add userdata support (create_server.py)
- enable ip forwarding and tcp mtu probing  on vpn gateways

* Fix program crash when image is not active (#382)

* Fixed function missing call

* Fixed linter that wasn't troubled before

* Fix ephemeral not working (#385)

* implemented usage of host_vars

* probably solved, but not best solution yet

* changed from host_vars to group_vars to have fewer files doing the same work

* update requirements.txt

* add ConfigurationException

* Provider and its implementation for Openstack get another method to add allowed_addresses to an interface/port

* Remove no-longer-needed functions/code fragments. Add support for extended network configuration when creating a multi-cloud cluster.

* added hybrid cloud

* updating check documentation

* updating check documentation

* updating check documentation

* Removed artefact

* Filled text beyond headings

* Add security group support to provider and its implementing classes.

* Update create action:
- support for security groups
- slightly restructuring

* add wireguard network to list of allowed addresses

* fix wrong usage of jinja templating

* add usage of security groups when creating a worker

* fix wireguard systemd network configuration

* add firewall rules when running in a multi-cloud setup

* add termination of created security groups
fix a bug concerning adding allowed addresses

* fix "allowed addresses" when running with more than 2 providers

* pin openstacksdk to an older version to avoid deprecation warnings.

* Added host file solution for vpnwkrs. Moved wireguard to configuration.

* Added host vars to deletion process and fixed vpnwkrs using group vars instead of host vars bug.

* Fixing structural changes due to merge

* Fixed vpn workers getting lost

* fixed merge bug, improved data structure ansible/jinja

* Removed another bug regarding passing too many arguments.

* removed delay for now

* fixed worker count

* fixed wireguard

* Added reattempt for ConflictException; still not perfect.

* Further fixed vpnwkr merge issues

* Adapted command to new group vpn that contains both master and vpnwkr

* Fixed wireguard ip bug

* fixed bug wireguard not installed on vpn-worker

* Changed "local" to "ssh" in order to avoid a sudo rights issue on master.

* fixed group name?

* adapted timeout to experiences

* fixed group name now using "-" instead of ":"

* fixed userdata being a list because of using readlines instead of read. Now it is a string.

* group name cannot contain '-'; therefore switched to underscores. Maybe change this in the node naming convention as well.

* Make all clouds default

* first draft add ip routes

* Added ip routes to main.yml

* Changed ip route registration to make use of linux network files

* Workers now save the gateway_ip (private_v4 of master or vpnwkr). Also fixed a counting error.

* now using common variable wireguard_common instead of group_var wireguard which is always missing on workers.

* Added rights.

* Disabling netplan and going full networkd

* Disabling cloud network changes after initialization

* Added netplan deactivation

* Fixed connection issues

* Added missing handler and added a task that updates the host file on worker

* Fixed minor bad namings and added missing ".yaml" extension to task file

* Added implementation of "bibiname" a short script that allows node name creation
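
The generated script itself (from a `bibiname.j2` template) is not shown in this excerpt; conceptually it assembles names from the fixed prefix, a role, the cluster id and, for workers, an index. A hypothetical Python rendering of the convention:

```python
def bibiname(role, cluster_id, number=None):
    """Hypothetical helper mirroring BiBiGrid's node naming convention,
    e.g. bibigrid-master-abc123 or bibigrid-worker0-abc123-1."""
    if role == "master":  # there is only one master, so it never gets a number
        return f"bibigrid-master-{cluster_id}"
    return f"bibigrid-{role}-{cluster_id}-{number}"


print(bibiname("master", "abc123"))      # bibigrid-master-abc123
print(bibiname("worker0", "abc123", 1))  # bibigrid-worker0-abc123-1
```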

* fixed name issue regarding slurm user executing ansible. Now master name is determined without user involvement.

* renamed task to "generate bibiname script"

* Adapted scripts to meet hybrid cloud solution

* Added delete_server.py script to bin copied files

* fixed fail and terminate script

* changed terminate script to timeout delete

* fixed minor code issues

* fixed linting issues delete_server.py

* fixed linting issues provider.py

* fixed linting issues startup_tests.py

* fixed linting issues

* fixed linting issues

* fixed typo

* fixed termination ConflictException not caught

* Added basic structure for multi_cloud.md

* Added elixir compute presentation as an additional light-weight read.

* added this file that, in the future, may hold information regarding other projects that are using BiBiGrid. That makes it easier to keep an eye on all applications that might be affected by BiBiGrid's changes.

* Added basic wireguard.md documentation

* fixed grammar

* removed redundant warning

* added dnsmasq documentation structure

* removed encryption

* updated purpose description

* update DNS

* now creating empty hosts.yml file in order to allow ansible execution

* Remove entire vars folder

* fixed path

* changed provider.NAME to provider.cloud_specification['identifier']

* Removed vpnwkr from slurm as it should only be used to establish connection and not for computing

* Decoupled for loop worker ansible host creation from vpnwkr host creation

* fixed vpnwkr still being added to the partition even though the node doesn't exist anymore

* Fixed bug in bibiname.j2 that gave master a number (master never has a number as there is only one)

* removed all references to the instances.master

* removed further references to instances.yml and fixed bugs appearing because of it. Needs rework where master access can be shortened.

* fixed slurm.conf creating NodeName duplicates. Still unordered.

* Added all partition

* Removed instances.yml from create_server.py

* Removed instances.yml from delete_server.py

* removed last remains of instance.yml

* Servers are now created asynchronously.
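
Whether this uses threads, processes or an event loop is not visible in this excerpt; a minimal sketch of the pattern with a thread pool, where `provider.create_server` is a stand-in for the real provider call:

```python
from concurrent.futures import ThreadPoolExecutor


def create_servers(provider, server_specs):
    """Start all servers of one provider concurrently instead of one by one."""
    with ThreadPoolExecutor() as executor:
        futures = [executor.submit(provider.create_server, **spec)
                   for spec in server_specs]
        # result() blocks until done and re-raises any exception from the thread
        return [future.result() for future in futures]
```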

* Fixed rest error

* Added support for feature in slurm.conf

* Putting features into group_vars

* Updated configuration.md documentation to mention new feature "feature" for instances and configuration.

* Added merge information and updates bibigrid.yml accordingly

* added features to master and workergroups

* fixed features not added as string to slurm.conf

* added missing empty line

* Now a single string instead of a list of features is understood as well.

* Improved cloud_identifier selection and documented the new way: picking clouds.yaml key.

* updated configuration.md and removed many inaccuracies

* changed instances to instance for instance creation as workers are no longer created.

* Improved create.md

* Improved naming of subparagraph

* Fixed indentation, readability and documentation

* Improved logging information.

* Improved logging

* Added warning message when configuration is not a list.

* added configuration list parameter

* Added logging when network or subnet couldn't be set

* Improved logging of ConfigurationExceptions

* Improved documentation. Removed unnecessary variable in ide

* Improved documentation.

* Added brief information regarding wireguard and zabbix

* changed vpnwkr to vpngtw

* Fixed security group deletion for not multi-cloud clusters.

---------

Co-authored-by: Jan Krüger <[email protected]>
Co-authored-by: Jan Krüger <[email protected]>

* Added option to generate cluster_id before create process

* Added rest api prototype

* reworked naming convention and added terminate command. Added basic replies.

* Converted global LOG to class attribute self.log to enable different logs per thread

* Reverted logging to global logging because using redirect might be more feasible

* Using contextlib to redirect prints
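
Roughly, `contextlib.redirect_stdout` lets existing `print` calls be captured without rewriting them; for example:

```python
import contextlib
import io

# capture output of print-heavy code, e.g. to funnel it into a
# per-request log instead of the shared console
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    print("Total check succeeded!")
captured = buffer.getvalue()  # "Total check succeeded!\n"
```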

* Started rewriting prints to logging and make logging not global and thread-safe

* Fixed list_clusters needing log now.

* updated terminate.py and occurrences to local logging.

* changed logging to local for ansible configurator

* unfinished: started localizing logging in logging_path_handler.py

* updating ssh_handler.py now logging locally (and affected modules)

* updating ssh_handler.py now logging locally (and affected modules)

* improved variable names

* updated provider_handler.py to local logging

* changed global logging to local logging

* changed global logging to local logging

* Fixed many small logging mistakes and changed validation logging to local

* Fixed formatting

* Cleaned startup.py

* Fixed logging error and made use of logging for all commands

* Added cpu based worker selection

* Added new logging option 42 for "PRINT"
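
Level 42 sits between ERROR (40) and CRITICAL (50), so these "PRINT" messages pass most level filters while still flowing through the logging machinery. How BiBiGrid registers the level isn't shown in this excerpt; the standard-library approach would be:

```python
import logging

logging.addLevelName(42, "PRINT")  # 42 ranks above ERROR (40), below CRITICAL (50)
logging.basicConfig(level=logging.INFO)

log = logging.getLogger("bibigrid")
log.log(42, "Total check succeeded!")  # rendered as PRINT:bibigrid:Total check succeeded!
```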

* Improved logger and added an explanation implementation

* Changed info to post; it now contains a list instead of a single element

* Switched to main method.

* fixed many small things regarding log, added gateway mode for ssh_handler.py, fixed rest, and added get_log option

* Enabled multiple subnets for when network is given. Not fully operational yet.

* Fixed crash causing bug when using network instead of subnet

* Removed unnecessary debug warning

* made print nicer

* further fixed using network instead of subnet

* fixed issues regarding port calculation and gateway_ip

* Added check whether a cluster is running

* removed prints

* removed prints

* Added comments for docs

* Added pydantic base models
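
The models themselves aren't shown in this excerpt; as an illustration of the approach, a hypothetical request/response pair (field names are assumptions, not the prototype's actual schema):

```python
from typing import Optional

from pydantic import BaseModel


# hypothetical shapes for illustration only
class ValidationResponse(BaseModel):
    message: str
    success: bool
    cluster_id: Optional[str] = None


class TerminateRequest(BaseModel):
    cluster_id: str
    assume_true: bool = False  # cf. the assume_true terminate option mentioned below
```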

* Capitalized names

* added option to terminate with assume_true

* removed as docs fulfills this purpose now

* added option to not upload Credentials

* fixed minor bug causing bibigrid to not find private keys.

* removed print

* fixed name not being capitalized (ansible)

* fixed old linting error

* fixed old linting error

* implemented gateway with portFunction using sympy
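
This mirrors the mechanism visible in the ide.py diff further down: the octets of the target's private IP become the variables oct1..oct4 in the user-supplied formula. A worked example (IP and formula are sample values):

```python
import sympy

master_ip = "10.0.1.17"         # sample value
port_function = "30000 + oct4"  # sample gateway.portFunction from the configuration

# map the IP's octets to the variables oct1..oct4
octets = {f"oct{index + 1}": int(octet)
          for index, octet in enumerate(master_ip.split("."))}
port = int(sympy.sympify(port_function).subs(octets))
print(port)  # 30017: connecting to gateway_ip:30017 reaches 10.0.1.17
```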

* using gateway automatically deactivates public ip usage now.

* updated documentation

* update is now able to use gateway if given.

* ide is now able to use gateway if given.

* new version correctly integrated

* removed unnecessary add to stdout (already standard)

* removed unnecessary add to stdout (already standard) from startup_rest.py

* if regex is found, check will succeed now.

* fixed ssh not using gateway

---------

Co-authored-by: Jan Krüger <[email protected]>
Co-authored-by: Jan Krüger <[email protected]>
3 people authored Sep 21, 2023
1 parent 8bdd30c commit af63c6d
Showing 32 changed files with 995 additions and 506 deletions.
1 change: 1 addition & 0 deletions .gitignore
```diff
@@ -11,6 +11,7 @@ resources/playbook/group_vars/

 # any log files
 *.log
+log/

 # Byte-compiled / optimized / DLL files
 __pycache__/
```
1 change: 1 addition & 0 deletions bibigrid.sh
```diff
@@ -1 +1,2 @@
+#!/bin/bash
 python3 -m bibigrid.core.startup "$@"
```
6 changes: 6 additions & 0 deletions bibigrid.yml
```diff
@@ -38,6 +38,8 @@

 ## Uncomment if you don't want assign a public ip to the master; for internal cluster (Tuebingen).
 #useMasterWithPublicIp: False # defaults True if False no public-ip (floating-ip) will be allocated
+# deleteTmpKeypairAfter: False
+# dontUploadCredentials: False

 # Other keys - default False
 #localFS: True
@@ -91,6 +93,10 @@

 # Depends on cloud site and project
 subnet: # existing subnet on your cloud. See https://openstack.cebitec.uni-bielefeld.de/project/networks/
+# or network:
+# gateway: # if you want to use a gateway for create.
+#   ip: # IP of gateway to use
+#   portFunction: 30000 + oct4 # variables are called: oct1.oct2.oct3.oct4

 # Uncomment if no full DNS service for started instances is available.
 # Currently, the case in Berlin, DKFZ, Heidelberg and Tuebingen.
```
12 changes: 5 additions & 7 deletions bibigrid/core/actions/check.py
```diff
@@ -1,21 +1,19 @@
 """
 Module that acts as a wrapper and uses validate_configuration to validate given configuration
 """
-import logging
 from bibigrid.core.utility import validate_configuration

-LOG = logging.getLogger("bibigrid")
-

-def check(configurations, providers):
+def check(configurations, providers, log):
     """
     Uses validate_configuration to validate given configuration.
     :param configurations: list of configurations (dicts)
     :param providers: list of providers
+    :param log:
     :return:
     """
-    success = validate_configuration.ValidateConfiguration(configurations, providers).validate()
+    success = validate_configuration.ValidateConfiguration(configurations, providers, log).validate()
     check_result = "succeeded! Cluster is ready to start." if success else "failed!"
-    print(f"Total check {check_result}")
-    LOG.info("Total check returned %s.", success)
+    log.log(42, f"Total check {check_result}")
+    log.info("Total check returned %s.", success)
     return 0
```
174 changes: 101 additions & 73 deletions bibigrid/core/actions/create.py

Large diffs are not rendered by default.

39 changes: 20 additions & 19 deletions bibigrid/core/actions/ide.py
```diff
@@ -2,15 +2,16 @@
 This module contains methods to establish port forwarding in order to access an ide (theia).
 """

-import logging
 import random
 import re
 import signal
 import subprocess
 import sys
 import time
 import webbrowser

 import sshtunnel
+import sympy

 from bibigrid.core.utility.handler import cluster_ssh_handler

@@ -20,7 +21,6 @@
 LOCAL_BIND_ADDRESS = 9191
 MAX_JUMP = 100
 LOCALHOST = "127.0.0.1"
-LOG = logging.getLogger("bibigrid")


 def sigint_handler(caught_signal, frame):  # pylint: disable=unused-argument
@@ -49,37 +49,38 @@ def is_used(ip_address):
     for line in lines:
         is_open = re.match(rf'tcp.*{ip_address}:([0-9][0-9]*).*ESTABLISHED\s*$', line)
         if is_open is not None:
-            print(line)
             ports_used.append(is_open[1])


-def ide(cluster_id, master_provider, master_configuration):
+def ide(cluster_id, master_provider, master_configuration, log):
     """
     Creates a port forwarding from LOCAL_BIND_ADDRESS to REMOTE_BIND_ADDRESS from localhost to master of specified
     cluster
     @param cluster_id: cluster_id or ip
     @param master_provider: master's provider
     @param master_configuration: master's configuration
+    @param log:
     @return:
     """
-    LOG.info("Starting port forwarding for ide")
+    log.info("Starting port forwarding for ide")
     master_ip, ssh_user, used_private_key = cluster_ssh_handler.get_ssh_connection_info(cluster_id, master_provider,
-                                                                                        master_configuration)
+                                                                                        master_configuration, log)
     used_local_bind_address = LOCAL_BIND_ADDRESS
     if master_ip and ssh_user and used_private_key:
         attempts = 0
+        if master_configuration.get("gateway"):
+            octets = {f'oct{enum + 1}': int(elem) for enum, elem in enumerate(master_ip.split("."))}
+            port = sympy.sympify(master_configuration["gateway"]["portFunction"]).subs(dict(octets))
+            gateway = (master_configuration["gateway"]["ip"], int(port))
+        else:
+            gateway = None
         while attempts < 16:
             attempts += 1
             try:
-                with sshtunnel.SSHTunnelForwarder(
-                        ssh_address_or_host=master_ip,  # Raspberry Pi in my network
-
-                        ssh_username=ssh_user,
-                        ssh_pkey=used_private_key,
-
-                        local_bind_address=(LOCALHOST, used_local_bind_address),
-                        remote_bind_address=(LOCALHOST, REMOTE_BIND_ADDRESS)
-                ) as server:
+                with sshtunnel.SSHTunnelForwarder(ssh_address_or_host=gateway or master_ip, ssh_username=ssh_user,
+                                                  ssh_pkey=used_private_key,
+                                                  local_bind_address=(LOCALHOST, used_local_bind_address),
+                                                  remote_bind_address=(LOCALHOST, REMOTE_BIND_ADDRESS)) as server:
                     print("CTRL+C to close port forwarding when you are done.")
                     with server:
                         # opens in existing window if any default program exists
@@ -88,11 +89,11 @@ def ide(cluster_id, master_provider, master_configuration):
                         time.sleep(5)
             except sshtunnel.HandlerSSHTunnelForwarderError:
                 used_local_bind_address += random.randint(1, MAX_JUMP)
-                LOG.info("Attempt: %s. Port in use... Trying new port %s", attempts, used_local_bind_address)
+                log.info("Attempt: %s. Port in use... Trying new port %s", attempts, used_local_bind_address)
     if not master_ip:
-        LOG.warning("Cluster id %s doesn't match an existing cluster with a master.", cluster_id)
+        log.warning("Cluster id %s doesn't match an existing cluster with a master.", cluster_id)
     if not ssh_user:
-        LOG.warning("No ssh user has been specified in the first configuration.")
+        log.warning("No ssh user has been specified in the first configuration.")
     if not used_private_key:
-        LOG.warning("No matching sshPublicKeyFiles can be found in the first configuration or in .bibigrid")
+        log.warning("No matching sshPublicKeyFiles can be found in the first configuration or in .bibigrid")
     return 1
```
61 changes: 32 additions & 29 deletions bibigrid/core/actions/list_clusters.py
```diff
@@ -3,23 +3,22 @@
 This includes a method to create a dictionary containing all running clusters and their servers.
 """

-import logging
 import pprint
 import re

 from bibigrid.core.actions import create

-SERVER_REGEX = re.compile(r"^bibigrid-((master)-([a-zA-Z0-9]+)|(worker|vpngtw)-([a-zA-Z0-9]+)-\d+)$")
-LOG = logging.getLogger("bibigrid")
+SERVER_REGEX = re.compile(r"^bibigrid-((master)-([a-zA-Z0-9]+)|(worker|vpngtw)\d+-([a-zA-Z0-9]+)-\d+)$")


-def dict_clusters(providers):
+def dict_clusters(providers, log):
     """
     Creates a dictionary containing all servers by type and provider information
     :param providers: list of all providers
+    :param log:
     :return: list of all clusters in yaml format
     """
-    LOG.info("Creating cluster dictionary...")
+    log.info("Creating cluster dictionary...")
     cluster_dict = {}
     for provider in providers:
         servers = provider.list_servers()
@@ -54,56 +53,59 @@ def setup(cluster_dict, cluster_id, server, provider):
     server["cloud_specification"] = provider.cloud_specification["identifier"]


-def print_list_clusters(cluster_id, providers):
+def log_list(cluster_id, providers, log):
     """
     Calls dict_clusters and gives a visual representation of the found cluster.
     Detail depends on whether a cluster_id is given or not.
     :param cluster_id:
     :param providers:
+    :param log:
     :return:
     """
-    cluster_dict = dict_clusters(providers=providers)
-    if cluster_id: # pylint: disable=too-many-nested-blocks
+    cluster_dict = dict_clusters(providers=providers, log=log)
+    if cluster_id:  # pylint: disable=too-many-nested-blocks
         if cluster_dict.get(cluster_id):
-            LOG.info("Printing specific cluster_dictionary")
-            master_count, worker_count, vpn_count = get_size_overview(cluster_dict[cluster_id])
-            print(f"\tCluster has {master_count} master, {vpn_count} vpngtw and {worker_count} regular workers. "
-                  f"The cluster is spread over {vpn_count + master_count} reachable provider(s).")
+            log.info("Printing specific cluster_dictionary")
+            master_count, worker_count, vpn_count = get_size_overview(cluster_dict[cluster_id], log)
+            log.log(42, f"\tCluster has {master_count} master, {vpn_count} vpngtw and {worker_count} regular workers. "
+                        f"The cluster is spread over {vpn_count + master_count} reachable provider(s).")
             pprint.pprint(cluster_dict[cluster_id])
         else:
-            LOG.info("Cluster with cluster-id {cluster_id} not found.")
-            print(f"Cluster with cluster-id {cluster_id} not found.")
+            log.info("Cluster with cluster-id {cluster_id} not found.")
+            log.log(42, f"Cluster with cluster-id {cluster_id} not found.")
     else:
-        LOG.info("Printing overview of cluster all clusters")
+        log.info("Printing overview of cluster all clusters")
         if cluster_dict:
             for cluster_key_id, cluster_node_dict in cluster_dict.items():
-                print(f"Cluster-ID: {cluster_key_id}")
+                log.log(42, f"Cluster-ID: {cluster_key_id}")
                 master = cluster_node_dict.get('master')
                 if master:
                     for key in ["name", "user_id", "launched_at", "key_name", "public_v4", "public_v6", "provider"]:
                         value = cluster_node_dict['master'].get(key)
                         if value:
-                            print(f"\t{key}: {value}")
+                            log.log(42, f"\t{key}: {value}")
                     security_groups = get_security_groups(cluster_node_dict)
-                    print(f"\tsecurity_groups: {security_groups}")
+                    log.log(42, f"\tsecurity_groups: {security_groups}")
                     networks = get_networks(cluster_node_dict)
-                    print(f"\tnetwork: {pprint.pformat(networks)}")
+                    log.log(42, f"\tnetwork: {pprint.pformat(networks)}")
                 else:
-                    LOG.warning("No master for cluster: %s.", cluster_key_id)
-                master_count, worker_count, vpn_count = get_size_overview(cluster_node_dict)
-                print(f"\tCluster has {master_count} master, {vpn_count} vpngtw and {worker_count} regular workers. "
-                      f"The cluster is spread over {vpn_count + master_count} reachable provider(s).")
+                    log.warning("No master for cluster: %s.", cluster_key_id)
+                master_count, worker_count, vpn_count = get_size_overview(cluster_node_dict, log)
+                log.log(42,
+                        f"\tCluster has {master_count} master, {vpn_count} vpngtw and {worker_count} regular workers. "
+                        f"The cluster is spread over {vpn_count + master_count} reachable provider(s).")
         else:
-            print("No cluster found.")
+            log.log(42, "No cluster found.")
     return 0


-def get_size_overview(cluster_dict):
+def get_size_overview(cluster_dict, log):
     """
     :param cluster_dict: dictionary of cluster to size_overview
+    :param log:
     :return: number of masters, number of workers, number of vpns
     """
-    LOG.info("Printing size overview")
+    log.info("Printing size overview")
     master_count = int(bool(cluster_dict.get("master")))
     worker_count = len(cluster_dict.get("workers") or "")
     vpn_count = len(cluster_dict.get("vpngtws") or "")
@@ -136,19 +138,20 @@ def get_security_groups(cluster_dict):
     return security_groups


-def get_master_access_ip(cluster_id, master_provider):
+def get_master_access_ip(cluster_id, master_provider, log):
     """
     Returns master's ip of cluster cluster_id
     :param master_provider: master's provider
     :param cluster_id: Id of cluster
+    :param log:
     :return: public ip of master
     """
-    LOG.info("Finding master ip for cluster %s...", cluster_id)
+    log.info("Finding master ip for cluster %s...", cluster_id)
     servers = master_provider.list_servers()
     for server in servers:
         master = create.MASTER_IDENTIFIER(cluster_id=cluster_id)
         if server["name"].startswith(master):
             return server.get("public_v4") or server.get("public_v6") or server.get("private_v4")
-    LOG.warning("Cluster %s not found on master_provider %s.", cluster_id,
+    log.warning("Cluster %s not found on master_provider %s.", cluster_id,
                 master_provider.cloud_specification["identifier"])
     return None
```

0 comments on commit af63c6d

Please sign in to comment.