Keto Issues & Feature Requests #69

Open · 23 tasks
KashifSaadat opened this issue Jun 23, 2017 · 0 comments
These are observations just from using the command-line client; I haven't yet got as far as logging into the cluster and validating the setup of its internals.

Initial Boilerplate Setup:

  • (Feature Request) Would there be any benefit in implementing the capability to set up a clean VPC for fresh deployments, so you can be up and running from scratch purely through Keto? (The manual prerequisite this would replace is sketched below.) We should be careful that the functionality isn't too closely coupled to a single cloud provider, though.
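
For context, the manual prerequisite this feature would automate looks roughly like the following (the CIDR ranges and VPC ID are placeholders, not values keto expects):

    # Create a VPC and a subnet by hand before pointing keto at them via '--networks'
    aws ec2 create-vpc --cidr-block 10.0.0.0/16
    aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone eu-west-1a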

Notifications / Verbosity:

  • (Feature Request) On create / delete cluster, list the resources that will be modified
  • (Feature Request) Add a confirmation prompt on cluster modifications
  • (Feature Request) Add a '--force' flag or similar for non-interactive mode, if the above is implemented (see the illustrative flow after this list)
  • (Feature Request) On successful creation of each stack / resource group, write progress to the console, including a count of the resources still to build.
  • (Feature Request) Are there any specific resource groups that could be created in parallel to speed up deployment?
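
Purely as an illustration of the requested behaviour (neither this output nor the '--force' flag exists today, and the 'delete cluster' form is assumed by analogy with 'create cluster'):

    # Hypothetical interactive flow: list the resources to be modified, then prompt
    keto --cloud aws delete cluster kashtestcluster
    #   The following resources will be deleted: <stacks / resource groups ...>
    #   Proceed? [y/N]
    # Hypothetical non-interactive flow: skip the prompt
    keto --cloud aws delete cluster kashtestcluster --force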

Labels / Naming:

  • It appears that the additional labels specified weren't passed through in the build or added to the resources created (see the tag check after this list). The following command was executed: keto --cloud aws create cluster kashtestcluster --ssh-key kash-key --networks subnet-33e01554,subnet-6988be31,subnet-e327d0aa --machine-type t2.medium --compute-pools 3 --pool-size 3 --labels kash1=tESt1,KASH2=teST2 --assets-dir ~/Downloads/certs/
  • (Feature Request) Ability to specify custom names per compute pool in the 'create cluster' command (not as crucial considering you can build compute and master pools independently)
  • Primary network interfaces aren't 'Name' tagged
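
As a quick check on the labels issue above, the tags that actually landed on one of the created instances can be listed with the AWS CLI (the instance ID is a placeholder):

    # List every tag on one of the cluster's instances; the kash1/KASH2 labels should show up here if they were applied
    aws ec2 describe-tags --filters "Name=resource-id,Values=i-0123456789abcdef0"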

ELB:

  • On build, all master nodes are out of service on the ELB (see the health check after this list). Lewis mentioned this is a known issue on keto-k8 master.
  • (Feature Request) Ability to specify source addresses for incoming connections, defaulting to allow all.
  • With a long cluster name, the name is truncated when the ELB is created (presumably due to the 32-character limit on ELB names). I'm pretty sure this is a non-issue, but wanted to raise it just in case it results in any weird edge-case problems: https://keto-kashtestc-elb-z9cjw71ucl3v-9847412.eu-west-1.elb.amazonaws.com
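
The out-of-service state can be confirmed from the AWS CLI. Assuming the ELB's name is the first part of the DNS entry above (an assumption, since only the DNS name is known):

    # Show registered instance health for the cluster's classic ELB
    aws elb describe-instance-health --load-balancer-name keto-kashtestc-elb-z9cjw71ucl3v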

Args / Errors:

  • Clarify the expected key file names in the help output for '--assets-dir'
  • "Error: unable to determine region" - (AWS Provider) Maybe provide additional supporting info telling the user to make sure their AWS_PROFILE and AWS_DEFAULT_REGION environment variables are set (example after this list).
  • (Feature Request) Specify the machine type independently for master nodes and for each compute pool (not as crucial, considering you can build compute and master pools independently)
  • (Feature Request) In the help output, include a column with Y/N for required flags? Default values are already shown in brackets, so that might be sufficient.
  • Commands across create / delete don't seem consistent: creating a master pool is 'keto create masterpool NAME', whereas deletion is 'keto delete masterpool --cluster NAME'
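
For the "unable to determine region" error above, the workaround is to export the standard AWS environment variables before invoking keto, which the AWS provider appears to read:

    # Standard AWS SDK environment variables (profile name is a placeholder)
    export AWS_PROFILE=my-profile
    export AWS_DEFAULT_REGION=eu-west-1
    # ...then run the keto command as usual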

Deletion:

  • No validation checks on deleting a masterpool, when there are associated computepools
  • ELB not deleted on deleting a masterpool
  • Secondary data volumes (the 10GB ones) not deleted on deleting a masterpool
  • Secondary network interfaces not deleted on deleting a masterpool
  • Security Groups not deleted on deleting a masterpool (a sweep for these leftovers is sketched after this list)
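
A rough way to spot these leftovers after a masterpool deletion, assuming the orphaned resources carry the cluster name in their Name tag or group name (an assumption about keto's tagging):

    # Look for orphaned volumes, network interfaces, and security groups referencing the cluster name
    aws ec2 describe-volumes --filters "Name=tag:Name,Values=*kashtestcluster*"
    aws ec2 describe-network-interfaces --filters "Name=tag:Name,Values=*kashtestcluster*"
    aws ec2 describe-security-groups --filters "Name=group-name,Values=*kashtestcluster*"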

Creation:

  • A masterpool can't be created without a cluster already existing, yet it also can't be created in an existing cluster because one is already present. Is 'create masterpool' actually a valid command on its own?

Questions:

  • How does the coreos-version work? Does the string match against machine images located with the cloud provider? (See the AMI lookup after this list.) - Edit: Tested, and yes, it matches against the image name.
  • Master hosts have two volumes: the boot volume is increased when specifying '--disk-size' (which is correct according to the help output), while the additional volume is left at 10GB. What is the additional volume used for?
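
For the coreos-version question, the candidate images that the name matching would run against can be listed with the AWS CLI (the 'stable' channel pattern is illustrative):

    # List CoreOS Container Linux AMIs in the region whose names match the stable channel
    aws ec2 describe-images --filters "Name=name,Values=CoreOS-stable-*" --region eu-west-1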