
New concept of deployment #1

Open · tailhook (Owner) opened this issue Nov 9, 2017 · 0 comments

Configuration

The user adds the following to their vagga.yaml:

mixins:
- vagga/deploy.yaml

commands:
  deploy: !CapsuleCommand
    run:
    - vagga
    - _script
    - https://github.com/.../deployment_script
    - --destination=http://internal.network/your-deployment.json
    - --containers="python,assets"

Then they can run vagga deploy staging or vagga deploy production (presumably with the environment name passed to the script as an extra argument).

How Does it Work?

  1. CapsuleCommand downloads a script
  2. Then the script downloads your-deployment.json
  3. Then the script generates vagga/deploy.yaml according to your-deployment.json
  4. Then it runs ciruela upload to the servers described in the json
  5. Then it pushes a list of images to the verwalter HTTP API

Details follow. The important points here:

Step (4) is configurable: we might allow rsync, or use docker import and docker push.
Step (5) is also configurable: it might not use verwalter, or it might put the metadata into an intermediate storage and create a release from multiple repositories individually uploaded to servers.

What is in vagga/deploy.yaml

Basically, it wraps each container into:

containers:
  xxx-deploy:
    setup:
    - !SubConfig
      path: vagga.yaml
      container: xxx
    - !TarInstall
      url: https://github.com/.../container-cooker.tar
      script: "./container-cooker"

What Does "container-cooker" Do ?

Note: the name container-cooker is just for explanatory purposes.

It validates configs and fixes things that commonly lead to mistakes (a rough sketch of such injected steps follows the list):

  1. Adds !EnsureDir for all the volumes
  2. Finds lithos (or maybe other) configs, checks which ones belong to this container (probably by looking at the executable), and copies them into the container
  3. Puts configs, or metadata extracted from them, into some well-known place, so that verwalter can find them
  4. Maybe optimizes some things in the container: cleans common tmp folders which vagga doesn't clean by default, resets timestamps, and recompiles *.pyc files (the latter makes containers reproducible)
  5. Might execute some vulnerability checks, or extract package lists so it would be easier to run vulnerability checks later
  6. Might generate some lithos configs from the vagga config
  7. Makes hosts and resolv.conf symlinks
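
To make this concrete, here is a minimal sketch of the kind of build steps the cooker could inject. The step names (!EnsureDir, !Copy) are real vagga steps, but the specific paths are placeholders and the exact set of injected steps is an assumption of this sketch; only the /lithos-configs location comes from the convention described later in this issue:

    - !EnsureDir /app/run                  # item (1): ensure each volume mount point exists
    - !Copy                                # items (2)-(3): ship the matching lithos config
      source: /work/deploy/app-web.yaml    # placeholder path in the project
      path: /lithos-configs/app-web.yaml   # well-known place where verwalter can find it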

What Does your-deployment.json Contain?

It should describe the full deployment target. Here is a non-exhaustive list of things (a hypothetical example follows the list):

  1. A validator for lithos' metadata. We expect that every deployment can have its own scripting in verwalter, so it might need more or less metadata. Still, validation of metadata is super-useful. [*]
  2. Additional checks for the config, e.g. it may require always having PYTHONIOENCODING if the executable is pythonic, or an /app_version file in the root of the container
  3. An entry point for ciruela or another way of uploading images
  4. An API endpoint for verwalter
  5. Conventions on the configuration files: which are staging and which are production, so you can just name the environment
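
Purely for illustration, such a file might look like the following; every key name, hostname, and value here is hypothetical, not a defined format:

{
  "metadata_validator": {"kind": "json_schema", "schema": {}},
  "config_checks": ["require-env:PYTHONIOENCODING", "require-file:/app_version"],
  "upload": {"method": "ciruela", "servers": ["srv1.example.internal", "srv2.example.internal"]},
  "verwalter_api": "http://verwalter.example.internal/v1/",
  "environments": {"staging": "*-staging.yaml", "production": "*-production.yaml"}
}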

All of these things except the hostnames (3, 4) could be a hardcoded convention, but I feel it would be too restrictive and would not take advantage of verwalter's full power (or would make it less convenient if metadata is not validated properly).

No keys/secrets/passwords are contained in the json. Keys are passed through environment variables.

[*] Not sure which validator to use though; maybe json_schema or livr.
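
If json_schema were chosen, for instance, a metadata validator could be as small as the following; the http-port field is purely a hypothetical example of deployment-specific metadata:

{
  "type": "object",
  "required": ["http-port"],
  "properties": {
    "http-port": {"type": "integer", "minimum": 1, "maximum": 65535}
  }
}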

How Does Verwalter Work?

Currently, verwalter relies on having all the needed metadata extracted from the repo/image/config and put into a "runtime" folder. While we're trying to move most things into the container itself, we still need one thing: a link between the containers which constitute a new version. I.e. a version v1.2.3 might have containers app.v1.2.3 and redis.023fed, where the redis container is versioned by hash and only updated when its configuration changes.

So the thing pushed into verwalter will be basically a dict:

{
  "app-web-staging": "app.v1.2.3",
  "app-celery-staging": "app.v1.2.3",
  "redis": "redis.023fed"
}

I.e. a mapping from process name to its container name. The other metadata/configuration files are stored in the image itself under some convention (not a real one, to be determined):

/lithos-configs/app-web-staging.yaml

And presumably, verwalter needs to figure out a few things:

  1. Which machines in the cluster have this image
  2. Get the lithos config from the image (by accessing ciruela itself) and extract metadata from it (by metadata I mean both the metadata key and useful things such as memory-limit or cpu-shares; maybe even display the whole thing in the GUI); a sketch of such a config follows the list
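
For illustration, the config stored at the path above might look roughly like this; the key names are ordinary lithos container-config keys, but the concrete values and the metadata contents are assumptions of this sketch:

# /lithos-configs/app-web-staging.yaml (hypothetical example)
kind: Daemon
executable: /usr/bin/python3
arguments: [/app/web.py]      # placeholder command
memory-limit: 268435456       # bytes; useful to show in the GUI
cpu-shares: 512
metadata:                     # free-form key, validated by the deployment's validator
  http-port: 8080             # hypothetical field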

This should be enough for verwalter to do the work. Note: it's up to the scheduler whether to enable the version immediately or wait for more machines to fetch the image, whether to only upgrade existing processes or run new ones right from the point their config is pushed, whether to add new services to nginx automatically, and so on.

Notes

  1. Versioning of the deployment might be suboptimal if configs are copied by container-cooker. But since these are deployment containers, it's usually enough to put !GitDescribe into the command to make them rebuild often enough (basically on every commit). We don't put it there by default because you might want database containers which are not restarted on each deploy. Another option is to explicitly opt out of versioning on the script's command-line.
  2. Caching of the json is unclear, but basically it can be cached for a dry-run and never cached for an actual deployment (i.e. you can check configs on an airplane but obviously not deploy).
  3. At the end of the day, you can fork both the deployment-script and container-cooker and provide a very different deployment tool with the very same interface, say, one that packs and deploys to heroku or an AMI.

/cc @anti-social, @popravich
