'+ escapeHtml(title) + '
' + escapeHtml(summary) +'
diff --git a/CNAME b/CNAME new file mode 100644 index 000000000..0da558c57 --- /dev/null +++ b/CNAME @@ -0,0 +1,2 @@ +www.drycc.cc +drycc.cc diff --git a/README.md b/README.md new file mode 100644 index 000000000..bbb184163 --- /dev/null +++ b/README.md @@ -0,0 +1,3 @@ +# Workflow Contrib + +Scripts and tools that are not a core part of Drycc Workflow v2. diff --git a/_includes/install-workflow/index.html b/_includes/install-workflow/index.html new file mode 100644 index 000000000..b9ca82617 --- /dev/null +++ b/_includes/install-workflow/index.html @@ -0,0 +1,897 @@ + + + + + + +
+ + + + + + + + + + +' + escapeHtml(summary) +'
' + noResultsText + '
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 000000000..a75b33624 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome to Drycc \u00b6 Drycc Workflow is an open source container cloud platform. 
Often called a Container as a Service (CaaS), it adds a developer-friendly layer to any Kubernetes cluster, making it easy to deploy and manage applications. Drycc Workflow includes capabilities for building and deploying from source via git push , simple application configuration, creating and rolling back releases, managing domain names and SSL certificates, providing seamless edge routing, aggregating logs, and sharing applications with teams. All of this is exposed through a simple REST API and command line interface. Getting Started \u00b6 To get started with Workflow, follow our Quick Start guide. Take a deep dive into Drycc Workflow in our Concepts , Architecture , and Components sections. Feel like contributing some code or want to get started as a maintainer? Pick an issue tagged as an easy fix or help wanted and start contributing! Service and Support \u00b6 Coming soon.","title":"Home"},{"location":"#welcome-to-drycc","text":"Drycc Workflow is an open source container cloud platform. Often called a Container as a Service (CaaS), it adds a developer-friendly layer to any Kubernetes cluster, making it easy to deploy and manage applications. Drycc Workflow includes capabilities for building and deploying from source via git push , simple application configuration, creating and rolling back releases, managing domain names and SSL certificates, providing seamless edge routing, aggregating logs, and sharing applications with teams. All of this is exposed through a simple REST API and command line interface.","title":"Welcome to Drycc"},{"location":"#getting-started","text":"To get started with Workflow, follow our Quick Start guide. Take a deep dive into Drycc Workflow in our Concepts , Architecture , and Components sections. Feel like contributing some code or want to get started as a maintainer? 
Pick an issue tagged as an easy fix or help wanted and start contributing!","title":"Getting Started"},{"location":"#service-and-support","text":"Coming soon.","title":"Service and Support"},{"location":"_includes/install-workflow/","text":"Check Your Setup \u00b6 First check that the helm command is available and the version is v2.5.0 or newer. $ helm version Client: &version.Version{SemVer:\"v2.5.0\", GitCommit:\"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6\", GitTreeState:\"clean\"} Server: &version.Version{SemVer:\"v2.5.0\", GitCommit:\"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6\", GitTreeState:\"clean\"} Ensure the kubectl client is installed and can connect to your Kubernetes cluster. Install Drycc Workflow \u00b6 Now that Helm is installed and the repository has been added, install Workflow by running: $ helm install drycc oci://registry.drycc.cc/charts/workflow --namespace drycc Helm will install a variety of Kubernetes resources in the drycc namespace. Wait for the pods that Helm launched to be ready. Monitor their status by running: $ kubectl --namespace=drycc get pods If it's preferred to have kubectl automatically update as the pod states change, run (type Ctrl-C to stop the watch): $ kubectl --namespace=drycc get pods -w Depending on the order in which the Workflow components initialize, some pods may restart. This is common during the installation: if a component's dependencies are not yet available, that component will exit and Kubernetes will automatically restart it. 
Here, it can be seen that the controller, builder and registry all took a few loops before they were able to start: $ kubectl --namespace=drycc get pods NAME READY STATUS RESTARTS AGE drycc-builder-hy3xv 1/1 Running 5 5m drycc-controller-g3cu8 1/1 Running 5 5m drycc-controller-celery-cmxxn 3/3 Running 0 5m drycc-database-rad1o 1/1 Running 0 5m drycc-logger-fluentbit-1v8uk 1/1 Running 0 5m drycc-logger-fluentbit-esm60 1/1 Running 0 5m drycc-logger-sm8b3 1/1 Running 0 5m drycc-storage-4ww3t 1/1 Running 0 5m drycc-registry-asozo 1/1 Running 1 5m drycc-rabbitmq-0 1/1 Running 0 5m Once all of the pods are in the READY state, Drycc Workflow is up and running!","title":"Install workflow"},{"location":"_includes/install-workflow/#check-your-setup","text":"First check that the helm command is available and the version is v2.5.0 or newer. $ helm version Client: &version.Version{SemVer:\"v2.5.0\", GitCommit:\"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6\", GitTreeState:\"clean\"} Server: &version.Version{SemVer:\"v2.5.0\", GitCommit:\"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6\", GitTreeState:\"clean\"} Ensure the kubectl client is installed and can connect to your Kubernetes cluster.","title":"Check Your Setup"},{"location":"_includes/install-workflow/#install-drycc-workflow","text":"Now that Helm is installed and the repository has been added, install Workflow by running: $ helm install drycc oci://registry.drycc.cc/charts/workflow --namespace drycc Helm will install a variety of Kubernetes resources in the drycc namespace. Wait for the pods that Helm launched to be ready. Monitor their status by running: $ kubectl --namespace=drycc get pods If it's preferred to have kubectl automatically update as the pod states change, run (type Ctrl-C to stop the watch): $ kubectl --namespace=drycc get pods -w Depending on the order in which the Workflow components initialize, some pods may restart. 
This is common during the installation: if a component's dependencies are not yet available, that component will exit and Kubernetes will automatically restart it. Here, it can be seen that the controller, builder and registry all took a few loops before they were able to start: $ kubectl --namespace=drycc get pods NAME READY STATUS RESTARTS AGE drycc-builder-hy3xv 1/1 Running 5 5m drycc-controller-g3cu8 1/1 Running 5 5m drycc-controller-celery-cmxxn 3/3 Running 0 5m drycc-database-rad1o 1/1 Running 0 5m drycc-logger-fluentbit-1v8uk 1/1 Running 0 5m drycc-logger-fluentbit-esm60 1/1 Running 0 5m drycc-logger-sm8b3 1/1 Running 0 5m drycc-storage-4ww3t 1/1 Running 0 5m drycc-registry-asozo 1/1 Running 1 5m drycc-rabbitmq-0 1/1 Running 0 5m Once all of the pods are in the READY state, Drycc Workflow is up and running!","title":"Install Drycc Workflow"},{"location":"applications/deploying-apps/","text":"Deploying an Application \u00b6 An Application is deployed to Drycc using git push or the drycc client. Supported Applications \u00b6 Drycc Workflow can deploy any application or service that can run inside a container. In order to be scaled horizontally, applications must follow the Twelve-Factor App methodology and store any application state in external backing services. For example, if your application persists state to the local filesystem -- common with content management systems like Wordpress and Drupal -- it cannot be scaled horizontally using drycc scale . Fortunately, most modern applications feature a stateless application tier that can scale horizontally inside Drycc. Login to the Controller \u00b6 Important if you haven't yet, now is a good time to install the client and register . Before deploying an application, users must first authenticate against the Drycc Controller using the URL supplied by their Drycc administrator. 
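The "wait until every pod is READY" step above can also be checked mechanically. A minimal sketch (not part of Workflow; the pod lines and the parsing are illustrative only) that decides when every pod in the `kubectl get pods` output reports all of its containers ready:

```python
def all_pods_ready(kubectl_lines):
    """Given the data lines of `kubectl get pods` output, return True
    once every pod's READY column reads n/n (all containers up)."""
    for line in kubectl_lines:
        ready = line.split()[1]          # READY column, e.g. "1/1" or "3/3"
        up, total = ready.split("/")
        if up != total:
            return False
    return True

pods = [
    "drycc-builder-hy3xv            1/1  Running  5  5m",
    "drycc-controller-celery-cmxxn  3/3  Running  0  5m",
]
print(all_pods_ready(pods))  # True
```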
$ drycc login http://drycc.example.com Opening browser to http://drycc.example.com/v2/login/drycc/?key=4ccc81ee2dce4349ad5261ceffe72c71 Waiting for login... .o.Logged in as admin Configuration file written to /root/.drycc/client.json Select a Build Process \u00b6 Drycc Workflow supports three different ways of building applications: Buildpacks \u00b6 Cloud Native Buildpacks are useful if you want to follow the CNB docs for building applications. Learn how to deploy applications using Buildpacks . Dockerfiles \u00b6 Dockerfiles are a powerful way to define a portable execution environment built on a base OS of your choosing. Learn how to deploy applications using Dockerfiles . Container Image \u00b6 Deploying a Container image onto Drycc allows you to take a Container image from either a public or a private registry and copy it over bit-for-bit, ensuring that you are running the same image in development or in your CI pipeline as you are in production. Learn how to deploy applications using Container images . Tuning Application Settings \u00b6 It is possible to configure a few of the globally tunable settings on a per-application basis using config:set . 
Setting Description DRYCC_DISABLE_CACHE if set, this will disable the [imagebuilder cache][] (default: not set) DRYCC_DEPLOY_BATCHES the number of pods to bring up and take down sequentially during a scale (default: number of available nodes) DRYCC_DEPLOY_TIMEOUT deploy timeout in seconds per deploy batch (default: 120) IMAGE_PULL_POLICY the kubernetes [image pull policy][pull-policy] for application images (default: \"IfNotPresent\") (allowed values: \"Always\", \"IfNotPresent\") KUBERNETES_DEPLOYMENTS_REVISION_HISTORY_LIMIT how many revisions Kubernetes keeps around of a given Deployment (default: all revisions) KUBERNETES_POD_TERMINATION_GRACE_PERIOD_SECONDS how many seconds kubernetes waits for a pod to finish work after a SIGTERM before sending SIGKILL (default: 30) Deploy Timeout \u00b6 Deploy timeout in seconds. There are two deploy methods, Deployments (see below) and RC (versions prior to 2.4), and this setting affects each a bit differently. Deployments \u00b6 Deployments behave a little bit differently from the RC-based deployment strategy. Kubernetes takes care of the entire deploy, doing rolling updates in the background. As a result, there is only an overall deployment timeout instead of a configurable per-batch timeout. The base timeout is multiplied by DRYCC_DEPLOY_BATCHES to create an overall timeout. For example, 240 (timeout) * 4 (batches) gives a 960-second overall timeout. RC deploy \u00b6 This deploy timeout defines how long to wait for each batch to complete in DRYCC_DEPLOY_BATCHES . Additions to the base timeout \u00b6 The base timeout is also extended by the health checks' initialDelaySeconds on liveness and readiness, with the larger of the two applied. Additionally, the timeout system accounts for slow image pulls by adding 10 minutes whenever it has seen an image pull take over 1 minute. This allows the timeout values to be reasonable without having to account for image pull slowness in the base deploy timeout. 
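The timeout arithmetic above can be sketched in a few lines. This is an illustrative model, not the controller's actual code; in particular, applying the initialDelaySeconds extension and the 10-minute slow-pull bonus to the per-batch base timeout before multiplying is an assumption drawn from the wording "the base timeout is extended":

```python
def overall_deploy_timeout(base_timeout, batches,
                           liveness_initial_delay=0,
                           readiness_initial_delay=0,
                           saw_slow_image_pull=False):
    """Illustrative model (in seconds) of the Deployments timeout above."""
    # The base (per-batch) timeout is extended by the larger of the two
    # probes' initialDelaySeconds...
    base = base_timeout + max(liveness_initial_delay, readiness_initial_delay)
    # ...plus 10 minutes when an image pull has taken over a minute.
    if saw_slow_image_pull:
        base += 10 * 60
    # The result is multiplied by DRYCC_DEPLOY_BATCHES for the overall timeout.
    return base * batches

print(overall_deploy_timeout(240, 4))  # 960, matching the example above
```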
Deployments \u00b6 Workflow uses Deployments for deploys. In prior versions, ReplicationControllers were used, with the ability to turn on Deployments via DRYCC_KUBERNETES_DEPLOYMENTS=1 . The advantage of Deployments is that rolling updates happen server-side in Kubernetes instead of in the Drycc Workflow Controller, along with a few other pieces of Pod management functionality. This allows a deploy to continue even when the CLI connection is interrupted. Behind the scenes, your application deploy is built up of a Deployment object per process type, each having multiple ReplicaSets (one per release) which in turn manage the Pods running your application. Drycc Workflow will behave the same way with DRYCC_KUBERNETES_DEPLOYMENTS enabled or disabled (only applicable to versions prior to 2.4). The changes are behind the scenes. The one place you will see a difference in the CLI is that drycc ps:list will output Pod names differently.","title":"Deploying Apps"},{"location":"applications/deploying-apps/#deploying-an-application","text":"An Application is deployed to Drycc using git push or the drycc client.","title":"Deploying an Application"},{"location":"applications/deploying-apps/#supported-applications","text":"Drycc Workflow can deploy any application or service that can run inside a container. In order to be scaled horizontally, applications must follow the Twelve-Factor App methodology and store any application state in external backing services. For example, if your application persists state to the local filesystem -- common with content management systems like WordPress and Drupal -- it cannot be scaled horizontally using drycc scale . Fortunately, most modern applications feature a stateless application tier that can scale horizontally inside Drycc.","title":"Supported Applications"},{"location":"applications/deploying-apps/#login-to-the-controller","text":"Important if you haven't yet, now is a good time to install the client and register . 
Before deploying an application, users must first authenticate against the Drycc Controller using the URL supplied by their Drycc administrator. $ drycc login http://drycc.example.com Opening browser to http://drycc.example.com/v2/login/drycc/?key=4ccc81ee2dce4349ad5261ceffe72c71 Waiting for login... .o.Logged in as admin Configuration file written to /root/.drycc/client.json","title":"Login to the Controller"},{"location":"applications/deploying-apps/#select-a-build-process","text":"Drycc Workflow supports three different ways of building applications:","title":"Select a Build Process"},{"location":"applications/deploying-apps/#buildpacks","text":"Cloud Native Buildpacks are useful if you want to follow the CNB docs for building applications. Learn how to deploy applications using Buildpacks .","title":"Buildpacks"},{"location":"applications/deploying-apps/#dockerfiles","text":"Dockerfiles are a powerful way to define a portable execution environment built on a base OS of your choosing. Learn how to deploy applications using Dockerfiles .","title":"Dockerfiles"},{"location":"applications/deploying-apps/#container-image","text":"Deploying a Container image onto Drycc allows you to take a Container image from either a public or a private registry and copy it over bit-for-bit, ensuring that you are running the same image in development or in your CI pipeline as you are in production. Learn how to deploy applications using Container images .","title":"Container Image"},{"location":"applications/deploying-apps/#tuning-application-settings","text":"It is possible to configure a few of the globally tunable settings on a per-application basis using config:set . 
Setting Description DRYCC_DISABLE_CACHE if set, this will disable the [imagebuilder cache][] (default: not set) DRYCC_DEPLOY_BATCHES the number of pods to bring up and take down sequentially during a scale (default: number of available nodes) DRYCC_DEPLOY_TIMEOUT deploy timeout in seconds per deploy batch (default: 120) IMAGE_PULL_POLICY the kubernetes [image pull policy][pull-policy] for application images (default: \"IfNotPresent\") (allowed values: \"Always\", \"IfNotPresent\") KUBERNETES_DEPLOYMENTS_REVISION_HISTORY_LIMIT how many revisions Kubernetes keeps around of a given Deployment (default: all revisions) KUBERNETES_POD_TERMINATION_GRACE_PERIOD_SECONDS how many seconds kubernetes waits for a pod to finish work after a SIGTERM before sending SIGKILL (default: 30)","title":"Tuning Application Settings"},{"location":"applications/deploying-apps/#deploy-timeout","text":"Deploy timeout in seconds. There are two deploy methods, Deployments (see below) and RC (versions prior to 2.4), and this setting affects each a bit differently.","title":"Deploy Timeout"},{"location":"applications/deploying-apps/#deployments","text":"Deployments behave a little bit differently from the RC-based deployment strategy. Kubernetes takes care of the entire deploy, doing rolling updates in the background. As a result, there is only an overall deployment timeout instead of a configurable per-batch timeout. The base timeout is multiplied by DRYCC_DEPLOY_BATCHES to create an overall timeout. 
For example, 240 (timeout) * 4 (batches) gives a 960-second overall timeout.","title":"Deployments"},{"location":"applications/deploying-apps/#rc-deploy","text":"This deploy timeout defines how long to wait for each batch to complete in DRYCC_DEPLOY_BATCHES .","title":"RC deploy"},{"location":"applications/deploying-apps/#additions-to-the-base-timeout","text":"The base timeout is also extended by the health checks' initialDelaySeconds on liveness and readiness, with the larger of the two applied. Additionally, the timeout system accounts for slow image pulls by adding 10 minutes whenever it has seen an image pull take over 1 minute. This allows the timeout values to be reasonable without having to account for image pull slowness in the base deploy timeout.","title":"Additions to the base timeout"},{"location":"applications/deploying-apps/#deployments_1","text":"Workflow uses Deployments for deploys. In prior versions, ReplicationControllers were used, with the ability to turn on Deployments via DRYCC_KUBERNETES_DEPLOYMENTS=1 . The advantage of Deployments is that rolling updates happen server-side in Kubernetes instead of in the Drycc Workflow Controller, along with a few other pieces of Pod management functionality. This allows a deploy to continue even when the CLI connection is interrupted. Behind the scenes, your application deploy is built up of a Deployment object per process type, each having multiple ReplicaSets (one per release) which in turn manage the Pods running your application. Drycc Workflow will behave the same way with DRYCC_KUBERNETES_DEPLOYMENTS enabled or disabled (only applicable to versions prior to 2.4). The changes are behind the scenes. 
The one place you will see a difference in the CLI is that drycc ps:list will output Pod names differently.","title":"Deployments"},{"location":"applications/domains-and-routing/","text":"Domains and Routing \u00b6 You can use drycc domains to add or remove custom domains to the application: $ drycc domains:add hello.bacongobbler.com Adding hello.bacongobbler.com to finest-woodshed... done Once that's done, you can go into a DNS registrar and set up a CNAME from the new appname to the old one: $ dig hello.dryccapp.com [...] ;; ANSWER SECTION: hello.bacongobbler.com. 1759 IN CNAME finest-woodshed.dryccapp.com. finest-woodshed.dryccapp.com. 270 IN A 172.17.8.100 Note Setting a CNAME for a root domain can cause issues. Setting an @ record to be a CNAME causes all traffic to go to the other domain, including mail and the SOA (\"start-of-authority\") records. It is highly recommended that you bind a subdomain to an application; however, you can work around this by pointing the @ record to the address of the load balancer (if any). To add or remove the application from the routing mesh, use drycc routing : $ drycc routing:disable Disabling routing for finest-woodshed... done This will make the application unreachable through the Router , but the application is still reachable internally through its Kubernetes Service . To re-enable routing: $ drycc routing:enable Enabling routing for finest-woodshed... done","title":"Domains and Routing"},{"location":"applications/domains-and-routing/#domains-and-routing","text":"You can use drycc domains to add or remove custom domains to the application: $ drycc domains:add hello.bacongobbler.com Adding hello.bacongobbler.com to finest-woodshed... done Once that's done, you can go into a DNS registrar and set up a CNAME from the new appname to the old one: $ dig hello.dryccapp.com [...] ;; ANSWER SECTION: hello.bacongobbler.com. 1759 IN CNAME finest-woodshed.dryccapp.com. finest-woodshed.dryccapp.com. 
270 IN A 172.17.8.100 Note Setting a CNAME for a root domain can cause issues. Setting an @ record to be a CNAME causes all traffic to go to the other domain, including mail and the SOA (\"start-of-authority\") records. It is highly recommended that you bind a subdomain to an application; however, you can work around this by pointing the @ record to the address of the load balancer (if any). To add or remove the application from the routing mesh, use drycc routing : $ drycc routing:disable Disabling routing for finest-woodshed... done This will make the application unreachable through the Router , but the application is still reachable internally through its Kubernetes Service . To re-enable routing: $ drycc routing:enable Enabling routing for finest-woodshed... done","title":"Domains and Routing"},{"location":"applications/inter-app-communication/","text":"Inter-app Communication \u00b6 A common architecture pattern of multi-process applications is to have one process serve public requests while having multiple other processes supporting the public one to, for example, perform actions on a schedule or process work items from a queue. To implement this system of apps in Drycc Workflow, set up the apps to communicate using DNS resolution, as shown above, and hide the supporting processes from public view by removing them from the Drycc Workflow router. DNS Service Discovery \u00b6 Drycc Workflow supports deploying a single app composed of a system of processes. Each Drycc Workflow app communicates on a single port, so communicating with another Workflow app means finding that app's address and port. All Workflow apps are mapped to port 80 externally, so finding an app's IP address is the only challenge. Workflow creates a Kubernetes Service for each app, which effectively assigns a name and one cluster-internal IP address to an app. 
The DNS service running in the cluster adds and removes DNS records which point from the app name to its IP address as services are added and removed. Drycc Workflow apps, then, can simply send requests to the domain name given to the service, which is \"app-name.app-namespace\".","title":"Inter-app Communication"},{"location":"applications/inter-app-communication/#inter-app-communication","text":"A common architecture pattern of multi-process applications is to have one process serve public requests while having multiple other processes supporting the public one to, for example, perform actions on a schedule or process work items from a queue. To implement this system of apps in Drycc Workflow, set up the apps to communicate using DNS resolution, as shown above, and hide the supporting processes from public view by removing them from the Drycc Workflow router.","title":"Inter-app Communication"},{"location":"applications/inter-app-communication/#dns-service-discovery","text":"Drycc Workflow supports deploying a single app composed of a system of processes. Each Drycc Workflow app communicates on a single port, so communicating with another Workflow app means finding that app's address and port. All Workflow apps are mapped to port 80 externally, so finding an app's IP address is the only challenge. Workflow creates a Kubernetes Service for each app, which effectively assigns a name and one cluster-internal IP address to an app. The DNS service running in the cluster adds and removes DNS records which point from the app name to its IP address as services are added and removed. Drycc Workflow apps, then, can simply send requests to the domain name given to the service, which is \"app-name.app-namespace\".","title":"DNS Service Discovery"},{"location":"applications/managing-app-configuration/","text":"Configuring an Application \u00b6 A Drycc application stores config in environment variables . 
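The "app-name.app-namespace" convention above is easy to encode. A sketch (the app and namespace names here are hypothetical) of building the cluster-internal URL that one Workflow app would use to call another:

```python
def internal_service_url(app_name, namespace, path="/"):
    """Cluster-internal URL for a Workflow app: the cluster DNS service
    resolves "app-name.app-namespace" to the app's Service IP, and the
    Service listens on port 80."""
    return f"http://{app_name}.{namespace}{path}"

# A web process calling a hypothetical "worker" app in namespace "worker":
print(internal_service_url("worker", "worker"))  # http://worker.worker/
```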
Setting Environment Variables \u00b6 Use drycc config to modify environment variables for a deployed application. $ drycc help config Valid commands for config: config:list list environment variables for an app config:set set environment variables for an app config:unset unset environment variables for an app config:pull extract environment variables to .env config:push set environment variables from .env Use `drycc help [command]` to learn more. When config is changed, a new release is created and deployed automatically. You can set multiple environment variables with one drycc config:set command, or with drycc config:push and a local .env file. $ drycc config:set FOO=1 BAR=baz && drycc config:pull $ cat .env FOO=1 BAR=baz $ echo \"TIDE=high\" >> .env $ drycc config:push Creating config... done, v4 === yuppie-earthman DRYCC_APP: yuppie-earthman FOO: 1 BAR: baz TIDE: high Attach to Backing Services \u00b6 Drycc treats backing services like databases, caches and queues as attached resources . Attachments are performed using environment variables. For example, use drycc config to set a DATABASE_URL that attaches the application to an external PostgreSQL database. $ drycc config:set DATABASE_URL=postgres://user:pass@example.com:5432/db === peachy-waxworks DATABASE_URL: postgres://user:pass@example.com:5432/db Detachments can be performed with drycc config:unset . Buildpacks Cache \u00b6 By default, apps using the [Imagebuilder][] will reuse the latest image data. When deploying applications that depend on third-party libraries that have to be fetched, this could speed up deployments a lot. In order to make use of this, the buildpack must implement the cache by writing to the cache directory. Most buildpacks already implement this, but when using custom buildpacks, it might need to be changed to make full use of the cache. Disabling and re-enabling the cache \u00b6 In some cases, cache might not speed up your application. 
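The attached-resource pattern above boils down to reading configuration from the environment. A sketch (the fallback URL is illustrative, not part of Workflow) of how an app might consume a DATABASE_URL injected by drycc config:set :

```python
import os

def database_url(environ=os.environ):
    """Read the attached database from the environment; Workflow injects
    values set with `drycc config:set` into the container environment.
    The local fallback URL is only for running outside Workflow."""
    return environ.get("DATABASE_URL", "postgres://localhost:5432/dev")

print(database_url({"DATABASE_URL": "postgres://user:pass@example.com:5432/db"}))
```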
To disable caching, you can set the DRYCC_DISABLE_CACHE variable with drycc config:set DRYCC_DISABLE_CACHE=1 . When you disable the cache, Drycc will clean up the files it created to store the cache. After turning it off, run drycc config:unset DRYCC_DISABLE_CACHE to re-enable the cache. Clearing the cache \u00b6 Use the following procedure to clear the cache: $ drycc config:set DRYCC_DISABLE_CACHE=1 $ git commit --allow-empty -m \"Clearing Drycc cache\" $ git push drycc # (if you use a different remote, you should use your remote name) $ drycc config:unset DRYCC_DISABLE_CACHE Custom Health Checks \u00b6 By default, Workflow only checks that the application starts in its Container. If it is preferred to have Kubernetes respond to application health, a health check may be added by configuring a health check probe for the application. The health checks are implemented as Kubernetes container probes . A liveness and a readiness probe can be configured, and each probe can be of type httpGet , exec , or tcpSocket depending on the type of probe the container requires. A liveness probe is useful for applications that run for long periods of time, eventually transition to broken states, and cannot recover except by being restarted. Other times, a readiness probe is useful when the container is only temporarily unable to serve, and will recover on its own. In this case, if a container fails its readiness probe, the container will not be shut down; rather, the container will stop receiving incoming requests. httpGet probes are just what they sound like: they perform an HTTP GET operation on the Container. A response code inside the 200-399 range is considered a pass. exec probes run a command inside the Container to determine its health, such as cat /var/run/myapp.pid or a script that determines when the application is ready. An exit code of zero is considered a pass, while a non-zero status code is considered a fail. tcpSocket probes attempt to open a socket in the Container. 
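The pass/fail rules for the httpGet and exec probe types stated above can be summarized in code. A sketch of the rules (this mirrors the Kubernetes probe semantics described here, not Workflow-specific code):

```python
def http_get_passes(status_code):
    # A response code inside the 200-399 range is considered a pass.
    return 200 <= status_code <= 399

def exec_passes(exit_code):
    # An exit code of zero passes; any non-zero exit code fails.
    return exit_code == 0

print(http_get_passes(302), http_get_passes(404))  # True False
print(exec_passes(0), exec_passes(1))              # True False
```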
The Container is only considered healthy if the check can establish a connection. tcpSocket probes accept a port number to perform the socket connection on the Container. Health checks can be configured on a per-proctype basis for each application using drycc healthchecks:set . If no type is mentioned then the health checks are applied to default proc types, web or cmd, whichever is present. To configure a httpGet liveness probe: $ drycc healthchecks:set liveness httpGet 80 --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/\" Port=80 HTTPHeaders=[] TCP Socket Probe: N/A Readiness --------- No readiness probe configured. If the application relies on certain headers being set (such as the Host header) or a specific URL path relative to the root, you can also send specific HTTP headers: $ drycc healthchecks:set liveness httpGet 80 \\ --path /welcome/index.html \\ --headers \"X-Client-Version:v1.0,X-Foo:bar\" === peachy-waxworks Healthchecks web/cmd: Liveness -------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/welcome/index.html\" Port=80 HTTPHeaders=[X-Client-Version=v1.0] TCP Socket Probe: N/A Readiness --------- No readiness probe configured. To configure an exec readiness probe: $ drycc healthchecks:set readiness exec -- /bin/echo -n hello --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- No liveness probe configured. 
Readiness --------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: Command=[/bin/echo -n hello] HTTP GET Probe: N/A TCP Socket Probe: N/A You can overwrite a probe by running drycc healthchecks:set again: $ drycc healthchecks:set readiness httpGet 80 --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- No liveness probe configured. Readiness --------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/\" Port=80 HTTPHeaders=[] TCP Socket Probe: N/A Configured health checks also modify the default application deploy behavior. When starting a new Pod, Workflow will wait for the health check to pass before moving onto the next Pod. Isolate the Application \u00b6 Workflow supports isolating applications onto a set of nodes using drycc tags . Note In order to use tags, you must first launch your cluster with the proper node labels. If you do not, tag commands will fail. Learn more by reading \"Assigning Pods to Nodes\" . Once your nodes are configured with appropriate label selectors, use drycc tags:set to restrict the application to those nodes: $ drycc tags:set environ=prod Applying tags... done, v4 environ prod","title":"Managing App Configuration"},{"location":"applications/managing-app-configuration/#configuring-an-application","text":"A Drycc application stores config in environment variables .","title":"Configuring an Application"},{"location":"applications/managing-app-configuration/#setting-environment-variables","text":"Use drycc config to modify environment variables for a deployed application. 
$ drycc help config Valid commands for config: config:list list environment variables for an app config:set set environment variables for an app config:unset unset environment variables for an app config:pull extract environment variables to .env config:push set environment variables from .env Use `drycc help [command]` to learn more. When config is changed, a new release is created and deployed automatically. You can set multiple environment variables with one drycc config:set command, or with drycc config:push and a local .env file. $ drycc config:set FOO=1 BAR=baz && drycc config:pull $ cat .env FOO=1 BAR=baz $ echo \"TIDE=high\" >> .env $ drycc config:push Creating config... done, v4 === yuppie-earthman DRYCC_APP: yuppie-earthman FOO: 1 BAR: baz TIDE: high","title":"Setting Environment Variables"},{"location":"applications/managing-app-configuration/#attach-to-backing-services","text":"Drycc treats backing services like databases, caches and queues as attached resources . Attachments are performed using environment variables. For example, use drycc config to set a DATABASE_URL that attaches the application to an external PostgreSQL database. $ drycc config:set DATABASE_URL=postgres://user:pass@example.com:5432/db === peachy-waxworks DATABASE_URL: postgres://user:pass@example.com:5432/db Detachments can be performed with drycc config:unset .","title":"Attach to Backing Services"},{"location":"applications/managing-app-configuration/#buildpacks-cache","text":"By default, apps using the Imagebuilder will reuse the latest image data. When deploying applications that depend on third-party libraries that have to be fetched, this can speed up deployments significantly. In order to make use of this, the buildpack must implement the cache by writing to the cache directory.
Most buildpacks already implement this, but when using custom buildpacks, it might need to be changed to make full use of the cache.","title":"Buildpacks Cache"},{"location":"applications/managing-app-configuration/#disabling-and-re-enabling-the-cache","text":"In some cases, the cache might not speed up your application. To disable caching, you can set the DRYCC_DISABLE_CACHE variable with drycc config:set DRYCC_DISABLE_CACHE=1 . When you disable the cache, Drycc will clear up files it created to store the cache. After having it turned off, run drycc config:unset DRYCC_DISABLE_CACHE to re-enable the cache.","title":"Disabling and re-enabling the cache"},{"location":"applications/managing-app-configuration/#clearing-the-cache","text":"Use the following procedure to clear the cache: $ drycc config:set DRYCC_DISABLE_CACHE=1 $ git commit --allow-empty -m \"Clearing Drycc cache\" $ git push drycc # (if you use a different remote, you should use your remote name) $ drycc config:unset DRYCC_DISABLE_CACHE","title":"Clearing the cache"},{"location":"applications/managing-app-configuration/#custom-health-checks","text":"By default, Workflow only checks that the application starts in its Container. If it is preferred to have Kubernetes respond to application health, a health check may be added by configuring a health check probe for the application. The health checks are implemented as Kubernetes container probes . A liveness and a readiness probe can be configured, and each probe can be of type httpGet , exec , or tcpSocket depending on the type of probe the container requires. A liveness probe is useful for applications that run for long periods of time and eventually transition to broken states from which they cannot recover except by being restarted. By contrast, a readiness probe is useful when the container is only temporarily unable to serve, and will recover on its own.
In this case, if a container fails its readiness probe, the container will not be shut down, but rather the container will stop receiving incoming requests. httpGet probes are just as they sound: they perform an HTTP GET operation on the Container. A response code inside the 200-399 range is considered a pass. exec probes run a command inside the Container to determine its health, such as cat /var/run/myapp.pid or a script that determines when the application is ready. An exit code of zero is considered a pass, while a non-zero status code is considered a fail. tcpSocket probes attempt to open a socket in the Container. The Container is only considered healthy if the check can establish a connection. tcpSocket probes accept a port number to perform the socket connection on the Container. Health checks can be configured on a per-proctype basis for each application using drycc healthchecks:set . If no type is mentioned then the health checks are applied to default proc types, web or cmd, whichever is present. To configure a httpGet liveness probe: $ drycc healthchecks:set liveness httpGet 80 --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/\" Port=80 HTTPHeaders=[] TCP Socket Probe: N/A Readiness --------- No readiness probe configured.
If the application relies on certain headers being set (such as the Host header) or a specific URL path relative to the root, you can also send specific HTTP headers: $ drycc healthchecks:set liveness httpGet 80 \\ --path /welcome/index.html \\ --headers \"X-Client-Version:v1.0,X-Foo:bar\" === peachy-waxworks Healthchecks web/cmd: Liveness -------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/welcome/index.html\" Port=80 HTTPHeaders=[X-Client-Version=v1.0] TCP Socket Probe: N/A Readiness --------- No readiness probe configured. To configure an exec readiness probe: $ drycc healthchecks:set readiness exec -- /bin/echo -n hello --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- No liveness probe configured. Readiness --------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: Command=[/bin/echo -n hello] HTTP GET Probe: N/A TCP Socket Probe: N/A You can overwrite a probe by running drycc healthchecks:set again: $ drycc healthchecks:set readiness httpGet 80 --type cmd === peachy-waxworks Healthchecks cmd: Liveness -------- No liveness probe configured. Readiness --------- Initial Delay (seconds): 50 Timeout (seconds): 50 Period (seconds): 10 Success Threshold: 1 Failure Threshold: 3 Exec Probe: N/A HTTP GET Probe: Path=\"/\" Port=80 HTTPHeaders=[] TCP Socket Probe: N/A Configured health checks also modify the default application deploy behavior. When starting a new Pod, Workflow will wait for the health check to pass before moving on to the next Pod.","title":"Custom Health Checks"},{"location":"applications/managing-app-configuration/#isolate-the-application","text":"Workflow supports isolating applications onto a set of nodes using drycc tags . Note In order to use tags, you must first launch your cluster with the proper node labels. If you do not, tag commands will fail.
Learn more by reading \"Assigning Pods to Nodes\" . Once your nodes are configured with appropriate label selectors, use drycc tags:set to restrict the application to those nodes: $ drycc tags:set environ=prod Applying tags... done, v4 environ prod","title":"Isolate the Application"},{"location":"applications/managing-app-gateway/","text":"About gateway for an Application \u00b6 A Gateway describes how traffic can be translated to Services within the cluster. That is, it defines a request for a way to translate traffic from somewhere that does not know about Kubernetes to somewhere that does. For example, traffic sent to a Kubernetes Service by a cloud load balancer, an in-cluster proxy, or an external hardware load balancer. While many use cases have client traffic originating \u201coutside\u201d the cluster, this is not a requirement. Create Gateway for an Application \u00b6 Gateway is a way of exposing services externally; creating a gateway generates an external IP address that connects routes and services. Create service for an Application \u00b6 Service is a way of exposing services internally; creating a service generates an internal DNS name that can be used to access a procfile_type . Create Route for an Application \u00b6 A Gateway may be attached to one or more Route references which serve to direct a subset of traffic to a specific service.","title":"Managing App Gateway"},{"location":"applications/managing-app-gateway/#about-gateway-for-an-application","text":"A Gateway describes how traffic can be translated to Services within the cluster. That is, it defines a request for a way to translate traffic from somewhere that does not know about Kubernetes to somewhere that does. For example, traffic sent to a Kubernetes Service by a cloud load balancer, an in-cluster proxy, or an external hardware load balancer.
While many use cases have client traffic originating \u201coutside\u201d the cluster, this is not a requirement.","title":"About gateway for an Application"},{"location":"applications/managing-app-gateway/#create-gateway-for-an-application","text":"Gateway is a way of exposing services externally; creating a gateway generates an external IP address that connects routes and services.","title":"Create Gateway for an Application"},{"location":"applications/managing-app-gateway/#create-service-for-an-application","text":"Service is a way of exposing services internally; creating a service generates an internal DNS name that can be used to access a procfile_type .","title":"Create service for an Application"},{"location":"applications/managing-app-gateway/#create-route-for-an-application","text":"A Gateway may be attached to one or more Route references which serve to direct a subset of traffic to a specific service.","title":"Create Route for an Application"},{"location":"applications/managing-app-lifecycle/","text":"Managing an Application \u00b6 Track Application Changes \u00b6 Drycc Workflow tracks all changes to your application. Application changes are the result of either new application code pushed to the platform (via git push drycc master ), or an update to application configuration (via drycc config:set KEY=VAL ). Each time a build or config change is made to your application a new release is created. These release numbers increase monotonically. You can see a record of changes to your application using drycc releases : $ drycc releases === peachy-waxworks Releases v4 3 minutes ago gabrtv deployed d3ccc05 v3 1 hour 17 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 2 minutes ago gabrtv deployed drycc/helloworld Rollback a Release \u00b6 Drycc Workflow also supports rolling back to previous releases. If buggy code or an errant configuration change is pushed to your application, you may roll back to a previously known good release.
Note All rollbacks create a new, numbered release, but will reference the build/code and configuration from the desired rollback point. In this example, the application is currently running release v4. Using drycc rollback v2 tells Workflow to deploy the build and configuration that was used for release v2. This creates a new release named v5 whose contents are the source and configuration from release v2: $ drycc releases === folksy-offshoot Releases v4 4 minutes ago gabrtv deployed d3ccc05 v3 1 hour 18 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 3 minutes ago gabrtv deployed drycc/helloworld $ drycc rollback v2 Rolled back to v2 $ drycc releases === folksy-offshoot Releases v5 Just now gabrtv rolled back to v2 v4 4 minutes ago gabrtv deployed d3ccc05 v3 1 hour 18 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 3 minutes ago gabrtv deployed drycc/helloworld Run One-off Administration Tasks \u00b6 Drycc applications use one-off processes for admin tasks like database migrations and other commands that must run against the live application. Use drycc run to execute commands on the deployed application. $ drycc run 'ls -l' Running `ls -l`... total 28 -rw-r--r-- 1 root root 553 Dec 2 23:59 LICENSE -rw-r--r-- 1 root root 60 Dec 2 23:59 Procfile -rw-r--r-- 1 root root 33 Dec 2 23:59 README.md -rw-r--r-- 1 root root 1622 Dec 2 23:59 pom.xml drwxr-xr-x 3 root root 4096 Dec 2 23:59 src -rw-r--r-- 1 root root 25 Dec 2 23:59 system.properties drwxr-xr-x 6 root root 4096 Dec 3 00:00 target Share an Application \u00b6 Use drycc perms:create to allow another Drycc user to collaborate on your application. $ drycc perms:create otheruser Adding otheruser to peachy-waxworks collaborators... done Use drycc perms to see who an application is currently shared with, and drycc perms:delete to remove a collaborator.
Note Collaborators can do anything with an application that its owner can do, except delete the application. When working with an application that has been shared with you, clone the original repository and add Drycc's git remote entry before attempting to git push any changes to Drycc. $ git clone https://github.com/drycc/example-java-jetty.git Cloning into 'example-java-jetty'... done $ cd example-java-jetty $ git remote add -f drycc ssh://git@local3.dryccapp.com:2222/peachy-waxworks.git Updating drycc From drycc-controller.local:peachy-waxworks * [new branch] master -> drycc/master Application Troubleshooting \u00b6 Applications deployed on Drycc Workflow treat logs as event streams . Drycc Workflow aggregates stdout and stderr from every Container, making it easy to troubleshoot problems with your application. Use drycc logs to view the log output from your deployed application. $ drycc logs -f Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.Server:jetty-7.6.0.v20120127 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10005 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10006 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10007 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.AbstractConnector:Started
SelectChannelConnector@0.0.0.0:10008","title":"Managing App Lifecycle"},{"location":"applications/managing-app-lifecycle/#managing-an-application","text":"","title":"Managing an Application"},{"location":"applications/managing-app-lifecycle/#track-application-changes","text":"Drycc Workflow tracks all changes to your application. Application changes are the result of either new application code pushed to the platform (via git push drycc master ), or an update to application configuration (via drycc config:set KEY=VAL ). Each time a build or config change is made to your application a new release is created. These release numbers increase monotonically. You can see a record of changes to your application using drycc releases : $ drycc releases === peachy-waxworks Releases v4 3 minutes ago gabrtv deployed d3ccc05 v3 1 hour 17 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 2 minutes ago gabrtv deployed drycc/helloworld","title":"Track Application Changes"},{"location":"applications/managing-app-lifecycle/#rollback-a-release","text":"Drycc Workflow also supports rolling back to previous releases. If buggy code or an errant configuration change is pushed to your application, you may roll back to a previously known good release. Note All rollbacks create a new, numbered release, but will reference the build/code and configuration from the desired rollback point. In this example, the application is currently running release v4. Using drycc rollback v2 tells Workflow to deploy the build and configuration that was used for release v2.
This creates a new release named v5 whose contents are the source and configuration from release v2: $ drycc releases === folksy-offshoot Releases v4 4 minutes ago gabrtv deployed d3ccc05 v3 1 hour 18 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 3 minutes ago gabrtv deployed drycc/helloworld $ drycc rollback v2 Rolled back to v2 $ drycc releases === folksy-offshoot Releases v5 Just now gabrtv rolled back to v2 v4 4 minutes ago gabrtv deployed d3ccc05 v3 1 hour 18 minutes ago gabrtv added DATABASE_URL v2 6 hours 2 minutes ago gabrtv deployed 7cb3321 v1 6 hours 3 minutes ago gabrtv deployed drycc/helloworld","title":"Rollback a Release"},{"location":"applications/managing-app-lifecycle/#run-one-off-administration-tasks","text":"Drycc applications use one-off processes for admin tasks like database migrations and other commands that must run against the live application. Use drycc run to execute commands on the deployed application. $ drycc run 'ls -l' Running `ls -l`... total 28 -rw-r--r-- 1 root root 553 Dec 2 23:59 LICENSE -rw-r--r-- 1 root root 60 Dec 2 23:59 Procfile -rw-r--r-- 1 root root 33 Dec 2 23:59 README.md -rw-r--r-- 1 root root 1622 Dec 2 23:59 pom.xml drwxr-xr-x 3 root root 4096 Dec 2 23:59 src -rw-r--r-- 1 root root 25 Dec 2 23:59 system.properties drwxr-xr-x 6 root root 4096 Dec 3 00:00 target","title":"Run One-off Administration Tasks"},{"location":"applications/managing-app-lifecycle/#share-an-application","text":"Use drycc perms:create to allow another Drycc user to collaborate on your application. $ drycc perms:create otheruser Adding otheruser to peachy-waxworks collaborators... done Use drycc perms to see who an application is currently shared with, and drycc perms:delete to remove a collaborator. Note Collaborators can do anything with an application that its owner can do, except delete the application. 
When working with an application that has been shared with you, clone the original repository and add Drycc's git remote entry before attempting to git push any changes to Drycc. $ git clone https://github.com/drycc/example-java-jetty.git Cloning into 'example-java-jetty'... done $ cd example-java-jetty $ git remote add -f drycc ssh://git@local3.dryccapp.com:2222/peachy-waxworks.git Updating drycc From drycc-controller.local:peachy-waxworks * [new branch] master -> drycc/master","title":"Share an Application"},{"location":"applications/managing-app-lifecycle/#application-troubleshooting","text":"Applications deployed on Drycc Workflow treat logs as event streams . Drycc Workflow aggregates stdout and stderr from every Container, making it easy to troubleshoot problems with your application. Use drycc logs to view the log output from your deployed application. $ drycc logs -f Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.Server:jetty-7.6.0.v20120127 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10005 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10006 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null} Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10007 Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.AbstractConnector:Started
SelectChannelConnector@0.0.0.0:10008","title":"Application Troubleshooting"},{"location":"applications/managing-app-processes/","text":"Managing Application Processes \u00b6 Drycc Workflow manages your application as a set of processes that can be named, scaled and configured according to their role. This gives you the flexibility to easily manage the different facets of your application. For example, you may have web-facing processes that handle HTTP traffic, background worker processes that do async work, and a helper process that streams from the Twitter API. By using a Procfile, either checked in to your application or provided via the CLI, you can specify the name of the type and the application command that should run. To spawn other process types, use drycc scale