This doc explains the development workflow so you can get started contributing to Kaniko!
First you will need to set up your GitHub account and create a fork.
Once you have those, you can iterate on kaniko.
When you're ready, you can create a PR!
The Go tools require that you clone the repository to the `src/github.com/GoogleContainerTools/kaniko` directory in your `GOPATH`.
To check out this repository:
- Create your own fork of this repo
- Clone it to your machine:
```shell
mkdir -p ${GOPATH}/src/github.com/GoogleContainerTools
cd ${GOPATH}/src/github.com/GoogleContainerTools
git clone git@github.com:${YOUR_GITHUB_USERNAME}/kaniko.git
cd kaniko
git remote add upstream git@github.com:GoogleContainerTools/kaniko.git
git remote set-url --push upstream no_push
```
Adding the `upstream` remote sets you up nicely for regularly syncing your fork.
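For example, one common way to sync your fork with upstream (the default branch may be named `main` or `master` depending on your checkout) is:

```shell
# Fetch upstream and fast-forward your local default branch.
git fetch upstream
git checkout main            # or master, depending on the default branch name
git merge --ff-only upstream/main
# Push the synced branch back to your fork.
git push origin main
```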
Images built with kaniko should be no different from images built elsewhere. While you iterate on kaniko, you can verify images built with kaniko by:
- Build the image using another system, such as `docker build`
- Use `container-diff` to diff the images
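For example, a rough sketch of that workflow (the image names are placeholders, and this assumes both images are available in the local Docker daemon):

```shell
# Build the same Dockerfile with docker and with kaniko, tagging the results differently,
# then compare the filesystems of the two images.
docker build -t my-image:docker .
# ... build my-image:kaniko with the kaniko executor ...
container-diff diff daemon://my-image:docker daemon://my-image:kaniko --type=file
```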
kaniko has both unit tests and integration tests.
Please note that the tests require a Linux machine. If you work on macOS or Windows, use Vagrant to quickly set up the test environment you need.
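For example, one way to get a Linux VM with Vagrant (the box name below is only an example; if the repository ships its own Vagrantfile, running `vagrant up` from the repo root is enough):

```shell
# Bring up a generic Ubuntu VM and ssh into it.
vagrant init ubuntu/focal64
vagrant up
vagrant ssh
```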
The unit tests live with the code they test and can be run with:
```shell
make test
```
These tests will not run correctly unless you have checked out your fork into your `$GOPATH`.
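If you only want the unit tests for the package you are touching, plain `go test` also works; the package path below is just an example:

```shell
# Run the unit tests for a single package.
go test ./pkg/executor/... -v
```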
A helper script to install and run the linter is provided; run it from the root of the project:

```shell
./hack/linter.sh
```
To fix any `gofmt` issues, you can simply run `gofmt` with the `-w` flag:

```shell
find . -name "*.go" | grep -v vendor/ | xargs gofmt -l -s -w
```
Currently, the integration tests in the `integration` directory can be run against your own GCP project or a local registry. These tests will be kicked off by reviewers for submitted PRs using GitHub Actions. In either case, you will need the following tools:
To run integration tests with your GCloud Storage, you will also need:

- `gcloud`
- `gsutil`
- A bucket in GCS which you have write access to via the user currently logged into `gcloud`
- An image repo which you have write access to via the user currently logged into `gcloud`
- A Docker account and a `~/.docker/config.json` with login credentials, in case you run into rate limiting problems during tests
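As a rough sketch, assuming a placeholder bucket name, that setup might look like:

```shell
# Create a GCS bucket you can write to (the name is a placeholder).
gsutil mb gs://my-kaniko-test-bucket
# Let docker push to your gcr.io image repo using your gcloud credentials.
gcloud auth configure-docker
# Optional: log in to Docker Hub so ~/.docker/config.json has credentials,
# which helps if you hit rate limiting during tests.
docker login
```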
Once this is done, you must override the project using environment variables:

- `GCS_BUCKET` - the name of your GCS bucket
- `IMAGE_REPO` - the path to your Docker image repo

This can be done as follows:
```shell
export GCS_BUCKET="gs://<your bucket>"
export IMAGE_REPO="gcr.io/somerepo"
```
Log in for both user and application credentials:

```shell
gcloud auth login
gcloud auth application-default login
```
Then you can launch integration tests as follows:
```shell
make integration-test
```
You can also run tests with `go test`, for example to run tests individually:

```shell
go test ./integration -v --bucket $GCS_BUCKET --repo $IMAGE_REPO -run TestLayers/test_layer_Dockerfile_test_copy_bucket
```
To run integration tests locally against a local registry and a local GCS bucket, set the `LOCAL` environment variable:

```shell
LOCAL=1 make integration-test
```
In order to test only specific Dockerfiles during local integration testing, you can specify a pattern to match against inside the `integration/dockerfiles` directory:

```shell
DOCKERFILE_PATTERN="Dockerfile_test_add*" make integration-test-run
```

This will only run the Dockerfiles that match the pattern `Dockerfile_test_add*`.
The goal is for kaniko to be at least as fast at building Dockerfiles as Docker is. To that end, we've built in benchmarking that measures not only how long each full run takes, but also how long each step of each run takes. To turn on benchmarking, just set the `BENCHMARK_FILE` environment variable, and kaniko will output all the benchmark info for each run to that file location.
```shell
docker run -v $(pwd):/workspace -v ~/.config:/root/.config \
  -e BENCHMARK_FILE=/workspace/benchmark_file \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=<path to Dockerfile> --context=/workspace \
  --destination=gcr.io/my-repo/my-image
```
Additionally, the integration tests can output benchmarking information to a `benchmarks` directory under the `integration` directory if the `BENCHMARK` environment variable is set to `true`:

```shell
BENCHMARK=true go test -v --bucket $GCS_BUCKET --repo $IMAGE_REPO
```
If your GCB builds are slow, you can check which phases in kaniko are bottlenecks or taking the most time. To do this, set the `BENCHMARK_FILE` environment variable in your cloudbuild.yaml like this:
```yaml
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --build-arg=NUM=${_COUNT}
  - --no-push
  - --snapshotMode=redo
  env:
  - 'BENCHMARK_FILE=gs://$PROJECT_ID/gcb/benchmark_file'
```
You can download the file `gs://$PROJECT_ID/gcb/benchmark_file` using the `gsutil cp` command.
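For example, to copy it into your current directory:

```shell
gsutil cp gs://$PROJECT_ID/gcb/benchmark_file .
```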
When you have changes you would like to propose to kaniko, you will need to:
- Ensure the commit message(s) describe what issue you are fixing and how you are fixing it (include references to issue numbers if appropriate)
- Create a pull request
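As a rough sketch of that flow (the branch name and commit message are placeholders):

```shell
# Work on a feature branch in your fork.
git checkout -b my-fix
git commit -a -m "Fix <short description> (fixes #<issue number>)"
git push origin my-fix
# Then open a pull request from your fork against GoogleContainerTools/kaniko on GitHub.
```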
Each PR must be reviewed by a maintainer. This maintainer will add the `kokoro:run` label to a PR to kick off the integration tests, which must pass for the PR to be submitted.