Put the playground UI inside docker #191

Open · wants to merge 1 commit into main

Conversation

@aidanhs (Member) commented Jul 28, 2017

I've not yet updated the dev workflow since doing that inside docker isn't ideal (one issue is cargo redownloading packages every time).

@shepmaster (Member)

When you asked for the Docker version on the official server, I realized that you don't have access to the current setup. It's different from what's documented in the README (that's the i32 deployment). I've been holding off on documenting it because I was under the impression that we were going to follow erickt's path, where it was very likely we'd need to build an AMI anyway.

That said, here's the top-of-mind structure as I remember it now:

  1. We run the complete build on every commit to master, as well as at least daily using Travis' cron feature (example build). This is broken up into multiple stages so that more things can happen in parallel, reducing the total wallclock time.

  2. The build produces a number of artifacts:

    • Containers for each of the 3 channels — uploaded to Docker Hub
    • Containers for rustfmt and clippy — uploaded to Docker Hub
    • The server binary proper — uploaded to S3
    • The frontend files — uploaded to S3
  3. The server has a crontab entry, configured with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in its environment, that runs the following script hourly:

    #!/bin/bash
    
    set -euv -o pipefail
    
    root=/home/ec2-user
    
    # Get new docker images
    $root/fetch.sh
    
    # Clean old docker images
    docker system prune -f || true
    
    # Get new artifacts
    aws s3 sync s3://playground-artifacts $root/playground-artifacts
    # These artifacts don't change names and might stay the same size
    # https://github.com/aws/aws-cli/issues/1074
    aws s3 sync \
        --exclude='*' \
        --include=ui \
        --include=build/index.html \
        --include=build/robots.txt \
        --exact-timestamps \
        s3://playground-artifacts $root/playground-artifacts
    chmod +x $root/playground-artifacts/ui
    
    # Restart to get new server binary
    sudo stop playground || true
    sudo start playground
    
  4. The server has an Upstart service to ensure the binary keeps running

    description "The Rust Playground"
    author      "Jake Goulding"
    
    start on filesystem or runlevel [2345]
    stop on shutdown
    
    env TMPDIR=/mnt/playground
    env RUST_LOG=info
    env PLAYGROUND_UI_ADDRESS=0.0.0.0
    env PLAYGROUND_UI_PORT=8080
    env PLAYGROUND_UI_ROOT=/home/ec2-user/playground-artifacts/build
    env PLAYGROUND_CORS_ENABLED=1
    
    chdir /home/ec2-user/playground-artifacts
    script
        exec >>/var/log/upstart/playground.log 2>&1
        ./ui
    end script
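
For reference, the crontab entry driving step 3 might look something like this; the file path, script name, user, and credential values here are assumptions for illustration, not the server's actual configuration — only the hourly schedule and the two AWS variables are from the description above:

```
# Hypothetical /etc/cron.d/playground entry — actual paths on the
# server may differ.
AWS_ACCESS_KEY_ID=AKIA...example
AWS_SECRET_ACCESS_KEY=example-secret
# min hour dom month dow  user      command
0     *    *   *     *    ec2-user  /home/ec2-user/update.sh
```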
    

Pieces of this that I find important:

  • Continuous deployment — I don't want to SSH into the server except for troubleshooting.
  • The build in Travis is parallelized and optimized as much as possible — pull requests have to go through a similar process, but I skip building the containers and uploading the artifacts. At some future point, I want to actually run the Ruby integration tests in CI.

Pieces that may or may not be important:

  • The playground assets are cached reasonably heavily: the index page is cached for an hour and the assets proper are cached for a year. To date, I've just been letting old assets pile up because they aren't that big. Since they should be cached client-side, this is probably not a big worry.

Pieces that probably aren't important:

  • I've been cross-compiling to musl simply so I could run the binary directly on Amazon Linux without installing a bunch of gunk. If we are going to be running inside a container where we have control over the runtime, this can probably be removed.

@shepmaster (Member)

Now that I've had a chance to read through, I'm thinking we might be able to do this:

  1. Reuse the existing build and upload to S3 phases.
  2. Add a new phase at the end that downloads the artifacts from the previous phase, builds the combined Docker image from them, and uploads it to Docker Hub.
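
A minimal sketch of that added phase, assuming the playground-artifacts bucket from the cron script above; the script name, image tag, and Dockerfile name are hypothetical:

```shell
#!/bin/bash
# Hypothetical .travis/build-combined.sh: pull the artifacts produced
# by the earlier stages, then build and push a single combined image.
set -euv -o pipefail

# Reuse the artifacts already uploaded to S3 by the build/upload phases
aws s3 sync s3://playground-artifacts ./playground-artifacts

# Bake the server binary and frontend files into one image
docker build -t shepmaster/playground -f Dockerfile.combined .
docker push shepmaster/playground
```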

This ensures that we keep using the slightly more optimized build (see .travis/compile-frontend.sh). There's also some nuance I don't currently remember around the specific musl environment to build in: it needs some C dependencies compiled for musl that our own nightly doesn't have (see the chosen Docker image in .travis/compile-backend.sh).

The easiest thing to do would be to copy .travis/build-containers.sh to a new name and have it only call ui/build.sh.

What do you think?

@shepmaster (Member)

> [compiling to musl] can probably be removed.

To be clear, I don't think we should do that during this PR; I'd rather have fewer moving parts.


repository=shepmaster

# Build outside the image so nodejs isn't installed on the main image
Member:

I can't wait for multi stage builds to land on a version of Docker we have!

Member Author:

Wow, I've wanted that for about three years now. Didn't realise they finally decided to implement it!

# Also don't put a rust compiler in the main playground image
docker run -it --rm -v $PWD:/ui --workdir /ui --entrypoint /bin/bash shepmaster/rust-nightly -c '
rustup target add x86_64-unknown-linux-musl &&
cargo build --target=x86_64-unknown-linux-musl --release
Member:

With my proposed change to reuse the S3-uploaded artifacts, this file might not make sense as ui/build.sh anymore. It could just be moved to the .travis/ directory.

@aidanhs (Member Author) commented Aug 1, 2017

(it looks like you get parallel builds from travis, so I'll continue based on that assumption)

Just a thought: since the playground frontend and backend don't actually depend on rustfmt or clippy, you can cut ~6min from your build time by just moving them into the tools stage.

If you were then to combine frontend and backend, you'd still be saving 5min 30s due to how long clippy takes to compile. This would let you eliminate S3 entirely.
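
A sketch of how the Travis stages might be rearranged to do that; the stage names and script arguments here are assumptions for illustration, not the repository's actual .travis.yml:

```yaml
# Hypothetical .travis.yml fragment: the rustfmt/clippy containers move
# into a separate "tools" stage so the frontend + backend stage no
# longer waits on them.
jobs:
  include:
    - stage: frontend and backend
      script: ./ui/build.sh
    - stage: tools
      script: ./.travis/build-containers.sh rustfmt clippy
    - stage: channel containers
      script: ./.travis/build-containers.sh stable beta nightly
```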

I think this is a nice result both because of fewer moving parts and because it lets people run the exact same build scripts themselves with no changes.
