Put the playground UI inside docker #191
Conversation
When you asked for the Docker version on the official server, I realized that you don't have access to the current setup. It's different from what's documented in the README (that describes the i32 deployment). I've been holding off on documenting it because I was under the impression that we were going to follow erickt's path, where it was very likely we'd need to build an AMI anyway. That said, here's the top-of-mind structure as I remember it now:
Pieces of this that I find important:
Pieces that may or may not be important:
Pieces that probably aren't important:
Now that I've had a chance to read through, I'm thinking we might be able to do this:
This ensures that we are using a slightly more optimized build (see .travis/compile-frontend.sh). There's also some nuance that I don't currently remember around the specific musl environment to build in: it needs some C dependencies compiled for musl that our own nightly doesn't have (see the chosen Docker image in .travis/compile-backend.sh). The easiest thing to do would be to copy […]. What do you think?
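For concreteness, here's a rough sketch of what reusing the uploaded artifacts could look like. The bucket name and key layout are hypothetical placeholders, not the actual Travis configuration:

```sh
#!/usr/bin/env bash
# Sketch only: pull the Travis-built artifacts from S3 instead of rebuilding
# them locally. The bucket name and object keys are made-up placeholders.
set -euo pipefail

BUCKET="s3://playground-artifacts"   # hypothetical bucket
COMMIT="$(git rev-parse HEAD)"

# Frontend bundle, as produced by .travis/compile-frontend.sh
mkdir -p ui/frontend/build
aws s3 cp "$BUCKET/$COMMIT/frontend.tar.gz" - | tar xz -C ui/frontend/build

# Backend binary, as produced by .travis/compile-backend.sh
mkdir -p ui/target/x86_64-unknown-linux-musl/release
aws s3 cp "$BUCKET/$COMMIT/ui" ui/target/x86_64-unknown-linux-musl/release/ui
chmod +x ui/target/x86_64-unknown-linux-musl/release/ui
```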
To be clear, I don't think we should do that during this PR; I'd rather have fewer moving parts.
```sh
repository=shepmaster
```
```sh
# Build outside the image so nodejs isn't installed on the main image
```
I can't wait for multi-stage builds to land on a version of Docker we have!
Wow, I've wanted that for about three years now. Didn't realise they finally decided to implement it!
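For reference, here's a minimal sketch of what the multi-stage version could look like once Docker 17.05+ is available. Base images, stage names, and paths are illustrative assumptions, not the playground's actual Dockerfile:

```dockerfile
# Sketch of a multi-stage build (requires Docker 17.05+). All names and
# paths below are illustrative.

# Stage 1: build the frontend so nodejs never ships in the final image
FROM node:latest AS frontend
COPY frontend/ /build/
WORKDIR /build
RUN yarn && yarn build

# Stage 2: build the musl-linked backend so no Rust toolchain ships either
FROM shepmaster/rust-nightly AS backend
COPY . /ui
WORKDIR /ui
RUN rustup target add x86_64-unknown-linux-musl && \
    cargo build --target=x86_64-unknown-linux-musl --release

# Final stage: copy in only the compiled artifacts
FROM alpine:latest
COPY --from=frontend /build/build /playground/frontend
COPY --from=backend /ui/target/x86_64-unknown-linux-musl/release/ui /playground/ui
CMD ["/playground/ui"]
```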
```sh
# Also don't put a rust compiler in the main playground image
docker run -it --rm -v $PWD:/ui --workdir /ui --entrypoint /bin/bash shepmaster/rust-nightly -c '
rustup target add x86_64-unknown-linux-musl &&
cargo build --target=x86_64-unknown-linux-musl --release
```
With my proposed change to reuse the S3-uploaded artifacts, this file might not make sense as ui/build.sh anymore. It could just be moved to the .travis/ directory.
(It looks like you get parallel builds from Travis, so I'll continue based on that assumption.)

Just a thought: since the playground frontend and backend don't actually depend on rustfmt or clippy, you can cut ~6 minutes from your build time just by moving those two into the tools stage. If you were then to combine the frontend and backend jobs, you'd still save 5m30s because of how long clippy takes to compile. This also lets you eliminate S3. I think that's a nice result, both because it means fewer moving parts and because it lets people run the exact same build scripts themselves with no changes; see the sketch below.
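Here's the kind of stage layout I mean. The job and script names are placeholders, not the repository's actual .travis.yml:

```yaml
# Illustrative only: frontend and backend build in parallel in the first
# stage, while rustfmt and clippy move to a later "tools" stage so they
# no longer gate the main build.
jobs:
  include:
    - stage: build
      script: .travis/compile-frontend.sh
    - stage: build
      script: .travis/compile-backend.sh
    - stage: tools
      script: .travis/build-rustfmt.sh
    - stage: tools
      script: .travis/build-clippy.sh
```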
I've not yet updated the dev workflow, since doing that inside Docker isn't ideal (one issue is Cargo re-downloading packages every time).
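One common workaround (sketched here, not part of this PR) is to mount a named volume over Cargo's registry so repeated `docker run` builds reuse the already-downloaded crates. This assumes the image keeps CARGO_HOME at the default /root/.cargo:

```sh
# Create a reusable volume for the registry cache (the name is arbitrary)
docker volume create playground-cargo-registry

# Same build as above, but with the registry cache mounted in
docker run -it --rm \
  -v $PWD:/ui --workdir /ui \
  -v playground-cargo-registry:/root/.cargo/registry \
  --entrypoint /bin/bash shepmaster/rust-nightly -c '
    cargo build --target=x86_64-unknown-linux-musl --release
  '
```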