environment promotion strategy #63
Front-end has its own staging environment, separate from our infra. It'd be smarter to be within the same infra system as everybody else, but in terms of solving the problem, we're good for now :)
Happy the immediate issue is resolved. Tags are the way to go IMO and I would avoid using
pybot/pyback:
- `staging` branch: targets a test Slack, but shows what happens when deployed in ECS.
- `master`: targets prod.

@OperationCode/front-end and current frontend:
Backend does the same as front-end. For both front-end and backend, you can go into kubectl and manually target whatever container tag you'd like. When I request the current images for all pods, we see some oddities:
In the above case, we see that frontend is using both `2334` and `latest`, while backend is using `1269`, `1245`, and `latest`.
I don't know the correct command to list the namespace for each of these, but I remember finding out that we are setting the staging kube as `latest` and prod as a numerical value. I don't know why we have three different backend image versions.
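For reference, one way to list the namespace alongside each pod's image is a `jsonpath` query (a sketch; this assumes plain kubectl access to the cluster, and pod/namespace names will vary):

```shell
# Print namespace, pod name, and container image(s) for every pod,
# one pod per line, tab-separated.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```

That should make it obvious at a glance which namespace is pinned to `latest` versus a numeric build tag.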
After talking with @kylemh (IIRC), he doesn't need a way to promote a deployed code instance to staging; for their purposes it's fine to have staging only running locally. But I think a better strategy would be to promote to staging on merge to master, by tags, and then promote to production manually by adding a production tag, or automatically based on some criteria.
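The promote-by-tag flow could look roughly like this (a sketch; the image names, registry, namespace, and deployment names here are assumptions, and `1269` is just one of the build numbers mentioned above):

```shell
# On merge to master: retag the freshly built image for staging and push it.
docker tag operationcode/backend:1269 operationcode/backend:staging
docker push operationcode/backend:staging

# Manual promotion to production: pin the deployment to the exact build
# number rather than a mutable tag like "latest", so rollbacks are trivial.
kubectl -n production set image deployment/backend backend=operationcode/backend:1269
```

Pinning prod to an immutable numeric tag also avoids the ambiguity we're seeing now, where `latest` means different things per namespace.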
Our initial concern was having k instances of staging for k developers, but I don't see that as a current problem.

Lastly, we need a way for the data sources (backend, pyback RDS instances) to have production data propagate down, so that when we update those instances we can easily carry the data over. I think for the time being a nightly copy, plus the ability to trigger one manually, would be fine, with the understanding that if you are working on something that requires database migrations, you'll have to perform the migration again with the updated data.