MNT Split behat jobs #67
Conversation
I'd prefer to just run all jobs together for the kitchen sink - it's simpler and doesn't clog the runners as much. There are already a lot of runners taken up every time we run sink CI, and we run sink CI pretty frequently with architectural changes (which we're focusing on more right now).
I'd like to try this first before we rule it out as a bad idea. My expectation is that splitting the jobs will be more helpful than a hindrance. There's really not that much overhead added by the extra jobs - probably only a minute or two of extra runner time across the entire run. However, the overall run time of the workflow is going to be dramatically shorter, because elemental will go from taking just under an hour to 10 minutes. There will now be more jobs than runners, so perhaps it'll be 20 minutes total (we won't know the actual total for sure until we do it "live"). Personally I get blocked waiting for the recipe-sink integration tests to finish and I'm forced to context switch for much longer than I'd like. And if I need to change one thing and then re-run the whole thing, that's another hour. I think the changes in this PR will have a real benefit to my personal workflow. If it does turn out to be an actual hindrance to the team then we can look at scaling things back.
These are failing with "Invalid input for endtoend_tags" |
Is there a way we can get CI to build these for us? It seems likely that at some stage we'll add new job tags to a module and we'll forget to update this.
You've mitigated that for some of these by adding tags that don't exist yet - but if we add a job tag to a module that doesn't currently use them we're out of luck.
This is also now a really long set of hardcoded stuff which just screams "something will go stale" lol.
If you can't think of an easy way to automate this, no worries I can merge as-is.
We shouldn't be out of luck, as we're hardcoding the jobs here rather than going via generate-matrix. So if we add tags to a module that doesn't have them yet it doesn't matter - it'll just run all the endtoend tests in the module. The risk is if we add more tags to a module without updating it here, which is why I've added three 'spare' tags for each module.
Ahh yeah fair point. I still don't like it lol but that's my aversion to hardcoding big lists like this generally, I think. I'll merge when the others are merged.
The large number of failures here is a bit concerning though, as an aside.
The failures are because the other PRs that define the job tags used in the yml config in this PR haven't been merged yet :-)
All green now, LGTM
Issue silverstripe/.github#278
For the 'last' job, I've added a few extra job tags so that any new job files added in the future will still be run. I would expect three spare tags to last a very long time.
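To make the approach concrete, here's a minimal sketch of what a hardcoded split with spare tags could look like. Everything in it is an assumption for illustration - the job names, tag values, and the reusable workflow path/ref/inputs are hypothetical; only the endtoend_tags name comes from this thread.

```yaml
# Hypothetical sketch only - job names, tag values and the reusable workflow
# path/ref are assumptions, not the actual contents of this PR.
name: Sink CI - behat split (sketch)

on:
  workflow_dispatch:

jobs:
  # The module's behat suite is split across hardcoded jobs rather than
  # being generated via generate-matrix.
  some-module-behat-1:
    uses: silverstripe/gha-ci/.github/workflows/ci.yml@v1
    with:
      endtoend_tags: 'job1'
  # The 'last' job for the module carries its remaining tag plus three
  # spare tags (job3-job5), so a tag added to the module later still runs
  # without this list needing an update.
  some-module-behat-2:
    uses: silverstripe/gha-ci/.github/workflows/ci.yml@v1
    with:
      endtoend_tags: 'job2,job3,job4,job5'
```

The design bet, as described above, is that a spare tag matching no tests is effectively a no-op, so the hardcoded list can absorb a few future tag additions before it goes stale.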