Project Meeting 2024.04.25
Michelle Bina edited this page May 7, 2024
- Project Admin
- Update on Phase 9b contracting
- Guy presenting at AMPO Symposium
- May 9 Agenda
- Phase 9a Updates
- Phase 9b Contracting: There are things to follow up on from the most recent partners-only meeting, requiring more internal partner discussions. Budget info has been provided by all the consultants, so nothing else is needed from them.
- Guy presenting at the AMPO symposium.
- May 9 Agenda: Request for MPOs to present anything they are doing regarding visualization.
- There are currently stable and functioning one- and two-zone canonical models, with and without sharrow enabled. Each has its own repository and data. All consultants have started, or are about to start, their full-scale test runs. Jeff ran a 100k sample of the SANDAG model on his laptop in 2 hours (single-process, multithreaded) with sharrow on. Run time was high for the trip destination component; a quick look found at least one string comparison, so there is some optimization that can be done there.
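As an aside on the string-comparison finding above, the following is a minimal, hypothetical sketch (not ActivitySim code; the column name is made up) of why repeated string comparisons in a hot path are a common pandas performance issue, and how converting to a categorical dtype lets comparisons work on integer codes instead:

```python
# Hypothetical illustration of a string-comparison hot spot: comparing an
# object/str column row-by-row re-hashes strings, while a categorical column
# compares the underlying integer codes.
import numpy as np
import pandas as pd

# Fake destination-choice data; "area_type" is an illustrative column name.
rng = np.random.default_rng(0)
zones = pd.DataFrame({"area_type": rng.choice(["cbd", "urban", "rural"], size=100_000)})

# Slower path: elementwise string equality on a string column.
mask_str = zones["area_type"] == "cbd"

# Faster path: convert once to categorical, then compare integer codes.
zones["area_type"] = zones["area_type"].astype("category")
mask_cat = zones["area_type"] == "cbd"

# Both approaches select exactly the same rows.
assert mask_str.equals(mask_cat)
```

The one-time conversion cost is amortized when the same column is compared many times, which is typical in destination choice where the same zone attributes are evaluated across many alternatives.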
- Next steps: Everyone to confirm that they can run both models with 100% sample, with and without sharrow, with performance profiling.
- What do we want to test? Options include single processing and multiprocessing, each with and without sharrow, at full scale (100% sample) on both canonical models; sharrow compiling; chunk training on and off; and explicit chunking, which is only available on a few models (the interaction and scheduling models, where memory usage is a problem). Noted that we don't need a full factorial of all combinations and should carefully select the experiments.
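One way to make the "curated, not full factorial" point concrete is to enumerate a core matrix plus a few targeted runs; this is an illustrative sketch only, and the option names are assumptions, not the actual test plan:

```python
# Sketch of a curated (non-full-factorial) experiment list for the
# performance tests discussed above. Names are illustrative.
from itertools import product

models = ["one_zone", "two_zone"]  # the two canonical models
sharrow_opts = [True, False]
multiprocess_opts = [True, False]

# Core matrix: every model, with and without sharrow, single- and multi-process.
experiments = [
    {"model": m, "sharrow": s, "multiprocess": mp}
    for m, s, mp in product(models, sharrow_opts, multiprocess_opts)
]

# Targeted additions instead of a full factorial over every option:
# chunk training only where memory pressure is the concern.
experiments += [
    {"model": m, "sharrow": True, "multiprocess": False, "chunk_training": True}
    for m in models
]

print(len(experiments))  # 8 core runs + 2 targeted chunk-training runs
```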
- Under this phase of work, can we think through a roadmap of what to do next?
- For example, chunking is a problem. We want to discuss the root cause: is the issue memory usage and being able to run on a small machine? Is it difficult for the user to set up? Try to pin down what root problem(s) chunking needs to solve. Users have noted some unpredictability with chunking/chunk training.
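For context, the core idea behind chunking can be sketched in a few lines; this is a toy illustration of the memory trade-off, not ActivitySim's chunking implementation:

```python
# Toy sketch of chunking: process a large table in fixed-size slices so peak
# memory for intermediates stays bounded, at the cost of extra bookkeeping
# (and the difficulty of picking a good chunk size, per the discussion above).
import numpy as np

def process_in_chunks(values, chunk_size):
    """Apply a per-row computation one chunk at a time."""
    results = []
    for start in range(0, len(values), chunk_size):
        chunk = values[start:start + chunk_size]
        # Only `chunk_size` intermediate rows are alive at once here,
        # instead of a full-table intermediate array.
        results.append(chunk * 2.0)
    return np.concatenate(results)

data = np.arange(10.0)
assert np.array_equal(process_in_chunks(data, chunk_size=4), data * 2.0)
```

The unpredictability users report typically comes from estimating how large those intermediates will be ahead of time, which is what chunk training tries to learn.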
- Solution: Add the suggested tests in GitHub. Create a new performance-test issue tag and apply it to the issue corresponding to the experiment being tested. Each issue should live on the repo that goes with its example.