
Tune resource requirements for Sector Studies #480

Open
2 tasks done
trevorb1 opened this issue Nov 26, 2024 · 3 comments
Labels: bug (Something isn't working), Sector (Sector Coupling Issue)

Comments

@trevorb1 (Collaborator) commented Nov 26, 2024

Checklist

  • I am using the current master branch
  • I am running on an up-to-date pypsa-usa environment. Update via conda env update -f envs/environment.yaml

The Issue

Running sector studies makes some rules (in particular build_demand) much heavier than their electrical-only counterparts. This often (at least on my computer) causes rules to fail with an error along the lines of:

/usr/bin/bash: line 1: 14318 Killed                  /home/trevor/miniforge3/envs/pypsa-usa/bin/python3.11 /home/trevor/master/pypsa-usa/workflow/.snakemake/scripts/tmp1zwbgps2.build_demand.py

This is just a resource-allocation issue, and rerunning the workflow will eventually get past it.

Within the Snakemake rules, the resources declarations need to be updated to use the sector wildcard to determine appropriate resource limits.
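One way to do this is a wildcard-aware resource callable, which Snakemake accepts for `resources` entries. A minimal sketch, where the function name, the memory figures, and the assumption that `"E"` denotes the electricity-only case are all illustrative, not values from the pypsa-usa workflow:

```python
def demand_memory(wildcards, attempt):
    """Request more memory for sector-coupled runs, and grow the
    request on each retry (Snakemake passes attempt >= 1)."""
    # Illustrative figures: 4 GB for electricity-only, 16 GB otherwise.
    base = 4000 if wildcards.sector == "E" else 16000  # MB
    return base * attempt

# In the Snakefile the rule would then declare (sketch):
# rule build_demand:
#     resources:
#         mem_mb=demand_memory
#     retries: 2
```

Combined with `retries`, a failed attempt automatically reruns with a larger request instead of needing a manual restart.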

Steps To Reproduce

  1. Turn on sector studies
  2. Run snakemake -j32 (or similar)

Expected Behavior

No response

Error Message

No response

Anything else?

No response

@trevorb1 added the bug and Sector labels Nov 26, 2024
@ktehranchi (Collaborator) commented
I don't think the resource allocations (mem_mb / threads) are actually used when you run locally; Snakemake only uses those for cluster runs. To deal with this, I think you should run with -j1 --cores all, which reduces the number of jobs running at once while still using all cores.

@trevorb1 (Collaborator, Author) commented

Ahhh, right, thank you. I forgot that -j applies to clusters.

@trevorb1 (Collaborator, Author) commented Dec 1, 2024

I'm going to reopen this, as I believe memory allocation is used when run locally. I can't find where the docs say one way or the other (the closest I can find is here), but when running locally on my computer, if I monitor memory usage, I can see more memory being used when I relax the limits.
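For what it's worth, one way to bound memory on a local run, assuming the rules declare mem_mb, is to pass a global resource budget on the command line; the scheduler then packs concurrent jobs against that budget. A sketch (the budget and job count are illustrative):

```sh
# Cap the total memory the scheduler will commit to concurrent jobs;
# each rule's mem_mb request counts against this budget.
snakemake -j8 --resources mem_mb=32000
```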

@trevorb1 trevorb1 reopened this Dec 1, 2024