
Issue of Training Section #1

Open
fayaz66 opened this issue Jan 23, 2025 · 0 comments
fayaz66 commented Jan 23, 2025

I am experiencing an issue where the training process does not begin when executing the code. If anyone has experience running this code or has encountered a similar issue, I would greatly appreciate your guidance. Any suggestions or debugging tips would be highly valuable.

Thank you in advance for your help!

""
python -m torch.distributed.launch
--master_port ${port_number} \ # 12346
--nproc_per_node ${NUM_GPUs} \ # e.g., 1
function/train_pipework_dist.py
--cfg cfgs/pipework/${yaml_file} \ # e.g., cfgs/pipework/xxx.yaml
--num_points ${num_points}
--val_freq 10
--save_freq 50
--loss ${LOSS} \ # e.g., smooth
--use_avg_max_pool ${use_avg_max_pool} # e.g., true ""
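In case it helps with debugging, the same launch can also be written with torchrun, which replaces the deprecated torch.distributed.launch in recent PyTorch releases. This is a general PyTorch sketch, not something taken from this repository, and it uses the same placeholder variables as above. Note that torchrun passes the local rank through the LOCAL_RANK environment variable rather than a --local_rank command-line argument, so the training script may need to read it from the environment.

# Equivalent launch with torchrun (PyTorch >= 1.10); placeholders are assumed, same as above.
torchrun \
    --master_port ${port_number} \
    --nproc_per_node ${NUM_GPUs} \
    function/train_pipework_dist.py \
    --cfg cfgs/pipework/${yaml_file} \
    --num_points ${num_points} \
    --val_freq 10 \
    --save_freq 50 \
    --loss ${LOSS} \
    --use_avg_max_pool ${use_avg_max_pool}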
