support for round robin allocation of cuda cards to workers #36

Merged · 1 commit · Oct 9, 2023

Commits on Aug 3, 2023

  1. support for round robin allocation of cuda cards to workers

    A gunicorn post_fork hook has been added to set CUDA_VISIBLE_DEVICES,
    which determines the device torch will use.

    An app-level config variable "APP_CUDA_DEVICE_COUNT" is required to
    indicate how many devices are to be used.

    The devices are allocated to the container in the docker compose
    configuration. A minimal sketch of such a hook is shown after the
    commit details below.
    Richard Beare authored and richardbeare committed Aug 3, 2023
    Commit effa069
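
The commit message only describes the change, so here is a minimal sketch of what such a gunicorn post_fork hook could look like. Reading "APP_CUDA_DEVICE_COUNT" from the environment and using worker.age as the round-robin index are assumptions for illustration; the PR's actual implementation may differ on both points.

```python
# gunicorn.conf.py -- minimal sketch of a post_fork hook that pins each
# worker to one GPU. Assumptions: the app-level APP_CUDA_DEVICE_COUNT
# setting is also exposed as an environment variable, and worker.age is
# used as the round-robin index.
import os


def post_fork(server, worker):
    # How many CUDA devices the deployment has been given.
    device_count = int(os.environ.get("APP_CUDA_DEVICE_COUNT", "1"))

    # Round-robin: the Nth worker spawned gets device N % device_count.
    device_id = worker.age % device_count

    # Must be set before torch initialises CUDA in this worker, so that
    # torch only sees (and therefore uses) the assigned card.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
    server.log.info("worker %s assigned CUDA device %s", worker.pid, device_id)
```

Because CUDA_VISIBLE_DEVICES is set before the application (and torch) initialises CUDA in the worker, each worker sees only its assigned card, while the docker compose configuration passes the full set of devices through to the container.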