Commit

Fix vllm cleanup
ProbablyFaiz committed Oct 1, 2024
1 parent 2247ce4 commit a2713be
Showing 1 changed file with 4 additions and 1 deletion.
rl/llm/engines/local.py (4 additions, 1 deletion)
@@ -380,12 +380,15 @@ def __enter__(self):
     def __exit__(self, exc_type, exc_value, traceback):
         import torch
         import torch.distributed
-        from vllm.model_executor.parallel_utils.parallel_state import (
+        from vllm.distributed.parallel_state import (
+            destroy_distributed_environment,
             destroy_model_parallel,
         )
 
         LOGGER.info("Unloading VLLM model from GPU memory...")
         destroy_model_parallel()
+        destroy_distributed_environment()
+        del self.vllm.model_executor
         del self.vllm
         gc.collect()
         torch.cuda.empty_cache()
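For reference, below is a minimal, self-contained sketch of the engine lifecycle this commit arrives at. The class name, constructor, and __enter__ body are illustrative assumptions (the repository's own wrapper class is not shown in this diff); only the __exit__ teardown sequence mirrors the change above, and it assumes self.vllm holds a vLLM LLMEngine whose model_executor attribute can be dropped.

import gc
import logging

import torch

LOGGER = logging.getLogger(__name__)


class WrappedVLLMEngine:
    """Sketch of a context manager that unloads a vLLM engine on exit.

    Only the __exit__ teardown mirrors the commit; everything else here
    is illustrative scaffolding.
    """

    def __init__(self, model_name: str):
        self.model_name = model_name

    def __enter__(self):
        # Hypothetical construction path; the repository's actual __enter__
        # is not part of this diff.
        from vllm import EngineArgs, LLMEngine

        self.vllm = LLMEngine.from_engine_args(EngineArgs(model=self.model_name))
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        import torch.distributed
        # Newer vLLM releases expose these helpers under vllm.distributed;
        # the old vllm.model_executor.parallel_utils path no longer exists.
        from vllm.distributed.parallel_state import (
            destroy_distributed_environment,
            destroy_model_parallel,
        )

        LOGGER.info("Unloading VLLM model from GPU memory...")
        destroy_model_parallel()
        destroy_distributed_environment()
        # Drop the executor and engine references so their GPU allocations
        # become unreachable before collection and cache release.
        del self.vllm.model_executor
        del self.vllm
        gc.collect()
        torch.cuda.empty_cache()

Used as "with WrappedVLLMEngine("facebook/opt-125m") as engine: ...", the model is loaded on entry and GPU memory is released when the block exits, even if inference raised an exception.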
