
Problem with changing the CUDA device via the "SUPIR_device" variable #132

Open

CruelBrutalMan opened this issue Jul 18, 2024 · 2 comments

CruelBrutalMan commented Jul 18, 2024

I have a problem: when I change the device, for example

SUPIR_device = 'cuda:1'

I then get this error:

Traceback (most recent call last):
  File "test.py", line 195, in neuro_calc
    samples = model.batchify_sample(LQ_img, captions, num_steps=args.edm_steps, restoration_scale=args.s_stage1, s_churn=args.s_churn,
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/SUPIR/models/SUPIR_model.py", line 121, in batchify_sample
    c, uc = self.prepare_condition(_z, p, p_p, n_p, N)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/SUPIR/models/SUPIR_model.py", line 166, in prepare_condition
    c, uc = self.conditioner.get_unconditional_conditioning(batch, batch_uc)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/sgm/modules/encoders/modules.py", line 185, in get_unconditional_conditioning
    c = self(batch_c)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/sgm/modules/encoders/modules.py", line 206, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/sgm/util.py", line 59, in do_autocast
    return f(*args, **kwargs)
  File "/nvmedata/Mikhail/arc_upscale/SUPIR/sgm/modules/encoders/modules.py", line 493, in forward
    outputs = self.transformer(
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 823, in forward
    return self.text_model(
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 731, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 229, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/nvmedata/programs/anaconda3/envs/SUPIR/lib/python3.8/site-packages/torch/nn/functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)


Why does this happen? I only changed the CUDA device.
Maybe I need to change the code somewhere for everything to work as it should? After all, specifying which GPU to use should not be a difficult task, but in the current implementation it does not work.
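
For context, the traceback fails inside the CLIP text encoder (token_embedding): its weights are still on cuda:0 while the input ids have been moved to cuda:1, which suggests some submodule stays pinned to the default device during loading. A minimal sketch of the usual PyTorch-level fix, assuming model and LQ_img are the same objects as in the test.py call at the top of the traceback:

import torch

# Assumption: model is the loaded SUPIR model and LQ_img the input tensor
# passed to model.batchify_sample(...) above. Calling .to(device) on the
# top-level module moves the weights of every registered submodule,
# including the conditioner's CLIP text encoder, onto the same GPU.
SUPIR_device = torch.device('cuda:1')
model = model.to(SUPIR_device)
LQ_img = LQ_img.to(SUPIR_device)  # inputs must live on the same device as the weights

# Caveat: this only helps if all submodules are registered on the model.
# Anything constructed with a hardcoded 'cuda:0' inside the repository
# would still have to be patched at its creation site.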

TrNgTinh commented Sep 6, 2024

I am having the same problem. Did you solve it? @CruelBrutalMan

CruelBrutalMan (Author) commented Sep 11, 2024

> I am having the same problem. Did you solve it? @CruelBrutalMan

@TrNgTinh Yes, I solved this problem, but it doesn't seem like the right solution.

I set
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
and only then
SUPIR_device = 'cuda:0'

Unfortunately, I couldn't find another solution.
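
One caveat worth noting about this workaround: CUDA_VISIBLE_DEVICES must be set before PyTorch initializes CUDA, so the safest place is before importing torch at all. A sketch of the required ordering, assuming a script shaped like the test.py from the traceback:

import os

# Restrict the process to physical GPU 1 before torch touches CUDA.
# From this process's point of view, that GPU then becomes cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # import torch only after the environment is restricted

SUPIR_device = 'cuda:0'  # now refers to physical GPU 1

This sidesteps the bug rather than fixing it: with only one device visible, every hardcoded cuda:0 in the codebase happens to point at the intended GPU.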
