When running the official demo 05-Qwen2.5-7B-Instruct Lora .ipynb, the last cell of the notebook:
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
fails with the following error:
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\TensorCompare.cu:110: block: [0,0,0], thread: [0,0,0] Assertion input[0] != 0 failed.
Traceback (most recent call last):
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\IPython\core\interactiveshell.py", line 2932, in safe_execfile
    py3compat.execfile(
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\IPython\utils\py3compat.py", line 55, in execfile
    exec(compiler(f.read(), fname, "exec"), glob, loc)
  File "P:\prjs\learn_transformers\tmp.py", line 28, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\peft\peft_model.py", line 1704, in generate
    outputs = self.base_model.generate(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\transformers\generation\utils.py", line 1989, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "P:\envs\anaconda2024\envs\py312transformers\Lib\site-packages\transformers\generation\utils.py", line 2969, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Is there any possible solution for this?
Environment: PyTorch 2.5.0, transformers 4.43.3
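
As the error message itself suggests, one way to localize the failure is to re-run with CUDA_LAUNCH_BLOCKING=1 so CUDA errors are reported synchronously and the traceback points at the kernel that actually asserts. Below is a minimal sketch of that; it assumes the same model, inputs, and gen_kwargs already defined earlier in the notebook, and the environment variable must be set before torch initializes CUDA.

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized; makes kernel errors synchronous

import torch

with torch.no_grad():
    # same call as in the notebook; with blocking launches the traceback
    # should point at the operation that triggers the device-side assertion
    outputs = model.generate(**inputs, **gen_kwargs)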