llama3.2 on iPhone 16 generates repeated, bad responses #7156
Labels
- bug (Something isn't working)
- module: examples (Issues related to demos under examples directory)
- need-user-input (The issue needs more information from the reporter before moving forward)
🐛 Describe the bug
Running llama3.2 on an iPhone 16 results in repeated, bad responses, making conversation impossible.
Versions
iPhone: 16
OS: iOS 18.1
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.10.15 (main, Sep 7 2024, 00:20:06) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-15.0.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+6a085ff
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.21.3
[pip3] torch==2.2.2
[pip3] torchao==0.7.0+git75d06933
[pip3] torchaudio==2.2.2
[pip3] torchsr==1.0.4
[pip3] torchvision==0.17.2
[conda] executorch 0.4.0a0+6a085ff pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 2.1.3 pypi_0 pypi
[conda] numpydoc 1.7.0 py312hecd8cb5_0 defaults
[conda] torch 2.2.2 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi