
I pulled the latest code, and the model is reporting errors everywhere #6955

Open
yangh0597 opened this issue Nov 19, 2024 · 5 comments
Labels
bug Something isn't working module: qnn Related to Qualcomm's QNN delegate

Comments

@yangh0597

🐛 Describe the bug

I pulled the latest code, and the model export is failing everywhere. Two days ago it was fine. It looks like a required change was not committed.

The command is:

 python -m examples.models.llama.export_llama --checkpoint "${MODEL_DIR}/consolidated.00.pth" -p "${MODEL_DIR}/params.json" -kv --disable_dynamic_shape --qnn --pt2e_quantize qnn_16a16w -d fp32 --num_sharding 4 --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' --output_name="Llama-Guard-3.2-1B-qnn_16a16w_s4.pte"

The error is:

Traceback (most recent call last):
  File "/opt/anaconda3/envs/et_qnn/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/envs/et_qnn/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/opt/executorch/examples/models/llama/export_llama.py", line 32, in <module>
    main()  # pragma: no cover
  File "/opt/executorch/examples/models/llama/export_llama.py", line 28, in main
    export_llama(args)
  File "/opt/executorch/examples/models/llama/export_llama_lib.py", line 508, in export_llama
    builder = _export_llama(args)
  File "/opt/executorch/examples/models/llama/export_llama_lib.py", line 783, in _export_llama
    from executorch.backends.qualcomm.utils.utils import canonicalize_program
ImportError: cannot import name 'canonicalize_program' from 'executorch.backends.qualcomm.utils.utils' (/opt/executorch/backends/qualcomm/utils/utils.py)

I checked /opt/executorch/backends/qualcomm/utils/utils.py; the canonicalize_program method does not exist there.

Versions

main

@metascroy metascroy added the module: qnn Related to Qualcomm's QNN delegate label Nov 19, 2024
@metascroy
Contributor

Tagged this with QNN.

@yangh0597 I noticed that the executorch modules are not being loaded from the anaconda environment (/opt/executorch/examples/models/llama/export_llama.py), whereas runpy is (/opt/anaconda3/envs/et_qnn/lib/python3.10/runpy.py). Did you install ET to your anaconda environment after pulling in the latest code?

@yangh0597
Author

> Tagged this with QNN.
>
> @yangh0597 I noticed that the executorch modules are not being loaded from the anaconda environment (/opt/executorch/examples/models/llama/export_llama.py), whereas runpy is (/opt/anaconda3/envs/et_qnn/lib/python3.10/runpy.py). Did you install ET to your anaconda environment after pulling in the latest code?

What is ET?
I followed this document: https://github.com/pytorch/executorch/blob/main/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md

@metascroy
Contributor

Yeah, it looks like canonicalize_program was removed from utils.py a couple days ago here: 4086509#diff-0439f6a7c1a3a3cfb222cd6409b6754f17a1ce782dd231de1d12bbf957d588f7L205

But it is still imported in the llama export here: https://github.com/pytorch/executorch/blob/main/examples/models/llama/export_llama_lib.py?lines=765

@haowhsu-quic, it looks like your PR #6657 broke llama export for QNN, can you have a look?

cc @cccclai

@metascroy metascroy added the bug Something isn't working label Nov 20, 2024
@haowhsu-quic
Collaborator

Hi @metascroy, sorry for the inconvenience. canonicalize_program was renamed to:

def update_spill_fill_size(

I wasn't aware of this call site when submitting the PR; I will open another one to fix it.

@scsonic

scsonic commented Nov 21, 2024

Me too. A temporary fix for me in utils.py:

def canonicalize_program(obj):
    # Re-expose the old name as an alias for the renamed function.
    update_spill_fill_size(obj)
