
Llama vision build fails #718

Open
davidADSP opened this issue Nov 9, 2024 · 2 comments

@davidADSP
When building like this:

jetson-containers build llama-vision
-- L4T_VERSION=36.4.0
-- JETPACK_VERSION=6.1
-- CUDA_VERSION=12.6
-- PYTHON_VERSION=3.10
-- LSB_RELEASE=22.04 (jammy)

I get the following error at the last step of the build:

-- Building container llama-vision:r36.4.0-llama-vision

DOCKER_BUILDKIT=0 docker build --network=host --tag llama-vision:r36.4.0-llama-vision \
--file /home/jetson/code/Test/jetson-containers/packages/vlm/llama-vision/Dockerfile \
--build-arg BASE_IMAGE=llama-vision:r36.4.0-bitsandbytes \
/home/jetson/code/Test/jetson-containers/packages/vlm/llama-vision \
2>&1 | tee /home/jetson/code/Test/jetson-containers/logs/20241109_115523/build/llama-vision_r36.4.0-llama-vision.txt; exit ${PIPESTATUS[0]}

DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
            environment-variable.

Sending build context to Docker daemon  11.26kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> b4fefebbaacc
Step 3/5 : COPY *.whl /tmp/
COPY failed: no source files were specified
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jetson/code/Test/jetson-containers/jetson_containers/build.py", line 112, in <module>
    build_container(args.name, args.packages, args.base, args.build_flags, args.build_args, args.simulate, args.skip_tests, args.test_only, args.push, args.no_github_api, args.skip_packages)
  File "/home/jetson/code/Test/jetson-containers/jetson_containers/container.py", line 147, in build_container
    status = subprocess.run(cmd.replace(_NEWLINE_, ' '), executable='/bin/bash', shell=True, check=True)  
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'DOCKER_BUILDKIT=0 docker build --network=host --tag llama-vision:r36.4.0-llama-vision --file /home/jetson/code/Test/jetson-containers/packages/vlm/llama-vision/Dockerfile --build-arg BASE_IMAGE=llama-vision:r36.4.0-bitsandbytes /home/jetson/code/Test/jetson-containers/packages/vlm/llama-vision 2>&1 | tee /home/jetson/code/Test/jetson-containers/logs/20241109_115523/build/llama-vision_r36.4.0-llama-vision.txt; exit ${PIPESTATUS[0]}' returned non-zero exit status 1.

I have pulled and installed the latest version of the jetson-containers repo.
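
For context on the error itself: the classic builder's COPY fails with "no source files were specified" whenever every source pattern matches nothing, and this package's Dockerfile does COPY *.whl /tmp/ with no wheel present in the build context. A minimal local workaround sketch, assuming you only need the build to get past this step (and that the Dockerfile itself is not excluded by a .dockerignore), is to anchor the optional glob with a file that is always present in the context:

# Workaround sketch, not the upstream fix: COPY succeeds as long as at least
# one source matches, so pairing the optional *.whl glob with a file that
# always exists (here the Dockerfile itself) keeps the step from failing
# when no wheel files are in the build context.
COPY Dockerfile *.whl /tmp/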

@davidADSP (Author)

And as a follow-up question for @dusty-nv: is Llama 3.2 90B supported through ollama yet?

@dusty-nv (Owner)

@davidADSP that container was just for a special pre-release version of Transformers from before the model came out, which is why it copied the wheel from my drive. That support has since been upstreamed into Transformers. IIRC that container doesn't have llama.cpp or ollama, and you would need to check those projects to see whether they support it or not. And if it is supported, whether it fits on AGX Orin 64GB depends on how memory-efficient they are when loading the weights, etc. (anecdotally, I was able to run NVLM-72B on AGX Orin with load_in_4bit=True, but 90B is substantially larger).
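
Since that support is now in stock Transformers, here is a minimal sketch of loading Llama 3.2 Vision with the load_in_4bit path mentioned above (assuming a recent transformers and bitsandbytes are installed in the container; the model ID, image path, and prompt are illustrative, and the 90B variant uses the same API but needs far more memory):

# Minimal sketch, assuming transformers >= 4.45 and bitsandbytes are installed
# in the container; the model ID, image path, and prompt are illustrative.
from PIL import Image
from transformers import AutoProcessor, BitsAndBytesConfig, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # 90B: same API, far more memory

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # the flag mentioned above
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# One image + text turn through the chat template, then a short generation.
image = Image.open("test.jpg")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image briefly."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))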
