Issue with pull-llama script, receiving error on running docker compose command #1

Open
arpy8 opened this issue Oct 27, 2024 · 0 comments

arpy8 commented Oct 27, 2024

I'm receiving the following error when running the docker compose up --build command. Can you please help me with this?

[+] Running 2/0
 ✔ Container ollama-docker-fastapi-web-1     Created                                                                                            0.0s 
 ✔ Container ollama-docker-fastapi-ollama-1  Created                                                                                            0.0s 
Attaching to ollama-1, web-1
ollama-1  | /pull-llama3.sh: line 2: $'\r': command not found
ollama-1  | /pull-llama3.sh: line 3: $'\r': command not found
ollama-1  | /pull-llama3.sh: line 4: $'\r': command not found                                                                                        
ollama-1  | /pull-llama3.sh: line 6: $'\r': command not found                                                                                        
ollama-1  | sleep: invalid time interval '5\r'                                                                                                       
ollama-1  | Try 'sleep --help' for more information.
ollama-1  | /pull-llama3.sh: line 8: $'\r': command not found                                                                                        
ollama-1  | Pulling llama3 model                                                                                                                     
ollama-1  | 2024/10/27 19:50:47 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama-1  | time=2024-10-27T19:50:47.261Z level=INFO source=images.go:754 msg="total blobs: 0"
ollama-1  | time=2024-10-27T19:50:47.262Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"                                        
ollama-1  | time=2024-10-27T19:50:47.262Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"                            
ollama-1  | time=2024-10-27T19:50:47.263Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"                                                                                                                                                   
ollama-1  | time=2024-10-27T19:50:47.264Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
ollama-1  | time=2024-10-27T19:50:47.266Z level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"                                      
ollama-1  | time=2024-10-27T19:50:47.266Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="7.6 GiB" available="6.6 GiB"                                                                                                          
ollama-1  | [GIN] 2024/10/27 - 19:50:47 | 200 |      83.954µs |       127.0.0.1 | HEAD     "/"
ollama-1  | [GIN] 2024/10/27 - 19:50:47 | 400 |     317.992µs |       127.0.0.1 | POST     "/api/pull"                                               
ollama-1  |                                                                                                                                          
ollama-1  | Error: invalid model name
ollama-1  | /pull-llama3.sh: line 11: $'\r': command not found                                                                                       
ollama-1  | /pull-llama3.sh: line 12: wait: `8\r': not a pid or valid job spec
ollama-1 exited with code 1
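
In case it helps with diagnosis: the repeated $'\r': command not found messages (and the '5\r' sleep interval) suggest pull-llama3.sh has Windows CRLF line endings, so the shell inside the ollama container treats the trailing carriage return as part of every command; the "invalid model name" from /api/pull is most likely the same stray \r appended to the model name. A minimal sketch of how the line endings could be checked and stripped locally before rebuilding, assuming the script sits next to the compose file and is copied into the image at build time (dos2unix would work too, if installed):

# Report the line-ending style; a CRLF file shows "with CRLF line terminators"
file pull-llama3.sh

# Strip the trailing carriage returns in place (GNU sed; on macOS: sed -i '' 's/\r$//' pull-llama3.sh)
sed -i 's/\r$//' pull-llama3.sh

# Rebuild so the corrected script ends up in the ollama container
docker compose up --build

Adding a .gitattributes rule such as *.sh text eol=lf should also keep Git from converting the script back to CRLF on Windows checkouts.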

Thanks
