Issues Running Ollama Container Behind Proxy - No Error Logs Found
#20
by icemaro - opened
I'm encountering issues while trying to run an Ollama container behind a proxy. Here are the steps I've taken and the issues I've faced:
Creating an Image with the Certificate:

```
$ cat Dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```
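For completeness, the image is built from this Dockerfile along these lines (the `ollama-with-ca` tag matches the run command in the next step):

```bash
# Build the CA-aware image from the Dockerfile above
docker build -t ollama-with-ca .
```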
Starting a Container Using This Image with Proxy Variables Injected:
```
docker run -d \
  -e HTTPS_PROXY=http://x.x.x.x:3128 \
  -e HTTP_PROXY=http://x.x.x.x:3128 \
  -e http_proxy=http://x.x.x.x:3128 \
  -e https_proxy=http://x.x.x.x:3128 \
  -p 11434:11434 ollama-with-ca
```
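(As a sanity check, the proxy variables can be confirmed from the host; the container ID here is the one from the logs further down.)

```bash
# Verify the proxy variables are actually set in the container environment
docker exec 8405972b3d6b env | grep -i proxy
```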
Inside the Container:
- Ran `apt-get update` to confirm internet access and proper proxy functionality.
- Executed `ollama pull mistral` and `ollama run mistral:instruct`, but consistently encountered the error: "Error: something went wrong, please see the Ollama server logs for details."
- Container logs (`docker logs 8405972b3d6b`) showed no errors, only the following information:

```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194

2024/01/24 08:40:55 images.go:808: total blobs: 0
2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library libnvidia-ml.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library librocm_smi64.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 routes.go:953: no GPU detected
```
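(Side note: the `ollama` CLI is just a client for the server's local HTTP API, so hitting the pull endpoint directly, roughly as below, can sometimes return a more descriptive error body than the generic CLI message.)

```bash
# Ask the server to pull the model directly over its HTTP API
curl http://localhost:11434/api/pull -d '{"name": "mistral"}'
```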
Using Wget to Download the Model:
- Successfully downloaded "mistral-7b-instruct-v0.1.Q5_K_M.gguf" via `wget`.
- Created a simple Modelfile: `FROM /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf`
- Executed `ollama create mistralModel -f Modelfile`, resulting in the same error: "Error: something went wrong, please see the Ollama server logs for details."
- The logs from `docker logs 8405972b3d6b` again showed no error:

```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194

2024/01/24 08:40:55 images.go:808: total blobs: 0
2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
```
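Put together, the steps inside the container were roughly the following (the wget URL is a placeholder, as the real one is not shown above):

```bash
# Download the GGUF weights (placeholder URL, not the actual one used)
wget -O /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf "<model-download-url>"

# Minimal Modelfile pointing at the local GGUF file
cat > Modelfile <<'EOF'
FROM /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf
EOF

# Import the local model into Ollama -- this fails with the same generic error
ollama create mistralModel -f Modelfile
```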
When making an HTTP request to the Ollama server in my browser, I get an "Ollama is running" response.
I also found that even `ollama list` gives the same error ("Error: something went wrong, please see the ollama server logs for details") and still produces no logs.
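(For reference, `ollama list` also goes through the local HTTP API; the equivalent raw request would look roughly like this.)

```bash
# List locally available models via the HTTP API that `ollama list` uses
curl http://localhost:11434/api/tags
```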
I did not find any logs in the files where Ollama normally saves logs; the only logs are the Docker logs, and they contain nothing useful.
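(A possible way to get more output: in the container the server only writes to stdout, which is what `docker logs` shows, and if the Ollama version in the image supports it, setting `OLLAMA_DEBUG=1` should make that logging more verbose, e.g.:)

```bash
# Restart the container with debug logging enabled (if supported by this Ollama version),
# keeping the same proxy variables and port mapping as before
docker run -d \
  -e OLLAMA_DEBUG=1 \
  -e HTTPS_PROXY=http://x.x.x.x:3128 \
  -e HTTP_PROXY=http://x.x.x.x:3128 \
  -p 11434:11434 ollama-with-ca
```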