Ollama ROCm for AMDGPU is not recognizing GPU

Hi, I'm a Pop!_OS user, so I'm not sure I'm in the right place. I'm having trouble running something.

I chose Pop!_OS over regular Ubuntu because I hoped the video drivers for my GPU would work better for gaming, programming, and science. I have an AMD GPU.

I am trying to run Ollama in a Docker configuration so that it uses the GPU, and it absolutely won't work. I am able to run GPT4All with Vulkan drivers, and its text generation is fast.

However, running Ollama in Docker is helpful for various programming and experimental applications. I have no idea what I am doing wrong, because there are so many guides on what to do. I installed the amdgpu.deb package from the AMD website (which added a repo), and then installed the latest ROCm and amdgpu drivers. I am also running an LLM ROCm Docker image from AMD, although I am not sure whether that helps or is even needed, and I don't really understand what it does.
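Before debugging the container itself, it may help to confirm the host-side driver state; a quick sanity check (these are the standard paths ROCm and Ollama look at, not anything specific to this machine):

```shell
# The Ollama log below complains this version file is missing on the host,
# which usually means the amdgpu kernel module (or the DKMS driver) isn't loaded
cat /sys/module/amdgpu/version 2>/dev/null || echo "amdgpu version file missing"

# These are the device nodes Docker needs to pass through for ROCm
ls -l /dev/kfd /dev/dri 2>/dev/null || echo "ROCm device nodes missing"

# Your user typically needs to be in the render and/or video groups
groups
```

If `/dev/kfd` is missing on the host, no amount of Docker configuration will make the container see the GPU.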

When I run Docker with ollama/ollama:rocm, it says it does not recognize my graphics card.
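For reference, the usual way to start the ROCm image (per Ollama's Docker instructions) looks roughly like this; the `--device` flags pass the GPU device nodes into the container, and without them the container can only see the CPU:

```shell
# Pass /dev/kfd (ROCm compute) and /dev/dri (graphics) into the container
docker run -d \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

If the command used differs from this (e.g. the `--device` flags are missing), that alone would explain the container falling back to CPU.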

I’m not sure where to get help.

Logs:

(Error: I can't post the logs, since apparently they contain links. What an annoying rule for new users trying to post logs.)

I have an AMD® Ryzen 7 8840U w/ Radeon 780M Graphics × 16 and AMD® Radeon Graphics.

I could add an external GPU at some point, but that's expensive and a hassle; I'd rather not if I can get this to work. The speed in GPT4All with the Vulkan driver is acceptable. Ollama is clearly generating on the CPU, judging by the slow output: it's about 1/10th the speed of Vulkan generation in GPT4All.

```
2024/10/11 11:30:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-11T11:30:20.215Z level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-10-11T11:30:20.215Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-11T11:30:20.215Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
time=2024-10-11T11:30:20.216Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm_v60102]"
time=2024-10-11T11:30:20.216Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-11T11:30:20.218Z level=WARN source=amd_linux.go:60 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-10-11T11:30:20.220Z level=WARN source=amd_linux.go:341 msg="amdgpu is not supported" gpu=0 gpu_type=gfx1103 library=/usr/lib/ollama supported_types="[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
time=2024-10-11T11:30:20.220Z level=WARN source=amd_linux.go:343 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-10-11T11:30:20.220Z level=INFO source=amd_linux.go:361 msg="no compatible amdgpu devices detected"
```

There is a GitHub issue open about it, but I don't know how to implement it.
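The two WARN lines in the log are the actual failure: gfx1103 (the Radeon 780M) is not in Ollama's supported list, and the log points at the `HSA_OVERRIDE_GFX_VERSION` override from Ollama's gpu.md docs. A sketch of applying that override when starting the container (the value `11.0.2`, i.e. presenting the chip as gfx1102, is the one commonly reported for the 780M; treat it as an assumption, not something confirmed in this thread):

```shell
# Override the reported GFX version so the ROCm runner accepts the 780M (gfx1103)
docker run -d \
  --device /dev/kfd --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.2 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

Note this only matters once the earlier warning about the missing `/sys/module/amdgpu/version` file is resolved on the host; the override cannot help if the driver itself isn't loaded.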

So just format the post using code blocks.

That is not the same issue, and its description is not about Docker. I have never tried AMD GPUs with Docker. You can find a guide for Nvidia here: Enable GPU support | Docker Docs

For ROCm, I found this: