How to run setup commands for a non-dockerized app inside a Docker container?

In this GitHub repo, they explain how to make Ollama compatible with my video card:

When using the ollama/ollama:rocm Docker image, I don’t know how to execute these commands inside the container, or how to test whether they worked.

@rimelek @rimelek2, do you know? I wanted to watch your videos, but YouTube thinks I am a bot. Are there any other videos not on YouTube? I am not sure how to force a command into a Docker image.

The question is not clear. What exactly do you need, and where are you stuck?

Nice that you even found my test account, but I don’t respond sooner just because I’m mentioned; otherwise everyone would mention me hoping for a quicker response. I react when I have time and when I have something to say about the topic, like I did here:

As I wrote there, the description you found is not for Docker. AMD has documentation about using GPUs with Docker, which I shared in the other topic.

I am the person who posted this, but I lost the account email and password when I accidentally reset my computer.

There is a GitHub page showing how to get the amdgpu working in Ollama: “Ollama could run the iGPU 780M of AMD Ryzen CPU at Linux base on ROCm. There only has a little extra settings than Radeon dGPU like RX7000 series.”

I am trying to follow this guide, but I am running Ollama in Docker using the ollama/ollama:rocm image.

I am very new to Docker and don’t have a strong coding background, but I am still trying to figure this out.

I saw on this forum that there are ways to run commands within Docker itself, but it doesn’t seem like Docker apps have terminals. I have a whale application (Docker Desktop, I think) that lets me see the images that are running in containers.
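
From what I have read since, the way to get a terminal inside a running container seems to be docker exec. A minimal sketch of what I think people mean, assuming the container was started with --name ollama:

```bash
# List running containers to find the name or ID of the ollama one
docker ps

# Open an interactive shell inside the running container
# (the rocm image is Ubuntu-based, so bash should be available)
docker exec -it ollama bash

# Or run a single command without opening a shell, e.g. to check
# whether the GPU devices were actually passed through
docker exec ollama ls -l /dev/kfd /dev/dri
```

Though if I understand correctly, changes made this way only last as long as the container; anything permanent has to go into the image or the docker run flags.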

When I run ollama/ollama:rocm, I get all sorts of errors. It runs, but it uses the CPU and is too slow. Ollama runs in a container as the backend for other Docker images from GitHub that I am trying to experiment with.

I know rimelek has videos on YouTube about adding commands to Docker. I could try watching those (YouTube did ban me at first), but my understanding of Docker is so low that even if I watched all those videos, I might not understand how to approach this problem to get the best outcome.

Rather than trial and error, I thought it would be better to ask people more knowledgeable than me. There may not even be a solution. I am not that smart with computers and am still learning.

Sorry for finding your test account; I didn’t know what was what. Sometimes I have more than one account because I forget my password and have to make a new one. I like computers but am slow with this stuff.

I am using Pop!_OS and kept getting errors when trying to install amdgpu-dkms.
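
I am not sure the dkms errors even matter; from what I have read, Pop!_OS ships the amdgpu driver in its kernel, so amdgpu-dkms may not be needed at all. A quick check I found for the host (outside Docker), in case it helps anyone:

```bash
# Check that the in-kernel amdgpu driver is loaded on the host
lsmod | grep amdgpu

# This version file only exists with AMD's packaged (dkms) driver;
# the in-kernel driver works without it
cat /sys/module/amdgpu/version 2>/dev/null || echo "no version file (in-kernel driver)"

# The devices passed to the container have to exist on the host first
ls -l /dev/kfd /dev/dri
```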

I am at the part where I run the terminal command and I still get errors:

docker run --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined ollama/ollama:rocm

> 2024/10/12 20:20:07 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
> time=2024-10-12T20:20:07.638Z level=INFO source=images.go:753 msg="total blobs: 0"
> time=2024-10-12T20:20:07.638Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
> time=2024-10-12T20:20:07.638Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
> time=2024-10-12T20:20:07.639Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm_v60102 cpu cpu_avx cpu_avx2]"
> time=2024-10-12T20:20:07.639Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
> time=2024-10-12T20:20:07.640Z level=WARN source=amd_linux.go:60 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
> time=2024-10-12T20:20:07.643Z level=WARN source=amd_linux.go:341 msg="amdgpu is not supported" gpu=0 gpu_type=gfx1103 library=/usr/lib/ollama supported_types="[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
> time=2024-10-12T20:20:07.643Z level=WARN source=amd_linux.go:343 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage"
> time=2024-10-12T20:20:07.643Z level=INFO source=amd_linux.go:361 msg="no compatible amdgpu devices detected"

> time=2024-10-12T20:20:07.645Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
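
Based on the HSA_OVERRIDE_GFX_VERSION link in that warning, I think the missing piece might be passing that override to the container as an environment variable, since gfx1103 (the 780M) is not in the supported list but gfx1102 is. A sketch of what I plan to try, assuming 11.0.2 is the right value for the 780M (some guides suggest 11.0.0 instead):

```bash
# Same run command as before, plus the override from ollama's gpu.md docs;
# HSA_OVERRIDE_GFX_VERSION=11.0.2 should make ROCm treat the gfx1103 iGPU
# as gfx1102, which is on the supported list in the log above
docker run -e HSA_OVERRIDE_GFX_VERSION=11.0.2 \
  --device /dev/kfd --device /dev/dri \
  --security-opt seccomp=unconfined \
  ollama/ollama:rocm
```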

ROCm 6.2 was released, but this may not help.

I am just frustrated because GPT4All has the option to select the AMD GPU via Vulkan, and the output is a good speed; it’s not super fast, but it’s not slow.

Ollama in Docker using the CPU is slow, probably 3-4 tokens/sec.
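
For reference, this is how I have been confirming it is running on the CPU, again assuming the container was started with --name ollama:

```bash
# The startup log shows whether a GPU was detected
docker logs ollama 2>&1 | grep -i gpu

# With a model loaded, ollama ps shows "100% CPU" vs "100% GPU"
# in the PROCESSOR column
docker exec ollama ollama ps
```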