Docker@v23.0.4+incompatible with Ubuntu 22.04 on arm64?

Hello everyone.

I've just installed Ubuntu 22.04 (for arm64) on my Jetson Nano. I'm trying to compile and build Go and Docker from source, because Docker stopped working on 22.04, although it works on Ubuntu 18.04 and 20.04. The kernel I use is always the same, so it isn't the kernel's fault that Docker does not work. I suppose there is some incompatibility with a component present only on Ubuntu 22.04 and not on 18.04 or 20.04. The error is the following:

# docker images

REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
hello-world   latest    46331d942d63   13 months ago   9.14kB

# docker run hello-world

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented: unknown.
ERRO[0004] error waiting for container: context canceled

I don't know why it happens.
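
For reference, a quick way to see which cgroup hierarchy is mounted and whether the kernel has the cgroup/BPF options that runc is complaining about (a diagnostic sketch, assuming the kernel exposes its config via /proc/config.gz, i.e. IKCONFIG_PROC=y):

$ stat -fc %T /sys/fs/cgroup/        # cgroup2fs = cgroup v2 (unified), tmpfs = cgroup v1
$ zgrep -E 'CONFIG_CGROUP_BPF|CONFIG_BPF_SYSCALL' /proc/config.gz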

I've found this tutorial:

and I've started to follow it, saving every command I issued:

$ cd /home/marietto

$ wget https://dl.google.com/go/go1.20.3.linux-arm64.tar.gz
$ tar -xf go1.20.3.linux-arm64.tar.gz
$ mv go go1.20.3

$ nano /home/marietto/.profile

PATH="$HOME/bin:$HOME/.local/bin:$PATH"
GOROOT_BOOTSTRAP="$HOME/go1.20.3" export PATH export GOROOT_BOOTSTRAP

$ source /home/marietto/.profile
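
At this point it is worth confirming that the bootstrap toolchain is actually reachable through the new variable, so that make.bash finds it later (a quick sanity check):

$ "$GOROOT_BOOTSTRAP/bin/go" version
go version go1.20.3 linux/arm64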

$ git clone https://go.googlesource.com/go go_git

Cloning into 'go_git'...
remote: Finding sources: 100% (10/10)
remote: Total 570024 (delta 454760), reused 570018 (delta 454760)
Receiving objects: 100% (570024/570024), 333.21 MiB | 6.12 MiB/s, done.
Resolving deltas: 100% (454760/454760), done.
Updating files: 100% (12328/12328), done.

$ cd go_git

$ git branch -a

* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/release-branch.go1
  remotes/origin/release-branch.go1.1
  remotes/origin/release-branch.go1.10
  remotes/origin/release-branch.go1.11
  remotes/origin/release-branch.go1.12
  remotes/origin/release-branch.go1.13
  remotes/origin/release-branch.go1.14
  remotes/origin/release-branch.go1.15
  remotes/origin/release-branch.go1.16
  remotes/origin/release-branch.go1.17
  remotes/origin/release-branch.go1.18
  remotes/origin/release-branch.go1.19
  remotes/origin/release-branch.go1.2
  remotes/origin/release-branch.go1.20
  remotes/origin/release-branch.go1.3
  remotes/origin/release-branch.go1.4
  remotes/origin/release-branch.go1.5
  remotes/origin/release-branch.go1.6
  remotes/origin/release-branch.go1.7
  remotes/origin/release-branch.go1.8
  remotes/origin/release-branch.go1.9

$ cd /home/marietto/go_git/src

$ bash make.bash

Building Go cmd/dist using /home/marietto/go1.20.3. (go1.20.3 linux/arm64)
Building Go toolchain1 using /home/marietto/go1.20.3. 
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1. 
Building Go toolchain2 using go_bootstrap and Go toolchain1. 
Building Go toolchain3 using go_bootstrap and Go toolchain2. 
Building packages and commands for linux/arm64.

Installed Go for linux/arm64 in /home/marietto/go_git
Installed commands in /home/marietto/go_git/bin

marietto@marietto-nano:~/go_git/src$ sudo nano /home/marietto/.profile

GOROOT_BOOTSTRAP="/home/marietto/go1.20.3"
GOROOT=$HOME/go_git GOPATH=$HOME/docker_build PATH="$HOME/bin:$HOME/.local/bin:$PATH:$GOROOT/bin:$GOPATH/bin"
export PATH
export GOROOT_BOOTSTRAP export GOROOT export GOPATH

$ go version
go version go1.20.3 linux/arm64
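
To make sure the go on PATH is really the freshly built toolchain and not the bootstrap copy, the resolved binary and its GOROOT can be checked (a quick sanity check):

$ which go
$ go env GOROOT GOPATH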

I don't understand where the real error is. According to the tutorial, after issuing this command:

$ go get -d github.com/docker/docker

I should have had this path :

~/docker_build/src/github.com/docker/docker/hack/dockerfile/install

instead, I have this one:

~/docker_build/pkg/mod/github.com/docker/docker@v23.0.4+incompatible/hack/dockerfile/install

I don't know whether that is good or not, since I read "docker@v23.0.4+incompatible". Anyway, I issued the next commands to see what would happen.
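
For reference, the pkg/mod/github.com/docker/docker@v23.0.4+incompatible path just means go get ran in module mode, so the sources landed in the module cache instead of $GOPATH/src; the "+incompatible" suffix only marks a release >= v2 that ships without a go.mod file, it is not an error by itself. To reproduce the tutorial's $GOPATH/src layout, GOPATH mode can still be forced (a sketch, assuming the Go release used here still honours GO111MODULE=off):

$ GO111MODULE=off go get -d github.com/docker/docker
$ ls "$GOPATH/src/github.com/docker/docker/hack/dockerfile/install"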

Where the tutorial's author says: "Note: containerd by default installs as static in the install script. Due to that failing (most likely it needs musl to properly compile statically) I instead decided to install it as dynamic, meaning it needed to be one-offed."

Following the tutorial, I created a script called compile_docker_utils.sh, made it executable (+x) and ran it:

compile_docker_utils.sh :

PREFIX="$HOME/docker_utils" ./install.sh containerd dynamic 
for package in "runc vndr" do  PREFIX="$HOME/docker_utils" ./install.sh $package done

This is what happened :

+ RM_GOPATH=0
TMP_GOPATH=
: /home/marietto/docker_utils
'[' -z '' ']'
++ mktemp -d
export GOPATH=/tmp/tmp.ZhgEkEF3Ly
GOPATH=/tmp/tmp.ZhgEkEF3Ly
RM_GOPATH=1
case "$(go env GOARCH)" in
++ go env GOARCH
export GO_BUILDMODE=-buildmode=pie
GO_BUILDMODE=-buildmode=pie
++ dirname ./install.sh
dir=.
bin=containerd
shift
'[' '!' -f ./containerd.installer ']'
. ./containerd.installer
++ set -e
++ : v1.6.20
install_containerd dynamic
echo 'Install containerd version v1.6.20'
Install containerd version v1.6.20
git clone https://github.com/containerd/containerd.git /tmp/tmp.ZhgEkEF3Ly/src/github.com/containerd/containerd
Cloning into '/tmp/tmp.ZhgEkEF3Ly/src/github.com/containerd/containerd'...
remote: Enumerating objects: 111847, done.
remote: Counting objects: 100% (288/288), done.
remote: Compressing objects: 100% (159/159), done.
remote: Total 111847 (delta 147), reused 237 (delta 126), pack-reused 111559
Receiving objects: 100% (111847/111847), 94.08 MiB | 6.21 MiB/s, done.
Resolving deltas: 100% (70427/70427), done.
Updating files: 100% (5268/5268), done.
cd /tmp/tmp.ZhgEkEF3Ly/src/github.com/containerd/containerd
git checkout -q v1.6.20
export 'BUILDTAGS=netgo osusergo static_build'
BUILDTAGS='netgo osusergo static_build'
export EXTRA_FLAGS=-buildmode=pie
EXTRA_FLAGS=-buildmode=pie
export 'EXTRA_LDFLAGS=-extldflags "-fno-PIC -static"'
EXTRA_LDFLAGS='-extldflags "-fno-PIC -static"'
'[' dynamic = dynamic ']'
export BUILDTAGS=
BUILDTAGS=
export EXTRA_FLAGS=
EXTRA_FLAGS=
export EXTRA_LDFLAGS=
EXTRA_LDFLAGS=
make
bin/ctr
go: no such tool "compile"
make: *** [Makefile:249: bin/ctr] Error 2
RM_GOPATH=0
TMP_GOPATH=
: /home/marietto/docker_utils
'[' -z '' ']'
++ mktemp -d
export GOPATH=/tmp/tmp.exqHoLQLtJ
GOPATH=/tmp/tmp.exqHoLQLtJ
RM_GOPATH=1
case "$(go env GOARCH)" in
++ go env GOARCH
export GO_BUILDMODE=-buildmode=pie
GO_BUILDMODE=-buildmode=pie
++ dirname ./install.sh
dir=.
bin=runc
shift
'[' '!' -f ./runc.installer ']'
. ./runc.installer
++ set -e
++ : v1.1.5
install_runc proxy tini tomlv vndr
RUNC_BUILDTAGS=seccomp
echo 'Install runc version v1.1.5 (build tags: seccomp)'
Install runc version v1.1.5 (build tags: seccomp)
git clone https://github.com/opencontainers/runc.git /tmp/tmp.exqHoLQLtJ/src/github.com/opencontainers/runc
Cloning into '/tmp/tmp.exqHoLQLtJ/src/github.com/opencontainers/runc'...
remote: Enumerating objects: 38294, done.
remote: Counting objects: 100% (77/77), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 38294 (delta 23), reused 59 (delta 18), pack-reused 38217
Receiving objects: 100% (38294/38294), 18.24 MiB | 4.57 MiB/s, done.
Resolving deltas: 100% (25163/25163), done.
cd /tmp/tmp.exqHoLQLtJ/src/github.com/opencontainers/runc
git checkout -q v1.1.5
'[' -z proxy ']'
target=proxy
make BUILDTAGS=seccomp proxy
make: *** No rule to make target 'proxy'.  Stop.

The same happens with tini, tomlv and vndr. Are these components fundamental to running Docker? Why won't they compile? Is there another way to install them?
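
As an aside, the 'go: no such tool "compile"' failure above is usually a toolchain problem rather than a containerd one: the go command cannot find the compiler under its own GOROOT. It can be narrowed down like this (a diagnostic sketch):

$ go env GOROOT
$ ls "$(go env GOROOT)/pkg/tool/$(go env GOOS)_$(go env GOARCH)/"    # "compile" should be listed here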

I went on the Golang forum and one developer wrote :

"Please do not install Docker as a snap, as this fundamentally breaks several parts of Docker. This is a well known limitation of snap. Simply install from the Docker repository, instructions are on the Docker website.

Building Docker is complicated, as it predates Go modules and still has no fully working mod (there's actually a vendoring mod that somehow gets swapped in). If anything, build from moby/moby as this is now the official source repo for building Moby (Docker). This is complicated and the documentation warns any ambitious adventurer to expect a very bumpy ride. Well, they phrase it differently."

So, OK. I have removed the Docker snap; at the moment I keep Docker installed only via the Ubuntu packages, and it still does not work. He suggests to "install Docker from the repository, instructions are on the Docker website". Can someone point me to those instructions? I haven't been able to find them. I'll take any suggestion you can give me. Thanks.

I have used Docker on Ubuntu 22.04 from the beginning (of Ubuntu 22.04) and it works perfectly (I have also used it on Arm on macOS inside a multipass VM). I fully agree with the recommendations you got from the developer on the Golang forum.

Finding the official installation instructions is actually pretty easy. Just go to Google and search for

docker install on ubuntu 22.04

The documentation is “just” the second result for me, but second is still good.

Note: I am going to move the topic from "Docker Desktop for Linux", as it is clear that your question has nothing to do with Docker Desktop.

Nope. Docker does not work on 22.04 installed on the Jetson Nano, but it works on Ubuntu 18.04 and 20.04. I used the same kernel version for Ubuntu 18.04, 20.04 and 22.04 (because kernel 5.x is very incompatible with the NVIDIA tools):

Linux marietto-nano 4.9.299+ #0 SMP PREEMPT Wed Mar 29 14:22:17 CEST 2023 aarch64 aarch64 aarch64 GNU/Linux

So I suppose there is some incompatibility with components present only on Ubuntu 22.04 and not on Ubuntu 18.04 and 20.04. The error is the following:

# docker images

REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
hello-world   latest    46331d942d63   13 months ago   9.14kB

# docker run hello-world

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented: unknown.
ERRO[0004] error waiting for container: context canceled

I don't know what to do. I even tried to upgrade the NVIDIA container packages following this guide:

https://www.server-world.info/en/note?os=Ubuntu_22.04&p=nvidia&f=2

So this is what I did:

# curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
OK

# curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu22.04/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list

# apt upgrade

Before the upgrade I had these versions :

nvidia-docker2/stable,now 2.8.0-1 all
nvidia-container-toolkit/stable,now 1.7.0-1 arm64

after :

nvidia-docker2/bionic 2.13.0-1 all
nvidia-container-toolkit/bionic 1.13.1-1 arm64
nvidia-container-toolkit-base/bionic 1.13.1-1 arm64

They have been upgraded, but I still see that those packages come from bionic, even though I've used the jammy (22.04) repository list:

# curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu22.04/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list
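
Which repository a given package actually comes from can be double-checked with apt (a quick diagnostic):

# apt-cache policy nvidia-docker2 nvidia-container-toolkit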

This is the content of the file /etc/docker/daemon.json :

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
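
After editing /etc/docker/daemon.json the daemon has to be restarted for the runtime entry to be picked up, and the registered runtimes can then be listed (a quick check):

# systemctl restart docker
# docker info | grep -i runtime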

I'm using JetPack 4.6.3 / L4T 32.7.3. The runc version is:

# runc --version

runc version 1.1.4-0ubuntu1~22.04.1
spec: 1.0.2-dev
go: go1.18.1
libseccomp: 2.5.3

The Docker version is:

# docker --version
Docker version 20.10.21, build 20.10.21-0ubuntu1~22.04.3

Anyway, even with the packages upgraded, the error hasn't been fixed.

As a further experiment, I purged all the packages installed for Ubuntu 22.04 and installed the versions of the same packages that work on Ubuntu 20.04. They are called like this:

cgroup-tools_0.41-10_arm64.deb
docker.io_20.10.21-0ubuntu1~20.04.1_arm64.deb
containerd_1.6.12-0ubuntu1~20.04.1_arm64.deb
runc_1.1.4-0ubuntu1~20.04.1_arm64.deb
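
A sketch of how such locally downloaded packages can be installed, assuming they all sit in the current directory:

# apt install ./cgroup-tools_0.41-10_arm64.deb ./docker.io_20.10.21-0ubuntu1~20.04.1_arm64.deb ./containerd_1.6.12-0ubuntu1~20.04.1_arm64.deb ./runc_1.1.4-0ubuntu1~20.04.1_arm64.deb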

but I got the exact same error as before:

root@marietto-nano:/home/marietto# docker run hello-world

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: bpf_prog_query(BPF_CGROUP_DEVICE) failed: function not implemented: unknown.
ERRO[0000] error waiting for container: context canceled

This is the output of the “check-config.sh” script :

root@marietto-nano:/home/marietto/Scaricati# ./check-config.sh

info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: cgroupv2
  Controllers:
  - cpu: missing
  - cpuset: missing
  - io: available
  - memory: available
  - pids: available
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled

In the logs I see these errors:

cgroup: cgroup2: unknown option "nsdelegate,memory_recursiveprot"
cgroup: cgroup2: unknown option "nsdelegate"
cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
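
The missing cpu and cpuset controllers can also be seen directly on the unified hierarchy while cgroup v2 is mounted (a quick check; the list should match what check-config.sh reported):

$ cat /sys/fs/cgroup/cgroup.controllers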

Furthermore, following this post:

I tried to enable some kernel options related to cgroups and BPF, as follows:

root@marietto-nano:/linux-tegra-4.9# make -j2

scripts/kconfig/conf --silentoldconfig Kconfig
*
* Restart config...
*
*
* Control Group support
*
Control Group support (CGROUPS) [Y/n/?] y
Example debug cgroup subsystem (CGROUP_DEBUG) [Y/n/?] y
Freezer cgroup subsystem (CGROUP_FREEZER) [Y/n/?] y
PIDs cgroup subsystem (CGROUP_PIDS) [Y/n/?] y
Device controller for cgroups (CGROUP_DEVICE) [Y/n/?] y
Cpuset support (CPUSETS) [Y/n/?] y
Include legacy /proc/<pid>/cpuset file (PROC_PID_CPUSET) [Y/n] y
Simple CPU accounting cgroup subsystem (CGROUP_CPUACCT) [Y/n/?] y
Memory controller (MEMCG) [Y/n/?] y
Swap controller (MEMCG_SWAP) [Y/n/?] y
Swap controller enabled by default (MEMCG_SWAP_ENABLED) [Y/n/?] y
IO controller (BLK_CGROUP) [Y/n/?] y
IO controller debugging (DEBUG_BLK_CGROUP) [Y/n/?] y
PIDs controller (CGROUP_PIDS) [Y/n/?] y
Freezer controller (CGROUP_FREEZER) [Y/n/?] y
HugeTLB controller (CGROUP_HUGETLB) [Y/n/?] y
Cpuset controller (CPUSETS) [Y/n/?] y
Include legacy /proc/<pid>/cpuset file (PROC_PID_CPUSET) [Y/n] y
Device controller (CGROUP_DEVICE) [Y/n/?] y
Simple CPU accounting controller (CGROUP_CPUACCT) [Y/n/?] y
Perf controller (CGROUP_PERF) [Y/n/?] y
Support for eBPF programs attached to cgroups (CGROUP_BPF) [N/y/?] (NEW) y
Example controller (CGROUP_DEBUG) [Y/n/?] y
*
* General setup
*
Cross-compiler tool prefix (CROSS_COMPILE) []
Compile also drivers which will not load (COMPILE_TEST) [N/y/?] n
Local version - append to kernel release (LOCALVERSION) []
Automatically append version information to the version string (LOCALVERSION_AUTO) [N/y/?] n
Default hostname (DEFAULT_HOSTNAME) [(none)] (none)
Support for paging of anonymous memory (swap) (SWAP) [Y/n/?] y
System V IPC (SYSVIPC) [Y/n/?] y
POSIX Message Queues (POSIX_MQUEUE) [Y/n/?] y
Enable process_vm_readv/writev syscalls (CROSS_MEMORY_ATTACH) [Y/n/?] y
open by fhandle syscalls (FHANDLE) [Y/n/?] y
uselib syscall (USELIB) [N/y/?] n
Auditing support (AUDIT) [Y/n/?] y
Kernel .config support (IKCONFIG) [Y/n/m/?] y
Enable access to .config through /proc/config.gz (IKCONFIG_PROC) [Y/n/?] y
Kernel log buffer size (16 => 64KB, 17 => 128KB) (LOG_BUF_SHIFT) [15] 15
CPU kernel log buffer size contribution (13 => 8 KB, 17 => 128KB) (LOG_CPU_MAX_BUF_SHIFT) [15] 15
Temporary per-CPU printk log buffer size (12 => 4KB, 13 => 8KB) (PRINTK_SAFE_LOG_BUF_SHIFT) [13] 13
Checkpoint/restore support (CHECKPOINT_RESTORE) [N/y/?] n
Automatic process group scheduling (SCHED_AUTOGROUP) [N/y/?] n
Boosting for CFS tasks (EXPERIMENTAL) (SCHED_TUNE) [N/y/?] n
Default to enabling the Energy Aware Scheduler feature (DEFAULT_USE_ENERGY_AWARE) [N/y/?] n
Enable deprecated sysfs features to support old userspace tools (SYSFS_DEPRECATED) [N/y/?] n
Kernel->user space relay support (formerly relayfs) (RELAY) [Y/?] y
Initial RAM filesystem and RAM disk (initramfs/initrd) support (BLK_DEV_INITRD) [Y/n/?] y
Initramfs source file(s) (INITRAMFS_SOURCE) []
Support initial ramdisks compressed using gzip (RD_GZIP) [Y/n/?] y
Support initial ramdisks compressed using bzip2 (RD_BZIP2) [Y/n/?] y
Support initial ramdisks compressed using LZMA (RD_LZMA) [Y/n/?] y
Support initial ramdisks compressed using XZ (RD_XZ) [Y/n/?] y
Support initial ramdisks compressed using LZO (RD_LZO) [Y/n/?] y
Support initial ramdisks compressed using LZ4 (RD_LZ4) [Y/n/?] y
Compiler optimization level
> 1. Optimize for performance (CC_OPTIMIZE_FOR_PERFORMANCE)
  2. Optimize for size (CC_OPTIMIZE_FOR_SIZE)
choice[1-2]: 1
Enable bpf() system call (BPF_SYSCALL) [Y/n/?] y
Permanently enable BPF JIT and remove BPF interpreter (BPF_JIT_ALWAYS_ON) [N/y/?] (NEW) y
Use full shmem filesystem (SHMEM) [Y/n/?] y
Allow shmem to use all RAM (SHMEM_ALL_RAM) [N/y/?] n
Enable AIO support (AIO) [Y/n/?] y
Enable madvise/fadvise syscalls (ADVISE_SYSCALLS) [Y/n/?] y
Enable userfaultfd() system call (USERFAULTFD) [N/y/?] n
Enable PCI quirk workarounds (PCI_QUIRKS) [Y/n/?] y
Enable membarrier() system call (MEMBARRIER) [Y/n/?] y
Embedded system (EMBEDDED) [Y/n/?] y
Enable VM event counters for /proc/vmstat (VM_EVENT_COUNTERS) [Y/n/?] y
Enable SLUB debugging support (SLUB_DEBUG) [Y/n/?] y
Disable heap randomization (COMPAT_BRK) [N/y/?] n
Choose SLAB allocator
  1. SLAB (SLAB)
> 2. SLUB (Unqueued Allocator) (SLUB)
  3. SLOB (Simple Allocator) (SLOB)
choice[1-3?]: 2
SLAB freelist randomization (SLAB_FREELIST_RANDOM) [N/y/?] n
SLUB per cpu partial cache (SLUB_CPU_PARTIAL) [Y/n/?] y
Profiling support (PROFILING) [Y/n/?] y
Kprobes (KPROBES) [N/y/?] n
Optimize very unlikely/likely branches (JUMP_LABEL) [Y/n/?] y
Static key selftest (STATIC_KEYS_SELFTEST) [N/y/?] n
Stack Protector buffer overflow detection
> 1. None (CC_STACKPROTECTOR_NONE)
  2. Regular (CC_STACKPROTECTOR_REGULAR)
  3. Strong (CC_STACKPROTECTOR_STRONG)
choice[1-3?]: 1
Link-Time Optimization (LTO) (EXPERIMENTAL)
> 1. None (LTO_NONE)
  2. Use clang Link Time Optimization (LTO) (EXPERIMENTAL) (LTO_CLANG)
choice[1-2?]: 1
Number of bits to use for ASLR of mmap base address (ARCH_MMAP_RND_BITS) [18] 18
Number of bits to use for ASLR of mmap base address for compatible applications (ARCH_MMAP_RND_COMPAT_BITS) [16] 16

Unfortunately Docker still does not work: it gives the same error as before. What's missing? Thanks.


Ubuntu 22.04 uses cgroup v2 by default, which requires kernel 5.8 or greater.

For an older kernel, you will need to switch back to cgroup v1:

I just stumbled across it, and it looks like more than a hunch, but at the same time I have never tried to use cgroup v1 on Ubuntu 22.04.
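
On a standard Ubuntu 22.04 install this kernel parameter is usually added through GRUB (a sketch; boards that boot through extlinux keep their cmdline in /boot/extlinux/extlinux.conf instead). In /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=0"

then regenerate the boot config and reboot:

$ sudo update-grub
$ sudo reboot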


Thanks for the detailed description!

This was the information we were missing so far. Ubuntu 22.04 came out with kernel 5.15. I didn't know about the kernel requirement of cgroup v2, so thank you @meyay. Using an older kernel than the one the Ubuntu release shipped with could possibly cause other issues too, but containers depend on the kernel version even more.

I don't work with NVIDIA too often, but my dev laptop, which I use as a server, has an NVIDIA GPU and I installed the requirements using Ansible. We could start a new topic in "General Discussions" about NVIDIA as a related topic that affects the usability of Docker, and I can share how I install it. If it turns out that my solution is not good enough either, then I will learn something too. For example, I didn't know about the JetPack SDK.

Thanks to everyone. I've found the fix for the Docker problem: append the parameter "systemd.unified_cgroup_hierarchy=0" to the existing kernel cmdline, like this:

APPEND ${cbootargs} root=/dev/sda1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 systemd.unified_cgroup_hierarchy=0

and not add it as a separate APPEND line, as I did before:

       APPEND systemd.unified_cgroup_hierarchy=0
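
After a reboot, the cgroup hierarchy Docker actually uses can be verified (a quick check; with the parameter above it should report "Cgroup Version: 1"):

$ docker info | grep -i cgroup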

So now Docker works on Ubuntu 22.04 installed on the Jetson Nano:

marietto@marietto-nano:~$ docker images

REPOSITORY                   TAG       IMAGE ID       CREATED         SIZE
nvcr.io/nvidia/l4t-jetpack   r35.3.1   ff2dd43d5687   2 weeks ago     9.77GB
hello-world                  latest    46331d942d63   13 months ago   9.14kB

marietto@marietto-nano:~$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

The idea here is to use an updated version of Ubuntu (22.04) for general use and a previous installation of Ubuntu (like 20.04) to run the applications which need to access the Jetson Nano's GPU.

I plan to try to install kernel 5.x on the Jetson Nano, so we can collaborate. Can we keep in contact? How? I don't come here often and I don't want to miss your experimentation.