I’m running Docker on my Raspberry Pi 3 and exposing an Aeotec Z-Wave Gen5+ Stick to a Docker Container (openHAB 3.3) via docker-compose.
devices:
- /dev/ttyACM0:/dev/ttyACM0
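For context, the relevant part of my docker-compose.yml looks roughly like this (the image tag, service name, and volume paths are just for illustration; only the devices mapping is exactly as above):

version: "2.4"
services:
  openhab:
    image: openhab/openhab:3.3.0   # illustrative tag
    restart: unless-stopped
    devices:
      # pass the Aeotec Z-Wave Gen5+ stick through to the container
      - /dev/ttyACM0:/dev/ttyACM0
    volumes:
      - ./openhab/conf:/openhab/conf
      - ./openhab/userdata:/openhab/userdata
      - ./openhab/addons:/openhab/addons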
Sometimes this works great, and sometimes (after a Raspberry Pi reboot or a container restart) it doesn’t. Sometimes it comes back on its own, and sometimes it doesn’t.
Here’s what I believe is related, as reported in the logs:
openHAB log:
2022-06-29 15:27:08.643 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'zwave:serial_zstick:35e12a8479' changed from UNINITIALIZED (DISABLED) to INITIALIZING
2022-06-29 15:27:08.655 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'zwave:serial_zstick:35e12a8479' changed from INITIALIZING to OFFLINE (BRIDGE_OFFLINE): Controller is offline
==> /var/log/openhab/openhab.log <==
2022-06-29 15:27:08.590 [INFO ] [zwave.handler.ZWaveControllerHandler] - Attempting to add listener when controller is null
2022-06-29 15:27:13.659 [DEBUG] [ort.serial.internal.RxTxPortProvider] - No SerialPortIdentifier found for: /dev/ttyACM0
In Portainer, the container log contains the following entry (when the stick works), which looks odd to me:
RXTX Warning: Removing stale lock file. /var/lock/LCK..ttyACM0
The stick itself appears to be recognized properly by Raspberry Pi OS:
Jul 2 20:14:34 raspberrypi kernel: [ 4206.367128] usb 1-1.3: USB disconnect, device number 5
Jul 2 20:14:39 raspberrypi kernel: [ 4211.784142] usb 1-1.3: new full-speed USB device number 7 using dwc_otg
Jul 2 20:14:39 raspberrypi kernel: [ 4211.917443] usb 1-1.3: New USB device found, idVendor=0658, idProduct=0200, bcdDevice= 0.00
Jul 2 20:14:39 raspberrypi kernel: [ 4211.917485] usb 1-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
Jul 2 20:14:39 raspberrypi kernel: [ 4211.919956] cdc_acm 1-1.3:1.0: ttyACM0: USB ACM device
My Raspberry Pi OS syslog contains an entry which I believe could be related:
dockerd[564]: time="2022-05-27T09:45:13.620500774+02:00" level=warning msg="path in container /dev/ttyACM0 already exists in privileged mode" container=9410337d32adb2f10d4a49180971c7fafaea26cd9864dbb59d201d03919c8744
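As far as I understand, that warning refers to the privileged flag: with privileged: true the container already sees all host devices under /dev, so the explicit mapping is redundant and dockerd only warns about it. In compose terms that would correspond to something like this (a sketch; I’m not certain this matches my stack exactly):

services:
  openhab:
    # privileged mode already exposes all of /dev, which is presumably
    # why dockerd reports the mapped path as "already exists"
    privileged: true
    devices:
      - /dev/ttyACM0:/dev/ttyACM0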
What always solves the problem is simply re-deploying the entire stack, but that can’t be a permanent solution.
Does anyone have the same problem, or an idea how to solve this permanently at the root cause? Am I just “exposing it wrong”?