Container with Python, SPI and Chromium

Good day,
I have been developing applications for many years, mainly in the web and business environment.
I have developed a web application that now connects to an RFID terminal.
It works like this: a Raspberry Pi talks to the backend of the web application via an API. The Pi runs a Python script that starts a web server and displays a website in the Chromium browser.
When an RFID tag is scanned, the Python script loads a different page in the browser.
The user interacts with the application via a touch screen.
But that’s just for background information.

Since I currently have to build a Raspberry Pi image for each terminal, I would like to make my work easier.
The software is also updated from time to time, and I find that fiddly on the native system, especially when experienced end users or I have to do it remotely on umpteen terminals.

My idea was to build a Docker image that contains the Python script and the WebUI assets, and to run that on the Pi.
Inside the container, however, I need SPI access for the RFID reader, and I need to be able to control the browser via Selenium.

I’ve already played around with it a bit and set up a GitHub workflow that builds the image accordingly.
I have currently reached the point where my script tells me that Chromium and ChromeDriver do not work together.

Now the general question is, is this even a sensible approach or would you do something else here?

A chatbot claims you can pass the SPI devices with --device and use privileged mode:

docker run --rm -it \
  --privileged \
  --device=/dev/spidev0.0 \
  --device=/dev/spidev0.1 \
  my_spi_image

Untested, could be a hallucination. Note that --privileged already exposes all host devices, so combined with it the --device flags are redundant; using only --device would be the less invasive option.
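Inside the container, the script would then open the passed-through device with the py-spidev package, roughly like this (untested sketch; the bus/device numbers and transfer bytes are assumptions, the real protocol depends on your RFID reader):

```python
def read_spi_bytes(n, bus=0, device=0, speed_hz=1_000_000):
    """Untested sketch: clock n bytes from the RFID reader on
    /dev/spidev<bus>.<device> using the py-spidev package."""
    import spidev  # imported here so the sketch reads without the package installed
    spi = spidev.SpiDev()
    spi.open(bus, device)          # corresponds to --device=/dev/spidev0.0
    spi.max_speed_hz = speed_hz
    try:
        # xfer2 sends dummy bytes and returns what the slave clocks back
        return spi.xfer2([0x00] * n)
    finally:
        spi.close()
```

If this works from inside the container, the SPI half of the problem is solved and only the browser control remains.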

Thanks for the reply.

The question is more about feasibility: whether it makes sense to go down this route, or whether it would be better and easier to take a different approach.

In your example code, the browser is not taken into account, or is it? I have to control the Chromium browser using Selenium.

Thanks.

If you have input on the host (hardware) and output on the host (browser), then processing in a container is of course more complicated, because you have to bridge the isolation layer twice.

I’m not sure how to connect to Selenium on the host, or how to let Selenium inside Docker talk to a browser on the host.

We can help with the Docker-related parts, but I don’t have personal experience with Selenium, so I may not understand everything correctly. I found this image:

https://hub.docker.com/r/selenium/standalone-chromium

It mentions Remote WebDriver, so it should be able to control a Chromium browser even remotely.
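For what it’s worth, connecting from a Python script to that container would look roughly like this (untested sketch; the port is the image’s documented default, and the kiosk flag and example URL are assumptions for a touch terminal):

```python
SELENIUM_URL = "http://localhost:4444"  # standalone image's default Remote WebDriver port

def open_page(url):
    """Untested sketch: connect to the Chromium inside a
    selenium/standalone-chromium container and load a page."""
    from selenium import webdriver  # imported here so the sketch reads without the package installed
    options = webdriver.ChromeOptions()
    options.add_argument("--kiosk")  # fullscreen, assumption for a touch terminal
    driver = webdriver.Remote(command_executor=SELENIUM_URL, options=options)
    driver.get(url)
    return driver

# e.g. on an RFID scan: open_page("http://backend.example/scan-result")
```

The nice part is that the container ships matching Chromium and ChromeDriver versions, which would sidestep the mismatch you ran into.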

Would these images help you? Or are you already basing yours on them?
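As a rough sketch of how the two containers could be combined (hypothetical service names; the devices come from the earlier docker run example and the port from the image’s docs):

```yaml
# Hypothetical compose sketch: one container for the SPI/RFID script,
# one for the remote-controlled Chromium.
services:
  app:
    image: my_spi_image
    devices:                 # pass only the SPI devices instead of --privileged
      - /dev/spidev0.0
      - /dev/spidev0.1
    depends_on:
      - chromium
  chromium:
    image: selenium/standalone-chromium
    ports:
      - "4444:4444"          # Remote WebDriver endpoint
    shm_size: 2g             # Chromium tends to need a larger /dev/shm
```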

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.