Running Docker containers on a Synology NAS

2024/01/01

If this advice helped you, let me know.

How to run Docker containers on a Synology NAS, easily and with custom settings

This was prompted by a recent need to run a local Jupyter instance using the Jupyter Docker Stacks.

The easiest way to run Docker containers on a Synology NAS is to use the Container Manager, which is available through Synology’s Package Center right from the user interface.

The Container Manager allows you to run custom containers and set some, but not all, options. I found various attempts at working around this, but none of them are simple, and some are quite elaborate, such as installing Portainer, which is a whole separate can of worms. Not to mention that it also competes for your NAS’s rather limited memory.

Very often you want to run a Docker container under a specific user ID (UID) and group ID (GID). Anyone who wants to do this has two questions to answer:

  1. What are the UID and GID of the user that I want to run the Docker container as?
  2. How do I run a container with the --user flag set to the appropriate values?

It seems that the only way to answer question (1) is to enable SSH access to your NAS, log in with the credentials of the user you want to use (the ones you use to log into the NAS user interface), and run the id command to see your default UID and GID values.
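
For example, the output will look something like this (the username and group list are illustrative; 1026 and 100 are the values my user ended up with, and yours may well differ):

id
uid=1026(yourname) gid=100(users) groups=100(users),101(administrators)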

If you now try to use these findings in the Container Manager, you will find that there is no place to put the --user flag.

Various solutions have been proposed, including in answers elsewhere online. Unfortunately, starting the container from the SSH command line does not work: the Docker daemon runs as root by default, and even users in the administrators group are not part of the root group. This can, of course, be resolved by reconfiguring your NAS, but I’m not interested in elaborate solutions. I’m interested in easy ones.

There is a simple workaround, however. If you open “Control Panel > Task Scheduler”, you can select “Create > Scheduled Task > User-defined script” and supply a custom command line. Make sure, however, to turn off the default schedule while creating the task. The script you provide can be run as a user of your choice. Here is what I added to get Jupyter running in a Docker container, as one of the predefined users:

docker run --tty \
  --name jupyter-1 \
  --user root \
  -p 8888:8888 \
  -e NB_UID=1026 \
  -e NB_GID=100 \
  -v /volume1/docker/jupyter:/home/jovyan/work  \
  quay.io/jupyter/datascience-notebook:2023-11-17

I have reason to believe that the backslashes don’t actually work in the NAS UI (I haven’t checked), so they are present here only for clarity of exposition. If you add the above command to the NAS UI, remove the backslashes and make sure the entire text contains no intervening newline characters.
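
For reference, here is the same command as a single line, ready to paste into the task’s script field:

docker run --tty --name jupyter-1 --user root -p 8888:8888 -e NB_UID=1026 -e NB_GID=100 -v /volume1/docker/jupyter:/home/jovyan/work quay.io/jupyter/datascience-notebook:2023-11-17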

You will need to agree to a scary-looking warning about manually fiddling with the configuration. I think this particular setup does nothing sinister as-is, so it should be OK to agree to it.

Using a predefined user allows Jupyter to read and write the mounted directory, which is why it was important to have the UID and GID set up correctly. Each of the flags was important to get everything up and running:

  * --user root lets the image’s start-up script adjust file ownership before dropping privileges to the requested user.
  * -e NB_UID=1026 and -e NB_GID=100 tell the Jupyter Docker Stacks start-up script which UID and GID to run the notebook server as (the values found with the id command above).
  * -p 8888:8888 publishes the notebook server’s port so it is reachable on your network.
  * -v /volume1/docker/jupyter:/home/jovyan/work mounts a NAS folder as the notebook work directory.

Once you set everything up as described above, you can run the task, and with a bit of luck you will have a running container. The first run may take quite a while, since the datascience-notebook image is over 5 GB and takes time to download. Once everything is running, you will be able to see and manage the container from within the Container Manager UI.
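
If you prefer checking from a script rather than the UI, another task run as root with a standard docker ps invocation will also show whether the container is up (the filter simply matches the container name chosen above):

docker ps --filter name=jupyter-1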

I defined another manually triggered task, running as user root, that contains only:

docker logs jupyter-1

I use this to get the logs from the started Jupyter container emailed to me. This matters because a started Jupyter container prints an access token in its logs, but you cannot copy that text directly from the container’s logs in the Container Manager, since for some reason it is not selectable. The workaround is to issue a docker logs command and have your NAS email its output to you. Since you normally cannot run docker commands from the SSH command line (you do not have access to the docker.sock socket), this was the most convenient approach I found.
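
If you want the email to contain just the token rather than the full log, something along these lines should also work as the task’s script; this is only a sketch and assumes the token still appears in the log output as part of a URL containing token=:

docker logs jupyter-1 2>&1 | grep token=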

References