I am an old-school sysadmin. I still believe that operating systems should be installed with the minimum possible options, that services should only have the packages they actually need, and that separation matters. Historically, each service got its own machine. That approach was clean, predictable, and easy to debug. When something broke, you knew exactly where to look.
My Linux journey started back in 2002 with Gentoo. Anyone who cut their teeth compiling kernels and waiting overnight for a full world update will understand what Gentoo teaches you. It teaches discipline. It teaches you to read before acting. It also instils a deep and lasting dislike of bloat, abstraction for its own sake, and magic that cannot be explained. That mindset has stuck with me ever since.
Before long, running one physical server per service became obviously wasteful. Xen arrived at just the right moment. Virtualisation let me keep the same mental model while using hardware properly. One host, many guests, full operating systems, clean separation. It felt like a natural evolution rather than a revolution.
As environments grew, the pain shifted again. Managing lots of virtual machines is fine, but sometimes you just want to deploy a service without standing up an entire operating system around it. I still want full command line access and I still want to understand what is happening under the hood, but I am also realistic enough to admit that not everything needs to be done by hand every single time.
That is how Proxmox entered the picture. It struck a sensible balance. Strong fundamentals, no-nonsense access underneath, and a web interface that exists to help rather than hinder. I am still running it today, and I have no intention of moving away from it.
While I was happily building and maintaining virtual machines, the rest of the industry took another step. Containers stopped being a niche tool and became the default assumption. Documentation increasingly started with Docker and treated anything else as an edge case. For a long time I resisted. Containers felt under-documented, over-hyped, and very keen on hiding complexity behind YAML files and buzzwords.
Eventually, resistance becomes self inflicted pain. If every new tool expects containers, then containers become part of the job whether you like them or not.
Rather than replacing my existing setup, I decided to layer containerisation on top of what I already trust. Proxmox remains the foundation. Docker sits above it. That keeps the blast radius sensible and avoids turning the entire environment into an experiment.
Docker is not perfect, but it is ubiquitous. Everything supports it, everything assumes it, and fighting that reality is rarely productive. As with virtualisation, I want the command line underneath, but I also want a GUI when managing multiple services or exploring unfamiliar images. Portainer fills that gap nicely. It does not remove the CLI. It simply saves time.
On a Debian based Proxmox host, getting started is refreshingly dull, which is exactly how infrastructure should be.
First, install the packages needed to fetch and verify the Docker repository.

apt-get install -y ca-certificates curl gnupg lsb-release
Add the Docker GPG key to a dedicated keyring. The old apt-key tool is deprecated and gone from current Debian releases, so the key goes under /etc/apt/keyrings instead.

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
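If you want to verify the key before trusting it, you can display its fingerprint without importing anything. At the time of writing, Docker publishes 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88 as the expected value in its install documentation.

gpg --show-keys /etc/apt/keyrings/docker.gpg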
Add the Docker repository, pinned to that keyring.

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
Update the package lists and install Docker Engine together with the Compose plugin. With the key properly in place, there is no reason to bypass package authentication with flags like --allow-unauthenticated.

apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
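Before going any further, a quick sanity check that the daemon and the Compose plugin actually work is cheap insurance. The hello-world image exists purely for this purpose.

docker --version
docker compose version
docker run --rm hello-world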
Enable Docker so it starts on boot, and start it immediately.

systemctl enable --now docker
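If you prefer confirmation to faith, check that the service is actually up before moving on.

systemctl status docker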
Finally, start Portainer itself. The original portainer/portainer image is deprecated; portainer/portainer-ce is its successor.

docker run -d --name portainer --restart=always -p 9000:9000 -v /mnt/storage/vps/docker/data:/data -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce:latest
At this point, Portainer is available on port 9000 of the host. Open a browser and head to http://yourproxmoxserver:9000 to complete the initial setup.
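Since the Compose plugin is already installed, the same deployment can be written down declaratively instead of typed out. A minimal docker-compose.yml sketch, assuming the same data path as above; remove the existing container first with docker rm -f portainer, then run docker compose up -d from the directory holding the file.

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: always
    ports:
      - "9000:9000"
    volumes:
      - /mnt/storage/vps/docker/data:/data
      - /var/run/docker.sock:/var/run/docker.sock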
This is not the end of the journey. Containers bring a different set of trade-offs. Networking behaves differently. Storage needs deliberate thought. It becomes very easy to pile too much onto a single host because it all feels lightweight and disposable.
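To make that concrete: containers on the default bridge only reach each other by IP address, while a user-defined network gives you DNS between containers, and a named volume outlives whatever container mounts it. A small illustration, with entirely made-up names and nginx standing in for a real service:

docker network create backend
docker volume create webdata
docker run -d --name web --network backend -v webdata:/usr/share/nginx/html nginx

None of that is Proxmox specific, but it is exactly the kind of deliberate choice that stops a container host from turning into a tangle.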
Used carefully, though, containers make sense. Virtual machines where isolation and long-lived state matter. Containers where speed, density, and reproducibility matter. Proxmox underneath, Docker on top, and just enough GUI to stay efficient without losing understanding.
That balance feels familiar. Old ideas, slightly rearranged, with better tooling.