We always aim to build services that are as resource-efficient as possible. This is true for the services we offer externally and, just as importantly, for our own internal infrastructure. Many of our internal tools, like billing and monitoring systems, rely on PostgreSQL databases. While essential, these databases often sit idle for long periods, consuming RAM and CPU cycles even though nothing is querying them.
So, we asked ourselves: can we apply our scale-to-zero philosophy to our own databases? The answer is yes. We’ve developed a system to provision PostgreSQL instances that only run when they are actively being used. This design is incredibly resource-efficient but does come with some trade-offs, which we’ll explore.
Here is a schematic overview of what we built and how we achieved this dynamic scaling.
```mermaid
flowchart LR
    subgraph Machine
        A[systemd.socket]
        B[systemd-socket-proxyd]
        D[(local disk)]
        A -- hands over connection --> B
        subgraph Docker-Compose
            C[postgres container]
        end
        B -- port 14532 --> C
        C -- data-dir --> D
    end
    X[Internet] -- port 24532 --> A
```
The core of this setup is systemd socket activation. Instead of having a PostgreSQL container running 24/7, we let the systemd init system listen on the database port. When an application attempts to connect, systemd intercepts the request, starts the database container on-demand, and then hands the connection over. Once the database is no longer in use, it’s automatically shut down.
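If you want to get a feel for this mechanism before touching any unit files, systemd ships a small test tool, systemd-socket-activate, that mimics what the init system does: it listens on a port and only launches the given program when the first connection arrives. The snippet below is just an illustration and not part of our setup; it assumes something is already listening locally on port 22 (sshd, purely as a stand-in backend) and uses systemd-socket-proxyd to forward to it.

```bash
# Listen on port 12345; systemd-socket-proxyd is only started once the
# first TCP connection arrives, then forwards it to the local backend.
systemd-socket-activate -l 12345 \
    /usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:22

# In a second terminal, this triggers the activation:
#   ssh -p 12345 localhost
```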
This approach combines the power of standard, battle-tested Linux tools: systemd for service management and socket activation, and Docker Compose for defining our containerized database environment. It's simple, robust, and requires no custom software.
We made two specific technology choices for this setup: running PostgreSQL in a container and managing it with Docker Compose.
Decoupling from the Host OS: By running PostgreSQL inside a Docker container, we decouple the database version from the host operating system’s version. This gives us the flexibility to run different versions of PostgreSQL for different internal services on the same host without conflicts or dependency issues. We can upgrade a database for one service without impacting any others.
Compatibility with systemd: We chose Docker Compose because its lifecycle commands fit perfectly with how systemd manages services. The systemd ExecStart directive expects a command that runs in the foreground until the service is stopped, and docker-compose up does exactly that. The more classic docker create followed by docker start semantics are harder to manage, as systemd would need a more complex script to handle the lifecycle. docker-compose down provides a single, clean command for the ExecStop directive, ensuring the entire environment is torn down gracefully.
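To illustrate the difference: with docker create and docker start, ExecStart has no natural foreground process, so systemd would need a wrapper script along these lines. This is only a sketch of the alternative we decided against; the names and options are illustrative and mirror the Compose file shown further below.

```bash
#!/usr/bin/env bash
# Sketch of the wrapper a docker create / docker start approach would need
# under systemd. Not what we run; values are illustrative.
set -euo pipefail

NAME=pg1-postgres

start() {
  # Create the container only on first use.
  if ! docker inspect "$NAME" >/dev/null 2>&1; then
    docker create --name "$NAME" \
      -p 127.0.0.1:14532:5432 \
      -v /root/pg/pg1/data:/var/lib/postgresql \
      -e POSTGRES_PASSWORD=SuperSecretAdminPassword \
      postgres:18
  fi
  docker start "$NAME"
  # systemd's ExecStart wants a foreground process, so we have to fake one
  # by following the container logs; if this log-follower dies, systemd
  # thinks the service failed even though the container keeps running.
  exec docker logs -f "$NAME"
}

stop() {
  docker stop "$NAME"
  docker rm "$NAME"
}

case "${1:-}" in
  start) start ;;
  stop)  stop ;;
  *)     echo "usage: $0 {start|stop}" >&2; exit 1 ;;
esac
```

Keeping a fake foreground process alive and cleaning up containers by hand is exactly the bookkeeping that docker-compose up and docker-compose down handle for us.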
Let’s break down the configuration files that make this possible.
We use a combination of a docker-compose.yml file to define the database and three systemd unit files to manage the scale-to-zero lifecycle.
This is a standard docker-compose.yml file. It defines a PostgreSQL 18 container, maps the container's port 5432 to port 14532 on the host's loopback interface, and mounts a volume to persist the database data on the local disk. This ensures that the data remains safe even when the container is stopped. All settings documented for the official PostgreSQL image on Docker Hub can be used here, allowing for further customization such as creating specific users or databases on startup.
/root/pg/pg1/docker-compose.yml
```yaml
version: "3"
services:
  database:
    image: 'postgres:18'
    ports:
      - "127.0.0.1:14532:5432"
    volumes:
      - /root/pg/pg1/data:/var/lib/postgresql
    environment:
      POSTGRES_PASSWORD: SuperSecretAdminPassword
```
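Before wiring this file into systemd, we like to give it a quick manual spin. This check is optional and assumes the psql client is installed on the host:

```bash
cd /root/pg/pg1
docker-compose up -d
sleep 5   # give initdb a moment on the very first start
PGPASSWORD=SuperSecretAdminPassword \
    psql -h 127.0.0.1 -p 14532 -U postgres -c 'SELECT version();'
docker-compose down
```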
This .socket unit tells systemd to listen on port 24532 on all network interfaces. When a TCP connection arrives, systemd will activate pg1-proxy.service. This is the entry point for all database connections.
/etc/systemd/system/pg1-proxy.socket
```ini
[Unit]
Description=Socket for pg1 pg proxy (24532->127.0.0.1:14532)

[Socket]
ListenStream=0.0.0.0:24532
ReusePort=true
NoDelay=true
Backlog=128

[Install]
WantedBy=sockets.target
```
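One way to convince yourself that systemd, and not a container, is holding the public port (once the units from the end of this post are enabled):

```bash
systemctl status pg1-proxy.socket   # shows "Listen: 0.0.0.0:24532 (Stream)"
ss -ltn 'sport = :24532'            # the listener exists even though no container runs
```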
This is where the on-demand logic lives. When activated by the socket, this service first starts the actual database service (Requires=pg1-postgres.service). The ExecStartPre command is a small but critical shell loop that repeatedly checks if the internal PostgreSQL port is open. Without this check, a race condition could occur where the proxy starts and forwards the client's connection before the PostgreSQL container has finished initializing. This would result in an immediate "Connection refused" error for the client. This pre-start script ensures the handoff is smooth and the client only connects once the database is fully ready.
The main process is systemd-socket-proxyd, a built-in tool that forwards the incoming connection to the internal port where the PostgreSQL container is listening (127.0.0.1:14532). The crucial part is --exit-idle-time=3min, which tells the proxy to automatically exit once it has been idle for three minutes.
/etc/systemd/system/pg1-proxy.service
```ini
[Unit]
Description=Socket-activated TCP proxy to local Postgres on 14532

Requires=pg1-postgres.service
After=pg1-postgres.service

[Service]
Type=simple
Sockets=pg1-proxy.socket
ExecStartPre=/bin/bash -c 'for i in {1..10}; do nc -z 127.0.0.1 14532 && exit 0; sleep 1; done; exit 0'
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=3min 127.0.0.1:14532
```
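The readiness loop above only checks that the TCP port is open and relies on nc being present on the host. A variation we find handy, though it is not what the unit above uses, is pg_isready from the PostgreSQL client tools, which performs a real protocol-level handshake; wrapped in /bin/bash -c it can replace the nc loop one-to-one:

```bash
# Assumes the postgresql-client package is installed on the host.
for i in {1..10}; do
    pg_isready -h 127.0.0.1 -p 14532 -q && break
    sleep 1
done
```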
This service manages the Docker Compose lifecycle. It's started by the proxy service. The key directive is StopWhenUnneeded=true. This links its lifecycle to the proxy service: when pg1-proxy.service stops (because its idle timer expired), systemd sees that this service is no longer needed and automatically stops it by running docker-compose down. The container is shut down, freeing up all its resources.
/etc/systemd/system/pg1-postgres.service
```ini
[Unit]
Description=postgres container
PartOf=pg1-proxy.service
StopWhenUnneeded=true

[Service]
WorkingDirectory=/root/pg/pg1

Type=simple
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down

Restart=on-failure
RestartSec=2s
TimeoutStopSec=30s
```
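With all three units in place, the easiest way to watch the full cycle is to follow their journals while a client connects and then goes quiet (this assumes the units are already enabled, as described in the next section):

```bash
journalctl -f -u pg1-proxy.service -u pg1-postgres.service
# On the first client connection both units start; roughly three idle minutes
# after the last connection closes, the proxy exits and `docker ps` no longer
# shows the postgres container.
```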
This setup is incredibly efficient, but it comes with one major consideration: the "cold start" latency. The very first connection to the database after a period of inactivity will be delayed. The client has to wait for systemd to run docker-compose up and for the PostgreSQL container to initialize. In our experience, this takes about one second for a small database, but it increases with storage size.
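A rough way to see the penalty on a concrete setup is to time the same trivial query twice. Here, db-host.internal is a placeholder for the machine running the units, and the password comes from the Compose file above:

```bash
export PGPASSWORD=SuperSecretAdminPassword
time psql -h db-host.internal -p 24532 -U postgres -c 'SELECT 1;'   # cold: container starts first
time psql -h db-host.internal -p 24532 -U postgres -c 'SELECT 1;'   # warm: everything already running
```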
For many internal systems—CI/CD, batch jobs, or admin dashboards with infrequent use—this delay is a perfectly acceptable trade-off for the significant resource savings. For high-traffic, latency-sensitive production applications, a traditional, always-on database is still the right choice.
To bring a new database online, we just need to enable the systemd units.
```bash
systemctl daemon-reload
systemctl enable pg1-proxy.service
systemctl enable pg1-postgres.service
systemctl enable --now pg1-proxy.socket
```
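Right after enabling, a quick look confirms that the port is being watched while nothing is actually running yet (our usual check, nothing mandatory):

```bash
systemctl is-active pg1-proxy.socket   # -> active: systemd is waiting for clients
docker ps                              # -> no postgres container running yet
```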
Once enabled, the database is ready to accept connections, but it won’t be consuming any resources until the first one arrives. This is another small step in our mission to eliminate waste, proving that even essential infrastructure like a relational database can be run in a lean, on-demand fashion.