If you’ve ever tried to get different processes on a Linux system to talk to each other efficiently, you know the landscape is… let’s say, fragmented. We have D-Bus, Unix sockets with custom protocols, REST APIs over localhost, gRPC, and countless other approaches. Each comes with its own complexity, tooling requirements, and learning curve.
Recently, I stumbled upon Varlink (and its newer Rust-centric sibling Zlink), and it immediately clicked with what we’re trying to achieve at DownToZero. We’re building infrastructure that needs reliable, low-overhead inter-process communication - from container orchestration to machine settings management. The more I dug into Varlink, the more I realized this could be exactly what we need.
In this post, I’ll walk you through my experiments with Varlink. We’ll build a simple hello world service together, step by step, and I’ll share my thoughts on why this technology excites me for our platform’s future.
Varlink is an interface description format and protocol designed for defining and implementing service interfaces. Think of it as a simpler, more modern alternative to D-Bus, or a lighter alternative to gRPC for local communication.
If you’ve worked with D-Bus before, you probably know the pain: complex type systems, introspection that requires special tools, XML configuration files that seem designed to confuse. D-Bus is powerful, but it’s also from an era when “simple” wasn’t a design priority. Varlink takes a different approach - it’s what D-Bus might look like if it were designed today, with modern sensibilities about developer experience.
Here’s what makes Varlink interesting:
Self-describing interfaces: Every Varlink service can describe its own API. You can connect to any service and ask “what can you do?” and get a machine-readable (and human-readable) answer.
Language agnostic: The protocol is simple enough that implementations exist in Rust, Go, Python, C, and more. But importantly, the interfaces themselves are language-neutral.
Socket-based: Communication happens over Unix sockets (or TCP for remote connections), which means it plays nicely with the Linux ecosystem, containers, and systemd.
JSON-based protocol: The wire format is JSON, which makes debugging trivial. You can literally use netcat to talk to a Varlink service if you want.
systemd integration: This is huge. systemd already uses Varlink for some of its internal services, which means the protocol is battle-tested and has first-party support for socket activation and service management.
At DTZ, we’re constantly dealing with machine-level configuration and orchestration. Our infrastructure spans physical hardware (you might remember our solar-powered nodes), containers, and various system services that need to coordinate.
Currently, we have a mix of approaches for inter-process communication, and none of them feels like a natural default for system-level work.
What we’re missing is a unified way for our system-level services to communicate - think machine configuration updates, coordination between the container runtime and its management services, and health reporting across components.
For all of these, Varlink offers a compelling solution. It’s local-first (great for latency), self-documenting (great for debugging), and has native systemd support (great for reliability).
The fact that systemd itself uses Varlink for services like systemd-resolved and systemd-hostnamed means we can potentially integrate directly with system services using the same protocol we use for our own services. That’s powerful.
Enough theory - let’s get our hands dirty. I’ve created a simple hello world implementation to test the waters, and I’ll walk you through building it from scratch.
The complete source code is available at https://github.com/DownToZero-Cloud/varlink-helloworld.
First, create a new Rust project:
cargo new varlink-helloworld
cd varlink-helloworld
Now we need to add our dependencies. Open Cargo.toml and add:
[package]
name = "varlink-helloworld"
version = "0.1.0"
edition = "2024"
[dependencies]
futures-util = "0.3"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
zlink = { version = "0.2" }
We’re using zlink, the modern Rust implementation of Varlink. It’s async-first and built on tokio, which fits perfectly with how we build services at DTZ. We also need serde for JSON serialization (Varlink’s wire format) and futures-util for stream handling.
Now for the fun part - implementing our service from scratch. The beauty of zlink is that we don’t need code generation or separate interface definition files. We define everything directly in Rust, which means full IDE support, type checking, and no build-time magic.
Create src/main.rs:
use serde::{Deserialize, Serialize};
use zlink::{
    self, Call, Connection, ReplyError, Server, Service,
    connection::Socket, service::MethodReply,
    unix, varlink_service::Info,
};

const SOCKET_PATH: &str = "/tmp/hello.varlink";

#[tokio::main]
async fn main() {
    println!("starting varlink hello world server");
    run_server().await;
}
pub async fn run_server() {
    // Clean up any existing socket file
    let _ = tokio::fs::remove_file(SOCKET_PATH).await;

    // Bind to the Unix socket
    let listener = unix::bind(SOCKET_PATH).unwrap();

    // Create our service and server
    let service = HelloWorld {};
    let server = Server::new(listener, service);

    match server.run().await {
        Ok(_) => println!("server done."),
        Err(e) => println!("server error: {:?}", e),
    }
}
This is our entry point - simple and clean. We bind to a Unix socket at /tmp/hello.varlink, create our service, and let the server handle incoming connections.
Here’s where Varlink’s elegance shows through. We define our protocol entirely using Rust types with serde annotations. Let’s look at each piece:
Method Calls (Incoming Requests)
#[derive(Debug, Deserialize)]
#[serde(tag = "method")]
enum HelloWorldMethod {
    #[serde(rename = "rocks.dtz.HelloWorld.Hello")]
    Hello,
    #[serde(rename = "rocks.dtz.HelloWorld.NamedHello")]
    NamedHello {
        #[serde(default)]
        parameters: NamedHelloParameters,
    },
    #[serde(rename = "org.varlink.service.GetInfo")]
    VarlinkGetInfo,
}

#[derive(Debug, Serialize, Deserialize, Default)]
pub struct NamedHelloParameters {
    name: String,
}
The HelloWorldMethod enum represents all the methods our service can handle. The #[serde(tag = "method")] attribute tells serde to use the JSON method field to determine which variant to deserialize into. The #[serde(rename = "...")] attributes map our Rust enum variants to the actual Varlink method names.
Notice how NamedHello has a nested parameters field - this matches the Varlink protocol where method parameters are wrapped in a parameters object in the JSON.
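Putting the two attributes together, the raw JSON that deserializes into `HelloWorldMethod::NamedHello` looks like this on the wire (sent as a single line):

```json
{"method": "rocks.dtz.HelloWorld.NamedHello", "parameters": {"name": "jens"}}
```

The `method` field selects the enum variant via the tag, and the nested `parameters` object fills the variant's field.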
Replies (Outgoing Responses)
#[derive(Debug, Serialize)]
#[serde(untagged)]
enum HelloWorldReply {
    Hello(HelloResponse),
    VarlinkInfo(Info<'static>),
}

#[derive(Debug, Serialize)]
pub struct HelloResponse {
    message: String,
}
The reply enum uses #[serde(untagged)] because Varlink responses don’t include a type discriminator - the response type is implicit based on the method called. HelloResponse is our simple response struct containing just a message field.
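Because the enum is untagged, our struct serializes to plain fields, and the reply envelope around it comes from the Varlink spec: a successful reply is a JSON object whose `parameters` member carries the payload. So a `NamedHello` reply goes over the wire roughly as:

```json
{"parameters": {"message": "Hello, jens!"}}
```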
Error Handling
#[derive(Debug, ReplyError)]
#[zlink(interface = "rocks.dtz.HelloWorld")]
enum HelloWorldError {
    Error { message: String },
}
The #[derive(ReplyError)] macro from zlink generates the necessary code to serialize our errors according to the Varlink error format. The #[zlink(interface = "...")] attribute specifies which interface these errors belong to.
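Per the Varlink error convention, an error reply names the fully-qualified error plus its fields, so our `Error` variant would serialize roughly as:

```json
{"error": "rocks.dtz.HelloWorld.Error", "parameters": {"message": "something went wrong"}}
```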
Now we tie everything together by implementing the Service trait:
struct HelloWorld {}

impl Service for HelloWorld {
    type MethodCall<'de> = HelloWorldMethod;
    type ReplyParams<'ser> = HelloWorldReply;
    type ReplyStreamParams = ();
    type ReplyStream = futures_util::stream::Empty<zlink::Reply<()>>;
    type ReplyError<'ser> = HelloWorldError;

    async fn handle<'ser, 'de: 'ser, Sock: Socket>(
        &'ser mut self,
        call: Call<Self::MethodCall<'de>>,
        _conn: &mut Connection<Sock>,
    ) -> MethodReply<Self::ReplyParams<'ser>, Self::ReplyStream, Self::ReplyError<'ser>> {
        println!("handling call: {:?}", call.method());
        match call.method() {
            HelloWorldMethod::Hello => {
                MethodReply::Single(Some(HelloWorldReply::Hello(HelloResponse {
                    message: "Hello, World!".to_string(),
                })))
            }
            HelloWorldMethod::NamedHello { parameters } => {
                MethodReply::Single(Some(HelloWorldReply::Hello(HelloResponse {
                    message: format!("Hello, {}!", parameters.name),
                })))
            }
            HelloWorldMethod::VarlinkGetInfo => {
                MethodReply::Single(Some(HelloWorldReply::VarlinkInfo(Info::<'static> {
                    vendor: "DownToZero",
                    product: "hello-world",
                    url: "https://github.com/DownToZero-Cloud/varlink-helloworld",
                    interfaces: vec!["rocks.dtz.HelloWorld", "org.varlink.service"],
                    version: "1.0.0",
                })))
            }
        }
    }
}
The Service trait is the heart of zlink. Let’s break down what’s happening:
Associated Types: We declare what types our service uses for method calls, replies, streaming responses, and errors. This gives us complete type safety throughout.
The handle method: This is where all incoming calls are routed. We pattern match on the deserialized method call and return the appropriate response.
MethodReply::Single: For non-streaming responses, we wrap our reply in MethodReply::Single. Varlink also supports streaming responses (useful for monitoring or subscriptions), but we keep it simple here.
VarlinkGetInfo: Every Varlink service should implement the org.varlink.service.GetInfo method. This returns metadata about our service - vendor, product name, version, URL, and the list of interfaces we implement.
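As an aside on the streaming case we skipped: per the Varlink spec, a client opts in by setting "more": true on the call, and the service marks every non-final reply with "continues": true. With a hypothetical Watch method (not part of our service) the exchange would look roughly like:

```
// client -> service
{"method": "rocks.dtz.HelloWorld.Watch", "more": true}
// service -> client (repeated while the stream is live)
{"parameters": {"message": "Hello, World!"}, "continues": true}
// service -> client (final reply, no "continues")
{"parameters": {"message": "Hello, World!"}}
```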
Start the server:
cargo run
You should see:
starting varlink hello world server
Now, in another terminal, we can test it using varlinkctl, which is part of systemd. First, let’s see what the service exposes:
varlinkctl info /tmp/hello.varlink
Output:
Vendor: DownToZero
Product: hello-world
Version: 1.0.0
URL: https://github.com/DownToZero-Cloud/varlink-helloworld
Interfaces: org.varlink.service
rocks.dtz.HelloWorld
This is the self-describing nature of Varlink in action. The client can discover exactly what this service offers.
Now let’s call our methods:
varlinkctl call /tmp/hello.varlink rocks.dtz.HelloWorld.Hello {}
Output:
{
        "message" : "Hello, World!"
}
And with a parameter:
varlinkctl call /tmp/hello.varlink rocks.dtz.HelloWorld.NamedHello '{"name":"jens"}'
Output:
{
        "message" : "Hello, jens!"
}
It works! We have a fully functional Varlink service.
One thing I love about Varlink is how easy it is to explore and debug. The protocol is JSON over a socket, with each message terminated by a single NUL byte, so you can use basic tools like socat (or a Unix-socket-capable netcat) for manual testing - note the \0 in the printf, which supplies that terminator:
printf '{"method":"rocks.dtz.HelloWorld.Hello","parameters":{}}\0' | \
socat - UNIX-CONNECT:/tmp/hello.varlink
You’ll get back a JSON response you can pipe through jq or just read directly. No special debugging tools needed, no binary protocols to decode. When you’re debugging at 2 AM and something isn’t working, this simplicity is invaluable.
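That NUL-byte framing is the entire transport layer, which is part of why hand-rolling a debugging tool is so easy. Here's a quick std-only sketch (independent of zlink) of how a reader would split a raw byte stream into complete messages:

```rust
// Varlink frames each JSON message with a trailing NUL byte. This helper
// splits a raw byte buffer into the complete messages it contains, plus
// any trailing partial message still waiting for more bytes.
fn split_messages(buf: &[u8]) -> (Vec<String>, Vec<u8>) {
    let mut messages = Vec::new();
    let mut start = 0;
    for (i, b) in buf.iter().enumerate() {
        if *b == 0 {
            messages.push(String::from_utf8_lossy(&buf[start..i]).into_owned());
            start = i + 1;
        }
    }
    (messages, buf[start..].to_vec())
}

fn main() {
    // One complete call followed by the beginning of a second one.
    let stream = b"{\"method\":\"rocks.dtz.HelloWorld.Hello\"}\0{\"method";
    let (msgs, pending) = split_messages(stream);
    println!("{} complete message(s), {} byte(s) pending", msgs.len(), pending.len());
}
```

Real implementations keep the pending bytes around and prepend them to the next read, but the framing really is this simple.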
You can also introspect the interface definition itself:
varlinkctl introspect /tmp/hello.varlink rocks.dtz.HelloWorld
This returns the interface definition the service advertises. Combined with the info command, you have complete visibility into what any Varlink service can do - even services you’ve never seen before.
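For reference, the equivalent definition in Varlink's interface description language would read roughly like this (a sketch based on the Varlink IDL syntax - we never had to write this file ourselves):

```
interface rocks.dtz.HelloWorld

method Hello() -> (message: string)

method NamedHello(name: string) -> (message: string)

error Error(message: string)
```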
One of the most powerful features of Varlink is its integration with systemd. You can create socket-activated services that only start when someone connects, and systemd manages the lifecycle.
Create a systemd socket unit (hello-varlink.socket):
[Unit]
Description=Hello World Varlink Socket
[Socket]
ListenStream=/run/hello.varlink
[Install]
WantedBy=sockets.target
And a corresponding service unit (hello-varlink.service):
[Unit]
Description=Hello World Varlink Service
[Service]
ExecStart=/usr/local/bin/varlink-helloworld
With socket activation, systemd listens on the socket, and when a connection comes in, it starts your service and hands over the socket. This means zero resource usage until someone actually needs the service - perfect for our scale-to-zero philosophy at DTZ. One caveat: to benefit from the handover, the binary must pick up the inherited file descriptor from systemd (the sd_listen_fds protocol) instead of binding its own path, as our standalone example above does.
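For the handover itself, systemd advertises inherited sockets via the LISTEN_FDS environment variable, starting at file descriptor 3 (the sd_listen_fds protocol). A simplified std-only sketch of supporting both modes - the socket path is hypothetical, and a full implementation would also verify LISTEN_PID against the process's own pid:

```rust
use std::env;
use std::os::unix::io::{FromRawFd, RawFd};
use std::os::unix::net::UnixListener;

// systemd passes inherited sockets to the service starting at fd 3.
const SD_LISTEN_FDS_START: RawFd = 3;

// Pick up the listener from systemd socket activation when present,
// otherwise bind our own path for standalone runs.
fn listener_from_systemd_or(path: &str) -> std::io::Result<UnixListener> {
    match env::var("LISTEN_FDS").ok().and_then(|v| v.parse::<i32>().ok()) {
        Some(n) if n >= 1 => {
            // Safety: we trust systemd to have passed a valid listening fd.
            Ok(unsafe { UnixListener::from_raw_fd(SD_LISTEN_FDS_START) })
        }
        _ => UnixListener::bind(path),
    }
}

fn main() -> std::io::Result<()> {
    // Hypothetical demo path; remove any stale socket first.
    let _ = std::fs::remove_file("/tmp/hello-demo.varlink");
    let listener = listener_from_systemd_or("/tmp/hello-demo.varlink")?;
    println!("listening on {:?}", listener.local_addr()?);
    Ok(())
}
```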
But there’s more to the systemd story. Several systemd components, including systemd-resolved and systemd-hostnamed, already expose Varlink interfaces of their own.
This means we can use the exact same Varlink patterns we’re developing for our services to interact with the host system. Want to query the DNS cache? varlinkctl call /run/systemd/resolve/io.systemd.Resolve io.systemd.Resolve.ResolveHostname '{"name":"example.com"}'. Same protocol, same tooling, same mental model.
For DTZ, this is particularly exciting because it means our orchestration layer can use a unified approach for both application-level IPC and system-level management. No more context switching between different APIs and protocols.
This hello world experiment has me genuinely excited about Varlink’s potential for DTZ. Here are some directions I’m considering:
Machine configuration service: A Varlink service that exposes machine settings (network config, resource limits, etc.) with proper access control.
Container orchestration IPC: Using Varlink for communication between our container runtime and management services.
Observability aggregation: A local Varlink service that aggregates metrics from various system components.
Systemd integration: Directly querying systemd’s Varlink interfaces for service status and management.
Health check aggregation: A central Varlink service that collects health status from all our running services and exposes a unified health endpoint.
The fact that we can use the same protocol to talk to our own services AND system services like systemd-resolved is a huge win for consistency and reduced complexity.
I’m also curious about performance characteristics. While JSON isn’t the most compact wire format, for local IPC the parsing overhead is typically negligible compared to the benefits of human-readability. That said, I plan to do some benchmarking in a follow-up experiment to get real numbers on latency and throughput for our use cases.
Varlink hits a sweet spot between simplicity and capability. It’s not trying to solve every distributed systems problem - it’s focused on doing local IPC really well, with just enough features for discoverability and type safety.
For DownToZero, where we’re constantly optimizing for efficiency and simplicity, this approach resonates strongly. We don’t need the complexity of gRPC for local communication. We don’t want the overhead of HTTP for machine-internal calls. Varlink gives us a clean, well-designed protocol that plays nicely with the Linux ecosystem we’re building on.
If you’re interested in experimenting yourself, grab the code from GitHub, fire up your editor, and give it a try. The learning curve is gentle, and there’s something satisfying about seeing that first varlinkctl call return your message.
Happy experimenting!