[{"contents":"Most of our hosted sites are statically rendered, built with tools like Hugo, Zola, or Jekyll. In general, all those site renderers take simplified input (usually Markdown) and generate well-defined HTML as output. This leaves the question: how can I host such a site?\nThere are specialized providers offering hosting for static sites, but as you know, we took a different path when building our core infrastructure. For our cloud, the common denominator for deployment is a container. And with that comes the question: how can I go from a static site build to a container that hosts the website?\nSo let\u0026rsquo;s start from scratch by generating a very simple hello world page with Hugo.\nhugo new site hello cd hello git init git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke echo \u0026#34;theme = \u0026#39;ananke\u0026#39;\u0026#34; \u0026gt;\u0026gt; hugo.toml To check the build, we can start the server locally:\nhugo serve Then our very own \u0026ldquo;Hello World\u0026rdquo; page appears under http://localhost:1313\nSo far so good. Now let\u0026rsquo;s consider putting this into a container.\nTwo Approaches to Containerization Technically, we can go two different routes here:\nPut all the source into a container and run the build process inside Build locally and copy only the outputs into the container To make our example more reproducible and less reliant on local environments, we\u0026rsquo;re choosing option 1 for this scenario. This also makes CI/CD pipelines cleaner since the build environment is fully defined in the Dockerfile.\nBuilding the Container Image A container image always starts with a base image. For this, we\u0026rsquo;re using Alpine Linux since it\u0026rsquo;s small (around 5MB) and provides enough tooling for our project.\nWe\u0026rsquo;ll use a multi-stage build, a technique supported by modern container build tools that lets us use one image for building and another for running. 
This keeps our final image small by excluding build tools we don\u0026rsquo;t need at runtime.\nStage 1: Build\nFROM alpine AS build RUN apk add --no-cache hugo WORKDIR /src ADD . . RUN hugo --minify In this stage, we:\nStart from Alpine Linux Install Hugo using Alpine\u0026rsquo;s package manager Copy our source code into the container Run Hugo with the --minify flag to produce optimized output The built site ends up in /src/public/.\nStage 2: Runtime\nFROM alpine AS runner RUN apk add --no-cache lighttpd COPY --from=build /src/public /var/www/localhost/htdocs EXPOSE 80 CMD [\u0026#34;lighttpd\u0026#34;, \u0026#34;-D\u0026#34;, \u0026#34;-f\u0026#34;, \u0026#34;/etc/lighttpd/lighttpd.conf\u0026#34;] In this stage, we:\nStart fresh from Alpine (no Hugo installed) Install lighttpd, a lightweight and fast HTTP server Copy only the built HTML files from the build stage Expose port 80 and start lighttpd in foreground mode (-D) The Complete Dockerfile Here\u0026rsquo;s the complete Dockerfile combining both stages:\n# Stage 1: Build the static site FROM alpine AS build RUN apk add --no-cache hugo WORKDIR /src ADD . . RUN hugo --minify # Stage 2: Serve with lighttpd FROM alpine AS runner RUN apk add --no-cache lighttpd COPY --from=build /src/public /var/www/localhost/htdocs EXPOSE 80 CMD [\u0026#34;lighttpd\u0026#34;, \u0026#34;-D\u0026#34;, \u0026#34;-f\u0026#34;, \u0026#34;/etc/lighttpd/lighttpd.conf\u0026#34;] Save this as Dockerfile in your Hugo project root.\nBuilding and Running Locally You can use any OCI-compatible container tool like Podman or Docker. The examples below use docker, but podman works as a drop-in replacement.\nTo build the image:\ndocker build -t my-website . To run it locally:\ndocker run -p 8080:80 --rm my-website Your site is now available at http://localhost:8080\nThe -p 8080:80 flag maps port 8080 on your machine to port 80 inside the container. 
The --rm flag automatically removes the container when it stops.\nWhy This Approach Works Well This setup has several advantages:\nSmall image size: The final image contains only Alpine (~5MB) + lighttpd (~1MB) + your HTML files. No Node.js, no Ruby, no build tools bloating your production image.\nReproducible builds: The exact same Hugo version runs in CI as locally, eliminating \u0026ldquo;works on my machine\u0026rdquo; issues.\nFast startup: lighttpd starts in milliseconds, making this perfect for scale-to-zero deployments on DTZ.\nSecurity: The production container has minimal attack surface - just a static file server with no dynamic runtime.\nArchitecture Considerations If you are building your image on an Apple Silicon Mac (ARM64) or another non-standard architecture, remember that servers typically run on AMD64 (x86_64). To ensure your container runs correctly on DTZ (and most other cloud providers), you should explicitly specify the target platform during the build.\nChange your build command to:\ndocker build --platform linux/amd64 -t my-website . This tells Docker to cross-compile the image for standard Linux servers, ensuring compatibility regardless of the machine you build on.\nDeploying to DownToZero Once your image is built, you can push it to a container registry and deploy it on DTZ. If you\u0026rsquo;re using our container registry:\n# Tag for DTZ registry docker tag my-website YOUR_CONTEXT_ID.cr.dtz.dev/my-website:latest # Login and push docker login YOUR_CONTEXT_ID.cr.dtz.dev -u apikey docker push YOUR_CONTEXT_ID.cr.dtz.dev/my-website:latest Then create a container service in the DTZ dashboard pointing to your image. The service will automatically handle TLS certificates, scaling, and routing.\nFor automated deployments on every commit, check out our GitHub Action for seamless deployments.\nAdapting for Other Static Site Generators The same pattern works for other generators. 
Here are the key changes:\nFor Zola:\nFROM alpine AS build RUN apk add --no-cache zola WORKDIR /src ADD . . RUN zola build For Jekyll:\nFROM ruby:alpine AS build RUN apk add --no-cache build-base RUN gem install bundler jekyll WORKDIR /src ADD . . RUN bundle install RUN bundle exec jekyll build The runtime stage stays the same - just copy from /src/public (Zola) or /src/_site (Jekyll) to lighttpd\u0026rsquo;s document root.\nWrapping Up Containerizing static sites is straightforward once you understand the pattern: build in one stage, serve from another. The result is a tiny, fast, secure container that\u0026rsquo;s perfect for modern cloud deployments.\nThis approach aligns well with our philosophy at DTZ - minimal resource usage, fast cold starts, and infrastructure that scales to zero when not in use. A static site in a ~10MB container that starts instantly is about as efficient as web hosting gets.\n","permalink":"https://downtozero.cloud/posts/2025/hosting-static-site/","title":"Hosting a static site in a container"},{"contents":"If you\u0026rsquo;ve ever tried to get different processes on a Linux system to talk to each other efficiently, you know the landscape is\u0026hellip; let\u0026rsquo;s say, fragmented. We have D-Bus, Unix sockets with custom protocols, REST APIs over localhost, gRPC, and countless other approaches. Each comes with its own complexity, tooling requirements, and learning curve.\nRecently, I stumbled upon Varlink (and its newer Rust-centric sibling Zlink), and it immediately clicked with what we\u0026rsquo;re trying to achieve at DownToZero. We\u0026rsquo;re building infrastructure that needs reliable, low-overhead inter-process communication - from container orchestration to machine settings management. The more I dug into Varlink, the more I realized this could be exactly what we need.\nIn this post, I\u0026rsquo;ll walk you through my experiments with Varlink. 
We\u0026rsquo;ll build a simple hello world service together, step by step, and I\u0026rsquo;ll share my thoughts on why this technology excites me for our platform\u0026rsquo;s future.\nWhat is Varlink, and Why Should You Care? Varlink is an interface description format and protocol designed for defining and implementing service interfaces. Think of it as a simpler, more modern alternative to D-Bus, or a lighter alternative to gRPC for local communication.\nIf you\u0026rsquo;ve worked with D-Bus before, you probably know the pain: complex type systems, introspection that requires special tools, XML configuration files that seem designed to confuse. D-Bus is powerful, but it\u0026rsquo;s also from an era when \u0026ldquo;simple\u0026rdquo; wasn\u0026rsquo;t a design priority. Varlink takes a different approach - it\u0026rsquo;s what D-Bus might look like if it were designed today, with modern sensibilities about developer experience.\nHere\u0026rsquo;s what makes Varlink interesting:\nSelf-describing interfaces: Every Varlink service can describe its own API. You can connect to any service and ask \u0026ldquo;what can you do?\u0026rdquo; and get a machine-readable (and human-readable) answer.\nLanguage agnostic: The protocol is simple enough that implementations exist in Rust, Go, Python, C, and more. But importantly, the interfaces themselves are language-neutral.\nSocket-based: Communication happens over Unix sockets (or TCP for remote connections), which means it plays nicely with the Linux ecosystem, containers, and systemd.\nJSON-based protocol: The wire format is JSON, which makes debugging trivial. You can literally use netcat to talk to a Varlink service if you want.\nsystemd integration: This is huge. systemd already uses Varlink for some of its internal services, which means the protocol is battle-tested and has first-party support for socket activation and service management.\nWhy Varlink for DownToZero? 
At DTZ, we\u0026rsquo;re constantly dealing with machine-level configuration and orchestration. Our infrastructure spans physical hardware (you might remember our solar-powered nodes), containers, and various system services that need to coordinate.\nCurrently, we have a mix of approaches for inter-process communication:\nREST APIs for external-facing services Direct function calls within our Rust services Some ad-hoc socket-based protocols for specific use cases What we\u0026rsquo;re missing is a unified way for our system-level services to communicate. Consider these scenarios:\nA container service needs to query the host\u0026rsquo;s resource limits Our observability agents need to communicate with system monitoring daemons Configuration changes need to propagate across multiple local services System health checks need to aggregate data from various subsystems For all of these, Varlink offers a compelling solution. It\u0026rsquo;s local-first (great for latency), self-documenting (great for debugging), and has native systemd support (great for reliability).\nThe fact that systemd itself uses Varlink for services like systemd-resolved and systemd-hostnamed means we can potentially integrate directly with system services using the same protocol we use for our own services. That\u0026rsquo;s powerful.\nLet\u0026rsquo;s Build Something: A Hello World Service Enough theory - let\u0026rsquo;s get our hands dirty. I\u0026rsquo;ve created a simple hello world implementation to test the waters, and I\u0026rsquo;ll walk you through building it from scratch.\nThe complete source code is available at https://github.com/DownToZero-Cloud/varlink-helloworld.\nStep 1: Setting Up the Project First, create a new Rust project:\ncargo new varlink-helloworld cd varlink-helloworld Now we need to add our dependencies. 
Open Cargo.toml and add:\n[package] name = \u0026#34;varlink-helloworld\u0026#34; version = \u0026#34;0.1.0\u0026#34; edition = \u0026#34;2024\u0026#34; [dependencies] futures-util = \u0026#34;0.3\u0026#34; serde = { version = \u0026#34;1\u0026#34;, features = [\u0026#34;derive\u0026#34;] } serde_json = \u0026#34;1\u0026#34; tokio = { version = \u0026#34;1\u0026#34;, features = [\u0026#34;full\u0026#34;] } zlink = { version = \u0026#34;0.2\u0026#34; } We\u0026rsquo;re using zlink, the modern Rust implementation of Varlink. It\u0026rsquo;s async-first and built on tokio, which fits perfectly with how we build services at DTZ. We also need serde for JSON serialization (Varlink\u0026rsquo;s wire format) and futures-util for stream handling.\nStep 2: Implementing the Service Now for the fun part - implementing our service from scratch. The beauty of zlink is that we don\u0026rsquo;t need code generation or separate interface definition files. We define everything directly in Rust, which means full IDE support, type checking, and no build-time magic.\nCreate src/main.rs:\nuse serde::{Deserialize, Serialize}; use zlink::{ self, Call, Connection, ReplyError, Server, Service, connection::Socket, service::MethodReply, unix, varlink_service::Info, }; const SOCKET_PATH: \u0026amp;str = \u0026#34;/tmp/hello.varlink\u0026#34;; #[tokio::main] async fn main() { println!(\u0026#34;starting varlink hello world server\u0026#34;); run_server().await; } pub async fn run_server() { // Clean up any existing socket file let _ = tokio::fs::remove_file(SOCKET_PATH).await; // Bind to the Unix socket let listener = unix::bind(SOCKET_PATH).unwrap(); // Create our service and server let service = HelloWorld {}; let server = Server::new(listener, service); match server.run().await { Ok(_) =\u0026gt; println!(\u0026#34;server done.\u0026#34;), Err(e) =\u0026gt; println!(\u0026#34;server error: {:?}\u0026#34;, e), } } This is our entry point - simple and clean. 
We bind to a Unix socket at /tmp/hello.varlink, create our service, and let the server handle incoming connections.\nStep 3: Defining the Message Types Here\u0026rsquo;s where Varlink\u0026rsquo;s elegance shows through. We define our protocol entirely using Rust types with serde annotations. Let\u0026rsquo;s look at each piece:\nMethod Calls (Incoming Requests)\n#[derive(Debug, Deserialize)] #[serde(tag = \u0026#34;method\u0026#34;)] enum HelloWorldMethod { #[serde(rename = \u0026#34;rocks.dtz.HelloWorld.Hello\u0026#34;)] Hello, #[serde(rename = \u0026#34;rocks.dtz.HelloWorld.NamedHello\u0026#34;)] NamedHello { #[serde(default)] parameters: NamedHelloParameters, }, #[serde(rename = \u0026#34;org.varlink.service.GetInfo\u0026#34;)] VarlinkGetInfo, } #[derive(Debug, Serialize, Deserialize, Default)] pub struct NamedHelloParameters { name: String, } The HelloWorldMethod enum represents all the methods our service can handle. The #[serde(tag = \u0026quot;method\u0026quot;)] attribute tells serde to use the JSON method field to determine which variant to deserialize into. The #[serde(rename = \u0026quot;...\u0026quot;)] attributes map our Rust enum variants to the actual Varlink method names.\nNotice how NamedHello has a nested parameters field - this matches the Varlink protocol where method parameters are wrapped in a parameters object in the JSON.\nReplies (Outgoing Responses)\n#[derive(Debug, Serialize)] #[serde(untagged)] enum HelloWorldReply { Hello(HelloResponse), VarlinkInfo(Info\u0026lt;\u0026#39;static\u0026gt;), } #[derive(Debug, Serialize)] pub struct HelloResponse { message: String, } The reply enum uses #[serde(untagged)] because Varlink responses don\u0026rsquo;t include a type discriminator - the response type is implicit based on the method called. 
HelloResponse is our simple response struct containing just a message field.\nError Handling\n#[derive(Debug, ReplyError)] #[zlink(interface = \u0026#34;rocks.dtz.HelloWorld\u0026#34;)] enum HelloWorldError { Error { message: String }, } The #[derive(ReplyError)] macro from zlink generates the necessary code to serialize our errors according to the Varlink error format. The #[zlink(interface = \u0026quot;...\u0026quot;)] attribute specifies which interface these errors belong to.\nStep 4: Implementing the Service Trait Now we tie everything together by implementing the Service trait:\nstruct HelloWorld {} impl Service for HelloWorld { type MethodCall\u0026lt;\u0026#39;de\u0026gt; = HelloWorldMethod; type ReplyParams\u0026lt;\u0026#39;ser\u0026gt; = HelloWorldReply; type ReplyStreamParams = (); type ReplyStream = futures_util::stream::Empty\u0026lt;zlink::Reply\u0026lt;()\u0026gt;\u0026gt;; type ReplyError\u0026lt;\u0026#39;ser\u0026gt; = HelloWorldError; async fn handle\u0026lt;\u0026#39;ser, \u0026#39;de: \u0026#39;ser, Sock: Socket\u0026gt;( \u0026amp;\u0026#39;ser mut self, call: Call\u0026lt;Self::MethodCall\u0026lt;\u0026#39;de\u0026gt;\u0026gt;, _conn: \u0026amp;mut Connection\u0026lt;Sock\u0026gt;, ) -\u0026gt; MethodReply\u0026lt;Self::ReplyParams\u0026lt;\u0026#39;ser\u0026gt;, Self::ReplyStream, Self::ReplyError\u0026lt;\u0026#39;ser\u0026gt;\u0026gt; { println!(\u0026#34;handling call: {:?}\u0026#34;, call.method()); match call.method() { HelloWorldMethod::Hello =\u0026gt; { MethodReply::Single(Some(HelloWorldReply::Hello(HelloResponse { message: \u0026#34;Hello, World!\u0026#34;.to_string(), }))) } HelloWorldMethod::NamedHello { parameters } =\u0026gt; { MethodReply::Single(Some(HelloWorldReply::Hello(HelloResponse { message: format!(\u0026#34;Hello, {}!\u0026#34;, parameters.name), }))) } HelloWorldMethod::VarlinkGetInfo =\u0026gt; { MethodReply::Single(Some(HelloWorldReply::VarlinkInfo(Info::\u0026lt;\u0026#39;static\u0026gt; { vendor: 
\u0026#34;DownToZero\u0026#34;, product: \u0026#34;hello-world\u0026#34;, url: \u0026#34;https://github.com/DownToZero-Cloud/varlink-helloworld\u0026#34;, interfaces: vec![\u0026#34;rocks.dtz.HelloWorld\u0026#34;, \u0026#34;org.varlink.service\u0026#34;], version: \u0026#34;1.0.0\u0026#34;, }))) } } } } The Service trait is the heart of zlink. Let\u0026rsquo;s break down what\u0026rsquo;s happening:\nAssociated Types: We declare what types our service uses for method calls, replies, streaming responses, and errors. This gives us complete type safety throughout.\nThe handle method: This is where all incoming calls are routed. We pattern match on the deserialized method call and return the appropriate response.\nMethodReply::Single: For non-streaming responses, we wrap our reply in MethodReply::Single. Varlink also supports streaming responses (useful for monitoring or subscriptions), but we keep it simple here.\nVarlinkGetInfo: Every Varlink service should implement the org.varlink.service.GetInfo method. This returns metadata about our service - vendor, product name, version, URL, and the list of interfaces we implement.\nStep 5: Running and Testing Start the server:\ncargo run You should see:\nstarting varlink hello world server Now, in another terminal, we can test it using varlinkctl, which is part of systemd. First, let\u0026rsquo;s see what the service exposes:\nvarlinkctl info /tmp/hello.varlink Output:\nVendor: DownToZero Product: hello-world Version: 1.0.0 URL: https://github.com/DownToZero-Cloud/varlink-helloworld Interfaces: org.varlink.service rocks.dtz.HelloWorld This is the self-describing nature of Varlink in action. 
The client can discover exactly what this service offers.\nNow let\u0026rsquo;s call our methods:\nvarlinkctl call /tmp/hello.varlink rocks.dtz.HelloWorld.Hello {} Output:\n{ \u0026#34;message\u0026#34; : \u0026#34;Hello, World!\u0026#34; } And with a parameter:\nvarlinkctl call /tmp/hello.varlink rocks.dtz.HelloWorld.NamedHello \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;jens\u0026#34;}\u0026#39; Output:\n{ \u0026#34;message\u0026#34; : \u0026#34;Hello, jens!\u0026#34; } It works! We have a fully functional Varlink service.\nDebugging and Exploration One thing I love about Varlink is how easy it is to explore and debug. Since the protocol is JSON-based, you can even use basic tools like socat or netcat for manual testing:\necho \u0026#39;{\u0026#34;method\u0026#34;:\u0026#34;rocks.dtz.HelloWorld.Hello\u0026#34;,\u0026#34;parameters\u0026#34;:{}}\u0026#39; | \\ socat - UNIX-CONNECT:/tmp/hello.varlink You\u0026rsquo;ll get back a JSON response you can pipe through jq or just read directly. No special debugging tools needed, no binary protocols to decode. When you\u0026rsquo;re debugging at 2 AM and something isn\u0026rsquo;t working, this simplicity is invaluable.\nYou can also introspect the interface definition itself:\nvarlinkctl introspect /tmp/hello.varlink rocks.dtz.HelloWorld This returns the exact interface definition we wrote earlier. Combined with the info command, you have complete visibility into what any Varlink service can do - even services you\u0026rsquo;ve never seen before.\nIntegrating with systemd One of the most powerful features of Varlink is its integration with systemd. 
You can create socket-activated services that only start when someone connects, and systemd manages the lifecycle.\nCreate a systemd socket unit (hello-varlink.socket):\n[Unit] Description=Hello World Varlink Socket [Socket] ListenStream=/run/hello.varlink [Install] WantedBy=sockets.target And a corresponding service unit (hello-varlink.service):\n[Unit] Description=Hello World Varlink Service [Service] ExecStart=/usr/local/bin/varlink-helloworld With socket activation, systemd listens on the socket, and when a connection comes in, it starts your service and hands over the socket. This means zero resource usage until someone actually needs the service - perfect for our scale-to-zero philosophy at DTZ.\nBut there\u0026rsquo;s more to the systemd story. Several systemd components already expose Varlink interfaces:\nsystemd-hostnamed: Query and set hostname, machine info, OS details systemd-resolved: DNS resolution, cache control, DNS server management systemd-machined: Container and VM machine management systemd-userdbd: User and group record lookups This means we can use the exact same Varlink patterns we\u0026rsquo;re developing for our services to interact with the host system. Want to query the DNS cache? varlinkctl call /run/systemd/resolve/io.systemd.Resolve io.systemd.Resolve.ResolveHostname '{\u0026quot;name\u0026quot;:\u0026quot;example.com\u0026quot;}'. Same protocol, same tooling, same mental model.\nFor DTZ, this is particularly exciting because it means our orchestration layer can use a unified approach for both application-level IPC and system-level management. No more context switching between different APIs and protocols.\nWhat\u0026rsquo;s Next? This hello world experiment has me genuinely excited about Varlink\u0026rsquo;s potential for DTZ. Here are some directions I\u0026rsquo;m considering:\nMachine configuration service: A Varlink service that exposes machine settings (network config, resource limits, etc.) 
with proper access control.\nContainer orchestration IPC: Using Varlink for communication between our container runtime and management services.\nObservability aggregation: A local Varlink service that aggregates metrics from various system components.\nSystemd integration: Directly querying systemd\u0026rsquo;s Varlink interfaces for service status and management.\nHealth check aggregation: A central Varlink service that collects health status from all our running services and exposes a unified health endpoint.\nThe fact that we can use the same protocol to talk to our own services AND system services like systemd-resolved is a huge win for consistency and reduced complexity.\nI\u0026rsquo;m also curious about performance characteristics. While JSON isn\u0026rsquo;t the most compact wire format, for local IPC the parsing overhead is typically negligible compared to the benefits of human-readability. That said, I plan to do some benchmarking in a follow-up experiment to get real numbers on latency and throughput for our use cases.\nWrapping Up Varlink hits a sweet spot between simplicity and capability. It\u0026rsquo;s not trying to solve every distributed systems problem - it\u0026rsquo;s focused on doing local IPC really well, with just enough features for discoverability and type safety.\nFor DownToZero, where we\u0026rsquo;re constantly optimizing for efficiency and simplicity, this approach resonates strongly. We don\u0026rsquo;t need the complexity of gRPC for local communication. We don\u0026rsquo;t want the overhead of HTTP for machine-internal calls. Varlink gives us a clean, well-designed protocol that plays nicely with the Linux ecosystem we\u0026rsquo;re building on.\nIf you\u0026rsquo;re interested in experimenting yourself, grab the code from GitHub, fire up your editor, and give it a try. 
The learning curve is gentle, and there\u0026rsquo;s something satisfying about seeing that first varlinkctl call return your message.\nHappy experimenting!\nResources Varlink Official Site GitHub: varlink-helloworld Zlink - Modern Rust Varlink Implementation Zlink Crate on crates.io ","permalink":"https://downtozero.cloud/posts/2025/varlink-experiments/","title":"Experimenting with Varlink: Building a Hello World IPC Service"},{"contents":"Introduction In cloud development, we often treat storage as infinite. Every log, backup, or dataset we create is preserved indefinitely - not because it’s always useful, but because deletion feels risky. Yet this mindset quietly drives up both cost and carbon impact.\nThe truth is: most stored objects should not live forever. From transient build artifacts to temporary exports and short-lived analytics datasets, much of the data we produce has a natural lifespan. The challenge lies in aligning that lifecycle with our systems.\nAt DownToZero.Cloud, we designed DTZ ObjectStore around this principle. It\u0026rsquo;s not just about storing data efficiently - it\u0026rsquo;s about storing it responsibly.\nThe Problem with Perpetual Storage Every object in the cloud consumes more than disk space. It uses:\nEnergy to stay on spinning disks or SSDs Cooling to maintain data center environments Metadata overhead to keep it indexed, replicated, and backed up This ongoing activity continues even for data that’s never accessed again. 
In essence, our storage has a carbon shadow - and unmanaged data lifecycles make it grow larger over time.\nDesigning with Time in Mind One of the core Green Software Patterns is Data Lifecycle Management - ensuring that digital resources align with real-world relevance.\nIn practice, that means:\nDefining time-to-live (TTL) policies for each object class Automating the removal or archival of obsolete data Treating deletion as an act of optimization, not loss When you apply this pattern, the benefits extend beyond sustainability:\nReduced operational cost through automatic cleanup Simpler data governance with predictable retention Improved system performance as metadata and index sizes shrink How DTZ ObjectStore Makes Lifecycles First-Class The DTZ ObjectStore service is built with lifecycle management at its core. Unlike traditional storage systems where TTLs are an afterthought, DTZ embeds this capability into the object creation workflow.\n1) Per-Object TTL via HTTP Headers Attach a TTL when you upload. Use either relative duration (X-DTZ-EXPIRE-IN) or an absolute timestamp (X-DTZ-EXPIRE-AT). 
Supply your API key as X-API-KEY.\nBase URL: https://objectstore.dtz.rocks/api/2022-11-28\nUpload with a 24-hour TTL (duration):\ncurl -X PUT \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/logs/builds/2025-11-05.json\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \\ -H \u0026#34;X-DTZ-EXPIRE-IN: P1D\u0026#34; \\ --data-binary @build-output.json Upload that expires at a fixed time (RFC-3339):\ncurl -X PUT \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/exports/report.csv\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \\ -H \u0026#34;X-DTZ-EXPIRE-AT: 2025-12-01T00:00:00Z\u0026#34; \\ --data-binary @report.csv 2) Reading, Inspecting, and Listing Objects Download an object:\ncurl -s \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/exports/report.csv\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; -o report.csv Check metadata (including server-computed expiration):\ncurl -I \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/exports/report.csv\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; # Look for headers like: # X-DTZ-EXPIRATION: 2025-12-01T00:00:00Z List objects by prefix:\ncurl -s \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/?prefix=logs/builds/\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \\ | jq . # Response includes per-object fields like key, size, lastModified, and expiration. Delete early (if needed):\ncurl -X DELETE \\ \u0026#34;https://objectstore.dtz.rocks/api/2022-11-28/obj/exports/report.csv\u0026#34; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; 3) Energy-Aware Cleanup The deletion of every object is split into two phases. When an object reaches its expiration time, it is no longer accessible through the API, but it remains present in the storage system (and is not billed at that point). 
The actual deletion is scheduled in batches during low-carbon time windows to minimize the footprint of background jobs - turning lifecycle enforcement into a sustainability lever, not just an ops task.\nFrom Policy to Practice Lifecycle isn\u0026rsquo;t about deleting recklessly - it\u0026rsquo;s about designing with purpose. Patterns that work well:\nBuild artifacts → expire after CI/CD completion Analytics exports → expire after delivery or ingestion User uploads → expire after inactivity Caches \u0026amp; transient datasets → short default TTLs Define the rules once and let the system continuously optimize itself - consuming less, costing less, and doing more with the same energy envelope.\nThe Shared Goal: Sustainability by Design Object lifecycle management directly reflects the Green Software ethos: delete unused storage resources and set retention policies by default - a small, intentional act of stewardship that scales.\nAt DownToZero.Cloud, this is where good engineering meets good citizenship. Our mission is to empower developers to design for zero waste - where data lives only as long as it needs to, and no longer.\nGitHub Actions Integration Object lifecycle management integrates seamlessly into CI/CD pipelines. 
DTZ provides dedicated GitHub Actions for uploading and downloading objects with built-in expiration support:\nobjectstore-upload - Upload build artifacts, logs, or any file with optional TTL objectstore-download - Retrieve objects from ObjectStore in your workflows Example: Upload a build artifact with 30-day expiration\n- name: Upload build artifact uses: DownToZero-Cloud/objectstore-upload@main with: api_key: ${{ secrets.DTZ_API_KEY }} name: build.zip expiration: P30D This makes it effortless to apply lifecycle policies directly in your deployment workflows - ensuring temporary artifacts don\u0026rsquo;t accumulate indefinitely.\nConclusion Storage should be dynamic, not static.\nBy embedding lifecycle awareness into your architecture, you reduce cloud waste and embrace a pattern of sustainable design that benefits the planet, your platform, and your users alike.\nDTZ ObjectStore turns this principle into practice - giving every object a story, a purpose, and, most importantly, an end.\n","permalink":"https://downtozero.cloud/posts/2025/object-lifecycle/","title":"Why Every Object Deserves a Lifecycle: Building Greener Storage with DTZ ObjectStore"},{"contents":"Continuing our journey with our MCP documentation server from our previous post, we can dig a little deeper into the TLS story, and how we are generating the Let\u0026rsquo;s Encrypt certificate on startup.\nIf you want to see the full source code, the github repo is linked at the bottom of the post.\nSo let\u0026rsquo;s look a little closer at the details. When we tried to get our MCP server integrated into ChatGPT and Gemini, the requirement for TLS came up. 
Since we also use Let\u0026rsquo;s Encrypt all over DownToZero, we mostly reused the process in this container.\nHere is a general overview sequenceDiagram participant App as get_certificate() participant LE as Let\u0026#39;s Encrypt (ACME) participant Axum as Axum HTTP Server participant FS as Filesystem App-\u0026gt;\u0026gt;LE: Create Account (NewAccount) App-\u0026gt;\u0026gt;LE: Create New Order (Identifier::Dns(domain)) LE--\u0026gt;\u0026gt;App: Order (status = Pending) App-\u0026gt;\u0026gt;App: Create oneshot channel (snd, rcv) App-\u0026gt;\u0026gt;LE: Fetch authorizations LE--\u0026gt;\u0026gt;App: Authorization incl. HTTP-01 challenge App-\u0026gt;\u0026gt;App: Compute token + key_authorization (secret) App-\u0026gt;\u0026gt;Axum: Start server on acme_port serving\u0026lt;br/\u0026gt;/.well-known/acme-challenge/{token} -\u0026gt; secret App-\u0026gt;\u0026gt;App: Sleep 2s App-\u0026gt;\u0026gt;LE: challenge.set_ready() LE-\u0026gt;\u0026gt;Axum: GET /.well-known/acme-challenge/{token} Axum--\u0026gt;\u0026gt;LE: 200 secret (key_authorization) App-\u0026gt;\u0026gt;LE: poll_ready (with backoff) LE--\u0026gt;\u0026gt;App: Order Ready App-\u0026gt;\u0026gt;LE: finalize() LE--\u0026gt;\u0026gt;App: private_key_pem App-\u0026gt;\u0026gt;LE: poll_certificate() LE--\u0026gt;\u0026gt;App: cert_chain_pem App-\u0026gt;\u0026gt;FS: write certs/{domain}.cert.pem App-\u0026gt;\u0026gt;FS: write certs/{domain}.key.pem App-\u0026gt;\u0026gt;Axum: snd.send() (graceful shutdown) App--\u0026gt;\u0026gt;App: Ok(()) As you can see, the ACME protocol is rather straightforward. Now comes the tricky part of integrating this into our MCP server.\nOn every start, we check for the existence of certs/{domain}.cert.pem and certs/{domain}.key.pem files, which means we already have valid certificates. If the files do not exist, we invoke the acme-client to generate the certificate and write it to those files. After that, we start our MCP server with a TLS config present. 
// loading cert and key from file let cert_file = format!(\u0026#34;certs/{}.cert.pem\u0026#34;, domain); let key_file = format!(\u0026#34;certs/{}.key.pem\u0026#34;, domain); let tls_config = RustlsConfig::from_pem_file(cert_file, key_file) .await .unwrap(); log::info!(\u0026#34;listening on https://[::]:{}\u0026#34;, config.port); let sse_config = SseServerConfig { bind: format!(\u0026#34;[::]:{}\u0026#34;, config.port).parse().unwrap(), sse_path: \u0026#34;/sse\u0026#34;.to_string(), post_path: \u0026#34;/message\u0026#34;.to_string(), ct: tokio_util::sync::CancellationToken::new(), sse_keep_alive: None, }; let (sse_server, router) = SseServer::new(sse_config); let addr = sse_server.config.bind; let ct = sse_server.with_service(DowntozeroTool::new); let server = axum_server::bind_rustls(addr, tls_config).serve(router.into_make_service()); tokio::spawn(async move { if let Err(e) = server.await { log::error!(\u0026#34;sse server shutdown with error, {e}\u0026#34;); } }); We also implemented a fallback for local runs: if there is no domain configuration present, we start the MCP server with plain HTTP. This makes local testing possible without the need for a certificate or DNS during development.\nGitHub Repo https://github.com/DownToZero-Cloud/dtz-docs-mcp ","permalink":"https://downtozero.cloud/posts/2025/tls-for-our-mcp-server/","title":"Let's Encrypt support for our MCP Server"},{"contents":"As a fun project, we wanted to expose our official documentation — which is also hosted here — as an MCP server. 
The idea was to test whether this would make developing our own platform easier, since our AI-based IDEs would have less difficulty checking architectural knowledge against an easy-to-consume public endpoint.\nNow, starting very naively, we came up with the following plan to implement this.\nAs a short teaser, here is what we want to build.\nflowchart LR X[Internet] subgraph downtozero.cloud A[downtozero.cloud] end subgraph W[MCP server] N[Axum frontend] T[tantivy search] M[MCP Server] N -- /index.json --\u0026gt; A N -- query --\u0026gt; T N -- MCP request --\u0026gt; M end X -- search \u0026#39;registry\u0026#39; --\u0026gt; N X[Internet] -- GET /index.html --\u0026gt; A Steps Export the website in an easier-to-consume format Consume and search for the relevant content Build an MCP server with a search capability Step 1 - Export the website The current website is built with Hugo. So all content on this site is created as Markdown and then rendered into HTML. For browsers, HTML is a good format. For search engines as well as LLMs, however, the extra markup wastes resources while adding little to the quality of the result. LLMs in particular are very good at reading and understanding Markdown. So it became clear that we wanted to feed Markdown into the LLM. At the same time, we needed to consume this data for our search server.\nSince Hugo supports multiple output formats and even arbitrary formats, we started building a JSON output.
The idea was to render all pages we have into a single large JSON and see what comes out of that.\n[ { \u0026#34;contents\u0026#34;: \u0026#34;We always aim ..\u0026#34;, \u0026#34;permalink\u0026#34;: \u0026#34;https://downtozero.cloud/posts/2025/scale-to-zero-postgres/\u0026#34;, \u0026#34;title\u0026#34;: \u0026#34;Scale-To-Zero postgresql databases\u0026#34; }, { \u0026#34;contents\u0026#34;: \u0026#34;Eliminating Wasted Cycles in Deployment At DownToZero, ...\u0026#34;, \u0026#34;permalink\u0026#34;: \u0026#34;https://downtozero.cloud/posts/2025/github-deployment/\u0026#34;, \u0026#34;title\u0026#34;: \u0026#34;Seamless Deployments with the DTZ GitHub Action\u0026#34; }, Now, to create something like this, Hugo needs to have a template in place. So we put the following file into the default templates directory. layouts/_default/index.json\n{{- $.Scratch.Add \u0026#34;index\u0026#34; slice -}} {{- range .Site.RegularPages }} {{- /* start with an empty map */ -}} {{- $page := dict -}} {{- /* always present */ -}} {{- $page = merge $page (dict \u0026#34;title\u0026#34; .Title \u0026#34;permalink\u0026#34; .Permalink) -}} {{- /* add optional keys only when they have content */ -}} {{- with .Params.tags }} {{- if gt (len .) 0 }} {{- $page = merge $page (dict \u0026#34;tags\u0026#34; .) -}} {{- end }} {{- end }} {{- with .Params.categories }} {{- if gt (len .) 0 }} {{- $page = merge $page (dict \u0026#34;categories\u0026#34; .) -}} {{- end }} {{- end }} {{- with .Plain }} {{- $page = merge $page (dict \u0026#34;contents\u0026#34; .) 
-}} {{- end }} {{- $.Scratch.Add \u0026#34;index\u0026#34; $page -}} {{- end }} {{- $.Scratch.Get \u0026#34;index\u0026#34; | jsonify -}} To get this template rendered, we needed to add the JSON output to the config.toml.\nbaseURL = \u0026#39;https://downtozero.cloud/\u0026#39; title = \u0026#39;Down To Zero\u0026#39; [outputs] home = [\u0026#34;HTML\u0026#34;, \u0026#34;RSS\u0026#34;, \u0026#34;JSON\u0026#34;] Step 2 - Consume and search Now that we have the JSON built, it becomes available on the site through /index.json.\nFetching the most up-to-date content became easy, which led to the open question of implementing search. Since we are building our whole stack on serverless containers, with a mainly Rust-based backend, our choice here also fell on a Rust container.\nSo we chose https://github.com/quickwit-oss/tantivy. The API for this engine is straightforward and we do not care too much about edge cases and weights.\nHere is a short code snippet showing retrieval and indexing.\nasync fn test() { let data = fetch_data().await.unwrap(); let index = build_search_index(data); let results = search_documentation(index, \u0026#34;container registry\u0026#34;.to_string()); } async fn fetch_data() -\u0026gt; Result\u0026lt;Vec\u0026lt;DocumentationEntry\u0026gt;, reqwest::Error\u0026gt; { let response = reqwest::get(\u0026#34;https://downtozero.cloud/index.json\u0026#34;) .await .unwrap(); let text = response.text().await.unwrap(); log::debug!(\u0026#34;text: {text}\u0026#34;); let data = serde_json::from_str(\u0026amp;text).unwrap(); Ok(data) } fn build_search_index(data: Vec\u0026lt;DocumentationEntry\u0026gt;) -\u0026gt; Index { let schema = get_schema(); let index = Index::create_in_ram(schema.clone()); let mut index_writer: IndexWriter = index.writer(50_000_000).unwrap(); for entry in data { let doc = doc!( schema.get_field(\u0026#34;title\u0026#34;).unwrap() =\u0026gt; entry.title, schema.get_field(\u0026#34;contents\u0026#34;).unwrap() =\u0026gt;
entry.contents.unwrap_or_default(), schema.get_field(\u0026#34;permalink\u0026#34;).unwrap() =\u0026gt; entry.permalink, schema.get_field(\u0026#34;categories\u0026#34;).unwrap() =\u0026gt; entry.categories.join(\u0026#34; \u0026#34;), schema.get_field(\u0026#34;tags\u0026#34;).unwrap() =\u0026gt; entry.tags.join(\u0026#34; \u0026#34;), ); index_writer.add_document(doc).unwrap(); } index_writer.commit().unwrap(); index } fn search_documentation(index: Index, query: String) -\u0026gt; Vec\u0026lt;(f32, DocumentationEntry)\u0026gt; { let reader = index .reader_builder() .reload_policy(ReloadPolicy::OnCommitWithDelay) .try_into() .unwrap(); let searcher = reader.searcher(); let schema = get_schema(); let query_parser = QueryParser::for_index( \u0026amp;index, vec![ schema.get_field(\u0026#34;title\u0026#34;).unwrap(), schema.get_field(\u0026#34;contents\u0026#34;).unwrap(), schema.get_field(\u0026#34;permalink\u0026#34;).unwrap(), schema.get_field(\u0026#34;categories\u0026#34;).unwrap(), schema.get_field(\u0026#34;tags\u0026#34;).unwrap(), ], ); let query = query_parser.parse_query(\u0026amp;query).unwrap(); let top_docs = searcher.search(\u0026amp;query, \u0026amp;TopDocs::with_limit(10)).unwrap(); let mut results = Vec::new(); for (score, doc_address) in top_docs { let retrieved_doc: TantivyDocument = searcher.doc(doc_address).unwrap(); let entry = DocumentationEntry { title: retrieved_doc .get_first(schema.get_field(\u0026#34;title\u0026#34;).unwrap()) .unwrap() .as_str() .unwrap() .to_string(), contents: Some( retrieved_doc .get_first(schema.get_field(\u0026#34;contents\u0026#34;).unwrap()) .unwrap() .as_str() .unwrap() .to_string(), ), permalink: retrieved_doc .get_first(schema.get_field(\u0026#34;permalink\u0026#34;).unwrap()) .unwrap() .as_str() .unwrap() .to_string(), categories: retrieved_doc .get_first(schema.get_field(\u0026#34;categories\u0026#34;).unwrap()) .unwrap() .as_str() .unwrap() .split(\u0026#34; \u0026#34;) .map(|s| s.to_string()) .collect(), 
tags: retrieved_doc .get_first(schema.get_field(\u0026#34;tags\u0026#34;).unwrap()) .unwrap() .as_str() .unwrap() .split(\u0026#34; \u0026#34;) .map(|s| s.to_string()) .collect(), }; results.push((score, entry)); } results } Now that we have the content covered, let\u0026rsquo;s continue with the more interesting part: the MCP server.\nStep 3 - Build the MCP server Since all our services are built in Rust, we set ourselves the goal of building this service in Rust as well. Luckily, the MCP project has a Rust reference implementation for clients and servers.\nWe basically followed the example to the letter and got an MCP server running locally rather quickly.\nHere is the full GitHub repo for everybody who wants to get into all the details.\nhttps://github.com/DownToZero-Cloud/dtz-docs-mcp\nStep 3.5 - unexpected complication So now we wanted to deploy this MCP server and quickly got the error from the LLM clients that remote MCP servers are only supported over TLS. That didn\u0026rsquo;t make our experiment any easier.\nWe quickly adopted Let\u0026rsquo;s Encrypt to generate a TLS certificate on startup and use it to host our MCP. Since we already have code for other parts of the DTZ platform, we did not need too many adjustments for this.\nWe will do a separate post with a detailed description of how to get Let's Encrypt running in an Axum server setup.\nLet\u0026rsquo;s Encrypt support for our MCP Server\nFinal thoughts So in conclusion, we did get our MCP server running. It is available on the internet, and we added it to our Cursor, Gemini CLI, and ChatGPT clients. Interestingly, every client reacts to it very differently. Cursor just ignores the information source and never asks for additional information, regardless of the task at hand. Gemini uses the MCP if required. It\u0026rsquo;s not clear how or when it is invoked, but it uses the available information source.
ChatGPT does not use the MCP and always falls back to its own web search feature, which takes precedence over the MCP server. In Research mode, ChatGPT uses the MCP, but the results don\u0026rsquo;t seem to be more valuable than just the web search.\nGitHub Repo https://github.com/DownToZero-Cloud/dtz-docs-mcp ","permalink":"https://downtozero.cloud/posts/2025/mcp-server-for-documentation/","title":"Building a MCP Server for Service Documentation"},{"contents":"We always aim to build services as resource-efficient as possible. This is true for the services we offer externally and, just as importantly, for our own internal infrastructure. Many of our internal tools, like billing and monitoring systems, rely on PostgreSQL databases. While essential, these databases often sit idle for long periods, consuming RAM and CPU cycles for no reason.\nSo, we asked ourselves: can we apply our scale-to-zero philosophy to our own databases? The answer is yes. We\u0026rsquo;ve developed a system to provision PostgreSQL instances that only run when they are actively being used. This design is incredibly resource-efficient but does come with some trade-offs, which we\u0026rsquo;ll explore.\nHere is a schematic overview of what we built and how we achieved this dynamic scaling.\nflowchart LR subgraph Machine A[systemd.socket] B[systemd-socket-proxyd] D[(local disk)] A -- port 14532 --\u0026gt; B subgraph Docker-Compose C[postgres container] end B -- port 5432 --\u0026gt; C C -- data-dir --\u0026gt; D end X[Internet] -- port 24532 --\u0026gt; A The Magic of Socket Activation The core of this setup is systemd socket activation. Instead of having a PostgreSQL container running 24/7, we let the systemd init system listen on the database port. When an application attempts to connect, systemd intercepts the request, starts the database container on-demand, and then hands the connection over.
Once the database is no longer in use, it\u0026rsquo;s automatically shut down.\nThis approach combines the power of standard, battle-tested Linux tools: systemd for service management and socket activation, and Docker Compose for defining our containerized database environment. It\u0026rsquo;s simple, robust, and requires no custom software.\nOur Technology Choices: Why Containers and Docker Compose? We made two specific technology choices for this setup: running PostgreSQL in a container and managing it with Docker Compose.\nDecoupling from the Host OS: By running PostgreSQL inside a Docker container, we decouple the database version from the host operating system\u0026rsquo;s version. This gives us the flexibility to run different versions of PostgreSQL for different internal services on the same host without conflicts or dependency issues. We can upgrade a database for one service without impacting any others.\nCompatibility with systemd: We chose Docker Compose because its lifecycle commands fit perfectly with how systemd manages services. The systemd ExecStart directive expects a command that runs in the foreground until the service is stopped. docker-compose up does exactly this. The more classic docker create followed by docker start semantics are harder to manage, as systemd would need a more complex script to handle the lifecycle. docker-compose down provides a single, clean command for the ExecStop directive, ensuring the entire environment is torn down gracefully.\nLet\u0026rsquo;s break down the configuration files that make this possible.\nThe Components We use a combination of a docker-compose.yml file to define the database and three systemd unit files to manage the scale-to-zero lifecycle.\n1. The Database Definition: Docker Compose This is a standard docker-compose.yml file. It defines a PostgreSQL 18 container, maps an internal port to the host, and mounts a volume to persist the database data on the local disk.
This ensures that even though the container stops, the data remains safe. All settings documented in the official PostgreSQL image on Docker Hub can be used here, allowing for further customization like creating specific users or databases on startup.\n/root/pg/pg1/docker-compose.yml\nversion: \u0026#34;3\u0026#34; services: database: image: \u0026#39;postgres:18\u0026#39; ports: - 127.0.0.1:14532:5432 volumes: - /root/pg/pg1/data:/var/lib/postgresql environment: POSTGRES_PASSWORD: SuperSecretAdminPassword 2. The Listener: systemd Socket This .socket unit tells systemd to listen on port 24532 on all network interfaces. When a TCP connection arrives, systemd will activate pg1-proxy.service. This is the entry point for all database connections.\n/etc/systemd/system/pg1-proxy.socket\n[Unit] Description=Socket for pg1 pg proxy (24532-\u0026gt;127.0.0.1:14532) [Socket] ListenStream=0.0.0.0:24532 ReusePort=true NoDelay=true Backlog=128 [Install] WantedBy=sockets.target 3. The Proxy and Idle Timer: systemd Service This is where the on-demand logic lives. When activated by the socket, this service first starts the actual database service (Requires=pg1-postgres.service). The ExecStartPre command is a small but critical shell loop that repeatedly checks if the internal PostgreSQL port is open. Without this check, a race condition could occur where the proxy starts and forwards the client\u0026rsquo;s connection before the PostgreSQL container has finished initializing. This would result in an immediate \u0026ldquo;Connection Refused\u0026rdquo; error for the client. This pre-start script ensures the handoff is smooth and the client only connects once the database is fully ready.\nThe main process is systemd-socket-proxyd, a built-in tool that forwards the incoming connection to the internal port where the PostgreSQL container is listening (127.0.0.1:14532). The crucial part is --exit-idle-time=3min.
This tells the proxy to automatically exit if it has been idle for three minutes.\n/etc/systemd/system/pg1-proxy.service\n[Unit] Description=Socket-activated TCP proxy to local Postgres on 14532 Requires=pg1-postgres.service After=pg1-postgres.service [Service] Type=simple Sockets=pg1-proxy.socket ExecStartPre=/bin/bash -c \u0026#39;for i in {1..10}; do nc -z 127.0.0.1 14532 \u0026amp;\u0026amp; exit 0; sleep 1; done; exit 0\u0026#39; ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=3min 127.0.0.1:14532 4. The Container Manager: systemd Service This service manages the Docker Compose lifecycle. It\u0026rsquo;s started by the proxy service. The key directive is StopWhenUnneeded=true. This links its lifecycle to the proxy service. When pg1-proxy.service stops (because its idle timer expired), systemd sees that this service is no longer needed and automatically stops it by running docker-compose down. The container is shut down, freeing up all its resources.\n/etc/systemd/system/pg1-postgres.service\n[Unit] Description=postgres container PartOf=pg1-proxy.service StopWhenUnneeded=true [Service] WorkingDirectory=/root/pg/pg1 Type=simple ExecStart=/usr/bin/docker-compose up ExecStop=/usr/bin/docker-compose down Restart=on-failure RestartSec=2s TimeoutStopSec=30s The Trade-Off: Cold Starts This setup is incredibly efficient, but it comes with one major consideration: the \u0026ldquo;cold start\u0026rdquo; latency. The very first connection to the database after a period of inactivity will be delayed. The client has to wait for systemd to run docker-compose up and for the PostgreSQL container to initialize. In our experience, this takes about one second for a small database, but increases with storage size.\nFor many internal systems—CI/CD, batch jobs, or admin dashboards with infrequent use—this delay is a perfectly acceptable trade-off for the significant resource savings.
For high-traffic, latency-sensitive production applications, a traditional, always-on database is still the right choice.\nEnabling the Service To bring a new database online, we just need to enable the systemd units.\nsystemctl daemon-reload systemctl enable pg1-proxy.service systemctl enable pg1-postgres.service systemctl enable --now pg1-proxy.socket Once enabled, the database is ready to accept connections, but it won\u0026rsquo;t be consuming any resources until the first one arrives. This is another small step in our mission to eliminate waste, proving that even essential infrastructure like a relational database can be run in a lean, on-demand fashion.\n","permalink":"https://downtozero.cloud/posts/2025/scale-to-zero-postgres/","title":"Scale-To-Zero postgresql databases"},{"contents":"Eliminating Wasted Cycles in Deployment At DownToZero, we’re committed to eliminating waste - whether it\u0026rsquo;s in compute, energy, or operational overhead. One challenge many teams face in containerized environments is handling updates to container images. Traditionally, deployments often rely on polling for updates when using tags like latest. This leads to unnecessary requests, increased latency for updates, and inefficient resource usage.\nTo address this, we are introducing a new way to integrate deployments directly with GitHub Actions. With the newly developed DTZ GitHub Action, container image updates can now be pushed directly from your GitHub pipeline into DownToZero. This means no more waiting, no more polling—just immediate, streamlined deployments.\nAt a glance From commit to deploy in one pass: build the image, push to your container registry, resolve the exact digest, and update the target service via the DTZ GitHub Action.
No polling or ambiguity—just precise, digest-driven releases.\nflowchart LR subgraph GitHub Actions A[Commit / Dispatch / Schedule]:::action --\u0026gt; B[Build Docker Image]:::action B --\u0026gt; C[Push to Container Registry]:::action C --\u0026gt; D[Resolve Image Digest]:::action D --\u0026gt; E[DTZ Action: Update Service]:::action end subgraph DownToZero R[(DTZ Container Registry)]:::registry S[Container Service]:::service end C --\u0026gt; R E --\u0026gt; S classDef action fill:#fff8e1,stroke:#f9a825,color:#5d4037 classDef registry fill:#e3f2fd,stroke:#1e88e5,color:#0d47a1 classDef service fill:#e8f5e9,stroke:#43a047,color:#1b5e20 How It Works The DTZ GitHub Action connects your GitHub workflow with your DownToZero container services. Once your pipeline builds and pushes a new container image, the action automatically updates the designated service with the freshly built image digest.\nThis ensures:\nImmediate updates: Deploy the moment your image is ready. Digest-based precision: No ambiguity around latest—deployments reference the exact image digest. Reduced overhead: No need for external polling or manual triggers. Aligned with sustainability goals: Fewer wasted cycles mean fewer wasted resources. Sample Pipeline Here’s a sample GitHub Actions workflow showing how this integration looks in practice:\non: push: workflow_dispatch: schedule: # scheduled rebuild for security updates - cron: \u0026#39;30 5 25 * *\u0026#39; jobs: build-website: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: build website run: | docker build -t ee8h25d0.cr.dtz.dev/sample-website . 
- name: Login to ee8h25d0.cr.dtz.dev uses: docker/login-action@v3 with: registry: ee8h25d0.cr.dtz.dev username: apikey password: ${{ secrets.DTZ_API_KEY }} - name: uploading image to ee8h25d0.cr.dtz.dev run: | docker push ee8h25d0.cr.dtz.dev/sample-website:latest - name: Resolve image digest id: resolve_digest run: | DIGEST=$(docker inspect --format=\u0026#39;{{index .RepoDigests 0}}\u0026#39; ee8h25d0.cr.dtz.dev/sample-website:latest) echo \u0026#34;IMAGE_URL=$DIGEST\u0026#34; \u0026gt;\u0026gt; $GITHUB_ENV - name: Deploy latest image to service uses: DownToZero-Cloud/containers-service-update@main with: container_image: ${{ env.IMAGE_URL }} container_image_version: \u0026#39;\u0026#39; api_key: ${{ secrets.DTZ_API_KEY }} service_id: service-0194e6d9 - name: Publish image URL to summary run: | echo \u0026#34;## Deployed image\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY echo \u0026#34;\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY echo \u0026#34;${IMAGE_URL}\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY This pipeline:\nBuilds a Docker image. Pushes it to your DTZ container registry. Resolves the image digest. Uses the DTZ GitHub Action to deploy the image to the target service. Publishes a summary with the deployed image reference. Conclusion By integrating the DTZ GitHub Action into your CI/CD pipelines, you gain a faster, more efficient, and resource-conscious way to manage container deployments. This approach removes the guesswork of polling, ensures precise deployments, and reflects our mission at DownToZero: removing waste at every step.\n","permalink":"https://downtozero.cloud/posts/2025/github-deployment/","title":"Seamless Deployments with the DTZ GitHub Action"},{"contents":"Whitepaper: ueo.ventures - a single page static website hosted at DownToZero This whitepaper will illustrate how DownToZero is used to host a static website. 
We will provide insights into how the website was built as well as give some real-world data on how it is used and how that reflects in DownToZero usage and therefore cost.\nWhat Is ueo.ventures? ueo.ventures is a static site providing credentials to external entities. The site is very minimal and only consists of 2 static pages. It includes:\nindex page - stating the name of the company impressum page - stating the legal requirements for a private company How is it built? As previously stated, the website only contains 2 pages. Those pages are static files in a GitHub repository. To make this available to DownToZero, we need to package those two files as a Docker container.\nHere is the full Dockerfile used to package the website.\nFROM alpine AS runner RUN apk add --update --no-cache lighttpd \u0026amp;\u0026amp; rm -rf /var/cache/apk/* ADD index.html /var/www/localhost/htdocs/ ADD impressum.html /var/www/localhost/htdocs/ CMD [\u0026#34;/usr/sbin/lighttpd\u0026#34;, \u0026#34;-D\u0026#34;, \u0026#34;-f\u0026#34;, \u0026#34;/etc/lighttpd/lighttpd.conf\u0026#34;] And the pipeline we use to build and deploy the most up-to-date version.\nname: website on: push: workflow_dispatch: schedule: - cron: \u0026#39;30 5 25 * *\u0026#39; jobs: build-website: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: build website run: | docker build -t xb35e5d0.cr.dtz.dev/ueo-ventures .
- name: Login to xb35e5d0.cr.dtz.dev uses: docker/login-action@v3 with: registry: xb35e5d0.cr.dtz.dev username: apikey password: ${{ secrets.DTZ_API_KEY }} - name: uploading image to xb35e5d0.cr.dtz.dev run: | docker push xb35e5d0.cr.dtz.dev/ueo-ventures:latest - name: Resolve image digest id: resolve_digest run: | DIGEST=$(docker inspect --format=\u0026#39;{{index .RepoDigests 0}}\u0026#39; xb35e5d0.cr.dtz.dev/ueo-ventures:latest) echo \u0026#34;IMAGE_URL=$DIGEST\u0026#34; \u0026gt;\u0026gt; $GITHUB_ENV - name: Deploy latest image to service uses: DownToZero-Cloud/containers-service-update@main with: container_image: ${{ env.IMAGE_URL }} api_key: ${{ secrets.DTZ_API_KEY }} service_id: service-e1d1efd8 - name: Publish image URL to summary run: | echo \u0026#34;## Deployed image\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY echo \u0026#34;\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY echo \u0026#34;${IMAGE_URL}\u0026#34; \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY How much is it used? requests The number of requests going to this page is rather low. So we will see a lot of unoptimized cold-starts. The overall volume averages about 200 requests per day.\nHere is a short sample set\nday requests 2025-09-12 147 2025-09-13 116 2025-09-14 118 2025-09-15 121 2025-09-16 200 2025-09-17 169 2025-09-18 157 2025-09-19 268 2025-09-20 524 2025-09-21 496 2025-09-22 228 2025-09-23 246 2025-09-24 123 2025-09-25 79 2025-09-26 60 splitting the requests by start type Cold starts are starts where nothing is present on the executing host. Warm starts are when the image is already loaded on the executing host, but the container needs to be started.
Hot starts are when the container is already up and running.\nday cold starts warm starts hot starts 2025-09-12 63 48 14 2025-09-13 70 3 21 2025-09-14 61 3 22 2025-09-15 68 0 31 2025-09-16 84 3 27 2025-09-17 53 3 25 2025-09-18 72 2 27 2025-09-19 48 2 19 2025-09-20 501 2 10 2025-09-21 474 3 1 2025-09-22 199 3 2 2025-09-23 201 3 27 2025-09-24 71 5 31 2025-09-25 51 3 20 response times While the difference between hot and cold starts seems significant in relative terms, the absolute cold-start latency stays in the low single-digit milliseconds (p99 ≈ 3.3 ms).\nLatency Percentiles in ms state p50 p90 p95 p99 cold 0.536 1.003 2.0888 3.2584 hot 0 0 0.001 0.001 Power Consumption All this translates to a certain usage type and pattern. Our metric to determine usage is the watt-hour. We measure the power consumption for all that activity and provide detailed statistics over time.\nday power consumption (in Wh) 2025-09-12 1.88 2025-09-13 2.00 2025-09-14 0.96 2025-09-15 1.76 2025-09-16 3.14 2025-09-17 0.43 2025-09-18 1.90 2025-09-19 1.06 2025-09-20 4.74 2025-09-21 4.51 2025-09-22 1.62 2025-09-23 6.51 2025-09-24 3.85 2025-09-25 3.29 2025-09-26 2.81\nPricing \u0026amp; Cost Efficiency DownToZero bills compute purely by energy consumed. Current rates are:\nCompute: 0.010 EUR per Watt-Hour (Wh) in normal mode; 0.005 EUR/Wh in ecoMode. Storage: hot storage - 0.0013 EUR / GB / day; cold storage - 0.0007 EUR / GB / day Based on the measured usage of this ueo.ventures service:\nRequests: ~203 requests/day on average over the sample period (peak 524; low 60). Energy: ~2.40 Wh/day on average (min 0.43 Wh; max 6.51 Wh). Estimated compute cost at current rates:\nNormal mode: ~0.024 EUR/day (~0.72 EUR per 30-day month); daily range ≈ 0.004–0.065 EUR. ecoMode: ~0.012 EUR/day (~0.36 EUR per 30-day month); daily range ≈ 0.002–0.033 EUR.
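Those estimates are plain arithmetic: measured energy per day multiplied by the billing rate. A quick sketch using the averages and rates quoted above (the energy figure is the approximate sample-period average):

```rust
// Daily compute cost: energy used (Wh) times the per-Wh rate.
fn daily_cost_eur(energy_wh: f64, rate_eur_per_wh: f64) -> f64 {
    energy_wh * rate_eur_per_wh
}

fn main() {
    let avg_wh_per_day = 2.40; // average daily consumption from the table above
    let normal = daily_cost_eur(avg_wh_per_day, 0.010); // normal mode: 0.010 EUR/Wh
    let eco = daily_cost_eur(avg_wh_per_day, 0.005); // ecoMode: 0.005 EUR/Wh
    println!("normal: {:.3} EUR/day (~{:.2} EUR per 30 days)", normal, normal * 30.0);
    println!("eco:    {:.3} EUR/day (~{:.2} EUR per 30 days)", eco, eco * 30.0);
}
```

Both results line up with the per-day and per-month figures quoted above.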
Because you pay per watt-hour, there is a direct incentive to keep images lean, avoid unnecessary work on cold starts, and opt into ecoMode where practical. Over time this can reduce both spend and energy use.\nTrade-offs: ecoMode may involve different performance characteristics (e.g., startup latency, scheduling); costs can vary with usage spikes. Monitor consumption and adjust as needed.\n","permalink":"https://downtozero.cloud/whitepaper/ueo-ventures/","title":"ueo.ventures - a single page static website hosted at DownToZero"},{"contents":" A while back I kept running into the same problem: I had a bunch of notes, specs, and markdown docs on my laptop, but when I opened ChatGPT and asked questions about them, the model obviously didn’t know they existed. Copy-pasting files was tedious; manually uploading one by one wasn’t much better. So I built a tiny CLI that does the boring bit for me: openai-folder-sync, a command-line tool that syncs a local directory to an OpenAI Vector Store, so those files become searchable context for ChatGPT.\nWhy a vector store? OpenAI’s file search is built around a vector store. You add files to a store, they’re processed into embeddings, and assistants (or ChatGPT via connected tools) can retrieve and ground answers using those files. It’s a clean model, but it assumes you’ll remember to upload and maintain your file set. A sync step—which mirrors a folder you already care about—is closer to how we work day to day.\nWhat the tool does openai-folder-sync scans a directory on your machine, filters files by extension, and pushes them to an existing vector store. You can run it once to populate a store, or add it to your routine (e.g., a cron job) to keep your knowledge base fresh.
There’s also a switch to embed git metadata (branch and commit) into the uploaded content, so when ChatGPT cites something, you know exactly which version it came from.\nMirrors a local folder into a Vector Store Supports extension filtering (e.g., md,txt,pdf) Optional git provenance (--git-info) Can be scripted or run ad-hoc Repo: https://github.com/JensWalter/openai-folder-sync\nInstallation It’s written in Rust and installs via Cargo:\ncargo install --git https://github.com/JensWalter/openai-folder-sync.git Cargo will build and place the openai-folder-sync binary on your PATH.\nUsage\nYou’ll need an OpenAI API key and an existing vector store ID (looks like vs_…). Then point the CLI at your folder:\nopenai-folder-sync \\ --vector-store \u0026#39;vs_ABCDEFGHIJK\u0026#39; \\ --local-dir \u0026#39;/Users/jens/tmp/wiki/content\u0026#39; \\ --extensions md Prefer running through Cargo during development?\ncargo run -- \\ --vector-store \u0026#39;vs_ABCDEFGHIJK\u0026#39; \\ --local-dir \u0026#39;/Users/jens/tmp/wiki/content\u0026#39; \\ --extensions md Most flags can also be set via environment variables:\nexport OPENAI_API_KEY=sk-... export VECTOR_STORE=vs_ABCDEFGHIJK export LOCAL_DIR=/Users/jens/tmp/wiki/content export EXTENSIONS=md,txt,pdf export GIT_INFO=true openai-folder-sync Common flags --vector-store / VECTOR_STORE – target store ID --local-dir / LOCAL_DIR – folder to sync --extensions / EXTENSIONS – comma-separated list --git-info / GIT_INFO – include git metadata (true|false) --help – show all options How this fits with ChatGPT Once the files are in a vector store, ChatGPT (or your Assistants API apps) can retrieve relevant snippets from those files while answering your questions. That means you can ask things like \u0026ldquo;What did the proposal say about the migration timeline?\u0026rdquo; and the model will pull the relevant context from your synced docs—no copy-paste required. 
Your folder effectively becomes a living knowledge base.\nTips \u0026amp; gotchas Choose extensions deliberately. Start with the formats you actually use—md, txt, maybe pdf. This keeps processing fast and retrieval focused. Create the vector store first. The tool expects a store ID; set one up via the API or dashboard, then hand that ID to the CLI. Use git info for provenance. If you sync from a repo, enabling --git-info tags uploaded text with commit metadata, which makes answers easier to trust and trace. Exclude noise. Consider keeping build artifacts, node_modules, or temporary files outside your sync directory, or add them to .gitignore if you rely on the git metadata. Example workflow Create one vector store per knowledge domain (e.g., “Company Wiki”, “Research Notes”). Point openai-folder-sync at the matching local folder. Run it as part of your writing routine (for me: after pushing to main). Ask ChatGPT questions and let retrieval do the heavy lifting. Roadmap I currently have no further plans for this tool. If you think something is still missing, just open an issue or PR.\n⸻\nIf you want to try it, the repository includes install and usage details plus help output. I built it to make my own workflow simpler; if it saves you from a few copy-paste marathons and helps ChatGPT answer from your real documents, that’s a win. Happy syncing!\n","permalink":"https://downtozero.cloud/posts/2025/openai-folder-sync/","title":"Sync Your Folder to ChatGPT: the story behind `openai-folder-sync`"},{"contents":"We love automation. We use it to power our infrastructure, to scale workloads down to zero, and—increasingly—to shrink the amount of human attention needed to ship high-quality code. One place that still felt stubbornly manual was pull-request reviews.
Between Cursor as our IDE, ChatGPT/Codex for prototyping, and gemini-cli for quick checks, our local workflows were fast—but CI still waited for a human.\nSo we asked a simple question: could we let a large language model read the diff, spot issues, and comment directly on the PR?\nTurns out: yes. It took just a few lines of GitHub Actions glue to get helpful, structured reviews on every pull request.\nThe goal We weren\u0026rsquo;t trying to replace humans. We wanted a first pass that:\nreads the actual diff of a PR (not the entire repo), points out obvious mistakes and risky changes, suggests small refactors or missing tests, categorises findings by priority, and posts results right where we already look: in the PR conversation and the Actions summary. If a change is fine, we want the bot to simply say so and get out of the way.\nThe tools in our stack GitHub Actions for CI orchestration. Cursor (our day-to-day IDE). ChatGPT/Codex for ideation and quick off-line reviews. @google/gemini-cli inside CI to run the automated review step. The GitHub CLI (gh) to comment on the PR. A small but important ingredient: a prompt that steers the model to produce useful, actionable feedback. The workflow, end to end Here\u0026rsquo;s the full Action we\u0026rsquo;re running. 
Drop it into .github/workflows/gemini-pr.yml:\nname: gemini-pr on: workflow_dispatch: pull_request: jobs: build: permissions: write-all runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: submodules: \u0026#39;true\u0026#39; fetch-depth: 0 - uses: actions-rust-lang/setup-rust-toolchain@v1 with: components: rustfmt, clippy cache: false - uses: actions/setup-node@v4 with: node-version: 20 - name: install gemini run: | npm install -g @google/gemini-cli - name: gemini run: | echo \u0026#34;merging into ${{ github.base_ref }}\u0026#34; git diff origin/${{ github.base_ref }} \u0026gt; pr.diff echo $PROMPT | gemini \u0026gt; review.md cat review.md \u0026gt;\u0026gt; $GITHUB_STEP_SUMMARY gh pr comment ${{ github.event.pull_request.number }} --body-file review.md env: GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }} GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} PROMPT: \u0026gt; please review the changes of @pr.diff (this pull request) and suggest improvements or provide insights into potential issues. do not document or comment on existing changes, if everything looks good, just say so. can you categorise the changes and improvements into low, medium and high priority? Whenever you find an issue, please always provide a file and line number as reference information. if multiple files are affected, please provide a list of files and line numbers. provide the output in markdown format and do not include any other text.
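The workflow above sends the raw diff, and as noted later in the post we cap the size of the diff before sending it. A minimal way to do that is to truncate pr.diff between the git diff and the gemini call (a sketch; the 200 KB limit is an arbitrary illustrative choice, not a value from the workflow):

```shell
# Truncate pr.diff before handing it to the model.
# MAX_BYTES is an assumption for illustration; tune it to your model's context and budget.
MAX_BYTES=200000
head -c "$MAX_BYTES" pr.diff > pr.capped.diff
mv pr.capped.diff pr.diff
```

For very large PRs you could additionally append a note to the capped diff telling the model the input was truncated, so it does not mistake a cut-off hunk for a real change.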
What each part does Checkout with fetch-depth: 0 so we can diff against the PR\u0026rsquo;s base branch reliably.\nRust toolchain installs rustfmt and clippy because our repos often include Rust code; those run elsewhere in our pipeline, but keeping toolchain setup here avoids surprises.\nNode is required for the gemini-cli.\nWe install @google/gemini-cli globally inside the runner.\nWe create a diff file:\ngit diff origin/${{ github.base_ref }} \u0026gt; pr.diff This ensures the model sees only the changes under review.\nWe pipe the prompt into gemini (the CLI reads @pr.diff inline as a file reference) and capture the model\u0026rsquo;s markdown output into review.md.\nWe append the review to the Job Summary ($GITHUB_STEP_SUMMARY) so it\u0026rsquo;s visible in the Actions UI.\nWe comment on the PR using gh pr comment … --body-file review.md.\nThe prompt that makes it useful LLM outputs are only as good as the instructions. Ours keeps things practical:\nScope: Only review what changed. Don\u0026rsquo;t re-document the repository. Signal: Say \u0026ldquo;looks good\u0026rdquo; when there is nothing to add. No forced creativity. Actionability: Always include file + line numbers for findings. Priorities: Group by low / medium / high to help reviewers scan quickly. Format: Markdown only, so it pastes cleanly into PR comments and renders well in the summary. We iterated a bit to reach this. The most impactful tweaks were: insisting on file/line references and forbidding extra prose.\nWhat the review looks like On a typical PR, we see sections like:\nHigh: Security-sensitive changes, broken error handling, missing input validation, accidental secrets, or removed tests. Medium: Edge cases, concurrency risks, questionable error messages, non-idiomatic Rust/Go/TS that could bite later. Low: Naming, comments, small refactors, or suggesting a short test to lock a behaviour. 
If everything\u0026rsquo;s fine, we get a one-liner: \u0026ldquo;Looks good.\u0026rdquo; Perfect—that\u0026rsquo;s exactly what we want.\nGotchas and practical notes Secrets: You need GEMINI_API_KEY and GITHUB_TOKEN in repo or org secrets. Keep scopes tight. The Action sets permissions: write-all because it posts a comment; restrict this if your policy requires it. Diff source: For complex merges, git diff origin/${{ github.base_ref }} gives the right context. If your workflow fetches only the merge commit, make sure the base branch is available or adjust to github.event.pull_request.base.sha. Forks: If you accept PRs from forks, review how you handle secrets. You may want to run this on pull_request_target with careful hardening, or gate the review behind labels. Noise control: We found it useful to let the model say nothing beyond \u0026ldquo;looks good\u0026rdquo; when a change is trivial. That alone drops reviewer fatigue. Costs and quotas: Model calls aren\u0026rsquo;t free. We cap the size of the diff we send and run this only on pull_request (not every push). Privacy: You are sending your diff to an external model provider. If your code is sensitive or under export restrictions, assess risk and choose a provider/deployment model that fits your compliance needs. Why this matters (beyond convenience) Automated reviews make humans more selective with their attention. We spend less time on \u0026ldquo;rename this variable\u0026rdquo; and more time on architecture, data flows, and security boundaries. That means:\nFaster feedback loops for contributors. Fewer review cycles on nitpicks. A cleaner commit history with problems caught earlier. More time for the sustainability work we actually care about—like shaving watts off a service or reducing network egress. It\u0026rsquo;s also surprisingly good at consistency. 
An LLM won\u0026rsquo;t forget the agreed-upon error-handling pattern between services or our preferred log structure; it applies those checks uniformly on every PR.\nVariations you might try This pattern works with almost any model or CLI. A few easy extensions:\nMulti-model voting: Call two models with the same prompt and keep only findings they agree on. Language-aware passes: If your repo mixes languages, run language-specific prompts (e.g., one tuned for Rust with clippy hints, one for TypeScript). \u0026ldquo;Fail on High\u0026rdquo; gates: Use a small parser to detect a \u0026ldquo;High\u0026rdquo; section and flip the job to failed to block merges until addressed. Inline review: Convert file/line references into GitHub review comments (the gh CLI supports this) for even tighter feedback. PR label control: Only run when a maintainer adds a ai-review label, or auto-add a needs-attention label when high-priority findings appear. Results so far Shorter review cycles on straightforward changes. Cleaner diffs because contributors fix low-hanging items before a human ever looks. Better onboarding: new teammates get concrete advice that mirrors what senior reviewers would say. No drama: if the bot has nothing to add, it\u0026rsquo;s quiet. None of this replaces a human approving a merge. It\u0026rsquo;s a lightweight filter that pays for itself on day one.\n","permalink":"https://downtozero.cloud/posts/2025/pr-reviews/","title":"Letting an LLM Review Our Pull Requests (So You Don't Have To)"},{"contents":"We’ve refreshed rss2email with a cleaner layout and tighter workflows. 
The service stays lightweight, but the things you do every day—add feeds, check freshness, tune emails—are now faster and easier.\nLive demo: https://rss2email.dtz.rocks/\nThe dashboard puts rss2email next to your other services for quick access.\nWhat’s new (and why it’s nicer) A glanceable overview The FEEDS home shows counts for upcoming notifications (1 day, 1 week, 1 month) plus the total number of subscriptions. It’s the “how noisy will my inbox be?” sanity check.\nGlanceable stats for feeds and upcoming notifications.\nFeed health you don’t have to interpret Each subscription card highlights three facts:\nURL — the canonical feed address Last check — when we last polled the feed Last data — when the feed last produced an item We also added color-coded health: green = fresh, amber = slow/quiet, red = stale. That saves you from opening a feed just to learn it hasn’t published in ages.\nHealthy: recent data in green.\nStale: nothing new for a long while, flagged in red.\nControls for Disable and Delete stay within reach but out of your way, so cleanup is quick without being scary.\nCompact cards make scanning and housekeeping fast.\nAdd feeds without hunting for XML Paste any homepage—we’ll discover the feed link for you—or drop in a direct RSS/Atom URL. One click creates the subscription. It’s built for speed when you’re in “add five sources” mode.\nFeed discovery: paste a homepage, let us find the feed.\nEmails you can actually read Notification emails are rendered via Mustache templates. 
You can edit the subject and body right in the UI with documented variables, so messages fit your team’s tone (or your own inbox rules).\nAvailable variables: {{title}}, {{link}}, {{description}}, {{content}}, {{date}} Example:\nSubject: {{title}} Body: {{link}} {{\u0026amp; description}} Tune the subject/body with Mustache—no redeploy required.\nSmall touches that add up Explicit timestamps with consistent formatting Rounded cards, wider tap targets, and saner spacing Clearer status at the bottom: Status Page and Documentation links are always visible This isn’t a features dump; it’s a quality pass that makes rss2email feel snappier and more trustworthy day to day. If you’re already using down to zero, your feeds are there—just easier to manage. Tell us what else you’d like to see next.\n","permalink":"https://downtozero.cloud/posts/2025/rss2email-redesign/","title":"rss2email: a calmer, faster UI"},{"contents":"Our container registry now speaks the same OAuth dialect as the rest of the DownToZero (DTZ) platform. If you already mint bearer tokens for any DTZ API, you can log in to cr.dtz.rocks with zero extra configuration. This change completes our long-term goal of one identity, one token, everywhere.\nWhy we added OAuth The registry originally accepted API keys only. While these keys remain supported, they lack the fine-grained scopes and short lifetimes that OAuth brings. 
A JWT issued by the DTZ Identity service carries role information such as containerregistry/admin/{context_id} and expires automatically, shrinking the blast radius if it ever leaks.\nLogging in with Docker # 1 - Request a bearer token curl -X POST https://identity.dtz.rocks/api/2021-02-21/token/auth -H \u0026#34;Content-Type: application/json\u0026#34; -d \u0026#39;{\u0026#34;username\u0026#34;:\u0026#34;you\u0026#34;,\u0026#34;password\u0026#34;:\u0026#34;secret\u0026#34;}\u0026#39; # 2 - Use that token to authenticate docker login cr.dtz.rocks -u bearer -p YOUR_ACCESS_TOKEN The username must literally be bearer; Docker forwards the token as the password. Behind the scenes, the registry validates the JWT using the very same logic as every other DTZ service.\nBackward compatibility If your CI/CD pipeline already uses an API key, nothing breaks—docker login -u apikey -p YOUR_API_KEY cr.dtz.rocks still works. OAuth is an opt-in upgrade that unlocks shorter-lived credentials, role-based access control, and smoother secrets rotation.\nA consistent developer experience Because the registry now honors the exact same HTTP headers and basic-auth conventions as our REST endpoints, you can:\nreuse service accounts and tokens across build, deploy and runtime stages plug DTZ into any OAuth-capable secret manager audit access in one place via the Identity service logs Ready to try it? Grab a token, run your next docker push, and enjoy a simpler, safer login flow.\n","permalink":"https://downtozero.cloud/posts/2025/registry-oauth/","title":"OAuth Authentication for the Container Registry"},{"contents":"Today we\u0026rsquo;re thrilled to unveil a long-awaited improvement to the DownToZero experience: our homepage is now available in five languages. In addition to English you can read every section in German (DE), French (FR), Italian (IT) and Spanish (ES). 
If you\u0026rsquo;re researching our energy-efficient infrastructure in Europe, the content should now feel familiar and approachable in your native tongue.\nWhy Multi-Language Matters Cloud technology is global by nature, but language barriers still slow teams down. We constantly heard from users who loved the idea of sustainable, scale-to-zero infrastructure yet found technical details intimidating when presented only in English. By translating our public pages we reduce friction for non-English-speaking builders and make our climate-friendly mission clearer for everyone.\nHow We Implemented It Technically, the homepage uses the built-in i18n features of Hugo. For each supported locale we add a lightweight markdown copy of every page and let Hugo generate language-specific URLs (/de/, /fr/, /it/, /es/). A new language switcher appears in the bottom right corner. Because our micro-frontend is fully static, the additional language bundles add only a few kilobytes to the total payload and are still served from our renewable-powered CDN.\nA ChatGPT-Powered Translation Workflow Maintaining five versions of every paragraph could quickly become a burden, so we automated as much as possible. Our source of truth remains the English markdown file. Whenever we update content, a GitHub Action calls the OpenAI Chat Completion API and asks ChatGPT to translate the changes into the other four languages. The translated snippets are committed back into their respective locale files in the same pull request.
The result is a translation pipeline that is fast, reproducible and costs only a few cents per update.\nWhat\u0026rsquo;s Available Today The homepage The services overview The blog The documentation Our technical documentation and blog posts will continue to be written primarily in English for now, but we\u0026rsquo;re monitoring demand and may expand translations based on feedback.\nWhat\u0026rsquo;s Next This multi-language release is just the first step toward greater accessibility. In the coming months we plan to surface energy-consumption dashboards and API reference guides in the same set of languages.\nFinally, this project reflects our broader commitment to inclusivity. Sustainability is not only about carbon budgets; it\u0026rsquo;s also about making sure the ideas that drive greener software can be shared with as many people as possible. By lowering the language barrier we hope the scale-to-zero philosophy can inspire operators and developers who might not speak English daily.\nIf you discover a typo or have suggestions for terminology that feels more natural in your language, please open an issue from within your account or send us a quick email at contact@downtozero.cloud.\nWe hope this update makes DownToZero more welcoming, no matter where you code from. Happy reading — and danke, merci, grazie, gracias for supporting sustainable cloud computing!\n","permalink":"https://downtozero.cloud/posts/2025/multi-language/","title":"Introducing Multi-Language Support on the DownToZero Homepage"},{"contents":"Down-to-Zero (DTZ) has always been about doing more with fewer watts and fewer bytes.
From scale-to-zero container scheduling to solar-powered build runners, every service we ship is measured against a ruthless baseline: could this run happily on a fan-less notebook CPU in the sun?\nToday we are excited to announce the next step on that journey - remote Model Context Protocol (MCP) servers that you can spin up as lightweight Docker containers inside any DTZ context.\nWhy MCP matters MCP is an open standard that lets language-model hosts reach out to task-specific “servers” for data, tools, or actions, using a simple, authenticated JSON stream. Think of it as a USB-C port for AI agents: one plug, many peripherals. By running an MCP server next to your data, you avoid shuttling entire datasets through an LLM API call. That fits perfectly with our “shift computation to the edge, not the core” mantra.\nWhat changed - Server-Sent Events in the balancer Until now DTZ\u0026rsquo;s multi-tenant load balancer terminated only HTTP/1 and HTTP/2. MCP, however, relies on Server-Sent Events (SSE) for its long-lived, one-way event stream. SSE works great over HTTP/2, but browsers strictly limit concurrent SSE connections when they fall back to HTTP/1 — usually six per origin.\nWe have therefore extended the balancer with native SSE support:\nPersistent streams are mapped to internal HTTP/2 multiplexed channels, so dozens of MCP sessions share a single TCP socket. Idle streams auto-hibernate and wake instantly, preserving our idle-to-zero promise. Connection health is propagated all the way to the origin container, making horizontal scaling trivial. This improvement unlocks remote-first MCP servers: you can now deploy the server component as a container image and have any LLM client connect back through secure SSE without extra proxies.\nHow to deploy Build (or download) an MCP server image.\nPush it to your private DTZ registry:\ndocker push {context-id}.cr.dtz.dev/my-mcp-server:latest Create a new service in your context and point it at the image. 
Our scheduler pulls only on demand and scales to zero when no host is connected.\nBecause the registry endpoint lives inside the same energy-efficient mesh, image pulls happen over the local backbone, keeping egress near zero and speeding up cold starts.\nTiny containers, huge reach Remote MCP servers typically need a single Rust- or Go-based binary plus a tiny Alpine base layer. In our own tests a full-featured GitHub integration server consumes 15 MiB RAM at boot and idles below 2 W on our DTZ worker nodes. That leaves plenty of headroom for dozens of servers per node before the solar panels even notice.\nFor workloads that do spike, DTZ\u0026rsquo;s cgroup isolation lets the kernel reclaim memory the moment the job is done. Combined with the balancer\u0026rsquo;s SSE hibernation, your context returns to zero just seconds after the last token is streamed to your model.\nOutlook We are actively integrating the DTZ Identity Server via OAuth 2.1 into the MCP ecosystem, ensuring that every stream is served only to authenticated clients and your remote servers remain both minimal and secure.\nLess energy, less fuss - just context where you need it.\n","permalink":"https://downtozero.cloud/posts/2025/remote-mcp-support/","title":"Adding support for Remote MCP Server in our infrastructure"},{"contents":"TL;DR Metric Old SLO New SLO Any DTZ customer-facing service 95 % 99 % dtz overall health (aggregated heartbeat) 95 % 99.9 % The new objectives take effect **1 July 2025** and will be measured over the same rolling 30-day window you already know from the [status page](https://status.dtz.rocks). Why we\u0026rsquo;re ready for an extra nine Over the past year our platform has quietly evolved from “promising” to “battle-hardened”:\nData speaks: Since April 1 we\u0026rsquo;ve logged 11 production incidents totalling 7 h 28 m of downtime. That\u0026rsquo;s 99.66 % availability for a 75-day stretch—already above the new global target. 
Overall health is rock solid: The aggregated dtz overall health probe has been unavailable for only 16 m in 2025 to date, translating to 99.97 %. Mean time to recovery (MTTR) shrank 42 % thanks to automatic rollbacks, blue/green deploys and a growing suite of smoke tests. Observability everywhere: Every critical path now emits RED metrics (rate, errors, duration) and SLO burn alerts feed directly into on-call slack channels. What changes for you Tighter error budgets. With 99 % availability a service may now be down for ~7 h 18 m per month (previously ~36 h). For the 99.9 % overall-health check the allowance is just 43 m. Faster incident response. Pager thresholds are being shortened from 3 m to 60 s of failing probes so we can act before you notice. Transparent credits. If we breach the SLO, service credits will land automatically—no ticket required. The updated ToS goes live next week. Richer public telemetry. Latency percentiles and burn-rate graphs will be added to each component on the status page so you can correlate issues with your own dashboards. How we\u0026rsquo;ll stay inside budget Redundant probes from three regions for every heartbeat. Instant deploy rollbacks. 90 % of reversions already complete in under three minutes; the goal is sub-one-minute. chaos drills keep recovery playbooks fresh. Sustainable ops, not wasteful ops. We continue to run on carbon-aware schedules; more nines do not mean more megawatts. A quick look at the numbers Since 1 April 2025 we have seen:\n11 incidents across five services. Average incident length: 41 m. Longest single outage: 1 h 5 m (objectstore, 06 April). Latest 30-day window: 2 incidents, 1 h 9 m total downtime → 99.85 % availability. These figures give us comfortable head-room to meet the new targets even before the upcoming redundancy upgrades land.\nThank you Reliability isn\u0026rsquo;t a switch you flip—it\u0026rsquo;s the cumulative effect of design reviews, test coverage, observability and a crew that cares. 
Your bug reports and feature suggestions pushed us to raise the bar. Keep the feedback coming, and here\u0026rsquo;s to fewer pages, greener ops and one extra nine.\n","permalink":"https://downtozero.cloud/posts/2025/updated-slo/","title":"Raising the Bar: New SLOs at 99 % (and 99.9 % for Overall Health)"},{"contents":"At Down To Zero (DTZ) we believe a modern cloud platform can — and should — be both blazingly reliable and gentle on the planet.\nOur answer is a twin-site architecture that pairs a private, solar-powered micro-datacenter in Leipzig with a fully redundant footprint inside Hetzner\u0026rsquo;s flagship facility in Falkenstein. Together they give us three things every customer cares about:\nConsistent performance (low-latency paths inside Germany) Provable sustainability (100 % renewable energy, measured \u0026amp; monitored) Fault tolerance (geographic isolation, automated fail-over) Below we dig into the what, why, and how.\nWhy Germany? Germany sits at the crossroads of Europe\u0026rsquo;s internet backbone, offering sub-30 ms round-trip times to most EU capitals and direct links to the major Internet Exchange Points (IXPs) in Frankfurt (DE-CIX) and Berlin. Strict environmental and privacy regulations (think Energieeffizienzstrategie and GDPR) also align with our values:\nEnergy mix — grid electricity here already skews heavily toward renewables, giving us a cleaner baseline. Data sovereignty — all bits stay on German soil, simplifying compliance for EU-focused customers. Fiber density — abundant dark-fiber routes mean we can lease dedicated 40 Gbps circuits without eye-watering costs. 
Twin-site strategy at a glance Responsibility Leipzig Falkenstein (Hetzner) Workload type Asynchronous / batch Synchronous / latency-critical Power source On-site PV array 100 % renewable grid contracts Operating mode Eco-mode (compute only when sun shines) 24 × 7 always-on PUE* snapshot n/a 1.18 (Hetzner public report) *PUE = Power Usage Effectiveness, lower is better.\nPrivate solar-powered location — Leipzig Our Leipzig cluster is literally fueled by daylight:\nA micro-grid controller cuts in compute as soon as generation outstrips local demand + battery trickle charge. Typical tasks here include CI/CD pipelines, nightly ETL, large-scale rebuilds, and generative-AI batch inference — anything that tolerates a flexible runtime window.\nHetzner datacenter — Falkenstein For real-time APIs and ingress proxies, we anchor in Hetzner\u0026rsquo;s facility:\nPower — 100 % renewable (hydro + wind certificates), N+1 diesel backup. Cooling — indirect free cooling for ~70 % of the year; hot-aisle containment elsewhere. Security — 24/7 staffed NOC, biometric access, CCTV, ISO 27001 \u0026amp; PCI-DSS alignment. We also spin up overflow async pods here when Leipzig\u0026rsquo;s sky goes grey — maintaining job SLA without compromising eco-goals.\nBringing it all together Two locations, one mission: deliver production-grade cloud services while driving operational carbon to zero.\nIf you\u0026rsquo;re looking for a platform that puts sustainability on equal footing with speed and uptime, we\u0026rsquo;d love to hear from you.\n","permalink":"https://downtozero.cloud/posts/2025/physical-location/","title":"Physical locations of our datacenter"},{"contents":"We\u0026rsquo;re excited to announce a groundbreaking addition to our support offering for the DownToZero project: a CustomGPT. This new feature takes our technical documentation and support capabilities to the next level, enabling you to streamline workflows and get answers faster than ever.
Below, we\u0026rsquo;ll walk through how CustomGPT works, highlight its key benefits, and show you why we believe it will become an indispensable tool for your DownToZero projects.\n⸻\nWhat Is CustomGPT? CustomGPT is a specialized, AI-driven chatbot that goes beyond simple Q\u0026amp;A. Powered by advanced natural language processing, it is trained on our comprehensive DownToZero documentation. In addition to explaining our docs clearly, CustomGPT has two standout capabilities:\nDirect API invocation: You can use CustomGPT to interact directly with our API. This means you\u0026rsquo;ll be able to test specific endpoints, send queries, and see real response payloads without having to leave the support page. Terraform Code Generation: Need to provision infrastructure quickly? CustomGPT can generate Terraform code snippets on the fly, tailored to your specific requirements. Whether you\u0026rsquo;re spinning up new environments or testing changes, you can rely on CustomGPT to draft the initial Terraform configurations. ⸻\nKey Features and Benefits Natural Doc Explanations Instead of searching a large knowledge base or deciphering complex instructions, CustomGPT translates our documentation into easy-to-understand language. You can ask it questions in plain English and get relevant, concise answers based on the DownToZero documentation.\nSeamless API Interaction With direct access to the DownToZero API, CustomGPT can assist in troubleshooting and testing. This feature streamlines your workflow by cutting out the need for multiple tools or manual coding whenever you want to quickly confirm responses or debug an endpoint.\nOn-The-Fly Terraform Code Managing infrastructure is simpler than ever with CustomGPT\u0026rsquo;s ability to generate Terraform configurations. Whether you need a quick snippet for a single resource or a more elaborate multi-resource setup, it\u0026rsquo;s as easy as asking CustomGPT to create the code.
You can then tweak and deploy it in your own environment.\nEnhanced Productivity and Collaboration By consolidating documentation help, API interaction, and infrastructure provisioning guidance, CustomGPT reduces context-switching and the usual overhead of toggling between multiple resources. You and your team can focus on higher-level tasks, relying on CustomGPT to handle or initiate much of the heavy lifting.\n⸻\nHow to Use CustomGPT On our docs page, we provide a link that takes you to the ChatGPT homepage, where you can access our CustomGPT.\nYou can access it through a direct link:\nChat with AI\n⸻\nUse Cases: Bringing It All Together Quick Onboarding: New team members can rapidly learn the ropes by using CustomGPT to get clarifications on documentation, test sample API calls, and create starter Terraform configurations. Rapid Prototyping: When you want to demonstrate a proof of concept, CustomGPT\u0026rsquo;s ability to generate infrastructure code and call APIs helps you quickly spin up a working example. Troubleshooting: Stuck on a particular DownToZero API response? CustomGPT can help isolate issues by fetching live data from the endpoints and suggesting best-practice fixes. ⸻\nThe Future of CustomGPT We envision even broader capabilities down the line. Our goal is to make CustomGPT an ever-present virtual assistant, equipped to handle more tasks and integrate with other tools. As we continue improving and adding features, we welcome your feedback to ensure it aligns with your needs.\n⸻\nGet Started Today Head over to the DownToZero Docs Page and start chatting with CustomGPT. Don\u0026rsquo;t hesitate to reach out if you encounter any issues or have suggestions—our team is eager to hear about your experiences and how we can make CustomGPT an even more valuable resource.\nThank you for your continued support.
We\u0026rsquo;re confident that CustomGPT will transform the way you interact with our documentation, APIs, and infrastructure resources, giving you a faster and more intuitive workflow. Enjoy exploring this new feature!\nChat with AI\n","permalink":"https://downtozero.cloud/posts/2025/adding-a-support-chatbot/","title":"Introducing Our CustomGPT: A Powerful New Addition to the DownToZero project"},{"contents":"Preparations Before you can use the DTZ container registry, you need to enable the service first.\nEnable the service Access the container registry dashboard Container Registry Dashboard Testing the registry endpoint You can click the link on the dashboard to test your registry instance. This link should lead you to the following page.\nAfter completing the setup, you need to retrieve your instance URL from the container registry dashboard, in this case, https://66fc19af.cr.dtz.dev/.\nSo all images are hosted under 66fc19af.cr.dtz.dev.\nDocker Login The easiest way to log in is via API key. You can follow the guide in our docs on how to create an API key.\nTo perform the login you can use the terminal:\ndocker login 66fc19af.cr.dtz.dev Username: apikey Password: {paste your API key here} Login Succeeded This registers your credentials with the Docker daemon, and you can now push and pull images to that registry.\nKubernetes Credentials In Kubernetes, credentials cannot be managed by interacting with the Docker daemon directly. Therefore you need to provide credentials as a secret referenced in your deployment definitions.\nIf you have access to kubectl, you can follow the instructions from the official documentation.\nkubectl create secret docker-registry \u0026lt;name\u0026gt; \\ --docker-server=DOCKER_REGISTRY_SERVER \\ --docker-username=DOCKER_USER \\ --docker-password=DOCKER_PASSWORD \\ --docker-email=DOCKER_EMAIL Often, kubectl access is not an option, since all deployments run through some CI/CD system that requires you to provide the secret as YAML.
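One gotcha before the manual steps: encoding credentials with a plain echo also encodes the trailing newline that echo emits, which silently corrupts the password. The difference is easy to see in a terminal (using printf, which emits no newline; the user:pass value is just a stand-in):

```shell
# echo appends a newline; printf '%s' does not.
printf '%s' 'user:pass' | base64   # dXNlcjpwYXNz      (encodes "user:pass")
echo 'user:pass' | base64          # dXNlcjpwYXNzCg==  (encodes "user:pass\n")
```

Whenever you base64-encode credentials by hand, use printf '%s' or echo -n.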
So here are the steps to take to manually create such a secret.\nBase64-encode your credentials echo \u0026#39;DOCKER_USER:DOCKER_PASSWORD\u0026#39; | base64 RE9DS0VSX1VTRVI6RE9DS0VSX1BBU1NXT1JECg== Place the credential string into a Docker config JSON, and store this as credentials.json. { \u0026#34;auths\u0026#34;: { \u0026#34;66fc19af.cr.dtz.dev\u0026#34;: { \u0026#34;auth\u0026#34;: \u0026#34;RE9DS0VSX1VTRVI6RE9DS0VSX1BBU1NXT1JECg==\u0026#34; } } } Base64-encode the whole file cat credentials.json | base64 ewogICAgImF1dGhzIjogewogICAgICAgICI2NmZjMTlhZi5jci5kdHouZGV2IjogewogICAgICAgICAgICAiYXV0aCI6ICJSRTlEUzBWU1gxVlRSVkk2UkU5RFMwVlNYMUJCVTFOWFQxSkVDZz09IgogICAgICAgIH0KICAgIH0KfQo= Create a Secret with the encoded credential file inside. apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: dtz-credentials data: .dockerconfigjson: ewogICAgImF1dGhzIjogewogICAgICAgICI2NmZjMTlhZi5jci5kdHouZGV2IjogewogICAgICAgICAgICAiYXV0aCI6ICJSRTlEUzBWU1gxVlRSVkk2UkU5RFMwVlNYMUJCVTFOWFQxSkVDZz09IgogICAgICAgIH0KICAgIH0KfQo= After applying this secret to your Kubernetes cluster, you can use private images. apiVersion: v1 kind: Pod metadata: name: private-ubuntu spec: containers: - name: private-ubuntu-container image: 66fc19af.cr.dtz.dev/ubuntu imagePullSecrets: - name: dtz-credentials ","permalink":"https://downtozero.cloud/posts/2025/container-registry-login/","title":"How to log into the container registry"},{"contents":"We use Terraform a lot, either to test our own infrastructure or to deploy projects within DownToZero.\nSince DTZ is supported as a provider, we started implementing projects on top of it. One thing that regularly came up was the location of the state file. 
Checking the state into the git repo is usually not a good idea (although it is still better than keeping it local), but having some form of remote state helps a lot with running Terraform in pipelines and makes the state independent of the project.\nLooking at our options for remote state, there is quite a list. Sadly, most of them are bound to specific cloud providers, which is not really helpful to us at the moment. There is, however, the generic http backend.\nLooking at this backend, we found that we can use it to connect to our objectstore and use our own system for persisting the state file.\nThis is what it looks like:\nterraform { required_providers { dtz = { source = \u0026#34;DownToZero-Cloud/dtz\u0026#34; version = \u0026#34;\u0026gt;= 0.1.24\u0026#34; } } backend \u0026#34;http\u0026#34; { address = \u0026#34;http://objectstore.dtz.rocks/api/2022-11-28/obj/tf-test/state.tfstate\u0026#34; update_method = \u0026#34;PUT\u0026#34; username = \u0026#34;apikey\u0026#34; password = var.apikey } } Sadly, locking does not work, since it has some implementation details that are not compatible with our objectstore.\nThe objectstore does not support the LOCK and UNLOCK HTTP methods (although the methods are adjustable in the backend configuration).\nThe other limitation is the return code: the objectstore always returns an HTTP 201 (CREATED) status if the object was persisted. The Terraform backend, however, only accepts an HTTP 200 (OK). There is already an open issue and pull request about this, but both have been open for years now. So I wouldn\u0026rsquo;t expect a fix anytime soon.\nhttp provider docs\n","permalink":"https://downtozero.cloud/posts/2024/using-objectstore-for-terraform-state/","title":"Using DTZ Objectstore for Terraform state files"},{"contents":"Last year we already did a blog post on the hardware setup of DownToZero. 
We have been using the same setup for a while now and wanted to share an update.\nEnergy consumption Since we decided to use notebook chips exclusively, our energy consumption is significantly lower than that of desktop machines. Also, our mode of only powering the machines while solar power is available led us to near-zero power grid consumption.\nOn a per-node basis, we currently consume about 1.4 kWh per month idle, and with our current load mix about 2 kWh per month.\nMonth Consumed Energy in kWh September 2023 2.1 October 2023 2.383 November 2023 1.807 December 2023 1.4 January 2024 1.389 February 2024 1.65 March 2024 1.528 April 2024 1.521 May 2024 1.62 June 2024 1.81 July 2024 1.91 August 2024 1.52 energy consumed over the last 12 months energy consumed over the last 12 months with a 40 watt baseline To give you some comparison, we added a machine with an average energy consumption of about 40 watts.\nWe see this as confirmation that building our stack on top of mobile CPUs is worth the effort and money.\n","permalink":"https://downtozero.cloud/posts/2024/node-setup/","title":"Worker Node Setup of DownToZero"},{"contents":"We started implementing our own Terraform provider since we think automation is key to cloud adoption.\nYou can find our provider at github under\nhttps://github.com/DownToZero-Cloud/terraform-provider-dtz and over at the Hashicorp terraform site:\nhttps://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest This provider gives every customer another path for accessing DTZ resources and implementing their automation/IaC on top of our services.\nTo set up the provider, you need to generate an API-key through the website. 
We currently only support apikey authentication.\nterraform { required_providers { dtz = { source = \u0026#34;DownToZero-Cloud/dtz\u0026#34; version = \u0026#34;\u0026gt;= 0.1.22\u0026#34; } } } provider \u0026#34;dtz\u0026#34; { api_key = \u0026#34;apikey-1234\u0026#34; } Accessing a DTZ context https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/context\ndata \u0026#34;dtz_context\u0026#34; \u0026#34;ctx\u0026#34; {} output \u0026#34;context-alias\u0026#34; { value = data.dtz_context.ctx.alias } output \u0026#34;context-id\u0026#34; { value = data.dtz_context.ctx.id } ","permalink":"https://downtozero.cloud/posts/2024/terraform-support/","title":"Implementing our own Terraform provider to allow infrastructure automation"},{"contents":"From time to time, we are asked to explain DownToZero, and what makes it stand out from other cloud services, to non-technical people. We\u0026rsquo;ve tried various explanations, but we found an analogy that resonates with most people. We wanted to share it here to make our principle accessible to more people.\nAfter a long workday, you are driving home. Arriving at your house, you pull into your driveway, turn off the car, and go into the house. Even if you plan to go shopping later that day, you still turn off the car because leaving it running seems wasteful.\nSo later that day, you go back to your car to go shopping. You turn the car back on and drive to the shop. Coming back home, you turn off your car for the night. Even if it is cold at night, and it would be convenient to have the heating kick in and the seats warm up before morning, we still turn the car off. Keeping the car running seems ridiculous for just a few minutes of warm-up time.\nFor cars, we have developed an understanding that it is not very practical or cost-effective to keep them running all the time. 
We are fine with a warm-up phase, and manufacturers have invested heavily in making this phase more convenient.\nNow, for IT services and machines, we expect all services to be running at all times. The reason is convenience, so we always opt for “always-on” because the cost is acceptable. DownToZero breaks with that habit by shutting down everything that is not in use. This approach is called \u0026ldquo;scale-to-zero.\u0026rdquo; We incorporate scale-to-zero at every possible point because achieving sustainability requires not just green energy or more efficient CPUs but also shutting down unused software. We believe that following this path will have the most significant impact. We did it for cars, and it is natural to use cars this way. Now it is time to apply this paradigm to software too.\n","permalink":"https://downtozero.cloud/posts/2024/a-non-technical-explanation/","title":"A non-technical explanation of DownToZero"},{"contents":"All our services need to have a Swagger UI for presenting their current OpenAPI file to the user. Since our backend is completely built with Rust, hosting this Swagger UI is also done through a Rust process.\nSo to make the Swagger UI a first-class citizen in the Rust ecosystem, we decided to re-package the Swagger UI as a Rust module. This way, dependabot can pick up updates to the underlying project and pull those dependencies into our projects.\nThis also has the benefit that whenever this dependency gets updated, we run our CI/CD pipeline to check whether the UI is still fine for us.\nHow it is built The Swagger UI is built and packaged through GitHub. So determining the current version is just another call against the GitHub API. This way, we can decide whether updates are in order or not. If a new version has been released, we download the minified JS and CSS files and update our local Rust repo with the new dependencies.\nAfter those files are updated, we also raise the module version to the same version as the GitHub release. 
This way, the cargo version always matches the GitHub version.\nAll the code required for this procedure is in the GitHub Action definition.\nhttps://github.com/apimeister/swagger-ui-dist-rs/blob/main/.github/workflows/daily_version_check.yml How to use it Using this crate is rather straightforward. The static resources are exposed as axum routes and can be merged with an existing route definition. To implement those routes, you need to specify a prefix for the routes, as well as an API definition. The API definition can either be an inline YAML file or an external link to the API definition.\nHere is a sample of how to use the crate with an inline OpenAPI definition.\nuse axum::Router; use swagger_ui_dist::{ApiDefinition, OpenApiSource}; #[tokio::main] async fn main() { let api_def = ApiDefinition { uri_prefix: \u0026#34;/api\u0026#34;, api_definition: OpenApiSource::Inline(include_str!(\u0026#34;petstore.yaml\u0026#34;)), title: Some(\u0026#34;My Super Duper API\u0026#34;), }; let app = Router::new().merge(swagger_ui_dist::generate_routes(api_def)); let listener = tokio::net::TcpListener::bind(\u0026#34;0.0.0.0:3000\u0026#34;).await.unwrap(); println!(\u0026#34;listening on http://localhost:3000/api\u0026#34;); axum::serve(listener, app).await.unwrap(); } crates.io link: https://crates.io/crates/swagger-ui-dist\ngithub repo: https://github.com/apimeister/swagger-ui-dist-rs/\n","permalink":"https://downtozero.cloud/posts/2024/swagger-ui-dist/","title":"Re-packaging the Swagger UI as a Rust module"},{"contents":"To further improve our offering, we introduce a status page to our stack. 
This page is externally hosted, so it will keep running even if everything else is failing.\nThis will provide a single entry point for communicating issues and provide transparency when a service fails.\nstatus.dtz.rocks\n","permalink":"https://downtozero.cloud/posts/2024/status-page/","title":"[new feature] Status page added"},{"contents":"Ever since we started measuring our energy demand, we also started to develop concepts on how we could improve energy efficiency. Customizing our own machines beyond just the software seems obvious, but it is also easier said than done.\nFirst, we had to decide what hardware we wanted to build upon. Most cloud providers start with rather heavy workload machines and optimize for peak load. Our setup was different: we wanted to move the needle on the low end, not the high end. That meant that traditional servers were not really in scope for this task. Looking around the market, a small form factor ARM or RISC-V machine seemed like an easy choice. Sadly, getting those machines into a more server-like setup takes quite some preparation. Also, we saw two properties missing from all of those setups.\nBattery to power the machine at night Power monitoring (preferably IPMI, but external monitoring would also be fine) Required Features Battery Low power idle footprint Off-the-shelf hardware Power metering One type of machine that almost perfectly fits these characteristics is the notebook. These machines usually have a small form factor that is already optimized for power consumption. Also, those machines come with their own battery. The problem here is that notebooks come with a lot of hardware (keyboard, monitor, touchpad) that we have no use for. So we started looking for bare-bones notebook vendors where we could start evaluating.\nThis led us to frame.work. Framework is a notebook vendor who sells highly customizable, repairable machines. 
Also, we already had a Framework machine around that we could re-fit for our experiments.\nWorkshop setup After testing our setup with the existing Framework notebook, we ordered 2 more \u0026lsquo;machines\u0026rsquo;. What we basically ordered was the mainboard (i5 13th gen) with some memory and an SSD, as well as a battery.\nThe assembly was just screwing the components onto a base plate and connecting them to power. For networking, we are using WiFi. Here is a shot of how we assembled the machine in our workshop.\nSoftware After having the hardware set up, we started looking at the software. Our base setup requires some Linux, but beyond that we had not locked down any vendor or version. So we started with Fedora, since this was also the software we used on our sample notebook. Sadly, we quickly found out that doing an unattended install on a machine with no monitor or keyboard, and also no network (since we could not set up WiFi), was rather tricky. To not spend too much time on this conundrum, we opted for installing Fedora on our sample notebook and just cloning the SSD over to the two new machines. This way, we could build a base OS with a keyboard and monitor attached, and also preconfigure networking and SSH as required. After we got the cloned SSDs, we just screwed them back into our hosts and connected the power adapter.\nFirst boot After the first boot, we started to look into the power monitoring problem. At first, we thought we could increase the battery capacity by plugging another commercially available battery into the USB-C port. Unfortunately, this did not work. It was very unreliable because the battery pack did not supply enough power for the notebook to actually charge while it was running. So we had to skip this part of the project. 
That left us with the 55Wh battery that is included with the Framework.\nPower Control As we mentioned before, we wanted these machines to run and consume power only when we were actually generating power from our solar panels. So we needed an external power plug that could both measure power and switch it on/off. We decided on some CloudFree Power Plugs, but we will share more on this later. The important fact here is that those devices have a local API endpoint for both power measurement and on/off switching.\nFinal Thoughts So now we have the hardware setup. The software is also in place. Let\u0026rsquo;s try to run these devices and start optimizing things like charge cycle, consumption levels, and so on. For now, we will use these as async worker nodes to provide more compute to our stack.\n","permalink":"https://downtozero.cloud/posts/2023/building-our-own-nodes/","title":"Building our own nodes"},{"contents":"Every cloud platform must have some metric of cost associated with its services. For most public cloud offerings, the target metrics for cost are usually CPU core count and amount of memory. That gets multiplied by the amount of time used, and we arrive at usage-based pricing.\nThis model has its pros and cons, so let\u0026rsquo;s iterate a little over this.\nThe Pros Since all cost is tied to a usage pattern, the goal is to reduce cost by reducing usage.\nThe overall price can be somewhat estimated by just adding up the required CPU/Memory amount over time. This makes pricing somewhat predictable (more on that on the Cons side).\nThe Cons Companies that aim for cost predictability will choose the solution that provides it. So if I want to be sure about cost, I will choose the least flexible path and scale to peak performance. 
This sounds counterintuitive at first (money-wise), but it happens often in large organizations, since most prefer predictability over some unpredictable cost savings.\nEveryone has some idea of what a CPU/vCPU is, but it is actually not a good metric to compare between providers. I always have to do my own benchmarking to see how instance types, CPU architecture, and vertical scaling affect my performance.\nThe variable CPU processing power also makes it very difficult to compare the actual impact, as in environmental impact, for my project. Some providers have already stepped up and made CO2 (or CO2-equivalent) metrics available, but even these become very complex, as it becomes harder to understand what is included and what is excluded.\nConclusion So usage-based pricing is a big step forward compared to just paying some monthly amount where nobody cares what it entails. It also tries to give an incentive to reduce the overall footprint, but it offers only limited capabilities for comparing services between providers.\nSo maybe it is time to take usage-based pricing a step further and change the metric here.\nA first attempt So if we start with CPU and memory consumption over time, we can capture the amount of resources utilised at the provider level. One metric we are missing here, and could include in the pricing, is power (as in watts consumed).\nSo instead of having a cost calculated from processing, and an environmental footprint calculated through some CO2 metric, how about we merge those two and calculate processing power in watts? This way, we can provide one metric for both cost and impact.\nThat way, a company that wanted to improve its environmental footprint would automatically improve its cost and vice versa. Both goals would be visible and actionable.\nHow should it work That leads to the question of how it should work, because in theory it all sounds simple. 
In reality, there is a big problem: our current hardware does not provide these kinds of metrics at the resolution we need.\nSo for now, we have to estimate and work around our current hardware limitations.\nWhat we have implemented for now is a power meter on each machine; this way, we know the total amount of energy consumed (at a one-minute resolution). This metric gets overlaid onto the user-based workload on those machines. What this gives us is an estimate of how much energy was consumed by which user. We also want to bring this metric into the UI as soon as possible.\nEpilogue So this is our first attempt at this topic. We will provide another update if we see movement in this direction or if we run into problems that make this a bad idea.\nFor now, this looks like the way to go.\n","permalink":"https://downtozero.cloud/posts/2023/metrics-for-calculating-cost/","title":"Metrics for calculating the cost of processing"},{"contents":"Since we can only improve on what we measure, we started implementing the Server Timing API on all HTTP endpoints.\nThe API provides a simple means to extend the HTTP headers with some performance measurements.\nFor a general introduction to the topic, you can check the Mozilla documentation (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Server-Timing).\nAs for the actual implementation, we use the following Rust crate, which provides us with performance data for axum servers out of the box (https://crates.io/crates/axum-server-timing).\nBecause our architecture spans multiple datacenters, we do not publish these metrics as a single measurement. We have taken a layered approach to our network.\nDTZ-RED Our public Internet exposed endpoint DTZ-BLUE Internal customer facing network Scaling is handled in that layer Contains the service state, hot/warm/cold Backend service If available, any service can provide their own performance metrics. 
In our case, we use the Objectstore API. Here are some sample headers produced by our API.\nserver-timing: dtz-red;dur=765, dtz-blue;desc=\u0026#34;dtz-blue(warm)\u0026#34;;dur=538 content-length: 19276864 last-modified: Sun, 07 May 2023 10:25:33 GMT date: Sun, 07 May 2023 10:27:52 GMT content-type: application/octet-stream When the backend was in a warm state: the blue layer (scaling + backend) took 538ms, and the red layer took 765ms. When the backend was in a hot state, it took 7ms: the blue layer took 16ms, and the red layer took 35ms. ","permalink":"https://downtozero.cloud/posts/2023/server-timing-on-all-endpoints/","title":"Server Timing on all Endpoints"},{"contents":"The more our little project progresses, the more code we produce. And like most of our community these days, we host our code on GitHub.\nSo during development, we discussed the implications of a DownToZero architecture for something like a GitHub process. We quickly identified two types of actions that need to be performed on the infrastructure.\nThe first is the CI build, which should always provide immediate feedback to the developer. It checks for compliance as well as code integrity and is usually deeply involved in the development process. These jobs are time-sensitive because someone is usually waiting for them.\nThe second category we identified is a little different. With the rise of dependabot and other security scanners, we saw more and more pipelines being triggered by these bots. The thing about that is, we want to run the pipeline to check our dependencies and keep our code base up to date, but at the same time, no one is waiting for those pipelines. So it wouldn\u0026rsquo;t make any difference if those pipelines were delayed.\nSo let us look at the second category and see if we can build something within GitHub. Well, GitHub allows anyone to attach self-hosted runners to any project (hosting your own runners). 
If you look at the process, it is relatively straightforward: download the runner, attach it to your organization or repo, and then run the shell script. There\u0026rsquo;s also a little helper that makes this runner into a systemd service, so we don\u0026rsquo;t have to start and stop the service ourselves.\nLooking at the job distribution, GitHub says that the queued job is held for 24 hours. Within that time frame, the job has to be picked up or it will time out. So 24 hours is technically enough time to wait for the sun to come up, regardless of when the job was spawned.\nWith that being covered, we started to look at our local setup and how we can achieve such capacity planning. Our current setup looks like this.\nwe have solar panels that produce energy we have local machines that can consume that energy and have internet access GitHub provides the persistent job queue for our runner We don\u0026rsquo;t have any battery storage attached to it, because that would make the whole system more expensive and complex.\nAll metrics, like the energy output of the solar panels or the energy consumption of the servers, are tracked by independent Tasmota devices (CloudFree EU Smart Plug).\nSo we hooked everything up. For convenience, we installed Ubuntu 22.10 (the same as used for the GitHub-hosted runners) on our machines. We also installed the toolchain we needed, such as rustup, gcc-musl, and protobuf.\nNow, we wrote 3 independent systemd services.\n1) Dtz-Edge Service The first service is always running and reads the energy output from Home Assistant (this is where our energy data is aggregated). It also takes into account which other devices are currently running and how much energy they are already consuming. 
It then implements the following state model:\nSystemd Service definition\n[Unit] Description=dtz edge Service [Service] Type=simple WorkingDirectory=/root/dtz-edge ExecStart=!/root/dtz-edge/busy.sh Restart=always [Install] Alias=dtz-edge WantedBy=multi-user.target busy.sh shell script (shortened version)\n#!/bin/bash for (( ; ; )) do POWER=`curl -H \u0026#39;Authorization: Bearer token1\u0026#39; -H \u0026#34;Content-Type: application/json\u0026#34; http://192.168.178.76:8123/api/states/sensor.solar_panel_energy_power 2\u0026gt; /dev/null | jq -r .state` METER=`curl -H \u0026#39;Authorization: Bearer token1\u0026#39; -H \u0026#34;Content-Type: application/json\u0026#34; http://192.168.178.76:8123/api/states/sensor.tasmota_energy_power_4 2\u0026gt; /dev/null | jq -r .state` SALDO=$((POWER - METER)) echo \u0026#34;Saldo: $SALDO (solar: $POWER)\u0026#34; CURRENT_HOUR=`date +%H` if [ $CURRENT_HOUR -gt 17 ]; then service cheap-energy stop service actions.runner.DownToZero-Cloud.dtz-edge1 stop echo \u0026#34;sleep till tomorrow (10h)\u0026#34; rtcwake -m disk -s 36000 fi if [ $SALDO -gt 70 ]; then echo \u0026#34;more than 70: $SALDO\u0026#34; service cheap-energy start service actions.runner.DownToZero-Cloud.dtz-edge1 start sleep 300; else service cheap-energy stop rtcwake -m mem -s 660 fi done 2) Cheap-Energy Service This service only holds the state that cheap energy is available. So when this systemd service is running, it means there is energy available; when it is stopped, all workers should shut down. 
So we use this service as a proxy to make management easier.\n[Unit] Description=cheap energy [Service] Type=simple WorkingDirectory=/root/dtz-edge ExecStart=!/root/dtz-edge/cheap-energy.sh Restart=always [Install] Alias=cheap-energy WantedBy=multi-user.target The script we run here is just a sleep command.\n#!/bin/bash sleep infinity 3) GitHub Runner Service We followed the instructions provided by GitHub and installed the runner as a systemd service.\nsudo ./svc.sh install This already gave us the correct service definition, and the only thing we needed to change was the service dependency line, because we now want this service to run whenever the cheap-energy service is running, and to stop when the cheap-energy service is stopped.\nSo we changed our service definition (actions.runner.DownToZero-Cloud.dtz-edge1.service) to include the BindsTo directive.\n[Unit] Description=GitHub Actions Runner (DownToZero-Cloud.dtz-edge1) After=network.target BindsTo=cheap-energy.service [Service] ExecStart=/home/user1/gh-dtz-org/runsvc.sh User=user1 WorkingDirectory=/home/user1/gh-dtz-org KillMode=process KillSignal=SIGTERM TimeoutStopSec=5min [Install] WantedBy=multi-user.target Now that we have the hardware part of the solution set up, let\u0026rsquo;s get back to the GitHub side. We now have 2 types of runners in our GitHub UI. One is the GitHub-hosted runner, which we want our type 1 tasks to run on, and one is our dtz-edge pool, which only comes up when there is enough solar power.\nLet us split our pipeline definitions.\nFor the type 1 jobs, everything can stay like a normal GitHub pipeline.\nname: build on: workflow_dispatch: push: branches: - main jobs: build: permissions: write-all runs-on: ubuntu-latest For the type 2 jobs, i.e. jobs we want to run delayed on the solar-powered machines, we just need to define the on-trigger section to include the scenarios that should be supported here. In our case, we started by doing this for all pull requests. 
Then the only thing that needs to be changed is the runs-on statement. Here we placed our newly generated runner.\nname: pr on: workflow_dispatch: pull_request: jobs: test: name: coverage runs-on: self-hosted So now, whenever dependabot sends us some updates to merge, or some other bot wants to check tests and code coverage, those jobs will run whenever we have the resources to do so.\nAs an added bonus, we no longer have to pay for these extra runners. On-prem runners are free (in the sense of GitHub pricing).\n","permalink":"https://downtozero.cloud/posts/2023/solar-powered-github-runner/","title":"A Solar Powered GitHub Runner"},{"contents":"Why RSS is a decentralized, open-source subscription method superior to centralized options like social networking.\nIn today\u0026rsquo;s digital world, it can be easy to get overwhelmed by the vast amount of information available online. Using a subscription service is one way to stay organised and keep track of the content that interests you. There are a number of different options, including social media networks and RSS feeds. In this article, we\u0026rsquo;re going to take a look at why RSS is a better decentralised, open-source system for subscribing to content than centralised alternatives such as social networks.\nFirst, let\u0026rsquo;s define some terms. When we talk about decentralization, we are referring to the distribution of power and control. A decentralized system is one in which power and control are distributed among multiple entities rather than being concentrated in a single entity. An open resource is a resource that is freely available to all users, regardless of who they are or where they are located.\nLet\u0026rsquo;s now take a look at social media networks as a subscription service. These networks are centralised, meaning that they are controlled by a single entity (i.e. the company that owns the network). 
This means that the user has no control over the content shown to them and is at the mercy of the network\u0026rsquo;s algorithms and policies. In addition, social media networks often have a profit motive, which means that they may prioritise showing users content that generates the most revenue (e.g. through advertising) rather than content that is most relevant or valuable to the user.\nRSS feeds, on the other hand, are decentralised and open. RSS stands for \u0026ldquo;Really Simple Syndication\u0026rdquo; and is a way for websites to publish updates (e.g. new blog posts, articles, etc.) in a standardised format that can be easily read by other software. This means that users can subscribe to updates from multiple websites using a single RSS reader, or even have these feeds delivered to their email and use email readers to consume the feeds. Because RSS feeds are open, users can access them from any device. There are no restrictions on who can use them.\nOne major advantage of RSS feeds is that they give users control over the content that they see. With an RSS reader, users can choose which websites they want to subscribe to and customize the types of updates that they receive. This allows users to curate a personalized feed of content that is relevant to their interests. In contrast, social media algorithms often show users content that is popular or trending, rather than content that is personally relevant.\nAnother benefit of RSS feeds is that they are not influenced by profit motives. Since they are open and decentralized, there is no single entity that controls them or profits from them. 
This means that users can trust that the content they are seeing is not being influenced by financial interests.\nIn summary, RSS feeds are a better decentralized open resource subscription system than central alternatives like social media networks because they give users control over the content they see, allow for customization, and are not influenced by profit motives. If you\u0026rsquo;re looking for a way to stay organized and keep track of the content you care about, consider using an RSS reader or having the feeds forwarded to your email.\n","permalink":"https://downtozero.cloud/posts/2023/rss-a-better-decentral-alternative/","title":"RSS a better decentral alternative"},{"contents":"Scale-to-zero architectures have gained popularity in recent years as a way to optimize resource utilization and reduce costs in cloud-based deployments. In a scale-to-zero architecture, idle resources are automatically scaled down to zero, freeing up resources and reducing costs. This is in contrast to traditional architectures, where idle resources are still allocated and consuming resources, even if they are not actively being used.\nOne way to implement a scale-to-zero architecture is through the use of containers. Containers are lightweight, isolated environments that allow developers to package their applications and dependencies into a single unit, making it easy to deploy and run applications in any environment. Containers have become a popular choice for deploying cloud-based applications due to their portability, scalability, and ease of use.\nIn a container-based deployment, scale-to-zero can be achieved through the use of container orchestrators such as Kubernetes. Kubernetes allows developers to define the desired state of their application, and the orchestrator automatically scales the application up or down based on the defined resource requirements. 
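The scale-up/down mechanics described above can be reduced to a small decision function. The sketch below uses a hypothetical pending-request metric; note that stock Kubernetes Deployments do not scale below one replica on their own, so true scale-to-zero usually relies on an add-on such as KEDA or Knative.

```shell
#!/bin/sh
# Sketch: derive a replica count from a hypothetical pending-request metric.
# In a real cluster the result would be applied with something like:
#   kubectl scale deployment my-app --replicas="$(decide_replicas "$pending")"
decide_replicas() {
  if [ "$1" -gt 0 ]; then
    echo 1   # keep at least one replica while requests are waiting
  else
    echo 0   # idle: release all resources
  fi
}

decide_replicas 5
decide_replicas 0
```

The deliberately asymmetric behavior (scale up eagerly, scale down only when fully idle) is what keeps the provisioning delay mentioned below from hitting active users.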
This means that when the application is not in use, the orchestrator can automatically scale the application down to zero, freeing up resources and reducing costs.\nThere are a few implications of scale-to-zero architectures and container-based deployments that developers should consider:\nCost savings: One of the main benefits of scale-to-zero architectures is the potential for cost savings. By automatically scaling down idle resources, organizations can significantly reduce their cloud infrastructure costs.\nPerformance: In a scale-to-zero architecture, the performance of the application may be affected when it is scaled up or down. When the application is scaled up, there may be a delay as the resources are provisioned and the application is started. Similarly, when the application is scaled down, there may be a delay as the resources are de-provisioned and the application is stopped. Developers should consider these performance impacts and design their applications accordingly.\nResource utilization: While scale-to-zero architectures can help optimize resource utilization, they may not be suitable for all applications. Some applications may require a minimum number of resources to be available at all times in order to function properly. Developers should carefully evaluate the resource requirements of their application before implementing a scale-to-zero architecture.\nOverall, scale-to-zero architectures and container-based deployments can provide significant cost savings and resource optimization benefits for cloud-based applications. 
However, developers should carefully consider the performance and resource utilization implications of these architectures before implementing them.\n","permalink":"https://downtozero.cloud/posts/2023/scale-to-zero-containers/","title":"scale-to-zero with containers"},{"contents":"Under the umbrella of our flows service, which provides easy-to-use integration services, we implemented an RSS-to-Email (RSS2Email) service.\nAfter enabling the service within a context, you can access the service through the header.\nThe first thing to fill in is the target email address. All notifications will go out to this single address. If you want to send certain feeds to another address, you need to open a new context.\nAfter the address is known, you can add any RSS/Atom feed URL to the subscription. After clicking the create button, a new subscription is created. The name will appear after the first entry has been parsed. From then on, the last check and last data will be updated in real time to signal any issues with the URL.\nFor more information, you can head over to the documentation page.\n","permalink":"https://downtozero.cloud/posts/2022/rss2email-service/","title":"[new service] rss to email"},{"contents":"One year in, and progress is still slow and steady. This month we added a documentation page to the site. This way, the changes will be more transparent and easier to follow.\n","permalink":"https://downtozero.cloud/posts/2022/added-some-documentation/","title":"added some documentation"},{"contents":"IT infrastructure has become cheap. Building a highly available system with multi-machine, multi-datacenter, even multi-regional architectures can be easily achieved with just a credit card. Building fast-performing systems on top became the de facto standard for computing and data storage. Today more and more effort is put into vertical and horizontal scaling to achieve lower latency and to reach customers faster. 
And while this is, in general, a good idea, the cost for these kinds of architectures is not just money. Scaling a \u0026lsquo;simple\u0026rsquo; website to a multi-regional approach can easily take tens of machines, all always on, all ready to serve incoming requests.\nWe think it is time to think about our services differently. Services should always be able to scale to zero when they are not used. Data should be kept at rest (at best on an unpowered storage device) when it is not used. High availability should not come at the cost of keeping more machines running all the time.\n","permalink":"https://downtozero.cloud/posts/2021/time-for-a-new-perspective/","title":"time for a new perspective"},{"contents":"The authentication mechanisms are shared between all DTZ APIs. All mechanisms described here are available for all APIs.\nThe authentication data can be carried through the following fields:\nbearer token cookie based api-key basic auth Authenticating Authentication is handled by the DTZ Identity service.\nOAuth 2.0 \u0026amp; OpenID Connect: For comprehensive OAuth and OIDC integration, see the detailed OAuth Guide in the Identity section.\nPossible Login scenarios:\nWebUI https://identity.dtz.rocks/login/\nWith HTTP Apikey Header To authenticate with an API key, the key has to be passed in the header field X-API-KEY.\nHere is an example curl command:\ncurl -X GET \u0026#34;https://api.dtz.rocks/v1/me\u0026#34; -H \u0026#34;X-API-KEY: YOUR_API_KEY\u0026#34; With HTTP Bearer Token To authenticate with a bearer token, the token has to be passed in the header field Authorization: Bearer YOUR_BEARER_TOKEN.\nHere is an example of how to get a bearer token:\n\u0026gt; POST https://identity.dtz.rocks/api/2021-02-21/token/auth \u0026gt; Content-Type: application/json \u0026gt; \u0026gt; { \u0026gt; \u0026#34;username\u0026#34;: \u0026#34;user\u0026#34;, \u0026gt; \u0026#34;password\u0026#34;: \u0026#34;password\u0026#34; \u0026gt; } \u0026lt; { \u0026lt; 
\u0026#34;access_token\u0026#34;: \u0026#34;eyJhb...\u0026#34;, \u0026lt; \u0026#34;scope\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026lt; \u0026#34;token_type\u0026#34;: \u0026#34;Bearer\u0026#34;, \u0026lt; \u0026#34;expires_in\u0026#34;: 86400 \u0026lt; } Here is an example of how to use the bearer token:\ncurl -X GET \u0026#34;https://identity.dtz.rocks/api/2021-02-21/me\u0026#34; -H \u0026#34;Authorization: Bearer {bearer token}\u0026#34; \u0026gt; GET https://identity.dtz.rocks/api/2021-02-21/me \u0026gt; Authorization: Bearer eyJhb... \u0026lt; { \u0026lt; \u0026#34;roles\u0026#34;: [ ] \u0026lt; } With HTTP Basic Auth Header Here is an example curl command showing how to use basic auth to access the DTZ API.\ncurl -X GET -u \u0026#39;apikey:apikey-1234\u0026#39; \u0026#34;https://identity.dtz.rocks/api/2021-02-21/me\u0026#34; As a fallback, or when regular bearer authentication is not available, the service also accepts bearer tokens through the basic auth scheme.\ncurl -X GET -u \u0026#39;bearer:{bearer-token}\u0026#39; \u0026#34;https://identity.dtz.rocks/api/2021-02-21/me\u0026#34; With HTTP Cookie Using the JWT token as a cookie is also allowed. The token has to be passed as a cookie with the name dtz-auth.\nWith GET Parameter Sometimes, third-party providers do not allow setting any headers for authentication. 
For that case, an apikey can also be passed as a GET parameter with the name apiKey.\n","permalink":"https://downtozero.cloud/docs/authentication/","title":"API Authentication"},{"permalink":"https://downtozero.cloud/docs/containers/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/core/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/identity/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/objectstore/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/observability/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/registry/api/","title":"API Reference"},{"permalink":"https://downtozero.cloud/docs/rss2email/api/","title":"API Reference"},{"contents":"Any client can define an arbitrary set of attributes for its logs.\nSome attributes are commonly defined throughout various sources.\nGCP gcp.resource.type Kubernetes k8s.namespace.name k8s.pod.name k8s.container.name k8s.pod.uid Http http.uri http.user_agent http.verb http.status http.version Stripe stripe.event.type stripe.event.id stripe.charge.amount stripe.charge.amount_refunded stripe.charge.captured stripe.charge.currency stripe.charge.description stripe.charge.disputed stripe.charge.failure_code stripe.charge.failure_message stripe.charge.paid stripe.charge.refunded stripe.payment_intent.id stripe.payment_intent.amount stripe.payment_intent.currency stripe.payment_intent.status ","permalink":"https://downtozero.cloud/docs/observability/logs/attributes/","title":"Attributes"},{"contents":"Using DTZ identities The service can be accessed with any identity which has one of the following roles attached to it.\nRequired Role role privileges https://dtz.rocks/containerregistry/admin/{context_id} full access Login via Apikey The preferred way to authenticate with the container registry service is through an apikey. 
To further improve security, limited-privilege service identities should be used instead of a user account.\nThe credentials have to be applied as follows:\nUsername: apikey Password: {Apikey generated for the identity} docker login cr.dtz.rocks -u apikey Password: Login via DTZ Identity If an identity already has an OAuth token, this access token can be used to access the registry.\nThe credentials have to be applied as follows:\nUsername: bearer Password: {AccessToken generated for the identity} docker login cr.dtz.rocks -u bearer Password: ","permalink":"https://downtozero.cloud/docs/registry/authentication/","title":"Authentication"},{"contents":"All accounts within DownToZero operate on a prepaid basis. You must charge your account balance before consuming any resources. Each account top-up is made via IBAN/SEPA bank transfer and generates an official invoice for the transferred amount (Vorauszahlungsrechnung). This model is simple, tax-compliant, and familiar for German/EU customers.\nCharging Your Account Make a SEPA/IBAN transfer to our bank account:\nBank Details\nIBAN: DE08100101236489344438 BIC: QNTODEB2XXX Bank: Olinda Zweigniederlassung Deutschland, Warschauer Platz 11–13, 10245 Berlin How to Top Up 1. Log into the Billing Dashboard. 2. Find your Account ID (shown on the Charge page and in your account settings). 3. Make a SEPA/IBAN transfer using the bank details above and set the transfer reference (Verwendungszweck) to: charge \u0026lt;AccountID\u0026gt;\nExample: charge identity-abcdef\n4. After we receive the transfer, your account balance will be credited. SEPA transfers typically take 1–2 business days.\nInvoices Every top-up creates an official invoice for the exact amount transferred. Once your payment is processed, you can download the invoice from the Billing Dashboard. Monthly usage is shown as a consumption report in the dashboard; only top-ups are invoiced. Notes Use the reference exactly as shown (charge \u0026lt;AccountID\u0026gt;). 
Payments without the correct reference may be delayed. If your bank limits special characters, keep a space or dash between charge and your Account ID (e.g., charge identity-abcdef). If your balance isn’t credited within 3 business days, contact support and include proof of transfer. Unused balances remain in your account until consumed. Balances are not refundable. ","permalink":"https://downtozero.cloud/docs/billing/","title":"Billing"},{"contents":"All accounts within DownToZero operate on a prepaid basis. This means you must add funds to your account balance before you can consume any resources. This approach gives you full control over your spending.\nThe Billing Dashboard You can manage all aspects of your billing through the Billing Dashboard. The dashboard provides an overview of your current balance, as well as your recent and overall consumption.\nCharging Your Account To add funds to your account, you will make a standard IBAN/SEPA bank transfer.\nNavigate to the Charge tab in the Billing Dashboard. You will find our bank details and your unique Account ID. Make a SEPA transfer and be sure to use the exact reference provided to ensure the funds are allocated to your account correctly. SEPA transfers typically take 1–2 business days to process. Once the transfer is received, your account balance will be credited.\nBank Details IBAN: DE08100101236489344438 BIC: QNTODEB2XXX Bank: Olinda Zweigniederlassung Deutschland, Warschauer Platz 11–13, 10245 Berlin Invoices For every top-up you make, an official invoice (Vorauszahlungsrechnung) is generated for the transferred amount. This model is simple, tax-compliant, and familiar for German/EU customers.\nYou can download your invoices from the Billing Dashboard once your payment has been processed. Monthly usage is shown as a consumption report in the dashboard; only top-ups are invoiced. 
Important Notes Use the correct reference: Payments without the correct reference (e.g., charge identity-abcdef) may be delayed. Unused balance: Any unused balance will remain in your account until it is consumed. Balances are not refundable. Contact support: If your balance isn’t credited within 3 business days, please contact support and include proof of your transfer. Further Reading For more detailed information about our billing policies and for answers to common questions, please see the following documents:\nBilling Documentation Frequently Asked Questions ","permalink":"https://downtozero.cloud/docs/identity/charging-your-account/","title":"Charging Your Account"},{"contents":"Every entity inside DTZ needs to have a parent context. This represents the organizational structure that holds an entity, allows access control, and provides accounting and billing. Every user has a default context attached to their session. So whenever a user is logged in, regardless of the method (apikey, OAuth, etc.) 
the session already has a context attached to it.\nflowchart LR uid[User Identity] -- \u0026#34;has access\u0026#34; --\u0026gt; context subgraph context Context[Context Core] -- \u0026#34;owns\u0026#34; --\u0026gt; Objectstore Context -- \u0026#34;owns\u0026#34; --\u0026gt; Containers Context -- \u0026#34;owns\u0026#34; --\u0026gt; Rss2Email Context -- \u0026#34;owns\u0026#34; --\u0026gt; E@{ shape: processes, label: \u0026#34;Other Services\u0026#34;} end The current context is always shown in the title bar on the top left.\nChanging the context can be achieved by selecting the new context from the drop-down menu.\nA new context can be created through the main page or the following link.\nhttps://dtz.rocks/new/ | New Context\nIn Terraform, the context is implicitly derived from the user session or fetched using the dtz_context data source—even if it\u0026rsquo;s not explicitly declared in the resource block.\nContext Admin Context admin is a role that grants the owning identity the right to control rights and roles regarding the context. The creator of the context always gets assigned the role of context admin.\nAlso, a new identity is created for the context, which serves as the service principal within the context. The identity is created with the following alias.\nadmin@{context_id}.dtz.rocks\nAll context admins automatically get access to all used services within the context.\n","permalink":"https://downtozero.cloud/docs/context/","title":"Context"},{"contents":"The Profile page is where you control how and where you receive email notifications for your RSS feed subscriptions. You can set your destination email address and use the powerful Mustache templating language to fully customize the layout and content of the notification emails.\nURL:\nhttps://rss2email.dtz.rocks/profile/\nEmail Address Configuration Set the email address where you want to receive your feed updates.\nField: Email Address Description: Enter the destination email address for notifications. 
Example: your.email@example.com Email Template Customization You can tailor the Subject and Body of your notification emails using Mustache templates. This allows you to arrange the feed data exactly how you like it.\nAvailable Template Variables When a new item is found in a feed, you can use the following variables in your templates to access its data:\nVariable Description {{title}} The title of the feed item. {{link}} The direct URL to the original article or post. {{description}} A short summary or excerpt of the item. HTML tags within this variable will be escaped (e.g., \u0026amp;lt;b\u0026amp;gt;). {{content}} The full content of the feed item, which may include HTML. This is also escaped by default. {{date}} The publication date of the item. Basic Example Here’s a simple template to get you started.\nSubject Template:\nNew Post: {{title}} Body Template:\nA new item has been posted: Title: {{title}} Link: {{link}} Published on: {{date}} Handling HTML Content (Advanced) RSS feeds often include HTML in their description or content fields for formatting, links, or images.\nTo output the HTML as plain text (escaping tags), use two curly braces: {{description}}. To render the HTML as actual formatted content in your email, use three curly braces {{{content}}} or an ampersand {{\u0026amp; content}}. This is useful for creating rich, readable emails that preserve the original formatting.\nComplete Example Let\u0026rsquo;s create a more advanced template that includes a formatted title, a clickable link, and the full, unescaped HTML content.\nSample Feed Item Data:\nTitle: Announcing Our New API Link: https://example.com/blog/new-api Content: Check out our \u0026lt;b\u0026gt;new API\u0026lt;/b\u0026gt;! It's a game-changer. \u0026lt;a href=\u0026quot;https://example.com/docs\u0026quot;\u0026gt;Read the docs here\u0026lt;/a\u0026gt;. 
Subject Template:\n[New Feed] {{title}} Body Template:\n\u0026lt;h1\u0026gt;\u0026lt;a href=\u0026#34;{{link}}\u0026#34;\u0026gt;{{title}}\u0026lt;/a\u0026gt;\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;\u0026lt;em\u0026gt;Published on: {{date}}\u0026lt;/em\u0026gt;\u0026lt;/p\u0026gt; \u0026lt;hr\u0026gt; \u0026lt;div\u0026gt; {{{content}}} \u0026lt;/div\u0026gt; \u0026lt;hr\u0026gt; \u0026lt;p\u0026gt;\u0026lt;a href=\u0026#34;{{link}}\u0026#34;\u0026gt;Read the original post here\u0026lt;/a\u0026gt;\u0026lt;/p\u0026gt; Resulting Email:\nSubject: [New Feed] Announcing Our New API\nAnnouncing Our New API Published on: 2025-09-10\nCheck out our new API! It's a game-changer. Read the docs here. *** [Read the original post here](https://example.com/blog/new-api) Saving Your Changes Once you\u0026rsquo;re happy with your email address and templates, click the Save button. Your new settings will be used for all future notifications.\n","permalink":"https://downtozero.cloud/docs/rss2email/profile/","title":"Customizing Notifications"},{"contents":" The dtz_container_registry data source allows you to retrieve information about the container registry from the DownToZero.cloud service.\nExample Usage data \u0026#34;dtz_container_registry\u0026#34; \u0026#34;example\u0026#34; { } output \u0026#34;registry_url\u0026#34; { value = data.dtz_container_registry.example.url } output \u0026#34;image_count\u0026#34; { value = data.dtz_container_registry.example.image_count } Schema Read-Only url (String) The URL of the container registry server. image_count (Int64) The number of images in the container registry. 
Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/container_registry Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/data-sources/container_registry.md ","permalink":"https://downtozero.cloud/docs/terraform/datasources/dtz_container_registry/","title":"dtz_container_registry"},{"contents":" The dtz_containers_domain data source returns information about a registered Containers domain. If name is provided it returns that domain. If name is omitted, it returns the system-generated domain ending with .containers.dtz.dev when present; otherwise it falls back to the first domain in the list.\nExample Usage # Return the default (system) domain when present, else the first data \u0026#34;dtz_containers_domain\u0026#34; \u0026#34;default\u0026#34; {} output \u0026#34;default_domain\u0026#34; { value = data.dtz_containers_domain.default.name } # Return a specific domain by name data \u0026#34;dtz_containers_domain\u0026#34; \u0026#34;example\u0026#34; { name = \u0026#34;example.com\u0026#34; } output \u0026#34;example_domain_verified\u0026#34; { value = data.dtz_containers_domain.example.verified } Schema Optional name (String) Domain name to fetch. If omitted, the system domain is returned. 
Read-Only context_id (String) created (String) verified (Boolean) Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/containers_domain Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/data-sources/containers_domain.md ","permalink":"https://downtozero.cloud/docs/terraform/datasources/dtz_containers_domain/","title":"dtz_containers_domain"},{"contents":" The dtz_containers_domain resource allows you to create, read, and delete container domains in the DownToZero.cloud service.\nExample Usage resource \u0026#34;dtz_containers_domain\u0026#34; \u0026#34;example\u0026#34; { name = \u0026#34;example.com\u0026#34; } Schema Required name (String) The name of the domain. Changing this value always forces a recreate. Read-Only context_id (String) The context ID associated with the domain. verified (Boolean) Whether the domain has been verified. created (String) The timestamp when the domain was created. Import Import is supported using the following syntax:\nTerraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/containers_domain Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/containers_domain.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_containers_domain/","title":"dtz_containers_domain"},{"contents":" The dtz_containers_job resource allows you to create, update, and delete container jobs in the DownToZero.cloud service.\nExample Usage Basic Job resource \u0026#34;dtz_containers_job\u0026#34; \u0026#34;example\u0026#34; { name = \u0026#34;my-container-job\u0026#34; container_image = \u0026#34;docker.io/library/hello-world:latest\u0026#34; schedule_type = \u0026#34;precise\u0026#34; schedule_cron = \u0026#34;52 3 * * *\u0026#34; #daily at 03:52am } Job with Environment Variables resource \u0026#34;dtz_containers_job\u0026#34; \u0026#34;example_with_env\u0026#34; { 
name = \u0026#34;my-container-job\u0026#34; container_image = \u0026#34;docker.io/library/hello-world:latest\u0026#34; schedule_type = \u0026#34;precise\u0026#34; schedule_cron = \u0026#34;0 0 * * *\u0026#34; #daily at midnight env_variables = { PORT = \u0026#34;8080\u0026#34; DATABASE_URL = \u0026#34;postgres://localhost:5432/mydb\u0026#34; API_KEY = var.api_key ENVIRONMENT = \u0026#34;production\u0026#34; } } Job with Private Registry Authentication resource \u0026#34;dtz_containers_job\u0026#34; \u0026#34;private_registry_job\u0026#34; { name = \u0026#34;private-registry-job\u0026#34; container_image = \u0026#34;my-registry.com/my-app:v1.0.0\u0026#34; schedule_type = \u0026#34;none\u0026#34; container_pull_user = \u0026#34;myuser\u0026#34; container_pull_pwd = var.registry_password } Schema Required container_image (String) The Docker image to use for the job. If no tag or digest is specified, :latest will be automatically appended. name (String) The name of the container job. schedule_type (String) The schedule type. Must be one of: \u0026lsquo;relaxed\u0026rsquo;, \u0026lsquo;precise\u0026rsquo;, or \u0026rsquo;none\u0026rsquo;. Optional container_pull_pwd (String, Sensitive) The password for private image registry authentication. container_pull_user (String) The username for private image registry authentication. env_variables (Map of String) Environment variables to pass to the container. Each variable can be a simple string value. schedule_cron (String) The cron expression for job scheduling (used when schedule_type is \u0026ldquo;precise\u0026rdquo;). schedule_repeat (String) The repeat interval for the job (used when schedule_type is not \u0026ldquo;cron\u0026rdquo;). Read-Only id (String) The ID of this resource. Validation container_image must include a tag (e.g., :1.2 or :latest) or a digest (e.g., @sha256:...). 
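The tag-or-digest validation rule above can be sketched as a small shell check. This is illustrative only: has_tag_or_digest is a hypothetical helper, not part of the provider, and it mirrors the stated rule under the assumption that a tag is a colon appearing after the last slash (so a registry port alone does not count).

```shell
# Hypothetical helper mirroring the tag-or-digest validation rule above.
# A tag is a ':' after the last '/', so a registry port alone does not count.
has_tag_or_digest() {
  case "${1##*/}" in
    *@sha256:*) echo yes ;;   # pinned by digest
    *:*)        echo yes ;;   # pinned by tag
    *)          echo no  ;;
  esac
}

has_tag_or_digest "docker.io/library/hello-world:latest"   # yes
has_tag_or_digest "my-registry.com:5000/app"               # no (port only, no tag)
has_tag_or_digest "nginx@sha256:abc123"                    # yes
```

The same rule is why an untagged reference like my-registry.com:5000/app would be rejected even though it contains a colon.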
Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/containers_job Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/containers_job.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_containers_job/","title":"dtz_containers_job"},{"contents":" Creates and manages a container service that runs your Docker container on the DownToZero.cloud platform.\nExample Usage # Basic web service (no authentication required) resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;web_app\u0026#34; { prefix = \u0026#34;/api\u0026#34; container_image = \u0026#34;nginx:alpine\u0026#34; } # Application with environment variables (no authentication) resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;app\u0026#34; { prefix = \u0026#34;/app\u0026#34; container_image = \u0026#34;myregistry.com/myapp:v1.2.3\u0026#34; env_variables = { PORT = \u0026#34;8080\u0026#34; DATABASE_URL = \u0026#34;postgres://...\u0026#34; API_KEY = var.api_key } } # Private registry with authentication resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;private_app\u0026#34; { prefix = \u0026#34;/private\u0026#34; container_image = \u0026#34;private-registry.com/app:latest\u0026#34; container_pull_user = \u0026#34;registry-user\u0026#34; container_pull_pwd = var.registry_password login = { provider_name = \u0026#34;dtz\u0026#34; } } # Using specific digest for immutable deployments resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;production\u0026#34; { prefix = \u0026#34;/prod\u0026#34; container_image = \u0026#34;myapp@sha256:a1b2c3d4e5f6789...\u0026#34; env_variables = { ENV = \u0026#34;production\u0026#34; } login = { provider_name = \u0026#34;dtz\u0026#34; } } Schema Required prefix (String) The URL path prefix for your service (e.g., /api, /app). Must be unique within your context. container_image (String) The Docker image to run. 
Must include a tag or a digest: With tag: nginx:1.21 or myregistry.com/app:v2.0 With digest: nginx@sha256:abc123... (recommended for production) Optional container_pull_user (String) Username for authenticating with private container registries. container_pull_pwd (String, Sensitive) Password for authenticating with private container registries. env_variables (Map of String) Environment variables passed to the container at runtime. login (Object, Optional) Enables DTZ authentication for the service. If provided, must contain: provider_name (String, Required) Must be \u0026quot;dtz\u0026quot; (only supported provider). Read-Only id (String) The unique identifier of the service. container_image_version (String) Computed output. Use the tag or digest directly in container_image instead. Argument Reference Container Image Validation The container_image must include either a tag (e.g., :1.2 or :latest) or a digest (e.g., @sha256:...).\nPrivate Registry Authentication For private registries, provide both container_pull_user and container_pull_pwd:\nresource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;private\u0026#34; { prefix = \u0026#34;/app\u0026#34; container_image = \u0026#34;private.registry.com/app:latest\u0026#34; container_pull_user = \u0026#34;username\u0026#34; container_pull_pwd = var.registry_password } DTZ Authentication (Login Attribute) The login attribute is optional and can be used in two ways:\nNo login attribute: Service is publicly accessible Login attribute with provider_name = \u0026ldquo;dtz\u0026rdquo;: Service requires DTZ authentication to access # Public service (no login attribute) resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;public_api\u0026#34; { prefix = \u0026#34;/public\u0026#34; container_image = \u0026#34;my-public-api:latest\u0026#34; } # Authenticated service (login attribute with provider_name) resource \u0026#34;dtz_containers_service\u0026#34; \u0026#34;private_api\u0026#34; { prefix = 
\u0026#34;/private\u0026#34; container_image = \u0026#34;my-private-api:latest\u0026#34; login = { provider_name = \u0026#34;dtz\u0026#34; } } Import Services can be imported using their service ID:\nterraform import dtz_containers_service.example \u0026lt;service_id\u0026gt; Find your service ID in the DTZ dashboard or via the API.\nTerraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/containers_service Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/containers_service.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_containers_service/","title":"dtz_containers_service"},{"contents":" The dtz_context data source allows you to retrieve information about the DownToZero context.\nDTZ context docs\nSchema Read-Only alias (String) id (String) The ID of this resource. created (String) The timestamp when the context was created. context_identity (String) The identity associated with the context, typically in the form admin@\u0026lt;context_id\u0026gt;.dtz.rocks. Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/context Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/data-sources/context.md ","permalink":"https://downtozero.cloud/docs/terraform/datasources/dtz_context/","title":"dtz_context"},{"contents":" The dtz_context resource allows you to create, update, and delete contexts in the DownToZero.cloud service. 
A context represents a specific configuration or environment within the DTZ platform.\nDTZ context docs\nIn Terraform, the context is implicitly derived from the user session or fetched using the dtz_context data source—even if it’s not explicitly declared in the resource block.\nExample Usage resource \u0026#34;dtz_context\u0026#34; \u0026#34;example\u0026#34; { alias = \u0026#34;production\u0026#34; } Schema Required alias (String) A user-defined alias for the context. Read-Only id (String) The ID of this resource. Import Import is supported using the following syntax:\nterraform import dtz_context.example \u0026lt;context_id\u0026gt; Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/context Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/context.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_context/","title":"dtz_context"},{"contents":" The dtz_identity_apikey resource allows you to create, update, and delete API keys in the DownToZero.cloud service.\nExample Usage resource \u0026#34;dtz_identity_apikey\u0026#34; \u0026#34;example\u0026#34; { alias = \u0026#34;my-api-key\u0026#34; context_id = \u0026#34;my-api-key-context\u0026#34; } Schema Required context_id (String) The context ID of the API key. Optional alias (String) The alias of the API key. Read-Only apikey (String) The API key. 
Import Import is supported using the following syntax:\nterraform import dtz_identity_apikey.example \u0026lt;apikey_id\u0026gt; Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/identity_apikey Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/identity_apikey.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_identity_apikey/","title":"dtz_identity_apikey"},{"contents":" The dtz_rss2email_feed data source allows you to retrieve information about an RSS2Email feed from the DownToZero.cloud service.\nExample Usage data \u0026#34;dtz_rss2email_feed\u0026#34; \u0026#34;example\u0026#34; { url = \u0026#34;https://example.com/rss-feed\u0026#34; } output \u0026#34;feed_name\u0026#34; { value = data.dtz_rss2email_feed.example.name } Schema Optional url (String) The URL of the RSS feed to retrieve information about. Read-Only id (String) The ID of this resource. name (String) The name of the RSS feed. enabled (Boolean) Whether the feed is enabled or not. last_check (String) The timestamp of the last check performed on the feed. last_data_found (String) The timestamp when data was last found in the feed. Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/rss2email_feed Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/data-sources/rss2email_feed.md ","permalink":"https://downtozero.cloud/docs/terraform/datasources/dtz_rss2email_feed/","title":"dtz_rss2email_feed"},{"contents":" The dtz_rss2email_feed resource allows you to create, update, and delete RSS2Email feeds in the DownToZero.cloud service.\nExample Usage resource \u0026#34;dtz_rss2email_feed\u0026#34; \u0026#34;example\u0026#34; { url = \u0026#34;https://example.com/rss-feed\u0026#34; enabled = true } Schema Required url (String) The URL of the RSS feed. 
Optional enabled (Boolean) Whether the feed is enabled or not. Defaults to false. Read-Only id (String) The ID of this resource. name (String) The name of the RSS feed. last_check (String) The timestamp of the last check performed on the feed. last_data_found (String) The timestamp when data was last found in the feed. Import Import is supported using the following syntax:\nterraform import dtz_rss2email_feed.example \u0026lt;feed_id\u0026gt; Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/rss2email_feed Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/rss2email_feed.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_rss2email_feed/","title":"dtz_rss2email_feed"},{"contents":" The dtz_rss2email_profile data source allows you to retrieve the current RSS2Email profile configuration from the DownToZero.cloud service. This profile defines the email settings for RSS feed notifications.\nExample Usage data \u0026#34;dtz_rss2email_profile\u0026#34; \u0026#34;current\u0026#34; {} output \u0026#34;profile_email\u0026#34; { value = data.dtz_rss2email_profile.current.email } Schema Read-Only email (String) The email address where RSS notifications are sent. subject (String) The subject template for the email notifications. It may contain placeholders like {title} that are replaced with actual content from the RSS feed. body (String) The body template for the email notifications. It may contain placeholders like {title}, {link}, {description} that are replaced with actual content from the RSS feed. 
Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/data-sources/rss2email_profile Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/data-sources/rss2email_profile.md ","permalink":"https://downtozero.cloud/docs/terraform/datasources/dtz_rss2email_profile/","title":"dtz_rss2email_profile"},{"contents":" The dtz_rss2email_profile resource allows you to create, update, and delete RSS2Email profiles in the DownToZero.cloud service. An RSS2Email profile defines the email settings for RSS feed notifications.\nExample Usage resource \u0026#34;dtz_rss2email_profile\u0026#34; \u0026#34;example\u0026#34; { email = \u0026#34;user@example.com\u0026#34; subject = \u0026#34;New RSS Item: {title}\u0026#34; body = \u0026#34;A new item has been published:\\n\\nTitle: {title}\\nLink: {link}\\nDescription: {description}\u0026#34; } Schema Required email (String) The email address where RSS notifications will be sent. Optional subject (String) The subject template for the email notifications. You can use placeholders like {title} that will be replaced with actual content from the RSS feed. body (String) The body template for the email notifications. You can use placeholders like {title}, {link}, {description} that will be replaced with actual content from the RSS feed. Read-Only id (String) The ID of this resource. Import Import is supported using the following syntax:\nterraform import dtz_rss2email_profile.example \u0026lt;profile_id\u0026gt; Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs/resources/rss2email_profile Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/resources/rss2email_profile.md ","permalink":"https://downtozero.cloud/docs/terraform/resources/dtz_rss2email_profile/","title":"dtz_rss2email_profile"},{"contents":" ecoMode is a mode of operation that allows for more efficient processing of workloads. 
Definition A machine is running in ecoMode when all the energy it consumes is produced by DTZ-owned systems.\nCriteria for Running in ecoMode energy is produced internally (on privately owned hardware), mainly through solar panels energy is stored in batteries, for later consumption energy is directly consumed by nodes energy from the batteries can also be consumed by the nodes Operational Aspects EcoMode can be enforced on Jobs (asynchronous workload) EcoMode cannot be set on Services (synchronous workload) DTZ always tries to run a Job in ecoMode and falls back to a normal node if no ecoMode node is available ","permalink":"https://downtozero.cloud/docs/containers/ecomode/","title":"ecoMode"},{"contents":"Services in DTZ are exposed through HTTP endpoints.\nFor example, if you deploy nginx for the URI \u0026lsquo;/\u0026rsquo;, the content becomes available under https://{domain}/.\nIn addition to the deployed endpoints, we also expose endpoints for DTZ-internal purposes.\nDTZ provided Endpoints uri response sample description /.well-known/dtz-owner context-d8b951fb-01b4-45e6-875e-0d110de35c6e id of the context hosting this domain ","permalink":"https://downtozero.cloud/docs/containers/endpoints/","title":"Exposed Endpoints"},{"contents":"Welcome to the DowntoZero Cloud FAQ! Here you\u0026rsquo;ll find answers to the most common questions about our cloud services, pricing, and features. If your question is not answered here, feel free to contact us at support@downtozero.cloud.\nGeneral Questions What is DowntoZero Cloud? DowntoZero Cloud is a scalable and secure cloud platform that allows you to deploy, manage, and monitor containerized applications. We provide a unique **pay-per-watt** pricing model, which enables users to control costs based on the power consumption of their deployed resources. How do I create an account? You can create an account by visiting [our website](https://downtozero.cloud/) and clicking the **Sign Up** button. 
After filling in your details and verifying your email, you'll gain access to your personalized **DowntoZero Cloud Dashboard**. What services does DowntoZero Cloud offer? We offer the following services: - **Container Registry**: A private registry to store and manage container images. - **Container Services**: Deploy, scale, and manage containerized applications in a secure environment. - **Real-time Monitoring**: View resource usage and power consumption metrics for each deployment. - **Objectstore**: A simple and scalable object storage solution for your files and assets. Technical Questions How do I push a container image to the DowntoZero Cloud registry? To push a container image to our registry, follow these steps: docker tag \u0026lt;your-image\u0026gt; cr.dtz.rocks/\u0026lt;image-name\u0026gt;:\u0026lt;tag\u0026gt; docker push cr.dtz.rocks/\u0026lt;image-name\u0026gt;:\u0026lt;tag\u0026gt; Can I scale my container automatically? Yes, DowntoZero Cloud offers autoscaling. You can set up autoscaling rules based on CPU, memory usage, or power consumption from the deployment settings in your dashboard. What monitoring tools are available? DowntoZero Cloud offers built-in monitoring for: •\tPower consumption •\tCPU usage •\tLogs for debugging All of these are accessible via the dashboard for each deployment.\nWhat HTTP protocols and standards are supported? We currently support HTTP/1 and HTTP/2. We also support Server-Sent-Events (SSE) for response streaming purposes. Billing \u0026amp; Pricing How does the pay-per-watt model work? Our pay-per-watt model ensures that you only pay for the power your deployments consume. The billing system monitors your real-time power consumption, allowing you to have greater control over costs. You can set usage alerts to notify you when consumption exceeds your set thresholds. How do I charge my account? Accounts must be charged in advance via IBAN/SEPA bank transfer: Log in to the Billing Dashboard. 
Copy your Account ID and the bank details. Make a transfer with the reference format: charge Example: charge identity-abcdef Once we receive your transfer (typically 1–2 business days), your account balance will be credited automatically. What are the bank details for charging my account?\nIBAN: DE08100101236489344438 BIC: QNTODEB2XXX Bank: Olinda Zweigniederlassung Deutschland, Warschauer Platz 11–13, 10245 Berlin Beneficiary: DownToZero (Apimeister Consulting GmbH) How long does it take until my balance is credited? SEPA transfers usually take 1–2 business days. If your balance is not updated within 3 days, please contact support with proof of transfer. Do I get an invoice for my payments? Yes. Every top-up payment generates an official invoice (Vorauszahlungsrechnung) for the transferred amount. Invoices are tax-compliant and can be downloaded from the Billing Dashboard after your payment is processed. Do I get an invoice for monthly usage? No. Only prepayments (top-ups) are invoiced. Monthly usage is shown in your dashboard as a consumption report, but it is not invoiced separately. This ensures invoices always match your actual payments, which is required for German/EU accounting. What happens if I forget to include the reference code? If the transfer reference (e.g., charge identity-abcdef) is missing or incorrect, your payment may not be allocated automatically. Please contact support and provide proof of transfer so we can manually assign it to your account. Can I get a refund for unused balance? No. Prepaid balances are non-refundable but remain in your account until fully consumed. Are there any free-tier options? At this time, we do not offer a free tier. However, we provide flexible pricing based on your real-time consumption, allowing you to optimize costs according to your needs. Data Residency \u0026amp; Compliance Where is my data stored? All customer data — including containers, images, logs, and backups — are stored exclusively within Germany. 
Our infrastructure runs on German-based data centers that meet ISO 27001 and GDPR standards. We do not replicate or process your data outside the European Union. Does DownToZero Cloud comply with the GDPR? Yes. DownToZero Cloud is fully GDPR-compliant. We act as a data processor under the GDPR, and we process customer data only within the European Union. Our processing adheres to the principles of data minimization, purpose limitation, and storage limitation. You can request a Data Processing Agreement (DPA) for your organization at any time. Can I choose the location where my data is hosted? Currently, all services are hosted exclusively in Germany, ensuring consistent data protection standards and regulatory compliance. As we expand, any additional regions will remain within the European Economic Area (EEA). Is my data ever transferred outside the EU or to third countries? No. DownToZero Cloud does not transfer any customer data outside the EU. All data processing, storage, and backups remain entirely within Germany-based facilities. If an exception is ever required (e.g. third-party integration explicitly requested by the customer), we ensure it is covered by standard contractual clauses (SCCs) and with explicit customer consent. Which data protection standards or certifications do you follow? Our hosting providers and partners maintain ISO 27001 certification. All systems are subject to continuous security reviews, strict access controls, and audit trails in compliance with EU data protection law. How can I request deletion of my data? You can request full deletion of your account and associated data via our support portal or through your account dashboard. Once confirmed, we permanently delete all related data — including container images, object store contents, and logs — within 30 days, unless a longer retention is legally required. A deletion confirmation report can be issued upon request. Do you offer a Data Processing Agreement (DPA)? Yes. 
A standard DPA compliant with Art. 28 GDPR is available to all customers. It outlines how we handle customer data, subprocessors, and security controls. You can review and sign it electronically through our support or account portal. How do you ensure compliance with German and EU data protection laws? We operate under German jurisdiction and follow guidance from the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit (BfDI). Regular audits ensure that all storage, processing, and data transfer mechanisms comply with EU GDPR, BDSG-neu, and Telekommunikation-Telemedien-Datenschutz-Gesetz (TTDSG). Why does DownToZero Cloud keep all data in Germany? Because we believe that sovereign cloud services should remain under European legal protection. Hosting entirely in Germany ensures your data is covered by one of the world’s strongest data protection frameworks, providing legal certainty and transparency. Security Questions Is my data secure? Yes, DowntoZero Cloud takes security seriously. We implement several layers of security to protect your data, including: •	Encryption at rest and in transit •	Role-based access control (RBAC) for managing permissions •	Secrets management to securely store sensitive information How can I manage access to my cloud resources? You can manage access through role-based access control (RBAC), allowing you to assign specific permissions to users and services. This ensures only authorized users can access critical resources. Support How do I contact support? If you have any questions or issues, you can reach out to our support team at support@downtozero.cloud. We’re here to help! Still have questions? Check out our Documentation or contact our team for personalized support.\n","permalink":"https://downtozero.cloud/docs/faq/","title":"FAQ"},{"contents":"How long does an authenticated session stay open? The issued OAuth token expires after 24 hours. Is there any way to extend a session without re-authentication? 
There is no mechanism to extend the lifetime of a session. ","permalink":"https://downtozero.cloud/docs/identity/faq/","title":"FAQ"},{"contents":"API Keys are part of the identity service. Keys can be obtained through the New Tab.\nAPI Keys can only be created for the current context. API Keys are scoped to a context. API Keys are scoped to an identity. After a successful creation, the resulting API key will be displayed.\n","permalink":"https://downtozero.cloud/docs/identity/getting-an-api-key/","title":"Getting an API Key"},{"contents":"Welcome to DownToZero Cloud! This guide will walk you through the steps to get started with our cloud services. Whether you\u0026rsquo;re a beginner or an experienced user, this guide covers the essential steps to begin leveraging the power of DownToZero Cloud.\nStep 1: Create an Account To begin using DownToZero Cloud, you first need to create an account:\nWeb UI Go to DownToZero Cloud Signup. Enter your email and password Click the Sign up button Once registered, you can log in via the DownToZero Cloud Login. cURL For cURL, please sign up via the Web UI, then generate an API key for API access.\nGo to New API Key. Click the Create API key button\nExport the generated API key export DTZ_API_KEY=apikey-sample1234 Step 2: Enable the Container Services Web UI Enable the Container Services Open the Container Services cURL Enable Container Services curl -X \u0026#39;POST\u0026#39; -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \u0026#39;https://containers.dtz.rocks/api/2021-02-21/enable\u0026#39; Step 3: Deploy NGINX as a service Web UI Select the Services section Click Create Service Enter the service details for NGINX 3.1. The prefix should be set to /. 3.2. The image should be set to nginx. 3.3. 
Click Create Service.\nIn the Service Overview, the new service appears with a service link cURL Create the service curl -X \u0026#39;POST\u0026#39; \\ \u0026#39;https://containers.dtz.rocks/api/2021-02-21/service\u0026#39; \\ -H \u0026#39;Content-Type: application/json\u0026#39; \\ -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \\ -d \u0026#39;{\u0026#34;prefix\u0026#34;: \u0026#34;/\u0026#34;,\u0026#34;containerImage\u0026#34;: \u0026#34;nginx\u0026#34;}\u0026#39; This returns the following JSON. { \u0026#34;contextId\u0026#34;: \u0026#34;context-fxev27jf\u0026#34;, \u0026#34;enabled\u0026#34;: true, \u0026#34;serviceId\u0026#34;: \u0026#34;service-ibok4vnd\u0026#34;, \u0026#34;created\u0026#34;: \u0026#34;2025-08-30T09:29:09.739001675Z\u0026#34;, \u0026#34;updated\u0026#34;: \u0026#34;2025-08-30T09:29:09.739002547Z\u0026#34;, \u0026#34;prefix\u0026#34;: \u0026#34;/\u0026#34;, \u0026#34;containerImage\u0026#34;: \u0026#34;nginx\u0026#34; } Step 4: Verify the deployment Web UI In the Service Overview, click the service link Verify the NGINX deployment cURL Fetch the generated domain for your context. curl -H \u0026#34;X-API-KEY: $DTZ_API_KEY\u0026#34; \u0026#39;https://containers.dtz.rocks/api/2021-02-21/domain\u0026#39; This returns the following JSON. [{ \u0026#34;contextId\u0026#34;: \u0026#34;context-fxev27jf\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;fxev27jf.containers.dtz.dev\u0026#34;, \u0026#34;verified\u0026#34;: true, \u0026#34;created\u0026#34;: \u0026#34;2025-08-30T08:45:26.656377567Z\u0026#34;, \u0026#34;updated\u0026#34;: \u0026#34;2025-08-30T08:45:26.656378477Z\u0026#34; }] Use the name value as the domain to verify your deployment. curl \u0026#34;https://DOMAIN_NAME\u0026#34; With this guide, you\u0026rsquo;re ready to start using DownToZero Cloud! 
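The cURL verification step above returns a JSON array containing the generated domain. As a minimal sketch, the name field can be extracted without extra tooling such as jq; this assumes the response shape shown in the sample above, and the response variable here is a hypothetical copy of that sample rather than a live API call:

```shell
# Hypothetical sample response, shaped like the /domain call output above.
response='[{ "contextId": "context-fxev27jf", "name": "fxev27jf.containers.dtz.dev", "verified": true }]'

# grep -o keeps only the matching '"name": "..."' fragment;
# cut takes the value between the second pair of quotes.
domain=$(printf '%s' "$response" | grep -o '"name": *"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$domain"   # fxev27jf.containers.dtz.dev
```

In a live session you would instead capture the real response, e.g. response=$(curl -s -H "X-API-KEY: $DTZ_API_KEY" 'https://containers.dtz.rocks/api/2021-02-21/domain'), and then verify with curl "https://$domain".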
Should you encounter any issues or have questions, please reach out to our support team at contact@downtozero.cloud.\n","permalink":"https://downtozero.cloud/docs/gettingstarted/","title":"Getting Started"},{"contents":"Welcome to the DowntoZero Cloud OCI-compliant container registry! This guide will help you get started with pushing and pulling container images using our registry.\nPrerequisites Before you begin, ensure you have the following:\nDocker installed on your machine. Download Docker An active account on DowntoZero Cloud Enable the Container Registry Service Generate API Key for registry authentication - Getting an API Key Step 1: Create an Account If you don\u0026rsquo;t have an account yet, follow these steps:\nVisit DowntoZero Cloud Click on Sign Up Fill in the required information Step 2: Generate an API Key To securely interact with the registry, you\u0026rsquo;ll need to generate an API key:\nLog in to your DowntoZero Cloud Navigate to Identity Service Click on New to create a new Authentication Provide a name for the token (e.g., \u0026ldquo;Docker Registry Access\u0026rdquo;) Set the appropriate permissions for the token Click Create Token Copy the generated token and store it securely (you won\u0026rsquo;t be able to view it again) Step 3: Log In to the Registry Use your API Key to log in to the registry:\necho \u0026#34;\u0026lt;your-api-key\u0026gt;\u0026#34; | docker login cr.dtz.rocks -u apikey --password-stdin Step 4: Tag Your Image Tag your local Docker image to match the registry format:\ndocker tag \u0026lt;local-image\u0026gt; cr.dtz.rocks/\u0026lt;image-name\u0026gt;:\u0026lt;tag\u0026gt; : The name of your local image : Desired image name in the registry : Image tag (optional, defaults to latest) Example:\ndocker tag my-app cr.dtz.rocks/my-app:latest Step 5: Push the Image Push the tagged image to the registry:\ndocker push cr.dtz.rocks/\u0026lt;image-name\u0026gt;:\u0026lt;tag\u0026gt; Example:\ndocker push cr.dtz.rocks/my-app:latest 
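Since the registry reference format cr.dtz.rocks/&lt;image-name&gt;:&lt;tag&gt; recurs in every tag, push, and pull command, it can help to build it in one place. The sketch below assumes the naming scheme described above; registry_ref is a hypothetical helper of ours, not part of any DTZ tooling:

```shell
# Build a full registry reference from an image name and an optional tag.
# registry_ref is a hypothetical helper; the tag defaults to "latest",
# matching the registry's documented default.
registry_ref() {
  local name="$1" tag="${2:-latest}"
  printf 'cr.dtz.rocks/%s:%s\n' "$name" "$tag"
}

ref=$(registry_ref my-app)   # cr.dtz.rocks/my-app:latest
registry_ref my-app v1.2.0   # prints cr.dtz.rocks/my-app:v1.2.0
```

You could then tag and push in one line, e.g. docker tag my-app "$(registry_ref my-app)" && docker push "$(registry_ref my-app)".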
Step 6: Pull the Image You or others can now pull the image from the registry:\ndocker pull cr.dtz.rocks/\u0026lt;image-name\u0026gt;:\u0026lt;tag\u0026gt; Example:\ndocker pull cr.dtz.rocks/my-app:latest ","permalink":"https://downtozero.cloud/docs/registry/gettingstarted/","title":"Getting Started"},{"contents":"Google supports log forwarding through Log Routers. You can configure a router to forward all logs into a Pub/Sub Topic. A push subscription then forwards the log entry to DTZ.\nTo configure the push endpoint, you can use the following URL.\nhttps://observability.dtz.rocks/gcp/logs?apiKey=00000000-0000-0000-0000-000000000000\u0026amp;contextId=00000000-0000-0000-0000-000000b25aa6 Log interpretation text payload is shown as payload json payload is transformed into attributes one-by-one tags are transformed into attributes one-by-one ","permalink":"https://downtozero.cloud/docs/observability/sources/google-logging/","title":"Google Logging"},{"contents":"The Green Software Foundation started working on Green Software Patterns.\nSince we are very much aligned with the goals of the Green Software Foundation, we wanted to bring some of those patterns, applied in the context of DownToZero.Cloud, to everybody\u0026rsquo;s attention.\nGreen Software Patterns Artificial Intelligence (AI) https://patterns.greensoftware.foundation/catalog/ai/\nSince we are not offering any AI-related services, we skip this section of the catalog.\nCloud https://patterns.greensoftware.foundation/catalog/cloud/\nCache static data From an energy-efficiency perspective, it\u0026rsquo;s better to reduce network traffic by reading the data locally through a cache rather than accessing it remotely over the network.\nWe cache at many points while processing requests, e.g. 
our asynchronous job runtime locally caches Docker images, so we won\u0026rsquo;t need a full pull on every invocation.\nWe are also using image hashes for container images as much as possible to reduce real-time load while instantiating new containers.\nChoose the region that is closest to users From an energy-efficiency perspective, it\u0026rsquo;s better to shorten the distance a network packet travels so that less energy is required to transmit it. Similarly, from an embodied-carbon perspective, when a network packet traverses through less computing equipment, we are more efficient with hardware.\nSince we are not multi-region and do not directly expose the regional layout of our infrastructure, this cannot be influenced by a user. Internally we try to optimise for regionality as much as possible, without sacrificing durability and availability.\nCompress stored data Storing too much uncompressed data can result in bandwidth waste and increase the storage capacity requirements.\nWe are committed to storing only compressed data.\nBoth Objectstore and Container Registry support compression at the storage level. This is built into both services. The Observability service has an internal tiering mechanism that moves data from a hot storage tier to a colder, better compressed format after a fixed retention period.\nCompress transmitted data From an energy-efficiency perspective, it\u0026rsquo;s better to minimise the size of the data transmitted so that less energy is required because the network traffic is reduced.\nContainerize your workloads Containers allow resources to be used more flexibly, as workloads can be easily moved between machines. Containers allow for bin packing and require less compute resources than virtual machines, meaning a reduction in unnecessary resource allocation and an increase in utilization of the compute resources.\nWe are only supporting containerized workloads. 
Being able to move workloads between locations is necessary at the core of DTZ to enable shifting workloads to more efficient execution environments on demand.\nDelete unused storage resources From an embodied carbon perspective, it\u0026rsquo;s better to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.\nOur object store supports expirations on objects, so that every entity can be cleaned up after it is no longer used. Our container registry also runs configurable cleanup jobs to reduce the number of stored objects.\nEncrypt what is necessary Data protection through encryption is a crucial aspect of our security measures. However, the encryption process can be resource-intensive at multiple levels. Firstly, the amount of CPU required for encryption varies depending on the chosen algorithm, and more complex algorithms tend to demand higher computational power. Additionally, encryption can lead to increased storage requirements as it inflates the size of the data being stored because it typically contains additional metadata and padding, which is especially noticeable for smaller files. Furthermore, encryption is a repetitive task that needs to be performed each time data is fetched or updated. This repetitive nature can contribute to increased energy consumption, especially in high-throughput systems.\nEvaluate other CPU architectures Applications are built with a software architecture that best fits the business need they are serving. Cloud providers make it easy to evaluate other CPU types, such as x86-64, which can be included in the evaluation along with many cost effective alternatives that feature good performance per watt.\nFor now we only support a single CPU architecture, x86. 
We currently do not have the resources to evaluate different architectures.\nUse a service mesh only if needed A service mesh deploys additional containers for communication, typically in a sidecar pattern, to provide more operational capabilities. This can result in an increase in CPU usage and network traffic but also allows you to decouple your application from these capabilities, moving them out from the application layer and down to the infrastructure layer.\nSince our network does not rely on any Kubernetes abstractions, we also have no use for a service mesh.\nTerminate TLS at border gateway Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remain private and encrypted. However, terminating and re-establishing TLS increases CPU usage and might be unnecessary in certain architectures.\nImplement stateless design Service state refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk.\nOur existing services, like Container Services and Container Jobs, build on stateless design and also bring those capabilities to our users.\nMatch your service level objectives to business needs If service downtimes are acceptable it\u0026rsquo;s better to not strive for highest availability but to design the solution according to real business needs. Lower availability guarantees can help reduce energy consumption by using less infrastructure components.\nOur SLO is to provide the best service possible without compromising on efficiency and sustainability. 
So our services heavily depend on automatic scaling to always provide an appropriate service level at minimal overhead.\nMatch utilization requirements of virtual machines (VMs) It\u0026rsquo;s better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon.\nSo far, we do not offer any VM-based services. All our infrastructure offerings are based on containers to offer a more streamlined experience.\nMatch utilization requirements with pre-configured servers It\u0026rsquo;s better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon.\nWe aim for high utilization in our infrastructure, but those components are not exposed to the end user. The user only sees containers as an abstraction. Since we are providing energy efficiency metrics, those can vary based on the utilisation of the underlying infrastructure.\nMinimize the total number of deployed environments In a given application, there may be a need to utilize multiple environments in the application workflow. Typically, a development environment is used for regular updates, while staging or testing environments are used to make sure there are no issues before code reaches a production environment where users may have access.\nOptimise storage utilization It\u0026rsquo;s better to maximise storage utilisation so the storage layer is optimised for the task, not only in terms of energy proportionality but also in terms of embodied carbon.\nOptimize average CPU utilization CPU usage and utilization varies throughout the day, sometimes wildly for different computational requirements. 
The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.\nOptimize impact on customer devices and equipment Applications run on customer hardware or are displayed on it. The hardware used by the customer has a carbon footprint embodied through the production and electricity required while running. Optimising your software design and architecture to extend the life of the customer devices reduces the carbon intensity of the application. Ideally the customer can use the hardware until its failure or until it becomes obsolete.\nOptimize peak CPU utilization CPU usage and utilization varies throughout the day, sometimes wildly for different computational requirements. The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.\nQueue non-urgent processing requests All systems have periods of peak and low load. From a hardware-efficiency perspective, we are more efficient with hardware if we minimise the impact of request spikes with an implementation that allows an even utilization of components. 
From an energy-efficiency perspective, we are more efficient with energy if we ensure that idle resources are kept to a minimum.\nOur Container Jobs service is the manifestation of that goal within DTZ.\nReduce transmitted data From an energy-efficiency perspective, it\u0026rsquo;s better to minimize the size of the data transmitted so that less energy is required because the network traffic is reduced.\nRemove unused assets Monitor and analyze the application and the cloud bill to identify resources that are no longer used or can be reduced.\nScale down kubernetes applications when not in use In order to reduce carbon emissions and costs, Dev\u0026amp;Test Kubernetes clusters can turn off nodes (VMs) out of office hours (e.g. at night \u0026amp; during weekends). Thereby, optimization is implemented at the cluster level.\nWe are not offering any Kubernetes-based services.\nScale down applications when not in use Applications consume CPU even when they are not actively in use. For example, background timers, garbage collection, health checks, etc. Even when the application is shut down, the underlying hardware is consuming idle power. This can also happen with development and test applications or hardware in out-of-office hours.\nAt DownToZero, as the name states, scaling down is at the core of what we do, so we aim for all services to support scaling down.\nScale infrastructure with user load Demand for resources depends on user load at any given time. However, most applications run without taking this into consideration. As a result, resources are underused and inefficient.\nScale Kubernetes workloads based on relevant demand metrics By default, Kubernetes scales workloads based on CPU and RAM utilization. 
In practice, however, it\u0026rsquo;s difficult to correlate your application\u0026rsquo;s demand drivers with CPU and RAM utilization.\nWe do not offer any Kubernetes services.\nScale logical components independently A microservice architecture may reduce the amount of compute resources required as it allows each independent component to be scaled according to its own demand.\nScan for vulnerabilities Many attacks on cloud infrastructure seek to misuse deployed resources, which leads to an unnecessary spike in usage and cost.\nSet storage retention policies From an embodied carbon perspective, it\u0026rsquo;s better to have an automated mechanism to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.\nShed lower priority traffic When resources are constrained during high-traffic events or when carbon intensity is high, more carbon emissions will be generated from your system. Adding more resources to support increased traffic requirements introduces more embodied carbon and more demand for electricity. Continuing to handle all requests during high carbon intensity will increase overall emissions for your system. Shedding traffic that is lower priority during these scenarios will save on resources and carbon emissions. This approach requires an understanding of your traffic, including which call requests are critical and which can best withstand retry attempts and failures.\nTime-shift Kubernetes cron jobs The carbon emissions of a software system depend on the power consumed by that software, but also on the carbon intensity of the electricity it is powered by. For this reason, running energy-efficient software on a carbon-intensive electricity grid might be inefficient at reducing its global carbon emissions.\nWhile not offering any Kubernetes services, our Container Jobs offer the possibility of a flexible scheduling model. 
See our docs for more info.\nUse asynchronous network calls instead of synchronous When making calls across process boundaries to databases, file systems, or REST APIs, relying on synchronous calls can cause the calling thread to become blocked, putting additional load on the CPU.\nUse circuit breaker patterns Modern applications need to communicate with other applications on a regular basis. Since these other applications have their own deployment schedules, downtimes, and availability, the network connection to them might have problems. If the other application is not reachable, all network requests against it will fail, and further network requests are futile.\nUse cloud native network security tools and controls Network \u0026amp; web application firewalls provide protection against the most common attacks and shed load from bad bots. These tools help to remove unnecessary data transmission and reduce the burden on the cloud infrastructure, while also using lower bandwidth and less infrastructure.\nUse DDoS protection Distributed denial of service (DDoS) attacks are used to increase the server load so that it is unable to respond to any legitimate requests. This is usually done to harm the owner of the service or hardware. Due to the nature of the attack, a lot of environmental resources are used up by nonsensical requests.\nUse cloud native processor VMs Cloud virtual machines come with different capabilities based on different hardware processors. As such, choosing virtual machines based on the efficiency of their processors improves hardware efficiency and reduces carbon emissions.\nUse serverless cloud services Serverless cloud services are services that the cloud provider manages for the application. 
These scale dynamically with the workload needed to fulfill the service task and apply best practices to keep resource usage minimal.\nAll offerings of DownToZero are serverless cloud services.\nWeb https://patterns.greensoftware.foundation/catalog/web/\nSince we are not offering any Web-related services, we skip this section of the catalog.\n","permalink":"https://downtozero.cloud/docs/greenpatterns/","title":"Green Software Patterns"},{"contents":"Identifiers in DownToZero have a special format that makes them distinguishable by source and usage.\nIn general, we derive our identifiers from a random string.\nGeneral Format {service}-{random} So for your identity, that would be ident-01909d45.\nContainer Service service prefix sample container job job-dc2073b8dba4 container service service-dc2073b8dba4 Core Service service prefix sample core context context-uaq7weua core execution execution-dc2073b8dba4 core task task-dc2073b8dba4 core chat chat-dc2073b8 Identity Service service prefix sample identity identity identity-dc2073b8dba4 identity role role-dc2073b8dba4 identity apikey apikey-dc2073b8dba4b8dba4 Objectstore Service service prefix sample objectstore object object-dc2073b8dba4 RSS2Email Service service prefix sample rss2email feed feed-dc2073b8dba4 ","permalink":"https://downtozero.cloud/docs/identifiers/","title":"Identifiers"},{"contents":"Information according to Sect. 5 TMG (Telemedia Act):\nAPImeister Consulting GmbH\nFriedrichstr. 114A\n10117 Berlin\nGermany\nRepresented by:\nJens Walter\nE-Mail:\ncontact@downtozero.cloud\nRegistration\nRegistered in the commercial register of the\nRegister Court: Amtsgericht Charlottenburg\nRegistration Number: HRB 191437\nVAT-ID\nVAT ID number according to Sect. 27a of the Sales Tax Act: DE314890942\nDisclaimer: Liability for contents\nThe contents of our website were prepared with the greatest care. However, we cannot guarantee that the contents provided are accurate, complete and up to date. Pursuant to Sect. 
7 (1) TMG, we as a service provider are responsible under general legislation for our own content on these pages. Pursuant to Sections 8 to 10 TMG, however, we as a service provider are not obligated to monitor transmitted or stored third-party information or investigate circumstances suggesting illegal activities. Any obligation under general legislation to remove or block the use of information remains unaffected. However, any liability in this respect is limited to a period commencing at the time that a concrete violation of the law becomes known to us. As soon as we become aware of such a violation, we will remove these contents without delay.\nLiability for links\nOur offer contains links to external third-party websites whose contents are beyond our control. We therefore cannot assume any liability for these external contents. The respective providers or operators of the linked websites are always responsible for their contents. The linked websites were checked for possible violations of the law at the time the links were established. No unlawful contents were apparent at that time. However, a constant check of the contents of linked sites is unreasonable without concrete evidence of a violation of the law. We will remove such links without delay as soon as we become aware of any such violations.\nCopyright\nThe contents and works on these webpages created by the website operator are subject to German copyright law. Any reproduction, editing and transmission and any type of use beyond the scope of the copyright law shall require the written approval of the respective author and/or creator. This site may only be downloaded and copied for private, non-commercial use. Insofar as the contents on these webpages were not created by the operator, third-party copyrights are respected. Any third-party contents are specifically identified as such. Should you nevertheless become aware of a copyright infringement, we ask you to notify us accordingly. 
We will remove such contents without delay as soon as we become aware of any infringement.\nPrivacy\nIt is generally possible to use our website without providing personal data. Insofar as personal data (e.g., name, address or email address) are collected on our website, this is always done on a voluntary basis to the extent possible. These data are not passed on to third parties without your express approval.\nPlease be aware that there are inherent security risks in transmitting data over the Internet (e.g., communicating by email). It is impossible to completely safeguard the data against unauthorized access by third parties.\nThe use of contact data published within the scope of editorial requirements by third parties for the purpose of sending advertising material and information not explicitly requested is hereby expressly prohibited. The operators of the website expressly reserve the right to take legal action against those who send unsolicited advertising information, such as spam emails.\n","permalink":"https://downtozero.cloud/imprint/","title":"imprint"},{"contents":"Overview Jobs are scheduled containers that trigger on either a time basis or a resource basis.\nScheduling All schedules have a time-based and a resource-based component. Due to the nature of various workloads, the user can decide on the triggering entity.\nPrimarily time-based schedules Time-based schedules are fixed schedules that require a job to run within a certain interval. Those jobs only tolerate a minimal amount of delay, so resource consumption is only a secondary decision factor for triggering a job.\nGood examples for time-based schedules are:\na CronJob that checks every 5 minutes for new records in a database a CronJob that checks every 15 minutes for newly registered users Primarily resource-based schedules Resource-based schedules are relaxed schedules that require a job to run within a certain range of an interval. 
Those jobs can tolerate a more flexible schedule to optimize for resource availability and consumption.\nGood examples for resource-based schedules are:\na Job that checks an RSS feed, running at least once a week, at most once a day, depending on resource availability and preferences a Job that checks GitHub pull requests and runs various test suites whenever resources are available, but at least once a day Environment Variables from the runtime variable settable description DTZ_ACCESS_TOKEN yes (identity can be changed) JWT generated from the context that allows resources to be accessed within the context. DTZ_CONTEXT_ID no DTZ context that the current application is running in ","permalink":"https://downtozero.cloud/docs/containers/jobs/","title":"Jobs"},{"contents":"manifest blob unknown: blob unknown to registry Sometimes this error comes up while uploading an image.\nUsing default tag: latest The push refers to repository [cr.dtz.rocks/dtz-containers-website] 5cd64489befb: Preparing d47002a612c1: Preparing 5495b3c9862a: Preparing 48fc0ecfdf7f: Preparing 94e5f06ff8e3: Preparing 94e5f06ff8e3: Layer already exists 5495b3c9862a: Pushed d47002a612c1: Pushed 5cd64489befb: Pushed 48fc0ecfdf7f: Pushed manifest blob unknown: blob unknown to registry We put some error handling in place to remove faulty uploads from our registry, so retrying the upload should solve this issue.\n","permalink":"https://downtozero.cloud/docs/registry/knownissues/","title":"Known Issues"},{"contents":"The New Feed Subscription page allows users to:\nDiscover RSS feeds from any website URL. Add direct links to existing RSS or Atom feeds. Validate and check the feed URL before subscribing. Confirm and create a feed subscription once validated. 
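The feed-discovery step can be sketched with Python's standard-library HTML parser. This is only a minimal illustration of looking for advertised feed links ("link rel=alternate" tags) in a homepage, under the assumption that discovery works roughly this way; the service's actual discovery logic is not published and may differ.

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect feed URLs advertised on a homepage via
    <link rel="alternate" type="application/rss+xml"> (or atom+xml) tags."""

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES and a.get("href")):
            self.feeds.append(a["href"])

# Homepage HTML as the discovery step might fetch it
homepage = ('<html><head><link rel="alternate" '
            'type="application/rss+xml" href="https://jens.dev/index.xml">'
            '</head></html>')
finder = FeedLinkFinder()
finder.feed(homepage)
print(finder.feeds)  # ['https://jens.dev/index.xml']
```

If no such tag is present, the list stays empty, which matches the tip below about falling back to a direct feed URL such as .../feed.xml or .../rss.xml.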
URL:\nhttps://rss2email.dtz.rocks/new/\nFeed Discovery You can enter either:\nA homepage URL – The system will try to discover RSS feeds available on the homepage.\nExamples:\nhttps://jens.dev/ https://stackoverflow.com/questions/tagged/rust https://blog.rust-lang.org/ A direct RSS feed URL – If you already know the direct link to the RSS feed.\nExamples:\nhttps://jens.dev/index.xml https://blog.rust-lang.org/feed.xml Steps to Subscribe Step 1: Enter and Verify URL Paste the homepage URL or direct RSS feed URL into the url field. Click the Check URL button. The system will attempt to verify the feed: If successful, you will see a message: rss feed found: \u0026lt;feed_url\u0026gt; Step 2: Create Subscription Once the feed is found and verified:\nA Create Subscription button will appear. Click the Create Subscription button to add the feed to your account. Tips If the homepage URL doesn’t return a feed, try finding a direct feed link (e.g., .../feed.xml or .../rss.xml). Ensure the feed link is accessible and publicly available. If multiple feeds exist, test each URL to confirm the correct feed content. Example Workflow Scenario:\nYou want to subscribe to the jens.dev feed.\nSteps:\nEnter https://jens.dev/index.xml in the URL field. Click Check URL. You’ll see: rss feed found: https://jens.dev/index.xml\nClick Create Subscription to finalize the subscription. ","permalink":"https://downtozero.cloud/docs/rss2email/new/","title":"New Feed Subscription"},{"contents":"Use DTZ Identity as your OAuth 2.0 and OpenID Connect (OIDC) provider to let users sign in to your applications with their DTZ accounts.\nWhat is OIDC? OpenID Connect (OIDC) lets your application authenticate users through DTZ Identity without handling passwords directly. Instead of managing user credentials, you redirect users to DTZ for login, then receive a secure token to access their information.\nPerfect for: Web apps, mobile apps, or any service that needs secure user authentication.\nQuick Start 1. 
Get Your Context ID In DTZ, every application uses a \u0026ldquo;context\u0026rdquo; as its identifier. You\u0026rsquo;ll need a context-{uuid} that your users have access to.\nExample: context-abc123\nNote: In DTZ\u0026rsquo;s system, both your client_id and client_secret are the same context ID. This simplifies setup while maintaining security.\n2. Essential Endpoints You only need these two endpoints to get started:\nPurpose Endpoint User Login https://identity.dtz.rocks/api/2021-02-21/oauth/authorize Get Token https://identity.dtz.rocks/api/2021-02-21/oauth/token User Info https://identity.dtz.rocks/api/2021-02-21/oauth/userinfo 3. Auto-Discovery Most OAuth libraries can auto-configure using DTZ\u0026rsquo;s discovery endpoint:\nhttps://identity.dtz.rocks/.well-known/openid-configuration How It Works Step 1: Redirect User to DTZ When a user wants to sign in, redirect them to:\nhttps://identity.dtz.rocks/api/2021-02-21/oauth/authorize? response_type=code\u0026amp; client_id=YOUR_CONTEXT_ID\u0026amp; redirect_uri=https://yourapp.com/callback\u0026amp; scope=openid\u0026amp; state=random-string-for-security Step 2: User Signs In DTZ handles the login process:\nIf already signed in → immediate redirect back to your app If not signed in → shows login form, then redirects back Step 3: Exchange Code for Token DTZ redirects back to your app with a code. 
Exchange it for a token:\ncurl -X POST https://identity.dtz.rocks/api/2021-02-21/oauth/token \\ -H \u0026#34;Content-Type: application/x-www-form-urlencoded\u0026#34; \\ -d \u0026#34;grant_type=authorization_code\u0026#34; \\ -d \u0026#34;client_id=YOUR_CONTEXT_ID\u0026#34; \\ -d \u0026#34;client_secret=YOUR_CONTEXT_ID\u0026#34; \\ -d \u0026#34;redirect_uri=https://yourapp.com/callback\u0026#34; \\ -d \u0026#34;code=THE_CODE_FROM_REDIRECT\u0026#34; Step 4: Get User Information Use the access token to get user details:\ncurl -X GET https://identity.dtz.rocks/api/2021-02-21/oauth/userinfo \\ -H \u0026#34;Authorization: Bearer YOUR_ACCESS_TOKEN\u0026#34; Response:\n{ \u0026#34;sub\u0026#34;: \u0026#34;identity-12345678\u0026#34;, \u0026#34;iss\u0026#34;: \u0026#34;dtz.rocks\u0026#34;, \u0026#34;contexts\u0026#34;: [\u0026#34;abc124\u0026#34;], \u0026#34;roles\u0026#34;: [\u0026#34;https://dtz.rocks/context/admin/abc123...\u0026#34;] } ","permalink":"https://downtozero.cloud/docs/identity/oauth/","title":"OAuth 2"},{"contents":"The DTZ objectstore compresses all objects with Zstandard compression. This behavior is active for all data and cannot be turned off. The compression is fully transparent for the client.\nAs a result, metrics for the objectstore can deviate from the actual stored data.\nMetrics displayed always represent the used storage after the compression is applied.\nOnly the compressed storage amount will be billed.\n","permalink":"https://downtozero.cloud/docs/objectstore/compression/","title":"Object Compression"},{"contents":"The DTZ objectstore supports object-level expiration/retention. By default, an object is kept forever. If an expiration is set, the object will no longer be visible after the expiration.\nThe object will eventually be cleaned up and disappear from the used storage. 
Retention jobs perform cleanup daily, but run lazily.\nSetting Retention Retention can be set with an extra header while uploading the object.\nHTTP Header: X-DTZ-EXPIRATION Header Value: ISO 8601 durations https://en.wikipedia.org/wiki/ISO_8601#Durations for one day: P1D for one hour: PT1H for one day and one hour: P1DT1H \u0026gt; POST /api/2022-11-28/obj/object1 \u0026gt; Host: dtz-objectstore.dtz.rocks \u0026gt; Content-Type: application/octet-stream \u0026gt; X-DTZ-EXPIRATION: P1D The expiration timestamp is then calculated on creation.\nGetting Retention If a retention is set for an object, the expiration header is always returned with the exact timestamp (RFC 3339).\n\u0026gt; GET /api/2022-11-28/obj/object1 \u0026gt; Host: dtz-objectstore.dtz.rocks \u0026lt; \u0026lt; ","permalink":"https://downtozero.cloud/docs/objectstore/retention/","title":"Object Retention"},{"contents":"Connecting to DTZ through OpenTelemetry can be done by configuring the OTEL-contrib agent with the following exporter settings.\nexporters: otlphttp: endpoint: \u0026#34;https://o11y.dtz.rocks/otel\u0026#34; headers: x-dtz-context: 00000000-0000-0000-0000-000000000000 x-api-key: 00000000-0000-0000-0000-000000000000 ","permalink":"https://downtozero.cloud/docs/observability/sources/opentelemetry/","title":"OpenTelemetry"},{"contents":"Our Journey: A Project History From a simple idea to a fully-featured sustainable cloud platform, our journey has been driven by a passion for efficiency, transparency, and a greener future for technology. This timeline highlights the most significant milestones in our development.\n2021 The Beginning: The idea for downtozero.cloud was born, with a mission to create a cloud platform that is both sustainable and developer-friendly. 2022 RSS2Email Service Launch: The first of our managed services, RSS2Email, was launched, providing a simple and efficient way to receive RSS feed updates via email. 
2023 Solar-Powered GitHub Runners: A key sustainability milestone, with the introduction of our own GitHub Actions runners powered entirely by solar energy. Custom-Built Server Nodes: We began building our own server nodes, designed for maximum energy efficiency and performance. Scale-to-Zero Containers: The launch of our core technology, allowing containers to scale down to zero when not in use, saving energy and cost. Cost Calculation Metrics: We introduced detailed metrics for calculating the cost of services, providing full transparency to our users. 2024 Official Terraform Provider: A major step forward for developer experience, allowing users to manage their infrastructure as code. Terraform State in Object Store: We added the ability to use our Object Store for managing Terraform state, providing a secure and reliable backend. Public Status Page: To increase transparency, we launched a public status page to monitor the health of our services. 2025 Multi-Language Support: We launched full support for German, Spanish, French, and Italian, making our platform more accessible to a global audience. GitHub-Native Deployments: A new feature allowing developers to deploy their applications directly from their GitHub repositories, streamlining the development workflow. OAuth2 for Container Registry: Enhanced security and integration capabilities with the implementation of OAuth2 for our container registry. New Physical Location: We expanded our infrastructure to a new, highly energy-efficient data center in a new location. AI-Powered Support Chatbot: To improve our customer support, we launched an AI-powered chatbot to provide instant assistance to our users. RSS2Email Redesign: A complete redesign of our popular RSS2Email service, with a new user interface and improved functionality. ","permalink":"https://downtozero.cloud/about/","title":"Our Journey: A Project History"},{"contents":"DownToZero provides the ability to host your containers. 
We offer two types of services for your container workloads:\nJobs: Asynchronous, scheduled containers that run to completion. Services: Synchronous, long-running containers that handle request/reply traffic. DownToZero Containers\nJobs Jobs are ideal for asynchronous tasks that need to run on a schedule. These containers are always run to completion. You can adjust the schedule depending on your workload scenario.\nGood examples for asynchronous jobs include:\nCronJobs that run once a day. CI/CD pipelines that check for vulnerabilities daily. Checking an RSS feed at least once a day. Services Services are best for synchronous workloads that need to respond to incoming requests. These containers are triggered by an action and typically end with a response.\nGood examples for services include:\nWebsite hosting API endpoints REST interfaces ","permalink":"https://downtozero.cloud/docs/containers/overview/","title":"Overview"},{"contents":"The core services are not intended to be products or to be refined services. These services represent only the building blocks.\nThey can be used by any Context Admin, but there are no stability guarantees for these services.\n","permalink":"https://downtozero.cloud/docs/core/overview/","title":"Overview"},{"contents":"DTZ Identity manages three things: Roles, Identities, and Authentications for every resource that lives inside a Context. Contexts are the organizational units in DTZ; every entity belongs to one, and access control flows through it.\nCore concepts Context A Context (context-…) is the container for your applications, services, and billing. The creator is auto-assigned Context Admin, and a service identity like admin@{context_id}.dtz.rocks is provisioned for automation.\nIdentities An Identity is a principal (human user or service account). You’ll bind role assignments to identities.\nRoles (Abstract vs. 
Concrete) Abstract roles are reusable permission sets defined by each DTZ service (e.g., “containers admin”, “objectstore admin”, “billing admin”). Concrete roles are abstract roles bound to a scope (either a Context or an Identity) and expressed as a role URI. These are what you actually assign. Examples of concrete role URIs:\nContext-scoped: https://dtz.rocks/containers/admin/{context_id} Identity-scoped: https://dtz.rocks/identity/admin/{identity_id} This split keeps permission logic consistent while making assignments context-aware.\nRole scopes Identity-scoped roles Affect actions on the identity itself (e.g., who can set a password or create API keys for an identity).\nCommon examples:\nhttps://dtz.rocks/identity/admin/{identity_id} https://dtz.rocks/billing/admin/{identity_id} https://dtz.rocks/identity/assume/{identity_id} Context-scoped roles Affect actions within a context (deployments, logs, object store, etc.).\nCommon examples:\nhttps://dtz.rocks/context/admin/{context_id} https://dtz.rocks/containers/admin/{context_id} https://dtz.rocks/objectstore/admin/{context_id} https://dtz.rocks/observability/admin/{context_id} https://dtz.rocks/containerregistry/admin/{context_id} https://dtz.rocks/rss2email/admin/{context_id} Each DTZ service can define its own role names and scopes; the URIs above are representative.\nHow permissions are evaluated Authenticate (who you are). Resolve concrete roles for the caller (role URIs on the identity). Authorize based on whether a required role URI matches the target scope (context or identity) and action. Example: assigning and using a role Assign the abstract “containers admin” to a specific context → you get a concrete role: https://dtz.rocks/containers/admin/context-abc123\nBind that role to the identity alice@example.com. When Alice calls the Containers API inside context-abc123, the role URI matches and the action is authorized. The same role does not grant rights in other contexts. 
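The evaluation above can be illustrated with a small sketch. The helper below is hypothetical (DTZ's internal authorizer is not published); it only demonstrates how a required concrete role URI, built from the representative pattern https://dtz.rocks/{service}/{action}/{scope}, is matched against the roles bound to an identity.

```python
def is_authorized(identity_roles, service, action, scope_id):
    """Hypothetical check: does the caller hold the concrete role URI
    required for this service/action on the target scope?"""
    # Build the required concrete role URI from the representative pattern
    required = f"https://dtz.rocks/{service}/{action}/{scope_id}"
    # Authorize only on an exact scope match; other scopes grant nothing
    return required in set(identity_roles)

# Alice holds containers admin for context-abc123 only
alice_roles = ["https://dtz.rocks/containers/admin/context-abc123"]

print(is_authorized(alice_roles, "containers", "admin", "context-abc123"))  # True
print(is_authorized(alice_roles, "containers", "admin", "context-def456"))  # False
```

The second call fails because the same concrete role does not grant rights in other contexts, exactly as described in the example.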
Authentication DTZ supports multiple auth methods; use what fits your client and environment.\nAPI Keys Keys are created in the Identity UI and are scoped to a context and an identity. Send via header: X-API-KEY: YOUR_API_KEY\nSome third-party integrations that can’t set headers can pass apiKey as a query parameter (use only when unavoidable). Bearer tokens (password login) Obtain a JWT by POSTing username/password, then send Authorization: Bearer ….\nRequest token:\nPOST https://identity.dtz.rocks/api/2021-02-21/token/auth Content-Type: application/json { \u0026#34;username\u0026#34;: \u0026#34;user\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;password\u0026#34; } Use token:\ncurl -H \u0026#34;Authorization: Bearer eyJhb...\u0026#34; \\ https://identity.dtz.rocks/api/2021-02-21/me You can also use the JWT as a cookie named dtz-auth. Basic auth is supported for some endpoints (apikey:apikey-1234).\nGetting started checklist\nCreate or select a Context for your app. Decide which abstract roles your app needs and bind them into concrete roles at the right scopes (Context vs. Identity). Assign those concrete roles to the identities (users/service accounts) that need them. ","permalink":"https://downtozero.cloud/docs/identity/overview/","title":"Overview"},{"contents":"This service provides objectstore capabilities.\nIt can be used either as a key/value store or as a blobstore. The service itself makes no assumptions about its content.\nKeys are represented as URIs, just the path section. 
Special characters like \u0026lsquo;?\u0026rsquo; or \u0026lsquo;#\u0026rsquo; are ignored in the key.\nDownToZero Objectstore\n","permalink":"https://downtozero.cloud/docs/objectstore/overview/","title":"Overview"},{"contents":"Provides observability features like working with\nTraces (not yet implemented) Logs Metrics (partially implemented) The following Sources are supported.\nSource Traces Logs Metrics OpenTelemetry Google (via PubSub) Stripe (via Webhook) Service URL Observability Dashboard\nWeb UI ","permalink":"https://downtozero.cloud/docs/observability/overview/","title":"Overview"},{"contents":"This Service provides an OCI-compliant container registry.\nThe registry can be used through docker pull and docker push commands.\nDownToZero Container Registry\n","permalink":"https://downtozero.cloud/docs/registry/overview/","title":"Overview"},{"contents":"Subscribe to RSS feeds and send a notification email whenever a new post appears.\nDownToZero RSS to Email\nWeb UI Feed Subscriptions ","permalink":"https://downtozero.cloud/docs/rss2email/overview/","title":"Overview"},{"contents":" Pricing Compute Compute capacity is metered by energy consumed in Watt hours (Wh) across running containers. Metric Price Watt hour (Wh) 0.010\u0026nbsp;EUR / Wh Watt hour (Wh) \u0026mdash; ecoMode 0.005\u0026nbsp;EUR / Wh How we meter: Wh = avg. power draw (W) × runtime (hours) per container while it is executing. How we bill: charge = Wh × unit price (see rates above). ecoMode scope: ecoMode currently applies to Jobs (batch/async). Services (HTTP) run at the standard rate.\nLearn more about ecoMode Real‑world example (ueo.ventures): static site ~203 requests/day, ≈2.4\u0026nbsp;Wh/day. Compute ≈ €0.72/month (normal) or ≈ €0.36/month (ecoMode). Here is the whitepaper for more details. Storage Storage capacity is billed based on the maximum daily usage. Objects with any activity (read, write, delete) within 30 days are classified as hot; otherwise they are cold. 
Storage operations (reads/writes) consume compute at the standard Wh rate. Type Price per Day (per\u0026nbsp;GB) Price per Month (per\u0026nbsp;GB) Hot storage 0.0013\u0026nbsp;EUR / GB / day 0.039\u0026nbsp;EUR / GB / month Cold storage 0.0007\u0026nbsp;EUR / GB / day 0.021\u0026nbsp;EUR / GB / month Notes Prices are listed in EUR and include VAT. ","permalink":"https://downtozero.cloud/pricing/","title":"Pricing"},{"contents":"Privacy Policy Last updated: January 02, 2023\nThis Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your information when You use the Service and tells You about Your privacy rights and how the law protects You.\nWe use Your Personal data to provide and improve the Service. By using the Service, You agree to the collection and use of information in accordance with this Privacy Policy. This Privacy Policy has been created with the help of the Free Privacy Policy Generator.\nInterpretation and Definitions Interpretation The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.\nDefinitions For the purposes of this Privacy Policy:\nAccount means a unique account created for You to access our Service or parts of our Service. Company (referred to as either \u0026ldquo;the Company\u0026rdquo;, \u0026ldquo;We\u0026rdquo;, \u0026ldquo;Us\u0026rdquo; or \u0026ldquo;Our\u0026rdquo; in this Agreement) refers to Apimeister Consulting GmbH, Friedrichstr. 114A, 10117 Berlin. Cookies are small files that are placed on Your computer, mobile device or any other device by a website, containing the details of Your browsing history on that website among its many uses. Country refers to: Berlin, Germany Device means any device that can access the Service such as a computer, a cellphone or a digital tablet. 
Personal Data is any information that relates to an identified or identifiable individual. Service refers to the Website. Service Provider means any natural or legal person who processes the data on behalf of the Company. It refers to third-party companies or individuals employed by the Company to facilitate the Service, to provide the Service on behalf of the Company, to perform services related to the Service or to assist the Company in analyzing how the Service is used. Usage Data refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit). Website refers to Down To Zero, accessible from https://downtozero.cloud You means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable. Collecting and Using Your Personal Data Types of Data Collected Personal Data While using Our Service, We may ask You to provide Us with certain personally identifiable information that can be used to contact or identify You. Personally identifiable information may include, but is not limited to:\nEmail address Usage Data Usage Data Usage Data is collected automatically when using the Service.\nUsage Data may include information such as Your Device\u0026rsquo;s Internet Protocol address (e.g. 
IP address), browser type, browser version, the pages of our Service that You visit, the time and date of Your visit, the time spent on those pages, unique device identifiers and other diagnostic data.\nWhen You access the Service by or through a mobile device, We may collect certain information automatically, including, but not limited to, the type of mobile device You use, Your mobile device unique ID, the IP address of Your mobile device, Your mobile operating system, the type of mobile Internet browser You use, unique device identifiers and other diagnostic data.\nWe may also collect information that Your browser sends whenever You visit our Service or when You access the Service by or through a mobile device.\nTracking Technologies and Cookies We use Cookies and similar tracking technologies to track the activity on Our Service and store certain information. Tracking technologies used are beacons, tags, and scripts to collect and track information and to improve and analyze Our Service. The technologies We use may include:\nCookies or Browser Cookies. A cookie is a small file placed on Your Device. You can instruct Your browser to refuse all Cookies or to indicate when a Cookie is being sent. However, if You do not accept Cookies, You may not be able to use some parts of our Service. Unless you have adjusted Your browser setting so that it will refuse Cookies, our Service may use Cookies. Web Beacons. Certain sections of our Service and our emails may contain small electronic files known as web beacons (also referred to as clear gifs, pixel tags, and single-pixel gifs) that permit the Company, for example, to count users who have visited those pages or opened an email and for other related website statistics (for example, recording the popularity of a certain section and verifying system and server integrity). Cookies can be \u0026ldquo;Persistent\u0026rdquo; or \u0026ldquo;Session\u0026rdquo; Cookies. 
Persistent Cookies remain on Your personal computer or mobile device when You go offline, while Session Cookies are deleted as soon as You close Your web browser. Learn more about cookies on the Free Privacy Policy website article.\nWe use both Session and Persistent Cookies for the purposes set out below:\nNecessary / Essential Cookies\nType: Session Cookies Administered by: Us Purpose: These Cookies are essential to provide You with services available through the Website and to enable You to use some of its features. They help to authenticate users and prevent fraudulent use of user accounts. Without these Cookies, the services that You have asked for cannot be provided, and We only use these Cookies to provide You with those services. Cookies Policy / Notice Acceptance Cookies\nType: Persistent Cookies Administered by: Us Purpose: These Cookies identify if users have accepted the use of cookies on the Website. Functionality Cookies\nType: Persistent Cookies Administered by: Us Purpose: These Cookies allow us to remember choices You make when You use the Website, such as remembering your login details or language preference. The purpose of these Cookies is to provide You with a more personal experience and to avoid You having to re-enter your preferences every time You use the Website. For more information about the cookies we use and your choices regarding cookies, please visit our Cookies Policy or the Cookies section of our Privacy Policy.\nUse of Your Personal Data The Company may use Personal Data for the following purposes:\nTo provide and maintain our Service, including to monitor the usage of our Service. To manage Your Account: to manage Your registration as a user of the Service. The Personal Data You provide can give You access to different functionalities of the Service that are available to You as a registered user. 
For the performance of a contract: the development, compliance and undertaking of the purchase contract for the products, items or services You have purchased or of any other contract with Us through the Service. To contact You: To contact You by email, telephone calls, SMS, or other equivalent forms of electronic communication, such as a mobile application\u0026rsquo;s push notifications regarding updates or informative communications related to the functionalities, products or contracted services, including the security updates, when necessary or reasonable for their implementation. To provide You with news, special offers and general information about other goods, services and events which we offer that are similar to those that you have already purchased or enquired about unless You have opted not to receive such information. To manage Your requests: To attend and manage Your requests to Us. For business transfers: We may use Your information to evaluate or conduct a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of Our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which Personal Data held by Us about our Service users is among the assets transferred. For other purposes: We may use Your information for other purposes, such as data analysis, identifying usage trends, determining the effectiveness of our promotional campaigns and to evaluate and improve our Service, products, services, marketing and your experience. We may share Your personal information in the following situations:\nWith Service Providers: We may share Your personal information with Service Providers to monitor and analyze the use of our Service, to contact You. 
For business transfers: We may share or transfer Your personal information in connection with, or during negotiations of, any merger, sale of Company assets, financing, or acquisition of all or a portion of Our business to another company. With Affiliates: We may share Your information with Our affiliates, in which case we will require those affiliates to honor this Privacy Policy. Affiliates include Our parent company and any other subsidiaries, joint venture partners or other companies that We control or that are under common control with Us. With business partners: We may share Your information with Our business partners to offer You certain products, services or promotions. With other users: when You share personal information or otherwise interact in the public areas with other users, such information may be viewed by all users and may be publicly distributed outside. With Your consent: We may disclose Your personal information for any other purpose with Your consent. Retention of Your Personal Data The Company will retain Your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use Your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies.\nThe Company will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of Our Service, or We are legally obligated to retain this data for longer time periods.\nTransfer of Your Personal Data Your information, including Personal Data, is processed at the Company\u0026rsquo;s operating offices and in any other places where the parties involved in the processing are located. 
It means that this information may be transferred to — and maintained on — computers located outside of Your state, province, country or other governmental jurisdiction where the data protection laws may differ from those in Your jurisdiction.\nYour consent to this Privacy Policy followed by Your submission of such information represents Your agreement to that transfer.\nThe Company will take all steps reasonably necessary to ensure that Your data is treated securely and in accordance with this Privacy Policy and no transfer of Your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of Your data and other personal information.\nDelete Your Personal Data You have the right to delete or request that We assist in deleting the Personal Data that We have collected about You.\nOur Service may give You the ability to delete certain information about You from within the Service.\nYou may update, amend, or delete Your information at any time by signing in to Your Account, if you have one, and visiting the account settings section that allows you to manage Your personal information. You may also contact Us to request access to, correct, or delete any personal information that You have provided to Us.\nPlease note, however, that We may need to retain certain information when we have a legal obligation or lawful basis to do so.\nDisclosure of Your Personal Data Business Transactions If the Company is involved in a merger, acquisition or asset sale, Your Personal Data may be transferred. We will provide notice before Your Personal Data is transferred and becomes subject to a different Privacy Policy.\nLaw enforcement Under certain circumstances, the Company may be required to disclose Your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. 
a court or a government agency).\nOther legal requirements The Company may disclose Your Personal Data in the good faith belief that such action is necessary to:\nComply with a legal obligation Protect and defend the rights or property of the Company Prevent or investigate possible wrongdoing in connection with the Service Protect the personal safety of Users of the Service or the public Protect against legal liability Security of Your Personal Data The security of Your Personal Data is important to Us, but remember that no method of transmission over the Internet, or method of electronic storage is 100% secure. While We strive to use commercially acceptable means to protect Your Personal Data, We cannot guarantee its absolute security.\nChildren\u0026rsquo;s Privacy Our Service does not address anyone under the age of 13. We do not knowingly collect personally identifiable information from anyone under the age of 13. If You are a parent or guardian and You are aware that Your child has provided Us with Personal Data, please contact Us. If We become aware that We have collected Personal Data from anyone under the age of 13 without verification of parental consent, We take steps to remove that information from Our servers.\nIf We need to rely on consent as a legal basis for processing Your information and Your country requires consent from a parent, We may require Your parent\u0026rsquo;s consent before We collect and use that information.\nLinks to Other Websites Our Service may contain links to other websites that are not operated by Us. If You click on a third party link, You will be directed to that third party\u0026rsquo;s site. We strongly advise You to review the Privacy Policy of every site You visit.\nWe have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services.\nChanges to this Privacy Policy We may update Our Privacy Policy from time to time. 
We will notify You of any changes by posting the new Privacy Policy on this page.\nWe will let You know via email and/or a prominent notice on Our Service, prior to the change becoming effective and update the \u0026ldquo;Last updated\u0026rdquo; date at the top of this Privacy Policy.\nYou are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.\nContact Us If you have any questions about this Privacy Policy, You can contact us:\nBy email: contact@downtozero.cloud ","permalink":"https://downtozero.cloud/privacy/","title":"Privacy Policy"},{"contents":" The dtz provider allows you to manage various resources and services on the DownToZero.cloud platform. It provides support for containers, object storage, container registry, RSS2Email, and observability services.\nExample Usage terraform { required_providers { dtz = { source = \u0026#34;DownToZero-Cloud/dtz\u0026#34; version = \u0026#34;~\u0026gt; 0.1.33\u0026#34; } } } provider \u0026#34;dtz\u0026#34; { api_key = var.dtz_api_key enable_service_containers = true enable_service_rss2email = true } Schema Required api_key (String, Sensitive) The API key for authentication Optional enable_service_containers (Boolean) Enable the containers service. Defaults to false. enable_service_objectstore (Boolean) Enable the object store service. Defaults to false. enable_service_containerregistry (Boolean) Enable the container registry service. Defaults to false. enable_service_rss2email (Boolean) Enable the RSS2Email service. Defaults to false. enable_service_observability (Boolean) Enable the observability service. Defaults to false. Terraform Docs https://registry.terraform.io/providers/DownToZero-Cloud/dtz/latest/docs Github Sources https://github.com/DownToZero-Cloud/terraform-provider-dtz/blob/main/docs/index.md ","permalink":"https://downtozero.cloud/docs/terraform/provider/","title":"Provider"},{"contents":" 🎁 Early Bird Bonus! 
Get 50 € automatically credited to your account —\navailable for a limited time only! Email Password Repeat Password I agree to the Privacy Policy and Terms of Service. sign up ","permalink":"https://downtozero.cloud/signup/","title":"Register new Account - DownToZero"},{"contents":"Overview This document explains how our request routing system works and the limitations that may affect your deployments. Understanding these constraints helps you design your applications effectively.\nHow Request Routing Works Our routing system uses an S3-based configuration management approach where ingress rules are stored as JSON files and automatically indexed for fast lookups. This provides reliable routing but introduces specific timing and configuration considerations.\nRouting Limitations Configuration Propagation Timeline: Ingress configuration changes typically propagate within 30-60 seconds, but may take up to 2 minutes in rare cases.\nImpact on Deployments:\nPlan for this delay when making critical routing changes Deploy your service before updating ingress rules Allow sufficient time for configuration to take effect before testing URI Path Constraints Character Sanitization: URI paths are automatically sanitized for S3 compatibility:\nAlphanumeric characters, hyphens, underscores, and dots are preserved Special characters are converted to underscores Maximum path length: 1024 characters Examples:\n/api/v1/users → _api_v1_users /api/v1/users?filter=active → _api_v1_users_filter_active /api/v1/users@domain → _api_v1_users_domain Recommendation: Design your API paths to be S3-friendly from the start to avoid unexpected routing issues.\nService Discovery Index-Based Lookup: Our system uses index files to map URIs to services, which provides:\nFast routing decisions (\u0026lt; 10ms) Version tracking for rollbacks Automatic cleanup of invalid configurations Implications:\nService endpoints must be registered before they can receive traffic Deleted services are automatically 
removed from routing within the propagation window Configuration conflicts are resolved based on the most recent update Deployment Considerations 1. Service Deployment Order When deploying new services:\nDeploy your service first and verify it\u0026rsquo;s healthy Update ingress configuration after service deployment Wait for configuration propagation (30-60 seconds) Test the new routing 2. URI Design Best Practices Use RESTful, predictable URI patterns Avoid special characters in paths Keep paths reasonably short (\u0026lt; 200 characters) Use consistent naming conventions 3. Monitoring and Testing Monitor your service health endpoints Test routing changes in staging environments Allow sufficient time for configuration propagation Have a rollback plan ready Troubleshooting Routing Issues Service Not Receiving Traffic Check Service Health: Ensure your service is running and responding Verify Ingress Configuration: Confirm the ingress rule is properly configured Check URI Path: Ensure the path matches exactly (case-sensitive) Wait for Propagation: Allow up to 2 minutes for configuration changes Configuration Update Failures Validate JSON Format: Verify your configuration is valid JSON Check Character Limits: Ensure paths don\u0026rsquo;t exceed 1024 characters Review Domain Limits: Confirm you haven\u0026rsquo;t exceeded 100 rules per domain Performance Characteristics Routing Decision: \u0026lt; 10ms Configuration Update: 30-60 seconds Failover: Automatic failover to healthy service instances ","permalink":"https://downtozero.cloud/docs/core/limits/","title":"Request Routing Limitations"},{"contents":"Roles are privileges which can be assigned to identities.\nAll Roles are either privileges scoped to a context or to an identity. 
Each service can define its own roles and scopes.\nIdentity scoped Roles Identity scoped roles are roles assigned to identities which grant actions on that identity or on other identities.\nFor example,\nassigning a password to an identity is a privilege. Not every user of an identity will have this privilege. generating an api-key for an identity is a privilege. Not every user of an identity will have this privilege. Sample Roles\n\u0026ldquo;https://dtz.rocks/identity/admin/{identity_id}\" \u0026ldquo;https://dtz.rocks/billing/admin/{identity_id}\" Context scoped Roles Context scoped roles are roles assigned to identities which grant actions on resources in a context.\nFor example,\nupdate/deploy rss2email integration via the flows service container deployments access logs and metrics Sample Roles\n\u0026ldquo;https://dtz.rocks/context/admin/{context_id}\" \u0026ldquo;https://dtz.rocks/flows/admin/{context_id}\" \u0026ldquo;https://dtz.rocks/containers/admin/{context_id}\" \u0026ldquo;https://dtz.rocks/observability/admin/{context_id}\" Available Context Roles ID Name Scope role-bfd584a9 objectstore admin https://dtz.rocks/objectstore/admin/{context_id} role-6bd059b1 containerregistry admin https://dtz.rocks/containerregistry/admin/{context_id} role-bb6d04d9 rss2email admin https://dtz.rocks/rss2email/admin/{context_id} role-e7e4c3b3 context admin https://dtz.rocks/context/admin/{context_id} role-bc43f2da containers admin https://dtz.rocks/containers/admin/{context_id} role-f880b4a8 observability admin https://dtz.rocks/observability/admin/{context_id} Available Identity Roles ID Name Scope role-e5832d4c billing admin https://dtz.rocks/billing/admin/{identity_id} role-ceb9417c identity admin https://dtz.rocks/identity/admin/{identity_id} role-5001d9c9 assume identity https://dtz.rocks/identity/assume/{identity_id} ","permalink":"https://downtozero.cloud/docs/identity/roles/","title":"Roles"},{"contents":"Subscribe to RSS feeds and send a notification email whenever a new 
post appears.\n","permalink":"https://downtozero.cloud/docs/rss2email/feeds/","title":"RSS Feeds"},{"contents":"Authenticating Deployed Services DownToZero (DTZ) provides built-in authentication for your deployed services, making it easy to build secure applications without managing your own authentication infrastructure.\nOverview When you deploy a service to DTZ, the platform automatically provides authentication credentials and context information through environment variables. Your service can use these to:\nAuthenticate requests to other DTZ services (API calls, object storage, etc.) Verify incoming user requests Access context-specific resources Implement secure service-to-service communication Environment Variables DTZ automatically injects the following environment variables into your service containers:\nVariable Description Example DTZ_ACCESS_TOKEN A JWT token for accessing DTZ services within your context. eyJhbGciOiJSUzI1NiI... DTZ_CONTEXT_ID Your DTZ context identifier. context-3cd84429-64a4-4226-b868-c83feeff0f46 PORT The port your service should listen on. 80 Authenticating Incoming Requests Your deployed service can authenticate incoming requests using several methods:\nAPI Key Authentication Users can authenticate with your service using DTZ API keys by providing the key in the X-API-KEY header.\ncurl -H \u0026#34;X-API-KEY: your-api-key\u0026#34; https://yourservice.dtz.rocks/api/endpoint Bearer Token Authentication Provide a JWT token in the Authorization header to authenticate.\ncurl -H \u0026#34;Authorization: Bearer your-jwt-token\u0026#34; https://yourservice.dtz.rocks/api/endpoint Basic Authentication You can also pass API keys using basic authentication.\ncurl -u apikey:your-api-key https://yourservice.dtz.rocks/api/endpoint Cookie-based Authentication For web applications, the DTZ Identity service can handle authentication through browser cookies.\nOAuth Flow DTZ provides automatic OAuth authentication for web applications. 
When unauthenticated users access your service, they are automatically redirected to the DTZ login page and then redirected back after a successful login.\nUsing DTZ Authentication in Your Service Making Authenticated Requests to DTZ Services Use the DTZ_ACCESS_TOKEN environment variable to make authenticated calls to other DTZ services.\nimport os import requests # Get the DTZ access token from the environment token = os.environ.get(\u0026#39;DTZ_ACCESS_TOKEN\u0026#39;) context_id = os.environ.get(\u0026#39;DTZ_CONTEXT_ID\u0026#39;) # Make an authenticated request to the DTZ API headers = { \u0026#39;Authorization\u0026#39;: f\u0026#39;Bearer {token}\u0026#39;, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39; } response = requests.get( \u0026#39;https://api.dtz.rocks/v1/containers/services\u0026#39;, headers=headers ) // Node.js example const token = process.env.DTZ_ACCESS_TOKEN; const contextId = process.env.DTZ_CONTEXT_ID; const response = await fetch(\u0026#39;https://api.dtz.rocks/v1/containers/services\u0026#39;, { headers: { \u0026#39;Authorization\u0026#39;: `Bearer ${token}`, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39; } }); # Bash example curl -H \u0026#34;Authorization: Bearer $DTZ_ACCESS_TOKEN\u0026#34; \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ https://api.dtz.rocks/v1/containers/services Verifying Incoming Authentication DTZ automatically handles authentication for incoming requests. When a user makes an authenticated request to your service, DTZ:\nValidates the authentication credentials. Converts API keys to JWT tokens. Forwards the request with an Authorization: Bearer \u0026lt;token\u0026gt; header. 
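Because DTZ validates credentials at the edge before forwarding, a service only needs to read the payload segment of the forwarded Bearer token. A minimal stdlib-only sketch of that decoding step, with sanity checks on the documented iss/aud/exp claims (the helper name and error messages are illustrative, not part of the DTZ API):

```python
import base64
import json
import time

def decode_dtz_claims(token: str) -> dict:
    """Decode the payload of a DTZ-forwarded JWT and sanity-check its claims.

    DTZ verifies the signature before forwarding, so this only base64url-
    decodes the middle segment and checks the claim values documented for
    DTZ tokens (iss/aud "dtz.rocks", exp).
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    if claims.get("iss") != "dtz.rocks" or claims.get("aud") != "dtz.rocks":
        raise ValueError("unexpected issuer or audience")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

The same decoding underlies the PyJWT-based examples that follow; checking iss and aud guards against tokens minted for a different audience being replayed against your service.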
Your service receives the JWT token and can extract user information from it.\nimport jwt import os from flask import Flask, request app = Flask(__name__) @app.route(\u0026#39;/protected\u0026#39;) def protected_endpoint(): auth_header = request.headers.get(\u0026#39;Authorization\u0026#39;) if not auth_header or not auth_header.startswith(\u0026#39;Bearer \u0026#39;): return {\u0026#39;error\u0026#39;: \u0026#39;No authentication provided\u0026#39;}, 401 token = auth_header.split(\u0026#39; \u0026#39;)[1] try: # The token is already verified by DTZ, so you can # extract claims without signature verification. payload = jwt.decode(token, options={\u0026#34;verify_signature\u0026#34;: False}) user_id = payload.get(\u0026#39;sub\u0026#39;) # Identity ID context_id = payload.get(\u0026#39;scope\u0026#39;) # Context ID roles = payload.get(\u0026#39;roles\u0026#39;, []) # User roles return { \u0026#39;user_id\u0026#39;: user_id, \u0026#39;context_id\u0026#39;: context_id, \u0026#39;roles\u0026#39;: roles, \u0026#39;message\u0026#39;: \u0026#39;Access granted\u0026#39; } except jwt.InvalidTokenError: return {\u0026#39;error\u0026#39;: \u0026#39;Invalid token\u0026#39;}, 401 // Express.js example const express = require(\u0026#39;express\u0026#39;); const jwt = require(\u0026#39;jsonwebtoken\u0026#39;); const app = express(); app.get(\u0026#39;/protected\u0026#39;, (req, res) =\u0026gt; { const authHeader = req.headers.authorization; if (!authHeader || !authHeader.startsWith(\u0026#39;Bearer \u0026#39;)) { return res.status(401).json({ error: \u0026#39;No authentication provided\u0026#39; }); } const token = authHeader.split(\u0026#39; \u0026#39;)[1]; try { // The token is already verified by DTZ. 
const payload = jwt.decode(token); const userId = payload.sub; // Identity ID const contextId = payload.scope; // Context ID const roles = payload.roles || []; // User roles res.json({ user_id: userId, context_id: contextId, roles: roles, message: \u0026#39;Access granted\u0026#39; }); } catch (error) { res.status(401).json({ error: \u0026#39;Invalid token\u0026#39; }); } }); JWT Token Structure DTZ JWT tokens contain the following claims:\nClaim Description Example iss Issuer (always \u0026ldquo;dtz.rocks\u0026rdquo;) \u0026quot;dtz.rocks\u0026quot; sub Subject (user identity ID) \u0026quot;identity-abc123...\u0026quot; aud Audience (always \u0026ldquo;dtz.rocks\u0026rdquo;) \u0026quot;dtz.rocks\u0026quot; scope Context ID \u0026quot;context-3cd84429...\u0026quot; roles User roles/permissions [\u0026quot;https://dtz.rocks/context/admin/{context_id}\u0026quot;] contexts Available contexts [\u0026quot;context-3cd84429...\u0026quot;] exp Expiration time 1640995200 iat Issued at time 1640908800 Role-Based Access Control DTZ uses role-based access control with URI-based role identifiers. Common role patterns include:\nhttps://dtz.rocks/context/admin/{context_id} - Context administrator https://dtz.rocks/containers/admin/{context_id} - Container service administrator https://dtz.rocks/objectstore/admin/{context_id} - Object store administrator You can check for roles in your service:\ndef check_role(token, required_role_pattern): payload = jwt.decode(token, options={\u0026#34;verify_signature\u0026#34;: False}) roles = payload.get(\u0026#39;roles\u0026#39;, []) context_id = payload.get(\u0026#39;scope\u0026#39;) required_role = required_role_pattern.replace(\u0026#39;{context_id}\u0026#39;, context_id) return required_role in roles # Example: if check_role(token, \u0026#39;https://dtz.rocks/containers/admin/{context_id}\u0026#39;): # User has container admin permissions pass Best Practices Security Always validate that JWT tokens contain the expected claims. 
Check user roles before granting access to sensitive operations. Use HTTPS for all communications. Do not log sensitive authentication tokens. Error Handling Return appropriate HTTP status codes (e.g., 401 for unauthorized, 403 for forbidden). Provide meaningful error messages without exposing sensitive information. Performance Cache JWT token validation results when possible. Use connection pooling for DTZ API calls. Consider implementing request rate limiting. Example: Complete Authenticated Service Here is a complete example of a Python Flask service with DTZ authentication:\nimport os import jwt import requests from flask import Flask, request, jsonify app = Flask(__name__) DTZ_TOKEN = os.environ.get(\u0026#39;DTZ_ACCESS_TOKEN\u0026#39;) DTZ_CONTEXT_ID = os.environ.get(\u0026#39;DTZ_CONTEXT_ID\u0026#39;) def get_user_from_token(token): \u0026#34;\u0026#34;\u0026#34;Extracts user information from a DTZ JWT token.\u0026#34;\u0026#34;\u0026#34; try: payload = jwt.decode(token, options={\u0026#34;verify_signature\u0026#34;: False}) return { \u0026#39;user_id\u0026#39;: payload.get(\u0026#39;sub\u0026#39;), \u0026#39;context_id\u0026#39;: payload.get(\u0026#39;scope\u0026#39;), \u0026#39;roles\u0026#39;: payload.get(\u0026#39;roles\u0026#39;, []) } except jwt.InvalidTokenError: return None def require_auth(f): \u0026#34;\u0026#34;\u0026#34;A decorator to require authentication.\u0026#34;\u0026#34;\u0026#34; def decorated(*args, **kwargs): auth_header = request.headers.get(\u0026#39;Authorization\u0026#39;) if not auth_header or not auth_header.startswith(\u0026#39;Bearer \u0026#39;): return jsonify({\u0026#39;error\u0026#39;: \u0026#39;Authentication required\u0026#39;}), 401 token = auth_header.split(\u0026#39; \u0026#39;)[1] user = get_user_from_token(token) if not user: return jsonify({\u0026#39;error\u0026#39;: \u0026#39;Invalid token\u0026#39;}), 401 request.user = user return f(*args, **kwargs) decorated.__name__ = f.__name__ return decorated 
@app.route(\u0026#39;/health\u0026#39;) def health(): \u0026#34;\u0026#34;\u0026#34;A public health check endpoint.\u0026#34;\u0026#34;\u0026#34; return jsonify({\u0026#39;status\u0026#39;: \u0026#39;healthy\u0026#39;}) @app.route(\u0026#39;/profile\u0026#39;) @require_auth def profile(): \u0026#34;\u0026#34;\u0026#34;A protected endpoint that returns the user\u0026#39;s profile.\u0026#34;\u0026#34;\u0026#34; return jsonify({ \u0026#39;user_id\u0026#39;: request.user[\u0026#39;user_id\u0026#39;], \u0026#39;context_id\u0026#39;: request.user[\u0026#39;context_id\u0026#39;], \u0026#39;roles\u0026#39;: request.user[\u0026#39;roles\u0026#39;] }) @app.route(\u0026#39;/admin/users\u0026#39;) @require_auth def admin_users(): \u0026#34;\u0026#34;\u0026#34;An admin-only endpoint.\u0026#34;\u0026#34;\u0026#34; required_role = f\u0026#39;https://dtz.rocks/context/admin/{DTZ_CONTEXT_ID}\u0026#39; if required_role not in request.user[\u0026#39;roles\u0026#39;]: return jsonify({\u0026#39;error\u0026#39;: \u0026#39;Admin access required\u0026#39;}), 403 # Make an authenticated request to the DTZ API headers = {\u0026#39;Authorization\u0026#39;: f\u0026#39;Bearer {DTZ_TOKEN}\u0026#39;} response = requests.get( \u0026#39;https://identity.dtz.rocks/api/2021-02-21/users\u0026#39;, headers=headers ) return jsonify(response.json()) if __name__ == \u0026#39;__main__\u0026#39;: port = int(os.environ.get(\u0026#39;PORT\u0026#39;, 80)) app.run(host=\u0026#39;0.0.0.0\u0026#39;, port=port) Troubleshooting Common Issues Authentication token not found: Ensure your service is deployed through the DTZ containers service and that the environment variables are being read correctly. Invalid token errors: Check that you are correctly extracting the token from the Authorization header and that the JWT parsing is working. 403 Forbidden errors: Verify that the user has the required roles in their JWT token. 
Service-to-service authentication failing: Ensure you are using the DTZ_ACCESS_TOKEN environment variable for outbound requests. Testing Authentication You can test your service\u0026rsquo;s authentication using curl:\n# Test with an API key curl -H \u0026#34;X-API-KEY: your-api-key\u0026#34; https://yourservice.dtz.rocks/profile # Test with a bearer token curl -H \u0026#34;Authorization: Bearer your-jwt-token\u0026#34; https://yourservice.dtz.rocks/profile # Test unauthenticated (should return 401) curl https://yourservice.dtz.rocks/profile For more detailed information about DTZ authentication, see the Authentication documentation.\n","permalink":"https://downtozero.cloud/docs/containers/authentication/","title":"Service Authentication"},{"contents":"We do impose limits on service usage. Limits can and will change. Limit increases will not be announced separately, whereas limit decreases will come with an announcement and a transition period.\nLog Limit max size 100GB ","permalink":"https://downtozero.cloud/docs/observability/limits/","title":"Service Limits"},{"contents":"Services are containers that are triggered by an HTTP endpoint. Examples of services include:\nWebsites APIs Webhooks For these endpoints, we use our scale-to-zero technology to shut down unused resources and initialize them only when a request comes in.\nHow it Works When you create a service, we create a public endpoint and issue a valid TLS certificate for it. By default, this endpoint hosts an empty HTTP server that you can use to test your setup.\nWhen you provide us with a container image, we replace our static placeholder with your website or API.\nContainer Image Requirements Each container image is initialized on demand, so any system deployed to DownToZero (DTZ) needs to be designed with this in mind.\nBy default, DTZ waits for the container to open any port. 
The first port that is opened is the one DTZ attaches to and forwards requests to.\nEnvironment Variables The runtime provides the following environment variables to your service:\nvariable settable description PORT yes The port your application should listen on. DTZ_ACCESS_TOKEN yes (identity can be changed) A JWT generated from the context that allows resources to be accessed within the context. DTZ_CONTEXT_ID no The DTZ context that the current application is running in. HTTP Request Headers The runtime injects the following HTTP headers into every request forwarded to your service:\nheader example value description X-Request-ID 5f2a9b4e-2e9e-4e6c-a2a3-9f2a1c3d4e5f Unique identifier for the request. Useful for correlating logs and tracing across components. X-Forwarded-Host api.example.com The original Host value the client requested. Use when generating absolute URLs or for multi-tenant routing. X-Forwarded-Proto https The original protocol used by the client. For public endpoints on DTZ this is always https. These headers are provided by the platform and reflect the client-facing request context; your application should treat them as read-only inputs.\n","permalink":"https://downtozero.cloud/docs/containers/services/","title":"Services"},{"contents":" Compute Run Scale-To-Zero containers without the need to think about infrastructure.\nName Description Containers Fully serverless container hosting. Storage Data persistence with integrated storage tier handling.\nName Description Objectstore A fully managed objectstore solution built for Key/Value as well as Blobstore requirements. Container Registry A managed container registry for private image hosting. Identity Identity-related services.\nName Description Identity Identity provider used for all DTZ services. Data Integration Run data integration jobs.\nName Description RSS2Email Run low-code data integrations, like subscribing to an RSS/Atom feed and notifying via Email on new entries. 
Monitoring Run data analysis on your log and metric data.\nName Description Observability Collect and search logs. ","permalink":"https://downtozero.cloud/services/","title":"Services"},{"contents":"The following Severities are supported.\nTRACE DEBUG INFO WARN ERROR FATAL ","permalink":"https://downtozero.cloud/docs/observability/logs/severity/","title":"Severity"},{"contents":"Stripe supports the forwarding of events through Webhooks. You can configure a webhook endpoint to receive all available events.\nThe endpoint would look like the following sample.\nhttps://observability.dtz.rocks/stripe/webhook?apiKey=00000000-0000-0000-0000-000000000000\u0026amp;contextId=00000000-0000-0000-0000-000000b25aa6 Log interpretation All structured data from the event is translated into attributes. Implemented Events Charge Payment intent ","permalink":"https://downtozero.cloud/docs/observability/sources/stripe/","title":"Stripe"},{"contents":"Terms of Service Last updated: October 12, 2025\nPlease read these terms and conditions carefully before using Our Service.\nInterpretation and Definitions Interpretation The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.\nDefinitions For the purposes of these Terms of Service:\nCompany (referred to as either \u0026ldquo;the Company\u0026rdquo;, \u0026ldquo;We\u0026rdquo;, \u0026ldquo;Us\u0026rdquo; or \u0026ldquo;Our\u0026rdquo; in this Agreement) refers to APImeister Consulting GmbH, Friedrichstr. 114A, 10117 Berlin, Germany. Service refers to the Website and the cloud services provided by Downtozero. Terms and Conditions (also referred to as \u0026ldquo;Terms\u0026rdquo;) mean these Terms and Conditions that form the entire agreement between You and the Company regarding the use of the Service. 
Website refers to Down To Zero, accessible from https://downtozero.cloud You means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable. Acknowledgment These are the Terms and Conditions governing the use of this Service and the agreement that operates between You and the Company. These Terms and Conditions set out the rights and obligations of all users regarding the use of the Service.\nYour access to and use of the Service is conditioned on Your acceptance of and compliance with these Terms and Conditions. These Terms and Conditions apply to all visitors, users and others who access or use the Service.\nBy accessing or using the Service You agree to be bound by these Terms and Conditions. If You disagree with any part of these Terms and Conditions then You may not access the Service.\nYou represent that you are over the age of 18. The Company does not permit those under 18 to use the Service.\nYour access to and use of the Service is also conditioned on Your acceptance of and compliance with the Privacy Policy of the Company. Our Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your personal information when You use the Application or the Website and tells You about Your privacy rights and how the law protects You. Please read Our Privacy Policy carefully before using Our Service.\nUser Accounts When You create an account with Us, You must provide Us information that is accurate, complete, and current at all times. 
Failure to do so constitutes a breach of the Terms, which may result in immediate termination of Your account on Our Service.\nYou are responsible for safeguarding the password that You use to access the Service and for any activities or actions under Your password, whether Your password is with Our Service or a third-party social media service.\nYou agree not to disclose Your password to any third party. You must notify Us immediately upon becoming aware of any breach of security or unauthorized use of Your account.\nProhibited Uses You may use the Service only for lawful purposes and in accordance with the Terms. You agree not to use the Service:\nIn any way that violates any applicable national or international law or regulation. For the purpose of exploiting, harming, or attempting to exploit or harm minors in any way by exposing them to inappropriate content or otherwise. To transmit, or procure the sending of, any advertising or promotional material, including any \u0026ldquo;junk mail\u0026rdquo;, \u0026ldquo;chain letter,\u0026rdquo; \u0026ldquo;spam,\u0026rdquo; or any other similar solicitation. To impersonate or attempt to impersonate the Company, a Company employee, another user, or any other person or entity. In any way that infringes upon the rights of others, or in any way is illegal, threatening, fraudulent, or harmful, or in connection with any unlawful, illegal, fraudulent, or harmful purpose or activity. To engage in any other conduct that restricts or inhibits anyone\u0026rsquo;s use or enjoyment of the Service, or which, as determined by us, may harm or offend the Company or users of the Service or expose them to liability. Termination We may terminate or suspend Your account immediately, without prior notice or liability, for any reason whatsoever, including without limitation if You breach these Terms and Conditions.\nUpon termination, Your right to use the Service will cease immediately. 
If You wish to terminate Your account, You may simply discontinue using the Service.\nLimitation of Liability To the maximum extent permitted by applicable law, in no event shall the Company or its suppliers be liable for any special, incidental, indirect, or consequential damages whatsoever (including, but not limited to, damages for loss of profits, for loss of data or other information, for business interruption, for personal injury, for loss of privacy arising out of or in any way related to the use of or inability to use the Service, third-party software and/or third-party hardware used with the Service, or otherwise in connection with any provision of this Terms), even if the Company or any supplier has been advised of the possibility of such damages and even if the remedy fails of its essential purpose.\nSome states do not allow the exclusion of implied warranties or limitation of liability for incidental or consequential damages, which means that some of the above limitations may not apply. In these states, each party\u0026rsquo;s liability will be limited to the greatest extent permitted by law.\n\u0026ldquo;AS IS\u0026rdquo; and \u0026ldquo;AS AVAILABLE\u0026rdquo; Disclaimer The Service is provided to You \u0026ldquo;AS IS\u0026rdquo; and \u0026ldquo;AS AVAILABLE\u0026rdquo; and with all faults and defects without warranty of any kind. To the maximum extent permitted under applicable law, the Company, on its own behalf and on behalf of its Affiliates and its and their respective licensors and service providers, expressly disclaims all warranties, whether express, implied, statutory or otherwise, with respect to the Service, including all implied warranties of merchantability, fitness for a particular purpose, title and non-infringement, and warranties that may arise out of course of dealing, course of performance, usage or trade practice. 
Without limitation to the foregoing, the Company provides no warranty or undertaking, and makes no representation of any kind that the Service will meet Your requirements, achieve any intended results, be compatible or work with any other software, applications, systems or services, operate without interruption, meet any performance or reliability standards or be error-free or that any errors or defects can or will be corrected.\nGoverning Law The laws of Germany, excluding its conflicts of law rules, shall govern these Terms and Your use of the Service. Your use of the Application may also be subject to other local, state, national, or international laws.\nDispute Resolution If You have any concern or dispute about the Service, You agree to first try to resolve the dispute informally by contacting the Company.\nChanges to These Terms and Conditions We reserve the right, at Our sole discretion, to modify or replace these Terms at any time. If a revision is material We will make reasonable efforts to provide at least 30 days\u0026rsquo; notice prior to any new terms taking effect. What constitutes a material change will be determined at Our sole discretion.\nBy continuing to access or use Our Service after those revisions become effective, You agree to be bound by the revised terms. If You do not agree to the new terms, in whole or in part, please stop using the website and the Service.\nContact Us If you have any questions about these Terms and Conditions, You can contact us:\nBy email: contact@downtozero.cloud ","permalink":"https://downtozero.cloud/terms/","title":"Terms of Service"}]