As a fun project, we wanted to expose our official documentation — which is also hosted here — as an MCP server. The idea was to test whether this would make developing our own platform easier, since our AI-based IDEs would have less difficulty checking architectural knowledge against an easy-to-consume public endpoint.
Now, starting very naively, we came up with the following plan to implement this.
As a short teaser, here is what we want to build.
flowchart LR
    X[Internet]
    subgraph downtozero.cloud
        A[downtozero.cloud]
    end
    subgraph W[MCP server]
        N[Axum frontend]
        T[tantivy search]
        M[MCP Server]
        N -- /index.json --> A
        N -- query --> T
        N -- MCP request --> M
    end
    X -- search 'registry' --> N
    X -- GET /index.html --> A
The current website is built with Hugo, so all content on this site is written as Markdown and then rendered into HTML. For browsers, HTML is a good format. For search engines and LLMs, though, parsing HTML wastes quite some resources while adding little to the quality of the result, and LLMs in particular are very good at reading and understanding Markdown. So it became clear that we wanted to feed Markdown into the LLM. At the same time, we needed to consume the same data in our search server.
Since Hugo supports multiple output formats and even arbitrary formats, we started building a JSON output. The idea was to render all pages we have into a single large JSON and see what comes out of that.
[
  {
    "contents": "We always aim ..",
    "permalink": "https://downtozero.cloud/posts/2025/scale-to-zero-postgres/",
    "title": "Scale-To-Zero postgresql databases"
  },
  {
    "contents": "Eliminating Wasted Cycles in Deployment At DownToZero, ...",
    "permalink": "https://downtozero.cloud/posts/2025/github-deployment/",
    "title": "Seamless Deployments with the DTZ GitHub Action"
  },
Now, to create something like this, Hugo needs a template in place. So we put the following file into the default templates directory: layouts/_default/index.json
{{- $.Scratch.Add "index" slice -}}
{{- range .Site.RegularPages }}
{{- /* start with an empty map */ -}}
{{- $page := dict -}}
{{- /* always present */ -}}
{{- $page = merge $page (dict
"title" .Title
"permalink" .Permalink) -}}
{{- /* add optional keys only when they have content */ -}}
{{- with .Params.tags }}
{{- if gt (len .) 0 }}
{{- $page = merge $page (dict "tags" .) -}}
{{- end }}
{{- end }}
{{- with .Params.categories }}
{{- if gt (len .) 0 }}
{{- $page = merge $page (dict "categories" .) -}}
{{- end }}
{{- end }}
{{- with .Plain }}
{{- $page = merge $page (dict "contents" .) -}}
{{- end }}
{{- $.Scratch.Add "index" $page -}}
{{- end }}
{{- $.Scratch.Get "index" | jsonify -}}
To get this template rendered, we needed to add the JSON output format to the config.toml.
baseURL = 'https://downtozero.cloud/'
title = 'Down To Zero'
[outputs]
home = ["HTML", "RSS", "JSON"]
Now that we have the JSON built, it becomes available on the site through /index.json.
Fetching the most up-to-date content became easy, which left the open question of implementing search. Since we build our whole stack on serverless containers with a mainly Rust-based backend, we opted for a Rust container here as well.
So we chose https://github.com/quickwit-oss/tantivy. Its API is straightforward, and we do not care much about edge cases or ranking weights.
Here is a short code snippet showing retrieval and indexing.
// `fetch_data` is async, so the caller needs an async context
// (e.g. a #[tokio::main] entry point).
async fn test() {
    let data = fetch_data().await.unwrap();
    let index = build_search_index(data);
    let _results = search_documentation(index, "container registry".to_string());
}
async fn fetch_data() -> Result<Vec<DocumentationEntry>, reqwest::Error> {
    let response = reqwest::get("https://downtozero.cloud/index.json").await?;
    let text = response.text().await?;
    log::debug!("text: {text}");
    let data = serde_json::from_str(&text).expect("index.json should be valid JSON");
    Ok(data)
}
fn build_search_index(data: Vec<DocumentationEntry>) -> Index {
let schema = get_schema();
let index = Index::create_in_ram(schema.clone());
let mut index_writer: IndexWriter = index.writer(50_000_000).unwrap();
for entry in data {
let doc = doc!(
schema.get_field("title").unwrap() => entry.title,
schema.get_field("contents").unwrap() => entry.contents.unwrap_or_default(),
schema.get_field("permalink").unwrap() => entry.permalink,
schema.get_field("categories").unwrap() => entry.categories.join(" "),
schema.get_field("tags").unwrap() => entry.tags.join(" "),
);
index_writer.add_document(doc).unwrap();
}
index_writer.commit().unwrap();
index
}
fn search_documentation(index: Index, query: String) -> Vec<(f32, DocumentationEntry)> {
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::OnCommitWithDelay)
.try_into()
.unwrap();
let searcher = reader.searcher();
let schema = get_schema();
let query_parser = QueryParser::for_index(
&index,
vec![
schema.get_field("title").unwrap(),
schema.get_field("contents").unwrap(),
schema.get_field("permalink").unwrap(),
schema.get_field("categories").unwrap(),
schema.get_field("tags").unwrap(),
],
);
let query = query_parser.parse_query(&query).unwrap();
let top_docs = searcher.search(&query, &TopDocs::with_limit(10)).unwrap();
let mut results = Vec::new();
for (score, doc_address) in top_docs {
let retrieved_doc: TantivyDocument = searcher.doc(doc_address).unwrap();
let entry = DocumentationEntry {
title: retrieved_doc
.get_first(schema.get_field("title").unwrap())
.unwrap()
.as_str()
.unwrap()
.to_string(),
contents: Some(
retrieved_doc
.get_first(schema.get_field("contents").unwrap())
.unwrap()
.as_str()
.unwrap()
.to_string(),
),
permalink: retrieved_doc
.get_first(schema.get_field("permalink").unwrap())
.unwrap()
.as_str()
.unwrap()
.to_string(),
categories: retrieved_doc
.get_first(schema.get_field("categories").unwrap())
.unwrap()
.as_str()
.unwrap()
.split(" ")
.map(|s| s.to_string())
.collect(),
tags: retrieved_doc
.get_first(schema.get_field("tags").unwrap())
.unwrap()
.as_str()
.unwrap()
.split(" ")
.map(|s| s.to_string())
.collect(),
};
results.push((score, entry));
}
results
}
Now that we have the content covered, let’s continue with the more interesting part: the MCP server.
Since all our services are built in Rust, we set out to build this service in Rust as well. Luckily, the MCP project has a Rust reference implementation for clients and servers.
We basically followed the example to the letter and got an MCP server running locally rather quickly.
Here is the full GitHub repo for everybody who wants to get into all the details.
https://github.com/DownToZero-Cloud/dtz-docs-mcp
So now we wanted to deploy this MCP server, and quickly got errors from the LLM clients telling us that remote MCP servers are only supported over TLS. That didn't make our experiment any easier.
We quickly adopted Let's Encrypt to generate a TLS certificate on startup and use it to host our MCP server. Since we already had code for this in other parts of the DTZ platform, we did not need many adjustments.
We will publish a follow-up post with a detailed description of how to get Let's Encrypt running in an axum server setup.
Let’s Encrypt support for our MCP Server
So in conclusion, we did get our MCP server running. It is available on the internet, and we added it to our Cursor, Gemini CLI, and ChatGPT clients. Interestingly, every client reacts very differently to it. Cursor simply ignores the information source and never asks it for additional information, regardless of the task at hand. Gemini uses the MCP when needed; it is not clear how or when it is invoked, but it does use the available information source. ChatGPT normally does not use the MCP and always falls back to its own web search feature, which takes precedence over the MCP server. In Research mode, ChatGPT does use the MCP, but the results don't seem any more valuable than plain web search.