Practices · 2026-04-15

Architecture docs that update themselves

Auto-generated system overview diagram showing service connections

I've been building a handful of services that work together, and I wanted a single place where I could see how they all fit together. What each endpoint accepts, what it returns, and how the services depend on each other.

But actually sitting down to document all of that? Not something I was looking forward to. My team spends more time than it would like on documentation, and I've never been someone who does it for the love of the game. Drawing the diagram itself is fine, but then you have to figure out how to display the API details alongside it: the request fields, the response shapes, and the error codes. Do you put that in the same doc? Link out to separate pages? Screenshot Postman? None of the options felt great, and all of them meant I'd be maintaining it by hand going forward.

So instead, I set up a dedicated architecture repo that regenerates its own documentation from the live APIs.

How it works

I recently came across Chris Kluis' post on self-describing API endpoints and started adding /help endpoints to each of these services. Every service now exposes a /help route that describes its own API: fields, actions, response shapes, error codes. A GitHub Action in the architecture repo fetches every /help endpoint, validates the responses, and generates Mermaid diagrams and payload samples from whatever comes back.
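For a sense of what the Action consumes, a /help response might look something like this. This is a sketch of my own; the field names are illustrative, not the exact envelope from Kluis' post:

```json
{
  "endpoint": "/orders",
  "description": "Create and query orders",
  "actions": {
    "POST": {
      "fields": {
        "sku": { "type": "string", "required": true },
        "quantity": { "type": "integer", "required": true }
      },
      "responses": {
        "201": { "body": { "id": "string", "status": "string" } },
        "422": { "error": "VALIDATION_FAILED" }
      }
    }
  }
}
```

Anything shaped like this is enough to drive a diagram node, a payload sample, and an error-code table.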

The output is a set of markdown files with system-level diagrams, per-service sequence diagrams, and JSONC payload samples for every endpoint. All generated, all in one repo.

Generated service detail page showing request fields, sequence diagram, and response variants

The images in this post are styled for the blog. The actual output is standard Mermaid and markdown rendered in GitHub.

How it stays current

Three triggers keep things in sync:

  • Repository dispatch. Each service repo fires a webhook to the architecture repo after a successful deploy. Push a change, and the docs regenerate within minutes.
  • Manual trigger. I can kick off a regeneration from the GitHub Actions UI whenever I want.
  • Daily cron. A safety net that catches anything the dispatch might have missed.
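The three triggers map directly onto GitHub Actions events. A sketch of the workflow file, where the event names are real Actions syntax but the dispatch type and script path are hypothetical:

```yaml
# .github/workflows/regenerate-docs.yml (sketch; job steps trimmed)
on:
  repository_dispatch:
    types: [service-deployed]   # fired by each service repo after deploy
  workflow_dispatch:            # manual trigger from the Actions UI
  schedule:
    - cron: "0 6 * * *"         # daily safety net

jobs:
  regenerate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/regenerate.py   # hypothetical script name
      - run: |
          git config user.name "docs-bot"
          git add docs/
          git commit -m "Regenerate docs" || echo "nothing changed"
          git push
```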

The regeneration script reads a registry file that lists every service and its /help URLs, fetches them, validates the responses against a strict envelope, and routes the data through three renderers: Mermaid diagrams, JSONC payload samples, and a README table.

If a response doesn't conform, it doesn't break the build. The script generates a drift page with the raw output and moves on. That's a signal to fix the /help endpoint, not a reason to block the rest of the docs.
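A sketch of the script's core loop, assuming a made-up envelope with `endpoint`, `description`, and `actions` keys (the real validation is stricter, and only one renderer is shown):

```python
import json
import urllib.request

REQUIRED_KEYS = {"endpoint", "description", "actions"}  # assumed envelope


def validate(payload: dict) -> list[str]:
    """Return a list of envelope violations; empty means conforming."""
    missing = REQUIRED_KEYS - payload.keys()
    return [f"missing key: {k}" for k in sorted(missing)]


def render_mermaid(services: dict[str, list[dict]]) -> str:
    """Render a system-level Mermaid graph: one node per endpoint."""
    lines = ["graph LR"]
    for name, helps in services.items():
        for h in helps:
            lines.append(f'  client --> {name}["{name} {h["endpoint"]}"]')
    return "\n".join(lines)


def regenerate(registry: dict,
               fetch=lambda url: json.load(urllib.request.urlopen(url))):
    """Fetch every /help URL in the registry. Conforming responses go to
    the renderers; non-conforming ones go to a drift report instead of
    failing the build."""
    ok, drift = {}, {}
    for service, urls in registry.items():
        for url in urls:
            payload = fetch(url)
            problems = validate(payload)
            if problems:
                drift[url] = {"problems": problems, "raw": payload}
            else:
                ok.setdefault(service, []).append(payload)
    return render_mermaid(ok), drift
```

The `fetch` parameter is injectable so the pipeline can be exercised without hitting live services; a drift page is just the `drift` dict rendered to markdown.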

Adding a new service

This is the part I like most. To document a new service, I add one entry to a JSON registry file with the service name and its /help URLs. Then I push. The Action handles everything else.

No diagram to draw by hand. No markdown to write. No payload samples to copy from Postman. If the /help endpoints conform, the docs appear. If they don't, I get a drift page telling me what's off.


The registry file. One entry per service is all you need.
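The shape is deliberately minimal. A hypothetical registry, with service names and URLs that are purely illustrative:

```jsonc
// registry.json -- one entry per service
{
  "orders": ["https://orders.example.com/orders/help"],
  "billing": [
    "https://billing.example.com/invoices/help",
    "https://billing.example.com/payments/help"
  ]
}
```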

The registry file itself is manual, and building the initial version was quick: I had Claude go through each repo and pull the endpoints. Keeping it updated as you add services is the real question. You could automate that too: service discovery, scanning your GitHub org for repos with /help endpoints, or pulling from your CI/CD pipeline. That's very dependent on your environment and more than I want to get into here.

More on the /help convention

The idea is that every route gets a sibling GET /route/help that returns a machine-readable description of what that endpoint does: its fields, actions, response shapes, and error codes. It's lightweight, lives right next to the code it describes, and doesn't require maintaining a separate spec file.

What makes it work well for this pattern is that the response is structured enough to parse and render automatically, but simple enough that adding a /help route to an existing endpoint isn't a big lift.
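As a concrete sketch of the sibling-route idea, here's a version using only Python's standard library. The envelope fields are my assumption, not the exact shape from Kluis' post:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical help payload for a POST /orders endpoint; the exact
# envelope (fields/actions/responses) is an assumed shape.
ORDERS_HELP = {
    "endpoint": "/orders",
    "description": "Create an order",
    "actions": {
        "POST": {
            "fields": {"sku": {"type": "string", "required": True}},
            "responses": {"201": "order created", "422": "VALIDATION_FAILED"},
        }
    },
}


class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The sibling route lives next to the endpoint it describes:
        # GET /orders/help returns the machine-readable description.
        if self.path == "/orders/help":
            body = json.dumps(ORDERS_HELP).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep request logging quiet
```

Serve it with `HTTPServer(("127.0.0.1", 8080), OrdersHandler).serve_forever()`; in a real service the help dict would sit beside the handler it documents, so the two change together.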

That said, you could use an OpenAPI specification for this just as easily. The regeneration pattern works the same way: fetch the spec, parse it, render docs. The shape of the source data matters less than the principle: your API describes itself, and your documentation reads from that description.

Beyond APIs

I built this for services and endpoints, but the pattern isn't limited to that. Anything that can describe itself can feed into the same kind of pipeline.

DevOps teams maintaining network diagrams are an obvious one. If your infrastructure is defined in code (Terraform, Pulumi, CloudFormation), the topology is already machine-readable. A regeneration step could pull that state and render an up-to-date network diagram on every apply, instead of someone manually redrawing it in Lucidchart or Miro after a change.
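A minimal sketch of that idea for Terraform, assuming the documented `terraform show -json` state shape (`values.root_module.resources`, each with an `address` and an optional `depends_on` list):

```python
import json


def state_to_mermaid(state: dict) -> str:
    """Turn `terraform show -json` output into a Mermaid dependency graph.
    Mermaid node ids can't contain dots, so addresses are sanitized."""
    resources = (
        state.get("values", {}).get("root_module", {}).get("resources", [])
    )
    lines = ["graph TD"]
    for r in resources:
        target = r["address"].replace(".", "_")
        for dep in r.get("depends_on", []):
            lines.append(f'  {dep.replace(".", "_")} --> {target}')
    return "\n".join(lines)
```

Piped into a markdown file on every `terraform apply`, that's the same regenerate-on-change loop as the API docs.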

Agentic workflows are another. As teams build systems where AI agents call tools, chain decisions, and hand off to other agents, the graph of what's connected to what gets complicated fast. If each agent or tool step exposes a description of its inputs, outputs, and downstream dependencies, you could generate the workflow diagram the same way.

The underlying idea is simple: if the thing already knows what it does, just ask it.

Why bother

When you're only running a few services, it's easy enough to keep things straight in your head. But even at a small scale, having one place that shows you the full picture (the system diagram, the contracts, the payload shapes) is just convenient. And since it updates on deploy, I don't have to think about keeping it current. It just is.
