The Linear Walkthrough: Architecture Docs That Stay Accurate

AI agents are only as good as the context they start with. Here's the pattern I'm using to give them a complete, accurate picture of a codebase in one file — and how I keep it from going stale.

I've been thinking about what makes AI-assisted development actually work at the session level. Not the prompt engineering, not the model choice — the boring stuff. What does the agent read when it wakes up, and how accurate is it?

The answer I landed on is a document pattern I'm calling the Linear Walkthrough. I first wrote about it while implementing it on a side project called OnCourt. This week I implemented it here on this site.

The problem it solves

Every AI coding session starts with a context gap. The agent has no memory of the last session, no intuition built up from months of working in the codebase. It has to figure out the system from scratch.

The usual approach is to put a summary in CLAUDE.md — a list of directories, key files, commands. That's better than nothing, but it's a catalog, not a map. It tells you what files exist, not how they relate. "The blog posts live in data/blog/" is useful. "Blog posts are MDX files that Contentlayer processes at build time, generating typed TypeScript objects that page components import from contentlayer/generated" is transformative.

The difference: one is a reference; the other is understanding.
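To make that "transformative" sentence concrete, here's a sketch of the kind of typed object it describes. The field names below are illustrative assumptions, not this site's actual schema — the real shape is defined in contentlayer.config.ts and imported from contentlayer/generated:

```typescript
// Sketch of the shape Contentlayer might generate for each MDX post.
// Field names are assumptions for illustration, not the site's real schema.
interface Blog {
  title: string;
  date: string;           // ISO date string from frontmatter
  draft?: boolean;
  slug: string;
  body: { code: string }; // compiled MDX, ready for a page component
}

// Knowing the shape, a page component's job becomes obvious:
// filter out drafts, sort newest first, render.
function publishedPosts(allBlogs: Blog[]): Blog[] {
  return allBlogs
    .filter((post) => !post.draft)
    .sort((a, b) => b.date.localeCompare(a.date)); // ISO dates sort lexically
}
```

That single sentence in the doc tells an agent the entire content pipeline: where posts live, what transforms them, and what the page layer consumes.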

What the Linear Walkthrough is

A single document — docs/architecture.md — that narrates how the system works from the outside in. Not a list of files, but a connected story about how data flows through the app.

It has six sections:

1. What the app is. One paragraph, plain English. What does this thing do and who uses it? Establishes context before diving into tech.

2. System map. An ASCII diagram showing all components and how they connect. Makes the overall topology scannable at a glance before you read the details.

3. Layers. One section per architectural layer, working from the bottom of the stack upward: data storage, content schema, configuration, pages, SEO, feature flags, analytics, and so on. Each layer explains not just what exists but how it connects to the layers above and below it.

4. End-to-end flows. Two or three concrete user journeys traced all the way through the stack — "Visitor reads a blog post" and "AI agent starts a session" on this site. This is the most valuable section for agents: it stitches all the layers into a coherent picture of cause and effect.

5. Infrastructure. Local dev, production, CI/CD. Deploy commands and what auto-deploys vs. requires manual action.

6. Maintenance rules. An explicit list of what changes require updating the document. This is what separates a useful doc from one that quietly goes stale in three weeks.
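Laid out as a document skeleton, the six sections look like this (heading wording is mine, not necessarily the file's exact text):

```markdown
# Architecture

## 1. What the app is
One plain-English paragraph: what it does, who uses it.

## 2. System map
ASCII diagram of all components and how they connect.

## 3. Layers
### Data storage
### Content schema
(…one subsection per layer, bottom of the stack upward)

## 4. End-to-end flows
### Visitor reads a blog post
### AI agent starts a session

## 5. Infrastructure
Local dev, production, CI/CD, deploy commands.

## 6. Maintenance rules
Which changes require updating this document, and where to look.
```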

Key design decisions

Narrative, not catalog. Everything is written from the perspective of a reader who needs understanding, not a reference. Routes get a table, but the table has a paragraph above it explaining the request lifecycle.

Code snippets as schemas. Inline TypeScript and JSON blocks show data shapes directly. Readers shouldn't need to open contentlayer.config.ts to know what fields a blog post has.

Only verifiable facts. Every endpoint path, field name, and environment variable is copied directly from source. No inferred behavior, no aspirational behavior. This is the constraint that keeps the doc trustworthy.

Surgical updates. When code changes, only the affected section updates — not a full rewrite. A section-to-source mapping table in the maintenance rules tells you exactly which source files to read for each section.

AI-agent-optimized. The primary reader is an AI agent at session start. The goal is that after reading this document, the agent has enough context to complete routine tasks without exploring files.
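The "code snippets as schemas" rule in practice: the doc inlines a shape like the one below, so a reader never opens the config to learn the fields. The field names here are assumptions standing in for whatever contentlayer.config.ts actually defines:

```typescript
// Illustrative frontmatter shape for a blog post, as the doc would inline it.
// Field names are assumptions, not this site's actual schema.
type BlogFrontmatter = {
  title: string;
  date: string;      // ISO date string, required
  tags?: string[];
  draft?: boolean;   // drafts are excluded from published listings
};

// With the shape written down, even a runtime check is trivial to derive:
function isBlogFrontmatter(value: unknown): value is BlogFrontmatter {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.title === "string" && typeof v.date === "string";
}
```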

On this site specifically

This site is a statically exported Next.js portfolio site with a Contentlayer MDX pipeline, a blog feature flag, and a GitHub Actions deploy to GitHub Pages. The architecture isn't complicated, but the interactions between the feature flag, the redirect config, the sitemap generation, and the nav rendering are exactly the kind of thing an agent gets wrong without full context.

The system map for this site looks like:

Visitor's browser
  └── stonebergdesign.com (GitHub Pages CDN)
        └── Static HTML/CSS/JS (pre-built at deploy time)

Build pipeline (GitHub Actions)
  └── Next.js static export (EXPORT=1 UNOPTIMIZED=1)
        ├── Contentlayer — processes data/blog/**/*.mdx
        └── Next.js App Router — renders all pages to /out/

That's the whole thing. No server. No database. No auth. But even a simple system benefits from having its interactions documented — especially the blog feature flag, which touches four separate files and controls whether the blog exists at all for visitors.
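To illustrate why the flag is easy to get wrong, here's a sketch of the kind of coupling involved. The flag name and both call sites are hypothetical, not this site's actual code — the point is that one boolean has to agree across several files:

```typescript
// Hypothetical flag; on the real site it would come from an env var or config.
const BLOG_ENABLED = true;

type NavItem = { href: string; title: string };

// Call site 1 — nav rendering: the blog link appears only when the flag is on.
function navItems(blogEnabled: boolean): NavItem[] {
  const items: NavItem[] = [
    { href: "/", title: "Home" },
    { href: "/projects", title: "Projects" },
  ];
  if (blogEnabled) items.push({ href: "/blog", title: "Blog" });
  return items;
}

// Call site 2 — sitemap generation: blog URLs must also be excluded when the
// flag is off, or search engines index pages the nav never links to.
function sitemapUrls(blogEnabled: boolean, slugs: string[]): string[] {
  const urls = ["/", "/projects"];
  if (blogEnabled) urls.push("/blog", ...slugs.map((s) => `/blog/${s}`));
  return urls;
}
```

An agent that updates the nav but not the sitemap (or the redirect config) ships an inconsistent site — which is precisely the interaction the architecture doc exists to spell out.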

Enforcing freshness

The pattern only works if the document stays accurate. Two mechanisms enforce that:

The update skill. A runbook at docs/skills/architecture-update/SKILL.md that gives a step-by-step process for updating the doc when code changes. It includes a section-to-source mapping table so you know exactly which files to read for each section, and an explicit checklist for accuracy (no inferred behavior, no aspirational behavior, no historical context).

The CLAUDE.md rule. A mandatory rule in the agent instructions file that lists the specific triggers for updating the architecture doc and points to the skill for procedure. The agent can't decide the update is optional — the rule makes it explicit.

The key operational rule is: commit the doc update in the same commit as the code change. Treating it as optional or "I'll do it later" is what causes drift.
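A rule of that shape might look like the following in CLAUDE.md — the wording and the mapping entries here are mine, not the actual file's text:

```markdown
## Architecture doc (mandatory)

When a change touches any of the following, update docs/architecture.md
in the SAME commit as the code change:

- routes or pages            → update "Layers: Pages" and the route table
- contentlayer.config.ts     → update "Layers: Content schema"
- feature flags              → update "Layers: Feature flags" and both flows
- deploy workflow            → update "Infrastructure"

Procedure: follow docs/skills/architecture-update/SKILL.md.
```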

Maintenance burden

Realistically: low. Most changes are surgical — one route table, one field name, one section header. The full document on this site is about 350 lines and took a single focused session to write. Future updates will be measured in paragraphs, not documents.

The discipline isn't writing the doc. It's committing the update alongside the code. That's the only hard part, and it's mostly a habit.

The pattern in one sentence

Write the architecture narrative once, keep it accurate with surgical updates, and give AI agents a rule that makes updating non-negotiable.

The result is a codebase where any session starts with full context instead of an exploration tax.