Inspect before you ask: wiring Chrome's debug protocol into your AI workflow

I needed a CSS selector from a live site and didn't know the answer. Instead of guessing, I scraped it from the rendered DOM in 30 seconds. Here's the workflow, and why having it changes how you work.

I was in the middle of a GA4 event tracking project and needed to know the exact CSS selector for a promo button buried in a mega menu. I didn't know the answer off the top of my head.

I could have opened DevTools in my browser and clicked around. Instead, I queried the live DOM from inside Claude Code and had the confirmed selector in about 30 seconds.

That 30-second lookup is the point of this post.


The pattern: answer your own questions before they become someone else's

When you're coordinating across a project (with collaborators, with an AI doing implementation work, or just with your future self), ambiguity has a compounding cost. A question you can't answer right now becomes a blocker or a guess. With the right tooling, it becomes a 30-second lookup.

This applies at every stage of a project. During discovery, you're checking whether the live site matches the documented spec. During implementation, you're verifying selectors, event bindings, z-index stacking. During QA, you're confirming that the deployed version behaves the way the code says it should.

The common thread: you have a question about the live rendered page, and the answer lives in the DOM, not in the source code. Source tells you what the code intends. The live DOM tells you what actually happened after the framework, the CMS, the lazy loader, and the ad scripts all had their say.

The Chrome DevTools Protocol has been available for years. What's changed is that MCP servers make it a first-class tool in an AI workflow, not a standalone debugging utility. You don't need to leave your terminal to get the answer.


What the Chrome DevTools MCP actually gives you

The Chrome DevTools MCP server connects Claude Code to a running Chrome instance through the Chrome DevTools Protocol. That gives you a set of concrete capabilities:

  • Navigate to any URL. Load the page as a real browser, not a fetch request. JavaScript executes, lazy content loads, auth cookies apply.
  • Evaluate arbitrary JavaScript against the live DOM. Run document.querySelector, call getComputedStyle, read dataset attributes, walk the DOM tree. Anything you could type in the Chrome console, the AI can run programmatically.
  • Extract markup, computed styles, event listener bindings. Not the authored CSS, but what the browser actually applied after cascade, specificity, and media queries resolved.
  • Take screenshots. Visual confirmation that the page state matches what you're querying. Useful for catching cases where the DOM says one thing and the layout says another.
  • Inspect network requests. See what resources loaded, what failed, what the response headers look like. Catch third-party scripts misbehaving without opening the Network tab manually.
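To make the evaluate capability concrete, here's the kind of helper you might run against the live page through the evaluate tool. The `cssPath` function is illustrative, not part of the MCP's API: it builds a selector for an element by walking up the tree to the nearest ancestor with an id.

```javascript
// Illustrative helper (not part of the MCP API): build a CSS selector
// for an element by walking up to the nearest ancestor with an id.
function cssPath(el) {
  const parts = [];
  while (el && el.nodeType === 1) {
    if (el.id) {
      parts.unshift('#' + el.id); // ids are unique, so stop here
      break;
    }
    let part = el.tagName.toLowerCase();
    if (el.classList && el.classList.length) {
      part += '.' + Array.from(el.classList).join('.');
    }
    parts.unshift(part);
    el = el.parentNode;
  }
  return parts.join(' > ');
}

// In the page, cssPath(document.querySelector('.promo-cta')) might
// return a path like '#mega-menu > li > a.promo-cta'
```

Evaluated against the live page, this gives you the confirmed selector path in one round trip, runtime classes included.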

The critical thing: you're working against the live, rendered site, not the source code, not a static build artifact. If a class gets added at runtime by a JS framework, you see it. If a lazy-loaded element hasn't rendered yet, you know. If a third-party script injects a wrapper div that shifts your layout, it's right there.

For analytics work, this means you can query selectors against the production site and get confirmed answers. No guessing, no "I think this is the right class."


The friction problem, and why it matters

The MCP itself is easy to configure. A few lines in .mcp.json and Claude Code picks it up at session start. The friction is upstream of that.

Chrome needs to be running with --remote-debugging-port=9222 for the MCP to connect. On macOS, that means quitting your existing Chrome session and relaunching with the flag. The first time, it's fine. The fifth time in a week, it's annoying enough that you start skipping it for "small" questions. And small questions answered with guesses are where projects accumulate technical debt.

Two pieces solve this:

A shell function (chrome-debug) that handles the quit-and-relaunch dance in one command. It closes Chrome gracefully, waits for the process to terminate, relaunches with --remote-debugging-port=9222, and polls until the debug port responds. One command, no manual steps, works from any terminal.

chrome-debug() {
  # Ask Chrome to quit gracefully (no-op if it isn't running)
  osascript -e 'tell application "Google Chrome" to quit' 2>/dev/null
  # Wait for the process to actually exit before relaunching
  while pgrep -x "Google Chrome" > /dev/null; do sleep 0.5; done
  # Relaunch with the remote debugging port enabled
  open -a "Google Chrome" --args --remote-debugging-port=9222
  echo "Waiting for debug port..."
  # Poll until the DevTools HTTP endpoint responds
  while ! curl -s http://localhost:9222/json > /dev/null 2>&1; do sleep 0.5; done
  echo "Chrome debug port active on 9222"
}

A Claude Code skill (/chrome-debug) that wraps the shell function, verifies the MCP connection is live, and optionally navigates to a URL, all from inside Claude Code without switching windows. The skill calls the shell function, confirms the port is responding, and reports back. From that point, every chrome-devtools MCP tool is available in the session.

The skill part matters more than it sounds. When the tool is one slash command away, you use it reflexively. When it requires a context switch to a terminal, a remembered flag, and a manual verification step, you skip it for "small" questions. Those are exactly the questions where guessing introduces errors.


What this changes about how you work

The selector lookup is a small example. The larger pattern is: when you can inspect the live site directly, a whole category of "I'll need to check on that" questions becomes "let me look that up now."

  • Need to confirm which elements exist in a mega menu before writing tracking code? Query the DOM.
  • Not sure whether a component is lazy-loaded or server-rendered? Check the live page.
  • Wondering if a CMS injected extra wrapper markup that changes your selector path? Inspect and find out.
  • Want to verify that the deployed site matches what the source code says it should produce? Screenshot and compare.

Each of those is a question that would otherwise stall work, either because you'd need to context-switch to a browser or because you'd guess and move on. The tool doesn't make you faster at guessing. It makes guessing unnecessary.

Any time you're writing code, documentation, or instructions that reference the live site (selectors, page structure, computed styles, loaded resources) the ability to query the real thing from inside your AI workflow eliminates an entire class of "I think this is right" hedging.


The setup

The full setup has three pieces: the MCP server configuration, the shell function, and the skill.

1. MCP server configuration (in .mcp.json at the project root):

{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-chrome-devtools@latest"]
    }
  }
}

Claude Code reads this at session start and makes the chrome-devtools tools available automatically.

2. Shell function (in your .zshrc or .bashrc):

The chrome-debug function shown above. Source your shell config or restart your terminal after adding it.

3. Claude Code skill (optional, but recommended):

A skill file at .claude/skills/chrome-debug/ that wraps the shell function with verification. This gives you /chrome-debug as a slash command inside Claude Code, so you never need to leave the session to set up the connection.
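The exact contents depend on your setup, but a minimal sketch of .claude/skills/chrome-debug/SKILL.md might look like this (the frontmatter fields follow the standard skill layout; the step wording is an assumption, not a copy of my file):

```markdown
---
name: chrome-debug
description: Relaunch Chrome with remote debugging enabled and verify the MCP connection
---

1. Run the `chrome-debug` shell function to relaunch Chrome with
   `--remote-debugging-port=9222`.
2. Confirm the debug port responds: `curl -s http://localhost:9222/json/version`.
3. If the user supplied a URL, navigate to it with the chrome-devtools
   MCP navigation tool.
4. Report the connection status back to the user.
```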

Once all three are in place, the workflow is: /chrome-debug → Chrome relaunches with debugging → MCP connects → you can query the live DOM from inside Claude Code. The whole startup takes about five seconds.


When to reach for this versus other tools

This isn't the right tool for everything. Some guidelines from using it on real projects:

Use the Chrome DevTools MCP when you need the live rendered page: runtime-generated markup, computed styles, lazy-loaded content, third-party script effects. Anything where the source code and the browser output might diverge.

Use Playwright or a headless scraper when you need to automate a multi-step flow: click through a navigation, fill a form, capture a sequence of states. The Chrome DevTools MCP is better for point queries; Playwright is better for scripted interactions.

Use curl or fetch when you just need the raw HTML response. If you're checking whether a meta tag exists in the initial HTML, you don't need a full browser.
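For that kind of check, a plain fetch plus a string test is enough. A sketch (`hasMetaTag` is an illustrative helper, and the regex is deliberately naive):

```javascript
// Illustrative: check the *initial* HTML response for a named meta tag,
// without rendering the page or executing its JavaScript.
// The naive regex inspects raw markup only; tags injected at runtime
// won't appear here -- that's exactly when you'd reach for the MCP instead.
function hasMetaTag(html, name) {
  const re = new RegExp(`<meta[^>]*\\bname=["']${name}["']`, 'i');
  return re.test(html);
}

// Usage with Node 18+'s built-in fetch:
// const html = await (await fetch('https://example.com')).text();
// hasMetaTag(html, 'description');
```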

Read the source code when you're changing the implementation, not inspecting the output. The MCP shows you what the browser did with the code. The source shows you what you intended.

The sweet spot is exactly where I started: you have a specific question about the live page, and you want a verified answer in your current working context, without switching tools.
