Pillar

How I Build with AI

A reading path through everything I have written about building with large language models, in the order I would re-read it myself.

There are now twelve essays on this site about building with AI. They were not written as a course. They were written as I was learning, one practical problem at a time. The pieces are connected, but the connections are not obvious if you read them in publish order, and they are not obvious if you read them in reverse either. This page is the path I would walk a friend through if they asked me, today, "how should I read your AI writing?"

The thesis underneath all of it is simple. An AI coding assistant is not a faster typist. It is a junior engineer with infinite stamina, no taste, and no memory. Everything I have written is some version of, "here is the part of the operating model you have to put around it so that the work it produces is good, the costs are bounded, and you do not lose your mind in the process."

The path moves from foundation to system to discipline to economics to evaluation to application to strategy. Seven stops, twelve essays. Roughly two hours end to end if you read every one.

1. Start with the foundation

How I Build Software with AI is the entry point. It describes the spec-driven workflow I run every day, and the reason I run it. The short version: most AI coding workflows fail not because the model is bad but because the human did not write a spec the model could be held to. The post lays out the multi-model, multi-defense-layer system I use as a baseline, and every other post on this list builds on top of it.

If you read only one piece on this page, read this one.

2. Then build the personal engineering org

The single biggest leverage move in the last year was treating one Claude Code terminal as if it were a small engineering team rather than a chatbot. The Personal Engineering Org describes how three open-source tools, gstack, Superpowers, and Visual-Explainer, snap together into something that thinks strategically, enforces process, and communicates visually. It is the post I get the most questions about.

The honest companion to it is The Limits of the Personal Engineering Org (coming soon), which catalogs the six concrete situations where the right move is to switch the whole stack off and reach for something smaller. Reading these two together is the point. Tools that look universal almost never are.

3. Adopt the discipline before you ship

Speed is the seductive part of AI coding. It is also the part that breaks production. The 5-Gate Rule tells the story of a deploy plan I was about to ship that, on inspection, would have introduced shell injection, SQLite corruption on every cycle, and a monitoring alert that fires when nothing is broken. One review did not catch it. Five did. The post is the rule I now apply to every AI-generated change before merge.
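The underlying idea is easy to sketch: a change merges only when every independent review gate approves it, and a single dissent blocks the merge. Here is a minimal, hypothetical illustration in Python. The gate names and checks are my own stand-ins, not the actual five gates from the post:

```python
# Hypothetical sketch: each gate is an independent check over a diff,
# and a change merges only if ALL gates approve. The specific checks
# below are illustrative, not the real rule from the essay.
from typing import Callable

Gate = Callable[[str], bool]

def no_shell_injection(diff: str) -> bool:
    # Reject shell commands built by interpolating untrusted strings.
    return "os.system(f" not in diff and "subprocess.run(f" not in diff

def no_unsafe_sqlite_writes(diff: str) -> bool:
    # Require database writes to go through a transaction helper.
    return "INSERT" not in diff or "with db.transaction()" in diff

def review(diff: str, gates: list[Gate]) -> bool:
    # Every gate must pass; one "no" blocks the merge.
    return all(gate(diff) for gate in gates)

gates = [no_shell_injection, no_unsafe_sqlite_writes]

safe = 'with db.transaction():\n    db.execute("INSERT INTO logs VALUES (?)", (msg,))'
risky = 'os.system(f"deploy {user_input}")'

print(review(safe, gates))   # True: every gate approves
print(review(risky, gates))  # False: the injection gate blocks it
```

The point of the structure is that each gate looks for one failure class in isolation, which is why a single review pass misses what five focused passes catch.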

Discipline shows up in design too, not just code. Impeccable: The Design Skill That Made Claude Code My Most Valuable UI/UX Asset (coming soon) is about a Claude Code skill by Paul Bakaus that gives the model the actual vocabulary designers use, then enforces it across twenty-three commands. It is one of two skills that have become permanent fixtures in my stack.

4. Take the economics seriously

Tokens are the new line item, and the line item compounds. 10 Strategies I Use to Slash Token Usage Without Compromising Quality and Reliability is the practical post. It is the one I send to anyone who tells me their AI bill is getting out of hand, because the practices in it have cut my own consumption by sixty to eighty percent without measurably hurting output.

Token Anxiety is the human counterpart. It is about what happens when your day-to-day cognition gets quietly tethered to a rate limit. I think about this post often when I read other people writing about AI productivity gains.

5. Evaluate before you adopt

I test more AI products than I care to admit, and most disappoint. My Personal AI Evaluation Framework: How I Size Up Every New Tool I Come Across is the eight-dimension framework I use to decide whether any new tool is worth my time and money. If you build AI products, this post is also a checklist for what you have to be unambiguously good at.

The same eight-dimension thinking applies one layer down, to the integration question. Finding the Right Place for MCP: My JIRA Story and the Honest Trade-Off (coming soon) is what I learned the hard way about when MCP earns its keep and when direct tool calling wins. Multi-user auth is finally maturing in 2026, but the architectural decision still has to be made on the merits.

6. Apply the system to code you already have

Most published AI workflows quietly assume you are starting from scratch. Real engineering work is mostly about improving software that already exists, with users on it. How I Dramatically Improve My Existing Applications Using Claude Code, gstack, and Superpowers is the exact flow I use to safely improve production code. It builds on the personal engineering org and the 5-gate rule, and it shows them in the messier conditions where they actually have to hold up.

7. Read the strategy at the industry level

The last two pieces step back from the workbench and look at the field.

Headless AI Is Here, and It's About to Change Everything (coming soon) argues that the visible chat-window era of AI is the small story. The bigger story is what happens when LLMs sit invisibly inside other software, doing the work without ever surfacing a chat at all. Salesforce is the trigger that made this impossible to ignore.

Where the Value Accrues: Vertical Integration, Horizontal Orchestration, and the Real Architecture of AI (coming soon) is the longest piece on this list and the most important if you are deciding what to build. It applies Christensen's Law of Conservation of Attractive Profits to the current AI stack and lays out where the durable margin is going to live. It is the piece that ties everything else together, because it tells you why the operating discipline I have been describing matters in the first place.

How to use this page

If you have an hour, read sections 1, 2, and 3. That is the spine of the operating model.

If you have a Saturday, read every post in the order above. The arc of the argument lands at the end, not at the beginning.

If you build AI products, sections 5 and 7 will read differently from the rest. Treat them as a market map.

This pillar is updated when new AI essays publish. The companion category index lives at /ai/, with every post in chronological order and full RSS at /feed.xml.