── method ──

How I build

My four-step loop from vague idea to shipped tool, with one worked example.

This is how I work: I notice a problem, I try a rough version, I build a real one, and I live with it long enough to know what’s wrong. The code comes from Claude. The decisions about what to build, what to cut, and when it’s good enough are mine. Below is the loop as it actually played out on Signal Briefing, a daily intelligence briefing tool I’ve been building to see if AI could earn a senior leader’s trust.

01 — the itch

The itch is the part I trust least but need the most. It’s the vague shape of a problem before I know what I’d actually build. For Signal Briefing, the itch was a question I’d been turning over: if AI is as good as people say at summarizing, could it watch the daily firehose — news, regulatory filings, contract activity, public posts — and bring back just the handful of items that would change a senior leader’s decision? I didn’t know if the answer was yes. I wanted to find out.

02 — the first try

The first try is the cheapest thing I can get running. For Signal Briefing, that meant giving Claude the use case and asking for a working pipeline end-to-end — sources, filtering, a language-model summary, and a rendered output. I didn’t tell it how to structure any of that. The point wasn’t to get the answer right on the first pass; it was to see the shape of the problem once it was running against real data. The summaries were decent from the start. The real problem turned out to live in everything around them.
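The shape of that first pass, in miniature: fetch, filter, summarize, render. Every name below is mine for illustration — none of it comes from the actual Signal Briefing code, and the summarizer is a stand-in for the language-model call.

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str
    title: str
    body: str

def summarize(item: Item) -> str:
    # Stand-in for the LLM call; a real version would hit a model API.
    return item.body[:120]

def run_pipeline(items: list[Item], keywords: set[str]) -> str:
    # Filter: keep only items that mention a watched keyword.
    kept = [i for i in items if any(k in i.body.lower() for k in keywords)]
    # Render each surviving item as one line of a plain-text briefing.
    lines = [f"- [{i.source}] {i.title}: {summarize(i)}" for i in kept]
    return "\n".join(lines)
```

The value of a skeleton like this isn’t the output; it’s that once it runs against real data, the hard parts (dedup, stale sources, reliability) announce themselves.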

03 — the real one

The real one is where I commit. Once the first try showed me that the summaries were the easy part, I started pushing Claude on the boring engineering underneath: how to tell if an item is a duplicate of something already delivered, what to do when a source is slow, how to test a multi-day run without burning tokens. Most of the direction I gave on this project was pushing for that kind of reliability — not because the model wasn’t smart enough, but because an executive briefing that’s sometimes wrong is worse than no briefing at all. People just stop trusting it.

04 — living with it

The live-with-it part is where I find out what I actually built. Running Signal Briefing against a real schedule with real sources turned up things the earlier stages couldn’t have predicted — a source that went quietly stale, a synthesis block that looked right in testing but read as noise a few runs in, a small bug that only showed up when the dedup window rolled over. I kept what held up and cut what didn’t. Most of what I’d call product judgment happens here, after the thing is running.
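The quietly-stale source is the kind of failure a small health check can catch. A minimal sketch, under my own assumed names: track when each source last produced a genuinely new item, and flag any that has been silent longer than its cadence should allow.

```python
from datetime import datetime, timedelta

def stale_sources(last_new_item: dict[str, datetime],
                  now: datetime,
                  max_silence: timedelta = timedelta(days=3)) -> list[str]:
    # A source is "quietly stale" if it's still reachable but hasn't
    # yielded a new item in longer than expected.
    return sorted(s for s, t in last_new_item.items()
                  if now - t > max_silence)
```

A check like this turns a silent failure into a line in the briefing’s footer — which is the difference between noticing in a day and noticing in a month.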

── contact ──

Get in touch