Signal Briefing
An AI-enriched intelligence briefing pipeline that pulls from a mix of public sources, filters out the noise, and renders a focused, scannable report.
What I wanted
I wanted to see if I could build something that would be genuinely high-value for an executive. Senior leaders and managers don’t have time to sift through the daily firehose of news, regulatory updates, contract activity, and public filings — but those inputs can shape the decisions they have to make next week. The hypothesis was simple: if AI can reliably surface the handful of items that matter each day, and flag the decisions that might follow from them, that’s real leverage on someone’s time. I wanted to see if I could direct Claude to build a tool an executive could actually lean on.
How I got it
I gave Claude the use case and let it build the pipeline. Watching the first few runs, it became clear that the interesting work wasn’t in the language-model calls at all — those produced decent summaries from the beginning. The hard parts were the boring engineering around the model: how do we decide an item is a duplicate of something already delivered, what happens when a source is slow, how do we test a multi-day run without hitting live services. Most of my direction on this project was pushing Claude to keep investing in those reliability layers, because an executive briefing that’s sometimes wrong is worse than no briefing at all — the executive just stops trusting it.
What it does
On a schedule, the pipeline pulls from a handful of public data sources, filters and deduplicates them against everything it’s already delivered, asks a language model to summarize and categorize what’s left, and renders the result as a scannable report — readable as an email or on a web page. At the top of every run is a short synthesis block that surfaces the items most worth an executive’s time, plus a callout for anything that looks important enough to act on today. If one of the sources was slow or broken, a small banner tells you which one, so a quiet briefing never silently means “you missed something.”
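The per-run flow described above can be sketched as a simple loop. Everything here (the names `run_briefing`, `fetch`, `seen`, the render signature) is illustrative rather than the project's actual API, but it shows the key property: a failed source is recorded and surfaced as a banner instead of silently vanishing.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one scheduled run. Names and shapes are
# assumptions; the point is that a source failure becomes visible
# output (the degraded-source banner), never a silent gap.

@dataclass
class RunResult:
    items: list
    degraded_sources: list = field(default_factory=list)

def run_briefing(sources, dedup, enrich, render):
    result = RunResult(items=[])
    for source in sources:
        try:
            fetched = source.fetch()
        except Exception:
            # A broken or slow source is recorded, not dropped:
            # the renderer turns this list into the banner.
            result.degraded_sources.append(source.name)
            continue
        result.items.extend(it for it in fetched if not dedup.seen(it))
    summaries = [enrich(it) for it in result.items]
    return render(summaries, degraded=result.degraded_sources)
```

The important design choice is that degradation is part of the run's return value, so the renderer cannot forget to show it.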
Why it matters to me
I built this as a proof of concept for a hypothetical executive. The idea was to test whether AI could become a kind of always-on chief of staff — something that watches the firehose for you, brings back what actually matters, and is honest about what it isn’t sure of. That framing is why the whole project kept coming back to trust. An executive doesn’t have time to spot-check every summary, so the product has to earn confidence on its own: by handling its bad days visibly, by never quietly dropping a source, by telling you when something changed. That’s the shape of a tool someone would actually use, as opposed to one they’d install and abandon after a week.
What I learned
The biggest shift was realizing that in an AI product, the interesting engineering work happens everywhere except the model call. Claude can summarize text fine. That’s the free part. The hard parts — what counts as a duplicate, how to test a multi-day run without burning tokens, what to show the reader when half the sources are down — are all classic systems problems, and they’re the parts that decide whether anyone comes back tomorrow. I came into this project thinking the prompt was the product. I came out understanding the prompt is the smallest piece of it.
Under the hood — for the technical folks
Claude built this as a scheduled Python pipeline. A fetcher layer handles several public data sources — regulatory filings, official rulemaking documents, contract data, news feeds, and public social/community posts. Each fetcher returns a typed envelope with the same shape so the downstream stages don’t need to care where an item came from.
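The shared envelope idea can be sketched as a frozen dataclass; the field names here are assumptions, not the project's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal sketch of the typed envelope every fetcher returns.
# Field names are illustrative assumptions.
@dataclass(frozen=True)
class Item:
    source: str          # e.g. "news", "filings", "forum"
    uid: str             # source-local identifier
    title: str
    body: str
    url: str
    published: datetime

# Every fetcher returns list[Item], so dedup, enrichment, and
# rendering never need to branch on where an item came from.
```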
The deduplication layer is backed by SQLite in write-ahead-log mode with a rolling time window. Production runs share a dedup namespace so a later run never re-delivers items an earlier run already sent. Test runs use a separate namespace so the simulation harness never contaminates production state, and the whole dedup database is backed up daily with rolling retention.
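A minimal sketch of that store, assuming the described pieces (WAL mode, a namespace column for the prod/test split, a rolling time window); table and column names are invented for illustration.

```python
import hashlib
import sqlite3
import time

# Sketch of a namespaced, rolling-window dedup store.
# Table/column names are assumptions, not the project's schema.
class DedupStore:
    def __init__(self, path, namespace, window_days=30):
        self.db = sqlite3.connect(path)
        self.db.execute("PRAGMA journal_mode=WAL")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS seen "
            "(ns TEXT, key TEXT, ts REAL, PRIMARY KEY (ns, key))")
        self.ns = namespace              # "prod" vs "test" keeps runs isolated
        self.window = window_days * 86400

    def check_and_record(self, content: str) -> bool:
        """True if this content was already delivered inside the window."""
        key = hashlib.sha256(content.encode()).hexdigest()
        now = time.time()
        # Expire entries that have aged out of the rolling window.
        self.db.execute("DELETE FROM seen WHERE ns=? AND ts < ?",
                        (self.ns, now - self.window))
        cur = self.db.execute("SELECT 1 FROM seen WHERE ns=? AND key=?",
                              (self.ns, key))
        if cur.fetchone():
            return True
        self.db.execute("INSERT INTO seen VALUES (?,?,?)", (self.ns, key, now))
        self.db.commit()
        return False
```

Because the namespace is part of the primary key, a test run can exercise the exact same code path without ever touching production's delivered-item history.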
The enrichment layer asks a language model to write a short summary of each kept item, categorize it, and extract the “so what.” Three synthesis blocks render at the top of every run: an at-a-glance overview, a callout for items that need immediate attention, and a set of daily themes.
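The shape of that enrichment call can be sketched as a versioned prompt template plus strict parsing of the model's reply. This uses the standard library's `string.Template` as a stand-in for the project's Jinja templates, and the prompt wording and field names are assumptions.

```python
import json
from string import Template

# Stand-in for a versioned, reviewable prompt template (the project
# uses Jinja; string.Template keeps this sketch dependency-free).
ENRICH_PROMPT_V3 = Template(
    "Summarize the item below in two sentences, assign one category\n"
    "from [$categories], and state the 'so what' for an executive.\n"
    "Respond as JSON with keys: summary, category, so_what.\n\n"
    "TITLE: $title\nBODY: $body")

def build_prompt(item, categories=("pricing", "supply", "regulatory")):
    return ENRICH_PROMPT_V3.substitute(
        categories=", ".join(categories),
        title=item["title"], body=item["body"])

def parse_enrichment(raw: str) -> dict:
    data = json.loads(raw)
    # Fail loudly if the model drifts from the expected shape.
    assert {"summary", "category", "so_what"} <= data.keys()
    return data
```

Keeping the template in a file (rather than inline f-strings) is what makes prompt changes show up as reviewable diffs.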
Two renderers — a responsive HTML email and a standalone web view — share a common data shape so the content stays in sync across formats. A simulation harness can replay a multi-day run against recorded fixtures with golden-file regression checks, so shape changes are visible in a diff before they ship. The whole pipeline runs on a self-hosted container with a watchdog that alerts if a scheduled run is late.
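The golden-file check at the heart of that harness can be sketched in a few lines; the function name, paths, and update flow here are assumptions about how such a check is typically wired.

```python
from pathlib import Path

# Sketch of a golden-file regression check: render recorded fixtures,
# compare against a checked-in snapshot, fail with a reviewable message.
# The render callable and paths are illustrative assumptions.
def check_golden(render, fixtures: list, golden_path: Path, update=False):
    rendered = render(fixtures)
    if update or not golden_path.exists():
        golden_path.write_text(rendered)   # (re)record the snapshot
        return True
    expected = golden_path.read_text()
    if rendered != expected:
        raise AssertionError(
            f"output drifted from {golden_path.name}; "
            "re-run with update=True after reviewing the diff")
    return True
```

Since the fixtures are recorded, this whole check runs offline: no live fetches, no model calls, no token spend.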
Technical highlights Claude built under direction:
- A consistent typed envelope for items from every source, so downstream stages are source-agnostic
- SQLite with WAL mode for concurrent reads during a run, with a prod/test key split so production state is never polluted by simulation
- Rolling-window deduplication keyed on content-stable hashes so small text changes don't re-deliver the same item
- Per-run diagnostics written directly into the rendered output: token counts, fallback counts, per-source fetch stats, and a degraded-source banner
- A template-driven enrichment layer so prompts are versioned, reviewable, and easy to diff
- Two renderers (email and web) with a shared data shape so content stays in sync across formats
- A simulation harness with recorded fixtures and golden-file regression, so the pipeline can be tested end-to-end without hitting any live API or burning any tokens
- Scheduled deployment on a self-hosted container with a watchdog and daily dedup-database backups on a rolling retention window
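The content-stable hash bullet above amounts to normalizing away the parts of an item that churn between fetches before hashing. A minimal sketch, where the exact normalization rules are assumptions:

```python
import hashlib
import re

# Sketch of a content-stable dedup key: strip whitespace noise, case,
# and URL query strings (tracking params) before hashing, so a trivial
# edit upstream doesn't re-deliver the same item. The normalization
# rules here are illustrative assumptions.
def content_key(title: str, url: str) -> str:
    norm_title = re.sub(r"\s+", " ", title).strip().lower()
    norm_url = url.split("?", 1)[0].rstrip("/")
    return hashlib.sha256(f"{norm_title}|{norm_url}".encode()).hexdigest()
```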
Stack: Python · Claude API · SQLite (WAL) · Jinja · feedparser · requests
What a run looks like
Signal Brief
Session 0421
- Harbor Goods' quarterly refresh landed nine days faster than the category norm, pulling forward a signal we'd otherwise have seen next session.
- Meridian Labs' margin-compression story has now been corroborated by a second independent source — promote out of the watchlist.
- Altitude Outfitters swapped their primary supplier from Northwind to Kepler Supply, reshaping category-wide lead times.
- Community mentions of the new category-B format climbed from three last session to eleven — early but non-trivial.
- review · Confirm Harbor Goods refresh with a second-source check.
A faster-than-usual refresh reshapes near-term pricing and triggers the reprice playbook, so a second-source confirmation is the first ask.
Source · Trade Wire · Session 0421
- decide · Greenlight the Meridian Labs deep-dive before the Friday sync.
Second corroborating source promotes this signal above the threshold for a coverage decision — leaving it until Monday costs a session.
Source · Market Notes · Session 0421
- review · Flag the Northwind → Kepler Supply swap to supply leads.
Lead-time shift cascades through three downstream categories where we hold positions — supply-chain leads should see this today.
Source · Supplier Wire · Session 0420
- monitor · Watch the category-B mention cluster for one more session.
Mention volume is up meaningfully session-over-session but still below the action-item threshold; one more session tells us if it's structural.
Source · Forum Digest · Session 0421
- Faster refreshes across the category — three peers shipped refreshes inside a 72-hour window, historically unusual outside a launch cycle.
- Supplier-side consolidation — Kepler Supply is absorbing a larger share of upstream orders, reshaping concentration risk for every mid-market brand in the category.
- Format experimentation picking up — Mentions of the new category-B format are climbing outside the usual peer set, suggesting a new entrant is testing.
Kepler Supply lands three new mid-market supply agreements
Kepler Supply confirmed three new supply agreements with mid-market brands this session, extending a multi-week run of wins. The new deals cover category inventory through the end of the fiscal year.
With Kepler absorbing a larger share of mid-market orders, lead-time risk is concentrating on a single supplier — worth flagging to supply-chain leads ahead of the next planning cycle.
Category-wide pricing softens as inventory rolls over
Market data shows a category-wide softening in unit pricing this session, coinciding with the rolling inventory refresh window. Early read is that the drop is window-specific rather than a structural shift.
Temporary pricing softness may open a short window for opportunistic inventory builds — a quick read from the trading desk would pay for itself.
Altitude Outfitters posts unit velocity well above their own forecast
Altitude Outfitters reported unit velocity meaningfully above their own forecast for the current window. The beat is attributed primarily to a new distribution channel rather than underlying demand.
The distribution-channel lift is reproducible; it's the kind of structural advantage worth benchmarking against before the next planning cycle.
Category-B format mentions climbing across monitored forums
Mention volume for the new category-B format climbed session-over-session across monitored community forums. Pattern is early but non-trivial.
The lasting takeaway: how much of a useful AI product is state management and reliability, not prompts. The interesting parts were the dedup keys, the graceful-degradation banners, and the fixture-backed regression harness. The model does the summarizing; everything around it is what makes the summary trustworthy.