Can You Quantify a Writing Voice?
I fed nine years of my writing into a 25-dimension analysis pipeline. It handed back a spec sheet for my voice. With tolerances.
Most knowledge bases serve one audience. Ours serves two through the same data: a web UI for humans and an MCP server for Claude Code. Same wikilinks, same search, same schema. Two interfaces, one system.
A working AI pipeline was silently failing on 16% of documents, fabricating identifiers for 13% of records, and paying full extraction costs on every run even when nothing had changed. Here's what the redesign looked like.
AI-assisted PRs ignore project standards because the tools never see them. CLAUDE.md and AGENTS.md fix that. Your conventions load into the contributor's AI tool the moment they clone the repo.
The compound error argument against AI agents has real math behind it. But the assumptions it smuggles in describe a system nobody serious is actually building.
Shipping "Version 0.x.y" as a changelog wasn't cutting it. So I built a pipeline that diffs two AppImage builds of claude-desktop-debian, feeds the deobfuscated JS hunks through Sonnet, and synthesizes actual release notes.