
The Part Nobody Talks About: Keeping an AI on the Same Page

Feb 12, 2026

I’ve been building a browser extension called MindCap for a few weeks now. It tracks your browsing behavior and maps your curiosity — where your attention goes, how your rabbit holes branch, what you keep coming back to. I’m building it with Claude Code as my primary development partner, and I’ve learned something that I don’t see many people writing about.

The hardest part of working with AI isn’t the code. It’s the context.


The Problem You Don’t See Coming

When you start a project with an AI coding assistant, the first few sessions feel effortless. You explain what you’re building, it understands, you move fast. Then the project grows. Your codebase gets bigger. Your conversations get longer. And one day, mid-session, the AI runs out of room to think.

That’s what happened to me around Session 10. I was deep into a refactor — replacing a 15-category content classification system with a simpler intent-based model — and the conversation hit its context limit. The AI summarized what we’d been doing, compressed the history, and carried on. But things got lossy. Details I’d explained earlier were gone. Decisions we’d made together needed to be re-explained.

I realized I was treating the AI like a colleague with perfect memory, and it isn’t one. It’s more like a brilliant contractor who shows up every morning with no idea what happened yesterday.


What I Actually Did About It

The obvious first step was documentation. I’d been keeping project docs from the start — architecture notes, a decision log, session summaries. But I was keeping them for me, not for the AI. The format was wrong. Too much narrative, not enough structure. Too much “why we considered this” and not enough “here’s what we decided and where to find it.”

So I restructured everything around one principle: the AI should be able to restore full project context by reading two files.

The first file is project-state.md. It’s the living document — what we’re working on right now, what changed in the last session, what’s blocked, and what’s next. I update it at the end of every work session. It’s not a journal. It’s a briefing.
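For concreteness, here is roughly the shape of it. The section names track what I just described; the entries under them are illustrative, not copied from my real file.

```markdown
# MindCap Project State

## Now
Replacing the 15-category content classifier with the intent-based model.

## Changed last session
- Intent scores wired into the pipeline; the old category field is still present for now.

## Blocked
- Need to decide how (or whether) to migrate already-captured events.

## Next
- Write the Dexie.js migration, then re-check rabbit-hole grouping on fresh data.
```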

The second is a memory file that persists across conversations. It holds the stable stuff: where important files live, what the data schema looks like, how the pipeline works, what mistakes to avoid. Think of it as institutional knowledge for a team of one human and one AI.
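Mine is organized by topic, not chronology. A trimmed and partly hypothetical sketch of its shape; the repo layout and paths here are assumptions for illustration:

```markdown
# MindCap Project Memory

## Where things live
- Browser extension (Plasmo + Dexie.js): capture and local storage.
- Backend (FastAPI + Supabase): sync, classification, analysis.

## Data model
- Captured events are tagged with one of six behavioral intents, not content categories.

## Pipeline
capture -> local store -> sync -> intent classification -> rabbit-hole mapping

## Mistakes to avoid
- Don't reintroduce content categories; see the decision log for why they were dropped.

## Procedures
- "update daily project documents": the end-of-session routine (state, log, decisions, memory cleanup).
```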

I also trimmed the project’s instruction file from 90 lines to 10. That file gets loaded into every single conversation, so every unnecessary line is wasted space. The detailed reference material moved to memory, which is still always available but much more targeted.
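For Claude Code that file is typically CLAUDE.md. What survives the trim is mostly pointers rather than content; something in this spirit (illustrative, not my literal file):

```markdown
# MindCap: working instructions

Read project-state.md before starting any task.
Check project memory for file locations, schema, and pipeline details.
Record non-trivial decisions in the decision log as you make them.
At the end of a session, run "update daily project documents".
Prefer small, reviewable changes; ask before schema migrations.
```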


The Workflow That Made It Stick

Documentation only works if you actually do it. I built a simple end-of-session routine: update the project state, log what happened, record any decisions, clean up the memory file. It takes five minutes. I trigger it by telling the AI “update daily project documents” and it knows exactly what to do — because the procedure itself is stored in memory.
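The stored procedure itself reads like a checklist. Roughly:

```markdown
## Routine: update daily project documents

1. Update project-state.md: current focus, what changed, blockers, next steps.
2. Append a dated summary to the session log: what happened, what's still open.
3. Add any decisions made today to the decision log (rationale, alternatives, files).
4. Tidy the memory file: drop stale facts, fold new stable ones into the right section.
```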

This sounds mundane, and it is. That’s the point. The sessions where I skip it are the sessions where the next conversation starts slow, with me re-explaining things I’ve already explained.


What I Didn’t Expect to Learn

The surprising part is how much this discipline improved my own thinking about the project. Before I started managing context deliberately, decisions lived in my head. I knew why we chose FastAPI over Next.js, why we track six behavioral intents instead of fifteen content categories, why rabbit holes are the core feature and not a warning system. But I hadn’t written most of it down in a way that would survive me forgetting.

Now I have a decision log with dates, rationale, alternatives considered, and files changed. Not because the AI needs it — although it does — but because I need it. Two weeks from now, when I’m questioning a choice I made, the answer is already written down. By me, at the time I made it, when my reasoning was fresh.
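The format is boring on purpose. Each entry is just a handful of fields, roughly:

```markdown
## YYYY-MM-DD: short title (e.g. "Backend: FastAPI over Next.js")

- Decision: what was chosen, in one sentence.
- Rationale: why, written at the time, while the reasoning is fresh.
- Alternatives considered: what was rejected, and the main reason.
- Files changed: the paths touched, so future-me can find the work.
```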

There’s a concept in software called “documentation-driven development” — the idea that writing the docs first forces you to think clearly about what you’re building. Working with an AI enforces something similar, except the pressure is constant. If you can’t explain it clearly enough for the AI to pick up where you left off, you probably haven’t thought it through well enough yourself.


The Real Constraint

People talk about context windows like they’re a temporary limitation — something that will go away when models get bigger. Maybe. But I think the underlying challenge is permanent. Any project of meaningful complexity generates more context than fits in a single conversation. That’s true whether your collaborator is an AI with a 200,000-token window or a human colleague who was on vacation last week.

The skill isn’t having unlimited context. It’s deciding what matters enough to preserve, structuring it so it’s useful later, and building habits that keep it current. That’s not an AI problem. That’s a project management problem. The AI just makes it impossible to ignore.


What I’d Tell Someone Starting Out

If you’re building something real with an AI coding tool — not a weekend project, but something you’ll work on for weeks or months — start your documentation system on day one. Not because you’ll need it on day one, but because by the time you need it, you’ll wish you’d started earlier.

Keep a project state file. Keep a decision log. Keep session notes. Make them structured, not narrative. Update them every time you stop working. And when the AI runs out of context mid-conversation and you don’t lose a beat — that’s when you’ll know it’s working.

MindCap is a personal project — a browser extension that maps curiosity patterns. Built with Plasmo, Dexie.js, FastAPI, Supabase, and Claude. You can follow the development on this blog.