
The Hard Part: Understanding Attention Through Behavior

January 27, 2026 · 8 min read
The biggest, most ambitious thing I've ever built—and why that terrifies me.

MindCap is the biggest, most complex thing I've built. And I've been struggling.

Not with the code—the code is coming together. The extension tracks visits, the Topic Registry aggregates data, the categorization engine classifies pages with confidence scoring. Working. Tested. Deployed.

The struggle is deeper: how do you understand attention through behavior?

The Problem I Keep Running Into

I can measure everything. Time on page, scroll depth, click patterns, video watch time, exit behavior—MindCap captures it all. I have dozens of engagement signals per page visit, aggregated into topic registries, classified into 14 categories with confidence scores.
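Concretely, each visit produces a record shaped something like this, which then rolls up into weekly per-topic aggregates. This is a simplified sketch; the names are illustrative, not the actual schema:

```typescript
// A simplified sketch of the per-visit record (illustrative names,
// not MindCap's actual schema).
interface VisitSignals {
  url: string;               // stays on-device, never synced
  startedAt: Date;
  activeSeconds: number;     // time the tab was actually focused
  maxScrollDepth: number;    // 0..1, deepest point reached on the page
  clickCount: number;
  videoWatchSeconds: number;
  exitType: "closed" | "navigated" | "idled";
}

// Visits roll up into weekly, per-topic aggregates:
interface TopicWeekAggregate {
  topic: string;
  weekStart: string;         // ISO date of the week's first day
  totalActiveSeconds: number;
  visitCount: number;
  meanEngagement: number;    // composite 0..1 score from the signals above
}
```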

But what does it mean?

I spent two hours on YouTube yesterday. Was that mindless scrolling? Deep learning? Background music while I worked? The raw data can't tell me. Time is just time. Clicks are just clicks.

Attention isn't just where you look. It's why you look, how you engage, and what you take away. And none of that is directly observable.

I've written about session detection, topic aggregation, and categorization like they're solved problems. They're not. They're approximations—my best guesses about how to infer internal states from external behavior.

The Gap Between Behavior and Understanding

Here's an example that keeps bothering me:

Person A spends 30 minutes reading a technical article, scrolling slowly, reaching 95% depth, following 3 related links. Person B spends 30 minutes on the same article, scrolling at the same pace, same depth, same link behavior.

MindCap sees identical patterns. But Person A is deeply absorbed, connecting the ideas to what they already know and walking away with real understanding, while Person B is half-reading during a long meeting, eyes moving but mind elsewhere, retaining almost nothing.

Same behavior. Completely different attention quality. And I have no way to tell them apart.

This bothers me more than any technical challenge I've faced. I'm building a tool to understand attention, and the fundamental insight—actual understanding—is hidden behind the veil of consciousness.

What I've Learned About the Limits

After wrestling with this, I've started making peace with the limitations. Here's where I've landed:

1. Patterns Are Better Than Points

A single browsing session tells me almost nothing. But patterns across hundreds of sessions? Those reveal something real. If you keep returning to the same topic week after week, you're interested—regardless of how engaged you were on any particular visit. If a category is trending upward over months, that's meaningful signal.

MindCap can't tell me how well you paid attention to any one article. But it can tell me what subjects keep pulling you back, how your interests evolve, and where your curiosity takes you when you're left to wander.

2. Shape Matters More Than Measurement

I've been thinking about this like a map. A map doesn't tell you how much you enjoyed walking through a neighborhood—it tells you where you went. That's still incredibly useful. You can look at a map of your year and see: I spent a lot of time in this area, I never went to that area, my path was meandering rather than focused.

MindCap can show you the shape of your digital curiosity. Where does your attention cluster? How do topics connect? Are you going deep or wide? That's not the same as knowing whether you were paying attention—but it's something.

3. Self-Reflection Requires Data

The whole point of MindCap isn't to judge people's browsing. It's to enable reflection. And reflection requires raw material.

When MindCap tells you "you spent 40% of your browsing time on technology this month, up from 25% last month," that's an invitation to reflect: Huh, I have been learning more tech lately. Is that intentional? Is it serving me?

I can't answer those questions for you. But I can give you the data to ask them.
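For what it's worth, the arithmetic behind that sentence is trivial. A minimal sketch, with simplified shapes and hypothetical names:

```typescript
// Sketch of the arithmetic behind the "40% this month, up from 25%"
// statement. Simplified shapes, hypothetical names.
type CategorySeconds = Record<string, number>; // category -> active seconds

function categoryShare(totals: CategorySeconds, category: string): number {
  const all = Object.values(totals).reduce((sum, s) => sum + s, 0);
  return all === 0 ? 0 : (totals[category] ?? 0) / all;
}

function shareChange(
  category: string,
  thisMonth: CategorySeconds,
  lastMonth: CategorySeconds,
): string {
  const now = Math.round(categoryShare(thisMonth, category) * 100);
  const before = Math.round(categoryShare(lastMonth, category) * 100);
  return `You spent ${now}% of your browsing time on ${category} this month, ` +
    `${now >= before ? "up" : "down"} from ${before}% last month.`;
}

// shareChange("technology", { technology: 4000, other: 6000 },
//             { technology: 2500, other: 7500 })
// -> "You spent 40% of your browsing time on technology this month,
//     up from 25% last month."
```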

Why I'm Still Excited

Despite this uncertainty, I'm more excited about MindCap than I've ever been. Here's why:

The infrastructure is done. After three posts about data models, topic registries, and categorization engines, the foundation is solid. Every page visit gets captured, categorized, and aggregated. The hard plumbing work is complete.

And now I get to build the interesting part: pattern detection and insight generation.

This is what MindCap has been building toward from the beginning. All that infrastructure exists to answer one question: what patterns emerge from your browsing that you can't see in the moment?

The Nine Patterns I'm Implementing

Tomorrow I start building the pattern detector. I've designed nine patterns that work within MindCap's limitations: they don't claim to read your mind, but they surface observable patterns that might be meaningful (a sketch of the simplest one follows the list):

recurring_interest: Topics you keep returning to across multiple weeks
growing_interest: Categories accelerating week-over-week
paused_exploration: Topics that used to appear but have gone quiet
passive_browsing: High time on pages but low engagement scores
unanswered_question: Repeated searches without resolution
rabbit_hole: The flow and branching of curiosity-driven sessions
temporal_pattern: Behavior tied to time of day or day of week
learning_style: Preferences for videos, articles, docs, or discussions
exploration_curiosity: Tentative interest, peeking at topics without diving in
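To ground this, here's roughly what recurring_interest amounts to: a thresholded presence check over the weekly aggregates. The thresholds and shapes are placeholders I'll tune:

```typescript
// Sketch of recurring_interest: a topic counts if it shows up with
// non-trivial time in at least minWeeks distinct weeks.
interface WeeklyTopicTime {
  topic: string;
  weekStart: string;     // ISO date identifying the week
  activeSeconds: number;
}

function detectRecurringInterests(
  weeks: WeeklyTopicTime[],
  minWeeks = 3,
  minSecondsPerWeek = 300,
): string[] {
  const weeksPerTopic = new Map<string, Set<string>>();
  for (const w of weeks) {
    if (w.activeSeconds < minSecondsPerWeek) continue; // skip trace visits
    const seen = weeksPerTopic.get(w.topic) ?? new Set<string>();
    seen.add(w.weekStart);
    weeksPerTopic.set(w.topic, seen);
  }
  return [...weeksPerTopic.entries()]
    .filter(([, seen]) => seen.size >= minWeeks)
    .map(([topic]) => topic);
}
```

The real version will need decay and category context, but the skeleton is just a counting problem over the registry.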

I'm most excited about rabbit_hole. It's the founding vision of MindCap: instead of warning "you went down a rabbit hole" like a scolding productivity app, it maps the journey.

"You started in technology, branched into science, touched on history, and ended up in philosophy. High spread, medium depth, loosely coherent—a wandering journey."

That's not judgment. That's insight. And it's exactly the kind of self-knowledge I want MindCap to enable.
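A rough sketch of how a summary like that could be assembled from one session's ordered category list. The heuristics here are placeholders, not the final detector:

```typescript
// Sketch of assembling a journey summary from one session's ordered
// category list. Heuristics are placeholders, not the final detector.
function summarizeJourney(categories: string[]): string {
  if (categories.length === 0) return "Empty session.";
  const distinct = [...new Set(categories)];
  const spread =
    distinct.length >= 4 ? "High" : distinct.length >= 2 ? "Medium" : "Low";
  // Coherence: how often consecutive visits stayed in the same category.
  let stays = 0;
  for (let i = 1; i < categories.length; i++) {
    if (categories[i] === categories[i - 1]) stays++;
  }
  const coherence =
    categories.length < 2 ? "single-stop"
      : stays / (categories.length - 1) > 0.5 ? "tightly coherent"
      : "loosely coherent";
  const middle = distinct.slice(1, -1).join(", ");
  return `You started in ${distinct[0]}` +
    (middle ? `, branched into ${middle},` : ",") +
    ` and ended up in ${categories[categories.length - 1]}. ` +
    `${spread} spread, ${coherence}.`;
}

// summarizeJourney(["technology", "science", "history", "philosophy"])
// -> "You started in technology, branched into science, history,
//     and ended up in philosophy. High spread, loosely coherent."
```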

The Biggest Thing I've Ever Built

It's not the code complexity, though there's plenty. It's not the cross-platform architecture (extension + backend + AI integration). It's not even the privacy engineering (keeping URLs local, syncing only insights).

It's that MindCap is trying to do something genuinely new. I've looked for tools that do what I want, and they don't exist. Every attention tracker I've found is either a productivity cop that blocks sites and shames you for wasted time, or a passive dashboard that charts hours per site without saying what any of it means.

None of them help you understand your curiosity. None of them map the shape of your exploration. None of them celebrate rabbit holes as a form of learning instead of shaming them as distraction.

Building something new is terrifying. There's no template to follow. No existing product to copy. Every design decision is a guess about what will actually be useful.

But it's also exhilarating. Because if I get it right, I'll have built a tool that genuinely helps people understand their own minds.

What's Actually Hard

Before I dive into pattern detection, some difficulties I've faced:

Architectural Decisions Without Precedent

Every design choice feels consequential because I can't look at how others solved it. Should topic time be distributed equally across keywords or weighted by frequency? Should confidence scores decay over time or stay fixed? Should rabbit hole detection use time-based windowing or semantic clustering?
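To make one of those calls concrete, here are the two candidate answers to the keyword-weighting question, sketched side by side with hypothetical names:

```typescript
// The keyword-weighting question, made concrete: a page's active time
// has to be split across the topics it matched. Two candidate answers.
function splitEqually(
  seconds: number,
  topics: string[],
): Map<string, number> {
  return new Map(
    topics.map((t) => [t, seconds / topics.length] as [string, number]),
  );
}

function splitByFrequency(
  seconds: number,
  counts: Map<string, number>, // topic keyword -> occurrences on the page
): Map<string, number> {
  const total = [...counts.values()].reduce((a, b) => a + b, 0);
  return new Map(
    [...counts.entries()].map(
      ([t, c]) => [t, seconds * (c / total)] as [string, number],
    ),
  );
}
```

Equal splits are blunt but stable; frequency weighting feels more faithful but rewards keyword-stuffed pages.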

I've made dozens of these calls, and I honestly don't know if they're right. They seem reasonable. They pass my tests. But I won't know if they produce meaningful insights until real people use the tool.

The Inference Problem

As I described above: behavior doesn't directly reveal attention. Every metric I track is a proxy for something I can't observe. I've accepted this, but it still makes me uncomfortable. MindCap will inevitably get things wrong about individual sessions. I can only hope the aggregate patterns are useful enough to justify the noise.

The Loneliness of Building Alone

I have Claude Code to brainstorm with, but the vision is mine. When I'm stuck on a design question at 2am, Claude can suggest options—but the decision is still mine. The accountability is mine. If this whole project turns out to be a waste, that's on me.

That's the hardest part. Not the code. The uncertainty.

Tomorrow: Pattern Detection Begins

Tomorrow I start implementing the pattern detector. I'm going to do it systematically—one pattern at a time, with tests and validation at each step.

The Topic Registry gives me weekly time aggregates. The categorization engine gives me confidence-scored classifications. The engagement tracker gives me quality signals. Now I get to combine all of that into patterns that might help people understand their digital attention.

I'm nervous and excited. Building the thing I've wanted to exist for years.

The vision: A tool that looks at your browsing and says "here's the shape of your curiosity, here's how your interests are evolving, here's what captures your attention when you're not paying attention."

Not guilt. Not restriction. Just understanding.

Updates soon.

Jen Kim

Developer, Claude Whisperer. Building tools for curiosity, creativity, and chaos.
