Mischa Sigtermans


Everyone's giving Cowork everything, and thanking it after

Within 24 hours, people handed Claude Cowork their tax documents, emails, and files. Then they posted about it. Here's what nobody's asking.


Anthropic released Cowork this week. If you missed it: it's Claude Code for everyone else. Give it access to a folder on your Mac, and it works autonomously. Organizes files. Builds spreadsheets from receipts. Creates reports from scattered notes. Files your taxes. Cleans your inbox.

It's genuinely impressive. Built in ten days. Using Claude Code itself. The recursion is poetic.

But that's not what caught my attention.

What caught my attention is the reaction.

Full trust, zero hesitation

Within hours, my timeline looked like this:

"Already did my taxes with it this morning. 40 hours of work reduced to 15 minutes. Earth shattering."

"Claude Cowork is insane. The closest thing to AGI we have right now, imo."

"People don't realize how massive this is. I honestly believe Claude Code is AGI. The issue was, only tech people are using it. This is Claude Code but for like.. everything."

And then the satire, which, taken with a grain of salt, was actually the most honest take:

"I installed Claude Cowork yesterday. Since then, it has freed 14GB, got boot time from 15s to 6s, nearly doubled battery life, cleared my inbox, filed my taxes, resolved all my open GitHub issues, finished a thought I started at uni, taught my 5yo the piano, fixed my posture, settled a family dispute from 2013, negotiated peace between neighbours, achieved cold fusion, looked at me and sighed. 11/10 would install again."

The satire works because it's barely exaggerated. People are handing over tax documents, email access, file systems, GitHub repos. No friction. No questions. The convenience is so obvious that skepticism feels stupid.

Everyone racing to share what Cowork did for them.

Nobody asking what Cowork learned about them.

The breakthrough isn't the intelligence. It's the access.

People are calling Cowork AGI. They're missing the point.

The model didn't change. Opus 4.5 was already there. The architecture didn't change. It's the same Claude Agent SDK that powers Claude Code.

What changed is the access.

Cowork isn't just reading your prompts anymore. It's reading your files. Your folder structures. Your naming conventions. Your tax documents. Your email patterns. The way you organize your thoughts.

It's learning you from the inside.

That's not AGI. That's intimacy. And intimacy at scale is a business model.

Cowork isn't selling productivity. It's buying trust.

Think about what people handed over in the first 24 hours:

Tax records. Financial documents. Personal emails. Work files. GitHub repositories with proprietary code. Notes. Drafts. The messy folders we never clean up because nobody else sees them.

Taken together, that's the most comprehensive picture of how someone thinks, works, and lives. And people gave it away for the price of a productivity win.

I'm not saying Anthropic is doing something nefarious. I actually trust them more than most.

But trust isn't the point. The point is: what happens next?

What's the switching cost nobody's measuring?

Here's the thing about convenience: it compounds.

Every time Cowork helps you, it learns what helps you. Your preferences. Your patterns. Your shortcuts. The context builds.

After six months of using Cowork, switching to a competitor means losing all of that. Not your files. Those are portable. But the understanding. The relationship. The fact that it knows how you work without you explaining it.

That's not a feature. That's lock-in.

We've been here before. Facebook held your social graph. Spotify held your listening history and recommendations. Apple held your ecosystem. Switching wasn't technically impossible. It was just costly enough that you didn't.

AI providers are doing the same thing. Just with something less visible: your working relationship with the model.

Your files you can export. Your context you cannot.

We're building the next generation of lock-in, and nobody's talking about it.

Developers figured this out. Everyone else hasn't.

There's a reason every AI coding tool now supports context files. CLAUDE.md. AGENTS.md. Cursor rules. Developers realized early: if the context lives in my repo, I own it. I can move it. I can improve it. I can take it somewhere else.
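For readers who haven't seen one: a context file is just a plain markdown file checked into the repo, which the agent reads on every run. A minimal sketch of a CLAUDE.md might look like this (the contents are illustrative, not a prescribed format):

```markdown
# CLAUDE.md — project context the agent reads on every run

## Conventions
- PHP 8.3, Laravel 11. Follow PSR-12.
- Prefer Eloquent scopes over raw query builder chains.

## Review preferences
- Flag any controller method longer than 20 lines.
- Suggest form requests for validation, never inline rules.

## Never do
- Don't touch the `migrations/` folder without asking first.
```

Because this lives in version control next to the code, it moves when the repo moves: clone the repo into any tool that reads the same file, and the context comes with it.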

I built a Claude Code plugin that channels Taylor Otwell's code review philosophy. His opinions on Laravel conventions. His preferences. His standards. It works because context is portable when you control it.

Regular users don't have this.

Your preferences live in Claude's memory. Your conversation history lives on Anthropic's servers. Your projects live scattered across ChatGPT, Claude, Gemini, Copilot. None of them talk to each other.

Every new chat is a fresh start. Every new tool is amnesia.

Context is more than memory

Here's what most people miss.

Context isn't just "remember what I told you last week". That's table stakes. That's a feature checkbox.

Real context is situational.

How you communicate with your team is different from how you write for clients. The version of you debugging code at midnight isn't the same as the one preparing a board deck. Your side project has different constraints than your day job.

One flat memory doesn't capture this. You're not one person. You're multiple contexts depending on what you're doing and who you're doing it for.

The AI that understands this wins. Not because it remembers more. Because it knows which version of you it's talking to.

Where this actually goes

Models are converging. What Claude does today, GPT does next quarter. What Cowork does now, everyone else will copy.

Context won't converge. It compounds. And it's personal.

But here's what nobody's building toward yet: context that's truly yours. Portable. Private. Structured around how you actually work.

Different versions of yourself for different situations. Professional. Personal. Creative. Each one trained on what matters for that context. Each one yours to take wherever you want.

And it goes further than just your own personas. Imagine importing expertise the way you import a library. A code reviewer with someone else's philosophy. A writing coach with a specific voice. A domain expert's judgment, packaged and portable.

Not chatbots. Not characters. Functional context you can plug into any model.

That layer doesn't exist for most people. Yet.
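One way to picture that layer: a persona as a portable file, the same idea as a repo's CLAUDE.md but scoped to a person and a situation. Everything below is hypothetical, a sketch of what such a format could look like, not an existing standard:

```markdown
# persona: client-writing
# A situational context file you own and carry between models.

## Voice
- Plain language, short sentences, no jargon.
- Lead with the decision, then the reasoning.

## Constraints
- Never commit to deadlines without a caveat.
- Sign off with a first name, not a full signature block.

## Scope
- Applies to: client email, proposals.
- Does not apply to: internal chat, code review.
```

Swap in a different file for debugging-at-midnight mode or board-deck mode, and the model gets the right version of you, without a provider-side memory owning the relationship.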

The question nobody's asking

When you evaluate AI tools, you ask: How good is the model? How fast is it? What can it do?

You should also ask: Who owns what it learns about me? Can I take it with me? What happens when something better comes along?

Right now, the answer is: they own it, you can't, and you start over.

That's not a feature. That's a trap dressed up as personalization.


Context is the moat. But only if you control it.

I'm building in this space. Not because the models aren't good enough. Because the infrastructure around them is broken.

Thanks for reading.

Hi, I'm Mischa. I've been shipping products and building ventures for over a decade. First exit at 25, second at 30. Now Partner & CPO at Ryde Ventures, an AI venture studio in Amsterdam. Currently shipping Stagent and Onoma. Based in Hong Kong. I write about what I learn along the way.

Keep reading: Why I decided to sell my web agency.
