The people most skeptical about AI haven't shipped with it

AI skepticism correlates with consumption vs creation. People who read about AI see failures. People who build with it see both, and that changes everything.

Daniel Stenberg, the creator of curl, is considering shutting down the project's bug bounty program. Not because of budget. Not because of lack of interest. Because AI-generated submissions are drowning his team.

About 20% of all bug reports in 2025 have been what Stenberg calls 'AI slop': fabricated vulnerabilities, hallucinated function names, made-up memory addresses. His team of seven reviewers spends 30 minutes to three hours evaluating each submission. 'We are effectively being DDoSed', he wrote.

This is legitimate criticism. The problem is real.

And it leads to exactly the wrong conclusion.

The pattern: symptoms vs cause

Skeptics aren't wrong about what they're seeing. The spam is real. The deepfakes are real. The AI slop flooding YouTube and Spotify is real. Nearly 1 in 10 of the fastest-growing YouTube channels in July 2025 consisted entirely of AI-generated videos. Spotify removed 75 million spam tracks last year, against a real catalogue of 100 million.

But skeptics misdiagnose the cause.

These are incentive problems wearing AI clothes. The same bad actors existed before. They're just faster now. Every communication technology gets weaponized by people chasing easy money. Email gave us spam. Social media gave us engagement farming. AI gives us slop at scale.

The technology isn't the problem. The incentives are.

When someone points at AI-generated garbage and says 'this is why AI is useless', they're confusing the signal with the noise. The curl situation isn't evidence that AI can't find real bugs. It's evidence that bug bounties attract low-effort submissions when there's money on the table. That was true before AI. It's just more visible now.

The access problem: you can't test-drive AI

This is what I think is actually happening.

When Tesla released the Model S, early adopters could sit in the car, drive it around the block, feel the instant torque. The experience sold itself. You didn't need to believe in electric vehicles. You just needed 10 minutes behind the wheel.

AI doesn't work like that.

You can't casually test-drive AI. Opening ChatGPT and asking it to write a poem doesn't show you anything useful. The value only reveals itself through sustained building. Weeks of integrating it into your actual workflow. Learning its failure modes. Discovering where it genuinely helps.

This creates a knowledge gap that looks like a belief gap.

The people who dismiss AI as hype are often the same people who tried it once, got a hallucinated answer, and walked away. They're not wrong that it hallucinated. They're wrong about what that means.

There's a METR study that captures this perfectly. They ran a randomized controlled trial with experienced developers working on their own projects. When AI tools were allowed, developers took 19% longer on average. But those same developers estimated they were 20% faster.

Even people using AI every day misjudge its impact on their productivity. The gap between perception and reality runs both ways.

What changed my mind

I was skeptical too.

Not about the technology existing. I could see it worked. But about whether it mattered for the kind of building I do. Another tool in a long line of overhyped tools. Blockchain for code, basically.

What changed wasn't the hype. It was shipping.

When you build with AI daily, you learn its actual boundaries. Where it fails (often). Where it works (enough to matter). You stop thinking about AI as a category and start thinking about specific tools for specific problems.

Some examples from actual work:

  • Code review: Useful for catching obvious issues. Useless for architectural decisions.
  • First drafts: Good for getting something on the page. Terrible if you ship it unchanged.
  • Debugging: Surprisingly helpful for rubber-ducking. Often wrong about the actual fix.
  • Documentation: Genuinely useful. This is where I see the clearest wins.

None of this sounds revolutionary. That's the point. The revolutionary framing is what creates skeptics. When you promise magic and deliver 'genuinely useful for documentation', people feel cheated.

GitHub's research found that developers follow a predictable pattern. They start as skeptics. They become explorers. Then collaborators. Then strategists. None of the strategists started out that way, and none of them were converted by arguments. They were converted by experience.

You can't argue someone out of skepticism. You build them out of it.

The marketing problem (not the technology problem)

I'll agree with the skeptics on this: the marketing around AI is insufferable.

Sam Altman spent years talking about AGI like it was a solved problem, raised billions on that narrative, and is now retreating to 'AGI is not a super useful term'. The gap between promised 'PhD-level expertise' and the reality of GPT-5 (which reviewers called 'incremental, not revolutionary') is exactly why people stop believing.

MIT Technology Review put it well: Altman's hype 'hinged less on today's capabilities than on a philosophical tomorrow, an outlook that quite handily doubles as a case for more capital and friendlier regulation'.

Gary Marcus is right that faith has waned because industry leaders are 'constantly overpromising'. The emperor has always been wearing some clothes, just not the ones described in the press releases.

But the marketing being bullshit doesn't make the technology useless.

I'm not defending OpenAI's world-saving narrative. I'm saying you can reject the hype cycle and still find the tools useful. These are separate positions that get conflated in every AI conversation.

The skeptic position ('the marketing is overblown, therefore AI doesn't work') is as wrong as the booster position ('AI works, therefore the marketing is accurate').

Both are lazy. Both mistake the map for the territory.

The internet in 1995

In 1995, Newsweek published an article titled 'The Internet? Bah!'. The author, Clifford Stoll, argued that the internet was overhyped, that it would never replace newspapers, that online shopping was a fantasy.

He later admitted: 'Wrong? Yep. At the time, I was trying to speak against the tide of futuristic commentary on how The Internet Will Solve Our Problems'.

Sound familiar?

The early internet was also 'just spam' if you weren't building with it. Usenet was flooded with garbage. Email was drowning in unsolicited messages. Serious people wrote serious articles about how this whole network thing was a fad.

They weren't wrong about the spam. They were wrong about what the spam meant.

The question was never 'does the internet have problems?'. The question was 'do the problems outweigh the utility for people actually building things?'. For consumers casually browsing, maybe. For builders, the answer was obviously no.

Same pattern. Different decade.

Not selling anything

I'm not defending the hype cycle. I'm not telling you AI will change everything. I'm definitely not telling you to trust OpenAI's safety claims or Anthropic's positioning or any other corporate narrative dressed up as philosophy.

I'm telling you what I observed: my skepticism dropped when I started shipping.

The gap between AI consumers and AI builders isn't a belief gap. It's a knowledge gap. Consumers see the failures that make headlines. Builders see both the failures and the wins, and that changes the calculation.

If you're skeptical, I'm not going to argue with you. Arguments don't work anyway. But if you're skeptical and you haven't built anything substantial with AI tools, consider that your sample is biased toward consumption.

The critics who read about AI are right about what they see. The builders who ship with AI are right about what they see.

They're just looking at different things.

thanks for reading

Hi, I'm Mischa. I've been shipping products and building ventures for over a decade. First exit at 25, second at 30. Now Partner & CPO at Ryde Ventures, an AI venture studio in Amsterdam. Currently shipping Stagent and Onoma. Based in Hong Kong. I write about what I learn along the way.
