My Claude Code settings.json, hooks, and config explained — Building Products

 [Mischa Sigtermans](https://mischa.sigtermans.me)

April 13, 2026 · AI · Open Source

Every setting in my Claude Code config, explained
=================================================

My full settings.json, every environment variable, every hook, every plugin. The setup I use across six projects and what each line actually does.

I get asked about my Claude Code setup more than anything else I've written about. People see the [status line post](/thought/how-i-set-up-my-claude-code-status-line), the [rtk token savings post](/thought/how-i-saved-billions-of-tokens-with-a-claude-code-hook), or the [Ralph loop](/thought/my-simplified-ralph-loop-setup-for-claude-code), and the follow-up is always the same: what does the rest of your config look like?

Here's the whole thing. Nothing fancy. It does the job. Every setting, every environment variable, every hook, every plugin. This is the `~/.claude/settings.json` I use across six projects.

One thing I want to say upfront: I experimented with tweaking thinking mode, context window settings, and auto-compaction thresholds. I tried `CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING`, played with `MAX_THINKING_TOKENS`, fiddled with `CLAUDE_AUTOCOMPACT_PCT_OVERRIDE`. None of it made a meaningful difference compared to the defaults. The Anthropic team, and specifically people like [Boris Cherny](https://x.com/bcherny) who work on the Claude Code internals, are tuning these things constantly. I trust their defaults more than my guesses. The settings that matter are the ones that change what the tool does, not how the model thinks.

The full settings.json
----------------------

```
{
    "env": {
        "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
        "CLAUDE_CODE_NO_FLICKER": "1",
        "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
        "CLAUDE_CODE_SUBAGENT_MODEL": "sonnet"
    },
    "hooks": {
        "PreToolUse": [
            {
                "matcher": "Bash",
                "hooks": [
                    {
                        "type": "command",
                        "command": "~/.claude/hooks/rtk-rewrite.sh"
                    }
                ]
            }
        ]
    },
    "statusLine": {
        "type": "command",
        "command": "bash ~/.claude/statusline-wrapper.sh",
        "padding": 0
    },
    "enabledPlugins": {
        "taylor-says@rydeventures-claude-plugins": true,
        "raymond-says@rydeventures-claude-plugins": true,
        "ralph@rydeventures-claude-plugins": true,
        "steve-says@rydeventures-claude-plugins": true,
        "david-says@rydeventures-claude-plugins": true,
        "codex@openai-codex": true,
        "session-bridge@session-bridge": true,
        "swift-lsp@claude-plugins-official": true,
        "rust-analyzer-lsp@claude-plugins-official": true,
        "telegram@claude-plugins-official": true,
        "frontend-design@claude-plugins-official": true
    },
    "effortLevel": "high",
    "includeCoAuthoredBy": false,
    "autoUpdatesChannel": "latest",
    "autoDreamEnabled": true
}

```

I'll walk through each section.

The environment variables
-------------------------

These are the lines that matter most and get talked about least. Claude Code reads environment variables from `settings.json` under the `env` key, and some of them change how the tool behaves in ways that aren't obvious.

**`CLAUDE_CODE_SUBAGENT_MODEL`: `"sonnet"`**

This is the single biggest productivity setting I run. By default, subagents use the same model as the main conversation. That means if I'm on Opus, every background agent, every research task, every parallel search also burns Opus quota. Setting this to `sonnet` means subagents run on Sonnet 4.6 while my main conversation stays on [Opus 4.6](/thought/opus-4-6-is-the-best-model-ive-ever-used). Opus thinks, Sonnet executes. The split is the reason my Ralph loops work at all without draining my quota by noon.

**`CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS`: `"1"`**

Enables the agent teams feature. I can spin up named teams of agents that work in parallel on different parts of a task. Still experimental, but I use it daily. The coordination patterns are rougher than solo agents but the throughput is worth it.

**`CLAUDE_CODE_NO_FLICKER`: `"1"`**

Stops the terminal flickering during long responses. Without this, the output area redraws on every token, which on a large conversation makes the screen strobe. Small setting, big quality-of-life improvement.

**`CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC`: `"1"`**

This one is worth explaining carefully. It disables non-essential network traffic like error reporting and autocomplete telemetry. It does not disable the experiment gate system. That distinction matters.

There's a separate variable, `DISABLE_TELEMETRY`, that disables everything including the experiment gates. I don't use it. Here's why: the experiment gate system is how Anthropic rolls out prompt cache optimisations. When you disable the gates entirely, the client falls back to default values, and right now the default cache TTL is lower than what opted-in users get. Anthropic is rolling out longer cache times selectively, and the experiment gates are the mechanism. Disabling them doesn't protect your privacy any more than `DISABLE_NONESSENTIAL_TRAFFIC` already does, but it does cost you cache performance.

The short version: keep `DISABLE_TELEMETRY` off. Use `DISABLE_NONESSENTIAL_TRAFFIC` to stop the things you actually want stopped.

Variables I don't set (but you might want)
------------------------------------------

Claude Code has a lot of environment variables. Most of them you'll never touch. These are the ones worth knowing about.

**`CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY`: `"1"`** stops the 'How is Claude doing?' survey from popping up mid-session. I don't set it because `DISABLE_NONESSENTIAL_TRAFFIC` already covers it. But if you want to keep error reporting and auto-updates while killing just the survey, this is the scalpel.

**`BASH_DEFAULT_TIMEOUT_MS`** controls how long a bash command can run before Claude Code kills it. Default is 120 seconds. If you run long test suites or database migrations through Claude Code, bump this up.

**`CLAUDE_CODE_MAX_OUTPUT_TOKENS`** limits how many tokens Claude Code can generate per response. Useful if you're on API billing and want to cap cost per turn. The trade-off is that complex answers get cut short.

**`CLAUDE_AUTOCOMPACT_PCT_OVERRIDE`** sets the context window percentage that triggers auto-compaction. Default is around 95%. Lower it if you want Claude Code to compact earlier and keep more headroom. I leave it at default because my [status line](/thought/how-i-set-up-my-claude-code-status-line) warns me at 80%.

**`CLAUDE_CODE_DISABLE_AUTO_MEMORY`** turns off the automatic memory system. Set it to `"1"` if you don't want Claude Code building persistent context about you across sessions.

**`CLAUDE_CODE_SHELL`** overrides shell detection. Useful if Claude Code picks up the wrong shell or you want to force a specific one.

**`CLAUDE_CODE_MAX_TOOL_USE_CONCURRENCY`** controls how many read-only tools can run in parallel. Default is 10. Higher means faster for file-heavy operations but more resource usage.
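
If you do set any of these, they go in the same `env` block as everything else. A sketch with illustrative values, not recommendations:

```
{
    "env": {
        "BASH_DEFAULT_TIMEOUT_MS": "300000",
        "CLAUDE_AUTOCOMPACT_PCT_OVERRIDE": "90",
        "CLAUDE_CODE_DISABLE_AUTO_MEMORY": "1",
        "CLAUDE_CODE_MAX_TOOL_USE_CONCURRENCY": "15"
    }
}
```

Note that every value is a string, even the numeric ones, matching how the `env` block in my full config is written.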

The hook
--------

I run one hook. It rewrites every Bash command through [rtk](https://github.com/rtk-ai/rtk) before Claude Code sees the output. I [wrote about this in detail](/thought/how-i-saved-billions-of-tokens-with-a-claude-code-hook), but the short version is: rtk strips token waste from shell output. File listings, git status, test results, all of it gets compressed before it enters the context window. 5 billion tokens saved and counting.

The hook sits on `PreToolUse` with a `Bash` matcher. Every time Claude Code is about to run a shell command, the hook rewrites it through `rtk rewrite`, which wraps the command in rtk's output filter. The rewrite happens before execution, so Claude Code never sees the uncompressed output.

```
"hooks": {
    "PreToolUse": [
        {
            "matcher": "Bash",
            "hooks": [
                {
                    "type": "command",
                    "command": "~/.claude/hooks/rtk-rewrite.sh"
                }
            ]
        }
    ]
}

```

If you want to add your own hooks, the pattern is the same. `PreToolUse` fires before a tool runs, `PostToolUse` fires after. The matcher filters which tool triggers it. The hook script receives the tool input as JSON on stdin and can modify it by returning JSON on stdout.
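
To make that contract concrete, here's a minimal PreToolUse hook sketched in Python (my actual hook is a shell script). The `wrap --` prefix is a made-up stand-in for what rtk does in my real hook, and you should double-check the exact input/output schema against the hooks docs for your Claude Code version:

```python
import json

def rewrite(event: dict) -> dict:
    """Given a PreToolUse event for the Bash tool, return modified tool input."""
    tool_input = dict(event.get("tool_input", {}))
    command = tool_input.get("command", "")
    # Hypothetical rewrite: prefix a noisy command with a wrapper that
    # filters its output (rtk plays this role in my real hook).
    if command.strip().startswith("git status"):
        tool_input["command"] = f"wrap -- {command}"
    return {"tool_input": tool_input}

# The hook script itself just wires this to the stdin/stdout contract:
#   json.dump(rewrite(json.load(sys.stdin)), sys.stdout)
```

Commands that don't match the prefix pass through untouched, which is the property you want from any rewrite hook: it should be invisible except where it intends not to be.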

The plugins
-----------

I run eleven plugins. Most of them are from the [Ryde Ventures plugin marketplace](https://github.com/rydeventures/claude-plugins).

**The review agents:**

- **[taylor-says](https://github.com/mischasigtermans/taylor-says)** catches over-engineering in Laravel code. I [wrote about this one](/thought/taylor-otwell-is-immortal-and-costs-200-month).
- **[raymond-says](https://github.com/mischasigtermans/raymond-says)** does the same for Python.
- **[david-says](https://github.com/mischasigtermans/david-says)** does it for Rust.
- **[steve-says](https://github.com/mischasigtermans/steve-says)** channels Steve Jobs for product design decisions.

These run as subagents. When I push code or ask for a review, the relevant agent reads the diff and tells me what Taylor Otwell or DHH or Steve Jobs would say about it. They run on Sonnet (because of the subagent model setting), which means they don't eat into my Opus quota.

**The workflow plugins:**

- **[ralph](/thought/my-simplified-ralph-loop-setup-for-claude-code)** manages autonomous coding loops
- **[codex](https://github.com/openai/codex-plugin-cc)** connects to OpenAI's Codex for a second opinion when I'm stuck
- **[session-bridge](https://github.com/PatilShreyas/claude-code-session-bridge)** lets me connect two Claude Code sessions and pass context between them
- **telegram** bridges Claude Code to Telegram channels (official Anthropic plugin)
- **frontend-design** generates production-grade UI components (official Anthropic plugin)

**Project-specific LSP plugins:**

- **swift-lsp** connects Claude Code to the Swift language server for type info, symbol resolution, and diagnostics (official Anthropic plugin)
- **rust-analyzer-lsp** does the same for Rust via rust-analyzer (official Anthropic plugin)

These are enabled globally but only activate when I'm in a Swift or Rust project. They give Claude Code real code intelligence instead of guessing at types and signatures. The difference is noticeable on Rust especially, where the borrow checker catches things that text-level analysis misses.

The MCP servers
---------------

Plugins handle what Claude Code can do. MCP servers handle what it can see. I run a mix of user-scope and project-scope servers.

**User-scope (always on):**

- **[Linear](https://linear.app/docs/mcp)** for issue tracking. Claude Code reads tickets, creates issues, updates status. Replaces the context-switch of opening Linear in a browser.
- **[Sentry](https://docs.sentry.io/ai/mcp/)** for error monitoring. When something breaks in production, Claude Code can pull the stack trace and start debugging without me copy-pasting from a dashboard.
- **[Ahrefs](https://docs.ahrefs.com/mcp/docs/claude-code)** for SEO research. Keyword volumes, site metrics, backlink data. Used it heavily while optimizing this blog.
- **Claude in Chrome** for browser automation. Built-in MCP that ships with Claude Code. Controls a Chrome tab, reads pages, reads browser logs, fills forms, takes screenshots.

**Project-scope (per repo):**

- **[Solo](https://soloterm.com)** by Aaron Francis. A terminal for managing dev processes. Claude Code starts, stops, and reads output from dev servers through it, and runs project commands. Highly recommended.
- **[PhpStorm](https://www.jetbrains.com/phpstorm/)** for IDE integration. Claude Code reads diagnostics, runs inspections, navigates symbols. Bridges the gap between the terminal and the IDE.
- **[Laravel Boost](https://laravel.com/ai/boost)** for Laravel-specific context. Database schema, routes, application info. Claude Code understands the app structure without reading every file.

The project-scope servers live in each repo's `.mcp.json`. The user-scope ones live in `~/.claude/settings.json` under `mcpServers`. The split matters because a Python project doesn't need PhpStorm, and a static site doesn't need Laravel Boost.
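
For reference, a project-scope `.mcp.json` has the same shape as the user-scope `mcpServers` block. A minimal sketch with a made-up server; the real command and args come from each server's own docs:

```
{
    "mcpServers": {
        "my-server": {
            "command": "npx",
            "args": ["-y", "@example/mcp-server"]
        }
    }
}
```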

The status line
---------------

```
"statusLine": {
    "type": "command",
    "command": "bash ~/.claude/statusline-wrapper.sh",
    "padding": 0
}

```

Four pieces of information: repo name, branch, uncommitted changes, context window percentage. Turns red at 80%. I [wrote about the full setup](/thought/how-i-set-up-my-claude-code-status-line) and it hasn't changed since.

The small settings
------------------

**`effortLevel`: `"high"`** makes Claude Code try harder on every response. More tokens per response, more thorough analysis, fewer shortcuts. I switch between this and Sonnet depending on what I'm doing. Planning, writing, architectural decisions: Opus on high effort. Execution work, refactoring, test generation: either medium effort or Sonnet entirely. The config shows `high` because that's where I start my day. The model picker and `/model` command handle the rest.

**`includeCoAuthoredBy`: `false`** stops adding 'Co-authored-by: Claude' to commit messages. My commits are my commits.

**`autoUpdatesChannel`: `"latest"`** keeps me on the latest Claude Code version. I want bugs fixed and features shipped, not stability.

**`autoDreamEnabled`: `true`**. This one isn't in the official docs yet. I found it in the Claude Code source and enabled it manually. It's not what some posts describe as 'pre-processing context' or 'preparing for likely next actions'. What it actually does is background memory consolidation.

After your conversation turn ends, it checks two gates: has it been 2+ hours since the last consolidation, and does the session still have budget? If both pass, it spawns a hidden background task that reads your `~/.claude/projects/.../memory/` files and `MEMORY.md` index, merges new learnings into existing memory files, resolves contradictions, converts relative dates to absolute, prunes stale memories, and rewrites the index. It's a janitor for the auto-memory system.

It does burn tokens when it fires: a full agent run with transcript scanning. But it only runs once every 2 to 4 hours per session, so it's infrequent. The upside is cleaner, deduplicated memories that cost fewer tokens on every future turn, because `MEMORY.md` is loaded into every conversation. Worth keeping on if you use auto-memory.

What I'd change
---------------

This config has been stable for months, which is either a sign that it works or a sign that I've stopped experimenting. Probably both.

The one thing I keep coming back to is the plugin architecture. Eleven plugins is a lot. Each one adds weight to the system prompt and increases the surface area for conflicts. I'd like to trim it, but every time I disable one I need it within the week. The review agents in particular have earned their place: I catch patterns now that I used to ship.

The environment variables are the part most people skip and the part that makes the biggest difference. The subagent model split alone changed how I use Claude Code more than any plugin. If you take one thing from this post, it's that line.

Every hook you add is a decision to stop thinking about something. Every environment variable is a decision about how the tool behaves when you're not looking. Get both right and the tool disappears into the work.

 *thanks for reading*

Hi, I'm [Mischa](https://mischa.sigtermans.me/about). I've been *shipping products* and *building ventures* for over a decade. First exit at 25, second at 30. Now Partner & CPO at [Ryde Ventures](https://ryde.ventures), an AI venture studio in Amsterdam. Currently shipping [Stagent](https://stagent.com) and [Onoma](https://askonoma.com). Based in Hong Kong. I [write](https://mischa.sigtermans.me/thoughts) about what I learn along the way. [More about me](https://mischa.sigtermans.me/about).


