
Bug Report: Claude Code CLI crashes when reading large image files

Summary

The Read tool in Claude Code CLI fails when reading images larger than ~25K base64 tokens (~150KB file size). Small images work fine. The root cause is in the DP1 image compression pipeline — when a large image goes through compression, the resulting API content block ends up with source: {type: "base64"} but missing both data and media_type fields. This causes an unrecoverable API 400 error.

Environment

  • Claude Code CLI: @anthropic-ai/claude-code@2.1.70
  • OS: Windows 10 Pro for Workstations 10.0.19045
  • Node.js: v24.13.1
  • sharp: 0.34.5 (manually installed, works correctly)

Root Cause Analysis

The Size Threshold

Images are read by Nv8(), which calls q01() to create the result. After q01() returns, a size check runs:

if (Math.ceil($.file.base64.length * 0.125) > q)  // q = Tv8() = 25000 tokens
  • Small images (< ~25K base64 tokens, i.e. < ~150KB file): skip DP1, return directly from q01() → works
  • Large images (> ~25K base64 tokens, i.e. > ~150KB file): enter the DP1 compression path → crashes
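The ~150KB file-size figure and the 25K-token threshold are consistent: base64 inflates bytes by roughly 4/3, and the check above counts one token per 8 base64 characters. A minimal sketch (illustrative names, not the actual CLI code):

```javascript
const TOKEN_LIMIT = 25000; // the q = Tv8() threshold from the report

// Estimate how many base64 tokens a file of a given size produces,
// mirroring the Math.ceil(length * 0.125) check shown above.
function estimateBase64Tokens(fileSizeBytes) {
  const base64Length = Math.ceil(fileSizeBytes / 3) * 4; // base64 inflates ~4/3
  return Math.ceil(base64Length * 0.125);                // ~8 chars per token
}

console.log(estimateBase64Tokens(150 * 1024)); // 25600, just over the limit
console.log(estimateBase64Tokens(150 * 1024) > TOKEN_LIMIT); // true
```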

What happens in the DP1 path

When the image exceeds the token limit, DP1() is called to compress it. DP1 uses sharp to resize/recompress and returns {base64, mediaType, originalSize}. The code then returns:

return {type: "image", file: {base64: H.base64, type: H.mediaType, originalSize: z}}

In isolation, this looks correct. H.mediaType is "image/jpeg" (from vp6() inside DP1).

Where it actually breaks

The tool result mapper converts this to an API content block:

case "image": return {
  tool_use_id: q,
  type: "tool_result",
  content: [{
    type: "image",
    source: {type: "base64", data: A.file.base64, media_type: A.file.type}
  }]
};

However, between the mapper output and the actual API request, the image content block gets stripped. The API receives:

{"type": "image", "source": {"type": "base64"}}

Both data and media_type are absent. JSON.stringify silently drops undefined properties, so if both become undefined at any point, the serialized JSON omits them entirely.
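This serialization behavior is easy to demonstrate in isolation:

```javascript
// JSON.stringify drops undefined-valued properties entirely, which is how
// `data` and `media_type` can vanish from the serialized content block.
const block = {
  type: "image",
  source: { type: "base64", data: undefined, media_type: undefined },
};

console.log(JSON.stringify(block)); // {"type":"image","source":{"type":"base64"}}
```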

Evidence from transcript analysis

The session transcript (.jsonl output) captured the exact message content sent to the API:

{
  "type": "user",
  "content": [{
    "tool_use_id": "toolu_01NmuSjPErhBfbtoV8RBrJip",
    "type": "tool_result",
    "content": [{"type": "image", "source": {"type": "base64"}}]
  }]
}

This confirms data and media_type are both missing at the API call level.

The actual root cause (suspected)

The image data stripping likely occurs in the message normalization/storage layer between the tool result mapper and the API call. When conversation messages are stored in memory (the internal D array or conversation state), large base64 image data may be:

  1. Stripped for memory efficiency
  2. Moved to a separate image attachment store (referenced by imagePasteIds)
  3. Lost during structuredClone or message serialization

The reconstruction step that should restore the image data before the API call fails for tool_result image blocks, possibly because it only handles top-level image blocks (from user pastes) but not images nested inside tool_result.content[].
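If that hypothesis is right, the fix would be to recurse into tool_result content when restoring stripped image data. A hypothetical sketch (all names, including the imagePasteId lookup, are illustrative; the real CLI's storage layer is minified):

```javascript
// Hypothetical reconstruction pass: restore stripped base64 data not just in
// top-level image blocks but also inside tool_result.content[].
function restoreImageData(contentBlocks, imageStore) {
  for (const block of contentBlocks) {
    if (block.type === "image" && block.source?.type === "base64" && !block.source.data) {
      const stored = imageStore.get(block.imagePasteId); // hypothetical lookup
      if (stored) {
        block.source.data = stored.base64;
        block.source.media_type = stored.mediaType;
      }
    } else if (block.type === "tool_result" && Array.isArray(block.content)) {
      restoreImageData(block.content, imageStore); // the recursion the bug suggests is missing
    }
  }
  return contentBlocks;
}
```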

Test Results

| File               | Size  | Base64 tokens | DP1 path | Result  |
|--------------------|-------|---------------|----------|---------|
| photo.jpg          | 25KB  | ~4,250        | No       | Works   |
| test_tiny.png      | 98B   | ~16           | No       | Works   |
| test_medium.png    | 751KB | ~125,000      | Yes      | Crashes |
| screenshot_gui.png | 387KB | ~64,500       | Yes      | Crashes |

Severity: Critical

  • Session-killing: corrupted message poisons the entire conversation context
  • No recovery: every subsequent API call fails with 400
  • Affects subagents too, but in isolation: the Agent tool crashes, while the main session survives
  • Size-dependent: only images > ~150KB trigger the bug

Patches Applied

Patch 1: Nv8 try/catch wrapper (PATCHED_NV8_SAFE_IMAGE_READ)

Wraps the entire Nv8 function in try/catch. On failure, returns a text error message instead of corrupted binary. Also adds ||"image/png" fallback on H.mediaType in the DP1 path.
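A sketch of what the wrapper plausibly looks like; Nv8's real signature is unknown from the minified source, so the reader function is passed in here purely for illustration:

```javascript
// Hypothetical wrapper (Patch 1 sketch): on any failure, return a text error
// block instead of a corrupted image block that would poison the conversation.
function safeReadImage(readImage, filePath) {
  try {
    return readImage(filePath); // original Nv8 behavior
  } catch (err) {
    return { type: "text", text: `Error reading image ${filePath}: ${err.message}` };
  }
}
```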

Patch 2: Image mapper media_type fallback (PATCHED_IMAGE_MEDIA_TYPE)

Adds an ||"image/png" fallback to media_type in the tool result mapper, preventing an undefined value from being silently dropped as an absent field during serialization.
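Applied to the mapper shown earlier, the patched shape plausibly looks like this (a sketch following the report's variable names, not the actual minified code):

```javascript
// Sketch of the patched mapper (Patch 2): the || "image/png" fallback ensures
// media_type is always a string, never an undefined that JSON.stringify drops.
function mapImageResult(q, A) {
  return {
    tool_use_id: q,
    type: "tool_result",
    content: [{
      type: "image",
      source: {
        type: "base64",
        data: A.file.base64,
        media_type: A.file.type || "image/png", // fallback added by the patch
      },
    }],
  };
}
```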

Effectiveness

  • Patches only work after restarting Claude Code (cli.js is loaded once at startup)
  • Patches fix the media_type issue but may NOT fix the missing data issue
  • The underlying cause (image data being stripped from stored messages) needs to be fixed upstream

Patcher Tool

node tools/patch_claude_code.js          # Apply all patches
node tools/patch_claude_code.js --check  # Check status
node tools/patch_claude_code.js --revert # Revert to backup

After updating Claude Code (npm update -g @anthropic-ai/claude-code), re-run the patcher.

Workarounds

  1. Use a subagent for ALL image reading: it crashes in isolation, and the main session survives
  2. Resize large images before reading: keep files under ~150KB
  3. Inspect images only via the Bash tool (e.g. file screenshot.png) for metadata; avoid reading actual content

Files Referenced

  • Patcher: tools/patch_claude_code.js
  • CLI entry: node_modules/@anthropic-ai/claude-code/cli.js (minified, ~13K lines)
  • Key functions: Nv8 (image reader), DP1 (compressor), q01 (result builder), ig (sharp wrapper), mapToolResultToToolResultBlockParam (API mapper)

Report Info

  • Date: 2026-03-06
  • Version: Claude Code 2.1.70
  • Reproducible: 100% on Windows with any image > ~150KB