Debugging 10x Faster with Signal LLM Context Export
How to use Signal to generate structured debugging context for Claude, GPT-4o, and other AI agents - with automatic redaction and token budget awareness.
The Problem with Pasting Bug Reports into ChatGPT
If you have ever pasted a bug report into an AI model, you have probably run into two problems:
- Too much noise - Raw logs include third-party errors, internal framework noise, and megabytes of HAR data the model does not need.
- Wrong format - AI models work best with structured context, not copy-pasted DevTools output.
Signal's LLM Context export solves both.
How It Works
After recording a session with a bug, open the viewer and click LLM Context in the header. You will see a two-panel modal:
- Left panel: A live preview of the context that will be passed to the model
- Right panel: A config sidebar where you can toggle each section on or off
The sections you can include:
- Environment (browser, OS, viewport, timezone)
- User events (steps to reproduce, written in imperative English)
- Reported issues (what the user flagged with the Report Issue tool)
- Console errors (first-party only, third-party noise filtered out)
- Failed network requests (filterable by domain)
- Request and response bodies (redacted through the redaction engine)
- Framework component state (React fiber, Vue instance, Svelte stores)
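Conceptually, the export is a structured document assembled from whichever sections you enable. The sketch below shows one plausible shape for that document; the interface and field names are illustrative assumptions, not Signal's actual schema:

```typescript
// Hypothetical shape of the exported context. Each optional field
// corresponds to a toggle in the config sidebar; Signal's real
// schema may differ.
interface LLMContext {
  environment?: { browser: string; os: string; viewport: string; timezone: string };
  userEvents?: string[];        // imperative steps to reproduce
  reportedIssues?: string[];    // flagged via the Report Issue tool
  consoleErrors?: string[];     // first-party errors only
  failedRequests?: { url: string; status: number }[];
}

// Render only the enabled sections into a prompt-friendly string.
function renderContext(ctx: LLMContext): string {
  const parts: string[] = [];
  if (ctx.environment) {
    const e = ctx.environment;
    parts.push(`## Environment\n${e.browser} on ${e.os}, ${e.viewport}, ${e.timezone}`);
  }
  if (ctx.userEvents?.length) {
    parts.push("## Steps to reproduce\n" + ctx.userEvents.map((s, i) => `${i + 1}. ${s}`).join("\n"));
  }
  if (ctx.reportedIssues?.length) {
    parts.push("## Reported issues\n" + ctx.reportedIssues.join("\n"));
  }
  if (ctx.consoleErrors?.length) {
    parts.push("## Console errors\n" + ctx.consoleErrors.join("\n"));
  }
  if (ctx.failedRequests?.length) {
    parts.push("## Failed requests\n" + ctx.failedRequests.map(r => `${r.status} ${r.url}`).join("\n"));
  }
  return parts.join("\n\n");
}
```

The key idea is that omitted sections disappear entirely rather than appearing as empty headings, which keeps the prompt short.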
Token Budget Awareness
The modal shows a circular arc indicator displaying how much of your chosen model's context window is used. The arc turns yellow at 70% and red at 90%.
Supported models and their limits:
- Claude Sonnet/Opus: 200K tokens
- GPT-4o: 128K tokens
- Gemini 1.5: 1M tokens
- Llama 3.1: 128K tokens
If you are over budget, uncheck the large sections first. Response bodies are usually the culprit.
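The arithmetic behind the arc is simple to reason about. A common heuristic is roughly 4 characters per token; the sketch below uses that approximation together with the 70%/90% thresholds from the modal. The function names and the estimation heuristic are assumptions for illustration, not Signal's implementation (real tokenizers vary per model):

```typescript
// Context window limits from the list above, in tokens.
const CONTEXT_LIMITS: Record<string, number> = {
  "claude-sonnet": 200_000,
  "gpt-4o": 128_000,
  "gemini-1.5": 1_000_000,
  "llama-3.1": 128_000,
};

// Rough estimate: ~4 characters per token. Real tokenizers
// (tiktoken, SentencePiece, etc.) will differ somewhat.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Mirror the modal's arc colors: yellow at 70% used, red at 90%.
function budgetStatus(text: string, model: string): { usedPct: number; color: "green" | "yellow" | "red" } {
  const usedPct = (estimateTokens(text) / CONTEXT_LIMITS[model]) * 100;
  const color = usedPct >= 90 ? "red" : usedPct >= 70 ? "yellow" : "green";
  return { usedPct, color };
}
```

This also explains why response bodies dominate the budget: a 400 KB JSON payload alone is on the order of 100K tokens, most of a GPT-4o window.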
Automatic Redaction
Before the LLM sees any data, Signal's redaction engine runs over the entire context. It automatically strips:
- Authorization headers and Bearer tokens
- JWT tokens in cookies, headers, or query parameters
- AWS access key IDs
- Email addresses
- Credit card numbers
- Any fields matching common sensitive key names (password, token, secret, api_key, cvv, ssn)
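Under the hood, built-in redaction of this kind amounts to pattern substitution over the serialized context. The patterns below are illustrative approximations of the categories listed above, not Signal's actual rules:

```typescript
// Illustrative redaction patterns; Signal's real engine is not
// published here, and production patterns need more care around
// false positives.
const PATTERNS: [RegExp, string][] = [
  [/Bearer\s+[A-Za-z0-9\-._~+/]+=*/g, "Bearer [REDACTED]"],           // Authorization bearer tokens
  [/eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/g, "[REDACTED_JWT]"], // "eyJ" is base64url for '{"', how JWTs begin
  [/AKIA[0-9A-Z]{16}/g, "[REDACTED_AWS_KEY]"],                        // AWS access key IDs
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]"],                   // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[REDACTED_CARD]"],                     // credit-card-like digit runs
];

// Common sensitive key names, blanked wherever they appear in JSON.
const SENSITIVE_KEYS = /"(password|token|secret|api_key|cvv|ssn)"\s*:\s*"[^"]*"/gi;

function redact(text: string): string {
  let out = text;
  for (const [re, repl] of PATTERNS) out = out.replace(re, repl);
  return out.replace(SENSITIVE_KEYS, '"$1": "[REDACTED]"');
}
```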
You can define custom redaction rules in Settings for project-specific sensitive fields. Four rule types are supported: header-name, query-param, json-key, and regex.
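One way to picture the four rule types is as a small discriminated union, each variant applied as a different kind of substitution. This is a hypothetical model; the field names and `applyRule` helper are assumptions, not Signal's settings format:

```typescript
// Hypothetical model of the four custom rule types described above.
type RedactionRule =
  | { kind: "header-name"; name: string }   // blank a header by name
  | { kind: "query-param"; name: string }   // blank a URL query parameter
  | { kind: "json-key"; key: string }       // blank a key in JSON bodies
  | { kind: "regex"; pattern: string };     // free-form pattern match

function applyRule(rule: RedactionRule, input: string): string {
  switch (rule.kind) {
    case "header-name":
      return input.replace(new RegExp(`^${rule.name}:.*$`, "gim"), `${rule.name}: [REDACTED]`);
    case "query-param":
      return input.replace(new RegExp(`([?&]${rule.name}=)[^&\\s]*`, "g"), "$1[REDACTED]");
    case "json-key":
      return input.replace(new RegExp(`"${rule.key}"\\s*:\\s*"[^"]*"`, "g"), `"${rule.key}": "[REDACTED]"`);
    case "regex":
      return input.replace(new RegExp(rule.pattern, "g"), "[REDACTED]");
  }
}
```

For example, a `query-param` rule named `session_id` would turn `?session_id=abc123&page=2` into `?session_id=[REDACTED]&page=2` while leaving the rest of the URL intact.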
The Workflow
1. Record a session where the bug occurs
2. Open the viewer and click LLM Context
3. Toggle sections to tune what is included
4. Copy the context to the clipboard
5. Paste it into your AI agent of choice
With the VS Code extension and Signal MCP server, steps 3 to 5 are automatic. Signal pushes the context directly to your agent when you click Open in VS Code.
What You Get Back
With a well-structured context, a modern LLM can typically:
- Identify the root cause from the call stack and HAR data
- Pinpoint the exact component that owns the failing element
- Suggest a specific fix with file and line references
- Explain why the bug occurs in terms the team can act on
The limiting factor is no longer the AI - it is the quality of the context you give it. Signal makes that context precise and complete.