[{"data":1,"prerenderedAt":235},["ShallowReactive",2],{"blog-debugging-with-llm-context":3},{"id":4,"title":5,"author":6,"body":7,"date":225,"description":226,"extension":227,"meta":228,"navigation":229,"path":230,"readingTime":231,"seo":232,"stem":233,"__hash__":234},"blog\u002Fblog\u002Fdebugging-with-llm-context.md","Debugging 10x Faster with Signal LLM Context Export","Signal Team",{"type":8,"value":9,"toc":215},"minimark",[10,15,19,36,39,43,50,65,68,91,95,98,101,115,118,122,125,145,164,168,185,191,195,198,212],[11,12,14],"h2",{"id":13},"the-problem-with-pasting-bug-reports-into-chatgpt","The Problem with Pasting Bug Reports into ChatGPT",[16,17,18],"p",{},"If you have tried passing a bug report to an AI model, you have probably run into two problems:",[20,21,22,30],"ol",{},[23,24,25,29],"li",{},[26,27,28],"strong",{},"Too much noise"," - Raw logs include third-party errors, internal framework noise, and megabytes of HAR data the model does not need.",[23,31,32,35],{},[26,33,34],{},"Wrong format"," - AI models work best with structured context, not copy-pasted DevTools output.",[16,37,38],{},"Signal's LLM Context export solves both.",[11,40,42],{"id":41},"how-it-works","How It Works",[16,44,45,46,49],{},"After recording a session with a bug, open the viewer and click ",[26,47,48],{},"LLM Context"," in the header. You will see a two-panel modal:",[51,52,53,59],"ul",{},[23,54,55,58],{},[26,56,57],{},"Left panel",": A live preview of the context that will be passed to the model",[23,60,61,64],{},[26,62,63],{},"Right panel",": A config sidebar where you can toggle each section on or off",[16,66,67],{},"The sections you can include:",[51,69,70,73,76,79,82,85,88],{},[23,71,72],{},"Environment (browser, OS, viewport, timezone)",[23,74,75],{},"User events (steps to reproduce, written in imperative English)",[23,77,78],{},"Reported issues (what the user flagged with the Report Issue tool)",[23,80,81],{},"Console errors (first-party only, third-party noise filtered out)",[23,83,84],{},"Failed network requests (filterable by domain)",[23,86,87],{},"Request and response bodies (redacted through the redaction engine)",[23,89,90],{},"Framework component state (React fiber, Vue instance, Svelte stores)",[11,92,94],{"id":93},"token-budget-awareness","Token Budget Awareness",[16,96,97],{},"The modal shows a circular arc indicator displaying how much of your chosen model's context window is used. The arc turns yellow at 70% and red at 90%.",[16,99,100],{},"Supported models and their limits:",[51,102,103,106,109,112],{},[23,104,105],{},"Claude Sonnet\u002FOpus: 200K tokens",[23,107,108],{},"GPT-4o: 128K tokens",[23,110,111],{},"Gemini 1.5: 1M tokens",[23,113,114],{},"Llama 3: 128K tokens",[16,116,117],{},"If you are over budget, uncheck the large sections first. Response bodies are usually the culprit.",[11,119,121],{"id":120},"automatic-redaction","Automatic Redaction",[16,123,124],{},"Before the LLM sees any data, Signal's redaction engine runs over the entire context. 
## The Workflow

1. Record a session where the bug occurs
2. Open the viewer and click LLM Context
3. Toggle sections to tune what is included
4. Copy the context to the clipboard
5. Paste into your AI agent of choice

With the VS Code extension and Signal MCP server, steps 3 to 5 are automatic. Signal pushes the context directly to your agent when you click **Open in VS Code**.

## What You Get Back

With a well-structured context, a modern LLM can typically:

- Identify the root cause from the call stack and HAR data
- Pinpoint the exact component that owns the failing element
- Suggest a specific fix with a file and line reference
- Explain why the bug occurs in terms the team can act on

The limiting factor is no longer the AI - it is the quality of the context you give it. Signal makes that context precise and complete.
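One last practical note on token budgets: the arc indicator does this math for you, but if you want a rough pre-flight check before you even open the modal, the common ~4-characters-per-token heuristic for English text gets you close. A minimal sketch, assuming that heuristic and the model limits listed above (this is an approximation, not Signal's actual tokenizer):

```ts
// Rough pre-flight check: will this context plausibly fit a model's window?
// Uses the ~4 chars/token approximation for English text, not a real tokenizer.
const CONTEXT_LIMITS: Record<string, number> = {
  "claude-sonnet": 200_000, // also Opus
  "gpt-4o": 128_000,
  "gemini-1.5": 1_000_000,
  "llama-3": 128_000,
};

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function budgetStatus(text: string, model: string): "ok" | "warn" | "over" {
  const used = estimateTokens(text) / CONTEXT_LIMITS[model];
  if (used >= 0.9) return "over"; // Signal's arc turns red at 90%
  if (used >= 0.7) return "warn"; // and yellow at 70%
  return "ok";
}
```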