What Happened
The FORGE workflow failed on 5 consecutive executions (125-129) at the same early node: Fetch Epic Issue returned HTTP 404. A previous debugging agent spent 1.5 hours applying two “fixes” to nodes that were 20+ nodes deep into the pipeline — nodes that were never reached in any failing execution.
The fixes applied:
- SplitInBatches v3 → v1 downgrade on Iterate Blocks and Iterate Tasks
- A 5,148 → 6,434 character rewrite of Assemble FORGE Prompt
The Actual Root Cause
The webhook payload sent epic_number, but the workflow’s Set Configuration node reads $json.body.epic_issue_number. The epic issue number was silently undefined, cast to NaN, and the GitHub URL became .../issues/NaN — a 404.
Time to fix once identified: 30 seconds.
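The failure mode can be reproduced in plain JavaScript. This is a sketch of what the Set Configuration expression effectively did; the repository path in the URL is a placeholder, not the real one:

```javascript
// What the webhook actually delivered
const body = { epic_number: 42 };

// Set Configuration read a key that does not exist on that payload
const epicIssue = Number(body.epic_issue_number); // Number(undefined) -> NaN

// The GitHub URL then contains the literal string "NaN", which 404s
const url = `https://api.github.com/repos/owner/repo/issues/${epicIssue}`;
```

Because `undefined` coerces to NaN rather than throwing, nothing fails until a downstream node tries to use the URL.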
The Anti-Pattern
Hypothesis fixation: The agent assumed the failure was in complex internal logic (batch processing, template assembly) rather than checking the simplest thing first — whether the input data was correct. In n8n workflows, execution traces show exactly which node failed and its input/output data. Executions 125-129 all showed the same pattern: Set Configuration output epic_issue: NaN, then Fetch Epic Issue returned 404. The root cause was visible in the first execution’s trace data.
The Rule: Trace From Input Forward
When debugging n8n workflows:
- Start at the webhook trigger and log the raw payload
- Check that Set Configuration outputs match expectations
- Walk forward through the execution trace, node by node
- The first node that produces unexpected output is the problem
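A fail-fast guard at the start of the workflow makes this class of bug impossible to miss. The sketch below is a hypothetical validation helper in the style of an n8n Code node; `items` stands in for `$input.all()`, and the field name matches the workflow above:

```javascript
// Validate the webhook payload before any downstream node runs.
// In a real n8n Code node, call this as: return validateConfig($input.all());
function validateConfig(items) {
  return items.map((item) => {
    const epic = item.json.body?.epic_issue_number;
    if (!Number.isInteger(epic)) {
      // Throw at the first node instead of surfacing as a 404 twenty nodes later
      throw new Error(
        `Missing or non-integer epic_issue_number in webhook body: ${JSON.stringify(item.json.body)}`
      );
    }
    return { json: { ...item.json, epic_issue: epic } };
  });
}
```

With this guard, execution 125 would have failed immediately at Set Configuration with a message naming the missing field.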
What Also Got Fixed (Positive Outcomes)
The debugging session also produced a genuine architectural improvement: replacing SplitInBatches feedback loops with vectorized $input.all().map() processing. This makes the workflow simpler (fewer nodes, no loop connections) and avoids n8n’s limitation where $('NodeName').all() only returns current-iteration items inside a SplitInBatches loop.
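A sketch of the vectorized pattern, with a plain array standing in for `$input.all()` (the item shape and the slug transformation are illustrative, not taken from the actual workflow):

```javascript
// Before: SplitInBatches fed items one at a time through a loop connection.
// After: a single Code node transforms every item in one pass.
const all = [{ json: { title: "Block A" } }, { json: { title: "Block B" } }];
// In a real n8n Code node: const all = $input.all();

const processed = all.map((item, i) => ({
  json: {
    ...item.json,
    index: i,
    slug: item.json.title.toLowerCase().replace(/\s+/g, "-"),
  },
}));
// Every item is in scope at once, so $('NodeName').all() scoping never applies
```

In an n8n Code node the last line would be `return processed;`, and the loop-back connection disappears entirely.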