Fix a regression where the final assistant answer could stay inside the thinking container, leaving only “Working” visible. #298500
Pull request overview
Fixes a chat rendering regression where the final assistant markdown could remain inside the “thinking” container, leaving only the “Working” indicator visible. It does so by unifying “final answer” classification and ensuring final markdown is re-rendered and moved out of thinking when needed.
Changes:
- Consolidates “final answer markdown” detection into `isFinalAnswerMarkdownPart(...)` and reuses it across render/diff paths.
- Updates progressive diff rendering to move final-answer markdown out of the thinking container when the previously-rendered node is inside thinking.
- Adds browser widget regression tests covering final-answer classification, diff rerender behavior, and DOM relocation out of thinking.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/vs/workbench/contrib/chat/browser/widget/chatListRenderer.ts | Unifies final-answer detection and adjusts diff rendering to relocate final markdown out of thinking. |
| src/vs/workbench/contrib/chat/test/browser/widget/chatListRenderer.test.ts | Adds regression tests for final-answer classification and the “move final markdown out of thinking” behavior. |
```ts
alreadyRenderedPart.domNode.replaceWith(newPart.domNode);
if (isFinalAnswerPart && this.isRenderedPartInsideThinking(alreadyRenderedPart)) {
	alreadyRenderedPart.domNode.remove();
	templateData.value.appendChild(newPart.domNode);
```
When moving a final-answer markdown part out of the thinking container, this appends the new DOM node to templateData.value. If there are already-rendered sibling parts that come after this markdown in contentForThisTurn (e.g., changes summary, code citations, error details), appending will reorder the DOM relative to those parts. Consider inserting the node at the correct position (e.g., before the next rendered sibling domNode for a later contentIndex, falling back to append when none exists) so visual order continues to match the content array.
```diff
-templateData.value.appendChild(newPart.domNode);
+// Insert the new part at the correct position relative to already-rendered siblings
+let inserted = false;
+for (let i = contentIndex + 1; i < renderedParts.length; i++) {
+	const nextPart = renderedParts[i];
+	const nextDomNode = nextPart?.domNode;
+	if (nextDomNode && nextDomNode.parentElement === templateData.value) {
+		templateData.value.insertBefore(newPart.domNode, nextDomNode);
+		inserted = true;
+		break;
+	}
+}
+if (!inserted) {
+	templateData.value.appendChild(newPart.domNode);
+}
```
```ts
const lastPinnedPartIndex = content.findLastIndex(c =>
	c.kind === 'thinking'
	|| c.kind === 'toolInvocation'
	|| c.kind === 'toolInvocationSerialized'
	|| c.kind === 'textEditGroup'
	|| c.kind === 'hook');

// ...

return index > lastPinnedPartIndex;
```
isFinalAnswerMarkdownPart computes lastPinnedPartIndex via findLastIndex each time it’s called. Since it’s invoked inside per-part loops (diff and renderChatContentDiff) and also during markdown rendering, this can turn into O(n^2) scans for larger responses once element.isComplete becomes true. Consider computing lastPinnedPartIndex once per render/diff pass (or caching it per content array) and reusing it when evaluating indexes.
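A minimal sketch of the suggested caching, outside the real renderer: the pinned-part boundary is computed once per pass and reused for every part. `ContentPart`, `PINNED_KINDS`, and the helper names here are illustrative stand-ins, not code from the PR.

```typescript
// Hypothetical sketch: compute the pinned-part boundary once per render/diff
// pass instead of re-scanning inside every per-part classification call.
type ContentKind =
	| 'markdownContent'
	| 'thinking'
	| 'toolInvocation'
	| 'toolInvocationSerialized'
	| 'textEditGroup'
	| 'hook';

interface ContentPart {
	kind: ContentKind;
}

// The part kinds that keep subsequent markdown pinned inside thinking.
const PINNED_KINDS: ReadonlySet<ContentKind> = new Set<ContentKind>([
	'thinking', 'toolInvocation', 'toolInvocationSerialized', 'textEditGroup', 'hook',
]);

// One O(n) backward scan per pass, shared by all parts in that pass.
function computeLastPinnedIndex(content: readonly ContentPart[]): number {
	for (let i = content.length - 1; i >= 0; i--) {
		if (PINNED_KINDS.has(content[i].kind)) {
			return i;
		}
	}
	return -1;
}

// Per-part check becomes O(1) once the boundary is precomputed.
function isFinalAnswerMarkdown(part: ContentPart, index: number, lastPinnedIndex: number): boolean {
	return part.kind === 'markdownContent' && index > lastPinnedIndex;
}
```

With this shape, the per-part loops in `diff` and `renderChatContentDiff` would pass the precomputed index down rather than re-deriving it.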
thanks for giving this a look - i'm not entirely sure i understand this solution yet, but i might have a simpler one that I'm double checking atm. but on top of that, the markdown here should never be added into that dropdown in the first place - hence my comment #297392 (comment).
closing this PR in favor of #298519. we can likely rip out some of the previous changes. also, funny thing: this isn't a regression - i've always checked for codeblocks naively this way for the last few months, so i'm wondering if we just got a lot of reports in the last few weeks because of another issue. will push this and see if we fix most of the situations tho!

This PR fixes a regression where the final assistant answer could stay inside the thinking container, leaving only “Working” visible. Fix #297392
I used GPT-5.3-Codex to help draft and validate this change.
The “final answer” check was implemented in multiple places with slightly different rules.
Because of that, some render paths treated the last markdown as still pinned to thinking.
Added regression tests:
- `diff` forces rerender when prior markdown is inside thinking.
- `renderChatContentDiff` moves final markdown out of thinking into the root container.
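For illustration, the relocation behavior the second test asserts can be sketched framework-free, modeling containers as plain arrays instead of DOM nodes. `FakeNode` and `moveFinalAnswerOutOfThinking` are hypothetical names, not the real test or renderer code.

```typescript
// Hypothetical stand-in for a DOM node in the rendered chat tree.
interface FakeNode {
	id: string;
}

// Mirrors the fix's DOM operations: remove the node from the thinking
// container, then re-parent it under the root value container.
function moveFinalAnswerOutOfThinking(
	thinking: FakeNode[],
	root: FakeNode[],
	node: FakeNode,
): void {
	const i = thinking.indexOf(node);
	if (i !== -1) {
		thinking.splice(i, 1); // equivalent of domNode.remove()
		root.push(node);       // equivalent of templateData.value.appendChild(domNode)
	}
}
```

The real regression test performs the same check against actual DOM nodes: after the diff pass, the final-answer node must no longer be a descendant of the thinking container.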