
Fix a regression where the final assistant answer could stay inside the thinking container, leaving only “Working” visible.#298500

Closed
tamuratak wants to merge 2 commits into microsoft:main from tamuratak:fix_finalresponse_rendering_2_f

Conversation

@tamuratak
Contributor

This PR fixes a regression where the final assistant answer could stay inside the thinking container, leaving only “Working” visible. Fix #297392

I used GPT-5.3-Codex to help draft and validate this change.

The “final answer” check was implemented in multiple places with slightly different rules.
As a result, some render paths treated the last markdown part as still pinned to the thinking container.

Added regression tests:

  1. Final-answer classification after pinned parts.
  2. diff forces a rerender when the prior markdown is inside thinking.
  3. renderChatContentDiff moves the final markdown out of thinking into the root container.
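The unified classification described above can be sketched roughly as follows. This is a minimal, hypothetical sketch with invented types — the real implementation lives in chatListRenderer.ts and, per the review comments, also depends on element.isComplete and uses findLastIndex.

```typescript
// Invented, simplified types for illustration only.
type ChatPartKind =
	'markdownContent' | 'thinking' | 'toolInvocation'
	| 'toolInvocationSerialized' | 'textEditGroup' | 'hook';

interface ChatPart { kind: ChatPartKind; }

// A markdown part counts as the final answer only when it appears after the
// last "pinned" part (thinking, tool invocations, edit groups, hooks).
function isFinalAnswerMarkdownPart(content: ChatPart[], index: number): boolean {
	if (content[index]?.kind !== 'markdownContent') {
		return false;
	}
	// Backward scan for the last pinned part (the PR uses findLastIndex).
	let lastPinnedPartIndex = -1;
	for (let i = content.length - 1; i >= 0; i--) {
		const kind = content[i].kind;
		if (kind === 'thinking'
			|| kind === 'toolInvocation'
			|| kind === 'toolInvocationSerialized'
			|| kind === 'textEditGroup'
			|| kind === 'hook') {
			lastPinnedPartIndex = i;
			break;
		}
	}
	return index > lastPinnedPartIndex;
}
```

Centralizing this one predicate is what keeps the various render and diff paths from disagreeing about which markdown part is the final answer.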

Copilot AI review requested due to automatic review settings February 28, 2026 22:51

Copilot AI left a comment


Pull request overview

Fixes a chat rendering regression where the final assistant markdown could remain inside the “thinking” container, leaving only the “Working” indicator visible. The fix unifies “final answer” classification and ensures the final markdown is re-rendered and moved out of thinking when needed.

Changes:

  • Consolidates “final answer markdown” detection into isFinalAnswerMarkdownPart(...) and reuses it across render/diff paths.
  • Updates progressive diff rendering to move final-answer markdown out of the thinking container when the previously-rendered node is inside thinking.
  • Adds browser widget regression tests covering final-answer classification, diff rerender behavior, and DOM relocation out of thinking.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File | Description
src/vs/workbench/contrib/chat/browser/widget/chatListRenderer.ts | Unifies final-answer detection and adjusts diff rendering to relocate final markdown out of thinking.
src/vs/workbench/contrib/chat/test/browser/widget/chatListRenderer.test.ts | Adds regression tests for final-answer classification and the “move final markdown out of thinking” behavior.

alreadyRenderedPart.domNode.replaceWith(newPart.domNode);
if (isFinalAnswerPart && this.isRenderedPartInsideThinking(alreadyRenderedPart)) {
	alreadyRenderedPart.domNode.remove();
	templateData.value.appendChild(newPart.domNode);

Copilot AI Feb 28, 2026


When moving a final-answer markdown part out of the thinking container, this appends the new DOM node to templateData.value. If there are already-rendered sibling parts that come after this markdown in contentForThisTurn (e.g., changes summary, code citations, error details), appending will reorder the DOM relative to those parts. Consider inserting the node at the correct position (e.g., before the next rendered sibling domNode for a later contentIndex, falling back to append when none exists) so visual order continues to match the content array.

Suggested change
templateData.value.appendChild(newPart.domNode);
// Insert the new part at the correct position relative to already-rendered siblings
let inserted = false;
for (let i = contentIndex + 1; i < renderedParts.length; i++) {
	const nextPart = renderedParts[i];
	const nextDomNode = nextPart?.domNode;
	if (nextDomNode && nextDomNode.parentElement === templateData.value) {
		templateData.value.insertBefore(newPart.domNode, nextDomNode);
		inserted = true;
		break;
	}
}
if (!inserted) {
	templateData.value.appendChild(newPart.domNode);
}

Comment on lines +1488 to +1495
const lastPinnedPartIndex = content.findLastIndex(c =>
	c.kind === 'thinking'
	|| c.kind === 'toolInvocation'
	|| c.kind === 'toolInvocationSerialized'
	|| c.kind === 'textEditGroup'
	|| c.kind === 'hook');

return index > lastPinnedPartIndex;

Copilot AI Feb 28, 2026


isFinalAnswerMarkdownPart computes lastPinnedPartIndex via findLastIndex each time it’s called. Since it’s invoked inside per-part loops (diff and renderChatContentDiff) and also during markdown rendering, this can turn into O(n^2) scans for larger responses once element.isComplete becomes true. Consider computing lastPinnedPartIndex once per render/diff pass (or caching it per content array) and reusing it when evaluating indexes.
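The caching the reviewer suggests could look roughly like this. This is a hypothetical sketch, not the PR's code — the types, PINNED_KINDS set, and function names are invented for illustration.

```typescript
// Invented, simplified types for illustration only.
type PartKind =
	'markdownContent' | 'thinking' | 'toolInvocation'
	| 'toolInvocationSerialized' | 'textEditGroup' | 'hook';

interface Part { kind: PartKind; }

// Mirrors the pinned kinds listed in the review snippet.
const PINNED_KINDS: ReadonlySet<PartKind> = new Set<PartKind>([
	'thinking', 'toolInvocation', 'toolInvocationSerialized', 'textEditGroup', 'hook',
]);

// One O(n) backward scan per render/diff pass, instead of one scan per part.
function computeLastPinnedIndex(content: readonly Part[]): number {
	for (let i = content.length - 1; i >= 0; i--) {
		if (PINNED_KINDS.has(content[i].kind)) {
			return i;
		}
	}
	return -1;
}

// The per-part check then reuses the cached index and becomes O(1).
function isFinalAnswerAt(index: number, lastPinnedIndex: number): boolean {
	return index > lastPinnedIndex;
}
```

With this shape, a diff pass computes lastPinnedIndex once for the content array and passes it into each per-part check, avoiding the O(n²) worst case the reviewer describes.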

@justschen
Collaborator

Thanks for giving this a look - I'm not entirely sure I understand this solution yet, but I might have a simpler one that I'm double-checking at the moment.

These were the traces:
[Screenshot of the stack traces, 2026-02-28]

But on top of that, the markdown here should never be added into that dropdown in the first place - hence my comment #297392 (comment).

@justschen
Collaborator

justschen commented Mar 1, 2026

Closing this PR in favor of #298519. We can likely rip out some of the previous finalResponse stuff, but I'll investigate that in depth. Thanks a lot!

Also, funny thing: this isn't a regression - I've always checked for codeblocks naively this way for the last few months, so I'm wondering if we just got a lot of reports in the last few weeks because of another issue. Will push this and see if we fix most of the situations, though!

@justschen justschen closed this Mar 1, 2026


Development

Successfully merging this pull request may close these issues.

Chat stuck in Working loop
