Chat launched with codex spawns a claude session instead — model selection ignored on chat creation

Metadata

Status: done
Assigned: agent-954
Agent identity: 3184716484e6f0ea08bb13539daf07686ee79d440505f1fdf2de0357707034c3
Model: claude:opus
Created: 2026-04-28T21:23:33.076743389+00:00
Started: 2026-04-28T21:51:41.583465222+00:00
Completed: 2026-04-28T22:10:54.264805796+00:00
Tags: eval-scheduled
Eval score: 0.91
└ blocking impact0.92
└ completeness0.92
└ coordination overhead0.88
└ correctness0.95
└ downstream usability0.90
└ efficiency0.85
└ intent fidelity0.81
└ style adherence0.90

Description

User reports: launched a new chat configured for codex, got a claude session. Model selection on chat creation is being ignored (or overridden) somewhere in the chat-creation pipeline.

Verbatim: 'lol. i just tried to launch a chat with codex and got... claude. lol lol.'

Likely suspects:

  1. The new chat dialog / wg service create-chat (or whatever the entry point is) doesn't pass the chosen model spec to the spawn
  2. Project-level coordinator.model (claude:opus in this user's config) silently wins over the per-chat selection
  3. Per CLAUDE.md, agency tasks ignore project cascade and pin to claude:haiku — chat agents may have analogous logic incorrectly applied
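
Suspect 2 can be sketched as a precedence bug in the config cascade. Everything below is illustrative, not taken from the wg codebase: `ModelSpec`, `resolve_model_*`, and the default values are assumptions standing in for whatever the real chat-spawn path uses.

```rust
// Hypothetical sketch of suspect 2: the project-level coordinator model
// is consulted before the per-chat selection, so an explicit choice of
// codex is silently discarded in favor of the project's claude:opus.
// All names here are illustrative placeholders.

#[derive(Debug, Clone, PartialEq)]
struct ModelSpec {
    executor: String, // e.g. "codex" or "claude"
    model: String,    // e.g. "gpt-5.5" or "opus"
}

fn default_spec() -> ModelSpec {
    ModelSpec { executor: "claude".into(), model: "opus".into() }
}

// Buggy form: project config shadows the per-chat selection.
fn resolve_model_buggy(per_chat: Option<&ModelSpec>, project: Option<&ModelSpec>) -> ModelSpec {
    project.or(per_chat).cloned().unwrap_or_else(default_spec)
}

// Fixed form: the explicit per-chat selection wins; project config is
// only a fallback, and the hardcoded default is a last resort.
fn resolve_model_fixed(per_chat: Option<&ModelSpec>, project: Option<&ModelSpec>) -> ModelSpec {
    per_chat.or(project).cloned().unwrap_or_else(default_spec)
}

fn main() {
    let chosen = ModelSpec { executor: "codex".into(), model: "gpt-5.5".into() };
    let project = ModelSpec { executor: "claude".into(), model: "opus".into() };
    // The buggy resolver ignores the explicit choice...
    assert_eq!(resolve_model_buggy(Some(&chosen), Some(&project)).executor, "claude");
    // ...while the fixed one honors it.
    assert_eq!(resolve_model_fixed(Some(&chosen), Some(&project)).executor, "codex");
    println!("ok");
}
```

If the real cause turns out to be suspect 1 (the spec is dropped before reaching the spawn) the fix lands elsewhere, but the same precedence check applies wherever the spec is finally resolved.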

This is distinct from the codex worker-task handler bug (already fixed in b6c786cef). That was about prompt-stdin wiring for wg add worker tasks. This bug is in the chat-spawn code path, which is a different surface.

Investigation order

  1. Repro: launch a fresh chat, explicitly pick codex / gpt-5.5 in whatever the UI offers. Confirm via wg agents --alive which handler actually spawned.
  2. Find the chat-spawn code path (likely src/commands/service/coordinator_agent.rs, src/commands/service/mod.rs, or the IPC layer between TUI and dispatcher). Trace where the model selection flows from UI → spawn args → executor wrapper.
  3. Identify where the override happens (config cascade vs hardcoded vs IPC drop) and fix it.
  4. Add a regression test: spawning a chat with explicit model spec must result in a worker subprocess invoked with that model.
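
The regression test in step 4 could take roughly this shape. This is a sketch under assumptions: `build_spawn_argv` and its flag layout are hypothetical stand-ins for whatever the chat-spawn path actually hands to the executor wrapper, since the real function isn't identified yet (that's step 2).

```rust
// Hypothetical regression test: spawning a chat with an explicit model
// spec must yield a subprocess argv invoking that executor and model.
// Rather than mocking a real dispatcher, this sketch asserts on the
// argv that an assumed builder function produces.

fn build_spawn_argv(executor: &str, model: &str) -> Vec<String> {
    // Placeholder for the spawn layer's argv construction.
    vec![executor.to_string(), "--model".to_string(), model.to_string()]
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn chat_spawn_honors_explicit_model_spec() {
        // Would have failed pre-fix if the cascade overrode the choice.
        let argv = build_spawn_argv("codex", "gpt-5.5");
        assert_eq!(argv[0], "codex", "wrong executor spawned");
        assert_eq!(argv, vec!["codex".to_string(), "--model".into(), "gpt-5.5".into()]);
    }
}

fn main() {
    assert_eq!(build_spawn_argv("codex", "gpt-5.5")[0], "codex");
    println!("ok");
}
```

In the real test, the subprocess should be mocked at whatever seam the executor wrapper uses, so the assertion runs against the argv the dispatcher actually emits.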

Validation

  • Failing test first: spawning a chat agent with model = codex:gpt-5.5 results in a codex CLI subprocess (mock the subprocess, assert on argv)
  • Implementation makes the test pass
  • Live smoke: open TUI, create a new chat picking codex, confirm wg agents --alive shows codex executor
  • Live smoke: same with claude — assert no regression
  • Live smoke: existing chats spawned pre-fix continue to work on whatever they were on
  • cargo build + cargo test pass

Depends on

Required by

Log