Overview

The Chat view gives you a streaming conversation interface backed by any LLM you've configured. What makes it powerful is automatic MCP tool calling: the LLM can invoke any tool from your active connections mid-conversation, and MCP Explorer handles the round-trip transparently.


Starting a Chat

  1. Click New Session in the left sidebar to create a fresh conversation
  2. Select which Connections to make available for tool calling from the multiselect dropdown
  3. Choose your Model (defaults to GPT-4o)
  4. Type your message and press Ctrl+Enter or click the Send button
Chat session with mcp server connection selected

A new chat session with the mcp server connection selected. The connection badge appears next to the model selector, confirming the LLM has access to all MCP tools from that server.


Sending a Message

Type your message in the input box at the bottom of the chat area. Responses stream in real time, so you see tokens appearing as the model generates them. Token usage (input ↑ / output ↓) is shown under each assistant response.

who is garrard? typed in chat input

"who is garrard?" ready to send. Press Ctrl+Enter to submit.

LLM response streamed into chat with MCP tool call

GPT-4o's streamed response to "who is garrard?". Notice the 🔧 Calling tool: who_is badge; the model automatically invoked an MCP tool mid-conversation to answer the question. Thinking time (🤔 2.5s) and token cost (💰 1158↑ 14↓) are shown beneath the message.


Slash Commands

Type / in the message input to open the command palette. All available commands appear with descriptions; navigate with the ↑/↓ arrow keys, press Enter to select, or press Esc to dismiss.

Slash command menu showing all available commands

Typing / reveals the full command palette with categories for MCP Prompts and Chat Controls.

Prompt Picker dialog showing MCP prompts after /prompt command

After pressing Enter on /prompt, the Prompt Picker dialog opens. Select any MCP prompt from the connected server, fill in its arguments, then click Run in Chat to inject it directly into the conversation.

Command    Description
/prompt    Browse & inject an MCP prompt template from connected servers
/stats     Show token usage (input / output / total) for this session
/report    Copy the entire conversation to the clipboard as formatted Markdown
/system    Open the system prompt editor for the active model
/model     Quick-switch the active AI model
/clear     Clear all messages in the current session
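The commands above follow a simple prefix-dispatch pattern: the first whitespace-delimited token selects the action, and anything that doesn't start with / is a plain chat message. A minimal sketch in Python; the command names come from the table, but the action strings and the `dispatch` function itself are purely illustrative, not MCP Explorer's internals:

```python
# Hypothetical slash-command dispatcher. Command names match the table
# above; the action descriptions and this function are illustrative.
COMMANDS = {
    "/prompt": "open the Prompt Picker",
    "/stats": "show token usage",
    "/report": "copy conversation as Markdown",
    "/system": "open the system prompt editor",
    "/model": "switch the active model",
    "/clear": "clear the session",
}

def dispatch(text: str):
    """Return the action for a slash command, or None for a plain message."""
    if not text.startswith("/"):
        return None
    name = text.split()[0]  # command is the first whitespace-delimited token
    return COMMANDS.get(name, "unknown command")

print(dispatch("/stats"))  # show token usage
```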

Tool Calling

When the LLM decides to use a tool, MCP Explorer:

  1. Shows an active tool badge indicating which tool is running
  2. Sends the tool call to the appropriate MCP server
  3. Returns the result to the LLM to continue its response

Tool call details (inputs and outputs) are shown inline in the message stream and are fully expandable.
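The three-step round-trip above can be sketched as a loop: complete, execute any requested tools, feed results back, and repeat until the model answers in plain text. Everything here is illustrative; `FakeLLM`, `FakeServer`, and the message shapes are stand-ins for the real model client and MCP server, not MCP Explorer's actual internals.

```python
# Sketch of the tool-call round-trip (steps 1-3 above), with fakes
# standing in for the LLM and the MCP server.

class FakeServer:
    def call_tool(self, name, arguments):
        return f"result of {name}({arguments})"

class FakeLLM:
    """Requests one tool call, then answers using its result."""
    def complete(self, messages):
        tool_results = [m for m in messages if m.get("role") == "tool"]
        if not tool_results:
            return {"role": "assistant", "tool_calls": [
                {"id": "t1", "server": "mcp", "name": "who_is",
                 "arguments": {"name": "garrard"}}]}
        return {"role": "assistant",
                "content": f"Answer based on: {tool_results[-1]['content']}"}

def run_turn(llm, servers, messages):
    """Loop until the model produces a reply with no tool calls."""
    while True:
        reply = llm.complete(messages)
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply  # final assistant text
        for call in reply["tool_calls"]:
            # Route the call to the MCP server that owns the tool,
            # then return the result to the model as a tool message.
            result = servers[call["server"]].call_tool(
                call["name"], call["arguments"])
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": result})

final = run_turn(FakeLLM(), {"mcp": FakeServer()},
                 [{"role": "user", "content": "who is garrard?"}])
print(final["content"])
```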


Token Usage

Each LLM response shows a token usage summary:

  • Prompt tokens: input sent to the model
  • Completion tokens: tokens generated in the response
  • Total: prompt and completion combined
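Using the counts from the screenshot earlier (1158 input, 14 output), a session-level summary like the one /stats reports reduces to simple sums over the per-response counts. The `session_stats` helper below is a hypothetical sketch, assuming each response dict carries its token counts:

```python
# Hypothetical session_stats helper; assumes each response carries
# prompt_tokens and completion_tokens counts like those shown above.
def session_stats(responses):
    prompt = sum(r["prompt_tokens"] for r in responses)
    completion = sum(r["completion_tokens"] for r in responses)
    return {"input": prompt, "output": completion,
            "total": prompt + completion}

print(session_stats([{"prompt_tokens": 1158, "completion_tokens": 14}]))
# {'input': 1158, 'output': 14, 'total': 1172}
```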

Conversation Management

  • New Chat: start a fresh conversation (retains model and connection selection)
  • Clear: clear messages without changing settings
  • Chat history is persisted per session

Sensitive Data in Chat

MCP Explorer detects and masks sensitive data (API keys, passwords, secrets) in:

  • Your messages before they're sent
  • Tool response content before display

Masked values are shown as ●●●●●●●● with a reveal toggle.
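A masking pass of this kind can be sketched with regular expressions applied to text before it is sent or displayed. The two patterns below (an OpenAI-style key prefix and password assignments) are examples for illustration only, not MCP Explorer's actual detection rules:

```python
import re

# Illustrative masking pass; these patterns are examples, not the
# app's real detection rules.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),          # OpenAI-style API key
    re.compile(r"(?i)(password\s*[:=]\s*)\S+"),  # password assignments
]

def mask(text, placeholder="●●●●●●●●"):
    for pat in SECRET_PATTERNS:
        if pat.groups:
            # Keep the "password:" prefix, mask only the value.
            text = pat.sub(lambda m: m.group(1) + placeholder, text)
        else:
            text = pat.sub(placeholder, text)
    return text

print(mask("my key is sk-abcdefghijklmnopqrstuv"))
```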