Introduction
If you have worked with LangGraph's "human in the loop" concept before, MCP's elicitation mechanism will feel familiar. The two features are very similar: both let the AI pause when needed and politely ask a human for help or confirmation.
Imagine chatting with a friend who suddenly asks, "Hey, which shirt should I wear to tomorrow's party?" You stop, think, and offer a suggestion. Elicitation gives AI this same ability to "ask for help": it allows an MCP server to request structured input from the user while a tool is executing. Instead of requiring every input up front, the server can interact with the user as needed, prompting for missing parameters, asking for clarification, or gathering additional context. For example, a file-management tool might ask, "Which directory should I create this file in?", while a data-analysis tool might ask, "Which time range should I analyze?"
Elicitation lets a tool pause mid-execution and request specific information from the user. This is especially useful for:
- Missing parameters: proactively ask the user when the initial input omits required information
- Clarification requests: obtain the user's confirmation or choice in ambiguous situations
- Progressive disclosure: collect complex information step by step instead of demanding everything at once
- Dynamic workflows: adapt the tool's behavior in real time based on the user's responses
Basic Examples
Let's walk through a few basic examples of the elicitation feature.
MCP Server
On the server side, we create a tool that collects user information:

```python
from dataclasses import dataclass

from fastmcp import Context, FastMCP

mcp = FastMCP("Elicitation Server")


@dataclass
class UserInfo:
    name: str
    age: int


@mcp.tool
async def collect_user_info(ctx: Context) -> str:
    """Collect user information through interactive prompts."""
    result = await ctx.elicit(
        message="Please provide your information",
        response_type=UserInfo,
    )

    if result.action == "accept":
        user = result.data
        return f"Hello {user.name}, you are {user.age} years old"
    elif result.action == "decline":
        return "Information not provided"
    else:  # cancel
        return "Operation cancelled"


if __name__ == "__main__":
    mcp.run(transport="streamable-http", host="localhost", port=8001, show_banner=False)
```
Parameters of `ctx.elicit()`:
- `message`: the prompt shown to the user, phrased as a polite request
- `response_type`: a Python type (dataclass, primitive type, etc.) that defines the expected response structure. Note that elicitation responses support only a subset of JSON Schema types.
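Why the subset matters becomes clearer if you hand-convert the `UserInfo` dataclass into JSON Schema: only flat objects with primitive-typed fields survive the mapping. A minimal sketch (the `TYPE_MAP` and converter below are illustrative assumptions, not FastMCP internals):

```python
from dataclasses import dataclass, fields

# Hypothetical mapping from Python primitives to JSON Schema types;
# an illustration, not FastMCP's actual converter.
TYPE_MAP = {
    str: "string", int: "integer", float: "number", bool: "boolean",
    "str": "string", "int": "integer", "float": "number", "bool": "boolean",
}

@dataclass
class UserInfo:
    name: str
    age: int

def to_elicitation_schema(cls) -> dict:
    """Convert a flat dataclass into the JSON Schema object
    an elicitation request would describe to the client."""
    props = {}
    for f in fields(cls):
        if f.type not in TYPE_MAP:
            raise TypeError(f"unsupported field type: {f.type}")
        props[f.name] = {"type": TYPE_MAP[f.type]}
    return {
        "type": "object",
        "properties": props,
        "required": [f.name for f in fields(cls)],
    }

print(to_elicitation_schema(UserInfo))
```

A nested dataclass or a `list` field would fail this conversion, which is exactly the kind of type the elicitation schema subset excludes.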
`ctx.elicit()` returns an `ElicitationResult` object with the following attributes:
- `action`: how the user responded; always one of `accept`, `decline`, or `cancel`
- `data`: the user's input, of type `response_type` or `None`; populated only when `action == "accept"`
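The accept/decline/cancel branching in `collect_user_info` can be unit-tested without a running server by mocking the result object (the `ElicitationResult` below is a local stand-in for FastMCP's class, defined here only for the test):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ElicitationResult:
    """Stand-in for FastMCP's result object, for local testing only."""
    action: str          # "accept", "decline", or "cancel"
    data: Any = None     # populated only when action == "accept"

def describe(result: ElicitationResult) -> str:
    # Mirrors the branching used in the collect_user_info tool
    if result.action == "accept":
        return f"Hello {result.data['name']}, you are {result.data['age']} years old"
    elif result.action == "decline":
        return "Information not provided"
    else:  # cancel
        return "Operation cancelled"

print(describe(ElicitationResult("accept", {"name": "Rainux", "age": 18})))
print(describe(ElicitationResult("decline")))
```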
MCP Client
On the client side, we implement a callback that handles elicitation requests:

```python
import asyncio

from fastmcp import Client
from fastmcp.client.elicitation import ElicitResult
from mcp.shared.context import RequestContext
from mcp.types import ElicitRequestParams


async def elicitation_handler(message: str, response_type: type, params: ElicitRequestParams, context: RequestContext):
    print(f"MCP Server asks: {message}")
    user_name = input("Your name: ").strip()
    user_age = input("Your age: ").strip()
    if not user_name or not user_age:
        return ElicitResult(action="decline")

    # Returning an instance of response_type implicitly means "accept";
    # age is coerced to int to match the schema
    response_data = response_type(name=user_name, age=int(user_age))
    return response_data


mcp_client = Client("http://localhost:8001/mcp", elicitation_handler=elicitation_handler)


async def main():
    async with mcp_client:
        resp = await mcp_client.call_tool("collect_user_info", {})
        print(f"collect_user_info result: {resp}")


if __name__ == "__main__":
    asyncio.run(main())
```
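One detail worth noting in the handler: `input()` always returns strings, while the `UserInfo` schema declares `age` as an integer. A small coercion helper keeps the response schema-valid and falls back to declining on bad input (a sketch; `UserInfo` is redeclared so the snippet is self-contained):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserInfo:
    name: str
    age: int

def build_response(name: str, age_raw: str) -> Optional[UserInfo]:
    """Coerce raw terminal input into a schema-valid UserInfo.

    Returns None when the input cannot satisfy the schema,
    signalling the handler to send ElicitResult(action="decline").
    """
    if not name:
        return None
    try:
        age = int(age_raw)
    except ValueError:
        return None
    return UserInfo(name=name, age=age)

print(build_response("Rainux", "18"))        # UserInfo(name='Rainux', age=18)
print(build_response("Rainux", "eighteen"))  # None
```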
Sample output when the user provides values:

```text
MCP Server asks: Please provide your information
Your name: Rainux
Your age: 18
collect_user_info result: CallToolResult(content=[TextContent(type='text', text='Hello Rainux, you are 18 years old', annotations=None, meta=None)], structured_content={'result': 'Hello Rainux, you are 18 years old'}, data='Hello Rainux, you are 18 years old', is_error=False)
```

And when the user leaves the fields empty:

```text
MCP Server asks: Please provide your information
Your name:
Your age:
collect_user_info result: CallToolResult(content=[TextContent(type='text', text='Information not provided', annotations=None, meta=None)], structured_content={'result': 'Information not provided'}, data='Information not provided', is_error=False)
```
Running Shell Commands with User Confirmation
In this example, we build an interactive command-line program. When the user's request requires running a command that could modify the host, the AI politely asks the user for confirmation first.
MCP Server
When a command could modify the system, the server asks the user to confirm before executing it:

```python
import asyncio
from dataclasses import dataclass

from fastmcp import Context, FastMCP


@dataclass
class UserDecision:
    decision: str = "decline"


mcp = FastMCP("Elicitation Server")


@mcp.tool()
async def execute_command_local(command: str, ctx: Context, is_need_user_check: bool = False, timeout: int = 10) -> str:
    """Execute a shell command locally.

    Args:
        command (str): The shell command to execute.
        is_need_user_check (bool): Set to True when performing create, delete, or modify operations on the host, indicating that user confirmation is required.
        timeout (int): Timeout in seconds for command execution. Default is 10 seconds.
    Returns:
        str: The output of the shell command.
    """
    if is_need_user_check:
        user_check_result = await ctx.elicit(
            message=f"Do you want to execute this command(yes or no): {command}",
            response_type=UserDecision,  # response_type must map to a JSON Schema-compatible structure
        )
        if user_check_result.action != "accept":
            return "User denied command execution."

    proc = None
    try:
        proc = await asyncio.create_subprocess_shell(
            command,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
        stdout_str = stdout.decode().strip()
        stderr_str = stderr.decode().strip()
        if stdout_str:
            return f"Stdout: {stdout_str}"
        elif stderr_str:
            return f"Stderr: {stderr_str}"
        else:
            return "Command executed successfully with no output"
    except asyncio.TimeoutError:
        if proc and proc.returncode is None:
            try:
                proc.terminate()
                await proc.wait()
            except Exception:
                pass
        return f"Error: Command '{command}' timed out after {timeout} seconds"
    except Exception as e:
        return f"Error executing command '{command}': {str(e)}"


if __name__ == "__main__":
    mcp.run(transport="streamable-http", host="localhost", port=8001, show_banner=False)
```
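The subprocess-with-timeout pattern used by `execute_command_local` can be exercised on its own, independent of MCP. A standalone sketch using the same `create_subprocess_shell` / `wait_for` combination (assumes a POSIX shell is available):

```python
import asyncio

async def run_shell(command: str, timeout: int = 5) -> str:
    """Run a shell command, capturing stdout/stderr, with a timeout."""
    proc = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        proc.kill()        # stop the runaway process
        await proc.wait()  # reap it to avoid a zombie
        return f"timed out after {timeout}s"
    out = stdout.decode().strip()
    return out if out else stderr.decode().strip()

print(asyncio.run(run_shell("echo hello")))          # hello
print(asyncio.run(run_shell("sleep 2", timeout=1)))  # timed out after 1s
```

Note that after a timeout the child process must still be killed and awaited; `wait_for` cancels the `communicate()` call, not the process itself.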
MCP Client
The client's overall logic is largely the same as in the previous example; the key addition is the elicitation_handler() method that answers the server's confirmation requests:

```python
import asyncio
import json
import readline  # For enhanced input editing
import traceback
from typing import cast

from fastmcp import Client
from fastmcp.client.elicitation import ElicitResult
from mcp.shared.context import RequestContext
from mcp.types import ElicitRequestParams
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionMessageFunctionToolCall

from pkg.config import cfg
from pkg.log import logger


class MCPHost:
    """MCP host that manages the connection to and interaction with an MCP server."""

    def __init__(self, server_uri: str):
        """
        Initialize the MCP client.

        Args:
            server_uri (str): URI of the MCP server.
        """
        # Initialize the MCP client connection
        self.mcp_client: Client = Client(server_uri, elicitation_handler=self.elicitation_handler)
        # Async OpenAI client for talking to the LLM
        self.llm = AsyncOpenAI(
            base_url=cfg.llm_base_url,
            api_key=cfg.llm_api_key,
        )
        # Conversation history
        self.messages = []

    async def close(self):
        """Close the MCP client connection."""
        if self.mcp_client:
            await self.mcp_client.close()

    async def elicitation_handler(self, message: str, response_type: type, params: ElicitRequestParams, context: RequestContext):
        print(f"MCP Server asks: {message}")
        user_decision = input("Please check(yes or no): ").strip()
        if not user_decision or user_decision != "yes":
            return ElicitResult(action="decline")

        response_data = response_type(decision="accept")
        return response_data

    async def process_query(self, query: str) -> str:
        """Process a user query by interacting with the MCP server and LLM.

        Args:
            query (str): The user query to process.
        Returns:
            str: The response from the MCP server.
        """
        # Append the user query to the conversation history
        self.messages.append({
            "role": "user",
            "content": query,
        })
        # The async context manager ensures the MCP connection is properly opened and closed
        async with self.mcp_client:
            # Fetch the available tools from the MCP server
            tools = await self.mcp_client.list_tools()
            # Convert MCP tools into the OpenAI function-calling format
            available_tools = []
            for tool in tools:
                available_tools.append({
                    "type": "function",
                    "function": {
                        "name": tool.name,
                        "description": tool.description,
                        "parameters": tool.inputSchema,
                    }
                })
            logger.info(f"Available tools: {[tool['function']['name'] for tool in available_tools]}")
            # Call the LLM with the conversation history and available tools
            resp = await self.llm.chat.completions.create(
                model=cfg.llm_model,
                messages=self.messages,
                tools=available_tools,
                temperature=0.3,
            )
            # Accumulate the final response text
            final_text = []
            message = resp.choices[0].message
            # If the response contains direct content, record it
            if hasattr(message, "content") and message.content:
                final_text.append(message.content)
            # Keep processing tool calls until there are none left
            while message.tool_calls:
                for tool_call in message.tool_calls:
                    # Skip tool calls without function information
                    if not hasattr(tool_call, "function"):
                        continue
                    # Cast to access the function call details
                    function_call = cast(ChatCompletionMessageFunctionToolCall, tool_call)
                    function = function_call.function
                    tool_name = function.name
                    # Parse the function arguments
                    tool_args = json.loads(function.arguments)
                    # Ensure the MCP client is still connected
                    if not self.mcp_client.is_connected():
                        raise RuntimeError("Session not initialized. Cannot call tool.")

                    # Invoke the tool on the MCP server
                    result = await self.mcp_client.call_tool(tool_name, tool_args)
                    # Record the assistant's tool call in the history
                    self.messages.append({
                        "role": "assistant",
                        "tool_calls": [
                            {
                                "id": tool_call.id,
                                "type": "function",
                                "function": {
                                    "name": function.name,
                                    "arguments": function.arguments
                                }
                            }
                        ]
                    })
                    # Record the tool result in the history
                    self.messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result.content) if result.content else ""
                    })

                # Call the LLM again with the tool results
                final_resp = await self.llm.chat.completions.create(
                    model=cfg.llm_model,
                    messages=self.messages,
                    tools=available_tools,
                    temperature=0.3,
                )
                # Continue with the latest LLM response
                message = final_resp.choices[0].message
                if message.content:
                    final_text.append(message.content)

            # Return the combined response
            return "\n".join(final_text)

    async def chat_loop(self):
        """Main chat loop: read user input and print responses."""
        print("Welcome to the MCP chat! Type 'quit' to exit.")
        while True:
            try:
                query = input("You: ").strip()
                if query.lower() == "quit":
                    print("Exiting chat. Goodbye!")
                    break
                # Skip empty input
                if not query:
                    continue
                resp = await self.process_query(query)
                print(f"Assistant: {resp}")

            # Catch and log any exception in the chat loop
            except Exception as e:
                logger.error(f"Error in chat loop: {str(e)}")
                logger.error(traceback.format_exc())


async def main():
    """Program entry point."""
    client = MCPHost(server_uri="http://localhost:8001/mcp")
    try:
        await client.chat_loop()
    except Exception as e:
        logger.error(f"Error in main: {str(e)}")
        logger.error(traceback.format_exc())
    finally:
        # Make sure the client connection is closed
        await client.close()


if __name__ == "__main__":
    asyncio.run(main())
```
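The tool-conversion loop inside `process_query` is plain dictionary reshaping and can be checked in isolation. A sketch (the sample tool metadata below is fabricated for illustration):

```python
def mcp_tool_to_openai(name: str, description: str, input_schema: dict) -> dict:
    """Reshape MCP tool metadata into OpenAI's function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": input_schema,
        },
    }

# Fabricated sample metadata, mimicking what list_tools() exposes
tool = mcp_tool_to_openai(
    "execute_command_local",
    "Execute a shell command locally.",
    {"type": "object", "properties": {"command": {"type": "string"}}},
)
print(tool["function"]["name"])  # execute_command_local
```

MCP already publishes each tool's `inputSchema` as JSON Schema, which is exactly what OpenAI expects under `parameters`, so the schema passes through unchanged.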
Sample session:

```text
Welcome to the MCP chat! Type 'quit' to exit.
You: what can you do?
Assistant: I can execute shell commands on your local machine. Please let me know what specific task you'd like me to help with. Keep in mind that any command I run will be on your local system, and you should ensure that the commands are safe and appropriate for your environment.
You: Check the current memory usage
Assistant: Current memory usage:
- **Total memory**: 62Gi
- **Used**: 11Gi
- **Free**: 45Gi
- **Shared**: 137Mi
- **Buffers/cache**: 6.6Gi
- **Available**: 50Gi
Swap:
- **Total swap**: 3.8Gi
- **Used swap**: 0B
- **Free swap**: 3.8Gi
Let me know if you have any other questions!
You: Create a file in my home directory whose content is the current load average and whose name is today's date
MCP Server asks: Do you want to execute this command(yes or no): echo $(uptime | awk -F 'load average:' '{print $2}') > ~/$(date +%Y%m%d).txt
Please check(yes or no): yes
Assistant: A file named with today's date (e.g. 20231005.txt) has been created in your home directory, containing the system's current load average. Let me know if you need anything else!
You: quit
Exiting chat. Goodbye!
```
Summary
These examples show what MCP's elicitation mechanism brings to real applications:
- Stronger safety: asking the user before sensitive operations avoids the risk of accidental actions, like a considerate assistant who always asks "are you sure?" before an important task
- Better user experience: users participate in the AI's decisions instead of passively accepting results, which gives them a sense of control and builds trust in the system
- Flexible data collection: structured data can be gathered on demand, avoiding the cognitive load of asking for everything up front
- Graceful error handling: when the user declines or cancels, the system handles it gracefully instead of crashing or behaving unpredictably
- Seamless integration: elicitation composes with MCP's other capabilities (tool calls, resource access) to form a complete AI interaction ecosystem
In practice, elicitation is particularly well suited to:
- Confirmation before system-administration tools perform modifying operations
- Permission checks before data-analysis tools touch sensitive data
- User confirmation before file tools create, delete, or modify files
- Approval workflows in high-stakes domains such as finance or healthcare
In short, elicitation builds a bridge between AI systems and their human users: the AI stops being a cold executor and becomes an intelligent partner that knows when to ask for help at the critical moment.