一、Calling the OpenAI API
Install the pip package.
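The SDK ships on PyPI as the `openai` package; Flask is also needed for the demo routes below:

```bash
pip install openai flask
```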
Configure the API key and base URL in `config.cfg` (the code below reads them from a `[default]` section):

```ini
[default]
OPENAI_API_KEY = sk-xxxxx
OPENAI_BASE_URL = https://api.openai.com/v1
```
Calling the API:

```python
import configparser

from flask import Flask, jsonify
from openai import OpenAI

config = configparser.ConfigParser()
config.read("config.cfg", encoding="utf-8")
OPENAI_API_KEY = config.get("default", "OPENAI_API_KEY", fallback=None)
OPENAI_BASE_URL = config.get("default", "OPENAI_BASE_URL", fallback=None)

app = Flask(__name__)
# Only construct the client when a key is configured, so the app still starts without one.
client = OpenAI(api_key=OPENAI_API_KEY, base_url=OPENAI_BASE_URL) if OPENAI_API_KEY else None

@app.route("/gpt_test")
def gpt_test():
    """Call GPT once and return the answer to a fixed question."""
    if not OPENAI_API_KEY:
        return jsonify({"error": "OPENAI_API_KEY is not configured"}), 500
    try:
        # chat.completions.create style call
        resp = client.chat.completions.create(
            model="gpt-4.1-mini",  # or any model you have access to, e.g. gpt-4.1, gpt-4o
            messages=[
                {"role": "system", "content": "You are an assistant that answers concisely."},
                {"role": "user", "content": "Introduce yourself in one sentence."},
            ],
        )
        answer = resp.choices[0].message.content
        return jsonify({"answer": answer})
    except Exception as e:
        print("GPT call failed:", repr(e))
        return jsonify({"error": str(e)}), 500
```
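To try the route, an entry point along these lines works (the filename and port are assumptions):

```python
# Hypothetical entry point: run `python run.py`, then open
# http://127.0.0.1:5000/gpt_test in a browser or with curl.
if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000, debug=True)
```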
二、Alibaba Tongyi (Qwen)
Install the official SDK.
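The DashScope SDK is published on PyPI as the `dashscope` package:

```bash
pip install dashscope
```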
With `dashscope.Generation.call`, the structure above can largely be reused; `chat_with_model` here is the wrapper function defined in the last section below:

```python
ALIYUN_API_KEY = config.get("default", "ALIYUN_API_KEY", fallback=None)

@app.route("/llm_test/")
def llm_test():
    """Test the chat capability of the LLM."""
    try:
        messages = [
            {'role': 'system', 'content': 'You are a helpful assistant.'},
            {'role': 'user', 'content': 'Who are you?'}
        ]
        answer = chat_with_model(messages)
        return jsonify({"answer": answer})
    except Exception as e:
        print("LLM error:", repr(e))
        return jsonify({"error": str(e)}), 500
```
Several model IDs work here (a direct-call sketch follows the reference below):
- qwen3-max
- qwen-plus
- qwen-turbo
Reference: 阿里云百炼 (Alibaba Cloud Model Studio)
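Switching between them is just a matter of the `model` argument. A minimal direct-call sketch, assuming `dashscope.api_key` has already been set (as the `init_llm()` helper below does):

```python
import dashscope

# Direct Generation.call with one of the model IDs listed above.
response = dashscope.Generation.call(
    model="qwen-plus",  # any of the IDs above
    messages=[{"role": "user", "content": "Hello"}],
    result_format="message",  # return OpenAI-style choices[...].message output
)
print(response["output"]["choices"][0]["message"]["content"])
```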
If you need to use a prompt, say a prompt file doc-llm-latest.md under the path app/prompt_store/, first treat it as plain string handling and read it in:

```python
from pathlib import Path

# Directory containing run.py
BASE_DIR = Path(__file__).resolve().parent
PROMPT_DIR = BASE_DIR / "app" / "prompt_store"
PROMPT_LATEST_FILE = PROMPT_DIR / "doc-llm-latest.md"

def load_latest_prompt() -> str | None:
    """Read the contents of doc-llm-latest.md."""
    try:
        with PROMPT_LATEST_FILE.open("r", encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        print(f"[WARN] Prompt file not found: {PROMPT_LATEST_FILE}")
        return None
    except Exception as e:
        print(f"[ERROR] Failed to read prompt: {e!r}")
        return None
```
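A quick sanity check before wiring it into a route, just to confirm the path and encoding are right:

```python
# Preview the loaded prompt.
prompt = load_latest_prompt()
if prompt:
    print(f"loaded {len(prompt)} characters")
    print(prompt[:200])  # first 200 characters
```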
Then plug it into the messages:

```python
@app.route("/llm_with_prompt/")
def llm_with_prompt():
    """Chat with the LLM using the latest prompt."""
    prompt = load_latest_prompt()
    if not prompt:
        return jsonify({"error": "No prompt available"}), 500
    try:
        messages = [
            {
                'role': 'system',
                'content': prompt
            },
            {
                'role': 'user',
                'content': "In one or two sentences, summarize the core goal of this document-testing spec."
            }
        ]
        answer = chat_with_model(messages)
        return jsonify({"answer": answer})

    except Exception as e:
        print("LLM with prompt error:", repr(e))
        return jsonify({"error": str(e)}), 500
```
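With the app running locally (Flask's default port 5000 assumed), both routes can be smoke-tested in a few lines:

```python
import requests  # third-party; pip install requests

# Assumes the Flask dev server is running on the default host/port.
for path in ("/llm_test/", "/llm_with_prompt/"):
    resp = requests.get(f"http://127.0.0.1:5000{path}")
    print(path, resp.status_code, resp.json())
```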
Finally, we wrap the LLM chat capability in one place, exposing a function (or class) to call:

```python
# Centralized LLM access: configuration plus call wrapper.
import configparser
from pathlib import Path

import dashscope

BASE_DIR = Path(__file__).resolve().parent.parent
CONFIG_FILE = BASE_DIR / "config.cfg"

config = configparser.ConfigParser()
config.read(CONFIG_FILE, encoding="utf-8")
ALIYUN_API_KEY = config.get("default", "ALIYUN_API_KEY", fallback=None)
ALIYUN_MODEL = config.get("default", "ALIYUN_MODEL")

def init_llm():
    """Call once at Flask startup to set the API key."""
    if not ALIYUN_API_KEY:
        print("[WARN] No ALIYUN_API_KEY configured in config.cfg")
    dashscope.api_key = ALIYUN_API_KEY

def chat_with_model(messages: list[dict]) -> str:
    """Chat with the LLM.

    Args:
        messages (list[dict]): message list, same format as the OpenAI Chat API

    Returns:
        str: the model's reply
    """
    if not ALIYUN_API_KEY:
        raise ValueError("No ALIYUN_API_KEY configured")
    response = dashscope.Generation.call(
        model=ALIYUN_MODEL,
        messages=messages,
        result_format="message",  # required for the choices[...].message shape below
    )
    print(f"raw response: {response}")
    answer = response["output"]["choices"][0]["message"]["content"]
    return answer
```
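One way to wire this up (the module path `app/llm_client.py` is an assumption for illustration): call `init_llm()` exactly once when the Flask app starts, then import `chat_with_model` from any route module.

```python
# run.py -- hypothetical wiring; assumes the wrapper above lives in app/llm_client.py
from flask import Flask

from app.llm_client import init_llm

app = Flask(__name__)
init_llm()  # set dashscope.api_key once at startup

# ... register the /llm_test/ and /llm_with_prompt/ routes shown earlier ...

if __name__ == "__main__":
    app.run(debug=True)
```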