# Migrate from the OpenAI SDK

Move from a hand-written agent loop to a structured, durable JamJet workflow.
## Why migrate

A raw OpenAI SDK agent loop is great for demos and prototypes. In production, though, you inevitably end up building:

- Manual retry logic with exponential backoff
- State passing between tool calls and model calls
- Logging ("what did step 7 actually receive?")
- Restart logic for when the process crashes mid-run
- A tool dispatch table that bloats as you add tools

JamJet handles all of this as infrastructure rather than application code.
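For a sense of the boilerplate involved, the first bullet alone usually means a helper like this in application code (a generic sketch in plain Python, not a JamJet API; `call_with_backoff` is a hypothetical name):

```python
import random
import time

def call_with_backoff(fn, *args, max_attempts=3, base_delay=1.0):
    """Hand-rolled retry with exponential backoff and jitter --
    the kind of helper every raw agent loop eventually grows."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args)
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error
            # sleep base, 2*base, 4*base, ... plus jitter before retrying
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random() * 0.1)
```

Every call site then has to be wrapped by hand, which is exactly the code a declarative `retry` block makes unnecessary.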
## Concept mapping

| Raw OpenAI SDK | JamJet |
|---|---|
| `messages` list | `State` (a Pydantic model: typed and validated) |
| `while True:` agent loop | Workflow graph: explicit and inspectable |
| Manual `tool_calls` dispatch | MCP tool node (`type: tool`) |
| `client.chat.completions.create(...)` | `type: model` node (or a `@wf.step` that calls the client) |
| Hand-written retry logic | `retry: max_attempts: 3, backoff: exponential` |
| `print()` debugging | `jamjet inspect <exec-id>`: full event timeline |
| Process restart on crash | Durable runtime: resumes from the last completed step |
| Nothing | `jamjet eval run`: CI regression tests on every commit |
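The first row is the biggest mental shift: instead of an untyped `messages` list, state is a Pydantic model that rejects malformed data at the boundary. A minimal illustration of what "typed and validated" buys you (plain Pydantic, independent of JamJet):

```python
from pydantic import BaseModel, ValidationError

class State(BaseModel):
    question: str
    search_results: str = ""
    answer: str = ""

state = State(question="Latest AI agent frameworks?")
print(state.answer)  # "" -- defaults are explicit, not implied by loop order

try:
    State(question=None)  # wrong type is caught immediately, not at step 7
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```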
## Side-by-side example
### Raw OpenAI

```python
import json

from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for current information",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def web_search(query: str) -> str:
    return f"[results for: {query}]"  # replace with real call

def run_agent(question: str) -> str:
    messages = [
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": question},
    ]
    while True:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
        )
        msg = resp.choices[0].message
        if msg.tool_calls:
            messages.append(msg)
            for tc in msg.tool_calls:
                args = json.loads(tc.function.arguments)
                result = web_search(args["query"])
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result,
                })
        else:
            return msg.content or ""

print(run_agent("Latest AI agent frameworks?"))
```

### JamJet
```python
from openai import OpenAI
from pydantic import BaseModel

from jamjet import Workflow

client = OpenAI()

class State(BaseModel):
    question: str
    search_results: str = ""
    answer: str = ""

wf = Workflow("research-agent")

@wf.state
class AgentState(State):
    pass

@wf.step
async def search(state: AgentState) -> AgentState:
    # Production: use a type: tool node + an MCP server (no dispatch table needed)
    results = f"[results for: {state.question}]"
    return state.model_copy(update={"search_results": results})

@wf.step
async def synthesize(state: AgentState) -> AgentState:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful research assistant."},
            {"role": "user", "content": (
                f"Question: {state.question}\n"
                f"Search results: {state.search_results}\n"
                "Provide a comprehensive answer."
            )},
        ],
    )
    return state.model_copy(update={"answer": resp.choices[0].message.content or ""})

result = wf.run_sync(AgentState(question="Latest AI agent frameworks?"))
print(result.state.answer)
print(f"Ran {result.steps_executed} steps in {result.total_duration_us / 1000:.1f}ms")
```

## Migration path
1. **Lift state into a Pydantic model.**

   ```python
   # Before: scattered variables
   messages = [...]
   search_results = None
   final_answer = None

   # After: explicit, validated state
   class State(BaseModel):
       question: str
       search_results: str = ""
       answer: str = ""
   ```

2. **Split the loop into named steps.** Each logical "phase" of the loop becomes a `@wf.step`. Tool dispatch becomes a `type: tool` node.

3. **Keep the LLM calls unchanged.** Use the OpenAI client inside your step functions exactly as before. Switch to a YAML `type: model` node later, when you want the runtime to handle retries, cost tracking, and observability.

4. **Run locally first.** `wf.run_sync(State(...))` works without any server and behaves exactly like the loop did.

5. **Enable durability when you need it.**

   ```shell
   jamjet dev   # start the Rust runtime
   jamjet run workflow.yaml --input '{"question": "..."}'
   ```

   Your workflow is now crash-safe, observable, and testable with `jamjet eval run`.
## What you get for free

Once you are on JamJet, these features require no extra code.

**Retries without stacked try/except:**

```yaml
nodes:
  search:
    type: tool
    server: brave-search
    tool: web_search
    arguments:
      query: "{{ state.question }}"
    retry:
      max_attempts: 3
      backoff: exponential
      delay_ms: 1000
```

**A complete execution timeline:**

```shell
jamjet inspect exec-abc123
# → step: search      200ms   completed
# → step: synthesize  1840ms  completed
```

**CI regression tests:**

```shell
jamjet eval run evals/dataset.jsonl --workflow research-agent --fail-under 0.9
```

> Tip: for complete, runnable examples, see jamjet-labs/jamjet-benchmarks.
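The eval dataset's schema is not shown here; as an illustration only, a JSONL dataset is one JSON object per line, so a file like `evals/dataset.jsonl` can be built and parsed trivially (the field names `input` and `expected_contains` below are hypothetical, not JamJet's actual schema):

```python
import io
import json

# Hypothetical eval cases -- field names are illustrative only;
# consult the JamJet eval documentation for the real schema.
cases = [
    {"input": {"question": "Latest AI agent frameworks?"}, "expected_contains": "agent"},
    {"input": {"question": "What is MCP?"}, "expected_contains": "protocol"},
]

# JSONL: one JSON object per line, so it streams and diffs cleanly in git
buf = io.StringIO()
for case in cases:
    buf.write(json.dumps(case) + "\n")

loaded = [json.loads(line) for line in buf.getvalue().splitlines()]
assert loaded == cases
```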