# Quickstart

Get a JamJet workflow running in 60 seconds — no server, no config, just Python.

## 60-second start

No server. No config. No Pydantic. Just Python.
```bash
pip install jamjet
```

```python
# agent.py
import asyncio

from jamjet import task, tool

@tool
async def web_search(query: str) -> str:
    """Search the web for current information."""
    # plug in your actual search implementation
    return f"Results for: {query}"

@task(model="claude-sonnet-4-6", tools=[web_search])
async def research(question: str) -> str:
    """You are a research assistant. Search first, then summarize clearly."""

result = asyncio.run(research("What is JamJet?"))
print(result)
```

```bash
ANTHROPIC_API_KEY=sk-ant-... python agent.py
```

That's it. The `@tool` decorator exposes any Python function to the agent. The `@task` docstring becomes the agent's instructions. Works with OpenAI, Anthropic, Ollama, Groq — any model.
**Tip:** Using Ollama locally? No API key needed:

```bash
OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1 python agent.py
```

Change `model=` to any Ollama model (e.g. `"llama3.2"`).
## What you get

- `@tool` — turns any async Python function into an agent tool, with automatic schema generation
- `@task` — docstring = agent instructions, function signature = input contract
- Durable execution — crash and resume from where it stopped (with `jamjet dev`)
- Enforced limits — `max_iterations`, `max_cost_usd`, `timeout_seconds` are first-class params

No boilerplate. No state classes. No dependency injection.
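The "automatic schema generation" behind `@tool` can be illustrated in plain Python. This is a conceptual sketch built on `inspect` and type hints, not JamJet's actual implementation; the `tool_schema` helper and the type mapping are hypothetical:

```python
import inspect
from typing import get_type_hints

# Map a few Python annotations to JSON Schema types (subset, for illustration)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a JSON-Schema-style tool description from a function signature."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    # Parameters without defaults are required
    required = [
        name for name, p in inspect.signature(fn).parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": params, "required": required},
    }

async def web_search(query: str, max_results: int = 5) -> str:
    """Search the web for current information."""
    return f"Results for: {query}"

schema = tool_schema(web_search)
# schema["parameters"]["required"] == ["query"]
# schema["parameters"]["properties"]["max_results"]["type"] == "integer"
```

The docstring becomes the tool description and the signature becomes the input contract, which is why no separate schema class is needed.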
**Note:** Need full graph control — multi-step pipelines, conditional routing, human-in-the-loop? Use the `Workflow` API or YAML workflows for complex orchestration.
## Try the examples

Four self-contained examples in the jamjet-benchmarks repo — each runs locally with Ollama:
| Example | What it shows |
|---|---|
| 01 — Pipeline with timeline | Per-step execution timeline, automatic |
| 02 — Conditional routing | Routing as plain Python predicates |
| 03 — Eval harness | Built-in scoring, LLM-as-judge |
| 04 — Self-evaluating workflow | Draft → judge → retry loop |
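The pattern in example 02, routing as plain Python predicates rather than a routing DSL, can be sketched generically. The `dispatch` helper and route shapes below are hypothetical, not JamJet's API:

```python
from typing import Callable

# A route is just a (predicate, handler) pair, tried in order
Route = tuple[Callable[[str], bool], Callable[[str], str]]

routes: list[Route] = [
    (lambda q: "?" in q,            lambda q: f"answer({q})"),
    (lambda q: q.startswith("sum"), lambda q: f"summarize({q})"),
]

def dispatch(query: str, routes: list[Route], default: Callable[[str], str]) -> str:
    """Run the first handler whose predicate matches; fall back to default."""
    for predicate, handler in routes:
        if predicate(query):
            return handler(query)
    return default(query)

print(dispatch("What is JamJet?", routes, default=lambda q: f"chat({q})"))
# → answer(What is JamJet?)
```

Because routes are ordinary functions, they can be unit-tested without running any model.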
```bash
git clone https://github.com/jamjet-labs/jamjet-benchmarks
cd jamjet-benchmarks/examples/01_pipeline_with_timeline
pip install -r requirements.txt
OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1 MODEL_NAME=llama3.2 python main.py
```

## Scaffold a full project
Use templates to start a more complete project:
```bash
jamjet init my-agent --template hello-agent
cd my-agent
```

Available templates:

```bash
jamjet init my-agent --list-templates
# hello-agent          Minimal Q&A workflow
# research-agent       Web search + synthesis (Brave Search MCP)
# rag-assistant        RAG with filesystem MCP
# mcp-tool-consumer    Connect to any MCP tool server
# mcp-tool-provider    Expose Python functions as MCP tools
# code-reviewer        GitHub PR review with quality scoring
# hitl-approval        Human-in-the-loop approval gate
# multi-agent-review   Writer + critic review loop
# a2a-delegator        Delegate tasks via A2A protocol
# a2a-server           Serve A2A requests from external agents
# approval-workflow    Durable approval with 24h timeout
```

## Add the durable runtime (production)
The in-process executor (`wf.run_sync`) is great for development. For production — crash recovery, multi-instance scheduling, durable state — start the runtime server:

```bash
jamjet dev
```

```
▶ JamJet Dev Runtime
  Port: 7700
  Mode: local (SQLite)
  API:  http://localhost:7700
```

Then run workflows through it:

```bash
jamjet run workflow.yaml --input '{"query": "What is JamJet?"}'
```

```
✓ node_completed think gpt-4o-mini 512ms
✓ Execution completed.
```

Crash mid-execution? Resume from exactly where it stopped — no re-running earlier steps, no wasted API calls.
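The durable-execution idea, skipping steps that already completed, can be illustrated with a minimal checkpoint file. This is a conceptual sketch of the mechanism, not JamJet's runtime; `run_durable` and the step list are made up for illustration:

```python
import json
import tempfile
from pathlib import Path

def run_durable(steps, checkpoint: Path):
    """Run named steps in order, persisting each result; on restart,
    completed steps are loaded from the checkpoint instead of re-run."""
    done = json.loads(checkpoint.read_text()) if checkpoint.exists() else {}
    for name, fn in steps:
        if name in done:
            continue  # completed before the crash: skip, no wasted API calls
        done[name] = fn()
        checkpoint.write_text(json.dumps(done))  # persist after every step
    return done

calls = []
steps = [
    ("fetch",     lambda: calls.append("fetch") or "raw data"),
    ("summarize", lambda: calls.append("summarize") or "summary"),
]
ckpt = Path(tempfile.mkdtemp()) / "run_state.json"
ckpt.write_text(json.dumps({"fetch": "raw data"}))  # simulate a crash after step 1
result = run_durable(steps, ckpt)
print(calls)  # → ['summarize']  (fetch was not re-run)
```

Persisting state after every step is what makes resume-from-crash cheap: only the work after the last checkpoint is repeated.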
## Set your API key

**OpenAI**

```bash
export OPENAI_API_KEY=sk-...
```

**Anthropic**

```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

**Ollama (free, local)**

```bash
export OPENAI_API_KEY=ollama
export OPENAI_BASE_URL=http://localhost:11434/v1
# ollama pull llama3.2
```

**Groq**

```bash
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```

## Next steps
- Core Concepts — agents, nodes, state, and durability
- Python SDK — decorators, routing, parallel steps
- Workflow Authoring — all node types, retry policies, conditions
- MCP Integration — connect to external tool servers in one line
- Eval Harness — test your agents like software
## Troubleshooting

**`jamjet` not found after install?**
Make sure your Python scripts directory is on your `PATH`, or try `python -m jamjet`.

**Connection refused at port 7700?**
`jamjet dev` must be running before you use `jamjet run`. The in-process `wf.run_sync()` path needs no server.
Need help? Open a GitHub Discussion or file an issue.