JamJet

Quickstart

Get a JamJet workflow running in 60 seconds — no server, no config, just Python.

60-second start

No server. No config. No Pydantic. Just Python.

pip install jamjet

# agent.py
import asyncio

from jamjet import task, tool

@tool
async def web_search(query: str) -> str:
    """Search the web for current information."""
    # plug in your actual search implementation
    return f"Results for: {query}"

@task(model="claude-sonnet-4-6", tools=[web_search])
async def research(question: str) -> str:
    """You are a research assistant. Search first, then summarize clearly."""

result = asyncio.run(research("What is JamJet?"))
print(result)

ANTHROPIC_API_KEY=sk-ant-... python agent.py

That's it. The @tool decorator exposes any Python function to the agent. The @task docstring becomes the agent's instructions. Works with OpenAI, Anthropic, Ollama, Groq — any model.
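The idea behind automatic schema generation can be illustrated with a conceptual sketch. This uses only the standard library and is not JamJet's actual implementation — just a demonstration of how a decorator can derive a tool schema from a function's signature and docstring:

```python
import inspect

# Map Python annotations to JSON-schema type names (conceptual only).
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON-schema-like tool description from a function's
    signature and docstring, the way a @tool decorator might."""
    sig = inspect.signature(fn)
    params = {
        name: {"type": TYPE_MAP.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

async def web_search(query: str) -> str:
    """Search the web for current information."""
    return f"Results for: {query}"

schema = tool_schema(web_search)
print(schema["name"])         # web_search
print(schema["description"])  # Search the web for current information.
```

The docstring becomes the tool's description and the type hints become its parameter schema — which is why well-named parameters and a clear docstring directly improve how reliably the model calls your tool.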

tip: Using Ollama locally? No API key needed:

OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1 python agent.py

Change model= to any Ollama model (e.g. "llama3.2").


What you get

  • @tool — turns any async Python function into an agent tool, with automatic schema generation
  • @task — docstring = agent instructions, function signature = input contract
  • Durable execution — crash and resume from where it stopped (with jamjet dev)
  • Enforced limits — max_iterations, max_cost_usd, timeout_seconds are first-class params

No boilerplate. No state classes. No dependency injection.
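
Conceptually, those limits are guards around the agent loop. Here is a minimal stand-in in plain Python — a sketch of the contract the parameters express, not JamJet's internals (the `step` callable and its costs are invented for illustration):

```python
class BudgetExceeded(Exception):
    pass

def run_agent_loop(step, *, max_iterations: int, max_cost_usd: float):
    """Run `step` repeatedly until it returns a result, aborting as
    soon as either the iteration cap or the cost budget is exceeded."""
    cost = 0.0
    for i in range(max_iterations):
        result, step_cost = step(i)
        cost += step_cost
        if cost > max_cost_usd:
            raise BudgetExceeded(f"spent ${cost:.2f} > ${max_cost_usd:.2f}")
        if result is not None:
            return result
    raise BudgetExceeded(f"no answer after {max_iterations} iterations")

# A fake step: each call costs $0.10 and produces an answer on the third try.
def fake_step(i):
    return ("done" if i == 2 else None), 0.10

print(run_agent_loop(fake_step, max_iterations=5, max_cost_usd=1.00))  # done
```

Making limits first-class parameters means a runaway agent fails fast with a clear error instead of silently burning tokens.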

note: Need full graph control — multi-step pipelines, conditional routing, human-in-the-loop? Use the Workflow API or YAML workflows for complex orchestration.
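
As a rough sense of the YAML form, a two-node pipeline might look something like the sketch below. Every field name here is a placeholder chosen for illustration — consult the Workflow Authoring guide for the actual schema:

```yaml
# Illustrative only -- field names are placeholders, not the real schema.
name: research-pipeline
nodes:
  - id: think
    type: llm
    model: gpt-4o-mini
    prompt: "Summarize: {{ input.query }}"
  - id: review
    type: human_approval
    timeout: 24h
edges:
  - think -> review
```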


Try the examples

Four self-contained examples in the jamjet-benchmarks repo — each runs locally with Ollama:

  • 01 — Pipeline with timeline: Per-step execution timeline, automatic
  • 02 — Conditional routing: Routing as plain Python predicates
  • 03 — Eval harness: Built-in scoring, LLM-as-judge
  • 04 — Self-evaluating workflow: Draft → judge → retry loop

git clone https://github.com/jamjet-labs/jamjet-benchmarks
cd jamjet-benchmarks/examples/01_pipeline_with_timeline
pip install -r requirements.txt
OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1 MODEL_NAME=llama3.2 python main.py
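
The "routing as plain Python predicates" idea from example 02 can be sketched generically. This is a conceptual stand-in, not the JamJet API — the `route` helper and payload shape are invented for illustration:

```python
# Route a payload to the first handler whose predicate matches --
# conditions are ordinary Python functions, nothing more.
def route(payload: dict, routes: list) -> str:
    for predicate, handler in routes:
        if predicate(payload):
            return handler(payload)
    raise ValueError("no route matched")

routes = [
    (lambda p: p["intent"] == "search", lambda p: f"searching: {p['text']}"),
    (lambda p: p["intent"] == "chat",   lambda p: f"chatting: {p['text']}"),
]

print(route({"intent": "search", "text": "JamJet"}, routes))  # searching: JamJet
```

Because predicates are plain callables, they can be unit-tested in isolation, long before any model is involved.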

Scaffold a full project

Use templates to start a more complete project:

jamjet init my-agent --template hello-agent
cd my-agent

Available templates:

jamjet init my-agent --list-templates
# hello-agent           Minimal Q&A workflow
# research-agent        Web search + synthesis (Brave Search MCP)
# rag-assistant         RAG with filesystem MCP
# mcp-tool-consumer     Connect to any MCP tool server
# mcp-tool-provider     Expose Python functions as MCP tools
# code-reviewer         GitHub PR review with quality scoring
# hitl-approval         Human-in-the-loop approval gate
# multi-agent-review    Writer + critic review loop
# a2a-delegator         Delegate tasks via A2A protocol
# a2a-server            Serve A2A requests from external agents
# approval-workflow     Durable approval with 24h timeout

Add the durable runtime (production)

The in-process executor (wf.run_sync) is great for development. For production — crash recovery, multi-instance scheduling, durable state — start the runtime server:

jamjet dev
▶ JamJet Dev Runtime
  Port:  7700
  Mode:  local (SQLite)
  API:   http://localhost:7700

Then run workflows through it:

jamjet run workflow.yaml --input '{"query": "What is JamJet?"}'
✓ node_completed   think   gpt-4o-mini  512ms
✓ Execution completed.

Crash mid-execution? Resume from exactly where it stopped — no re-running earlier steps, no wasted API calls.
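
The resume behavior can be illustrated with a toy checkpoint store. This is a sketch of the idea only, not the runtime's actual persistence layer (which uses SQLite in local mode):

```python
import json
import os
import tempfile

def run_durably(steps: dict, checkpoint_path: str) -> dict:
    """Run named steps in order, persisting each result as it
    completes; on re-run, already-completed steps are skipped."""
    done = {}
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)
    for name, fn in steps.items():
        if name in done:
            continue  # completed before the crash -- skip, no wasted work
        done[name] = fn()
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)  # checkpoint after every step
    return done

calls = []
steps = {
    "fetch": lambda: calls.append("fetch") or "data",
    "summarize": lambda: calls.append("summarize") or "summary",
}
path = os.path.join(tempfile.mkdtemp(), "ckpt.json")

run_durably(steps, path)   # first run executes both steps
first = list(calls)
run_durably(steps, path)   # second run skips both -- nothing re-executed
assert calls == first
```

Checkpointing after every step is what makes resume-after-crash cheap: each expensive call (here a stand-in lambda, in practice an LLM call) runs at most once.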


Set your API key

OpenAI

export OPENAI_API_KEY=sk-...

Anthropic

export ANTHROPIC_API_KEY=sk-ant-...

Ollama (free, local)

export OPENAI_API_KEY=ollama
export OPENAI_BASE_URL=http://localhost:11434/v1
# ollama pull llama3.2

Groq

export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1

Next steps

  1. Core Concepts — agents, nodes, state, and durability
  2. Python SDK — decorators, routing, parallel steps
  3. Workflow Authoring — all node types, retry policies, conditions
  4. MCP Integration — connect to external tool servers in one line
  5. Eval Harness — test your agents like software

Troubleshooting

jamjet not found after install? Make sure your Python scripts directory is in your PATH. Try python -m jamjet.

Connection refused at port 7700? jamjet dev must be running before you use jamjet run. The in-process wf.run_sync() path needs no server.

Need help? Open a GitHub Discussion or file an issue.
