```python
# agent.py
import asyncio

from jamjet import task, tool

@tool
async def web_search(query: str) -> str:
    """Search the web for current information."""
    # Plug in your actual search implementation here.
    return f"Results for: {query}"

@task(model="claude-sonnet-4-6", tools=[web_search])
async def research(question: str) -> str:
    """You are a research assistant. Search first, then summarize clearly."""

result = asyncio.run(research("What is JamJet?"))
print(result)
```
The `@tool` decorator exposes any Python function to the agent. The `@task` function's docstring becomes the agent's instructions. Works with OpenAI, Anthropic, Ollama, Groq, or any model provider.
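Under the hood, tool decorators in agent frameworks typically just attach schema metadata (name, description, parameters) to the function so the runtime can advertise it to the model. A rough sketch of that pattern, as an illustration only and not JamJet's actual implementation:

```python
import inspect

def tool(fn):
    # Attach the metadata an agent runtime needs to describe the
    # function to the model: name, docstring, and parameter names.
    # This is a simplified stand-in, not JamJet's real decorator.
    fn.tool_spec = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def web_search(query: str) -> str:
    """Search the web for current information."""
    return f"Results for: {query}"

print(web_search.tool_spec["name"])        # → web_search
print(web_search.tool_spec["parameters"])  # → ['query']
```

Because the decorator returns the original function, `web_search("...")` still works as a plain Python call, which keeps tools easy to unit-test.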
Change `model=` to any Ollama model (e.g. `"llama3.2"`).
Define a tool — the bridge between your agent and the outside world:
```java
import dev.jamjet.tool.Tool;
import dev.jamjet.tool.ToolCall;

@Tool(description = "Search the web for information about a topic")
record WebSearch(String query) implements ToolCall<String> {
    public String execute() {
        // In production, call your search API here
        return "Results for '" + query + "': JamJet is a performance-first, "
            + "agent-native runtime and framework for AI agents.";
    }
}
```
Build an agent with a reasoning strategy and runtime-enforced limits:
```java
import dev.jamjet.agent.Agent;

var agent = Agent.builder("researcher")
    .model("claude-haiku-4-5-20251001")
    .tools(WebSearch.class)
    .instructions("You are a helpful research assistant. "
        + "Always search first, then provide a thorough summary.")
    .strategy("react")
    .maxIterations(5)
    .build();
```
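A ReAct strategy alternates between model reasoning and tool execution until the agent produces a final answer or hits the iteration cap. A minimal, language-agnostic sketch of that loop in Python, with stubbed model and tool functions (names like `call_model` and `run_tool` are illustrative, not JamJet API):

```python
def call_model(history):
    # Stub: a real implementation would call an LLM. Here we search
    # once, then answer based on the recorded observation.
    if not any(msg.startswith("observation:") for msg in history):
        return ("act", "web_search", "What is JamJet?")
    return ("answer", "JamJet is an agent framework.", None)

def run_tool(name, arg):
    # Stub tool registry; a real one would dispatch to registered tools.
    tools = {"web_search": lambda q: f"Results for: {q}"}
    return tools[name](arg)

def react(question, max_iterations=5):
    history = [f"question: {question}"]
    for _ in range(max_iterations):           # runtime-enforced limit
        kind, payload, arg = call_model(history)
        if kind == "answer":                  # model is done reasoning
            return payload
        observation = run_tool(payload, arg)  # act, then record result
        history.append(f"observation: {observation}")
    raise RuntimeError("max iterations exceeded")

print(react("What is JamJet?"))  # → JamJet is an agent framework.
```

The `maxIterations(5)` builder setting above plays the same role as this sketch's `max_iterations` cap: it bounds the think-act-observe loop so a confused model cannot spin forever.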
```java
public static void main(String[] args) {
    var agent = Agent.builder("researcher")
        .model("claude-haiku-4-5-20251001")
        .tools(WebSearch.class)
        .instructions("You are a helpful research assistant. "
            + "Always search first, then provide a thorough summary.")
        .strategy("react")
        .maxIterations(5)
        .build();

    var result = agent.run("What is JamJet?");
    System.out.println(result.output());
    System.out.printf("Duration: %.2f ms%n", result.durationUs() / 1000.0);
    System.out.printf("Tool calls: %d%n", result.toolCalls().size());
}
```