# Java Quickstart

Build your first AI agent and workflow in Java with JamJet — tools, strategies, durable execution, and IR compilation.
This guide walks you through building an AI agent and a durable workflow in Java. By the end you will understand how JamJet tools, agent strategies, IR compilation, and typed workflow state fit together — and why each design choice matters for production agent systems.
## Prerequisites
Before you start, make sure you have:
- **Java 21+** — JamJet uses virtual threads (`Thread.ofVirtual`) for non-blocking I/O without callback soup. Virtual threads are a final (non-preview) feature in Java 21 (JEP 444).
- **Maven 3.9+ or Gradle 8+** — any build tool that can pull from Maven Central.
- **A running JamJet runtime** — for production execution. During development you can run everything in-process (no server required), or start the local runtime with:

  ```bash
  jamjet dev
  ```

- **An LLM API key** — set `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` in your environment. For local-only development, Ollama works without any key.
tip: You can follow this entire guide without a running runtime. The in-process executor lets you compile, validate, and run workflows locally. Add the runtime when you need crash recovery and durable state.
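If you have not used virtual threads before, this plain-JDK sketch (no JamJet types involved) shows the pattern the SDK builds on: many blocking "tool calls" running concurrently, each on its own cheap virtual thread, with no callbacks.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative only: virtual threads let thousands of blocking calls run
// concurrently without tying up platform threads.
public class VirtualThreadDemo {
    static String blockingToolCall(int i) {
        try {
            Thread.sleep(10); // stand-in for blocking I/O (HTTP call, DB query)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result-" + i;
    }

    public static List<String> runAll(int n) {
        List<String> results = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            // One virtual thread per call — cheap enough to spawn freely
            threads.add(Thread.ofVirtual().start(() -> results.add(blockingToolCall(id))));
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(runAll(1000).size()); // 1000 concurrent blocking calls
    }
}
```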
## Add the dependency

The Java SDK is published on Maven Central as `dev.jamjet:jamjet-sdk`.

**Maven**

```xml
<dependency>
    <groupId>dev.jamjet</groupId>
    <artifactId>jamjet-sdk</artifactId>
    <version>0.4.0</version>
</dependency>
```

**Gradle (Kotlin DSL)**

```kotlin
implementation("dev.jamjet:jamjet-sdk:0.4.0")
```

**Gradle (Groovy DSL)**

```groovy
implementation 'dev.jamjet:jamjet-sdk:0.4.0'
```

Make sure your project targets Java 21 or later. In Maven:

```xml
<properties>
    <maven.compiler.source>21</maven.compiler.source>
    <maven.compiler.target>21</maven.compiler.target>
</properties>
```

## Define a tool
Tools are the bridge between your agent and the outside world — web search, database queries, API calls, file I/O. In JamJet, a tool is a Java record annotated with `@Tool` that implements `ToolCall<T>`.
```java
import dev.jamjet.tool.Tool;
import dev.jamjet.tool.ToolCall;

@Tool(description = "Search the web for information about a topic")
record WebSearch(String query) implements ToolCall<String> {
    public String execute() {
        // In production, call your search API here
        return "Results for '" + query + "': JamJet is a performance-first, "
            + "agent-native runtime and framework for AI agents.";
    }
}
```

### Why records?
This design is intentional. Java records give you three properties that matter for agent tooling:
- **Immutability** — a tool call's parameters never change after construction. This makes tool calls safe to serialize, replay, and audit. When JamJet replays a failed workflow, it re-invokes the exact same tool call with the exact same parameters.
- **Automatic JSON Schema derivation** — the SDK inspects record components (`String query` above) and generates the JSON Schema that the LLM needs to invoke the tool. No manual schema writing, no annotation soup, no drift between your code and your schema.
- **Structural equality** — two `WebSearch("jamjet")` instances are equal. This enables deduplication and caching of tool calls across retries.
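These properties are easy to see with the plain JDK. The sketch below uses a stripped-down `WebSearch` record (no JamJet types) and a `HashMap` as a stand-in cache to show how structural equality enables deduplication:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-JDK illustration of why records suit tool calls: immutable,
// structurally equal, and usable directly as cache keys.
public class RecordToolDemo {
    record WebSearch(String query) {
        String execute() { return "results for " + query; }
    }

    // Structural equality makes the record itself a natural cache key:
    // retries of the same call hit the cache instead of re-executing.
    static final Map<WebSearch, String> cache = new HashMap<>();

    static String cachedExecute(WebSearch call) {
        return cache.computeIfAbsent(call, WebSearch::execute);
    }

    public static void main(String[] args) {
        var a = new WebSearch("jamjet");
        var b = new WebSearch("jamjet");
        System.out.println(a.equals(b)); // true — structural equality
        cachedExecute(a);
        cachedExecute(b);                // deduplicated: same key, no re-execution
        System.out.println(cache.size()); // 1
    }
}
```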
The `@Tool` annotation provides the description that the LLM sees when deciding which tool to use. Write it like you would explain the tool to a colleague — clear, specific, action-oriented.
note: For deeper coverage of tool design patterns, agent strategies, and when to use each — see Agentic AI Patterns.
## Build an agent
An agent combines a model, tools, instructions, and a reasoning strategy. The strategy determines how the agent thinks — not just what it does.
```java
import dev.jamjet.agent.Agent;

var agent = Agent.builder("researcher")
    .model("claude-haiku-4-5-20251001")
    .tools(WebSearch.class)
    .instructions("You are a helpful research assistant. "
        + "Always search first, then provide a thorough summary.")
    .strategy("react")
    .maxIterations(5)
    .build();
```

Let's break down each part.
### The `react` strategy
When you set `.strategy("react")`, you are telling JamJet to use the ReAct (Reasoning + Acting) loop:

- **Thought** — the model reasons about what to do next
- **Action** — the model calls a tool
- **Observation** — the tool result is fed back to the model
- **Repeat** until the model produces a final answer or hits the iteration limit
This is the most common agent strategy because it is flexible: the model decides dynamically which tools to call and in what order. It works well for open-ended tasks where you cannot predict the exact sequence of steps.
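If you are curious what that loop looks like mechanically, here is a hand-rolled sketch in plain Java — purely illustrative, with a stubbed "model" function; `.strategy("react")` gives you JamJet's real implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of the ReAct loop (illustration only, not JamJet internals).
// The "model" is a stub: a function from conversation history to its next
// move, either "search: <query>" (an action) or "final: <answer>".
public class ReactLoopSketch {
    public static String runReact(Function<List<String>, String> model,
                                  Function<String, String> searchTool,
                                  String question, int maxIterations) {
        List<String> history = new ArrayList<>();
        history.add("question: " + question);
        for (int i = 0; i < maxIterations; i++) {
            String move = model.apply(history);      // Thought + Action
            if (move.startsWith("final: ")) {
                return move.substring("final: ".length());
            }
            if (move.startsWith("search: ")) {
                String obs = searchTool.apply(move.substring("search: ".length()));
                history.add("observation: " + obs);  // Observation fed back
            }
        }
        return "stopped: iteration limit reached";   // guardrail, not an answer
    }

    public static void main(String[] args) {
        // Stub model: search once, then answer once it has seen an observation.
        Function<List<String>, String> model = h ->
            h.stream().anyMatch(m -> m.startsWith("observation:"))
                ? "final: JamJet is an agent runtime"
                : "search: JamJet";
        String answer = runReact(model, q -> "docs about " + q, "What is JamJet?", 5);
        System.out.println(answer); // JamJet is an agent runtime
    }
}
```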
JamJet supports three built-in strategies:
| Strategy | When to use | How it works |
|---|---|---|
react | Open-ended tasks, exploratory research | Thought-action-observation loop |
plan-and-execute | Structured tasks that benefit from upfront planning | Generate plan, then execute each step sequentially |
critic | Tasks requiring quality control | Draft, critique, revise loop |
tip: Not sure which strategy to pick? Start with `react`. Upgrade to `plan-and-execute` when you see the agent wandering, or to `critic` when output quality matters more than speed. See strategy comparisons on jamjet.dev/research for benchmarks.
## Guardrails: cost, time, and iterations
Production agents need hard limits. Without them, a confused model can burn through your API budget in a loop:
```java
var agent = Agent.builder("investment-researcher")
    .model("gpt-4o")
    .tools(WebSearch.class, FetchUrl.class, StoreNote.class)
    .instructions("""
        You are a professional investment research analyst.
        Search for recent news and financials, then produce
        a structured investment memo.
        """)
    .strategy("plan-and-execute")
    .maxIterations(6)
    .maxCostUsd(0.50)
    .timeoutSeconds(120)
    .build();
```

- `maxIterations(6)` — the agent stops after 6 reasoning steps, even if it has not finished. This prevents infinite loops.
- `maxCostUsd(0.50)` — the runtime tracks token costs in real time and halts the agent if spending exceeds 50 cents.
- `timeoutSeconds(120)` — wall-clock timeout. If the agent has not completed in 2 minutes, execution is aborted.
These are not suggestions — they are runtime-enforced hard limits. The JamJet runtime checks them between every step, not just at the end.
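A minimal sketch of what "checked between every step" means — illustrative plain Java, not JamJet internals, and the per-step cost figure is invented for the demo:

```java
// Illustration: enforce all three budgets between steps, not just at the end.
public class GuardrailSketch {
    public static String runWithLimits(int maxIterations, double maxCostUsd,
                                       long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        double spentUsd = 0.0;
        for (int step = 0; step < maxIterations; step++) {
            // Check every budget before each step — a runaway loop is caught
            // at the next boundary, never after the whole run.
            if (System.currentTimeMillis() > deadline) return "halted: timeout";
            if (spentUsd > maxCostUsd) return "halted: cost budget exceeded";
            spentUsd += 0.02; // invented stand-in for this step's token cost
            // ... one reasoning step would run here ...
        }
        return "halted: iteration limit";
    }

    public static void main(String[] args) {
        System.out.println(runWithLimits(6, 0.50, 120_000));    // iteration limit trips first
        System.out.println(runWithLimits(1000, 0.10, 120_000)); // cost budget trips first
    }
}
```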
## Run it
With the agent built, you can run it and inspect the results:
```java
public static void main(String[] args) {
    var agent = Agent.builder("researcher")
        .model("claude-haiku-4-5-20251001")
        .tools(WebSearch.class)
        .instructions("You are a helpful research assistant. "
            + "Always search first, then provide a thorough summary.")
        .strategy("react")
        .maxIterations(5)
        .build();

    // Run the agent
    var result = agent.run("What is JamJet?");
    System.out.println(result.output());
    System.out.printf("Duration: %.2f ms%n", result.durationUs() / 1000.0);
    System.out.printf("Tool calls: %d%n", result.toolCalls().size());
}
```

```bash
export OPENAI_API_KEY=sk-...
mvn compile exec:java -Dexec.mainClass=com.example.MyAgent
```

## IR compilation: what happens under the hood
Before your agent runs, JamJet compiles it to an intermediate representation (IR) — a canonical graph format shared across the Java SDK, Python SDK, and YAML workflows. You can inspect the IR directly:
```java
var ir = agent.compile();
System.out.println("workflow_id: " + ir.id());
System.out.println("start_node: " + ir.startNode());
System.out.println("nodes: " + ir.nodes().size());
System.out.println("edges: " + ir.edges().size());
```

This prints something like:

```text
workflow_id: researcher
start_node: react_start
nodes: 3
edges: 4
```

Why does this matter? Because the IR is what the JamJet runtime actually executes. Whether you write your agent in Java, Python, or YAML, it compiles to the same graph format. This means:
- Portability — an agent written in Java can be deployed on any JamJet runtime
- Inspection — you can validate and visualize the execution graph before running it
- Durability — the runtime checkpoints at node boundaries, so it can resume after a crash
You can also validate the IR before submission:
```java
import dev.jamjet.ir.IrValidator;

IrValidator.validateOrThrow(ir);
```

This catches structural problems (disconnected nodes, missing edges, invalid state schemas) at compile time rather than at runtime.
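For intuition, here is the kind of structural check such a validator performs — a plain-Java reachability sweep over an adjacency list. This is an assumption about the category of check, not JamJet's actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative structural check: every node must be reachable from the start
// node, otherwise the graph contains disconnected (dead) nodes.
public class GraphCheckSketch {
    public static Set<String> unreachable(String start,
                                          Map<String, List<String>> edges,
                                          Set<String> allNodes) {
        Set<String> seen = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(start));
        while (!stack.isEmpty()) {          // depth-first sweep from the start node
            String n = stack.pop();
            if (seen.add(n)) {
                stack.addAll(edges.getOrDefault(n, List.of()));
            }
        }
        Set<String> missing = new HashSet<>(allNodes);
        missing.removeAll(seen);            // whatever was never visited is disconnected
        return missing;
    }

    public static void main(String[] args) {
        var edges = Map.of(
            "start", List.of("retrieve"),
            "retrieve", List.of("synthesize"));
        var nodes = Set.of("start", "retrieve", "synthesize", "orphan");
        System.out.println(unreachable("start", edges, nodes)); // [orphan]
    }
}
```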
## Build a workflow
Agents are great for open-ended tasks where the model decides what to do. But many real-world systems need deterministic multi-step pipelines — data enrichment, RAG, approval chains, ETL. For these, use a `Workflow`.
The key difference: in an agent, the LLM decides the execution path. In a workflow, you decide the execution path and the LLM is just one step in the pipeline.
Here is a two-step RAG (Retrieval-Augmented Generation) workflow:
```java
import dev.jamjet.workflow.Workflow;
import java.util.List;

// Typed state — a Java record
record RagState(
    String query,
    List<String> retrievedDocs,
    String answer) {}

var workflow = Workflow.<RagState>builder("rag-assistant")
    .version("1.0.0")
    .state(RagState.class)
    // Step 1: Retrieve relevant documents
    .step("retrieve", state -> {
        var docs = searchKnowledgeBase(state.query());
        return new RagState(state.query(), docs, null);
    })
    // Step 2: Synthesize an answer from the context
    .step("synthesize", state -> {
        var context = String.join("\n\n", state.retrievedDocs());
        var answer = callLlm(state.query(), context);
        return new RagState(state.query(), state.retrievedDocs(), answer);
    })
    .build();
```

### How state flows through a workflow
Each step receives the current `RagState` and returns a new `RagState`. State is always immutable — you never mutate the existing record, you construct a new one. This is what makes workflows durable: if the runtime crashes between "retrieve" and "synthesize", it replays from the last completed checkpoint with the exact state that was persisted.
Here is how this workflow executes:
```mermaid
graph LR
    A["Start"] --> B["retrieve"]
    B --> C["synthesize"]
    C --> D["End"]
```

Step "retrieve" populates `retrievedDocs`. Step "synthesize" reads them and produces the final answer. Each step is checkpointed — if the process crashes after "retrieve" completes, the runtime resumes at "synthesize" without re-running the retrieval.
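The checkpoint-and-resume idea can be sketched in a few lines of plain Java — here an in-memory map stands in for the runtime's durable store, which is an assumption for illustration, not JamJet's actual mechanism:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Illustration of step-boundary checkpointing: persist state after each
// completed step; on a re-run, skip steps that already have a checkpoint.
public class CheckpointSketch {
    // In-memory stand-in for the runtime's durable store.
    static final Map<String, String> checkpoints = new HashMap<>();

    public static String run(List<Map.Entry<String, UnaryOperator<String>>> steps,
                             String state) {
        for (var step : steps) {
            String saved = checkpoints.get(step.getKey());
            if (saved != null) { state = saved; continue; } // resume: step already done
            state = step.getValue().apply(state);
            checkpoints.put(step.getKey(), state);          // checkpoint at the boundary
        }
        return state;
    }

    public static void main(String[] args) {
        var steps = List.of(
            Map.entry("retrieve", (UnaryOperator<String>) s -> s + " +docs"),
            Map.entry("synthesize", (UnaryOperator<String>) s -> s + " +answer"));
        String first = run(steps, "query");
        String resumed = run(steps, "query"); // replays from checkpoints, same result
        System.out.println(first.equals(resumed)); // true
    }
}
```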
### Running the workflow
```java
import dev.jamjet.workflow.ExecutionResult;
import java.util.ArrayList;

var initialState = new RagState(
    "How does JamJet handle concurrent tool calls?",
    new ArrayList<>(),
    null);

ExecutionResult<RagState> result = workflow.run(initialState);

System.out.println(result.state().answer());
System.out.printf("Ran %d steps in %.2f ms%n",
    result.stepsExecuted(), result.totalDurationUs() / 1000.0);
```

## Agent vs Workflow: when to use which
| | Agent | Workflow |
|---|---|---|
| Control flow | LLM decides | You decide |
| Best for | Open-ended tasks, research, chat | Pipelines, RAG, approval chains, ETL |
| Determinism | Non-deterministic (model-driven) | Deterministic (code-driven) |
| Durability | Checkpoints at strategy boundaries | Checkpoints at every step |
| Tools | Model chooses which tools to call | Steps call tools explicitly |
You can combine both: use a workflow as the outer orchestrator and embed agents inside individual steps. This gives you deterministic pipelines with intelligent sub-steps.
## Next steps
You now have a working agent and workflow. Here is where to go deeper:
- Java SDK Reference — full API coverage: conditional routing, evaluation, state management, runtime client, annotation-based agents
- Spring Boot Starter Guide — integrate JamJet with Spring AI, Spring Security, and Micrometer observability
- LangChain4j Integration — use JamJet as a durable execution layer for LangChain4j agents
- Core Concepts — agents, nodes, state, and durability in depth
- Examples on GitHub — runnable examples including basic tool flow, plan-and-execute agent, and RAG assistant
- Agentic AI Patterns — strategy selection, tool design, and production patterns for agent systems
tip: Already using Spring Boot? Skip ahead to the Spring Boot Starter Guide — it wraps everything in this quickstart into auto-configuration with health checks, metrics, and audit trails.