An AI Agent (check out AI Agent basics here) is more than just a chat interface; it is a system where the Large Language Model (LLM) can interact with external tools to perform actions or fetch data it doesn't natively possess.
In LangChain4j, an agentic flow is achieved by providing the model with ToolSpecification objects. These specifications are created when tool methods are annotated with @Tool annotations.
How are tools executed?
When a user asks a question requiring real-time data (like the current system time):
- The LLM identifies a matching tool and returns information that LangChain4j uses to create a ToolExecutionRequest
- The application executes the tool and sends the result back to the LLM
- The LLM uses the result to produce the final answer for the user
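The dispatch step in the middle can be sketched in plain Java, without any LangChain4j types. This is only an illustration of the mechanics (the TimeTools class and the hard-coded tool name stand in for the @Tool class and the LLM's ToolExecutionRequest):

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class DispatchSketch {

    // Hypothetical stand-in for a @Tool-annotated class.
    static class TimeTools {
        public long systemMillis() {
            return System.currentTimeMillis();
        }
    }

    public static void main(String[] args) throws Exception {
        // Index the tool methods by name, much like an executor map.
        TimeTools tools = new TimeTools();
        Map<String, Method> toolsByName = new HashMap<>();
        for (Method m : TimeTools.class.getDeclaredMethods()) {
            toolsByName.put(m.getName(), m);
        }

        // Pretend the LLM requested "systemMillis" with no arguments.
        String requestedTool = "systemMillis";
        Object result = toolsByName.get(requestedTool).invoke(tools);

        // The stringified result is what would flow back to the model.
        System.out.println("tool result: " + result);
    }
}
```

LangChain4j automates exactly this lookup-and-invoke cycle for you via ToolExecutor.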
What is a ToolSpecification?
A ToolSpecification contains all the necessary information about a tool, such as:
- Its name (e.g., "getWeather").
- A description of what it does (e.g., "Returns the weather forecast for a given city").
- What inputs (parameters) it needs and their descriptions (e.g., a "city" name, a "temperatureUnit" like CELSIUS or FAHRENHEIT).
The purpose of a ToolSpecification is to tell a Large Language Model (LLM) everything it needs to know about an available tool so the LLM can decide if and how to use it to fulfill a user's request.
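The same information can also be expressed directly with the builder API. Here is a minimal sketch, assuming langchain4j 1.10.0 on the classpath; the getWeather tool itself is hypothetical:

```java
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;

public class WeatherSpecExample {
    public static void main(String[] args) {
        // Hand-built equivalent of what @Tool would generate for a getWeather method.
        ToolSpecification spec = ToolSpecification.builder()
                .name("getWeather")
                .description("Returns the weather forecast for a given city")
                .parameters(JsonObjectSchema.builder()
                        .addStringProperty("city", "the city name")
                        .addStringProperty("temperatureUnit", "CELSIUS or FAHRENHEIT")
                        .required("city")
                        .build())
                .build();
        System.out.println(spec.name() + ": " + spec.description());
    }
}
```

In practice you rarely build specifications by hand; the @Tool annotation (below) generates them for you.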
What is the @Tool annotation?
In simple terms, the @Tool annotation is a label you put on a method to tell the LLM, "this method is a tool you can use."
When you label a method with @Tool and register it with the agent, the LLM learns:
- What the method does: it can read the method's name (e.g., add, squareRoot) and an optional description you provide in the annotation's value field (e.g., "Searches Google for relevant URLs, given the query").
- What inputs it needs: it knows about the method's parameters (e.g., int a, int b for add).
- That it can be invoked: if the AI determines that calling this method would help answer a user's question, it generates a request to execute the method with the appropriate arguments.
Essentially, @Tool transforms a regular Java method into a function that an AI can understand and call automatically to perform specific tasks.
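For example, the add and squareRoot methods mentioned above might look like this. This is a sketch, assuming langchain4j 1.10.0; the @P annotation from dev.langchain4j.agent.tool adds per-parameter descriptions:

```java
import dev.langchain4j.agent.tool.P;
import dev.langchain4j.agent.tool.Tool;

public class CalculatorTools {

    @Tool("Adds two integers and returns the sum")
    public int add(@P("the first addend") int a,
                   @P("the second addend") int b) {
        return a + b;
    }

    @Tool("Returns the square root of a number")
    public double squareRoot(@P("the number to take the square root of") double x) {
        return Math.sqrt(x);
    }
}
```

The parameter descriptions end up in the generated parameter schema, helping the model pick correct argument values.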
@Tool is a convenient shortcut in your Java code to tell LangChain4j, "Hey, this method is an AI tool."
- LangChain4j then converts this @Tool-annotated method into a ToolSpecification.
- The ToolSpecification is the standardized definition of the tool that is actually communicated to the LLM.
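You can observe this conversion directly with the ToolSpecifications helper. A sketch, assuming the SystemTools class shown later in this tutorial (or any @Tool-annotated class) is on the classpath:

```java
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.agent.tool.ToolSpecifications;
import java.util.List;

public class SpecInspection {
    public static void main(String[] args) {
        // Generate a ToolSpecification for every @Tool method of SystemTools.
        List<ToolSpecification> specs =
                ToolSpecifications.toolSpecificationsFrom(SystemTools.class);
        for (ToolSpecification spec : specs) {
            // name() comes from the method name; description() from @Tool's value.
            System.out.println(spec.name() + " -> " + spec.description());
        }
    }
}
```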
Java source and doc
Definition of Tool (version 1.10.0):
package dev.langchain4j.agent.tool;

@Retention(RUNTIME)
@Target({METHOD})
public @interface Tool {
    String name() default "";       // tool name; defaults to the method name
    String[] value() default "";    // tool description, seen by the LLM
    @Experimental
    ReturnBehavior returnBehavior() default ReturnBehavior.TO_LLM; // where the result is sent
    @Experimental
    String metadata() default "{}"; // arbitrary metadata as a JSON string
}
Definition of ToolSpecification (version 1.10.0):
package dev.langchain4j.agent.tool;
...
public class ToolSpecification {
    private final String name;
    private final String description;
    private final JsonObjectSchema parameters;
    private final Map<String, Object> metadata;
    ...
}
Example
This example demonstrates a single-step agent, where the LLM requests one tool call and then produces a final answer. More advanced agents may invoke multiple tools across multiple reasoning steps, which will be covered in later tutorials.
The code shows how a LangChain4j agent requests a single function to retrieve the current system time. The application executes the function and feeds the result back to the model.
This example uses the low-level LangChain4j API to demonstrate core concepts. A high-level alternative (AI Services) accomplishes the same tasks and will be covered in later tutorials.
Creating Tools
package com.logicbig.example;

import dev.langchain4j.agent.tool.Tool;

public class SystemTools {
    @Tool("Returns the current system time in milliseconds")
    public long systemMillis() {
        return System.currentTimeMillis();
    }
}
Complete Agent Example with Tool Execution
In this example we are using Ollama with the llama3.2:latest model.
package com.logicbig.example;

import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.agent.tool.ToolSpecifications;
import dev.langchain4j.data.message.*;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.tool.DefaultToolExecutor;
import dev.langchain4j.service.tool.ToolExecutor;

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AgentExample {
    private static final Map<String, ToolExecutor> executors = new HashMap<>();
    private static final List<ToolSpecification> specs = new ArrayList<>();

    static {
        SystemTools tools = new SystemTools();
        for (Method method : SystemTools.class.getDeclaredMethods()) {
            ToolSpecification spec = ToolSpecifications.toolSpecificationFrom(method);
            specs.add(spec);
            executors.put(spec.name(), new DefaultToolExecutor(tools, method));
        }
    }

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llama3.2:latest")
                .temperature(0.0)
                .numCtx(4096)
                .build();

        List<ChatMessage> messages = new ArrayList<>();
        messages.add(SystemMessage.from("You are a helpful assistant with access to tools. " +
                "You may call tools when needed."));
        messages.add(UserMessage.from("What is the current system time in milliseconds?"));

        ChatRequest request = ChatRequest.builder()
                .messages(messages)
                .toolSpecifications(specs)
                .build();

        ChatResponse response = model.chat(request);
        AiMessage aiMessage = response.aiMessage();

        if (aiMessage.hasToolExecutionRequests()) {
            messages.add(aiMessage);
            for (ToolExecutionRequest r : aiMessage.toolExecutionRequests()) {
                String result = executors.get(r.name()).execute(r, r.id());
                messages.add(ToolExecutionResultMessage.from(r, result));
            }
        }

        // final message
        messages.add(model.chat(messages).aiMessage());

        for (ChatMessage chatMessage : messages) {
            System.out.println("-- %s --".formatted(chatMessage.type()));
            switch (chatMessage.type()) {
                case SYSTEM -> System.out.println(((SystemMessage) chatMessage).text());
                case USER -> System.out.println(((UserMessage) chatMessage).singleText());
                case TOOL_EXECUTION_RESULT -> System.out.println(
                        ((ToolExecutionResultMessage) chatMessage).text());
                case AI -> {
                    AiMessage aiMsg = (AiMessage) chatMessage;
                    if (aiMsg.text() != null) {
                        System.out.println(aiMsg.text());
                    }
                    if (aiMsg.hasToolExecutionRequests()) {
                        System.out.println(aiMsg.toolExecutionRequests());
                    }
                }
            }
        }
    }
}
Output:
-- SYSTEM --
You are a helpful assistant with access to tools. You may call tools when needed.
-- USER --
What is the current system time in milliseconds?
-- AI --
[ToolExecutionRequest { id = null, name = "systemMillis", arguments = "{"<nil>":"[]"}" }]
-- TOOL_EXECUTION_RESULT --
1769213045125
-- AI --
The current system time in milliseconds is 1,769,213,045,125.
Note: For tools without parameters, LangChain4j still generates an argument schema. The <nil> entry in the tool execution request is expected and can be ignored.
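As a sanity check, the number the tool returned is a standard Unix epoch timestamp and can be decoded with java.time:

```java
import java.time.Instant;

public class EpochCheck {
    public static void main(String[] args) {
        // The tool result from the run above, interpreted as epoch milliseconds.
        Instant instant = Instant.ofEpochMilli(1769213045125L);
        System.out.println(instant); // an ISO-8601 UTC timestamp in January 2026
    }
}
```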
Understanding the Code
The agent is provided with a tool specification generated from a Java method annotated with @Tool. When the model encounters a question it cannot answer directly, it issues a tool execution request.
The application inspects the request, executes the corresponding Java method, and returns the result using a ToolExecutionResultMessage. The model then produces a grounded final response.
Conclusion
The output confirms that the model did not guess the system time. Instead, it requested a function call and relied on the application to provide real JVM data. This demonstrates the core agent pattern in LangChain4j.
Example Project
Dependencies and Technologies Used:
- langchain4j 1.10.0 (Build LLM-powered applications in Java: chatbots, agents, RAG, and much more)
- langchain4j-ollama 1.10.0 (LangChain4j :: Integration :: Ollama)
- JDK 21
- Maven 3.9.11