What are AI Services?
AI Services in LangChain4j provide a high-level abstraction that simplifies interaction with LLMs. Instead of working directly with low-level components such as ChatModel, ChatMessage, and ChatMemory, you define a plain Java interface and let LangChain4j create a proxy implementation that handles all the conversion and orchestration for you.
How AI Services Work
AI Services use a proxy pattern similar to Spring Data JPA. You define a Java interface with methods representing conversations, and LangChain4j dynamically creates an implementation (a sketch follows this list) that:
- Converts method arguments to appropriate message types
- Manages conversation flow and memory
- Handles error cases and timeouts
- Parses LLM responses into return types
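For instance, the declared return type alone tells LangChain4j how to parse the model's reply. The interface below is a hypothetical sketch; the method names and prompt templates are illustrative, not part of the example project:

import dev.langchain4j.service.UserMessage;
import java.util.List;

// Hypothetical interface: the declared return types drive response parsing
interface MathAssistant {

    // The proxy parses the model's textual answer into a boolean
    @UserMessage("Is {{it}} a prime number? Answer only true or false.")
    boolean isPrime(int number);

    // The proxy parses the answer into a list of strings
    @UserMessage("List three synonyms for the word: {{it}}")
    List<String> synonymsOf(String word);
}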
Why Use AI Services?
AI Services eliminate boilerplate code, making your business logic cleaner. They provide type safety, better testability through mocking, and seamless integration with Spring Boot and Quarkus.
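Because an AI Service is just an interface, unit tests can substitute a mock and never touch a real model. A minimal sketch using Mockito, referring to the Assistant interface defined in the example further below:

import static org.mockito.Mockito.*;

// Hypothetical test snippet: no LLM involved, only the interface contract
Assistant assistant = mock(Assistant.class);
when(assistant.chat("Hello")).thenReturn("Hi there!");

// Code under test depends on Assistant, not on any particular model
String reply = assistant.chat("Hello"); // "Hi there!"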
How to Create
To create an implementation of our interface, we use dev.langchain4j.service.AiServices. The following is a quick snippet:
ChatModel model = ...; // any ChatModel implementation
MyInterface myInterfaceInstance = AiServices.create(MyInterface.class, model);
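Here, MyInterface stands for any interface you define; a minimal version might look like this (the method name and signature are illustrative):

interface MyInterface {
    String chat(String userMessage);
}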
Example
The following example uses Ollama with the phi3:mini-128k model, which is fine for demos and learning but not for production-grade applications, as it has limited reasoning capability and accuracy on complex tasks.
package com.logicbig.example;

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class SimplestAiServiceExample {

    // The AI Service contract: LangChain4j generates the implementation at runtime
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // Create Ollama model with phi3:mini-128k
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3:mini-128k")
                .temperature(0.7)
                .build();

        // Create the AI Service proxy
        Assistant assistant = AiServices.create(Assistant.class, model);

        // Use the AI Service
        String response = assistant.chat("Hello, how are you?");
        System.out.println("Response: " + response);

        // Another interaction
        String joke = assistant.chat("Tell me a short joke");
        System.out.println("\nJoke: " + joke);
    }
}
Output
Response: I'm just a string of code and algorithms with no feelings or sensations. However, if I could express wellbeing in human terms: "Functions like data processing at optimal efficiency today!" How about your state? Are you ready to engage in some intellectual conversation? Let's explore topics ranging from science fiction literature to the nuances of quantum computing!
Joke: Why don't scientists trust atoms? Because they make up everything, even jokes!
Understanding the Code
When we call AiServices.create(Assistant.class, model), LangChain4j uses dynamic proxies to link your Java interface (Assistant) to the specific AI model (OllamaChatModel). Instead of you writing code to format chat messages or handle HTTP requests to Ollama, the library generates an implementation of your interface at runtime. When you call assistant.chat(), the proxy intercepts the call, wraps your string into a UserMessage, sends it to the local Phi-3 model, and returns the text response directly back to your application.
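Conceptually, each call to assistant.chat(...) is roughly equivalent to the following hand-written code. This is a simplified sketch; the real proxy additionally handles memory, tools, and return-type parsing when configured:

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.response.ChatResponse;

// What the proxy does for us, written out by hand (simplified)
UserMessage userMessage = UserMessage.from("Hello, how are you?");
ChatResponse chatResponse = model.chat(userMessage);
String response = chatResponse.aiMessage().text();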
Conclusion
The output shows how LangChain4j automatically converts the simple String input into a proper UserMessage, sends it to the Ollama model, and returns the response. The AI Service proxy handles all the underlying complexity, allowing you to focus on business logic rather than LLM integration details.
Next Tutorial: AiServices Method Reference
Example Project
Dependencies and Technologies Used:
- langchain4j 1.10.0 (Build LLM-powered applications in Java: chatbots, agents, RAG, and much more)
- langchain4j-ollama 1.10.0 (LangChain4j :: Integration :: Ollama)
- slf4j-simple 2.0.9 (SLF4J Simple Provider)
- JDK 17
- Maven 3.9.11