
AI LangChain4j - ChatMessage and ChatResponse in LangChain4j

[Last Updated: Jan 19, 2026]

In this tutorial, we will learn how to construct LangChain4j ChatMessages to build AI conversations and how to handle the ChatResponse from LLM models.

ChatMessage and ChatResponse are fundamental components in LangChain4j for building conversational AI applications. They provide a structured way to exchange information between users and AI models while maintaining conversation context.

This tutorial demonstrates how to use LangChain4j's ChatModel interface with two key methods:

ChatResponse chat(ChatMessage... messages);
ChatResponse chat(List<ChatMessage> messages);

ChatMessage Implementations

LangChain4j provides five types of chat messages, each serving a specific role in AI conversations:

  • SystemMessage: Sets the AI's behavior and constraints (always retained in conversation)
  • UserMessage: Contains user input, supports text and multimodal content
  • AiMessage: The AI's response, may include tool execution requests
  • ToolExecutionResultMessage: Contains results from executed tools
  • CustomMessage: For provider-specific extensions (Ollama only)
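The roles above can be sketched by constructing one message of each common type. This is a minimal sketch assuming langchain4j 1.10.0 on the classpath; the tool id "call-1" and tool name "getWeather" are hypothetical placeholders, not part of the library.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.ToolExecutionResultMessage;
import dev.langchain4j.data.message.UserMessage;
import java.util.List;

public class MessageTypesSketch {

    public static void main(String[] args) {
        // System instructions: set behavior, retained for the whole conversation
        SystemMessage system = SystemMessage.from("You are a concise assistant.");
        // User input (plain text here; multimodal content is also supported)
        UserMessage user = UserMessage.from("What is LangChain4j?");
        // An AI reply, as it would appear in the conversation history
        AiMessage ai = AiMessage.from("A Java library for building LLM applications.");
        // Result of a tool call the model requested (id and name are placeholders)
        ToolExecutionResultMessage toolResult =
                ToolExecutionResultMessage.from("call-1", "getWeather", "22C, sunny");

        List<ChatMessage> history = List.of(system, user, ai, toolResult);
        history.forEach(m -> System.out.println(m.type()));
    }
}
```

A full conversation history is simply such a list, passed to the model on every call.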

Understanding UserMessage

Definition of UserMessage

Version: 1.10.0
 package dev.langchain4j.data.message;
 ...
 public class UserMessage implements ChatMessage {
     private final String name;
     private final List<Content> contents;
     private final Map<String, Object> attributes;
     ...
 }

name: An optional field used to identify a specific user. This is helpful in group chat scenarios or when the LLM needs to track multiple distinct human participants.

contents: A list of Content objects. This is the most important field, as it allows a single message to contain a mix of text, images, audio, or video.

attributes: A map of metadata. It stores extra context or vendor-specific information that does not fit into the standard content types. Attributes are not sent to the model; they are kept for local use (for example, logging or session tracking).
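As a minimal sketch of how attributes stay local (assuming an attributes() accessor matching the private field shown above; the name and attribute keys are illustrative, not required by the library):

```java
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import java.util.List;
import java.util.Map;

public class AttributesSketch {

    public static void main(String[] args) {
        UserMessage msg = UserMessage.builder()
                .name("alice")
                .contents(List.of(TextContent.from("Hi there")))
                .attributes(Map.of("sessionId", "abc-123"))
                .build();

        // Attributes are not part of the prompt sent to the model;
        // read them back locally, e.g. for logging or request routing:
        System.out.println(msg.attributes().get("sessionId"));
    }
}
```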

Implementations of Content

The Content interface enables the model to process different types of media. Below are its primary implementations:

TextContent

This is the standard implementation for text-based prompts. It wraps a simple string containing the user's instructions or questions.

Note: If you use the shortcut UserMessage.from("text"), the library automatically creates a TextContent object and adds it to the list for you.
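The equivalence can be seen by comparing the shortcut with the explicit form (a minimal sketch assuming langchain4j 1.10.0):

```java
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;

public class TextContentSketch {

    public static void main(String[] args) {
        // Shortcut: the library wraps the string in a TextContent for us
        UserMessage shortcut = UserMessage.from("Hello, model!");
        // Explicit form: build the TextContent ourselves
        UserMessage explicit = UserMessage.from(TextContent.from("Hello, model!"));

        // Both messages hold a single TextContent in their contents list
        System.out.println(shortcut.contents());
        System.out.println(explicit.contents());
    }
}
```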

ImageContent

Used for vision-capable models. It holds image data via a URL or Base64 string. It also allows you to specify a detail level (Low, High, or Auto) to manage how the model analyzes the image resolution.

AudioContent

Used for models with native audio processing. It contains the audio data and the associated MIME type (such as audio/wav), allowing the model to "hear" the input directly.

VideoContent

Supports video-to-text or video analysis tasks. It allows users to attach video files or links as part of the message context.

PdfFileContent

PdfFileContent is a wrapper for PDF file data that allows a PDF document to be attached to a chat message as part of its contents.
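Mixing several content types in one message can be sketched as follows. The URLs are placeholders, and factory signatures such as PdfFileContent.from(url) and the nested DetailLevel enum are assumed from the 1.10.0 API; verify them against your version before use.

```java
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.ImageContent.DetailLevel;
import dev.langchain4j.data.message.PdfFileContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;

public class MultimodalContentSketch {

    public static void main(String[] args) {
        // Image by URL, analyzed at low resolution to reduce token usage
        ImageContent image = ImageContent.from(
                "https://example.com/diagram.png", DetailLevel.LOW);
        // PDF attached by URL (Base64 data is also supported)
        PdfFileContent pdf = PdfFileContent.from("https://example.com/report.pdf");

        // A single UserMessage carrying text, an image, and a PDF
        UserMessage message = UserMessage.from(
                TextContent.from("Summarize the attachments."), image, pdf);
        System.out.println(message.contents().size()); // 3
    }
}
```

Whether the model actually processes each content type depends on the provider; a text-only model will reject or ignore non-text contents.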

Examples

In these examples, we run the Moondream LLM locally via Ollama.
Moondream is an excellent choice for local deployment because it's a compact, efficient vision-language model that delivers strong multimodal reasoning despite its small size. Moondream supports images by using a modern encoder to convert pictures into visual tokens, which are then used to answer questions and describe visual content.

Varargs Method Example

The varargs method chat(ChatMessage... messages) is ideal for simple, stateless interactions where you know the exact message sequence upfront. It's concise and perfect for one-off requests.

package com.logicbig.example;

import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class VarargsChatExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("moondream:latest")
                                         .numCtx(4096)
                                         .temperature(0.7)
                                         .build();

        // Using varargs method for concise one-off requests
        ChatResponse response = model.chat(
                SystemMessage.from("You are a poetic assistant."),
                UserMessage.from("Write a five-word poem about Java.")
        );

        String poem = response.aiMessage().text();
        System.out.println("Poem: " + poem);

        // Another example with different instructions
        ChatResponse response2 = model.chat(
                SystemMessage.from("You are a helpful coding assistant."),
                UserMessage.from("Explain recursion in one sentence.")
        );

        System.out.println("\nRecursion explanation: " +
                                   response2.aiMessage().text());
    }
}

Output

Poem: 
A language that's versatile and clear,
Java runs on machines of all kinds,
Serving millions with ease,
Making the world go round in a flash.

Recursion explanation: ers recurssion is a function that calls itself to solve a problem by breaking it down into smaller subproblems, then repeats this process until the correct solution is found.

List Method Example

The list method chat(List<ChatMessage> messages) is designed for dynamic conversations that require memory. It allows you to build and manage message lists that grow as conversations progress, making it suitable for chatbots.

package com.logicbig.example;

import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;
import java.util.ArrayList;
import java.util.List;

public class ListChatExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("moondream:latest")
                                         .numCtx(4096)
                                         .temperature(0.7)
                                         .build();

        // Using list method for dynamic conversation building
        List<ChatMessage> messages = new ArrayList<>();
        messages.add(SystemMessage.from("You are a helpful support agent "
                                                + "who can take orders but cannot cancel order."));
        messages.add(UserMessage.from("I need help with my order."));

        ChatResponse response = model.chat(messages);

        System.out.println("Support agent: " + response.aiMessage().text());

        // Add follow-up question
        messages.add(response.aiMessage());
        messages.add(UserMessage.from("Can I cancel my order?"));

        ChatResponse followUp = model.chat(messages);
        System.out.println("\nFollow-up response: " + followUp.aiMessage().text());
    }
}

Output

Support agent: 
Hello,
I'm sorry but I can't assist you any further without more specific details about the issue. Could you please provide me with some context or further instructions so that I could better understand what you're trying to help me with?

Follow-up response:
No, once an order has been placed and confirmed by the customer, it cannot be canceled without a valid reason.

Building Messages with Builder Pattern

While .from() factory methods are convenient, the builder pattern offers fine-grained control over message construction, including adding metadata and handling complex content structures.

package com.logicbig.example;

import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;
import java.util.List;
import java.util.Map;

public class MessageBuilderExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("moondream:latest")
                                         .numCtx(4096)
                                         .temperature(0.7)
                                         .build();

        // Building UserMessage with Builder pattern for advanced control
        UserMessage userMessage =
                UserMessage.builder()
                           .attributes(Map.of("session_id", "123", "user_type", "premium"))
                           .name("User_123")
                           .contents(List.of(
                                   TextContent.from("Describe this image."),
                                   //loading langChain4j logo image
                                   ImageContent.from("https://www.logicbig.com/tutorials/ai-tutorials"
                                                             + "/lang-chain-4j/images/langChain4j.png"))
                           )
                           .build();

        ChatResponse response = model.chat(userMessage);

        System.out.println("Response: " + response.aiMessage().text());

    }
}

Output

Response: 
The image features a green parrot with a red beak and orange head perched on top of a white coffee cup, holding the cup in its wings as if it's sipping from it. Below the cup is a brown banner that reads "langCHAIN4J", indicating the name or service associated with this image.

Understanding ChatResponse

ChatResponse contains both the AI's response and valuable execution metadata. It provides access to the generated content, token usage statistics, and completion reasons, which are essential for monitoring and cost management.

package com.logicbig.example;

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.model.output.FinishReason;

public class ChatResponseExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("moondream:latest")
                                         .numCtx(4096)
                                         .temperature(0.7)
                                         .build();

        UserMessage userMessage = UserMessage.from("Explain object-oriented programming.");
        ChatResponse response = model.chat(userMessage);

        // Extracting text content
        String text = response.aiMessage().text();
        System.out.println("Response: " + text);

        // Accessing metadata
        int inputTokens = response.tokenUsage().inputTokenCount();
        int outputTokens = response.tokenUsage().outputTokenCount();
        int totalTokens = response.tokenUsage().totalTokenCount();
        FinishReason finishReason = response.finishReason();

        System.out.println("\nToken Usage:");
        System.out.println("Input tokens: " + inputTokens);
        System.out.println("Output tokens: " + outputTokens);
        System.out.println("Total tokens: " + totalTokens);
        System.out.println("Finish reason: " + finishReason);

    }
}

Output

Response: 

The code block is an example of Object-Oriented Programming (OOP) in the context of a class named "Vehicle" and an instance of that class. OOP promotes modularity, encapsulation, inheritance, and polymorphism by organizing code into reusable components called objects, which can interact with each other to perform specific tasks or behaviors.

In this case, we have a Vehicle object (defined as "Vehicle" in the class) with its own properties like speed, and methods that control its behavior. This organization of code helps maintain clean and reusable code while allowing the developer to focus on adding more functionalities without needing to understand the underlying technical details.

The question's relevance to OOP is to demonstrate how Object-Oriented Programming principles can be applied in a real-world context, such as classifying vehicles based on their speeds or categorizing them into various types of vehicles (cars, trucks, etc.).

Token Usage:
Input tokens: 12
Output tokens: 195
Total tokens: 207
Finish reason: STOP

Multi-Turn Conversation Example

This example demonstrates how to maintain conversation context across multiple exchanges. Since LLMs are stateless, you must provide the complete conversation history with each request.

package com.logicbig.example;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import java.util.ArrayList;
import java.util.List;

public class MultiTurnConversation {

    public static void main(String[] args) {

        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("moondream:latest")
                                         .numCtx(4096)
                                         .build();

        List<ChatMessage> conversation = new ArrayList<>();

        // ---- Stage 1: Initial requirement ----
        UserMessage stage1 = UserMessage.from(
                "I want to build a REST API for managing orders."
        );
        conversation.add(stage1);

        AiMessage response1 = model.chat(conversation).aiMessage();
        conversation.add(response1);

        System.out.println("Stage 1 Response:");
        System.out.println(response1.text());
        System.out.println();

        // ---- Stage 2: Refinement based on previous turn ----
        UserMessage stage2 = UserMessage.from(
                "The API should support creating and cancelling orders."
        );
        conversation.add(stage2);

        AiMessage response2 = model.chat(conversation).aiMessage();
        conversation.add(response2);

        System.out.println("Stage 2 Response:");
        System.out.println(response2.text());
        System.out.println();

        // ---- Stage 3: Concrete output using accumulated context ----
        UserMessage stage3 = UserMessage.from(
                "Give me a simple list of REST endpoints for this API."
        );
        conversation.add(stage3);

        AiMessage response3 = model.chat(conversation).aiMessage();
        conversation.add(response3);

        System.out.println("Stage 3 Response:");
        System.out.println(response3.text());
    }
}

Output

Stage 1 Response:

To build a REST API for managing orders, you would need to define the necessary data models and schemas for your application's requirements. This includes defining the fields that will be used in the API, such as the order id, customer id, product name, quantity, price, and payment method. You can use Python libraries like Flask or Django to create a RESTful API with appropriate routing and HTTP methods. Once you have defined the data models and schemas, you can use tools like SwaggerUI or Postman to test your API and ensure that it meets the necessary requirements for integration into other systems.

In addition to defining the data models and schemas, you will also need to define the business logic behind managing orders in your application. This includes how to create new orders, update existing orders, delete orders, and handle errors or exceptions related to order management. You can use Python libraries like SQLAlchemy or Django ORM to interact with your database and perform these operations.

Finally, you will need to deploy your API securely using tools such as Docker or Kubernetes to ensure that it is protected from unauthorized access and can be easily scaled to handle increased traffic.

Stage 2 Response:

To create an order in the API, you would define a new route for the 'POST' method with the appropriate HTTP methods (e.g., PUT or DELETE) that will allow you to update the necessary fields of the existing order. For example, you could define routes like '/orders/1' and '/orders/2', which would create a new order in the database for orders with id '1' and '2'.

To cancel an order, you can define a route that accepts POST requests to delete the specific order from the database. This would involve defining a route such as '/orders/1' or '/orders/2' and deleting the corresponding order using the appropriate HTTP methods (e.g., PUT). You should also ensure that the API has proper error handling in place for cases where an order cannot be canceled due to insufficient funds, unavailable products, or other issues.

Stage 3 Response:

1. '/orders/1' - Create new order with id '1'.
2. '/orders/2' - Create new order with id '2'.
3. '/orders/1' - Update existing order with id '1'.
4. '/orders/2' - Update existing order with id '2'.
5. '/orders/1' - Delete existing order with id '1'.
6. '/orders/2' - Delete existing order with id '2'.
7. '/orders/1' - Create new order with customer id '1', product name 'apple', quantity '3', price '0.75', and payment method 'card'.
8. '/orders/2' - Update existing order with customer id '1', product name 'orange', quantity '4', price '0.79', and payment method 'card'.

Conclusion

These examples demonstrate interactions with an LLM using different ChatMessage construction methods and conversation patterns. The varargs method suits concise one-off requests, while the list method enables dynamic multi-turn conversations. The builder pattern provides maximum flexibility for complex scenarios, and ChatResponse captures both the generated content and execution metadata. Together, these components form the foundation for building sophisticated conversational AI applications with LangChain4j.

Example Project

Dependencies and Technologies Used:

  • langchain4j 1.10.0 (Build LLM-powered applications in Java: chatbots, agents, RAG, and much more)
  • langchain4j-ollama 1.10.0 (LangChain4j :: Integration :: Ollama)
  • JDK 17
  • Maven 3.9.11

AI LangChain4j - Understanding ChatMessage and ChatResponse
  • chat-message-response-lang-chain-4j
    • src
      • main
        • java
          • com
            • logicbig
              • example
                • MultiTurnConversation.java
