AI - LangChain4j ChatModel Overview

[Last Updated: Jan 20, 2026]

In LangChain4j, the ChatModel (formerly named ChatLanguageModel) is the primary abstraction for interacting with modern Large Language Models (LLMs). Unlike earlier APIs that operated on plain strings, a ChatModel works with structured messages, enabling richer conversational context and more predictable behavior.

The older LanguageModel abstraction is now considered outdated. Most contemporary LLMs are designed for chat-style interactions and expect explicit roles such as system, user, and assistant.

Why ChatModel Exists

Modern LLMs are optimized for conversational exchanges rather than isolated prompts. A ChatModel allows you to:

  • Provide system-level instructions that define behavior and constraints
  • Preserve conversational context across multiple turns
  • Separate user intent from assistant responses in a structured way

This design results in clearer intent, better alignment with provider APIs, and fewer prompt-engineering workarounds in application code.
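
For example, a single call can combine a system instruction with a user message. The sketch below assumes a ChatModel instance named model, constructed as shown in the Example section further down:

import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.response.ChatResponse;

// The system message defines behavior; the user message carries the actual request
ChatResponse response = model.chat(
        SystemMessage.from("You are a concise assistant. Answer in one sentence."),
        UserMessage.from("What is a ChatModel in LangChain4j?"));

String answer = response.aiMessage().text();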

Definition of ChatModel

Version: 1.10.0

package dev.langchain4j.model.chat;

public interface ChatModel {

    // default implementations omitted for brevity
    default String chat(String userMessage) { ... }
    default ChatResponse chat(ChatMessage... messages) { ... }
    default ChatResponse chat(List<ChatMessage> messages) { ... }
    default ChatResponse chat(ChatRequest chatRequest) { ... }
    default ChatResponse doChat(ChatRequest chatRequest) { ... }
    default ChatRequestParameters defaultRequestParameters() { ... }
    default List<ChatModelListener> listeners() { ... }
    default ModelProvider provider() { ... }
    default Set<Capability> supportedCapabilities() { ... }
}
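
Among these methods, chat(ChatRequest) is the most general entry point; the string and message overloads are convenience wrappers around it. Below is a minimal sketch of building a structured request, assuming the 1.x request/response API shown above:

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

// Build a structured request and pass it to the general-purpose entry point
ChatRequest request = ChatRequest.builder()
        .messages(UserMessage.from("Summarize LangChain4j in one sentence."))
        .build();

ChatResponse response = model.chat(request);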

Example

The simplest usage of a ChatModel is to construct a model instance and call the chat(String) convenience method:

// Construct a ChatModel implementation
// GPT_4_O_MINI is a constant from dev.langchain4j.model.openai.OpenAiChatModelName,
// and ApiKeys is a simple helper class holding the API key
ChatModel model = OpenAiChatModel.builder()
        .apiKey(ApiKeys.OPENAI_API_KEY)
        .modelName(GPT_4_O_MINI)
        .build();

// Start interacting with the model
String answer = model.chat("Hello world!");

Builder Style and Model Implementations

Each ChatModel implementation in LangChain4j typically provides its own builder. While the configuration options differ by provider, the overall pattern is consistent:

  • Authentication (API key, endpoint, or credentials)
  • Model selection
  • Optional tuning parameters

For example, OpenAiChatModel.builder() is specific to OpenAI and exposes options aligned with OpenAI’s API. Other providers follow the same concept but expose provider-specific configuration.
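
For illustration, the snippet below sketches a few common tuning options on the OpenAI builder. Treat the option names as indicative rather than exhaustive; availability varies by provider and version:

import java.time.Duration;

ChatModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))  // authentication
        .modelName("gpt-4o-mini")                 // model selection
        .temperature(0.2)                         // optional tuning parameters
        .maxTokens(500)
        .timeout(Duration.ofSeconds(30))
        .logRequests(true)
        .build();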

How It Works

The ChatModel acts as a bridge between your Java application and an underlying AI provider. Your application sends structured messages, and LangChain4j translates them into provider-specific requests.

This abstraction allows you to switch providers with minimal code changes, as long as your application code depends only on the ChatModel interface.
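
For example, moving from OpenAI to a local Ollama model changes only the construction code. The sketch below assumes the langchain4j-ollama module is on the classpath and a local Ollama server is running on its default port:

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Only the builder changes; application code still depends on ChatModel
ChatModel model = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434")  // default local Ollama endpoint
        .modelName("llama3")
        .build();

String answer = model.chat("Hello world!");  // call site is unchanged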

Supported Providers and Models

LangChain4j supports multiple AI providers. Each provider is packaged as a separate Maven dependency, so you explicitly include only what you use.

  • OpenAI – GPT-4, GPT-4o, GPT-3.5, and related chat models
  • Anthropic – Claude family of models
  • Google Vertex AI – Gemini-based chat models
  • Azure OpenAI – Azure-hosted OpenAI-compatible models
  • Ollama – Local models such as LLaMA, Mistral, Phi, and others
  • Hugging Face – Hosted and local inference-backed chat models

Because each provider has its own Maven module, switching or adding a model usually involves adding the corresponding dependency and changing the ChatModel builder.
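
For example, using the OpenAI integration typically means adding a dependency along these lines (artifact IDs follow the langchain4j-<provider> naming convention; verify the current version before copying):

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>1.10.0</version> <!-- match the LangChain4j version in use -->
</dependency>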

A complete list of supported LLM providers and models is available here.

Conclusion

The ChatModel abstraction in LangChain4j reflects how modern LLMs are designed to operate. By modeling conversations explicitly and standardizing provider access, it enables clean, maintainable, and future-proof AI integrations in Java applications.
