
LangChain4j - Building Your First AI App with LangChain4j

[Last Updated: Jan 13, 2026]

LangChain4j is a Java framework for developing AI applications with Large Language Models. It provides abstractions for working with various LLM providers, embedding models, and tools for building AI-powered applications.

This tutorial demonstrates how to create a basic AI chat application using LangChain4j with Ollama running locally. Ollama allows you to run open-source models like phi3:mini-128k on your local machine without requiring cloud API keys.

Prerequisites

Before running this example, ensure you have:

  • Java 17 or higher installed
  • Maven 3.6+ installed
  • Familiarity with basic AI concepts (check out our tutorials)
  • Ollama installed and running (check out our tutorial)
  • phi3:mini-128k model pulled: ollama pull phi3:mini-128k (check out our tutorial)
  • Ollama server running: ollama serve

Key Concepts

ChatModel is the core interface in LangChain4j for interacting with language models. The OllamaChatModel implementation connects to a local Ollama instance and provides a simple way to generate text responses.

Using local models with Ollama offers privacy (no data leaves your machine), cost savings (no API fees), and full control over model behavior and configuration.

Example

pom.xml

<dependency>
   <groupId>dev.langchain4j</groupId>
   <artifactId>langchain4j</artifactId>
   <version>1.10.0</version>
</dependency>
<dependency>
   <groupId>dev.langchain4j</groupId>
   <artifactId>langchain4j-ollama</artifactId>
   <version>1.10.0</version>
</dependency>
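To keep the two artifact versions in sync, you can optionally import the LangChain4j BOM in dependencyManagement (this assumes the dev.langchain4j:langchain4j-bom artifact published alongside the release) and then omit the <version> tags on the individual dependencies:

```xml
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>dev.langchain4j</groupId>
         <artifactId>langchain4j-bom</artifactId>
         <version>1.10.0</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>
```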
LangChain4jHelloWorldExample.java

package com.logicbig.example;

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class LangChain4jHelloWorldExample {

    public static void main(String[] args) {
        // Create the Ollama model
        ChatModel model = OllamaChatModel.builder()
                                         .baseUrl("http://localhost:11434")
                                         .modelName("phi3:mini-128k")
                                         .temperature(0.7)
                                         .numCtx(4096)
                                         .build();

        // Simple chat
        String response = model.chat("Hello");
        System.out.println("Response: " + response);

        // Another example
        String javaCode = model.chat("Write a Java program to print \"Hello World\" on the console.");
        System.out.println(javaCode);

    }
}

Output

Response: Greetings! How can I assist you today? If you're looking for a simple salutation in English, just say "hello" to start the conversation. Let me know if there's anything specific on your mind or how else I might help make your day better!
Here's a simple Java program that prints "Hello World" on the console:

```java
public class HelloWorld {
public static void main(String[] args) {
System.outin("Hello World"); // Corrected typo from 'System.out.println()' to match your case requirement, assuming a typographical error in "print".
// If not required as per the original request format, it should be `System.out.println()`. Here is both with and without considering this assumed mistake for educational purposes:

System.outin("Hello World");
}
}
```
However, typically in Java, to print "Hello World", you would use a standard `print` statement like so:

```java
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello World"); // This is the conventional way of printing text on console in Java programs.
}
}
```
This code, when run as a standalone program (`javac HelloWorld.java` followed by `./a.out` if you're using Unix-like systems or `java HelloWorld` for Windows), will display the output:

```
Hello World
```
Remember to ensure your Java environment is correctly set up, and replace `"System.in"` with either a corrected typo as shown in my examples (assuming it was meant "print") or sticking with conventional methods I've provided for standard console outputs.

Understanding the Code

The example creates an OllamaChatModel instance configured to connect to the local Ollama server at http://localhost:11434 using the phi3:mini-128k model. The temperature parameter controls the randomness of responses (0.0 = deterministic, 1.0 = creative).
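To build intuition for what temperature does, here is a small self-contained sketch (plain Java, not part of the LangChain4j API) that applies temperature scaling to a softmax over hypothetical token scores. Lower temperature sharpens the distribution toward the highest-scoring token; higher temperature flattens it, which is why responses become more varied:

```java
import java.util.Arrays;

public class TemperatureDemo {

    // Softmax with temperature: p_i = exp(score_i / T) / sum_j exp(score_j / T)
    static double[] softmax(double[] scores, double temperature) {
        double[] probs = new double[scores.length];
        double max = Arrays.stream(scores).max().orElse(0);
        double sum = 0;
        for (int i = 0; i < scores.length; i++) {
            // subtract max before exponentiating for numerical stability
            probs[i] = Math.exp((scores[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) {
            probs[i] /= sum;
        }
        return probs;
    }

    public static void main(String[] args) {
        double[] scores = {2.0, 1.0, 0.5}; // hypothetical token logits
        System.out.println("T=0.2: " + Arrays.toString(softmax(scores, 0.2)));
        System.out.println("T=1.0: " + Arrays.toString(softmax(scores, 1.0)));
    }
}
```

Note that this only illustrates the concept; the actual sampling happens inside Ollama, and a temperature of exactly 0.0 typically means greedy (deterministic) decoding.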

In our OllamaChatModel builder, we explicitly set numCtx(4096) to ensure the model doesn't try to reserve the full 128k context window.
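If you want a rough sense of whether a prompt will fit in the configured context window, a common heuristic for English text is about 4 characters per token. The sketch below is plain Java using that approximation (the ratio is an assumption, not a real tokenizer, and actual token counts vary by model):

```java
public class ContextBudget {

    // Rough heuristic: English text averages about 4 characters per token.
    // This is an approximation only; real tokenizers vary by model.
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }

    // Checks whether the prompt plus room reserved for the response fits in numCtx.
    static boolean fitsInContext(String prompt, int numCtx, int reservedForResponse) {
        return estimateTokens(prompt) + reservedForResponse <= numCtx;
    }

    public static void main(String[] args) {
        String prompt = "Write a Java program to print \"Hello World\" on the console.";
        System.out.println("Estimated prompt tokens: " + estimateTokens(prompt));
        System.out.println("Fits in 4096 context (512 reserved): "
                + fitsInContext(prompt, 4096, 512));
    }
}
```

A larger numCtx lets the model see more of the conversation, but also reserves more memory in Ollama, which is why capping it at 4096 here is a reasonable trade-off for a hello-world example.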

The chat() method sends a prompt to the model and returns the generated text. This synchronous call blocks until the complete response is received.
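Because chat() blocks, interactive applications may want to move the call off the calling thread. Below is a minimal sketch using CompletableFuture; the Supplier is a stand-in for () -> model.chat(prompt), so the snippet runs without an Ollama server:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AsyncChatDemo {

    // Wraps a blocking call (a stand-in for model.chat(prompt)) in a CompletableFuture.
    static CompletableFuture<String> chatAsync(Supplier<String> blockingCall) {
        return CompletableFuture.supplyAsync(blockingCall);
    }

    public static void main(String[] args) {
        // Stand-in supplier; replace with () -> model.chat("Hello") in a real app.
        chatAsync(() -> "Hello from the model")
                .thenAccept(response -> System.out.println("Response: " + response))
                .join(); // wait for the callback before the JVM exits
    }
}
```

For incremental token-by-token output, LangChain4j also offers streaming model variants in the Ollama integration, which are worth exploring once the basic blocking flow works.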

Conclusion

The output confirms that LangChain4j successfully connects to the local Ollama instance and communicates with the phi3:mini-128k model. The model generates appropriate responses to both conversational prompts and specific programming-related queries, demonstrating basic AI chat functionality. This simple example forms the foundation for building more complex AI applications with LangChain4j.

Example Project

Dependencies and Technologies Used:

  • langchain4j 1.10.0 (Build LLM-powered applications in Java: chatbots, agents, RAG, and much more)
  • langchain4j-ollama 1.10.0 (LangChain4j :: Integration :: Ollama)
  • JDK 17
  • Maven 3.9.11

LangChain4j - Hello World Example with Ollama
  • lang-chain-4j-first-ai-app
    • src
      • main
        • java
          • com
            • logicbig
              • example
                • LangChain4jHelloWorldExample.java
        • test
