Building Java AI apps using LangChain4j
Learn how to create your first AI application using LangChain4j and Ollama with the Phi-3 model.
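A minimal sketch of that first call, assuming Ollama is running locally on its default port with the phi3 model already pulled (on LangChain4j 1.x the interface is ChatModel; older releases call it ChatLanguageModel and use generate() instead of chat()):

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class FirstOllamaChat {

    public static void main(String[] args) {
        // Assumes a local Ollama server at http://localhost:11434 with "phi3" pulled.
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .temperature(0.2)
                .build();

        // chat(String) sends a single user message and returns the model's text answer.
        String answer = model.chat("In one sentence, what is LangChain4j?");
        System.out.println(answer);
    }
}
```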
Learn how to integrate OpenAI with LangChain4j using the free demo API key and proxy for quick prototyping.
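A sketch of that setup, assuming the langchain4j-open-ai module; the demo key and proxy URL below are the ones documented by LangChain4j for experimentation, are rate-limited, and should be replaced with your own key for anything beyond quick prototyping:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class OpenAiDemoChat {

    public static void main(String[] args) {
        // The "demo" key only works through LangChain4j's demo proxy and is heavily rate-limited.
        ChatModel model = OpenAiChatModel.builder()
                .baseUrl("http://langchain4j.dev/demo/openai/v1")
                .apiKey("demo")
                .modelName("gpt-4o-mini")
                .build();

        System.out.println(model.chat("Say hello from LangChain4j!"));
    }
}
```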
Introduction to LangChain4j ChatModel (ChatLanguageModel) and supported LLM providers.
Learn how to construct LangChain4j's ChatMessage for building AI conversations and how to use ChatResponse with LLM models.
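A small sketch of composing role-based messages and reading the reply, under the same local Ollama assumptions as above:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class ChatMessagesExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        // Messages carry a role: SystemMessage steers behaviour, UserMessage is the actual input.
        ChatResponse response = model.chat(
                SystemMessage.from("You are a concise assistant for Java developers."),
                UserMessage.from("What is a ChatResponse?"));

        AiMessage aiMessage = response.aiMessage(); // the assistant's reply
        System.out.println(aiMessage.text());
        System.out.println(response.tokenUsage());   // input/output token counts, when reported
        System.out.println(response.finishReason()); // e.g. STOP or LENGTH
    }
}
```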
Learn how to use ChatRequest in LangChain4j for granular control over LLM parameters per request.
Learn how to use LangChain4j's PromptTemplate to create dynamic AI prompts with structured placeholders.
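A sketch of a template with two placeholders; the template text and values are invented for illustration:

```java
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;

import java.util.Map;

public class PromptTemplateExample {

    public static void main(String[] args) {
        // {{placeholders}} are filled from the map passed to apply().
        PromptTemplate template = PromptTemplate.from(
                "Write a {{tone}} release note for version {{version}} of {{project}}.");

        Prompt prompt = template.apply(Map.of(
                "tone", "friendly",
                "version", "1.2.0",
                "project", "LangChain4j"));

        System.out.println(prompt.text());
        // prompt.toUserMessage() turns it into a UserMessage for a ChatModel call.
    }
}
```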
Learn how to enable request and response logging in LangChain4j applications for debugging and monitoring.
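Wire-level logging is typically just two builder flags; this sketch assumes an SLF4J implementation (e.g. Logback) is on the classpath so the log output actually appears:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class LoggingExample {

    public static void main(String[] args) {
        // logRequests/logResponses print the full HTTP exchange through SLF4J.
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .logRequests(true)
                .logResponses(true)
                .build();

        model.chat("Hello!"); // request and response bodies show up in the logs
    }
}
```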
Learn how to extract structured JSON data from unstructured text using LangChain4j's ChatModel API with JSON Schema.
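A hedged sketch of schema-constrained extraction, assuming a 1.x LangChain4j with the ResponseFormat / JsonSchema request API and an Ollama build recent enough to support structured outputs; the field names and input text are illustrative:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.model.chat.request.ResponseFormatType;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.chat.request.json.JsonSchema;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class JsonExtractionExample {

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        // Describe the exact shape of the JSON we want back.
        ResponseFormat responseFormat = ResponseFormat.builder()
                .type(ResponseFormatType.JSON)
                .jsonSchema(JsonSchema.builder()
                        .name("Person")
                        .rootElement(JsonObjectSchema.builder()
                                .addStringProperty("name")
                                .addIntegerProperty("age")
                                .addStringProperty("city")
                                .required("name", "age", "city")
                                .build())
                        .build())
                .build();

        ChatRequest request = ChatRequest.builder()
                .messages(UserMessage.from(
                        "Anna is a 29-year-old engineer living in Berlin."))
                .responseFormat(responseFormat)
                .build();

        ChatResponse response = model.chat(request);
        System.out.println(response.aiMessage().text()); // e.g. {"name":"Anna","age":29,"city":"Berlin"}
    }
}
```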
Learn how to use advanced JSON Schema elements like arrays, enums, and complex structures with Ollama's phi3 model for structured data extraction.
Learn how to use StreamingChatModel in LangChain4j to handle AI responses token-by-token for better user experience.
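A sketch of token-by-token streaming, assuming the 1.x handler names (onPartialResponse / onCompleteResponse); older releases use StreamingResponseHandler with onNext / onComplete instead:

```java
import dev.langchain4j.model.chat.StreamingChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import dev.langchain4j.model.ollama.OllamaStreamingChatModel;

import java.util.concurrent.CountDownLatch;

public class StreamingExample {

    public static void main(String[] args) throws InterruptedException {
        StreamingChatModel model = OllamaStreamingChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        CountDownLatch done = new CountDownLatch(1);

        // Tokens arrive on a background thread as they are generated.
        model.chat("Tell me a short joke about Java.", new StreamingChatResponseHandler() {

            @Override
            public void onPartialResponse(String partialResponse) {
                System.out.print(partialResponse); // print each chunk as it arrives
            }

            @Override
            public void onCompleteResponse(ChatResponse completeResponse) {
                System.out.println();
                done.countDown();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
                done.countDown();
            }
        });

        done.await(); // keep main() alive until streaming finishes
    }
}
```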
Learn how to programmatically cancel an AI response stream in LangChain4j based on specific token conditions.
Quick introduction to ChatMemory in LangChain4j with a minimal Java example.
Difference between ChatMemory and using a plain List for conversation state in LangChain4j.
Using MessageWindowChatMemory to limit conversation history by message count in LangChain4j.
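A minimal sketch of a message-count window; the conversation is invented:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;

public class MessageWindowExample {

    public static void main(String[] args) {
        // Keep only the 4 most recent messages; older ones are evicted automatically.
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(4);

        memory.add(UserMessage.from("Hi, my name is Alice."));
        memory.add(AiMessage.from("Hello Alice, how can I help?"));
        memory.add(UserMessage.from("What is LangChain4j?"));
        memory.add(AiMessage.from("A Java library for building LLM applications."));
        memory.add(UserMessage.from("Thanks!")); // pushes the oldest message out

        // Pass memory.messages() to ChatModel.chat(...) to give the model the conversation so far.
        memory.messages().forEach(System.out::println);
    }
}
```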
Using TokenWindowChatMemory to control memory size by token count in LangChain4j.
Persisting ChatMemory using a ChatMemoryStore in LangChain4j.
Creating a custom ChatMemoryStore to persist chat history into a file using Java Files API.
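A sketch of such a store, assuming the built-in ChatMessageSerializer / ChatMessageDeserializer helpers for the JSON round trip; the chat-memory directory name is arbitrary:

```java
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.ChatMessageDeserializer;
import dev.langchain4j.data.message.ChatMessageSerializer;
import dev.langchain4j.store.memory.chat.ChatMemoryStore;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Persists each conversation as a JSON file named after its memory id.
public class FileChatMemoryStore implements ChatMemoryStore {

    private final Path directory = Path.of("chat-memory");

    @Override
    public List<ChatMessage> getMessages(Object memoryId) {
        Path file = fileFor(memoryId);
        try {
            if (Files.notExists(file)) {
                return new ArrayList<>();
            }
            return new ArrayList<>(
                    ChatMessageDeserializer.messagesFromJson(Files.readString(file)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void updateMessages(Object memoryId, List<ChatMessage> messages) {
        try {
            Files.createDirectories(directory);
            Files.writeString(fileFor(memoryId), ChatMessageSerializer.messagesToJson(messages));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void deleteMessages(Object memoryId) {
        try {
            Files.deleteIfExists(fileFor(memoryId));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private Path fileFor(Object memoryId) {
        return directory.resolve(memoryId + ".json");
    }
}
```

The store then plugs into memory via something like MessageWindowChatMemory.builder().id("user-42").maxMessages(10).chatMemoryStore(new FileChatMemoryStore()).build().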
Learn how to implement Intent Classification using LangChain4j and Ollama to route user requests.
Learn how to classify user intent and extract entities using LangChain4j with a lightweight LLM setup.
Learn the basics of Retrieval-Augmented Generation (RAG) using LangChain4j with Ollama and Phi-3.
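A hedged sketch of the retrieve-then-generate loop without any AiServices plumbing, assuming an Ollama embedding model such as nomic-embed-text is available alongside phi3; the indexed facts are invented:

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.model.ollama.OllamaEmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingSearchRequest;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

import java.util.List;
import java.util.stream.Collectors;

public class BasicRagExample {

    public static void main(String[] args) {
        // Embedding model used to index and query the knowledge base
        // (assumes "nomic-embed-text" has been pulled into Ollama).
        EmbeddingModel embeddingModel = OllamaEmbeddingModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("nomic-embed-text")
                .build();

        // 1. Index a few facts into an in-memory vector store.
        InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();
        for (String fact : List.of(
                "Our office is open Monday to Friday, 9:00 to 17:00.",
                "Support tickets are answered within one business day.",
                "The cafeteria serves vegetarian options every day.")) {
            TextSegment segment = TextSegment.from(fact);
            store.add(embeddingModel.embed(segment).content(), segment);
        }

        // 2. Retrieve the segments most relevant to the user's question.
        String question = "When is the office open?";
        Embedding queryEmbedding = embeddingModel.embed(question).content();
        String context = store.search(EmbeddingSearchRequest.builder()
                        .queryEmbedding(queryEmbedding)
                        .maxResults(2)
                        .build())
                .matches().stream()
                .map(match -> match.embedded().text())
                .collect(Collectors.joining("\n"));

        // 3. Augment the prompt with the retrieved context and ask the chat model.
        ChatModel chatModel = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        String answer = chatModel.chat(
                "Answer using only this context:\n" + context + "\n\nQuestion: " + question);
        System.out.println(answer);
    }
}
```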
Learn how to implement a basic LangChain4j agent using tool calling without AiServices.
Learn how a LangChain4j agent chains multiple tool calls to complete a task.
Learn how to create a basic AI Service using LangChain4j with Ollama and the Phi-3 model.
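A minimal sketch, assuming the core langchain4j module (which provides AiServices) plus the same local Ollama model:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class BasicAiServiceExample {

    // A plain interface: LangChain4j generates the implementation at runtime.
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        Assistant assistant = AiServices.create(Assistant.class, model);

        System.out.println(assistant.chat("Explain AI Services in one sentence."));
    }
}
```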
An overview of AiServices methods, grouped by purpose.
Learn how to use SystemMessage and UserMessage annotations to control LLM behavior with AI Services.
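A sketch combining both annotations with template placeholders bound through @V; the translator use case is purely illustrative:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;

public class AnnotatedAiServiceExample {

    interface TranslationService {

        @SystemMessage("You are a professional translator. Reply with the translation only.")
        @UserMessage("Translate the following text into {{language}}: {{text}}")
        String translate(@V("text") String text, @V("language") String language);
    }

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        TranslationService translator = AiServices.create(TranslationService.class, model);

        System.out.println(translator.translate("Good morning, team!", "French"));
    }
}
```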
Learn how to use prompt templates with AI Services for dynamic content generation and structured prompts.
Learn how to get structured JSON responses from LLMs using AI Services and JSON mode with Ollama.
Learn how to implement stateful conversations with single user chat memory using AI Services and Ollama.
Learn how to implement per-user chat memory with AI Services for stateful conversations.
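A sketch of per-user memory keyed by @MemoryId; builder names follow the 1.x API (chatModel), while older releases use chatLanguageModel:

```java
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;

public class PerUserMemoryExample {

    interface Assistant {
        String chat(@MemoryId String userId, @UserMessage String message);
    }

    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatModel(model)
                // A separate memory instance is created per memoryId (here, per user).
                .chatMemoryProvider(userId -> MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(assistant.chat("alice", "Hi, my name is Alice."));
        System.out.println(assistant.chat("bob", "Hi, my name is Bob."));
        System.out.println(assistant.chat("alice", "What is my name?")); // remembers Alice only
    }
}
```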
Learn how to enable LLMs to call Java methods (tools) through AI Services for extended capabilities.
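A sketch of exposing Java methods as tools via @Tool; note that tool calling only works with models that support it, which phi3 on Ollama may not, so the model name here is just a placeholder:

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class ToolsExample {

    static class Calculator {

        @Tool("Adds two numbers")
        double add(double a, double b) {
            return a + b;
        }

        @Tool("Returns the square root of a number")
        double squareRoot(double x) {
            return Math.sqrt(x);
        }
    }

    interface MathAssistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // Swap in a tool-capable model if the LLM never issues tool calls.
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("phi3")
                .build();

        MathAssistant assistant = AiServices.builder(MathAssistant.class)
                .chatModel(model)
                .tools(new Calculator())
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(assistant.chat("What is the square root of 475695037565?"));
    }
}
```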
Learn how to implement Retrieval-Augmented Generation (RAG) with AI Services for context-aware responses.
Learn how to implement token-by-token streaming with AI Services for real-time response generation.
Learn how to implement automatic content moderation with AI Services to filter inappropriate content.
Learn how to chain multiple AI Services together to build complex LLM-powered applications.