AI - Getting Started With Ollama

[Last Updated: Jan 6, 2026]

Ollama is a runtime for running Large Language Models (LLMs) locally on your own machine. It lets developers work with AI models without cloud APIs or external services, and, once a model has been downloaded, without internet access.

What Ollama Is Used For

  • Running LLMs locally
  • Experimenting with prompts and models
  • Building local AI-powered developer tools
  • Learning how LLMs behave without cloud dependencies

System Requirements

  • macOS, Linux, or Windows
  • At least 8 GB RAM (16 GB recommended)
  • Terminal / command line access

Download and Install Ollama

macOS

Install Ollama using Homebrew:

brew install ollama

Or download the installer from the official website.

Linux

Install using the official install script:

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download the Windows installer from the official Ollama website and follow the setup steps. Once installed, Ollama runs as a background service.
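
The ollama CLI talks to this background server on every platform. If the server is not running for any reason, you can start it manually from a terminal:

ollama serve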

Verify Installation

After installation, verify that Ollama is working:

ollama --version
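
This prints the installed version. The exact number will differ, but the output looks something like the following (the version shown here is illustrative):

ollama version is 0.5.7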

Running Your First Model

Ollama automatically downloads models the first time you run them.

ollama run llama3

This command:

  • Downloads the model (if not already present)
  • Starts an interactive chat session

Type your prompt and press Enter. Use Ctrl + D or /bye to exit.
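
Models in the Ollama library are also published with size tags; llama3, for example, comes in 8B and 70B parameter variants. Append the tag after a colon to pick one explicitly:

# run the 8-billion-parameter variant
ollama run llama3:8b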

Basic Ollama Commands

List Installed Models

ollama list

Download a Model (Without Running)

ollama pull llama3

Run a Model with a One-Off Prompt

ollama run llama3 "Explain Java records in simple terms"

Remove a Model

ollama rm llama3
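
Two more commands worth knowing in recent Ollama releases:

# list models currently loaded in memory
ollama ps

# show details (parameters, template, license) for an installed model
ollama show llama3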

Where Models Are Stored

Ollama stores models locally on your machine. Once downloaded, models can be reused offline without re-downloading.
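
Per the Ollama documentation, the default locations are:

# macOS:   ~/.ollama/models
# Linux:   /usr/share/ollama/.ollama/models (when installed as a service)
# Windows: C:\Users\<username>\.ollama\models

You can point Ollama at a different directory by setting the OLLAMA_MODELS environment variable before starting the server:

export OLLAMA_MODELS=/path/to/models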

Why Developers Use Ollama

  • No API keys or usage limits
  • Full control over data and prompts
  • Works offline
  • Ideal for local development and experimentation (see the API example below)
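
Beyond the interactive CLI, Ollama exposes a local HTTP API (by default on http://localhost:11434) that scripts and tools can call. A minimal sketch, assuming the llama3 model from earlier is already pulled:

# one-shot, non-streaming completion against the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain Java records in simple terms",
  "stream": false
}'

The reply is a JSON object whose response field holds the generated text; with streaming left on (the default), the server instead returns partial responses line by line.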

Ollama Models

Visit the Ollama library page (https://ollama.com/library) for a complete list of supported models (Llama, Phi, Qwen, etc.).

Summary

Ollama is one of the easiest ways to start running LLMs locally. With a single command, developers can download and interact with modern language models directly from the terminal, making it a great entry point for local AI development.

See Also