Ollama
Run large language models locally with a simple CLI
What It Does
Command-line tool for running LLMs locally. Pull models like Llama, Mistral, or Gemma with a single command and run them entirely on your hardware.
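The single-command workflow might look like the following sketch, assuming Ollama is already installed and using llama3 as one example model name from the library (these are standard `ollama` subcommands, but model names and output vary):

```shell
ollama pull llama3   # download the model weights to your machine
ollama run llama3    # start an interactive chat session in the terminal
ollama list          # show the models installed locally
```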
Available On
- Terminal / CLI - Mac, Windows, Linux
- Local API - OpenAI-compatible REST API
- Integrations - Used by Cursor, Aider, Open WebUI, and dozens of other tools
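Because the local API speaks the OpenAI chat-completions dialect, any OpenAI client can point at it. A minimal stdlib-only sketch, assuming Ollama is running on its default port (11434) and the llama3 model has been pulled; `build_chat_request` is a hypothetical helper name, not part of Ollama:

```python
import json
import urllib.request

# Assumption: Ollama's OpenAI-compatible endpoint on the default local port.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3", "Why is the sky blue?")
# Sending it requires a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Tools like Aider or Open WebUI do essentially this: they swap their OpenAI base URL for the local one and talk to Ollama unchanged.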
Key Features
- One-command setup - ollama pull llama3 and you're running
- Model library - Hundreds of models available to pull
- OpenAI-compatible API - Works as a drop-in local backend
- Modelfile - Customize models with system prompts and parameters
- Lightweight - Minimal overhead, runs as a background service
- GPU support - Metal (Mac), CUDA (Nvidia), ROCm (AMD)
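Modelfile customization works much like a Dockerfile: start from a base model, then layer on a system prompt and parameters. A minimal sketch, where my-assistant and the prompt text are hypothetical:

```
FROM llama3
SYSTEM "You are a concise assistant that answers in one short paragraph."
PARAMETER temperature 0.3
```

Building and running the customized model uses the standard commands: `ollama create my-assistant -f Modelfile`, then `ollama run my-assistant`.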
Related Tools
Explore similar tools in this category.
01 Coding & Development - LM Studio
Desktop app for running local LLMs with a friendly GUI
02 Coding & Development - Claude Code
Anthropic's agentic coding assistant for the terminal
03 Coding & Development - Replit
Browser-based IDE with AI coding assistant
04 Coding & Development - Codex
OpenAI's cloud-based agentic coding environment with frontier models