
Ollama

Run large language models locally with a simple CLI

What It Does

A command-line tool for running LLMs locally. Pull models such as Llama, Mistral, or Gemma with a single command and run them entirely on your own hardware.

Available On

  • Terminal / CLI - Mac, Windows, Linux
  • Local API - OpenAI-compatible REST API
  • Integrations - Used by Cursor, Aider, Open WebUI, and dozens of other tools
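Because the local API is OpenAI-compatible, any client that can POST JSON can talk to it. A minimal sketch using only the Python standard library, assuming the default endpoint at localhost:11434 and a model named llama3 that you have already pulled:

```python
import json
from urllib import request

def build_chat_request(model, prompt):
    """Return (url, payload_bytes) for Ollama's OpenAI-compatible chat endpoint."""
    url = "http://localhost:11434/v1/chat/completions"
    payload = {
        "model": model,  # any model you've pulled, e.g. "llama3"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode("utf-8")

if __name__ == "__main__":
    url, body = build_chat_request("llama3", "Why is the sky blue?")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    # Requires the Ollama service to be running locally (`ollama serve`).
    with request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

The same request works with the official OpenAI client libraries by pointing their base URL at http://localhost:11434/v1, which is what tools like Aider and Open WebUI do under the hood.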

Key Features

  • One-command setup - ollama pull llama3 and you're running
  • Model library - Hundreds of models available to pull
  • OpenAI-compatible API - Works as a drop-in local backend
  • Modelfile - Customize models with system prompts and parameters
  • Lightweight - Minimal overhead, runs as a background service
  • GPU support - Metal (Mac), CUDA (Nvidia), ROCm (AMD)
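The Modelfile feature mentioned above customizes a base model declaratively. A minimal sketch, assuming a pulled llama3 base model; the system prompt and temperature value are illustrative:

```
FROM llama3
SYSTEM "You are a concise code-review assistant."
PARAMETER temperature 0.3
```

Save this as a file named Modelfile, then build and run the customized model with `ollama create reviewer -f Modelfile` followed by `ollama run reviewer` (the name "reviewer" is an arbitrary example).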
