How to Run Open-Source LLMs Locally with Ollama: A Step-by-Step Guide
This article explains what Ollama is, how to download it for your operating system, and walks through command-line examples of running Meta's LLaMA 2 and the multimodal LLaVA model locally, putting the power of open-source large language models on your own computer.
What is Ollama?
Ollama is a free, open-source project that lets you run open-source large language models (LLMs) locally on Linux, Windows, or macOS. It is a command-line interface (CLI) tool that downloads LLMs such as LLaMA 3, Mixtral, and LLaVA and runs them privately on your machine, similar to the way Docker runs containers.
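Under this Docker-like workflow, a handful of subcommands cover most day-to-day use. Here is a minimal sketch using Ollama's standard CLI (model names are examples; output varies by version):

# Download a model without starting a chat session
ollama pull llama2

# List the models already downloaded to this machine
ollama list

# Start an interactive chat session with a model
ollama run llama2

# Delete a model to reclaim disk space
ollama rm llama2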
Downloading Ollama
Visit https://ollama.com/ and click the “download” button for your operating system. Windows is currently in preview.
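On Linux you can alternatively install from the terminal; the site documents a one-line install script (as always, review a script before piping it to a shell):

curl -fsSL https://ollama.com/install.sh | sh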
Running Models
Example 1: Run Meta's LLaMA 2 model:

ollama run llama2

The command pulls the model (≈3.8 GB on first run) and then drops you into an interactive prompt:
>>> What can you do for me?
As a responsible AI language model, I can provide information, generate ideas, assist with writing, translate text, summarize content, create creative pieces, help with language learning, and chat about any topic you choose.

Type /bye to exit the session.
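Beyond the interactive prompt, Ollama also serves a local HTTP API (on port 11434 by default), so the same model can be queried from scripts. A minimal sketch against the documented /api/generate endpoint:

# Ask llama2 a question through the local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What can you do for me?",
  "stream": false
}'

Setting "stream": false returns one complete JSON response instead of a stream of partial tokens.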
Running the multimodal LLaVA model
LLaVA accepts images as input:

ollama run llava

After the model is downloaded, include an image path in your prompt:
>>> What's in this image? ./Downloads/test-image-for-llava.jpeg
Added image './Downloads/test-image-for-llava.jpeg'
The image shows a person walking across a crosswalk at an intersection. Traffic lights are visible, a bus is parked on one side, and lane markings and a pedestrian signal are present in an urban setting.
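The same image-understanding capability is reachable through the local API: the /api/generate endpoint accepts an images array of base64-encoded files. A minimal sketch, reusing the example image path from above (on macOS, replace base64 -w0 FILE with base64 -i FILE):

# Base64-encode the image and send it to LLaVA via the local API
IMG=$(base64 -w0 ./Downloads/test-image-for-llava.jpeg)
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"What's in this image?\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"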
Conclusion
With Ollama you can experiment with powerful open-source LLMs such as LLaMA 2 and LLaVA directly on your own computer, making AI development accessible and fun.