Download and Run Ollama with LLaMA 2 and LLaVA Locally

This tutorial walks you through downloading Ollama, an open‑source LLM platform, and demonstrates how to run the Meta LLaMA 2 text model and the multimodal LLaVA model on your own computer, including command‑line usage and image‑based queries.

Ollama is a powerful tool for interacting with open‑source large language models (LLMs) on your local machine.

Unlike closed‑source services such as ChatGPT, the open‑source models that Ollama runs offer transparency and customizability, making it a valuable resource for developers and AI enthusiasts.

How to Download Ollama

Visit the official Ollama website https://ollama.com/ and click the “Download” button. Ollama supports three operating systems: macOS, Linux, and Windows; the Windows version is currently in preview.

[Image: Ollama.com homepage]
[Image: Ollama download page]

Select the installer for your OS; once the download completes, run it to install. Linux users should execute the on‑screen command instead of downloading an installer.
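
On Linux, installation is a one‑line shell script rather than a download. At the time of writing, the command shown on the site is the one below; check the page for the current version before piping anything into your shell:

curl -fsSL https://ollama.com/install.sh | sh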

How to Run Ollama

Below are examples showing how to use different open‑source models with Ollama.

Running the Meta LLaMA 2 Model

Llama 2 is Meta’s open‑source LLM. Use the following command to download and start the model:

ollama run llama2

The download process will display output similar to:

pulling manifest
pulling 8934d96d3f08... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 3.8 GB
... (additional layer logs) ...
success
>>> Send a message (/? for help)

After the model is ready, you can type a prompt, e.g., “What can you do for me?” and receive a response outlining capabilities such as answering questions, generating ideas, writing assistance, translation, summarization, creativity, language learning, and casual chat.
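
You don’t have to stay in the interactive prompt: the CLI also accepts a prompt as a command‑line argument and prints a single response, for example:

ollama run llama2 "What can you do for me?"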

To exit the interactive session, type /bye (or press Ctrl+D).
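
After exiting, a couple of housekeeping commands are handy; both are part of the standard Ollama CLI:

ollama list        # show locally downloaded models and their sizes
ollama rm llama2   # delete a downloaded model to free disk space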

Running the Multimodal LLaVA Model

LLaVA is an open‑source multimodal LLM that can accept images as input.

Download and start the model with:

ollama run llava

After the download finishes, you can send an image file path to the model:

>>> What's in this image? ./Downloads/test-image-for-llava.jpeg
Added image './Downloads/test-image-for-llava.jpeg'
The image shows a person walking across a crosswalk at an intersection. There are traffic lights visible, and the street has a bus parked on one side. The road is marked with lane markings and a pedestrian crossing signal. The area appears to be urban and there are no visible buildings or structures in the immediate vicinity of the person.
>>> Send a message (/? for help)

This demonstrates the model’s visual understanding. You can experiment with other images and prompts to explore its capabilities further.
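
The REPL is not the only interface: Ollama also serves a local HTTP API (on port 11434 by default) that you can script against. Here is a minimal sketch of sending an image to LLaVA through the /api/generate endpoint; the image is base64‑encoded first, and the file path is simply reused from the example above:

# Base64-encode the image, then POST it together with a prompt
IMG=$(base64 < ./Downloads/test-image-for-llava.jpeg | tr -d '\n')
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "What is in this image?",
  "images": ["'"$IMG"'"],
  "stream": false
}'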

Conclusion

With Ollama, you can easily try powerful open‑source LLMs like LLaMA 2 and LLaVA on your own computer, opening up a fun and exciting world of local AI experimentation.

Written by 21CTO

21CTO (21CTO.com) offers developers a community, training, and services, making it your go‑to learning and service platform.
