How to Run Meta’s New Llama 3.1 Model Locally with Ollama

Meta’s latest open‑source model, Llama 3.1, comes in 8B, 70B, and 405B sizes and holds its own against top competitors. This guide walks through running the 8B version locally with Ollama, step by step.

Programmer DD

Meta released its latest open‑source large model, Llama 3.1.

The model comes in three sizes: 8B, 70B, and 405B.


Meta evaluated the model on more than 150 benchmark datasets covering a wide range of languages and performed extensive human evaluations against competing models in real‑world scenarios.

The 405B version is competitive with the strongest current models, such as GPT‑4, GPT‑4o, and Claude 3.5 Sonnet.

The 8B and 70B versions likewise match or exceed competing models of similar parameter scale.

Running Llama 3.1 with Ollama

Below is a step‑by‑step guide to running the 8B version locally with Ollama; no prior technical background is required.

Install Ollama from the official website, https://ollama.com/.

After installation, open a terminal and run ollama run llama3.1. If the model is not yet present, Ollama downloads it automatically.
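The same CLI call can also be scripted. Below is a minimal sketch that invokes ollama run non‑interactively with a prompt; it assumes Ollama is installed on your PATH and the llama3.1 model has already been pulled.

```python
import subprocess

def ollama_cmd(model: str, prompt: str) -> list[str]:
    # Build a non-interactive `ollama run` invocation: Ollama prints
    # the model's answer to stdout and then exits.
    return ["ollama", "run", model, prompt]

if __name__ == "__main__":
    try:
        # Assumes Ollama is installed and llama3.1 has been pulled.
        result = subprocess.run(
            ollama_cmd("llama3.1", "Say hello in five words or fewer."),
            capture_output=True, text=True, timeout=300,
        )
        print(result.stdout)
    except (OSError, subprocess.TimeoutExpired) as exc:
        print("Could not invoke Ollama:", exc)
```

This is handy for one‑off prompts in scripts, without keeping an interactive session open.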

Wait for the download to complete; progress is shown in the terminal.

Once the “Send a message” prompt appears, you can chat with the model; for example, ask it to teach you Java.
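While Ollama is running, it also serves a local HTTP API (on port 11434 by default), so you can query the model programmatically instead of through the interactive prompt. A minimal sketch using only the standard library, assuming the llama3.1 model pulled above:

```python
import json
import urllib.request

# Ollama's default local generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one complete JSON reply
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    try:
        # Requires Ollama to be running locally (e.g. via `ollama run llama3.1`).
        print(ask("llama3.1", "Teach me the basics of Java in three bullet points."))
    except OSError as exc:
        print("Could not reach Ollama; is it running locally?", exc)
```

The same endpoint accepts any model you have pulled; just change the model field in the payload.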

Easter Egg

Try asking the model “Who are you?” to see its response.

Tags: Ollama, Meta AI, Llama 3.1
Written by

Programmer DD

A tinkering programmer and author of "Spring Cloud Microservices in Action"
