Artificial Intelligence · 7 min read

Setting Up and Using LocalAI as an Open‑Source Alternative to the ChatGPT API

LocalAI is an open‑source, cost‑effective alternative to the ChatGPT API. It lets you download and run thousands of language models locally via Docker or compiled binaries, offering privacy, customization, and easy integration into existing projects through an OpenAI‑compatible API.


ChatGPT API costs can be high, especially with GPT‑4, prompting the search for open‑source alternatives.

LocalAI offers a solution; it can run thousands of open‑source models locally, avoiding cloud costs and preserving privacy.

Why Choose LocalAI?

Cost efficiency: LocalAI is more cost‑effective than cloud services, especially at scale.

Privacy: LocalAI does not send your data to the cloud, providing higher privacy and security.

Customization: LocalAI allows you to train or fine‑tune your own models to meet specific needs.

Open‑source experimentation: Supports various open‑source models, ensuring broad functionality and compatibility with existing projects.

Setting Up LocalAI

To set up LocalAI, follow these steps:

Download Model

First, download a model from Hugging Face at https://huggingface.co/. After downloading, copy the model file into a directory named models.

Run with Docker

If you are on Linux or macOS, you can run LocalAI using Docker:

<code>docker run -p 8080:8080 -v $PWD/models:/models -ti --rm \
  quay.io/go-skynet/local-ai:latest \
  --models-path /models --context-size 700 --threads 4</code>

This starts a Docker container that includes the LocalAI API server.
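If you restart the server often, the same run can be described declaratively. A minimal docker-compose sketch of the command above (the service name localai is an arbitrary choice, not something LocalAI requires):

```yaml
version: "3.6"
services:
  localai:
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    command: ["--models-path", "/models", "--context-size", "700", "--threads", "4"]
```

With this file in place, `docker compose up` is equivalent to the one-off `docker run` invocation.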

Build Binary (Apple Silicon)

If you are using Apple Silicon, you need to build your own binary:

<code>git clone https://github.com/go-skynet/LocalAI
cd LocalAI
go build
</code>

After building, start the API server with:

<code>$ ./local-ai --models-path=./models/ --debug=true</code>

Test API Server

Now you can test the API server using curl:

<code>$ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
    "model": "llama2-13-2q-chat.gguf",
    "prompt": "A long time ago in a galaxy far, far away",
    "temperature": 0.7
}'
</code>

This returns a JSON response containing the model's generated text.
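The response follows the OpenAI completions schema, so extracting the generated text is a matter of indexing into choices. A minimal sketch; the payload below is an illustrative example of the response shape, not actual model output:

```python
import json

# Illustrative response in the OpenAI completions shape; a real
# reply comes back from the curl call above.
raw = '''{
  "object": "text_completion",
  "model": "llama2-13-2q-chat.gguf",
  "choices": [
    {"index": 0, "text": " there lived a smuggler named Han.", "finish_reason": "stop"}
  ]
}'''

response = json.loads(raw)
# The generated text lives in choices[0].text for /v1/completions
generated = response["choices"][0]["text"]
print(generated)
```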

Replace Model Name

In the curl command, replace llama2-13-2q-chat.gguf with the filename of the model you downloaded from Hugging Face.

Integrate LocalAI into Your Project

LocalAI can be easily integrated with existing code by switching the endpoint URL to your LocalAI instance. Replace the following two items in the OpenAI client code:

openai.base_url: set to your LocalAI URL, including the /v1 path.

openai.api_key: set to any placeholder string; LocalAI does not validate the key.

<code>import openai

# API key does not need to be valid
openai.base_url = "http://localhost:8080/v1"
openai.api_key = "sk-XXXXXXXXXXXXXXXXXXXX"

completion = openai.chat.completions.create(
    model="llama2-13-2q-chat.gguf",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?"
        },
    ],
)

print(completion.choices[0].message.content)
</code>

Model Library

LocalAI also supports a model library feature, allowing you to integrate additional language models by defining the PRELOAD_MODELS environment variable.

For example, to serve GPT4All-J under the default name gpt-3.5-turbo:

<code>$ export PRELOAD_MODELS='[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]'
</code>

When LocalAI starts with this variable, the API server will automatically download and cache the specified model files.
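Because the variable holds a JSON array, building it programmatically avoids shell-quoting mistakes. A short sketch that constructs the same value as the export above:

```python
import json
import os

# Map the gallery entry for GPT4All-J onto the name "gpt-3.5-turbo",
# so clients requesting gpt-3.5-turbo get the local model instead.
preload = [
    {
        "url": "github:go-skynet/model-gallery/gpt4all-j.yaml",
        "name": "gpt-3.5-turbo",
    }
]
os.environ["PRELOAD_MODELS"] = json.dumps(preload)
print(os.environ["PRELOAD_MODELS"])
```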

Browse all available models at https://localai.io/models/.

LocalAI provides a versatile, cost‑effective, and privacy‑focused AI solution for hobbyists and developers alike.

Tags: Docker, privacy, open-source, API, AI inference, LocalAI
Written by

php中文网 Courses

php中文网's platform for the latest courses and technical articles, helping PHP learners advance quickly.
