Free and open-source alternative to OpenAI. Self-hosted, community-driven, local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs ggml, GGUF, GPTQ, ONNX, and TensorFlow-compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and more.
Official website: https://localai.io/
GitHub address: https://github.com/mudler/LocalAI
Chinese mirror (translated version): http://www.gitpp.com/llm/localai-cn
LocalAI is a free and open-source alternative to OpenAI. It acts as a drop-in replacement REST API, compatible with the OpenAI API specification, for local inferencing. It lets you run LLMs, generate images and audio (and more) locally or on-premises with consumer-grade hardware. No GPU required.
LocalAI is an open-source project that provides developers with a free, easy-to-use local artificial intelligence solution. Because it targets compatibility with OpenAI's API specification, developers can use similar functionality without relying on OpenAI's services. LocalAI supports multiple model families, including language models, image generation models, audio generation models, and more.
Key features of LocalAI include:
- Open source: the source code can be freely viewed, modified, and distributed by anyone.
- Free to use: there are no usage fees; developers can use its functionality at no cost.
- Local inference: inference runs on local hardware, with no need to connect to cloud services or remote servers.
- Consumer-grade hardware: LocalAI runs on consumer-grade hardware, without requiring high-performance GPUs or special hardware support.
- Model compatibility: multiple model families are supported, so developers can choose the model that fits their needs.
- API compatibility: the interface is designed to be compatible with OpenAI's API specification, making it relatively easy to migrate existing OpenAI code to LocalAI (see the example after this list).
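Because the endpoints follow the OpenAI specification, an existing OpenAI request usually only needs its base URL changed. A minimal sketch, assuming a model named gpt-3.5-turbo is configured locally and the server listens on the default port 8080:
# same payload you would send to api.openai.com, pointed at the local server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7
  }'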
The emergence of LocalAI provides an alternative choice for developers who wish to avoid using centralized services or who want to keep their data in a local environment for privacy and security reasons. However, it is important to note that while LocalAI offers some conveniences, it may not fully replace OpenAI's services, especially in terms of model performance and functionality. Developers should evaluate LocalAI based on their own needs and expectations.
Getting Started:
The easiest way to run LocalAI is to use Docker Compose or Docker (to build locally, see the build section).
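The fastest path is the prebuilt container image. A minimal sketch, assuming the all-in-one CPU image tag published on Docker Hub (adjust the tag to your hardware and needs):
# pull and run the prebuilt CPU image; the API is served on port 8080
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu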
LocalAI requires at least one model file, a configuration YAML file, or both. The configuration file lets you customize model defaults and model-specific settings.
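For example, a minimal model definition could be written like this (a sketch only: the model name, backend, and file name are illustrative assumptions, with field names following LocalAI's model config schema):
# create a models directory and a minimal, hypothetical model definition
mkdir -p models
cat > models/gpt-3.5-turbo.yaml <<'EOF'
name: gpt-3.5-turbo        # model name exposed via the API
backend: llama             # inference backend; depends on your model
parameters:
  model: ggml-model.bin    # model file placed in the models directory
context_size: 1024
EOF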
Container image requirements:
- Docker or Podman, or a container engine
To build the LocalAI container image locally, for example:
# build the image
docker build -t localai .
# run it, exposing the API on the default port 8080
docker run -p 8080:8080 localai
Local:
To build LocalAI locally, you need to meet the following requirements:
- Golang >= 1.21
- CMake/make
- GCC
- gRPC
Build LocalAI with the following commands:
git clone https://github.com/mudler/LocalAI
cd LocalAI
make build
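After a successful build, you can start the server and check that the API responds. A minimal sketch, assuming the binary produced by make build is named local-ai and listens on the default port 8080:
# start the server, pointing it at your models directory
./local-ai --models-path ./models
# in another shell, verify the API is up
curl http://localhost:8080/v1/models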