v0studio is a desktop app for developing and experimenting with LLMs locally on your computer. Get started in minutes!
- **Chat**: chat with your models using a familiar messaging interface.
- **Model discovery**: search and download models directly from Hugging Face with built-in integration.
- **Local server**: a local server that provides OpenAI-compatible endpoints for your apps.
v0studio generally supports modern Mac, Windows, and Linux machines.
For detailed requirements, consult our System Requirements page.
v0studio supports running LLMs on Mac, Windows, and Linux using llama.cpp.
To manage LM Runtimes, press ⌘ + Shift + R on Mac or Ctrl + Shift + R on Windows/Linux.
On Apple Silicon Macs, v0studio also supports running LLMs using Apple's MLX
framework for optimized performance.
Download v0studio for your operating system and run the installer. Follow the setup wizard to complete installation.
Browse and download LLMs directly within v0studio. Popular starter models include Llama, Phi, and DeepSeek R1.
Load your model and start chatting! You can also attach documents to your chat messages and interact with them entirely offline using Retrieval-Augmented Generation (RAG).
Use v0studio's OpenAI-compatible API to integrate with your existing applications and workflows.
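As a minimal sketch of talking to the local server, the snippet below builds an OpenAI-style chat-completions request and POSTs it with the Python standard library. The base URL, port, and model name are assumptions for illustration; check v0studio's server settings for the actual address and use a model you have downloaded.

```python
import json
import urllib.request

# Assumed local address -- verify the host/port in v0studio's server tab.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(model, prompt):
    """Send the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# With the server running and a model loaded, you could call:
# print(chat("your-local-model", "Hello!"))
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can also be pointed at the local server by overriding their base URL.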
Run everything completely offline with no internet connection required after initial setup.
Use the `v0s` command-line tool for scripting and automation.