
v0studio-python SDK

v1.8.2

Official Python SDK for the v0studio AI platform, with async support

Installation
Install v0studio-python in your Python environment

pip

pip install v0studio

Poetry

poetry add v0studio

Conda

conda install -c conda-forge v0studio

Development Install

pip install v0studio[dev]

Includes development dependencies like numpy, pandas, and scikit-learn

System Requirements

  • Python 3.8 or higher
  • 64-bit operating system
  • 4GB+ RAM recommended
  • CUDA support optional (for GPU acceleration)
Quick Start
Get started with v0studio-python in minutes
import v0studio as v0s

# Initialize the client
client = v0s.V0Studio(
    server_url="http://localhost:1234",
    api_key="your-api-key"  # Optional for local usage
)

# Simple text completion
def basic_completion():
    try:
        response = client.completions.create(
            model="llama-3-8b-instruct",
            prompt="Explain machine learning in simple terms:",
            max_tokens=200,
            temperature=0.7
        )

        print(response.choices[0].text)
    except v0s.V0StudioError as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    basic_completion()
Examples
Common use cases and implementation patterns
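A common pattern beyond basic completions is consuming a streamed response (stream=True). The exact chunk type the SDK yields is not documented here, so this sketch assumes plain text chunks; fake_stream is a stand-in for the iterator a streaming call would return, and the real SDK call is shown commented out.

```python
def fake_stream():
    """Stand-in for a streaming response: yields text chunks."""
    for chunk in ["Machine ", "learning ", "finds ", "patterns."]:
        yield chunk

# chunks = client.completions.create(
#     model="llama-3-8b-instruct",
#     prompt="Explain machine learning:",
#     max_tokens=200,
#     stream=True,
# )

def collect(chunks):
    """Accumulate streamed chunks into the full completion text."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)  # or print(chunk, end="") for live output
    return "".join(parts)

text = collect(fake_stream())
```

Streaming lets you display partial output as it arrives instead of waiting for the full completion.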
API Reference
Complete reference for all SDK classes and methods

Core Classes

V0Studio

Main synchronous client

  • completions
  • chat
  • embeddings
  • models
  • fine_tuning
AsyncV0Studio

Async client for concurrent operations

  • async completions
  • async chat
  • async embeddings
  • async models
  • async fine_tuning

Methods

completions.create(**kwargs)

Generate text completions

model: str, prompt: str, max_tokens: Optional[int] = None, temperature: Optional[float] = None, stream: bool = False

chat.completions.create(**kwargs)

Create chat completions

model: str, messages: List[Dict[str, str]], max_tokens: Optional[int] = None, temperature: Optional[float] = None, stream: bool = False
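The messages parameter above is typed List[Dict[str, str]]. This sketch shows the payload shape; the system/user/assistant role names follow the common chat-API convention and are an assumption here, as is the commented-out call.

```python
# Each message is a {"role": ..., "content": ...} dict.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What does zip() do in Python?"},
]

# response = client.chat.completions.create(
#     model="llama-3-8b-instruct",
#     messages=messages,
#     max_tokens=150,
# )

# Sanity check: every entry carries both required keys.
assert all({"role", "content"} <= set(m) for m in messages)
```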

embeddings.create(**kwargs)

Generate text embeddings

model: str, input: Union[str, List[str]], encoding_format: str = "float"
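Since input accepts Union[str, List[str]], a batch of texts can be embedded in one call. The model name in the commented call is hypothetical; the cosine-similarity helper below is a generic way to compare the returned vectors, not part of the SDK.

```python
import math

# vectors = client.embeddings.create(
#     model="text-embedding-model",   # hypothetical model name
#     input=["first sentence", "second sentence"],
# )

def cosine_similarity(a, b):
    """Compare two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

cosine_similarity([1.0, 0.0], [1.0, 0.0])  # identical vectors
```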
Async Support
Leverage Python's asyncio for concurrent AI operations

The v0studio-python SDK provides full async support for high-performance applications that need to handle multiple AI requests concurrently.

import asyncio
import v0studio as v0s

# Async client for concurrent operations
async_client = v0s.AsyncV0Studio()

async def async_completion():
    try:
        response = await async_client.completions.create(
            model="llama-3-8b-instruct",
            prompt="Explain async programming in Python:",
            max_tokens=300,
            temperature=0.6
        )

        return response.choices[0].text
    except v0s.V0StudioError as e:
        print(f"Async error: {e}")
        return None

async def multiple_completions():
    prompts = [
        "What is Python?",
        "Explain list comprehensions",
        "How do decorators work?",
        "What are generators?"
    ]

    # Run multiple completions concurrently
    tasks = [
        async_client.completions.create(
            model="llama-3-8b-instruct",
            prompt=prompt,
            max_tokens=150
        ) for prompt in prompts
    ]

    responses = await asyncio.gather(*tasks)

    for prompt, response in zip(prompts, responses):
        print(f"Q: {prompt}")
        print(f"A: {response.choices[0].text}\n")

# Run async functions
asyncio.run(multiple_completions())

Benefits of Async

  • Handle multiple requests concurrently
  • Better resource utilization
  • Improved application responsiveness
  • Efficient for I/O-bound operations
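When firing off many concurrent requests, it is usually worth capping how many are in flight at once. This sketch uses asyncio.Semaphore for that; fake_request is a stand-in for async_client.completions.create, so the pattern can be shown without a running server.

```python
import asyncio

async def fake_request(prompt):
    """Stand-in for an awaitable SDK call."""
    await asyncio.sleep(0)  # placeholder for network I/O
    return f"answer to: {prompt}"

async def bounded_completions(prompts, limit=2):
    sem = asyncio.Semaphore(limit)  # at most `limit` requests in flight

    async def one(prompt):
        async with sem:
            return await fake_request(prompt)

    # gather preserves input order even though tasks finish out of order
    return await asyncio.gather(*(one(p) for p in prompts))

results = asyncio.run(bounded_completions(["a", "b", "c"]))
```

Swapping fake_request for the real async client call keeps the same structure while protecting the server from unbounded concurrency.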
Configuration
Customize SDK behavior and connection settings
import v0studio as v0s
import os

# Basic configuration
client = v0s.V0Studio(
    server_url="http://localhost:1234",  # v0studio server URL
    api_key="your-api-key",              # Optional API key
    timeout=30.0,                        # Request timeout (seconds)
    max_retries=3,                       # Number of retry attempts
    headers={                            # Additional headers
        "User-Agent": "my-app/1.0.0"
    }
)

# Advanced configuration
client = v0s.V0Studio(
    server_url=os.getenv("V0STUDIO_SERVER_URL", "http://localhost:1234"),
    api_key=os.getenv("V0STUDIO_API_KEY"),
    timeout=60.0,
    max_retries=5,
    backoff_factor=2.0,                  # Exponential backoff multiplier
    verify_ssl=True,                     # SSL certificate verification
    proxy={                              # Proxy settings
        "http": "http://proxy.example.com:8080",
        "https": "https://proxy.example.com:8080"
    }
)

# Environment variables
# V0STUDIO_SERVER_URL
# V0STUDIO_API_KEY
# V0STUDIO_TIMEOUT
# V0STUDIO_MAX_RETRIES
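The environment variables listed above can be gathered into constructor keyword arguments. This sketch uses the same defaults as the earlier examples; parsing V0STUDIO_TIMEOUT and V0STUDIO_MAX_RETRIES to numbers is an assumption about how the SDK expects them, and the commented construction call mirrors the examples above.

```python
import os

def config_from_env(env=os.environ):
    """Build client keyword arguments from the documented env vars."""
    return {
        "server_url": env.get("V0STUDIO_SERVER_URL", "http://localhost:1234"),
        "api_key": env.get("V0STUDIO_API_KEY"),
        "timeout": float(env.get("V0STUDIO_TIMEOUT", "30.0")),
        "max_retries": int(env.get("V0STUDIO_MAX_RETRIES", "3")),
    }

# client = v0s.V0Studio(**config_from_env())
cfg = config_from_env({})  # empty env falls back to defaults
```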
Troubleshooting
Common issues and solutions for Python integration

Import Errors

If you encounter import errors:

  • Ensure v0studio is installed: pip show v0studio
  • Check Python version compatibility (3.8+)
  • Verify virtual environment activation
  • Try reinstalling: pip uninstall v0studio && pip install v0studio

Connection Issues

For connection problems:

  • Verify v0studio server is running
  • Check firewall and network settings
  • Test connectivity: curl http://localhost:1234/v1/models
  • Enable debug logging: v0s.set_debug(True)

Performance Issues

To optimize performance:

  • Use async client for concurrent requests
  • Implement connection pooling
  • Cache embeddings when possible
  • Monitor memory usage with large models
  • Use streaming for long responses
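One way to cache embeddings for repeated inputs is functools.lru_cache. embed_one below is a stand-in for a real client.embeddings.create call (its placeholder vector is not a real embedding); lru_cache needs hashable arguments, which is why it wraps a single-string function returning a tuple.

```python
from functools import lru_cache

calls = 0  # counts how many times the "request" actually runs

@lru_cache(maxsize=1024)
def embed_one(text):
    """Embed one string, caching the result for repeated inputs."""
    global calls
    calls += 1
    # vec = client.embeddings.create(model=..., input=text)
    return tuple(float(ord(c)) for c in text)  # placeholder vector

embed_one("hello")
embed_one("hello")  # served from the cache; no second request
```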

Debugging

Enable detailed logging:

import logging
import v0studio as v0s

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
v0s.set_debug(True)

# The SDK will now log detailed request/response information