VoidAI is fully compatible with the OpenAI SDK. You can use all the features you’re familiar with by simply changing the base URL and API key.
Installation
Install the official OpenAI SDK if you haven't already:
pip install openai
Configuration
The only changes needed are:
- Set base_url (Python) / baseURL (Node.js) to https://api.voidai.app/v1
- Use your VoidAI API key instead of an OpenAI key
from openai import OpenAI

client = OpenAI(
    api_key="sk-voidai-your_key_here",
    base_url="https://api.voidai.app/v1"
)
Supported Features
Chat Completions
Full support for chat completions including streaming, function calling, and tool use.
# Basic completion
response = client.chat.completions.create(
    model="gpt-5.1",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Streaming
stream = client.chat.completions.create(
    model="gpt-5.1",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
# Function calling: declare a tool the model may call
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-5.1",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto"
)

# Check if a tool was called
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    print(f"Function: {tool_call.function.name}")
    print(f"Arguments: {tool_call.function.arguments}")
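After detecting a tool call, the usual next step is to run the matching function locally and send the result back to the model in a "tool" role message. A minimal sketch of that dispatch step is below; the get_weather implementation is a hypothetical stand-in for your own code, and the commented follow-up request assumes the messages, response, and client variables from the example above.

```python
import json

# Hypothetical local implementation of the declared tool.
def get_weather(location: str) -> str:
    # A real app would call a weather API here.
    return f"Sunny, 22C in {location}"

def handle_tool_call(tool_call):
    """Run the tool the model asked for and build the `tool` role
    message that carries the result back to the model."""
    args = json.loads(tool_call.function.arguments)
    result = get_weather(**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    }

# Second round trip (sketch), reusing variables from the example above:
# tool_messages = [handle_tool_call(tc)
#                  for tc in response.choices[0].message.tool_calls]
# follow_up = client.chat.completions.create(
#     model="gpt-5.1",
#     messages=messages + [response.choices[0].message] + tool_messages,
#     tools=tools,
# )
```

The model then answers in natural language using the tool output it was given.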
Image Generation
import base64

response = client.images.generate(
    model="gpt-image-1",
    prompt="A sunset over mountains",
    size="1024x1024",
    n=1
)

# gpt-image-1 returns base64-encoded image data rather than a URL
image_bytes = base64.b64decode(response.data[0].b64_json)
with open("sunset.png", "wb") as f:
    f.write(image_bytes)
Audio Transcription
with open("audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )

print(transcript.text)
Text-to-Speech
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello, this is a test of text to speech."
)

response.stream_to_file("output.mp3")
Embeddings
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Your text to embed"
)
embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")
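A common use for embeddings is comparing texts by cosine similarity. Below is a small self-contained helper plus a hypothetical usage sketch; the two commented client.embeddings.create calls assume the client configured earlier.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical usage with the client from earlier:
# e1 = client.embeddings.create(model="text-embedding-3-small", input="cat").data[0].embedding
# e2 = client.embeddings.create(model="text-embedding-3-small", input="kitten").data[0].embedding
# print(cosine_similarity(e1, e2))  # higher score = more semantically similar
```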
Using Different Providers
The main benefit of VoidAI is accessing multiple providers through one SDK. Simply change the model name:
# OpenAI
client.chat.completions.create(model="gpt-5.1", ...)
# Anthropic
client.chat.completions.create(model="claude-sonnet-4-5-20250929", ...)
# Google
client.chat.completions.create(model="gemini-3-pro-preview", ...)
# DeepSeek
client.chat.completions.create(model="deepseek-v3", ...)
All providers use the same OpenAI-compatible request/response format. No code changes needed beyond the model name.
Environment Variables
For production, use environment variables:
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["VOIDAI_API_KEY"],
    base_url="https://api.voidai.app/v1"
)