
Text Generation

Provider Disclosure: VoidAI offers text generation services powered by multiple providers, including OpenAI, Anthropic, Google, Mistral AI, X.AI, Deepseek, and more. The specific provider used depends on the model you select in your API call.

VoidAI provides a powerful text generation API that's fully compatible with the OpenAI SDK. This guide will show you how to get started and make the most of the API.

Basic Usage

Here's a simple example of generating text with our API:

from openai import OpenAI

# Initialize the client with your API key
client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

# Create a basic chat completion
messages = [{"role": "user", "content": "What is 1 + 1? Can you also give me the approach to it?"}]
response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",  # An Anthropic model
    messages=messages
)

# Extract and print the response
assistant_response = response.choices[0].message.content
print(assistant_response)

Conversation Memory

The chat completions API is stateless: the model only sees what you send in the messages array of each request. To continue a conversation, include the previous messages yourself. Here's how to build a conversation:

from openai import OpenAI

client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

messages = [
    {"role": "system", "content": "You're a math teacher."},
    {"role": "user", "content": "How much is 2 plus 2?"},
    {"role": "assistant", "content": "2 plus 2 equals 4."},
    {"role": "user", "content": "You're really good at math!"},
    {"role": "assistant", "content": "Thank you! I'm glad I could help you with your math question."},
    {"role": "user", "content": "What was the first question I asked you?"}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages
)

assistant_response = response.choices[0].message.content
print(assistant_response)
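Because each request must carry the full history, it can help to wrap the bookkeeping in a small helper. A minimal sketch (the class and method names here are illustrative, not part of the OpenAI SDK or the VoidAI API):

```python
# Illustrative conversation-history helper: accumulates messages in the
# {"role": ..., "content": ...} shape the chat completions API expects.
class Conversation:
    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def add_user(self, content):
        self.messages.append({"role": "user", "content": content})

    def add_assistant(self, content):
        self.messages.append({"role": "assistant", "content": content})

conv = Conversation("You're a math teacher.")
conv.add_user("How much is 2 plus 2?")
conv.add_assistant("2 plus 2 equals 4.")
print(len(conv.messages))  # prints 3: system + user + assistant
```

You would then pass `conv.messages` directly as the `messages` argument of `client.chat.completions.create`.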

Building a Chat Application

You can easily create an interactive chat application that maintains conversation history:

from openai import OpenAI

client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

# Start with a system message
system_prompt = input("Set your system prompt: ")
messages = [{"role": "system", "content": system_prompt}]
print("Starting the chat. Type 'exit' to end the conversation.")

# Main conversation loop
while True:
    user_input = input("You: ")
    if user_input.lower() == 'exit':
        print("Ending the conversation.")
        break

    # Add the user's message and get a response
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="mistral-large-latest",
        messages=messages
    )
    assistant_response = response.choices[0].message.content

    # Add the assistant's response to the conversation history
    messages.append({"role": "assistant", "content": assistant_response})
    print(f"Assistant: {assistant_response}")
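Note that the history grows with every turn and is resent in full on each request, so a long chat can eventually exceed the model's context window. One common mitigation is to keep the system message and only the most recent messages; a rough sketch (this is a client-side pattern, not a VoidAI feature):

```python
def trim_history(messages, max_messages=10):
    """Keep any system messages plus the last `max_messages` other messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

# Simulate a long conversation: 1 system message + 12 user/assistant pairs
history = [{"role": "system", "content": "You're a math teacher."}]
for i in range(12):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_messages=6)
print(len(trimmed))  # prints 7: 1 system message + 6 most recent messages
```

You would call `trim_history(messages)` before each `create` call in the loop above. More sophisticated strategies (summarizing older turns, counting tokens instead of messages) build on the same idea.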

Advanced Parameters

Our API supports various parameters for controlling the generation:

from openai import OpenAI

client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

response = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "Write a short poem about AI"}],
    temperature=0.7,       # Controls randomness (0-2; lower is more deterministic)
    max_tokens=100,        # Limits response length
    top_p=0.9,             # Nucleus sampling
    frequency_penalty=0,   # Reduces repetition (-2 to 2)
    presence_penalty=0,    # Encourages new topics (-2 to 2)
    stop=["Poem:", "---"]  # Stop sequence(s)
)

print(response.choices[0].message.content)
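The stop parameter ends generation as soon as the output would contain one of the given sequences, and the matched sequence itself is excluded from the returned text. Conceptually, the behavior is a cut at the earliest match; a client-side sketch of that truncation logic (for illustration only, since the API applies stop sequences server-side during generation):

```python
def apply_stop(text, stop_sequences):
    """Truncate text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(apply_stop("Roses are red\n---\nViolets are blue", ["Poem:", "---"]))
# Everything from "---" onward is dropped, leaving only "Roses are red\n"
```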

Model Selection

VoidAI offers models from multiple providers. You can choose models based on your specific needs:

from openai import OpenAI

client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

# Using an OpenAI model
openai_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)

# Using a Google model
google_response = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)

# Using an Anthropic model
anthropic_response = client.chat.completions.create(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)

print("OpenAI response:", openai_response.choices[0].message.content)
print("Google response:", google_response.choices[0].message.content)
print("Anthropic response:", anthropic_response.choices[0].message.content)

For a complete list of available models and their providers, see the Models page.

Available Models

To see a complete list of available models, you can use:

from openai import OpenAI

client = OpenAI(
    api_key="yourapikey",
    base_url="https://api.voidai.app/v1/"
)

models = client.models.list()
for model in models.data:
    print(model.id)

You can also view all available models at https://api.voidai.app/v1/models.