ONE API
One endpoint. Every open-source model. Zero complexity.
Access 100+ open-source AI models through a single, unified API. No vendor lock-in, complete transparency.
Before ONE API
# Different code for each provider
if provider == "groq":
    client = Groq(api_key=groq_key)
elif provider == "together":
    client = Together(api_key=together_key)
# ... more complexity

With ONE API
# Same code for all models
client = OpenAI(
    api_key="your_one_api_key",
    base_url="https://theopensource.ai/v1"
)
# That's it!

Key Features
Everything you need to build with open-source AI
Standard REST API
Simple, well-documented REST API that works with any HTTP client. Easy integration with existing tools.
/v1/chat/completions

Multi-Provider Access
Access models from Groq, Together AI, and more. No need to manage multiple accounts or API keys.
Intelligent Routing
Automatic load balancing, cost optimization, and failover. Always get the best available provider.
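Routing and failover happen server-side, but a client can add a last-resort retry across models as well. A minimal sketch (the helper is illustrative and not part of any SDK; real code would catch the openai SDK's APIError rather than a bare Exception):

```python
# Illustrative client-side fallback: try each model in order and
# return the first successful reply; re-raise the last error if
# every model fails.
def call_with_fallback(models, send):
    last_error = None
    for model in models:
        try:
            return send(model)
        except Exception as exc:  # narrow to openai.APIError in practice
            last_error = exc
    raise last_error
```

Here `send` would wrap a call like `client.chat.completions.create(model=model, ...)`.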
Usage Tracking
Detailed usage analytics and monitoring. Track token consumption and model performance in real-time.
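Because the API is OpenAI-compatible, each response carries a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`. A small cost-estimation sketch (the per-million-token prices below are hypothetical placeholders, not ONE API's actual rates):

```python
# Estimate the dollar cost of a request from its token counts.
# Prices are placeholder values in USD per million tokens.
def estimate_cost(prompt_tokens, completion_tokens,
                  usd_per_1m_in=0.05, usd_per_1m_out=0.08):
    return (prompt_tokens * usd_per_1m_in
            + completion_tokens * usd_per_1m_out) / 1_000_000

# After a request:
# usage = response.usage
# print(estimate_cost(usage.prompt_tokens, usage.completion_tokens))
```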
Enterprise-Grade Security
API key authentication, rate limiting, HTTPS encryption, and no data storage. Your data stays private.
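Since rate limiting is enforced, clients should be prepared for HTTP 429 responses. A minimal exponential-backoff wrapper (a sketch; in real code you would catch `openai.RateLimitError` rather than a bare `Exception`):

```python
import time

# Retry a request with exponential backoff: wait base_delay seconds,
# then 2x, 4x, ... between attempts; re-raise after the final try.
def with_backoff(request, retries=3, base_delay=1.0):
    for attempt in range(retries):
        try:
            return request()
        except Exception:  # narrow to openai.RateLimitError in practice
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```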
Developer-Friendly
5-minute integration with extensive documentation, code examples, and SDKs for all major languages.
Getting Started
Integrate in 5 minutes
1. Install SDK
pip install openai

2. Make Your First Request
from openai import OpenAI
client = OpenAI(
    api_key="your_one_api_key",
    base_url="https://theopensource.ai/v1"
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ]
)

print(response.choices[0].message.content)

3. Advanced: Streaming
response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True
)

for chunk in response:
    # The final chunk's delta may carry no content, so guard against None
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

100+ Open-Source Models
From small and fast to large and powerful
Small (7-8B)
Fast, cost-effective
llama-3.1-8b-instant
gemma-7b-it
mistral-7b-instruct

Medium (13-20B)
Balanced performance
mixtral-8x7b-instruct
deepseek-coder-33b
solar-10.7b-instruct

Large (70B+)
Maximum quality
llama-3.1-70b-versatile
qwen-2.5-72b-instruct
mixtral-8x22b

Use Cases
Built for every AI application
💬 Chatbots & Customer Support
Fast response times, cost-effective at scale, easy A/B testing
llama-3.1-8b-instant

🔍 Content Generation
Large context windows, multiple models for different content types
llama-3.1-70b

💻 Code Generation & Review
Specialized code models, fast iteration, cost-effective
deepseek-coder-33b

📊 Data Analysis & Extraction
Structured output support, function calling, high accuracy
qwen-2.5-72b

🌍 Multilingual Applications
Models trained on 50+ languages, translation, localization
qwen-2.5-72b

🔐 Enterprise AI
No data retention, audit logging, usage controls, team management
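The Data Analysis & Extraction use case mentions structured output and function calling. A hedged sketch of a tool definition in the OpenAI-compatible format (the weather tool is a made-up example, and function-calling support may vary by model):

```python
# A tool definition in the OpenAI-compatible "tools" format.
# The get_weather function is purely illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Pass it on a request (requires a live API key):
# response = client.chat.completions.create(
#     model="qwen-2.5-72b-instruct",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=[get_weather_tool],
# )
```

The model then returns a `tool_calls` entry with JSON arguments instead of free text, which your code can parse and act on.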
Quick Integration
Get started in minutes
Simple Setup
from openai import OpenAI
client = OpenAI(
    api_key="your_one_api_key",
    base_url="https://theopensource.ai/v1"
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

That's it! You're now connected to 100+ open-source models through a single endpoint.
Ready to Get Started?
Join developers building with 100+ open-source AI models through ONE API
No credit card required • 5-minute setup • Open-source only