FAQ
1. What is ai.TokenHub?
ai.TokenHub is an enterprise-grade LLM token platform that provides a unified API for accessing different AI models. With a single API Key you can use multiple models, including GPT-4o, Claude, and DeepSeek.
2. How do I get started?
- Register and log in to ai.TokenHub
- Create an API Key in the console
- Use the API Key to call endpoints
See Quick Start for details.
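The steps above can be sketched as a minimal raw HTTP request. This assumes the OpenAI-compatible `/v1/chat/completions` path; the key is a placeholder you would replace with one created in the console:

```python
# Minimal sketch of a chat request to ai.TokenHub.
# The endpoint path is assumed to follow the OpenAI-compatible /v1 format.
API_KEY = "YOUR_TOKENHUB_KEY"  # created in the console
BASE_URL = "https://ai-tokenhub.com/v1"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# To actually send it (requires the `requests` package and a valid key):
# import requests
# resp = requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=payload)
# print(resp.json()["choices"][0]["message"]["content"])
```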
3. Which models are supported?
We support multiple mainstream AI models:
Chat Models:
- OpenAI: GPT-4o, GPT-4o-mini
- Anthropic: Claude-3-5-Sonnet, Claude-3-Opus
- DeepSeek: DeepSeek-Chat
Embedding Models:
- text-embedding-3-small
- text-embedding-3-large
Rerank Models:
- rerank-1, rerank-2
4. Is the API compatible with OpenAI?
Yes, the ai.TokenHub API is fully compatible with the OpenAI API format. You can use the OpenAI SDK directly; just change the base_url:
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="YOUR_TOKENHUB_KEY"
)

5. How does billing work?
- Billing is based on token usage
- Different models have different pricing
- View real-time usage and costs in the console
- Recharge and quota management are supported
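Token-based billing can be illustrated with a small cost estimator. The per-million-token prices below are made-up placeholders, not ai.TokenHub's real rates; check the console for actual pricing:

```python
# Illustrative token-based cost calculation.
# Prices are hypothetical: (input, output) in USD per 1M tokens.
PRICES_PER_M = {
    "gpt-4o": (2.50, 10.00),
    "deepseek-chat": (0.14, 0.28),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    in_price, out_price = PRICES_PER_M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = estimate_cost("deepseek-chat", 1000, 500)  # roughly $0.00028 at these rates
```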
6. Is there free quota?
New users receive a free testing quota after registration, intended for development and debugging.
7. How do I handle rate limits?
When the rate limit is exceeded, the API returns a 429 error. Recommendations:
- Implement a retry mechanism: wait, then retry
- Batch requests to reduce the request count
- Use streaming to reduce concurrency
See API Key Limits for details.
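The retry recommendation can be sketched as a backoff wrapper. The `APIError` exception type here is a stand-in for whatever error your client raises; only the retry logic is the point:

```python
import time
import random

class APIError(Exception):
    """Stand-in for a client error that carries an HTTP status code."""
    def __init__(self, status_code):
        super().__init__(f"API error {status_code}")
        self.status_code = status_code

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Call `call()`, retrying on 429 with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except APIError as e:
            # Re-raise anything that is not a rate limit, or the final failure.
            if e.status_code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

Exponential backoff spaces retries further apart each time, and the jitter keeps many clients from retrying in lockstep.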
8. How do I set fallback models?
Use the fallback_models parameter:
{
  "model": "gpt-4o",
  "fallback_models": ["claude-3-5-sonnet", "deepseek-chat"]
}

The system will automatically try the fallback models when the primary model is unavailable.
9. Is streaming supported?
Yes. Set stream: true to enable streaming. See Streaming for details.
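With the OpenAI SDK pointed at ai.TokenHub, streaming is enabled the same way as with OpenAI. This is a sketch: the key is a placeholder, and the send itself is left commented out since it needs a live key and the `openai` package:

```python
# Request parameters for a streaming chat completion.
request_kwargs = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Tell me a joke"}],
    "stream": True,  # deliver the response incrementally, chunk by chunk
}

# from openai import OpenAI
# client = OpenAI(base_url="https://ai-tokenhub.com/v1", api_key="YOUR_TOKENHUB_KEY")
# for chunk in client.chat.completions.create(**request_kwargs):
#     delta = chunk.choices[0].delta.content
#     if delta:
#         print(delta, end="", flush=True)
```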
10. How do I get technical support?
- Check the Errors and Debugging documentation
- Log in to the console to view usage logs
- Contact customer service for help