
Python SDK

Requirements

  • Python 3.8 or higher
  • Install the openai package (v1 or later): pip install openai

Quick Start

Installation

bash
pip install openai

Configure Client

python
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="your_api_key_here"  # in production, load this from an environment variable
)

Basic Request

python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello, please introduce yourself"}
    ]
)

print(response.choices[0].message.content)
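Each request is stateless; to hold a multi-turn conversation, append the assistant's reply back onto the messages list before the next call. A minimal sketch (the ask helper and its send callback are illustrative, not part of the SDK):

```python
history = [{"role": "system", "content": "You are a helpful assistant"}]

def ask(history, user_text, send):
    """Append the user turn, call send(messages), and record the reply."""
    history.append({"role": "user", "content": user_text})
    reply = send(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# With the client configured above, send would be:
# send = lambda msgs: client.chat.completions.create(
#     model="gpt-4o", messages=msgs
# ).choices[0].message.content
```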

Streaming

Pass stream=True to receive the response incrementally, chunk by chunk, instead of waiting for the full completion:

python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Tell me a joke about AI"}
    ],
    stream=True
)

for chunk in stream:
    # some chunks (e.g. the final one) may carry no choices or an empty delta
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
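If you also need the complete reply after streaming finishes, collect the deltas as they arrive. A small helper sketch (collect_stream is illustrative, not part of the SDK):

```python
def collect_stream(stream):
    """Print each delta as it arrives and return the assembled reply."""
    parts = []
    for chunk in stream:
        if not chunk.choices:
            continue  # skip chunks that carry no choices
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

# full_text = collect_stream(stream)
```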

Using Claude Models

The same OpenAI-compatible client works with Claude models; only the model name changes:

python
response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    messages=[
        {"role": "user", "content": "Hello"}
    ]
)

Error Handling

python
from openai import OpenAI, APIError, RateLimitError

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="your_api_key_here"
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError:
    # catch RateLimitError before its parent class APIError,
    # otherwise this branch is unreachable
    print("Rate limit exceeded, please try again later")
except APIError as e:
    print(f"API Error: {e}")
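Beyond logging the error, a common pattern is to retry rate-limited requests with exponential backoff. A minimal sketch; the with_retries helper is illustrative, not part of the SDK:

```python
import time

def with_retries(call, retry_on, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run call(), retrying on retry_on with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * 2 ** attempt)

# With the client above:
# response = with_retries(
#     lambda: client.chat.completions.create(
#         model="gpt-4o",
#         messages=[{"role": "user", "content": "Hello"}],
#     ),
#     retry_on=RateLimitError,
# )
```

Note that the SDK also retries some transient errors automatically; its max_retries client option may be enough for simple cases.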

Advanced Configuration

Set Timeout

python
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="your_api_key_here",
    timeout=60.0  # request timeout in seconds
)

Set Proxy

python
import os

# Set the proxy variable before creating the client; the SDK's underlying
# HTTP client (httpx) reads standard proxy environment variables.
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="your_api_key_here"
)