
Streaming

Streaming lets you receive model output in real time instead of waiting for the complete response. This is particularly useful for conversational interfaces and long-form text generation.

Enable Streaming

Set stream: true in the request to enable streaming.

bash
curl https://ai-tokenhub.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a poem"}],
    "stream": true
  }'

Streaming Response Format

Streaming responses use the Server-Sent Events (SSE) format; each chunk contains a fragment of the generated content:

data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"The"}}]}

data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":" spring"}}]}

data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":" wind"}}]}

data: [DONE]
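As a minimal sketch of how these lines are consumed, the helper below extracts the content delta from one SSE data line (the `parse_sse_line` name and the sample lines are illustrative, not part of the API):

python
import json

def parse_sse_line(line: str):
    """Return the content delta from one SSE data line, or None."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":          # end-of-stream sentinel, not JSON
        return None
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    return delta.get("content")

# Reassemble the sample stream shown above.
sample = [
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"The"}}]}',
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":" spring"}}]}',
    'data: [DONE]',
]
text = "".join(c for line in sample if (c := parse_sse_line(line)) is not None)
print(text)  # The spring

In practice the SDKs below do this parsing for you; parse raw lines yourself only when calling the endpoint directly.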

Using Python for Streaming

python
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-tokenhub.com/v1",
    api_key="YOUR_API_KEY"
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    # The final chunk may carry empty choices or a null delta, so guard first.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Using JavaScript for Streaming

javascript
const response = await fetch('https://ai-tokenhub.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{role: 'user', content: 'Write a poem'}],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // A network chunk can end mid-line, so buffer and only parse complete lines.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the trailing, possibly incomplete, line

  for (const line of lines) {
    if (line.startsWith('data: ') && line !== 'data: [DONE]') {
      const data = JSON.parse(line.slice(6));
      if (data.choices[0]?.delta?.content) {
        console.log(data.choices[0].delta.content);
      }
    }
  }
}

Streaming Response Structure

Each streaming chunk contains:

Field                      Description
id                         Unique identifier for the completion (same across all chunks)
choices[].delta.content    The content fragment added in this chunk
choices[].delta.role       The role; carried only by the first chunk
choices[].finish_reason    Completion reason; "stop" in the final chunk, null before that
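The fields above can be accumulated as a stream progresses. A minimal sketch, using hand-written dicts that mirror the SSE example earlier (the chunk values here are illustrative):

python
chunks = [
    {"id": "chatcmpl-abc",
     "choices": [{"delta": {"role": "assistant", "content": ""}, "finish_reason": None}]},
    {"id": "chatcmpl-abc",
     "choices": [{"delta": {"content": "Hello"}, "finish_reason": None}]},
    {"id": "chatcmpl-abc",
     "choices": [{"delta": {}, "finish_reason": "stop"}]},
]

role, parts, finish = None, [], None
for chunk in chunks:
    choice = chunk["choices"][0]
    delta = choice.get("delta", {})
    role = delta.get("role", role)       # only the first chunk carries the role
    if delta.get("content"):
        parts.append(delta["content"])   # append each content fragment
    finish = choice.get("finish_reason") or finish

print(role, "".join(parts), finish)  # assistant Hello stop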

Notes

  1. When streaming, the usage field (if present) is returned only in the final chunk; a non-streaming response includes it directly
  2. Handle the [DONE] sentinel properly to terminate the stream cleanly
  3. A network interruption can truncate the stream, so implement a retry mechanism
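A simple retry wrapper for point 3 might look like this. It is a sketch with exponential backoff: `stream_with_retry` and `flaky_stream` are hypothetical names, and note that this naive version restarts the stream from scratch rather than resuming mid-response:

python
import time

def stream_with_retry(start_stream, max_retries=3, backoff=1.0):
    """Run a streaming call, retrying with exponential backoff on connection errors.

    start_stream: zero-arg callable returning an iterable of text chunks.
    Returns the full concatenated text. Note: a retry restarts from scratch.
    """
    for attempt in range(max_retries + 1):
        try:
            return "".join(start_stream())
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries, surface the error
            time.sleep(backoff * (2 ** attempt))

# Demo with a fake stream that fails once, then succeeds.
calls = {"n": 0}
def flaky_stream():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("connection dropped")
    return iter(["Hello", ", ", "world"])

result = stream_with_retry(flaky_stream, backoff=0.01)
print(result)  # Hello, world

In production you would pass a closure that calls `client.chat.completions.create(..., stream=True)` and yields each delta, and you may also want to catch your HTTP client's specific timeout exceptions rather than only ConnectionError.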