# Errors and Debugging
Understanding API error types and debugging methods helps you quickly identify and resolve issues.
## Error Response Format
All error responses follow a unified format:
```json
{
  "error": {
    "message": "Error description",
    "type": "Error type",
    "code": "Error code",
    "param": "Related parameter (optional)"
  }
}
```

## HTTP Status Codes
| Status Code | Description |
|---|---|
| 200 | Success |
| 400 | Request parameter error |
| 401 | Authentication failed |
| 403 | Permission denied |
| 404 | Resource not found |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
| 503 | Service temporarily unavailable |
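Given the unified error format and status codes above, a response can be condensed into a single log-friendly line. A minimal sketch; the helper name `describe_error` is illustrative, not part of the API:

```python
def describe_error(status_code, body):
    """Map an HTTP status code and a parsed error body (a dict following
    the unified format above) to a readable one-line summary."""
    err = (body or {}).get("error", {})
    parts = [f"HTTP {status_code}"]
    if err.get("code"):
        parts.append(f"code={err['code']}")
    if err.get("message"):
        parts.append(err["message"])
    if err.get("param"):
        parts.append(f"(param: {err['param']})")
    return " ".join(parts)
```

For example, `describe_error(401, {"error": {"code": "invalid_api_key", "message": "API Key invalid"}})` yields a line containing the status, the error code, and the human-readable message.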
## Common Error Codes
### Authentication Errors (401)
| Error Code | Description | Solution |
|---|---|---|
| invalid_api_key | API Key invalid | Check Key correctness |
| expired_api_key | API Key expired | Create new Key |
| revoked_api_key | API Key revoked | Create new Key |
### Request Errors (400)
| Error Code | Description | Solution |
|---|---|---|
| missing_required_field | Missing required parameter | Add missing parameter |
| invalid_value | Parameter value invalid | Check parameter format and range |
| model_not_supported | Unsupported model | Use supported model ID |
| context_length_exceeded | Token limit exceeded | Reduce messages or max_tokens |
### Limit Errors (429)
| Error Code | Description | Solution |
|---|---|---|
| rate_limit_exceeded | Rate limit exceeded | Lower request frequency, implement retry |
| insufficient_quota | Quota insufficient | Recharge or wait for quota recovery |
| concurrent_limit_exceeded | Concurrent limit exceeded | Control concurrent requests |
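For `rate_limit_exceeded`, a common pattern is to honor a `Retry-After` header when the server sends one and otherwise fall back to capped exponential backoff with jitter. Whether this service returns `Retry-After` is not stated here, so treat that parameter as an assumption; the sketch below only computes the wait time:

```python
import random

def backoff_delay(attempt, retry_after=None, base=1.0, cap=30.0):
    """Pick a wait time (seconds) before retrying a 429 response.

    Honors an explicit Retry-After value if provided; otherwise uses
    exponential backoff capped at `cap`, with jitter to avoid having
    many clients retry in lockstep."""
    if retry_after is not None:
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter in [0.5x, 1x)
```

The jitter matters when many workers hit the limit at once: without it, they all retry at the same instant and trigger the limit again.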
### Server Errors (500/503)
| Error Code | Description | Solution |
|---|---|---|
| internal_error | Internal error | Retry later |
| service_unavailable | Service unavailable | Wait for recovery, use fallback model |
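The "use fallback model" advice for `service_unavailable` can be sketched as a simple fallback chain. This is illustrative only: `call_model` is a placeholder for your own request function, and the fallback model ID `gpt-4o-mini` is an assumption, not something this page guarantees is available:

```python
def with_fallback(call_model, messages, models=("gpt-4o", "gpt-4o-mini")):
    """Try each model in order; return the first successful response.

    `call_model(model, messages)` is a caller-supplied function that
    performs the actual API request and raises on failure."""
    last_error = None
    for model in models:
        try:
            return call_model(model, messages)
        except Exception as e:  # in real code, catch only 500/503 errors
            last_error = e
    raise last_error
```

In production you would narrow the `except` clause so that non-retryable errors (for example, a 400 parameter error) fail immediately instead of being retried against every fallback.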
## Debugging Tips
### 1. Check Request Format
Ensure request body is valid JSON:
```python
import json

data = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
}

# Verify the payload serializes to valid JSON. Note: json.dumps raises
# TypeError/ValueError for non-serializable data; JSONDecodeError only
# applies to json.loads.
try:
    json.dumps(data)
except (TypeError, ValueError) as e:
    print(f"JSON format error: {e}")
```

### 2. Check Parameter Range
```python
# Check temperature range
temperature = 1.5
if not 0 <= temperature <= 2:
    print("temperature must be in the 0-2 range")

# Check messages format
messages = [{"role": "user", "content": "Hello"}]
for msg in messages:
    if msg["role"] not in ("system", "user", "assistant"):
        print(f"Invalid role: {msg['role']}")
```

### 3. Calculate Token Count
Estimate request token count to avoid exceeding limit:
```python
# Rough estimate: ~1 token per Chinese character, ~4 English characters per token
def estimate_tokens(text):
    # Chinese characters
    chinese_chars = len([c for c in text if '\u4e00' <= c <= '\u9fff'])
    # English and other characters
    other_chars = len(text) - chinese_chars
    return chinese_chars + other_chars // 4
```

### 4. Implement Detailed Logging
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

try:
    response = client.chat.completions.create(...)
    logger.debug(f"Request successful: {response.id}")
except Exception as e:
    logger.error(f"Request failed: {e}")
    logger.debug(f"Request parameters: {data}")
```

### 5. Use Retry Mechanism
```python
import time

def call_api(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages
            )
        except Exception as e:
            if attempt < max_retries - 1:
                wait = (attempt + 1) * 2
                print(f"Retry {attempt + 1}/{max_retries}, waiting {wait}s")
                time.sleep(wait)
            else:
                raise  # bare raise preserves the original traceback
```

## Get Help
If you encounter an issue you cannot resolve:

- Check the `message` field in the error response
- Review the API usage logs in the console
- Contact technical support