Error Handling
Learn how to handle errors gracefully and build robust applications
Overview
Router uses standard HTTP status codes and provides detailed error messages to help you quickly identify and resolve issues. All errors follow a consistent format for easy parsing and handling.
Error Response Format
All API errors return a consistent JSON structure:
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key",
    "status": 401
  }
}
```

message (string): Human-readable error message describing what went wrong.
type (string): General error category (e.g., authentication_error, rate_limit_error).
code (string): Specific error code for programmatic handling.
status (integer): HTTP status code of the response.
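Because every error shares this envelope, a generic parser is straightforward. The sketch below (not an official SDK helper; the function name is ours) pulls the four fields out of a raw response body:

```python
import json

def parse_error(body: str) -> dict:
    """Extract the fields of a Router-style error envelope (illustrative sketch)."""
    err = json.loads(body)["error"]
    return {
        "message": err["message"],
        "type": err["type"],
        "code": err["code"],
        "status": err["status"],
    }

body = '{"error": {"message": "Invalid API key provided", "type": "authentication_error", "code": "invalid_api_key", "status": 401}}'
print(parse_error(body)["code"])  # invalid_api_key
```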
Common Error Codes
Authentication Errors
invalid_api_key: The API key provided is invalid or has been revoked.
missing_api_key: No API key was provided in the Authorization header.
expired_api_key: The API key has expired and needs to be renewed.
Rate Limit Errors
rate_limit_exceeded: Too many requests sent in a short period.
The response also includes headers describing your limit and when to retry:

```json
{
  "X-RateLimit-Limit": "60",
  "X-RateLimit-Remaining": "0",
  "X-RateLimit-Reset": "1699000000",
  "Retry-After": "60"
}
```

Bad Request Errors
invalid_request: The request format is invalid or missing required fields.
unsupported_model: The requested model is not available or doesn't exist.
context_length_exceeded: The request exceeds the model's maximum context length.
Payment Required
insufficient_credits: Your account doesn't have enough credits for this request.
payment_required: Payment method needs to be added or updated.
Server Errors
internal_server_error: An unexpected error occurred on our servers.
model_overloaded: The model is temporarily overloaded; please retry.
Service Unavailable
service_unavailable: The service is temporarily unavailable (maintenance or high load).
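A practical takeaway from these lists: some errors are transient and worth retrying, others are not. One way to encode that decision (the grouping follows the descriptions above; this is a sketch, not an official client policy):

```python
# Transient errors per the descriptions above: safe to retry with backoff.
RETRYABLE_CODES = {
    "rate_limit_exceeded",
    "internal_server_error",
    "model_overloaded",
    "service_unavailable",
}

def should_retry(error_code: str) -> bool:
    """Return True if the error is transient and a retry may succeed."""
    return error_code in RETRYABLE_CODES

print(should_retry("model_overloaded"))  # True
print(should_retry("invalid_api_key"))   # False
```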
Error Handling Examples
Python (OpenAI SDK):

```python
import time

import openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.router.com/v1",
    api_key="your-api-key",
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except openai.AuthenticationError as e:
    print(f"Authentication failed: {e.message}")
    # Handle authentication issues (check API key)
except openai.RateLimitError as e:
    print(f"Rate limit hit: {e.message}")
    # Wait for the period the server suggests before retrying
    retry_after = int(e.response.headers.get("Retry-After", 60))
    time.sleep(retry_after)
except openai.BadRequestError as e:
    print(f"Bad request: {e.message}")
    # Fix request parameters
except openai.InternalServerError as e:
    print(f"Server error: {e.message}")
    # Retry with exponential backoff
except Exception as e:
    print(f"Unexpected error: {e}")
    # Log and handle gracefully
```

TypeScript (OpenAI SDK):

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.router.com/v1',
  apiKey: 'your-api-key',
});

async function makeRequest() {
  try {
    const response = await client.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Hello!' }],
    });
    return response;
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      console.error('API Error:', error.status, error.message);
      switch (error.status) {
        case 401:
          throw new Error('Invalid API key');
        case 429: {
          // Wait for the suggested period, then retry
          const retryAfter = Number(error.headers?.['retry-after'] ?? 60);
          await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
          return makeRequest(); // Retry
        }
        case 400:
          throw new Error(`Bad request: ${error.message}`);
        case 500:
        case 503:
          // Implement exponential backoff
          await new Promise((resolve) => setTimeout(resolve, 5000));
          return makeRequest(); // Retry
        default:
          throw error;
      }
    }
    throw error;
  }
}
```

JavaScript (fetch):

```javascript
async function callAPI(messages) {
  const response = await fetch('https://api.router.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer your-api-key',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: messages,
    }),
  });

  // Check whether the response is OK before parsing the body
  if (!response.ok) {
    const error = await response.json();
    // Handle specific error types
    switch (response.status) {
      case 401:
        throw new Error(`Authentication error: ${error.error.message}`);
      case 429: {
        const retryAfter = response.headers.get('Retry-After') || '60';
        throw new Error(`Rate limited. Retry after ${retryAfter} seconds`);
      }
      case 400:
        throw new Error(`Bad request: ${error.error.message}`);
      case 402:
        throw new Error(`Payment required: ${error.error.message}`);
      case 500:
      case 503:
        throw new Error(`Server error: ${error.error.message}`);
      default:
        throw new Error(`API error: ${error.error.message}`);
    }
  }

  return response.json();
}
```

Best Practices
Implement Retry Logic
Use exponential backoff for transient errors (5xx, 429)
```python
import random
import time

import openai

def retry_with_backoff(func, max_retries=3):
    for i in range(max_retries):
        try:
            return func()
        except (openai.RateLimitError, openai.InternalServerError):
            if i == max_retries - 1:
                raise
            # Exponential backoff with jitter: 1s, 2s, 4s, ... plus a random offset
            wait_time = (2 ** i) + random.random()
            time.sleep(wait_time)
```

Log Errors Properly
Always log error details for debugging
```python
import logging

import openai

logger = logging.getLogger(__name__)

try:
    response = client.chat.completions.create(...)  # your API call
except openai.APIError as e:
    logger.error(
        f"API Error: {e.status} - {e.message}",
        extra={
            "error_code": e.code,
            "error_type": e.type,
            "request_id": e.request_id,
        },
    )
```

Handle Rate Limits
Respect rate limit headers to avoid errors
```typescript
class RateLimiter {
  async checkLimit(response: Response) {
    const remaining = response.headers.get('X-RateLimit-Remaining');
    if (remaining && parseInt(remaining) < 5) {
      const reset = response.headers.get('X-RateLimit-Reset');
      // Wait until the window resets (reset is an epoch timestamp in seconds)
      const waitTime = reset ? Math.max(0, parseInt(reset) - Date.now() / 1000) : 60;
      await new Promise((r) => setTimeout(r, waitTime * 1000));
    }
  }
}
```

Validate Input
Validate requests before sending to avoid 400 errors
```typescript
function validateRequest(request: ChatRequest) {
  if (!request.model) {
    throw new Error('Model is required');
  }
  if (!request.messages || request.messages.length === 0) {
    throw new Error('Messages array cannot be empty');
  }
  // Check explicitly for undefined so a temperature of 0 is still validated
  if (request.temperature !== undefined && (request.temperature < 0 || request.temperature > 2)) {
    throw new Error('Temperature must be between 0 and 2');
  }
}
```

Streaming Error Handling
Errors during streaming require special handling:
```python
try:
    stream = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        # Handle an error delivered mid-stream
        if hasattr(chunk, 'error'):
            print(f"Stream error: {chunk.error}")
            break
        # Normal processing (delta.content may be None on some chunks)
        print(chunk.choices[0].delta.content or "", end="")
except Exception as e:
    # Handle connection errors
    print(f"Stream connection error: {e}")
    # Implement reconnection logic if needed
```
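The reconnection logic mentioned in the last comment can be as simple as reopening the stream a few times. A minimal sketch, assuming a zero-argument `open_stream` callable that starts a fresh stream (a production version would also need to track and skip chunks already received, which this does not do):

```python
import time

def consume_with_reconnect(open_stream, max_attempts=3, delay=1.0):
    """Yield chunks from a stream, reopening it on connection errors (sketch)."""
    for attempt in range(max_attempts):
        try:
            for chunk in open_stream():
                yield chunk
            return  # stream finished cleanly
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(delay)

# Demo with a stand-in stream that drops once, then succeeds.
calls = {"n": 0}
def fake_stream():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("dropped")
    yield "hello"

print(list(consume_with_reconnect(fake_stream, delay=0)))  # ['hello']
```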