# OpenAI

OpenAI is one of the main model providers supported by ByteBuddy, offering the powerful GPT series of models.
## Supported Models
### GPT-4 Series

- `gpt-4o` - Latest GPT-4 model with multimodal support
- `gpt-4o-mini` - Lightweight version with lower cost
- `gpt-4-turbo` - Faster GPT-4 variant with a 128K context window
- `gpt-4-turbo-preview` - Preview release of GPT-4 Turbo
### GPT-3.5 Series

- `gpt-3.5-turbo` - High-performance conversation model
- `gpt-3.5-turbo-16k` - 16K context version
### Embedding Models

- `text-embedding-ada-002` - Text embedding model
- `text-embedding-3-small` - Small embedding model
- `text-embedding-3-large` - Large embedding model
## Configuration Methods

### Basic Configuration
```yaml
models:
  - name: "gpt-4o"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat", "edit"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 4000
```

### Multi-role Configuration
```yaml
models:
  - name: "chat-model"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 4000
  - name: "edit-model"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["edit"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 2000
  - name: "autocomplete-model"
    provider: "openai"
    model: "gpt-3.5-turbo"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["autocomplete"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 500
```

### Embedding Model Configuration
```yaml
models:
  - name: "embedding-model"
    provider: "openai"
    model: "text-embedding-3-small"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["embed"]
```

### Tool Use Model Configuration
```yaml
models:
  - name: "tool-model"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat", "edit"]
    capabilities: ["tool_use"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 4000
```

## Environment Variables
```bash
# Required: API key
export OPENAI_API_KEY="sk-..."

# Optional: Organization ID
export OPENAI_ORG_ID="org-..."

# Optional: Custom API endpoint
export OPENAI_API_BASE="https://api.openai.com/v1"
```

## Request Options
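The `${OPENAI_API_KEY}` placeholders used throughout the YAML examples are presumably resolved against these environment variables at load time. A minimal Python sketch of that kind of expansion (the actual ByteBuddy loader may behave differently, e.g. in how it handles missing variables):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values.

    Unset variables are replaced with an empty string here; a real
    config loader might raise an error instead.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

# Example: resolve the apiKey field from a config entry
os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env("${OPENAI_API_KEY}"))  # sk-demo
```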
```yaml
models:
  - name: "gpt-4o"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat"]
    requestOptions:
      timeout: 60000
      verifySsl: true
      headers:
        "User-Agent": "ByteBuddy/1.0"
```

## Advanced Configuration Options
### Complete Configuration Example
```yaml
models:
  - name: "gpt-4o-advanced"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat", "edit", "apply"]
    capabilities: ["tool_use", "image_input"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 4000
      topP: 0.9
      frequencyPenalty: 0.1
      presencePenalty: 0.1
      stream: true
    requestOptions:
      timeout: 60000
      verifySsl: true
      headers:
        "User-Agent": "ByteBuddy/1.0"
```

## Use Case Configurations
### Code Development
```yaml
models:
  - name: "code-assistant"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat", "edit"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 2000
      topP: 0.8
```

### Conversation Interaction
```yaml
models:
  - name: "conversation"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 1000
      stream: true
```

### Fast Autocomplete
```yaml
models:
  - name: "fast-autocomplete"
    provider: "openai"
    model: "gpt-3.5-turbo"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["autocomplete"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 200
      topP: 0.9
    requestOptions:
      timeout: 10000
```

## Getting API Keys
1. Visit platform.openai.com
2. Register an account and complete verification
3. Create a new key on the API Keys page
4. Set usage limits and a payment method
## Common Issues
**Q: Why am I getting rate limit errors?**

A: OpenAI rate-limits API calls. You can:

- Wait for the limit to reset (usually per minute)
- Upgrade to a paid plan
- Space out your requests
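Spacing out requests is commonly done with exponential backoff. A hedged sketch, independent of any particular SDK (the `retry_on` exception type is a stand-in for whatever your client raises on HTTP 429):

```python
import random
import time

def with_backoff(call, retry_on=(RuntimeError,), max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter.

    `retry_on` should be the rate-limit exception type of your client;
    RuntimeError here is only a placeholder for illustration.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Double the delay each attempt, with jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

For example, `with_backoff(lambda: client.chat(...))` would retry a hypothetical `client.chat` call up to five times before giving up.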
**Q: How do I handle long text?**

A:

- Use a model with long-context support (such as gpt-4-turbo)
- Reduce the maxTokens setting
- Process long text in segments
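Segmenting can be as simple as splitting the input into overlapping character windows that fit the context budget. A minimal sketch (the function name and the character-based heuristic are illustrative; token-based splitting would be more precise):

```python
def split_text(text: str, max_chars: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into segments of at most max_chars characters.

    Consecutive segments overlap by `overlap` characters so that
    sentences cut at a boundary still appear whole in one segment.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back to create the overlap
    return chunks
```

Each segment can then be sent as a separate request and the answers combined.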
**Q: Why is the response time so long?**

A:

- Use a faster model (such as gpt-3.5-turbo)
- Enable streaming responses (stream: true)
- Reduce maxTokens
- Shorten your prompts
**Q: How do I use image input?**

A: Use a model that supports image input:

```yaml
models:
  - name: "vision-model"
    provider: "openai"
    model: "gpt-4o"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["chat"]
    capabilities: ["image_input"]
```

## Best Practices
1. **Key Security**
   - Store API keys in environment variables
   - Rotate keys regularly
   - Monitor for abnormal usage
2. **Cost Control**
   - Choose models appropriate to the task's complexity
   - Set reasonable maxTokens limits
   - Use gpt-3.5-turbo for simple tasks
3. **Performance Optimization**
   - Enable streaming responses for a better user experience
   - Tune the temperature parameter to the use case
   - Set a reasonable request timeout
4. **Model Selection Guide**
   - Complex tasks: gpt-4o
   - Code-related work: gpt-4o or gpt-4-turbo
   - Simple conversation: gpt-3.5-turbo
   - Autocomplete: gpt-3.5-turbo
   - Image processing: gpt-4o
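The selection guide above can be encoded as a small lookup if you route requests programmatically. A sketch (the category names and fallback model are illustrative choices, not part of ByteBuddy):

```python
def pick_model(task: str) -> str:
    """Map a task category to a model per the selection guide above."""
    guide = {
        "complex": "gpt-4o",
        "code": "gpt-4o",
        "conversation": "gpt-3.5-turbo",
        "autocomplete": "gpt-3.5-turbo",
        "vision": "gpt-4o",
    }
    # Fall back to the low-cost model for unrecognized categories
    return guide.get(task, "gpt-4o-mini")
```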