# DeepSeek

DeepSeek is an advanced AI model provider, offering high-performance code generation and natural language processing capabilities.
## Supported Models

- `deepseek-chat`: General conversational model
- `deepseek-coder`: Code-specialized model
- `deepseek-coder-instruct`: Instruction-following code model
## Configuration

### Basic Configuration

Configure in `config.yaml` or `~/.bytebuddy/config.yaml`:

```yaml
models:
  - name: "deepseek-coder"
    provider: "deepseek"
    model: "deepseek-coder-instruct"
    apiKey: "${DEEPSEEK_API_KEY}"
    roles: ["chat", "edit"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 4096
```

### Multi-Model Configuration
```yaml
models:
  - name: "deepseek-chat"
    provider: "deepseek"
    model: "deepseek-chat"
    apiKey: "${DEEPSEEK_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 2048
  - name: "deepseek-coder"
    provider: "deepseek"
    model: "deepseek-coder-instruct"
    apiKey: "${DEEPSEEK_API_KEY}"
    roles: ["edit", "apply"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 4096
```

## Configuration Fields

### Required Fields
- `name`: Unique identifier for the model configuration
- `provider`: Set to `"deepseek"`
- `model`: Model name (e.g. `"deepseek-coder-instruct"`)
- `apiKey`: DeepSeek API key
### Optional Fields

- `roles`: Model roles: `chat`, `edit`, `apply`, `autocomplete`
- `defaultCompletionOptions`:
  - `temperature`: Controls randomness (0-1)
  - `maxTokens`: Maximum number of tokens to generate
  - `topP`: Nucleus sampling parameter
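As a sanity check, a model entry can be validated against the field lists above before loading it. This is an illustrative sketch, not part of ByteBuddy itself; the field names simply mirror this page.

```python
# Sketch: validate one `models:` entry against the fields documented above.
# Hypothetical helper, not a ByteBuddy API.

REQUIRED = {"name", "provider", "model", "apiKey"}
VALID_ROLES = {"chat", "edit", "apply", "autocomplete"}

def validate_model(entry: dict) -> list:
    """Return a list of problems found in one model entry (empty = OK)."""
    problems = [f"missing required field: {f}" for f in sorted(REQUIRED - entry.keys())]
    if "provider" in entry and entry["provider"] != "deepseek":
        problems.append('provider must be "deepseek" for this page')
    for role in entry.get("roles", []):
        if role not in VALID_ROLES:
            problems.append(f"unknown role: {role}")
    temperature = entry.get("defaultCompletionOptions", {}).get("temperature")
    if temperature is not None and not 0 <= temperature <= 1:
        problems.append("temperature should be in [0, 1]")
    return problems

entry = {
    "name": "deepseek-coder",
    "provider": "deepseek",
    "model": "deepseek-coder-instruct",
    "apiKey": "${DEEPSEEK_API_KEY}",
    "roles": ["chat", "edit"],
    "defaultCompletionOptions": {"temperature": 0.2, "maxTokens": 4096},
}
print(validate_model(entry))  # → []
```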
## Environment Variables

```bash
# ~/.bashrc or ~/.zshrc
export DEEPSEEK_API_KEY="your-deepseek-api-key"
```

## Getting an API Key
1. Visit the DeepSeek website
2. Register and log in to your account
3. Navigate to the API keys page
4. Generate a new API key
5. Save the key to an environment variable
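Once the key is exported, you can smoke-test it directly against DeepSeek's OpenAI-compatible HTTP API before wiring it into the editor. A minimal sketch: the endpoint and payload shape follow DeepSeek's public chat-completions API, and the network call is left commented out so the request can be inspected first.

```python
import json
import os

# OpenAI-compatible chat-completions endpoint.
API_URL = "https://api.deepseek.com/chat/completions"

def build_chat_request(model: str, prompt: str, temperature: float = 0.2):
    """Build the headers and JSON body for a chat-completions call."""
    api_key = os.environ.get("DEEPSEEK_API_KEY", "")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return headers, json.dumps(body)

headers, body = build_chat_request("deepseek-chat", "Say hello")
# To actually send it (requires a valid key and network access):
#   import urllib.request
#   req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
#   print(urllib.request.urlopen(req).read().decode())
print(json.loads(body)["model"])  # → deepseek-chat
```

A `401 Unauthorized` response at this step means the exported key is wrong; see Troubleshooting below.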
## Use Case Configurations

### Code Generation

```yaml
models:
  - name: "code-gen"
    provider: "deepseek"
    model: "deepseek-coder-instruct"
    apiKey: "${DEEPSEEK_API_KEY}"
    roles: ["chat", "edit"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 4096
```

### General Chat
```yaml
models:
  - name: "general-chat"
    provider: "deepseek"
    model: "deepseek-chat"
    apiKey: "${DEEPSEEK_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 2048
```

## Troubleshooting
### Common Errors

- **401 Unauthorized**: the API key is missing or incorrect
- **429 Too Many Requests**: you have hit the rate limit; slow down or retry with backoff
- **Model Not Found**: verify the model name against the supported models above
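For transient `429` responses, retrying with exponential backoff usually resolves the error. A generic sketch, not tied to any ByteBuddy API; `call` stands in for whatever function performs the request, and a `RuntimeError` stands in for the real HTTP error type.

```python
import time

def with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry `call` on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError as err:  # stand-in for an HTTP 429 error
            if "429" not in str(err) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: fail twice with 429, then succeed on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("HTTP 429: rate limited")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # → ok
```

Errors other than 429 (such as 401) are re-raised immediately, since retrying cannot fix a bad key.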
### Debugging Steps

1. Verify the API key format and validity
2. Check your rate limits
3. Confirm the model name is correct
4. Review the error logs
## Best Practices

1. **Security**
   - Use environment variables to store API keys
   - Rotate keys regularly
   - Monitor for unusual usage
2. **Performance Optimization**
   - Use a low temperature for code tasks (0.1-0.3)
   - Set appropriate `maxTokens` limits
   - Choose the right model for the task
3. **Cost Control**
   - Monitor API usage
   - Use appropriate models for different tasks
   - Set reasonable token limits
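To keep token limits grounded in real usage, it helps to estimate prompt size before sending. The sketch below uses the rough rule of thumb of about 4 characters per token for English text; it is an approximation, not DeepSeek's actual tokenizer, and the 8192-token context window is an assumed placeholder.

```python
# Rough heuristic for English text (~4 chars/token); not an exact tokenizer.
AVG_CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ceiling of len(text) / 4."""
    return -(-len(text) // AVG_CHARS_PER_TOKEN)

def fits_budget(prompt: str, max_tokens: int, context_window: int = 8192) -> bool:
    """Check that the prompt plus the requested completion fits the window."""
    return estimate_tokens(prompt) + max_tokens <= context_window

prompt = "Refactor this function to use a list comprehension. " * 10
print(estimate_tokens(prompt), fits_budget(prompt, max_tokens=4096))
```

Running a check like this before each request makes it easier to pick a `maxTokens` value that stays inside both the context window and your cost budget.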