# Mistral
Mistral AI provides high-performance language models, including a series of open-weight models, with strong reasoning capabilities and efficiency.
## Supported Models

### Latest Models

- `mistral-large-latest` - Latest large model (123B parameters)
- `mistral-medium-latest` - Medium model
- `mistral-small-latest` - Small model

### Open-Source Models

- `open-mistral-7b` - Open-source 7B model
- `open-mixtral-8x7b` - Open-source mixture-of-experts (MoE) 8x7B model
- `open-mixtral-8x22b` - Open-source MoE 8x22B model

### Code Models

- `codestral-latest` - Code-specialized model
- `codestral-2405` - Pinned version of the code model

## Configuration

### Basic Configuration

Configure models in `config.yaml` or `~/.bytebuddy/config.yaml`:

```yaml
models:
  - name: "mistral-large"
    provider: "mistral"
    model: "mistral-large-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat", "edit"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 8192
```

### Code Generation Configuration

```yaml
models:
  - name: "codestral"
    provider: "mistral"
    model: "codestral-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["edit", "apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 4096
```

### Multi-Model Configuration

```yaml
models:
  - name: "mistral-large"
    provider: "mistral"
    model: "mistral-large-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 8192
  - name: "mistral-small"
    provider: "mistral"
    model: "mistral-small-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat", "autocomplete"]
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 4096
  - name: "codestral"
    provider: "mistral"
    model: "codestral-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["edit", "apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 4096
```

## Configuration Fields

### Required Fields

- `name`: Unique identifier for the model configuration
- `provider`: Set to `"mistral"`
- `model`: Model name
- `apiKey`: Mistral API key

### Optional Fields

- `roles`: Model roles (`chat`, `edit`, `apply`, `autocomplete`)
- `defaultCompletionOptions`:
  - `temperature`: Controls randomness (0-1)
  - `maxTokens`: Maximum tokens to generate
  - `topP`: Nucleus sampling parameter
  - `presencePenalty`: Presence penalty (-2 to 2)
  - `frequencyPenalty`: Frequency penalty (-2 to 2)
  - `stopSequences`: Stop sequences
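
For reference, here is a sketch of a single entry that sets every optional completion parameter; the model name and all values are illustrative starting points, not recommendations:

```yaml
models:
  - name: "mistral-tuned"
    provider: "mistral"
    model: "mistral-large-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7            # 0 = most deterministic, 1 = most random
      maxTokens: 8192             # hard cap on generated tokens
      topP: 0.9                   # nucleus sampling cutoff
      presencePenalty: 0.1        # discourages revisiting topics already present
      frequencyPenalty: 0.2       # discourages verbatim repetition
      stopSequences: ["\n\n###"]  # generation halts at this (illustrative) marker
```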

## Environment Variables

```bash
# ~/.bashrc or ~/.zshrc
export MISTRAL_API_KEY="your-mistral-api-key"
```

### Getting an API Key

1. Visit the [Mistral AI Console](https://console.mistral.ai)
2. Register and log in to your account
3. Create a new key on the API Keys page
4. Save the key to an environment variable
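
To confirm the key works before configuring bytebuddy, you can query the Mistral API directly; this sketch assumes the standard `https://api.mistral.ai/v1/models` endpoint and that `curl` is installed:

```bash
# Lists the models your key can access; a 401 response means the key is invalid.
curl -s https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $MISTRAL_API_KEY"
```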

## Use Case Configurations

### High-Quality Chat

```yaml
models:
  - name: "quality-chat"
    provider: "mistral"
    model: "mistral-large-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 8192
```

### Code Assistant

```yaml
models:
  - name: "code-assistant"
    provider: "mistral"
    model: "codestral-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["edit", "apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 4096
```

### Fast Response

```yaml
models:
  - name: "fast-response"
    provider: "mistral"
    model: "mistral-small-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat", "autocomplete"]
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 2048
```

## Model Features

### Mistral Large

- Parameters: 123B
- Context Length: 128k tokens
- Strengths: Powerful reasoning and multilingual capabilities
- Use Cases: Complex tasks, high-quality generation

### Mistral Medium

- Context Length: 32k tokens
- Strengths: Balanced performance and cost
- Use Cases: General tasks

### Mistral Small

- Context Length: 32k tokens
- Strengths: Fast response
- Use Cases: Lightweight tasks

### Codestral

- Parameters: 22B
- Context Length: 32k tokens
- Strengths: Specialized code generation capabilities
- Use Cases: Programming assistance

## Multilingual Support

Mistral models support multiple languages:
- English
- French
- German
- Spanish
- Italian
- Portuguese
- Chinese (limited support)

## Troubleshooting

### Common Errors

- 401 Unauthorized: Invalid API key
- 429 Too Many Requests: Rate limit reached
- 400 Bad Request: Parameter error
- Model Not Found: Incorrect model name

### Debugging Steps

1. Verify API key format and validity
2. Check model name spelling
3. Monitor usage and rate limits
4. View error response details
5. Confirm account balance is sufficient
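
If an error persists, it can help to inspect the raw HTTP status and response body outside of bytebuddy; a minimal sketch, assuming the standard Mistral chat completions endpoint and `curl`:

```bash
# Sends a one-word test request and prints the full response plus HTTP status.
# 401 -> invalid key, 429 -> rate limit, 400 -> malformed parameters.
curl -s -w "\nHTTP status: %{http_code}\n" \
  https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-small-latest", "messages": [{"role": "user", "content": "ping"}]}'
```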

## Best Practices

1. **Model Selection**
   - High quality needs: Use Mistral Large
   - Balanced cost: Use Mistral Medium
   - Fast response: Use Mistral Small
   - Code tasks: Use Codestral
   - Open-source priority: Use the open model series
2. **Parameter Optimization**
   - Lower temperature for more consistent results
   - Set reasonable `maxTokens` limits
   - Use stop sequences to control output
   - Adjust penalty parameters to avoid repetition (see the first sketch after this list)
3. **Cost Control**
   - Monitor token usage
   - Choose appropriate model size
   - Optimize prompt length
   - Set quota alerts
4. **Performance Optimization**
   - Use streaming responses for better UX (see the streaming sketch after this list)
   - Implement request caching
   - Batch similar requests
   - Choose appropriate model version
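
For the parameter advice in item 2, here is a sketch of a chat model tuned against repetition; the model name and values are illustrative starting points rather than tested recommendations:

```yaml
models:
  - name: "low-repetition-chat"
    provider: "mistral"
    model: "mistral-medium-latest"
    apiKey: "${MISTRAL_API_KEY}"
    roles: ["chat"]
    defaultCompletionOptions:
      temperature: 0.4             # lower temperature -> more consistent output
      maxTokens: 2048              # keeps responses (and costs) bounded
      frequencyPenalty: 0.3        # penalizes verbatim repetition
      presencePenalty: 0.1         # gently pushes toward new content
      stopSequences: ["<|done|>"]  # illustrative marker; generation halts here
```

For the streaming advice in item 4, the Mistral API itself supports streaming via the standard `stream` flag; a minimal sketch at the HTTP level (how bytebuddy surfaces streaming in the editor may differ):

```bash
# Streams the completion incrementally as server-sent events (-N disables buffering).
curl -N https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-small-latest", "stream": true, "messages": [{"role": "user", "content": "Hello"}]}'
```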