# Understanding Configs

This guide covers the configuration options available in ByteBuddy and how to use them to tune your AI-assisted development environment for performance and personal preference.
## Configuration File Locations

ByteBuddy looks for configuration files in the following order:

- **Project-level**: `.bytebuddy/config.yaml` in your project root
- **User-level**: `~/.bytebuddy/config.yaml` in your home directory
- **System-level**: `/etc/bytebuddy/config.yaml` (if applicable)
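Project-level settings take precedence over user-level settings, which in turn take precedence over system-level settings. A minimal sketch of how such a lookup could be resolved (the function name and behavior here are illustrative, not ByteBuddy's actual implementation):

```python
from pathlib import Path
from typing import Optional

def resolve_config_path(project_root: str) -> Optional[Path]:
    """Return the highest-precedence config file that exists, or None."""
    candidates = [
        Path(project_root) / ".bytebuddy" / "config.yaml",  # project-level
        Path.home() / ".bytebuddy" / "config.yaml",         # user-level
        Path("/etc/bytebuddy/config.yaml"),                 # system-level
    ]
    # Walk the list in precedence order and stop at the first match.
    for path in candidates:
        if path.is_file():
            return path
    return None
```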
### Project-Level Configuration

The most common configuration location:

```yaml
# .bytebuddy/config.yaml
# This file is specific to your project
models:
  # Project-specific model configurations
rules:
  # Project-specific rules
tools:
  # Project-specific tools
```

### User-Level Configuration
Global settings that apply to all projects:
```yaml
# ~/.bytebuddy/config.yaml
# These settings apply to all projects
preferences:
  language: "en-US"
  theme: "dark"
  fontSize: 14

# Default models for new projects
models:
  - name: "default-chat"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
```

## Complete Configuration Schema
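Throughout the schema, values written as `"${VAR_NAME}"` are expanded from environment variables when the configuration loads. A rough sketch of how that substitution could be implemented (illustrative only; not ByteBuddy's actual loader):

```python
import os
import re

# Matches ${NAME} where NAME is a valid environment-variable identifier.
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env_vars(value: str) -> str:
    """Replace ${NAME} placeholders with values from the environment.

    Unset variables are left as-is so that misconfiguration
    stays visible instead of silently becoming an empty string.
    """
    def repl(match):
        name = match.group(1)
        return os.environ.get(name, match.group(0))
    return _VAR.sub(repl, value)
```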
### Models Configuration

Detailed model configuration options:

```yaml
models:
  - name: "model-name"                    # Required: unique identifier
    provider: "openai"                    # Required: provider name
    model: "gpt-4-turbo"                  # Required: model identifier
    apiKey: "${API_KEY}"                  # API key (use env vars)
    baseURL: "https://api.openai.com/v1"  # Custom endpoint
    role: "chat"                          # Role: chat, autocomplete, edit, apply, embeddings
    temperature: 0.7                      # Creativity (0.0 to 1.0)
    maxTokens: 2048                       # Maximum response length
    topP: 0.9                             # Nucleus sampling
    frequencyPenalty: 0.5                 # Reduce repetition
    presencePenalty: 0.5                  # Encourage new topics
    stopSequences:                        # Stop-generation tokens
      - "\n\n"
      - "```"
    contextWindow: 128000                 # Context window size
    timeout: 30                           # Request timeout in seconds
```

### Rules Configuration
Comprehensive rules configuration:
```yaml
rules:
  - name: "rule-name"                 # Required: unique identifier
    description: "Rule description"   # Human-readable description
    prompt: "Rule instructions"       # Instructions for the AI
    group: "development"              # Optional: group for organization
    enabled: true                     # Enable/disable rule
    priority: 100                     # Higher numbers = higher priority
    conditions:                       # Optional: when to apply the rule
      - fileType: "javascript"
      - filePath: "src/**/*.js"
    exceptions:                       # Optional: when not to apply the rule
      - filePath: "tests/**/*.js"
```

### Tools Configuration
Full tool configuration options:
```yaml
tools:
  - name: "tool-name"               # Required: unique identifier
    tool: "tool-type"               # Required: tool implementation
    description: "Tool description" # Human-readable description
    enabled: true                   # Enable/disable tool
    args:                           # Tool-specific arguments
      param1: "value1"
      param2: "${ENV_VAR}"
    timeout: 30                     # Tool execution timeout (seconds)
    retryAttempts: 3                # Number of retry attempts
    retryDelay: 1000                # Delay between retries (ms)
```

### Preferences Configuration
All available preference options:
```yaml
preferences:
  # Interface settings
  language: "en-US"                # UI language
  theme: "dark"                    # Theme: light, dark, auto
  fontSize: 14                     # Editor font size
  fontFamily: "Fira Code"          # Editor font family
  lineHeight: 1.5                  # Line-height multiplier

  # Editor behavior
  autoSave: true                   # Auto-save files
  autoIndent: true                 # Auto-indent code
  bracketPairColorization: true    # Colorize bracket pairs
  suggestWhileTyping: true         # Show suggestions while typing
  inlineSuggestions: true          # Show inline suggestions
  minAutoCompleteTriggerLength: 3  # Min chars before autocomplete triggers

  # AI behavior
  useContext: true                 # Use code context
  useDocumentation: true           # Use documentation
  useHistory: true                 # Use conversation history
  maxContextTokens: 8192           # Max context tokens
  maxHistoryItems: 10              # Max history items to include

  # Performance
  maxParallelRequests: 5           # Max concurrent AI requests
  requestTimeout: 30               # Request timeout (seconds)
  cacheEnabled: true               # Enable response caching
  cacheTTL: 3600                   # Cache time-to-live (seconds)

  # Privacy
  telemetry: false                 # Send usage data
  crashReports: true               # Send crash reports
  anonymousUsage: false            # Send anonymous usage data
```

## Advanced Configuration Features
### Conditional Configuration

Apply settings based on conditions:

```yaml
# Conditional model selection
models:
  - name: "conditional-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    conditions:
      - projectType: "web-application"
      - fileSize: "<100MB"

# Conditional rules
rules:
  - name: "frontend-rule"
    prompt: "Follow frontend best practices"
    conditions:
      - fileType: "javascript"
      - filePath: "src/components/**/*.js"
```

### Environment-Specific Configuration
Different settings for different environments:
```yaml
# Development environment
development:
  models:
    - name: "dev-model"
      provider: "ollama"
      model: "llama3:8b"
      baseURL: "http://localhost:11434"
  preferences:
    telemetry: false

# Production environment
production:
  models:
    - name: "prod-model"
      provider: "openai"
      model: "gpt-4-turbo"
      apiKey: "${PROD_OPENAI_API_KEY}"
  preferences:
    telemetry: true
```

### Profile-Based Configuration
Switch between different configuration profiles:
```yaml
# Default profile
default:
  preferences:
    theme: "light"

# Developer profile
developer:
  preferences:
    theme: "dark"
    fontSize: 16
  models:
    - name: "dev-model"
      provider: "ollama"
      model: "codellama:7b"

# Presentation profile
presentation:
  preferences:
    theme: "light"
    fontSize: 18
    fontFamily: "Arial"
```

Switch profiles using:

```bash
bytebuddy --profile developer
```

## Configuration Validation and Debugging
### Validation Commands

Check your configuration for errors:

```bash
# Validate configuration
bytebuddy validate-config

# Validate with detailed output
bytebuddy validate-config --verbose

# Validate a specific file
bytebuddy validate-config --file .bytebuddy/config.yaml
```

### Debugging Configuration
Enable debug mode to troubleshoot issues:
```bash
# Enable debug logging
bytebuddy --debug

# Debug a specific component
bytebuddy --debug=models

# Save debug logs
bytebuddy --debug --log-file debug.log
```

### Configuration Inspection
View current configuration:
```bash
# Show current configuration
bytebuddy config show

# Show a specific section
bytebuddy config show models

# Export configuration
bytebuddy config export --file my-config.yaml
```

## Security Configuration
### Secure API Keys

Best practices for API key management:

```yaml
# Use environment variables
models:
  - name: "secure-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"       # Never hardcode keys

  # Use key files
  - name: "file-key-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKeyFile: "~/.keys/openai.key"  # Store keys in secure files
```

### Network Security
Configure secure connections:
```yaml
models:
  - name: "secure-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    baseURL: "https://api.openai.com/v1"    # HTTPS required
    proxy: "http://proxy.company.com:8080"  # Corporate proxy
    sslVerify: true                         # Verify SSL certificates
```

### Data Privacy
Control data sharing and privacy:
```yaml
preferences:
  telemetry: false         # Disable usage tracking
  crashReports: false      # Disable crash reporting
  anonymousUsage: false    # Disable anonymous usage data
  sendCodeSnippets: false  # Don't send code snippets in error reports
```

## Performance Optimization
### Resource Management

Optimize resource usage:

```yaml
preferences:
  maxParallelRequests: 3  # Limit concurrent requests
  requestTimeout: 30      # Set reasonable timeouts
  cacheEnabled: true      # Enable caching
  cacheTTL: 3600          # Cache for 1 hour

# Model-specific timeouts
models:
  - name: "fast-model"
    provider: "anthropic"
    model: "claude-3-haiku"
    timeout: 15           # Shorter timeout for fast models
```

### Context Management
Optimize context usage:
```yaml
preferences:
  maxContextTokens: 8192  # Limit context size
  useContext: true        # Use context when beneficial
  useDocumentation: true  # Include documentation
  useHistory: true        # Include conversation history

# Fine-tune per model
models:
  - name: "context-aware-model"
    provider: "openai"
    model: "gpt-4-turbo"
    maxContextTokens: 16384  # Larger context for capable models
```

## Migration and Compatibility
### Version Migration

Handling configuration changes between versions:

```yaml
# Version-specific configuration
version: "1.0"
compatibility: ">=1.0.0"

# Backward compatibility
legacy:
  oldSetting: "deprecated-value"
  newSetting: "modern-value"
```

### Deprecated Settings
Handle deprecated configuration options:
```yaml
# Old configuration (deprecated)
oldModels:
  - provider: "openai"
    model: "gpt-3.5-turbo"

# New configuration
models:
  - name: "updated-model"
    provider: "openai"
    model: "gpt-3.5-turbo"
```

## Example Configurations
### Minimal Configuration

A simple setup for getting started:

```yaml
# .bytebuddy/config.yaml
models:
  - name: "default"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"

preferences:
  language: "en-US"
  theme: "auto"
```

### Advanced Configuration
Complex setup for power users:
```yaml
# .bytebuddy/config.yaml
version: "1.0"

models:
  - name: "chat-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    role: "chat"
    temperature: 0.7
    maxTokens: 2048
  - name: "autocomplete-model"
    provider: "anthropic"
    model: "claude-3-haiku"
    apiKey: "${ANTHROPIC_API_KEY}"
    role: "autocomplete"
    temperature: 0.3
    maxTokens: 512

rules:
  - name: "security-first"
    description: "Prioritize security in all code"
    prompt: |
      Always follow security best practices:
      1. Input validation and sanitization
      2. Parameterized database queries
      3. Proper authentication and authorization
      4. Secure error handling without exposing sensitive information
  - name: "performance-optimized"
    description: "Write performance-conscious code"
    prompt: |
      Consider performance implications:
      1. Algorithmic efficiency
      2. Memory usage optimization
      3. Database query optimization
      4. Caching strategies

tools:
  - name: "database-client"
    tool: "databaseClient"
    args:
      databaseType: "postgresql"
      connectionString: "${DATABASE_URL}"
    timeout: 30
  - name: "ssh-client"
    tool: "remote_command_execution"
    args:
      serverName: "production-server"
      username: "admin"
      keyPath: "~/.ssh/id_rsa"
    timeout: 60

preferences:
  language: "en-US"
  theme: "dark"
  fontSize: 14
  fontFamily: "Fira Code"
  autoSave: true
  suggestWhileTyping: true
  maxParallelRequests: 3
  cacheEnabled: true
  telemetry: false
```

## Troubleshooting Common Issues
### Configuration Loading Problems

- **File not found**: Verify the file location and permissions
- **Syntax errors**: Check YAML syntax and indentation
- **Validation failures**: Run the `bytebuddy validate-config` command
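When a validator reports syntax errors, the usual culprits in hand-edited YAML are tab characters (which YAML forbids for indentation) and mis-aligned blocks. A small stdlib-only whitespace check you could run over a config file; this is a convenience sketch, not a ByteBuddy feature, and the odd-indent rule is just a heuristic for two-space-indented configs:

```python
def lint_yaml_whitespace(text: str) -> list:
    """Flag lines with tab indentation or odd leading-space counts.

    Returns (line_number, message) pairs. YAML rejects tabs for
    indentation; an odd number of leading spaces often signals a
    mis-aligned block in a config that otherwise indents by two.
    """
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.lstrip(" ").startswith("\t"):
            problems.append((lineno, "tab used for indentation"))
        indent = len(line) - len(line.lstrip(" "))
        if indent % 2 == 1:
            problems.append((lineno, "odd indentation width"))
    return problems
```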
### Model Connection Issues

- **Invalid API keys**: Verify keys are correct and have the proper permissions
- **Network problems**: Check connectivity and proxy settings
- **Rate limiting**: Monitor API usage and implement retries
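Rate-limit errors are usually transient, so the standard remedy is a retry loop with exponential backoff, in the spirit of the `retryAttempts` and `retryDelay` tool settings shown earlier. A generic sketch, not ByteBuddy's built-in behavior:

```python
import time

def with_retries(call, attempts=3, delay_ms=1000, sleep=time.sleep):
    """Run `call`, retrying on failure with exponential backoff.

    The `sleep` function is injectable so tests can skip real waiting.
    In practice you would catch the provider's specific rate-limit
    error rather than bare Exception.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as err:
            last_error = err
            if attempt < attempts - 1:
                # 1s, 2s, 4s, ... for the default delay_ms
                sleep(delay_ms * (2 ** attempt) / 1000.0)
    raise last_error
```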
### Performance Issues

- **Slow responses**: Check timeout settings and network latency
- **High resource usage**: Limit parallel requests and context size
- **Memory problems**: Reduce the context window and cache size
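One way to keep context size and memory in check is to trim conversation history to a token budget before each request, dropping the oldest items first. This sketch shows the idea behind `maxContextTokens` and `maxHistoryItems`; the four-characters-per-token estimate is a rough heuristic, not ByteBuddy's tokenizer:

```python
def trim_history(messages, max_tokens=8192):
    """Keep the most recent messages that fit within the token budget.

    Token counts are estimated at ~4 characters per token; a real
    implementation would use the model's tokenizer.
    """
    kept = []
    total = 0
    for message in reversed(messages):  # walk newest-first
        estimate = max(1, len(message) // 4)
        if total + estimate > max_tokens:
            break
        kept.append(message)
        total += estimate
    return list(reversed(kept))  # restore chronological order
```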
## Next Steps

After mastering configuration, explore these advanced topics:

- **Continuous AI** - Automate development workflows
- **Custom Code RAG** - Optimize code context understanding
- **Plan Mode Guide** - Complex task planning and execution