Understanding Configs

This guide covers all configuration options available in ByteBuddy so you can fine-tune your AI-assisted development environment for performance and customization.

Configuration File Locations

ByteBuddy looks for configuration files in the following order:

  1. Project-Level: .bytebuddy/config.yaml in your project root
  2. User-Level: ~/.bytebuddy/config.yaml in your home directory
  3. System-Level: /etc/bytebuddy/config.yaml (if applicable)
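
The first-match lookup above can be sketched as a small helper (the paths mirror the list; whether ByteBuddy picks the first file or merges all three is an implementation detail not confirmed here):

```python
from pathlib import Path

# Candidate config locations, highest precedence first
# (project -> user -> system, as described above).
CANDIDATES = [
    Path(".bytebuddy/config.yaml"),           # project-level
    Path.home() / ".bytebuddy/config.yaml",   # user-level
    Path("/etc/bytebuddy/config.yaml"),       # system-level
]

def first_existing(paths):
    """Return the first config path that exists, or None."""
    for p in paths:
        if p.is_file():
            return p
    return None
```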

Project-Level Configuration

The most common configuration location:

yaml
# .bytebuddy/config.yaml
# This file is specific to your project
models:
  # Project-specific model configurations
rules:
  # Project-specific rules
tools:
  # Project-specific tools

User-Level Configuration

Global settings that apply to all projects:

yaml
# ~/.bytebuddy/config.yaml
# These settings apply to all projects
preferences:
  language: "en-US"
  theme: "dark"
  fontSize: 14

# Default models for new projects
models:
  - name: "default-chat"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"

Complete Configuration Schema

Models Configuration

Detailed model configuration options:

yaml
models:
  - name: "model-name" # Required: Unique identifier
    provider: "openai" # Required: Provider name
    model: "gpt-4-turbo" # Required: Model identifier
    apiKey: "${API_KEY}" # API key (use env vars)
    baseURL: "https://api.openai.com/v1" # Custom endpoint
    role: "chat" # Role: chat, autocomplete, edit, apply, embeddings
    temperature: 0.7 # Creativity (0.0 to 1.0)
    maxTokens: 2048 # Maximum response length
    topP: 0.9 # Nucleus sampling
    frequencyPenalty: 0.5 # Reduce repetition
    presencePenalty: 0.5 # Encourage new topics
    stopSequences: # Stop generation tokens
      - "\n\n"
      - "```"
    contextWindow: 128000 # Context window size
    timeout: 30 # Request timeout in seconds
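
The schema above marks name, provider, and model as required; a loader might enforce that with a check like the following (field names come from the schema, the error wording is hypothetical):

```python
REQUIRED_MODEL_FIELDS = ("name", "provider", "model")

def validate_model(entry: dict) -> list[str]:
    """Return a list of error messages for a single model entry."""
    errors = [f"missing required field: {f}"
              for f in REQUIRED_MODEL_FIELDS if f not in entry]
    t = entry.get("temperature")
    if t is not None and not (0.0 <= t <= 1.0):  # range from the schema comment
        errors.append("temperature must be between 0.0 and 1.0")
    return errors
```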

Rules Configuration

Comprehensive rules configuration:

yaml
rules:
  - name: "rule-name" # Required: Unique identifier
    description: "Rule description" # Human-readable description
    prompt: "Rule instructions" # Instructions for the AI
    group: "development" # Optional: Group for organization
    enabled: true # Enable/disable rule
    priority: 100 # Higher numbers = higher priority
    conditions: # Optional: When to apply rule
      - fileType: "javascript"
      - filePath: "src/**/*.js"
    exceptions: # Optional: When not to apply rule
      - filePath: "tests/**/*.js"
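
The conditions/exceptions mechanics above can be sketched with glob matching (the keys follow the schema; the "any condition matches, unless any exception matches" semantics are an assumption):

```python
from fnmatch import fnmatch

def rule_applies(rule: dict, file_path: str, file_type: str) -> bool:
    """Decide whether a rule applies to a file: exceptions veto first,
    then any matching condition enables the rule."""
    if not rule.get("enabled", True):
        return False
    for exc in rule.get("exceptions", []):
        if "filePath" in exc and fnmatch(file_path, exc["filePath"]):
            return False
    conditions = rule.get("conditions", [])
    if not conditions:          # no conditions: rule always applies
        return True
    for cond in conditions:
        if cond.get("fileType") == file_type:
            return True
        if "filePath" in cond and fnmatch(file_path, cond["filePath"]):
            return True
    return False
```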

Tools Configuration

Full tool configuration options:

yaml
tools:
  - name: "tool-name" # Required: Unique identifier
    tool: "tool-type" # Required: Tool implementation
    description: "Tool description" # Human-readable description
    enabled: true # Enable/disable tool
    args: # Tool-specific arguments
      param1: "value1"
      param2: "${ENV_VAR}"
    timeout: 30 # Tool execution timeout
    retryAttempts: 3 # Number of retry attempts
    retryDelay: 1000 # Delay between retries (ms)
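
The retryAttempts/retryDelay pair above implies a loop like this (a minimal sketch; whether ByteBuddy counts the first call as an attempt or uses backoff is not specified here):

```python
import time

def run_with_retries(fn, retry_attempts: int = 3, retry_delay_ms: int = 1000):
    """Call fn, allowing up to retry_attempts tries with retry_delay_ms
    between failures; re-raise the last error if all tries fail."""
    last_exc = None
    for attempt in range(retry_attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < retry_attempts - 1:
                time.sleep(retry_delay_ms / 1000.0)
    raise last_exc
```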

Preferences Configuration

All available preference options:

yaml
preferences:
  # Interface settings
  language: "en-US" # UI language
  theme: "dark" # Theme: light, dark, auto
  fontSize: 14 # Editor font size
  fontFamily: "Fira Code" # Editor font family
  lineHeight: 1.5 # Line height multiplier

  # Editor behavior
  autoSave: true # Auto-save files
  autoIndent: true # Auto-indent code
  bracketPairColorization: true # Colorize bracket pairs
  suggestWhileTyping: true # Show suggestions while typing
  inlineSuggestions: true # Show inline suggestions
  minAutoCompleteTriggerLength: 3 # Min chars for autocomplete

  # AI behavior
  useContext: true # Use code context
  useDocumentation: true # Use documentation
  useHistory: true # Use conversation history
  maxContextTokens: 8192 # Max context tokens
  maxHistoryItems: 10 # Max history items to include

  # Performance
  maxParallelRequests: 5 # Max concurrent AI requests
  requestTimeout: 30 # Request timeout (seconds)
  cacheEnabled: true # Enable response caching
  cacheTTL: 3600 # Cache time-to-live (seconds)

  # Privacy
  telemetry: false # Send usage data
  crashReports: true # Send crash reports
  anonymousUsage: false # Send anonymous usage data
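
The cacheEnabled/cacheTTL behavior above amounts to a time-bounded cache; a minimal sketch (keying and eviction strategy are assumptions, not ByteBuddy internals):

```python
import time

class TTLCache:
    """Minimal response cache: entries expire cacheTTL seconds after insertion."""

    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:  # expired: drop and miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```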

Advanced Configuration Features

Conditional Configuration

Apply settings based on conditions:

yaml
# Conditional model selection
models:
  - name: "conditional-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    conditions:
      - projectType: "web-application"
      - fileSize: "<100MB"

# Conditional rules
rules:
  - name: "frontend-rule"
    prompt: "Follow frontend best practices"
    conditions:
      - fileType: "javascript"
      - filePath: "src/components/**/*.js"

Environment-Specific Configuration

Different settings for different environments:

yaml
# Development environment
development:
  models:
    - name: "dev-model"
      provider: "ollama"
      model: "llama3:8b"
      baseURL: "http://localhost:11434"
  preferences:
    telemetry: false

# Production environment
production:
  models:
    - name: "prod-model"
      provider: "openai"
      model: "gpt-4-turbo"
      apiKey: "${PROD_OPENAI_API_KEY}"
  preferences:
    telemetry: true
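
Resolving one of these environment sections presumably overlays it onto the shared settings; a sketch of that merge (deep-merge semantics and the fixed section names are assumptions):

```python
ENV_SECTIONS = ("development", "production")

def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay values win."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def resolve_environment(config: dict, env: str) -> dict:
    """Overlay the named environment section onto the shared settings."""
    shared = {k: v for k, v in config.items() if k not in ENV_SECTIONS}
    return deep_merge(shared, config.get(env, {}))
```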

Profile-Based Configuration

Switch between different configuration profiles:

yaml
# Default profile
default:
  preferences:
    theme: "light"

# Developer profile
developer:
  preferences:
    theme: "dark"
    fontSize: 16
  models:
    - name: "dev-model"
      provider: "ollama"
      model: "codellama:7b"

# Presentation profile
presentation:
  preferences:
    theme: "light"
    fontSize: 18
    fontFamily: "Arial"

Switch profiles using:

bash
bytebuddy --profile developer

Configuration Validation and Debugging

Validation Commands

Check your configuration for errors:

bash
# Validate configuration
bytebuddy validate-config

# Validate with detailed output
bytebuddy validate-config --verbose

# Validate specific file
bytebuddy validate-config --file .bytebuddy/config.yaml

Debugging Configuration

Enable debug mode to troubleshoot issues:

bash
# Enable debug logging
bytebuddy --debug

# Debug specific component
bytebuddy --debug=models

# Save debug logs
bytebuddy --debug --log-file debug.log

Configuration Inspection

View current configuration:

bash
# Show current configuration
bytebuddy config show

# Show specific section
bytebuddy config show models

# Export configuration
bytebuddy config export --file my-config.yaml

Security Configuration

Secure API Keys

Best practices for API key management:

yaml
# Use environment variables
models:
  - name: "secure-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}" # Never hardcode keys

  # Use key files
  - name: "file-key-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKeyFile: "~/.keys/openai.key" # Store keys in secure files
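
Resolving a key from either form above might look like this (the ${VAR} expansion and the apiKey-before-apiKeyFile precedence are assumptions about the loader):

```python
import os
from pathlib import Path

def resolve_api_key(entry: dict):
    """Resolve apiKey ("${VAR}" or literal) or fall back to apiKeyFile."""
    key = entry.get("apiKey")
    if key and key.startswith("${") and key.endswith("}"):
        return os.environ.get(key[2:-1])   # expand from the environment
    if key:
        return key                         # literal key (discouraged)
    key_file = entry.get("apiKeyFile")
    if key_file:
        return Path(key_file).expanduser().read_text().strip()
    return None
```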

Network Security

Configure secure connections:

yaml
models:
  - name: "secure-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    baseURL: "https://api.openai.com/v1" # HTTPS required
    proxy: "http://proxy.company.com:8080" # Corporate proxy
    sslVerify: true # Verify SSL certificates

Data Privacy

Control data sharing and privacy:

yaml
preferences:
  telemetry: false # Disable usage tracking
  crashReports: false # Disable crash reporting
  anonymousUsage: false # Disable anonymous usage data
  sendCodeSnippets: false # Don't send code snippets in error reports

Performance Optimization

Resource Management

Optimize resource usage:

yaml
preferences:
  maxParallelRequests: 3 # Limit concurrent requests
  requestTimeout: 30 # Set reasonable timeouts
  cacheEnabled: true # Enable caching
  cacheTTL: 3600 # Cache for 1 hour

# Model-specific timeouts
models:
  - name: "fast-model"
    provider: "anthropic"
    model: "claude-3-haiku"
    timeout: 15 # Shorter timeout for fast models

Context Management

Optimize context usage:

yaml
preferences:
  maxContextTokens: 8192 # Limit context size
  useContext: true # Use context when beneficial
  useDocumentation: true # Include documentation
  useHistory: true # Include conversation history

# Fine-tune per model
models:
  - name: "context-aware-model"
    provider: "openai"
    model: "gpt-4-turbo"
    maxContextTokens: 16384 # Larger context for capable models
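
Staying under maxContextTokens means trimming older history first; a sketch using a rough chars/4 token estimate (an approximation, not ByteBuddy's actual tokenizer):

```python
def trim_history(messages: list[str], max_context_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget,
    preserving chronological order."""
    estimate = lambda text: max(1, len(text) // 4)  # crude token estimate
    kept, used = [], 0
    for msg in reversed(messages):      # newest first
        cost = estimate(msg)
        if used + cost > max_context_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```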

Migration and Compatibility

Version Migration

Handling configuration changes between versions:

yaml
# Version-specific configurations
version: "1.0"
compatibility: ">=1.0.0"

# Backward compatibility
legacy:
  oldSetting: "deprecated-value"
  newSetting: "modern-value"

Deprecated Settings

Handle deprecated configuration options:

yaml
# Old configuration (deprecated)
oldModels:
  - provider: "openai"
    model: "gpt-3.5-turbo"

# New configuration
models:
  - name: "updated-model"
    provider: "openai"
    model: "gpt-3.5-turbo"
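
A migration step for the deprecated form above could rewrite oldModels entries into the current schema (the auto-generated name is hypothetical; a real migration would follow the tool's own naming):

```python
def migrate_config(config: dict) -> dict:
    """Move deprecated oldModels entries into the current models list,
    synthesizing a name where the old entry had none."""
    migrated = dict(config)
    old = migrated.pop("oldModels", None)
    if old:
        models = migrated.setdefault("models", [])
        for i, entry in enumerate(old):
            models.append({"name": f"migrated-model-{i}", **entry})
    return migrated
```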

Example Configurations

Minimal Configuration

Simple setup for getting started:

yaml
# .bytebuddy/config.yaml
models:
  - name: "default"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"

preferences:
  language: "en-US"
  theme: "auto"

Advanced Configuration

Complex setup for power users:

yaml
# .bytebuddy/config.yaml
version: "1.0"

models:
  - name: "chat-model"
    provider: "openai"
    model: "gpt-4-turbo"
    apiKey: "${OPENAI_API_KEY}"
    role: "chat"
    temperature: 0.7
    maxTokens: 2048

  - name: "autocomplete-model"
    provider: "anthropic"
    model: "claude-3-haiku"
    apiKey: "${ANTHROPIC_API_KEY}"
    role: "autocomplete"
    temperature: 0.3
    maxTokens: 512

rules:
  - name: "security-first"
    description: "Prioritize security in all code"
    prompt: |
      Always follow security best practices:
      1. Input validation and sanitization
      2. Parameterized database queries
      3. Proper authentication and authorization
      4. Secure error handling without exposing sensitive information

  - name: "performance-optimized"
    description: "Write performance-conscious code"
    prompt: |
      Consider performance implications:
      1. Algorithmic efficiency
      2. Memory usage optimization
      3. Database query optimization
      4. Caching strategies

tools:
  - name: "database-client"
    tool: "databaseClient"
    args:
      databaseType: "postgresql"
      connectionString: "${DATABASE_URL}"
    timeout: 30

  - name: "ssh-client"
    tool: "remote_command_execution"
    args:
      serverName: "production-server"
      username: "admin"
      keyPath: "~/.ssh/id_rsa"
    timeout: 60

preferences:
  language: "en-US"
  theme: "dark"
  fontSize: 14
  fontFamily: "Fira Code"
  autoSave: true
  suggestWhileTyping: true
  maxParallelRequests: 3
  cacheEnabled: true
  telemetry: false

Troubleshooting Common Issues

Configuration Loading Problems

  1. File Not Found: Verify file location and permissions
  2. Syntax Errors: Check YAML syntax and indentation
  3. Validation Failures: Run the bytebuddy validate-config command

Model Connection Issues

  1. Invalid API Keys: Verify keys are correct and have proper permissions
  2. Network Problems: Check connectivity and proxy settings
  3. Rate Limiting: Monitor API usage and implement retries

Performance Issues

  1. Slow Responses: Check timeout settings and network latency
  2. High Resource Usage: Limit parallel requests and context size
  3. Memory Problems: Reduce context window and cache size

Next Steps

After mastering configuration, explore these advanced topics: