Apply Role

The apply role is specialized for executing concrete tasks such as code generation, document creation, and data processing, with model settings tuned to each scenario.

Configuration

Configure in config.yaml or ~/.bytebuddy/config.yaml:

```yaml
models:
  - name: "apply-task"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 8192 # ⚠️ IMPORTANT: use the model's maximum supported output length
```

⚠️ IMPORTANT: Always set maxTokens to the model's maximum supported output length!

Apply tasks such as code generation, documentation creation, and data processing require substantial token budgets. Setting maxTokens too low truncates output, potentially leaving tasks incomplete or generated code unfinished.

Note that maxTokens limits the model's output, not its context window. Context windows for common models:

  • GPT-4: 8,192 tokens
  • GPT-4 Turbo: 128,000 tokens
  • Claude 3 models: 200,000 tokens
  • Gemini Pro: 32,768 tokens

The maximum output length is usually much smaller than the context window (for example, 4,096 tokens for GPT-4 Turbo and the Claude 3 models), so check your provider's documentation and use the largest output value the model supports.
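
As a rough sketch of the budget arithmetic: the usable output length is the smaller of the model's output cap and whatever remains of the context window after the prompt. The limits below are illustrative placeholders, not authoritative values:

```python
# Sketch: derive a safe maxTokens value from a model's limits and the
# prompt size. The numbers here are illustrative placeholders; check your
# provider's documentation for the real context and output limits.
MODEL_LIMITS = {
    "gpt-4": {"context": 8192, "max_output": 8192},
    "claude-3-sonnet": {"context": 200_000, "max_output": 4096},
}

def safe_max_tokens(model: str, prompt_tokens: int) -> int:
    limits = MODEL_LIMITS[model]
    # Output can never exceed the model's output cap, nor what is left
    # of the context window once the prompt is accounted for.
    return max(0, min(limits["max_output"], limits["context"] - prompt_tokens))
```

For instance, a 2,000-token prompt against the placeholder gpt-4 limits leaves a 6,192-token output budget.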

Core Application Scenarios

Code Generation

```yaml
models:
  - name: "code-generator"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 4096
```

Document Creation

```yaml
models:
  - name: "doc-creator"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.4
      maxTokens: 6144
```

Data Processing

```yaml
models:
  - name: "data-processor"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

Specialized Application Configuration

Data Analysis Application

```yaml
models:
  - name: "data-analyst"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

Business Intelligence Application

```yaml
models:
  - name: "business-intelligence"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```

Customer Service Application

```yaml
models:
  - name: "customer-service"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 2048
```

Technical Support Application

```yaml
models:
  - name: "technical-support"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 4096
```

Advanced Application Configuration

Code Review Application

```yaml
models:
  - name: "code-reviewer"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

Project Management Application

```yaml
models:
  - name: "project-manager"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```

Learning Tutor Application

```yaml
models:
  - name: "learning-tutor"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 4096
```

Best Practices

1. Task Specialization

  • Configure specialized roles for specific tasks
  • Use domain-specific models
  • Optimize temperature parameters to match task requirements
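
The per-task tuning described above can be captured in a small lookup table. The task names and values here mirror the example configurations on this page and are starting points, not fixed rules:

```python
# Sketch: per-task completion defaults mirroring the example
# configurations on this page. Adjust the values for your own workloads.
TASK_DEFAULTS = {
    "code-generation": {"temperature": 0.3, "maxTokens": 4096},
    "doc-creation":    {"temperature": 0.4, "maxTokens": 6144},
    "data-processing": {"temperature": 0.2, "maxTokens": 8192},
}

def options_for(task: str) -> dict:
    # Fall back to conservative defaults for unknown task types.
    return TASK_DEFAULTS.get(task, {"temperature": 0.3, "maxTokens": 4096})
```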

2. Context Management

  • Provide sufficient task context
  • Use structured input formats
  • Maintain relevant historical information
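
One way to provide structured input is a small JSON payload that carries the task, its context, and any relevant history. The field names below are hypothetical, not a ByteBuddy API:

```python
import json

# Sketch: build a structured task payload. The "task"/"context"/"history"
# field names are hypothetical conventions, not part of any real API.
def build_task_input(task: str, context: str, history=()) -> str:
    return json.dumps({
        "task": task,
        "context": context,
        "history": list(history),  # keep only the relevant prior turns
    }, ensure_ascii=False)
```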

3. Quality Assurance

  • Set output validation rules
  • Use multi-step verification processes
  • Enable error handling mechanisms
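
A minimal validation pass can catch truncated or empty output before it is applied. OpenAI-style responses report finish_reason == "length" when output was cut off by maxTokens; other providers expose similar fields (e.g. Anthropic's stop_reason):

```python
# Sketch: simple output validation. A finish reason of "length" means the
# response hit the maxTokens cap and is likely incomplete.
def validate_output(text: str, finish_reason: str) -> list[str]:
    errors = []
    if finish_reason == "length":
        errors.append("output truncated: raise maxTokens")
    if not text.strip():
        errors.append("empty output")
    return errors
```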

4. Performance Optimization

  • Choose appropriate model size
  • Optimize context length
  • Use reasonable token limits
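
When sizing token limits, a common rough heuristic is about four characters per token for English text; use a real tokenizer (such as tiktoken for OpenAI models) when accuracy matters:

```python
# Coarse rule of thumb: ~4 characters per token for English text.
# This is only an estimate; use a real tokenizer for exact counts.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)
```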

Environment Variables

```bash
# ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
```
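
Before launching, it is worth verifying that every referenced key is actually set. A quick check (key names taken from the examples above):

```python
import os

# Key names from the configuration examples on this page.
REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"]

def missing_keys(env=None) -> list[str]:
    # Return the names of required keys that are unset or empty.
    env = os.environ if env is None else env
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```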

With a properly configured apply role, you can build specialized AI assistants that deliver precise, efficient solutions for specific tasks and scenarios.