Apply Role
The apply role is specialized for executing specific tasks and application scenarios, providing targeted intelligent solutions.
Configuration
Configure the apply role in `config.yaml` or `~/.bytebuddy/config.yaml`:
```yaml
models:
  - name: "apply-task"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 8192 # ⚠️ IMPORTANT: set maxTokens to the model's maximum supported value
```
Apply roles handle complex tasks such as code generation, documentation creation, and data processing, all of which require substantial token budgets. Setting `maxTokens` too low truncates the output, leaving tasks incomplete or generated code unfinished.
Recommended maximums for common models:
- GPT-4: 8,192 tokens
- GPT-4 Turbo: 128,000 tokens
- Claude 3 models: 200,000 tokens
- Gemini Pro: 32,768 tokens

Note that the larger figures above are context-window sizes; a model's maximum number of output tokens is often lower (for example, 4,096 for GPT-4 Turbo and the Claude 3 models). Consult the provider's documentation and set `maxTokens` to the largest output value the model supports, so tasks can complete without truncation.
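Since output limits vary by model, a small lookup helper can keep a requested `maxTokens` within bounds. The sketch below is illustrative only: the `MODEL_MAX_TOKENS` table and `clamp_max_tokens` function are hypothetical, not part of ByteBuddy, and the per-model limits should be checked against each provider's documentation.

```python
# Hypothetical helper: cap a requested maxTokens at the model's output limit.
MODEL_MAX_TOKENS = {
    "gpt-4": 8192,
    "gpt-4-turbo": 4096,      # output limit; the 128k figure is the context window
    "claude-3-sonnet": 4096,  # output limit; the 200k figure is the context window
}

def clamp_max_tokens(model: str, requested: int, default: int = 2048) -> int:
    """Return the requested value capped at the model's maximum output tokens."""
    limit = MODEL_MAX_TOKENS.get(model, default)
    return min(requested, limit)

print(clamp_max_tokens("gpt-4", 16000))          # capped to the model's limit
print(clamp_max_tokens("claude-3-sonnet", 2048)) # already within bounds
```

Unknown models fall back to a conservative default rather than failing, which keeps the helper safe to use with new providers.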
Core Application Scenarios
Code Generation
```yaml
models:
  - name: "code-generator"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 4096
```
Document Creation
```yaml
models:
  - name: "doc-creator"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.4
      maxTokens: 6144
```
Data Processing
```yaml
models:
  - name: "data-processor"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```
Specialized Application Configuration
Data Analysis Application
```yaml
models:
  - name: "data-analyst"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```
Business Intelligence Application
```yaml
models:
  - name: "business-intelligence"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```
Customer Service Application
```yaml
models:
  - name: "customer-service"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 2048
```
Technical Support Application
```yaml
models:
  - name: "technical-support"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 4096
```
Advanced Application Configuration
Code Review Application
```yaml
models:
  - name: "code-reviewer"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```
Project Management Application
```yaml
models:
  - name: "project-manager"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```
Learning Tutor Application
```yaml
models:
  - name: "learning-tutor"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 4096
```
Best Practices
1. Task Specialization
- Configure specialized roles for specific tasks
- Use domain-specific models
- Optimize temperature parameters to match task requirements
2. Context Management
- Provide sufficient task context
- Use structured input formats
- Maintain relevant historical information
3. Quality Assurance
- Set output validation rules
- Use multi-step verification processes
- Enable error handling mechanisms
4. Performance Optimization
- Choose appropriate model size
- Optimize context length
- Use reasonable token limits
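The configurations above follow a consistent pattern: deterministic tasks (technical support, data processing) use low temperatures, while conversational or creative tasks use higher ones. That pattern can be encoded as a lookup, sketched below; the `TASK_PROFILES` table and `completion_options` helper are illustrative and not part of any ByteBuddy API.

```python
# Illustrative mapping from task type to completion options, mirroring the
# configurations shown earlier on this page (not a ByteBuddy API).
TASK_PROFILES = {
    "technical-support": {"temperature": 0.1, "maxTokens": 4096},
    "data-processing":   {"temperature": 0.2, "maxTokens": 8192},
    "code-generation":   {"temperature": 0.3, "maxTokens": 4096},
    "doc-creation":      {"temperature": 0.4, "maxTokens": 6144},
    "customer-service":  {"temperature": 0.6, "maxTokens": 2048},
}

def completion_options(task: str) -> dict:
    """Look up defaultCompletionOptions for a task, with a general-purpose fallback."""
    return TASK_PROFILES.get(task, {"temperature": 0.5, "maxTokens": 8192})

print(completion_options("code-generation"))
```

Centralizing the profiles this way makes it easy to audit whether each role's temperature actually matches its task's need for determinism.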
Environment Variables
```bash
# ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
```

Through proper apply role configuration, you can create specialized AI assistants that provide precise and efficient solutions for specific tasks and scenarios.
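The `${VAR}` references in the `apiKey` fields above are resolved from these environment variables. A minimal sketch of how such placeholder expansion works is shown below; the `expand_env` helper is hypothetical, and ByteBuddy's actual resolution logic may differ.

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment values; fail loudly if unset."""
    def resolve(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name!r} is not set")
        return os.environ[name]
    return re.sub(r"\$\{(\w+)\}", resolve, value)

os.environ["OPENAI_API_KEY"] = "sk-example"  # demo value only
print(expand_env("apiKey: ${OPENAI_API_KEY}"))  # → apiKey: sk-example
```

Raising on an unset variable, rather than silently substituting an empty string, surfaces misconfigured shells before any API call fails with a confusing authentication error.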