# Apply Role
The apply role is specialized for executing specific tasks and application scenarios, letting you configure models that are tuned to each use case.
## Configuration

Configure in `config.yaml` or `~/.bytebuddy/config.yaml`:

```yaml
models:
  - name: "apply-task"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 4096
```

## Core Application Scenarios
### Code Generation

```yaml
models:
  - name: "code-generator"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 4096
```

### Document Creation
```yaml
models:
  - name: "doc-creator"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.4
      maxTokens: 6144
```

### Data Processing
```yaml
models:
  - name: "data-processor"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

## Specialized Application Configuration
### Data Analysis Application

```yaml
models:
  - name: "data-analyst"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

### Business Intelligence Application
```yaml
models:
  - name: "business-intelligence"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```

### Customer Service Application
```yaml
models:
  - name: "customer-service"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 2048
```

### Technical Support Application
```yaml
models:
  - name: "technical-support"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.1
      maxTokens: 4096
```

## Advanced Application Configuration
### Code Review Application

```yaml
models:
  - name: "code-reviewer"
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2
      maxTokens: 8192
```

### Project Management Application
```yaml
models:
  - name: "project-manager"
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 6144
```

### Learning Tutor Application
```yaml
models:
  - name: "learning-tutor"
    provider: "google"
    model: "gemini-pro"
    apiKey: "${GOOGLE_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6
      maxTokens: 4096
```

## Best Practices
### 1. Task Specialization

- Configure specialized roles for specific tasks (a consolidated example appears at the end of this section)
- Use domain-specific models
- Tune temperature settings to match task requirements

### 2. Context Management

- Provide sufficient task context
- Use structured input formats
- Maintain relevant historical information

### 3. Quality Assurance

- Set output validation rules
- Use multi-step verification processes
- Enable error handling mechanisms

### 4. Performance Optimization

- Choose an appropriately sized model
- Optimize context length
- Use reasonable token limits
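As a rough illustration of these practices, the sketch below pairs a low-temperature model for deterministic work (code, data) with a higher-temperature model for drafting and tutoring, using only the fields already shown on this page. The model names, temperatures, and token limits here are illustrative assumptions, not recommendations from this guide; adjust them to your provider and workload.

```yaml
# Illustrative sketch only: applies the best practices above with assumed values.
models:
  - name: "apply-precise"              # deterministic tasks: code generation, data processing
    provider: "openai"
    model: "gpt-4"
    apiKey: "${OPENAI_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.2                 # low temperature to match precise task requirements
      maxTokens: 4096                  # reasonable token limit for the task

  - name: "apply-creative"             # open-ended tasks: document creation, tutoring
    provider: "anthropic"
    model: "claude-3-sonnet"
    apiKey: "${ANTHROPIC_API_KEY}"
    roles: ["apply"]
    defaultCompletionOptions:
      temperature: 0.6                 # higher temperature for more varied output
      maxTokens: 6144
```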
## Environment Variables

```bash
# ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
```

Through proper apply role configuration, you can create specialized AI assistants that provide precise and efficient solutions for specific tasks and scenarios.