LLM Providers
Multi-provider AI integration with unified interfaces and advanced features.
Overview
Fiberwise provides a unified interface for working with multiple Large Language Model (LLM) providers. This abstraction layer allows you to switch between providers, compare outputs, and leverage the best features of each platform without changing your code.
🔗 Supported Providers
🟢 OpenAI
GPT-4, GPT-3.5-Turbo
Function calling, structured output
🟣 Anthropic
Claude 3.5 Sonnet, Claude 3 Haiku
Large context windows, safety-focused
🔵 Google
Gemini Pro, Gemini Flash
Multimodal capabilities
🤗 Hugging Face
200,000+ models, free tier
Research models, embeddings
🌐 OpenRouter
100+ models, unified API
Cost optimization, free options
☁️ Cloudflare
Edge computing, low latency
Global deployment, generous free tier
🟠 Local Models
Ollama, Custom endpoints
Privacy-focused, on-premise
Provider Management
Add and manage LLM providers using the Fiberwise CLI:
🔧 Adding Providers
# Add OpenAI provider
fiber account add-provider --provider openai --api-key "sk-..." --model gpt-4 --set-default
# Add Anthropic provider
fiber account add-provider --provider anthropic --api-key "sk-ant-..." --model claude-3-5-sonnet-20241022 --set-default
# Add Google provider
fiber account add-provider --provider google --api-key "AIza..." --model gemini-pro
# Add Hugging Face provider (free tier available)
fiber account add-provider --provider huggingface --api-key "hf_..." --model "meta-llama/Llama-2-7b-chat-hf"
# Add OpenRouter provider (with free model)
fiber account add-provider --provider openrouter --api-key "sk-or-..." --model "meta-llama/llama-3.1-8b-instruct:free" --site-url "https://yourapp.com"
# Add Cloudflare Workers AI provider
fiber account add-provider --provider cloudflare --api-key "xxx" --account-id "xxx" --model "@cf/meta/llama-3.1-8b-instruct"
📋 Listing Providers
# View all configured providers
fiber account list-providers
# View provider configurations
fiber account list-configs
Key Features
🔄 Unified Interface
Single API for all providers with consistent request/response formats
⚡ Automatic Failover
Fall back to secondary providers when the primary is unavailable
📊 Model Selection
Choose optimal models for specific tasks (speed vs accuracy)
🔒 Secure Key Management
Encrypted storage of API keys with scoped access control
📈 Usage Tracking
Monitor token usage and costs across all providers
🎯 Function Calling
Structured output generation with JSON schema validation
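The automatic-failover behavior described above can be sketched as a thin wrapper that tries providers in priority order. Note that `FailoverLLM` and `ProviderError` are illustrative names, not part of fiberwise_sdk; the `.complete()` interface mirrors the Python SDK example on this page.

```python
# Illustrative sketch of provider failover -- not a fiberwise_sdk API.
class ProviderError(Exception):
    """Raised when every configured provider has failed."""


class FailoverLLM:
    """Try each (name, client) pair in order; return the first success."""

    def __init__(self, providers):
        self.providers = providers  # list of (name, client) tuples, in priority order

    def complete(self, prompt, **kwargs):
        errors = []
        for name, client in self.providers:
            try:
                return client.complete(prompt=prompt, **kwargs)
            except Exception as exc:
                # Record the failure and move on to the next provider
                errors.append((name, exc))
        raise ProviderError(f"All providers failed: {errors}")
```

A production version would typically distinguish retryable errors (timeouts, rate limits) from permanent ones (invalid key) and only fail over on the former.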
API Usage
Use LLM providers in your agents and functions:
🐍 Python SDK
from fiberwise_sdk import FiberAgent, LLMProvider

class MyAgent(FiberAgent):
    def __init__(self):
        super().__init__()
        # Provider is automatically injected
        self.llm = self.get_llm_provider()

    def process(self, input_data):
        # Use the configured default provider
        response = self.llm.complete(
            prompt=f"Analyze this data: {input_data}",
            model="gpt-4"  # Optional: override default
        )
        return {"analysis": response}
🌐 REST API
curl -X POST http://localhost:7001/api/v1/llm-providers/complete \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
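The same request can be built from Python with the standard library's urllib, mirroring the curl example above; the host, path, and payload come from that example, and `completion_request` is an illustrative helper, not a fiberwise_sdk function.

```python
# Sketch: build the completion request shown in the curl example with urllib.
import json
import urllib.request


def completion_request(base_url, api_key, provider, model, messages):
    """Return an urllib Request for POST /api/v1/llm-providers/complete."""
    body = json.dumps({
        "provider": provider,
        "model": model,
        "messages": messages,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/llm-providers/complete",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = completion_request(
    "http://localhost:7001", "your-api-key",
    "openai", "gpt-4", [{"role": "user", "content": "Hello!"}],
)
# urllib.request.urlopen(req) would send it against a running instance.
```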
Configuration
Provider configuration options and environment variables:
⚙️ Environment Variables
# Provider API Keys
export OPENAI_API_KEY="sk-your-key-here"
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
export GOOGLE_API_KEY="AIza-your-key-here"
# Default Provider
export FIBERWISE_DEFAULT_LLM_PROVIDER="anthropic"
export FIBERWISE_DEFAULT_MODEL="claude-3-sonnet"
📄 Configuration File
# fiberwise.yaml
llm_providers:
  default: "anthropic"
  providers:
    openai:
      api_key: "${OPENAI_API_KEY}"
      models: ["gpt-4", "gpt-3.5-turbo"]
    anthropic:
      api_key: "${ANTHROPIC_API_KEY}"
      models: ["claude-3-sonnet", "claude-3-haiku"]
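The `${VAR}` placeholders in the configuration file are expanded from the environment at load time. A minimal sketch of that substitution, assuming simple `${NAME}` syntax (the actual loader's rules may differ, e.g. around defaults or nested values):

```python
# Sketch: expand ${NAME} placeholders from environment variables.
import os
import re


def expand_env(value: str) -> str:
    """Replace ${NAME} with os.environ[NAME]; leave unknown names untouched."""
    def repl(match):
        name = match.group(1)
        return os.environ.get(name, match.group(0))
    return re.sub(r"\$\{(\w+)\}", repl, value)


os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env("${OPENAI_API_KEY}"))  # → sk-demo
```

Leaving unknown names untouched (rather than substituting an empty string) makes missing-key mistakes visible in logs instead of silently producing blank credentials.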
Best Practices
🔐 Security
- Store API keys as environment variables
- Use scoped API keys when available
- Rotate keys regularly
- Never commit keys to version control
💰 Cost Optimization
- Choose appropriate models for tasks
- Monitor token usage patterns
- Cache responses when possible
- Use cheaper models for simple tasks
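Response caching, mentioned above, can be as simple as an in-memory map keyed on the model and prompt. A sketch; `CachedLLM` is an illustrative wrapper, not a fiberwise_sdk class, and real deployments would add TTLs and persistent storage:

```python
# Sketch: cache identical completions so repeat prompts cost nothing.
import hashlib


class CachedLLM:
    """Wrap a provider client and memoize completions by (model, prompt)."""

    def __init__(self, client):
        self.client = client
        self._cache = {}

    def complete(self, prompt, model="gpt-3.5-turbo"):
        # Hash the key so arbitrarily long prompts stay cheap to store
        key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
        if key not in self._cache:
            self._cache[key] = self.client.complete(prompt=prompt, model=model)
        return self._cache[key]
```

Caching is only safe for deterministic use cases; with temperature above zero, identical prompts are expected to produce different outputs.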
⚡ Performance
- Configure failover providers
- Set appropriate timeout values
- Use streaming for long responses
- Implement retry logic with exponential backoff
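The retry-with-exponential-backoff pattern above can be sketched as a small helper; `with_backoff` is illustrative, not a fiberwise_sdk utility, and the jitter factor is a common but arbitrary choice:

```python
# Sketch: retry a provider call with exponential backoff plus jitter.
import random
import time


def with_backoff(fn, max_retries=4, base_delay=0.5, max_delay=8.0):
    """Call fn(); on exception, sleep base_delay * 2**attempt (capped) and retry."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Jitter spreads out retries from many clients hitting one outage
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

In practice you would retry only on transient errors (timeouts, HTTP 429/5xx) and fail fast on authentication or validation errors.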