Custom Providers
Emby lets you plug in any OpenAI-compatible provider behind the same `/v1` endpoint.
Use this when you want to:
- Connect a self-hosted model (vLLM, SGLang, LM Studio, Ollama, etc.)
- Route to a third-party gateway or vendor that speaks OpenAI format
- Expose an internal company LLM behind Emby’s routing, logging, and controls
- Test new providers without changing your app code
All a custom provider needs is:
- A name for the provider
- A base URL
- An API key / token
How Custom Providers Work
Once configured, you call models using IDs like `mycompany/gpt-4.1-mini`, `lab/deepseek-r1`, or `internal/qwen-3.5-coder`. For each call, Emby will:
- Forward the request to your custom provider
- Keep the OpenAI-style request/response shape
- Apply the same usage logging & observability as built-in models
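The `provider/model` naming above can be sketched in shell. This assumes Emby splits the ID on the first slash, using the left part to pick the provider and forwarding the rest as the upstream model ID; the exact parsing is an assumption, not confirmed by this page.

```shell
# Hypothetical sketch: split a "provider/model" ID the way the routing
# described above implies (split on the first "/"; assumed behavior).
model_id="mycompany/gpt-4.1-mini"

provider="${model_id%%/*}"       # "mycompany"    -> which custom provider to use
upstream_model="${model_id#*/}"  # "gpt-4.1-mini" -> model ID sent to your backend

echo "provider=$provider model=$upstream_model"
```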
Quick Setup
1. Add a Custom Provider in the Dashboard
1. Open Providers in Emby
   Go to your Emby dashboard → Settings → Providers.
2. Create Custom Provider
   Add a new Custom Provider with:
   - Name – a lowercase name, e.g. `mycompany` or `internal`
   - Base URL – your provider’s API root, e.g. `https://api.mycompany.ai`
   - API Key / Token – the key Emby should use when calling your provider
3. Save & Test
   Save the provider, then use the “Test connection” button (if available) or send a small request from your app to confirm everything works.
2. Call Your Custom Provider
Once the provider is created, you can call it like any other model. Behind the scenes, Emby handles:
- Setting `Authorization: Bearer <your-custom-provider-key>` when calling your backend
- Forwarding the OpenAI-style payload to `https://api.mycompany.ai/v1/chat/completions`
- Normalizing the response back to the Emby/OpenAI format

Your application code stays the same; only the `model` value changes.
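A minimal cURL sketch of such a call. The endpoint and `$EMBY_API_KEY` variable come from the Unified Endpoint and Centralized Auth sections below; the model ID `mycompany/deepseek-r1` is illustrative.

```shell
# Call a custom-provider model through Emby's unified endpoint.
# "mycompany/deepseek-r1" is an illustrative model ID; substitute your own.
curl https://api.emby.dev/v1/chat/completions \
  -H "Authorization: Bearer $EMBY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mycompany/deepseek-r1",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```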
Configuration Requirements
Provider Name
- Lowercase letters only: `a-z`
- Must match: `/^[a-z]+$/`
- Examples: `mycompany`, `internal`, `lab`
- Invalid: `MyCompany`, `my-company`, `my_company`, `123test`

Base URL
- Must be a valid HTTPS URL
- Should point to the root of your OpenAI-compatible API
- Emby will append `/v1/chat/completions` automatically if needed
- Example: `https://api.mycompany.ai`
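Both rules above can be sketched locally: a name check against `/^[a-z]+$/`, and URL normalization that appends the path only when it is missing. The normalization logic is an assumption about Emby's behavior, not a documented implementation.

```shell
# Validate a provider name against /^[a-z]+$/ (lowercase letters only).
is_valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z]+$'
}

# Build the full chat-completions URL, appending /v1/chat/completions
# only if the base URL does not already end with it (assumed behavior).
full_url() {
  base="${1%/}"   # strip a single trailing slash
  case "$base" in
    */v1/chat/completions) printf '%s' "$base" ;;
    *)                     printf '%s/v1/chat/completions' "$base" ;;
  esac
}

is_valid_name "mycompany"  && echo "mycompany: ok"
is_valid_name "my-company" || echo "my-company: invalid"
full_url "https://api.mycompany.ai"
```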
API Token
- Provider-specific API key or token
- Sent as `Authorization: Bearer <token>`
- Stored securely on Emby’s side
OpenAI Compatibility
Your provider must support the OpenAI-style `/v1/chat/completions` format (or equivalent), including `model`, `messages`, and optional parameters like `temperature`, `max_tokens`, etc.

Example: Self-Hosted vLLM / SGLang
If you’re running a local or cloud vLLM/SGLang server exposed at `https://llm.mycompany.ai`, configure:
- Name: `mycompany`
- Base URL: `https://llm.mycompany.ai`
- API Key: (optional, if your gateway uses keys)
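For instance, a vLLM server started like this exposes an OpenAI-compatible `/v1/chat/completions` route you can point Emby at. The model name and port are illustrative.

```shell
# Start an OpenAI-compatible vLLM server (model name and port are illustrative).
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --port 8000

# In another terminal, sanity-check it directly before wiring it into Emby:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
       "messages": [{"role": "user", "content": "hi"}]}'
```

In production you would expose this behind an HTTPS gateway (e.g. `https://llm.mycompany.ai`) and use that as the Base URL, since Emby requires HTTPS.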
What Features You Get via Emby
Custom providers still benefit from:

Unified Endpoint
One base URL for everything: `https://api.emby.dev/v1`. Works for Emby-hosted and custom providers.
Centralized Auth
Your apps only store one key: `$EMBY_API_KEY`. Per-provider secrets live inside Emby.
Logging & Analytics
See usage, latency, error rates, and model mix for custom providers right in the Emby dashboard.
Routing Rules
Combine Emby-hosted models with your custom ones in the same project, or gradually shift traffic as you evaluate performance.
Best Practices
Name things clearly
Use names like
lab, staging, or prod so it’s obvious which backend a model uses.Keep models discoverable
Mirror the provider’s own model IDs:
mycompany/deepseek-r1, mycompany/qwen-32b, etc.Monitor latency & errors
Custom providers might sit on slower networks or less-optimized hardware. Watch their metrics and adjust accordingly.
Compliance & data flow
Emby shows you where requests go. For custom providers, you control location, logs, and retention policies.
Troubleshooting
❓ Calls to my custom provider fail
- Check that the base URL is HTTPS and reachable from the internet (or your Emby region)
- Verify that the provider supports the OpenAI chat completions format
- Confirm the API key is valid and not expired
- Try a direct request to your provider outside Emby to confirm behavior
If that works but Emby fails, check your Custom Provider settings in the dashboard.
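A direct request like the one below bypasses Emby entirely, which isolates whether the problem is the provider or the Emby configuration. The hostname, path, and `$PROVIDER_KEY` variable are illustrative.

```shell
# Hit your provider directly, bypassing Emby, to isolate the failure.
# -i prints response headers so you can see the HTTP status code.
curl -sS -i https://api.mycompany.ai/v1/chat/completions \
  -H "Authorization: Bearer $PROVIDER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1",
       "messages": [{"role": "user", "content": "ping"}]}'
```

A 200 here combined with failures via Emby points at the Custom Provider settings rather than your backend.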
Need Help Wiring Up a Custom Provider?
If you’re connecting a private gateway, on-prem model, or custom EU deployment and want to get it right:
📞 Book a call: https://cal.com/absolum/30min
💬 WhatsApp us: https://wa.absolum.nl

