
Model Providers

The Model Providers page allows administrators to configure and manage various AI model providers. This guide will walk you through the setup process and explain the available options.

Configuring Model Providers

Obot supports a variety of model providers, including:

Community

  • OpenAI
  • Anthropic
  • xAI
  • Ollama
  • Groq
  • vLLM
  • DeepSeek
  • Google

Enterprise

The UI indicates whether each provider has been configured. Once a provider is configured, you can modify or remove its configuration.

note

Our Enterprise release adds support for additional Enterprise-grade model providers. See here for more details.

Configuring and enabling a provider

To configure a provider:

  1. Click its "Configure" button
  2. Enter the required information, such as API keys or endpoints
  3. Save the configuration to apply the settings

When you save, the platform validates the configuration to ensure it can connect to the model provider. You can configure multiple model providers, which lets you pick the right provider and model for each use case.
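If validation fails, it can help to verify the credentials outside Obot first. A minimal sketch using OpenAI's public model-list endpoint (the `OPENAI_API_KEY` variable is illustrative, not something Obot sets; other providers expose similar list endpoints):

```shell
# Illustrative pre-check: list models with your key before saving it in Obot.
ENDPOINT="https://api.openai.com/v1/models"
if [ -n "${OPENAI_API_KEY:-}" ]; then
  # A 200 response containing a model list means the key is valid.
  curl -s "$ENDPOINT" -H "Authorization: Bearer $OPENAI_API_KEY"
else
  echo "Set OPENAI_API_KEY to run this check"
fi
```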

Viewing and managing models

Once a provider is configured, you can view and manage the models it offers. You can set the usage type for each model, which determines how the models are utilized within the application:

| Usage Type | Description | Application |
| --- | --- | --- |
| Language Model | Used to drive text generation and tool calls | Used in agents and tasks; can be set as an agent's primary model |
| Text Embedding | Converts text into numerical vectors | Used in the knowledge tool for RAG functionality |
| Image Generation | Creates images from textual descriptions | Used by image generation tools |
| Vision | Analyzes and processes visual data | Used by the image vision tool |
| Other | Default if no specific usage is selected | Available for all purposes |

You can also activate or deactivate specific models, controlling their availability to users.

Setting Default Models

The "Set Default Models" feature allows you to configure default models for various tasks. Choose default models for the following categories:

  • Language Model (Chat) - Primary conversational model
  • Language Model (Chat - Fast) - Optimized for quick responses
  • Text Embedding (Knowledge) - Used for knowledge base operations
  • Image Generation - For creating images
  • Vision - For image analysis and processing

These defaults determine which specific model is used when a Model Access Policy grants access to a default model alias (such as "Language Model (Chat)"). When you change a default here, any user with access to that alias automatically gains access to the new model.

After selecting the desired defaults, click "Save Changes" to confirm your configurations.

note

Setting a default model here does not automatically grant users access to it. Users must be included in a Model Access Policy that grants access to the corresponding alias. See Model Access Policies for details.

Instructions for configuring specific providers

Azure (Enterprise only)

Obot supports two Azure providers, each with a different authentication method. These are compatible with both Azure OpenAI deployments and Foundry deployments.

API Key Authentication

Use the Azure provider for API key-based authentication.

In the Azure portal, find your API key and endpoint URL after setting up at least one deployment — both are required.

You must also specify deployment names. The format is a comma-separated list of deployment names, optionally as model:deployment pairs (e.g. gpt-4o,gpt-4o-mini or gpt-4o:my-gpt4o,gpt-4o-mini:my-mini).

You can also optionally specify the API version (defaults to 2025-01-01-preview).
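As a sanity check before saving, you can reconstruct the per-deployment URL that Azure OpenAI serves and test it with your key. A sketch with placeholder values (`RESOURCE`, `DEPLOYMENT`, and `AZURE_API_KEY` are assumptions for illustration, not values Obot defines):

```shell
# Placeholder values -- substitute your own from the Azure portal.
RESOURCE="my-resource"            # Azure OpenAI resource name
DEPLOYMENT="my-gpt4o"             # one of the deployment names you configured
API_VERSION="2025-01-01-preview"  # the default API version noted above

# Azure OpenAI routes requests per deployment rather than per model.
URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"
echo "$URL"
# Uncomment to test with your key:
# curl -s "$URL" -H "api-key: $AZURE_API_KEY" -H 'Content-Type: application/json' \
#   -d '{"messages":[{"role":"user","content":"ping"}]}'
```

A valid key, endpoint, and deployment name should return a JSON chat completion rather than a 401 or 404.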

Microsoft Entra ID Authentication

Use the Azure (Entra ID) provider for service principal authentication via Microsoft Entra ID. Deployments are discovered automatically from the Azure Management API.

1. Create a service principal

     az ad sp create-for-rbac --name "<sp-name>" \
       --role "Cognitive Services OpenAI User" \
       --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<account-name>

This outputs the appId (Client ID), password (Client Secret), and tenant (Tenant ID) needed below.

2. Find your resource details

     az cognitiveservices account show \
       --name <account-name> \
       --resource-group <resource-group> \
       --query "{endpoint:properties.endpoint, id:id}"

3. Configure the provider

Obot requires:

  • Azure Endpoint — your Azure OpenAI endpoint URL (https://<resource_name>.openai.azure.com)
  • Client ID — the Entra app's application (client) ID
  • Client Secret — the Entra app's client secret
  • Tenant ID — the Entra app's tenant ID
  • Subscription ID — the Azure subscription ID containing the Cognitive Services account
  • Resource Group — the resource group containing the Cognitive Services account
  • Account Name — the Cognitive Services account name

You can also optionally specify the API version (defaults to 2025-01-01-preview).

The service principal requires at minimum the Cognitive Services OpenAI User or Cognitive Services User role on the account to read deployments. Deployments are discovered automatically — each deployment's base model name becomes the model ID exposed to Obot.
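The discovery request resembles the Azure Management API's deployment-listing call. A sketch with placeholder identifiers (the api-version value is an assumption; the provider may use a different one, and the call requires an ARM bearer token obtained via the service principal's client-credentials flow):

```shell
# Placeholder identifiers -- substitute your own from step 2 above.
SUB="00000000-0000-0000-0000-000000000000"   # subscription ID
RG="my-resource-group"                        # resource group
ACCOUNT="my-openai-account"                   # Cognitive Services account name

MGMT_URL="https://management.azure.com/subscriptions/${SUB}/resourceGroups/${RG}/providers/Microsoft.CognitiveServices/accounts/${ACCOUNT}/deployments?api-version=2023-05-01"
echo "$MGMT_URL"
# Requires an ARM token for the service principal:
# curl -s "$MGMT_URL" -H "Authorization: Bearer $ARM_TOKEN"
```

If the service principal lacks the roles listed above, this call returns a 403, which would also prevent Obot from discovering deployments.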

See the Microsoft docs for more details.

Ollama

Ollama allows you to run LLMs locally. Two configuration steps are required to use it with Obot:

  1. Expose Ollama to the network - By default, Ollama only binds to 127.0.0.1:11434. Since Obot runs in a container, localhost addresses resolve to Obot's container, not your host. Set OLLAMA_HOST=0.0.0.0 before starting Ollama, then use your host's IP address in the endpoint URL.

  2. Set the Ollama host - Point Obot at your host's IP address:

     http://<your-host-ip>:11434

     If you are running Obot in Docker, use http://host.docker.internal:11434 instead. On Linux, run Obot in Docker with the additional flag --add-host=host.docker.internal:host-gateway, or use another method of allowing the container to reach the host network.

See Ollama's FAQ for platform-specific instructions on setting OLLAMA_HOST.
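Once both steps are done, you can sanity-check connectivity from the machine or container running Obot. A minimal sketch; the endpoint value assumes the Docker `host.docker.internal` alias described above:

```shell
# Reachability check for Obot-in-Docker talking to Ollama on the host.
# host.docker.internal is a Docker Desktop alias; on Linux it requires the
# --add-host flag mentioned above.
OLLAMA_ENDPOINT="http://host.docker.internal:11434"
echo "Checking $OLLAMA_ENDPOINT"
# /api/tags lists locally pulled models; any JSON reply means Obot can reach Ollama.
command -v curl >/dev/null && curl -s --max-time 2 "$OLLAMA_ENDPOINT/api/tags" || echo "Ollama not reachable yet"
```

An empty or refused response usually means OLLAMA_HOST was not set to 0.0.0.0 before Ollama started.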