# AI configuration

By default, AI Assistant operates using models from OpenAI. To connect AI Assistant to OpenAI, you’ll need to pass your OpenAI API key as an environment variable to the AI Assistant service.

If you have security concerns about sending data to OpenAI, or if you want to keep your data in a specific region, you may want to configure a different model provider, such as Azure, AWS Bedrock, or a locally served model. For help deciding between these options, refer to our guide on [selecting the right LLM hosting strategy](https://www.nutrient.io/guides/ai-assistant/service-configuration/selecting-right-llm-hosting-strategy.md).

This guide introduces the configuration file you can use to customize the models and providers AI Assistant uses.

## Mounting the configuration file

AI Assistant supports two ways to provide `service-config.yml`: a mounted file, or a Base64-encoded environment variable.

### Option 1: Use a mounted file

AI Assistant looks for `service-config.yml` at the root of the AI Assistant directory. You can mount the file in Docker Compose:

```yaml
ai-assistant:
  volumes:
    - /path-to-service-config-yaml-on-host:/service-config.yml
```

If your infrastructure doesn’t permit mounting files at the root directory of the container, mount the file in a different directory and set `CONFIG_DIR`. For more details on `CONFIG_DIR` and other environment variables, refer to the [Docker configuration](https://www.nutrient.io/guides/ai-assistant/service-configuration/docker-configuration.md) guide.
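
For example, a sketch of mounting the file into a `/config` directory instead (the directory name is illustrative; `CONFIG_DIR` must match wherever you mount the file):

```yaml
ai-assistant:
  volumes:
    - /path-to-service-config-yaml-on-host:/config/service-config.yml
  environment:
    - CONFIG_DIR=/config
```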

### Option 2: Use a Base64 environment variable

You can provide the same configuration using `MODEL_CONFIG`. This variable expects the full `service-config.yml` content encoded as Base64:

```bash
export MODEL_CONFIG=$(base64 -i service-config.yml)
```

Then reference the encoded value in Docker Compose:

```yaml
ai-assistant:
  environment:
    - MODEL_CONFIG=dmVyc2lvbjogIjIiCnByb3ZpZGVyczoKICA...
```
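
To sanity-check the encoding before wiring it into Docker Compose, you can decode the variable and compare it against the original file. This is a sketch using GNU coreutils `base64` with an illustrative config; on macOS, decoding may require `-D` or `--decode` instead of `-d`:

```shell
# Create a minimal service-config.yml (contents are illustrative only).
cat > service-config.yml <<'EOF'
version: "2"
models:
  - model: "openai:gpt-4o-mini"
    labels: ["default-llm"]
EOF

# Encode the file, then decode it again and diff against the original.
MODEL_CONFIG=$(base64 < service-config.yml)
echo "$MODEL_CONFIG" | base64 -d | diff - service-config.yml \
  && echo "MODEL_CONFIG round-trips cleanly"
```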

### Configuration precedence

1. `MODEL_CONFIG` environment variable (highest priority)

2. `service-config.yml` in `CONFIG_DIR`

3. Default configuration (fallback)

## Service configuration file

AI Assistant uses version 2 of the model configuration schema:

```yaml
version: "2"

providers:
  - name: "azure"
    instanceName: "ci-testing"

models:
  - model: "azure:gpt4o-mini"
    labels: ["default-llm"]
  - model: "azure:text-embedding-3-small"
    labels: ["default-embedding"]
```

### Version

Set `version` to `"2"` to use the current model configuration schema.

### Providers (optional)

`providers` defines explicit provider configuration.

Use this when your provider requires extra settings (for example, Azure `instanceName`, Bedrock `region`, or OpenAI-compatible `baseUrl`).

For simple OpenAI and Anthropic setups, you can rely on environment variables and omit this block.
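
As a sketch, a `providers` block covering the three cases named above might look like the following. The `instanceName`, `region`, and `baseUrl` values are placeholders, and whether your deployment needs all three entries depends on which providers you actually use; refer to each provider guide for the authoritative fields:

```yaml
providers:
  - name: "azure"
    instanceName: "my-azure-instance"    # your Azure OpenAI resource name
  - name: "bedrock"
    region: "eu-central-1"               # AWS region hosting your Bedrock models
  - name: "openai-compat"
    baseUrl: "http://localhost:11434/v1" # any OpenAI API-compatible endpoint
```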

You can define one or more of the following providers:

- [`openai`](https://openai.com/) ([guide](https://www.nutrient.io/guides/ai-assistant/service-configuration/model-providers/openai-provider.md)) — Choose this for a wide range of capable models at competitive prices.

- [`anthropic`](https://www.anthropic.com/) ([guide](https://www.nutrient.io/guides/ai-assistant/service-configuration/model-providers/anthropic-provider.md)) — Use Claude models with advanced reasoning capabilities. Note: Anthropic doesn’t provide embeddings, so define another model with the `default-embedding` label from a different provider.

- [`azure`](https://azure.microsoft.com/en-us/products/ai-foundry/models/#Models) ([guide](https://www.nutrient.io/guides/ai-assistant/service-configuration/model-providers/azure-provider.md)) — Use OpenAI models in the Azure environment with more region and data security features.

- [`bedrock`](https://aws.amazon.com/bedrock/) ([guide](https://www.nutrient.io/guides/ai-assistant/service-configuration/model-providers/bedrock-provider.md)) — Choose from a range of open and closed models while keeping your data in the AWS ecosystem.

- `openai-compat` ([guide](https://www.nutrient.io/guides/ai-assistant/service-configuration/model-providers/openai-api-compatible-providers.md)) — Use any OpenAI API-compatible inference framework, such as [vLLM](https://docs.vllm.ai/en/latest/), [Hugging Face TGI](https://huggingface.co/docs/text-generation-inference/en/index), or [Ollama](https://ollama.com/).

See each provider guide for configuration options, key requirements, and model suggestions.
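
As an example of a fully local setup, here's a hedged sketch using the `openai-compat` provider, assuming an Ollama server on its default port that exposes the model names shown (both names are illustrative; see the OpenAI API-compatible providers guide for the exact configuration fields):

```yaml
version: "2"

providers:
  - name: "openai-compat"
    baseUrl: "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

models:
  - model: "openai-compat:llama3.1"
    labels: ["default-llm"]
  - model: "openai-compat:nomic-embed-text"
    labels: ["default-embedding"]
```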

### Models (required)

`models` defines available models in `{provider}:{model}` format.

Each model requires one or more labels:

- `default-llm` — default model for LLM inference

- `default-embedding` — default model for embeddings

Embedding model selection impacts how documents are indexed. Changing embedding models for existing data requires reprocessing documents.
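
For example, because Anthropic offers no embedding models, a mixed configuration might pair a Claude model for inference with an OpenAI model for embeddings. This is a sketch with illustrative model names; check each provider guide for currently supported models:

```yaml
models:
  - model: "anthropic:claude-3-5-sonnet-latest"
    labels: ["default-llm"]
  - model: "openai:text-embedding-3-small"
    labels: ["default-embedding"]
```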

Migrating from model configuration version 1 (`aiServices`) to version 2? Refer to the [AI Assistant 2.1.0 migration guide](https://www.nutrient.io/guides/ai-assistant/migration-guides/ai-assistant-2-1-0-migration-guide.md).

---

## Related pages

- [Docker configuration](/guides/ai-assistant/service-configuration/docker-configuration.md)
- [Selecting the right LLM hosting strategy](/guides/ai-assistant/service-configuration/selecting-right-llm-hosting-strategy.md)

