OpenAI model provider
Using the OpenAI API service is the simplest way to get set up. This service offers a wide range of configurations to track cost, throttle usage, and manage access, depending on your needs. If you’re in the testing stage, we strongly recommend starting with OpenAI.
The OpenAI API has attained SOC 2 Type 2 compliance (see the official announcement).
To get started, you’ll need to create an API key by following the OpenAI API key guide.
Next, use the API key you’ve created to set the `OPENAI_API_KEY` environment variable in your Docker Compose file, or on the command line when using Docker directly. See the configuration options guide for more information about environment variables:
```yaml
services:
  ai-assistant:
    environment:
      - OPENAI_API_KEY=your-openai-api-key
    ...
```
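If you run the container with Docker directly rather than with Compose, you can pass the same variable with the `-e` flag. A minimal sketch, where the image name `ai-assistant` is a placeholder for your actual image:

```shell
# Pass the API key as an environment variable at container start.
# "ai-assistant" is a placeholder image name, not the real one.
docker run -e OPENAI_API_KEY=your-openai-api-key ai-assistant
```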
Service configuration file
You may also customize the OpenAI models used in AI Assistant by creating a service configuration file, as explained in the model-provider configuration guide.
Our current suggestion for the OpenAI chat model is `gpt-4o-mini`, along with an embedding model of `text-embedding-3-small`:
```yaml
version: '1'
aiServices:
  chat:
    provider:
      name: 'openai'
      apiKey: 'your-openai-api-key' # Optional
      model: 'gpt-4o-mini'
  textEmbeddings:
    provider:
      name: 'openai'
      apiKey: 'your-openai-api-key' # Optional
      model: 'text-embedding-3-small'
```
- `provider`:
  - `name`: The name of the provider. Set this to `openai`.
  - `apiKey`: The API key for the OpenAI service. You can retrieve your keys once you’ve created an instance. See the OpenAI API key guide for more information.
  - `model`: The name of the model you want to use. For example, `gpt-4o-mini` for the chat service, or `text-embedding-3-small` for the embedding service.
Pricing
As you’re providing your own OpenAI API key, you’ll be subject to all the costs related to using the OpenAI or Azure OpenAI service. We’ve created an interactive LLM spend calculator to help you estimate your monthly costs.
What determines your monthly cost?
AI Assistant usage is priced in three main categories, outlined below.
Document ingestion
Each new document added to AI Assistant goes through an ingestion process, which enables search, summarization, and Q&A. Documents are only ingested once, even if multiple users access them later.
Calculator inputs:
- Number of new documents ingested per month
- Average document size (for example, small invoices vs. large manuals)
User interactions
Costs are also based on how users interact with documents — whether through summarization, Q&A, or deep research conversations.
Calculator inputs:
- Number of active users
- Documents viewed by each user per month
- Level of engagement:
  - None — No chat usage
  - Low — Occasional summaries or simple queries
  - Medium — Regular in-document conversations
  - High — Frequent follow-ups and deeper context
  - Deep — Analytical, multipart sessions
Redaction (if enabled in your license)
If you’ve licensed the Redaction component, AI Assistant can help redact sensitive content from documents using LLMs.
Calculator inputs:
- Number of documents redacted per month
Redaction can be used on its own or in combination with other AI Assistant capabilities.
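The calculator handles this for you, but the arithmetic behind the three categories can be sketched roughly. Every per-token price and token count below is an illustrative placeholder, not an actual OpenAI rate, and the function is an assumption about how such an estimate could be structured:

```python
# Rough monthly cost sketch for the three pricing categories.
# All rates and token figures are illustrative assumptions, not
# actual OpenAI prices -- use the LLM spend calculator for real numbers.

# Assumed per-token prices (USD per token) -- placeholders.
INPUT_PRICE = 0.15 / 1_000_000
OUTPUT_PRICE = 0.60 / 1_000_000

# Assumed token footprints -- placeholders.
TOKENS_PER_DOC_INGEST = 8_000        # ingestion work per new document
TOKENS_PER_USER = {                  # (input, output) tokens per user per month
    "none": (0, 0),
    "low": (20_000, 5_000),
    "medium": (80_000, 20_000),
    "high": (200_000, 50_000),
    "deep": (500_000, 120_000),
}
TOKENS_PER_REDACTION = 12_000

def monthly_cost(docs_ingested, active_users, engagement, docs_redacted=0):
    """Estimate monthly LLM spend (USD) from the calculator inputs."""
    ingestion = docs_ingested * TOKENS_PER_DOC_INGEST * INPUT_PRICE
    in_tok, out_tok = TOKENS_PER_USER[engagement]
    interaction = active_users * (in_tok * INPUT_PRICE + out_tok * OUTPUT_PRICE)
    redaction = docs_redacted * TOKENS_PER_REDACTION * INPUT_PRICE
    return round(ingestion + interaction + redaction, 2)

print(monthly_cost(docs_ingested=1_000, active_users=25, engagement="medium"))
```

Plugging in real model prices and your own usage numbers is exactly what the calculator automates.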
Why use the LLM spend calculator?
Rather than estimate these costs manually, we highly recommend using our calculator to:
- Accurately factor in current OpenAI pricing
- Automatically apply the correct token estimates
- Get a detailed monthly cost breakdown
- Model different usage scenarios
Try it today and see what your usage might cost!