Generating image descriptions using local AI
Use local AI image description when you need privacy, cost control, or offline operation.
Common use cases include:
- On-premises accessibility workflows
- Medical or regulated environments with local-only data handling
- Secure networks where external APIs are restricted
- High-volume processing without per-image API fees
- Offline-capable applications
This guide uses a local OpenAI-compatible VLM endpoint with the Nutrient Vision API.
How Nutrient helps
Nutrient Python SDK handles local VLM integration, endpoint configuration, and request/response processing.
The SDK handles:
- OpenAI-compatible endpoint formatting and communication
- Image encoding and multimodal payload construction
- Model parameters such as temperature and max tokens
- Local server and model runtime failure handling
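To illustrate what "multimodal payload construction" means in practice, here is a minimal sketch of the kind of OpenAI-compatible request body the SDK builds internally. The function name `build_vision_request` and its parameters are hypothetical; the message shape follows the standard OpenAI chat completions format for image input:

```python
import base64

def build_vision_request(image_bytes, prompt, model="qwen/qwen3-vl-4b",
                         temperature=0.0, max_tokens=256):
    """Sketch of an OpenAI-compatible multimodal chat completions payload."""
    # Images are sent inline as base64-encoded data URLs.
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    }
```

The SDK performs this encoding and construction for you; the sketch is only meant to show what travels to the local server.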
Complete implementation
This example generates image descriptions using a local AI endpoint:
```python
from nutrient_sdk import Document, Vision

with Document.open("input_photo.png") as document:
    # Optional: Configure the VLM API endpoint settings
    vlm_settings = document.settings.custom_vlm_api_settings
    vlm_settings.api_endpoint = "http://localhost:1234/v1"
    vlm_settings.model = "qwen/qwen3-vl-4b"

    vision = Vision.set(document)
    description = vision.describe()

    print("Image description:")
    print(description)
```

Opening the image file and configuring the local server
Open the image in a context manager and configure local endpoint settings if needed.
In this sample:
- The SDK supports PNG, JPEG, GIF, BMP, and TIFF.
- The default endpoint is `http://localhost:1234/v1`.
- The default model is `qwen/qwen3-vl-4b`.
- `custom_vlm_api_settings` overrides these defaults.
```python
with Document.open("input_photo.png") as document:
    # Optional: Configure the VLM API endpoint settings
    # These settings are customizable based on your VLM provider
    vlm_settings = document.settings.custom_vlm_api_settings
    vlm_settings.api_endpoint = "http://localhost:1234/v1"
    vlm_settings.model = "qwen/qwen3-vl-4b"
```

Creating a vision instance
Create a vision instance with Vision.set(document).
Before calling vision methods, ensure:
- The local server is running.
- A vision-capable model is loaded.
- The configured endpoint is reachable.
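If you want to verify the checklist programmatically before calling vision methods, a standard-library probe of the server's `/models` route works with most OpenAI-compatible servers. The helper name is hypothetical, not part of the SDK:

```python
import urllib.request
import urllib.error

def endpoint_is_reachable(base_url, timeout=2.0):
    """Return True if an OpenAI-compatible server answers GET <base_url>/models."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Server not running, wrong port, or network error.
        return False
```

Calling this before `Vision.set(document)` lets you fail fast with a clear message instead of waiting for a request timeout.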
```python
vision = Vision.set(document)
```

Generating the description
Call vision.describe() to generate a natural language description.
The SDK handles image encoding, request construction, and response parsing:
```python
description = vision.describe()
```

Outputting the description
Print the description for review, or store it in your application.
Common destinations include:
- Database fields
- JSON output files
- HTML `alt` attributes
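For the HTML destination, the description should be escaped before it lands in an `alt` attribute. A minimal sketch using only the standard library (the function name is illustrative):

```python
from html import escape

def img_tag_with_alt(src, description):
    """Embed a generated description as a safely escaped alt attribute."""
    return (
        f'<img src="{escape(src, quote=True)}" '
        f'alt="{escape(description, quote=True)}">'
    )
```

Escaping with `quote=True` ensures quotes inside the model's output can't break out of the attribute.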
```python
print("Image description:")
print(description)
```

Understanding the output
describe() returns natural language text generated by the local model.
Descriptions are typically:
- Concise — Focused on key subjects and details, often one to three sentences
- Accessible — Suitable for users who rely on screen readers
- Accurate — Based on visible content only
- Model-dependent — Quality varies by model architecture and size
Use this output for accessibility metadata, image search, and document workflows, all with local processing.
Configuring the VLM API endpoint
Vision API uses OpenAI-compatible endpoints. Configure custom_vlm_api_settings if your setup differs from defaults:
- `api_endpoint` — Base URL for the OpenAI-compatible API (default: `http://localhost:1234/v1`)
- `api_key` — API key for secured deployments (optional on many local servers)
- `model` — Model identifier (default: `qwen/qwen3-vl-4b`)
- `temperature` — Creativity control (`0.0` deterministic, `1.0` varied phrasing)
- `max_tokens` — Response token limit (`-1` for unlimited)
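Putting the parameters together, a fragment that overrides every setting might look like the following. The endpoint, key, and values shown are examples, not requirements; `document` is the open document from the earlier steps:

```python
# Example overrides; adjust for your own server.
vlm_settings = document.settings.custom_vlm_api_settings
vlm_settings.api_endpoint = "http://localhost:11434/v1"  # e.g. an Ollama server
vlm_settings.api_key = "local-secret"                    # only if your server enforces auth
vlm_settings.model = "qwen/qwen3-vl-4b"
vlm_settings.temperature = 0.0                           # deterministic descriptions
vlm_settings.max_tokens = 256                            # cap response length
```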
Local server setup examples:
- LM Studio — Start the server with a vision model loaded; default endpoint `http://localhost:1234/v1`
- Ollama — Run `ollama serve` with a vision model; default endpoint `http://localhost:11434/v1`
- vLLM — Launch with the `--api-key` flag for authentication and a custom port configuration
The SDK handles request formatting and response parsing for local endpoints.
Error handling
The SDK raises NutrientException when vision operations fail.
Common failure scenarios include:
- The input image can’t be read due to path, permission, or format issues
- The local VLM server isn’t running or reachable
- No compatible vision model is loaded
- Responses time out on large images or heavy models
- Available CPU/GPU memory is insufficient
- Endpoint or authentication settings are invalid
In production code:
- Catch `NutrientException`.
- Return a clear error message.
- Log failure details for debugging.
- Add retry logic for transient local server issues.
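The retry advice can be sketched as a small backoff wrapper. To keep the sketch runnable standalone it retries on generic built-in exceptions; in real code you would list `NutrientException` (or a narrower subset of transient errors) in `retriable`:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5,
                 retriable=(ConnectionError, TimeoutError)):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == attempts:
                raise  # Out of attempts: surface the error to the caller.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Usage would be `description = with_retries(vision.describe)`, which absorbs brief outages while a local server starts up without hiding persistent failures.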
Conclusion
Use this workflow to generate image descriptions with local AI:
- Open the image file using a context manager for automatic resource cleanup.
- The SDK supports multiple image formats, including PNG, JPEG, GIF, BMP, and TIFF.
- Vision API uses OpenAI-compatible endpoints for local VLM servers by default.
- Default configuration connects to `http://localhost:1234/v1` with model `qwen/qwen3-vl-4b`.
- Supported local VLM servers include LM Studio, Ollama, vLLM, and custom inference servers.
- Optionally configure custom server settings through `document.settings.custom_vlm_api_settings` property assignments.
- Create a vision instance with `Vision.set()` bound to the document for local AI processing.
- Generate the description with `vision.describe()`, which sends the image to the local server endpoint and returns natural language text.
- The SDK encodes image data, constructs OpenAI-compatible multimodal requests, and parses responses automatically.
- Generated descriptions are concise (1–3 sentences), accessible (WCAG-compliant alt text), accurate (observable details only), and model-dependent.
- Description quality varies by model size — larger models (7B+) produce more nuanced descriptions than smaller variants (4B).
- Print or save the description for use in accessibility systems, content management, or cataloging workflows.
- Handle `NutrientException` for vision processing failures, including server unavailable, model not loaded, or timeout errors.
- The context manager ensures proper resource cleanup when processing completes or exceptions occur.
For related image workflows, refer to the Python SDK guides.
Download this ready-to-use sample package to explore local AI image description.