1.12 release notes

Before attempting to upgrade to Document Engine 1.12, first upgrade to Document Engine 1.11 if you haven’t already, and make sure your application still runs as expected.

We always verify that the release process is seamless when upgrading to the newest version from the last minor release. If you’re using an older version, make sure to upgrade one minor version at a time.

Example: If the latest version is 1.6.1 and you’re running 1.3.x, upgrade to 1.4.x first, then to 1.5.x, and only then to 1.6.1.

Refer to our upgrade guide for more information.

Highlights

GdPicture update

This release ships with GdPicture 14.3.17, replacing the previous version (14.3.12). The changelog highlights many bug fixes, as well as a major performance boost — the conversion engine runs up to three times faster.

Structured logging

You can set the LOG_STRUCTURED environment variable to enable structured logging. When enabled, logs are output in a rich JSON format.

Each log entry is formatted in JSON as follows:

{
  "time": "2025-10-05T12:34:56.789123Z",
  "level": "info",
  "message": "User logged in",
  "exception": {
    "kind": "error",
    "reason": "An error occurred",
    "stacktrace": [
      "Module:function/arity",
      "OtherModule:other_function/other_arity"
    ]
  },
  "location": {
    "file": "shared/example.ex",
    "line": 123,
    "mfa": "Example.function/arity",
    "pid": "<0.123.0>"
  },
  "meta": {
    "domain": ["elixir", "shared", "example"],
    "request_id": "abc123",
    "user_id": 42
  }
}
  • time is an ISO 8601 timestamp with microsecond precision
  • level is one of the standard t:Logger.level/0 levels
  • message is the main readable message, truncated to :max_value_size if necessary
  • location will always contain a t:location/0 map with available information about the log origin; these fields are fully populated when using the Logger macros
  • meta will contain all other metadata fields except those used in location and exception, including:
    • Any metadata given to the log event — Logger.error(event, key: value) will append key: value to the meta map
    • Any globally configured metadata in config.exs
    • Any per-process configured metadata with Logger.metadata/1
  • exception will be present only if the log event is a report containing exception information
  • extra will contain any additional fields from a report that aren’t part of the standard message
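Structured output is straightforward for log shippers to consume. As a quick illustration, a consumer can parse each JSON log line and pull out the fields described above (the log line here is a made-up sample following the format shown in the example):

```python
import json

# A hypothetical structured log line, matching the format documented above.
line = (
    '{"time": "2025-10-05T12:34:56.789123Z", "level": "info", '
    '"message": "User logged in", '
    '"location": {"file": "shared/example.ex", "line": 123}, '
    '"meta": {"request_id": "abc123", "user_id": 42}}'
)

entry = json.loads(line)
print(entry["level"])               # info
print(entry["meta"]["request_id"])  # abc123
```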

Markup rendering control in DOCX-to-PDF conversion

This release adds the ability to control how markup elements (comments, track changes, and redlines) are handled during DOCX-to-PDF conversion.

Supported values

  • noMarkup — (default) Render the document as if all the changes were accepted. Comments aren’t converted to comment annotations.
  • original — Render the document as if all the changes were rejected (as if no changes/redlines were made to the document). Comments aren’t converted to comment annotations.
  • simpleMarkup — Render the document as if all the changes were accepted. Comments are converted to comment annotations.
  • allMarkup — Render the document with all markups. Redlines (suggestions) show as redlines (strikethrough with a red line, red font for changes). Comments are converted to comment annotations.

Configuration

You can configure markup preservation in two ways:

  1. Globally — Set the DOCX_MARKUP_MODE environment variable.
  2. Per request — Use the markup mode parameter in Build API requests.

When both are specified, the API parameter takes precedence over the environment variable.
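The precedence rule can be sketched as a small helper. This is only an illustration of the resolution order; the function name is invented, while the DOCX_MARKUP_MODE variable, the supported values, and the noMarkup default come from the notes above:

```python
import os

SUPPORTED_MODES = {"noMarkup", "original", "simpleMarkup", "allMarkup"}

def resolve_markup_mode(request_mode=None):
    """Pick the effective markup mode: the per-request value wins,
    then the DOCX_MARKUP_MODE environment variable, then the default."""
    mode = request_mode or os.environ.get("DOCX_MARKUP_MODE") or "noMarkup"
    if mode not in SUPPORTED_MODES:
        raise ValueError(f"unsupported markup mode: {mode}")
    return mode

os.environ["DOCX_MARKUP_MODE"] = "original"
print(resolve_markup_mode())             # original (from the environment)
print(resolve_markup_mode("allMarkup"))  # allMarkup (request parameter wins)
```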

Bulk document deletion

You can now delete documents in bulk using the new asynchronous deletion endpoint. This is useful for cleaning up large numbers of documents efficiently.

New endpoint

Use the new POST /api/async/delete_documents endpoint to create an asynchronous job that deletes documents matching your specified criteria. You can then track the job’s progress with the existing GET /api/async/jobs/{job_id} endpoint.

This replaces the previous DELETE /api/documents endpoint, which is now deprecated.

Setup

This endpoint is disabled by default. To enable it, set the environment variable ENABLE_BULK_DOCUMENT_DELETION=true.

Supported filters

You can combine multiple filters to precisely target which documents to delete:

  • document_id_prefix — Delete all documents whose IDs start with the specified prefix
  • created_after — Delete documents created after the specified date
  • created_before — Delete documents created before the specified date
  • keep_documents — Specify document IDs to exclude from deletion (array of document IDs)

At least one filter must be provided, and filters can be combined for more precise control.
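A client might assemble and validate the request body like this. The filter names are the documented ones; the helper function itself is a sketch that mirrors the "at least one filter" rule:

```python
import json

FILTER_KEYS = {"document_id_prefix", "created_after",
               "created_before", "keep_documents"}

def build_delete_request(**filters):
    """Build the JSON body for POST /api/async/delete_documents."""
    if not filters:
        raise ValueError("at least one filter must be provided")
    unknown = set(filters) - FILTER_KEYS
    if unknown:
        raise ValueError(f"unsupported filters: {sorted(unknown)}")
    return json.dumps(filters)

body = build_delete_request(
    created_before="2024-01-01T00:00:00Z",
    keep_documents=["important_doc_1", "important_doc_2"],
)
print(body)
```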

Example usage

Delete all documents with IDs starting with "temp_":

curl -X POST "/api/async/delete_documents" \
  -H "Content-Type: application/json" \
  -d '{"document_id_prefix": "temp_"}'

Delete old documents while preserving specific ones:

curl -X POST "/api/async/delete_documents" \
  -H "Content-Type: application/json" \
  -d '{
    "created_before": "2024-01-01T00:00:00Z",
    "keep_documents": ["important_doc_1", "important_doc_2"]
  }'

The endpoint returns a job ID that you can use to monitor the deletion progress:

curl "/api/async/jobs/{job_id}"

Document listing improvements

The GET /api/documents endpoint now supports a document_id_prefix parameter for filtering documents by ID prefix. The document_id parameter is now deprecated in favor of this more flexible option.
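The prefix filter behaves like a plain string startswith match on document IDs. A sketch of the matching semantics, with made-up document IDs:

```python
# Hypothetical document IDs; document_id_prefix matches by leading characters.
ids = ["temp_001", "temp_002", "report_2024", "invoice_17"]
matched = [doc_id for doc_id in ids if doc_id.startswith("temp_")]
print(matched)
```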

Breaking changes

This release doesn’t include any breaking changes.

Deprecations

Bulk deletion endpoint

The document bulk deletion endpoint, DELETE /api/documents, is deprecated and will be removed in the next major release of Document Engine. Use the new POST /api/async/delete_documents endpoint instead.

StatsD metrics reporting

StatsD metrics reporting is now deprecated and will be removed in the next major release of Document Engine.

If you’re currently using StatsD for metrics collection via the STATSD_HOST and STATSD_PORT environment variables, migrate to the Prometheus metrics endpoint, which has been available since version 1.10.0.

Deprecated configuration

  • STATSD_HOST
  • STATSD_PORT
  • STATSD_CUSTOM_TAGS

Migration path

Use the Prometheus metrics endpoint at /metrics and configure your monitoring system to scrape metrics from this endpoint.

Database migrations

This release includes a database migration that upgrades Oban tables from v12 to v13. The migration is expected to run quickly.

Changelog

A full list of changes, along with the issue numbers, is available here.