From support tickets to pull requests: A support engineer’s journey into code contributions

This post covers how AI workflows helped me:
- Navigate a massive monorepo with confidence.
- Recreate complex customer environments in minutes.
- Move from filing bug reports to submitting pull requests with fixes.
My role as a Technical Support Engineer at Nutrient has always been about solving customer problems. I’m great at identifying issues, explaining workarounds, and writing detailed bug reports. But recently, my role evolved: I’m no longer just reporting bugs; I’m fixing them.
This isn’t a story about a career change. It’s about how AI-assisted development empowered me to contribute code, dive deep into our monorepo, and directly improve our product. It’s a journey from bug reporter to bug fixer, and it’s a path open to any support engineer.
Breaking the code barrier
Before AI, technical support was detective work with limited tools. We spent hours trying to understand our complex codebase, recreate customer environments, and determine if an issue was a bug or configuration problem. Our bug reports were detailed, but they were still just reports.
With AI, we provide richer context in bug reports, making it easier for engineers to address issues. But the real game changer is that we can now go a step further.
When we identify a genuine bug, AI helps us navigate our extensive monorepo. We can ask questions like “Show me the Document Editor page movement logic” or “Find the validation code for pageLabel values.” The AI instantly identifies relevant files and explains how components interact, helping us understand the code’s logical flow and potential failure points. This means we can quickly get up to speed without interrupting an engineer’s work.
Once I realized I could read and navigate the code with AI’s help, the next step became obvious.
Recreating customer environments in minutes
Reproducing customer issues used to be one of our most time-consuming tasks. A bug in Nutrient Web SDK could be specific to any number of frameworks or library versions, and manually creating a matching test environment took hours.
AI has revolutionized this. We now describe the customer’s environment to the AI, and it generates a complete, working reproduction case in minutes. This enables us to confirm bugs faster and provide the engineering team with a precise, isolated test case.
Analyzing Document Engine issues
Once recreating environments became faster, I was able to take on deeper, server-side challenges. For example, Nutrient Document Engine issues often involve complex server-side operations. With AI, we can generate the exact API requests needed to test customer scenarios — from document merging to conversion. This enables us to quickly identify whether an issue is a bug or a configuration problem, and to escalate confirmed bugs with detailed reproduction steps.
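To make that concrete, below is a minimal sketch of the kind of reproduction script we generate. The base URL, port, endpoint path, and auth token are illustrative assumptions, not guaranteed to match a real deployment; check the Document Engine API reference for the exact values.

```python
# Minimal sketch: replaying a customer's document upload against a local
# Document Engine instance. The URL, port, token, and endpoint below are
# assumptions for illustration; consult the Document Engine API reference.
import requests

DOCUMENT_ENGINE_URL = "http://localhost:5000"  # assumed local instance
API_AUTH_TOKEN = "secret"                      # assumed auth token

def upload_document(path: str) -> dict:
    """Upload a PDF the way the customer described and return the response."""
    with open(path, "rb") as f:
        response = requests.post(
            f"{DOCUMENT_ENGINE_URL}/api/documents",  # assumed endpoint
            headers={"Authorization": f"Token token={API_AUTH_TOKEN}"},
            files={"file": f},
        )
    response.raise_for_status()
    return response.json()

print(upload_document("customer-sample.pdf"))
```

Replaying the customer's exact sequence of calls like this makes it immediately clear whether the behavior is a bug or a configuration issue.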
The breakthrough: A shift in mindset
The moment everything changed was when I stopped thinking “I’ll file a bug report” and started thinking “I wonder if I can fix this myself.” This mental shift, enabled by AI-assisted code navigation, transformed me from a passive reporter into an active contributor.
This transformation was driven by the six key character traits that we at Nutrient most value: Low Ego, Speed of Learning, Curiosity to Understand, Self-Initiative, Ownership, and Focus on Results.
Accelerating documentation updates
Beyond bug fixes, we’re also improving documentation with AI. When we find a gap, we can now:
- Generate accurate technical explanations.
- Create comprehensive code examples.
- Maintain a consistent style and tone.
- Update documentation at scale.
This means customers get clearer, more accurate information, faster.
Privacy and security: The local advantage
A crucial aspect of our AI workflow is privacy. By using tools like Ollama and LM Studio, we run powerful language models directly on our own machines. This ensures customer data and proprietary code never leave our local environment. As a support engineer, that’s a game changer — it means I can handle even the most sensitive issues without compromise.
Running models locally
With a single command, you can run a powerful open source model on your machine. For code analysis, models like Qwen and DeepSeek are excellent choices.
```bash
# Run Qwen, a great general-purpose chat and coding model
ollama run qwen:7b-chat

# Run DeepSeek Coder, a model specifically fine-tuned for code
ollama run deepseek-coder:6.7b-instruct
```
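Once a model is running, you can also query it programmatically through Ollama’s local HTTP API, which listens on port 11434 by default. Here’s a minimal sketch in Python; the prompt and the code snippet inside it are just examples of the kind of code-comprehension question we ask.

```python
# Minimal sketch: asking a locally running model a code-comprehension
# question via Ollama's HTTP API (default port 11434).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-coder:6.7b-instruct",
        # Example prompt; in practice we paste in the real code under review.
        "prompt": "Explain what this function validates:\n"
                  "def set_page_label(page, label): ...",
        "stream": False,  # return one complete response instead of chunks
    },
)
print(response.json()["response"])
```

Because the model runs locally, it’s safe to paste proprietary code into the prompt; nothing leaves the machine.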
Many local models now support advanced features like tool calling and reasoning. Tool calling enables AI to interact with external systems — like searching codebases, running commands, or querying databases — rather than just generating text. Reasoning models, on the other hand, show their step-by-step thought process, making their logic transparent and more reliable for complex tasks.
For support engineers like myself, this is transformative. Instead of manually searching for files or functions, you can ask the model to execute searches, analyze dependencies, trace function calls, and even run tests, all while seeing exactly how the model reasoned through each step. This combination of transparent thinking and dynamic code interaction dramatically accelerates the understanding of complex software systems.
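As a concrete illustration, here’s a minimal sketch of a tool-calling request against Ollama’s chat endpoint, assuming a tool-capable model. The search_codebase tool is hypothetical, defined here only to show the shape of the exchange: the model replies with a structured tool call, and your own code decides how to execute it.

```python
# Minimal sketch of tool calling with Ollama's chat API.
# "search_codebase" is a hypothetical tool defined for illustration;
# the model returns a structured call, and your code runs the search.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "search_codebase",  # hypothetical tool
        "description": "Search the monorepo for a symbol or phrase.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search term"},
            },
            "required": ["query"],
        },
    },
}]

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen2.5:7b",  # assumed tool-capable model
        "messages": [{
            "role": "user",
            "content": "Where is the pageLabel validation implemented?",
        }],
        "tools": tools,
        "stream": False,
    },
)
# If the model decides to use a tool, the structured call shows up here.
print(response.json()["message"].get("tool_calls"))
```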
Additionally, this local approach means we can analyze even the most sensitive customer issues without privacy concerns, ensuring compliance with regulations like HIPAA.
A quick primer on local models
Running large language models (LLMs) locally is made possible by a few key technologies. Here’s the cheat sheet I wish I had when I first started running models locally:
- llama.cpp — A C++ library that enables efficient inference of LLMs on a variety of hardware, including CPUs. It’s the foundation for many local AI tools, enabling you to run powerful models without relying on a GPU.
- GGUF (GPT-Generated Unified Format) — A file format designed for llama.cpp that packages a model’s architecture, weights, and metadata into a single file. This makes it easy to distribute and load models, and it’s designed to be extensible for future developments in model architecture.
- Quantization — The process of reducing the precision of a model’s weights (the numbers that store its knowledge) to make it smaller and faster. For example, a model’s weights might be stored as 32-bit floating-point numbers, but quantization can reduce them to 4-bit integers. This significantly reduces the model’s size and the amount of RAM it requires, with minimal impact on output quality. This is what allows a massive model to run on a laptop (see the sizing sketch after this list).
- MLX — A machine learning framework from Apple that’s specifically designed for Apple silicon. It enables you to run and fine-tune models on Macs with high efficiency, taking full advantage of the unified memory architecture of Apple’s M-series chips. If you’re a Mac user, MLX is a great option for running models locally.
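As for the sizing sketch promised above: the arithmetic behind quantization fits in a few lines. The figures below count only the weights (real usage adds memory for activations, context, and runtime buffers), but they show the order of magnitude.

```python
# Rough sizing sketch: approximate weight memory for a 7B-parameter model
# at different precisions. Real usage adds overhead for activations,
# KV cache, and runtime buffers, so treat these as lower bounds.
PARAMS = 7_000_000_000  # a 7B model

for name, bits in [("fp32", 32), ("fp16", 16), ("int4", 4)]:
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{name}: ~{gigabytes:.1f} GB")

# fp32: ~28.0 GB, fp16: ~14.0 GB, int4: ~3.5 GB
# The 4-bit version fits comfortably in a laptop's RAM.
```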
What to keep in mind when running local models
- Hardware is key — The more VRAM (on a GPU) and RAM you have, the larger and more capable models you can run. For Mac users, the unified memory of Apple silicon is a significant advantage.
- Model size matters — Models are measured in billions of parameters (e.g. 7B, 13B). Larger models are generally more capable but require more resources. Quantization is key to running larger models on less powerful hardware.
- Use trusted sources — Always download models from reputable sources, like Hugging Face, to avoid security risks.
Getting started with AI workflows
To implement AI workflows in your support process, you’ll need:
- Access to a development environment.
- Local AI tools like Ollama or LM Studio.
- Team training on AI-assisted debugging.
- Clear privacy policies for handling customer data.
- Collaboration frameworks for pull request reviews.
In our experience, the investment pays off quickly, with noticeable improvements in issue resolution speed within the first month.
Conclusion
This journey is about more than learning new technical skills. It’s about embodying the traits that drive innovation — low ego, curiosity, initiative, and more. Combined with AI-assisted workflows, these traits can transform any support engineer into an active contributor to product improvement.
For me, AI didn’t just make my job easier — it made it bigger. It turned support work into product work. And if it can do that for me, it can do the same for any support engineer ready to take the leap.
Learn more about Nutrient’s AI-powered solutions and discover how our document processing technology can enhance your development and support processes.