Fine-tuning large language models (LLMs) on domain-specific text corpora has become a crucial step in improving their performance on technical tasks. This article investigates fine-tuning approaches for LLMs applied to technical text, analyzing the impact of variables such as dataset size, model architecture, and optimization strategy.