Investigating fine-tuning of large language models for text summarisation

Khaliq, Usama and Patel, Preeti (2024) Investigating fine-tuning of large language models for text summarisation. In: 12th International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA-2024), 6-7 June 2024, London Metropolitan University, London (UK) / Online. (In Press)

Abstract

This research aimed to investigate whether fine-tuning medium-sized Large Language Models (LLMs) for text summarisation in different fields (news, scientific papers, and governmental reports) would improve the quality of the summaries generated in each field. Performance was evaluated quantitatively using ROUGE and BLEU scores and qualitatively through human analysis and judgement. Fine-tuning led to only minimal performance improvements over the base, non-fine-tuned models. The generated summaries were generally accurate, although, surprisingly, one model introduced false information. In contrast, larger models, such as GPT-4 with its reported 1.7 trillion parameters, generated near-perfect summaries without having been fine-tuned on a specific dataset. The limited performance increase from the fine-tuned models was likely due to the small training datasets and the limited capacity of medium-sized LLMs. Deploying and fine-tuning these models is computationally expensive, requiring careful planning and optimisation. Overall, this project provided insights into the fine-tuning, capabilities, and limitations of transformer models that can guide future work adapting these powerful models to real-world applications.
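As a concrete illustration of the kind of fine-tuning setup described above, the following Python sketch fine-tunes a medium-sized sequence-to-sequence model for summarisation with the Hugging Face Transformers library. The model (t5-base), dataset (cnn_dailymail), and hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

# A minimal sketch of fine-tuning a medium-sized seq2seq model for
# summarisation. Model, dataset, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "t5-base"  # assumed "medium-sized" model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("cnn_dailymail", "3.0.0")  # assumed news corpus

def preprocess(batch):
    # Tokenise articles as inputs and reference summaries as labels.
    inputs = tokenizer(["summarize: " + d for d in batch["article"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["highlights"],
                       max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="ft-summariser",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=3e-5,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

The quantitative evaluation can likewise be sketched with the rouge_score and sacrebleu packages. The paper does not specify which implementations were used, so the packages and the toy candidate/reference pair below are assumptions.

# Scoring generated summaries against references with ROUGE and BLEU.
import sacrebleu
from rouge_score import rouge_scorer

candidates = ["The government report outlines new climate targets."]
references = ["The report sets out the government's climate targets."]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
for cand, ref in zip(candidates, references):
    scores = scorer.score(ref, cand)
    for name, s in scores.items():
        print(f"{name}: F1 = {s.fmeasure:.3f}")

# Corpus-level BLEU: sacrebleu takes a list of hypothesis strings and
# a list of reference streams.
bleu = sacrebleu.corpus_bleu(candidates, [references])
print(f"BLEU: {bleu.score:.2f}")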
