Text clustering with large language model embeddings

Bibliographic record details
Title: Text clustering with large language model embeddings
Authors: Alina Petukhova, João P. Matos-Carvalho, Nuno Fachada
Source: International Journal of Cognitive Computing in Engineering, Vol 6, Iss, Pp 100-108 (2025)
Publication Status: Preprint
Publisher details: Elsevier BV, 2025.
Publication year: 2025
Subject terms: I.7.m, FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, Computer Science - Artificial Intelligence, I.2.6, Science, I.2.7, QA75.5-76.95, Text summarisation, Machine Learning (cs.LG), Text clustering, Artificial Intelligence (cs.AI), Electronic computers. Computer science, Large language models, 68T50 (Primary), 62H30 (Secondary), Computation and Language (cs.CL)
Description: Text clustering is an important method for organising the increasing volume of digital content, aiding in the structuring and discovery of hidden patterns in uncategorised data. The effectiveness of text clustering largely depends on the selection of textual embeddings and clustering algorithms. This study argues that recent advancements in large language models (LLMs) have the potential to enhance this task. The research investigates how different textual embeddings, particularly those utilised in LLMs, and various clustering algorithms influence the clustering of text datasets. A series of experiments were conducted to evaluate the impact of embeddings on clustering results, the role of dimensionality reduction through summarisation, and the adjustment of model size. The findings indicate that LLM embeddings are superior at capturing subtleties in structured language. OpenAI's GPT-3.5 Turbo model yields better results in three out of five clustering metrics across most tested datasets. Most LLM embeddings show improvements in cluster purity and provide a more informative silhouette score, reflecting a refined structural understanding of text data compared to traditional methods. Among the more lightweight models, BERT demonstrates leading performance. Additionally, it was observed that increasing model dimensionality and employing summarisation techniques do not consistently enhance clustering efficiency, suggesting that these strategies require careful consideration for practical application. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by integrating embeddings from LLMs, offering improved methodologies and suggesting new avenues for future research in various types of textual analysis.
The peer-reviewed version of this paper is published in the International Journal of Cognitive Computing in Engineering at https://doi.org/10.1016/j.ijcce.2024.11.004. This version is typeset by the authors and differs only in pagination and typographical detail.
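The pipeline the abstract describes (embed the texts, cluster the embedding vectors, evaluate with metrics such as the silhouette score) can be sketched as follows. This is a minimal illustration, not the authors' implementation: TF-IDF vectors stand in for the LLM embeddings (the study used, e.g., OpenAI and BERT embeddings) so the sketch runs without model downloads, and the sample texts and cluster count are made up for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy corpus (hypothetical); in the study, real datasets are embedded instead.
texts = [
    "stock markets rallied on strong quarterly earnings news",
    "the central bank raised interest rates to curb inflation",
    "the striker scored a dramatic late winning goal",
    "the home team clinched the championship title on Sunday",
]

# Step 1: embed the texts. TF-IDF is a stand-in here; any embedding function
# (BERT, OpenAI embeddings, etc.) that maps text to vectors can be swapped in.
X = TfidfVectorizer().fit_transform(texts)

# Step 2: cluster the embedding vectors (k-means is one common choice).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 3: evaluate the clustering, e.g. with the silhouette score
# mentioned in the abstract (higher means better-separated clusters).
score = silhouette_score(X, km.labels_)
print(km.labels_, round(score, 3))
```

Replacing the vectoriser with an LLM embedding model is the only change needed to reproduce the general shape of the comparison the study performs across embeddings and clustering algorithms.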
Document type: Article
Language: English
ISSN: 2666-3074
DOI: 10.1016/j.ijcce.2024.11.004
DOI: 10.48550/arxiv.2403.15112
Access link: http://arxiv.org/abs/2403.15112
https://doaj.org/article/0d07ddd5e301430f8c147693d73f9264
Rights: CC BY
Accession number: edsair.doi.dedup.....3c9f9ef1f2dec362a6f288b59d25dce5
Database: OpenAIRE