LLMTreeRec: Unleashing the Power of Large Language Models for Cold-Start Recommendations

  • Wenlin Zhang
  • Chuhan Wu
  • Xiangyang Li
  • Yuhao Wang
  • Kuicai Dong
  • Yichao Wang
  • Xinyi Dai
  • Xiangyu Zhao*
  • Huifeng Guo
  • Ruiming Tang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The lack of training data gives rise to the system cold-start problem in recommendation systems, making it difficult for them to provide effective recommendations. To address this problem, Large Language Models (LLMs) can frame recommendation tasks as language analysis tasks and provide zero-shot results based on their vast open-world knowledge. However, the large scale of the item corpus poses a challenge to LLMs, leading to substantial token consumption that makes deployment impractical in real-world recommendation systems. To tackle this challenge, we introduce a tree-based LLM recommendation framework, LLMTreeRec, which structures all items into an item tree to improve the efficiency of the LLM's item retrieval. LLMTreeRec achieves state-of-the-art performance under the system cold-start setting on two widely used datasets, and is even competitive with conventional deep recommendation systems trained on substantial data. Furthermore, LLMTreeRec outperforms the baseline model in an A/B test on a Huawei industrial system. Consequently, LLMTreeRec demonstrates its effectiveness as an industry-friendly solution and has been successfully deployed online. Our code is available at: https://github.com/Applied-Machine-Learning-Lab/LLMTreeRec.
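The core efficiency idea in the abstract — descending an item tree instead of showing the LLM the entire corpus — can be sketched as follows. This is a hypothetical illustration under assumptions, not the paper's implementation: `choose_branches` stands in for an LLM prompt that ranks a node's children for a user, and the category tree and item names are invented for the example.

```python
# Sketch of tree-based item retrieval: rather than scoring the whole corpus,
# descend a category tree, asking the (mocked) LLM to keep only the most
# relevant branches at each level. Token cost then scales with tree depth
# times branching factor instead of with corpus size.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)  # sub-categories
    items: list = field(default_factory=list)     # leaf-level items

def choose_branches(user_profile, candidates, k):
    """Stand-in for an LLM call that ranks candidate names for a user.
    Here: trivially keep candidates sharing a keyword with the profile."""
    hits = [c for c in candidates if any(w in c.lower() for w in user_profile)]
    return (hits or candidates)[:k]

def tree_retrieve(root, user_profile, k=2):
    """Descend the item tree, keeping at most k branches per level."""
    frontier, results = [root], []
    while frontier:
        node = frontier.pop()
        results.extend(choose_branches(user_profile, node.items, k))
        names = choose_branches(user_profile, [c.name for c in node.children], k)
        frontier.extend(c for c in node.children if c.name in names)
    return results

# Toy corpus: two categories, a few items each.
root = Node("root", children=[
    Node("Sports", items=["soccer ball", "tennis racket"]),
    Node("Electronics", items=["noise-cancelling headphones", "smartwatch"]),
])
print(tree_retrieve(root, user_profile={"electronics", "headphones"}))
```

With a real LLM, `choose_branches` would be a prompt listing only one node's children, so each call stays small regardless of how many items the full corpus contains.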

Original language: English
Title of host publication: Main Conference
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Publisher: Association for Computational Linguistics (ACL)
Pages: 886-896
Number of pages: 11
ISBN (Electronic): 9798891761964
State: Published - 2025
Externally published: Yes
Event: 31st International Conference on Computational Linguistics, COLING 2025 - Abu Dhabi, United Arab Emirates
Duration: 19 Jan 2025 – 24 Jan 2025

Publication series

Name: Proceedings - International Conference on Computational Linguistics, COLING
ISSN (Print): 2951-2093

Conference

Conference: 31st International Conference on Computational Linguistics, COLING 2025
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 19/01/25 – 24/01/25
