[Paper Review] RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
Gupta, A., Shirgaonkar, A., Balaguer, A. D. L., Silva, B., Holstein, D., Li, D., ... & Benara, V. (2024). RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture. arXiv preprint arXiv:2401.08406. https://arxiv.org/abs/2401.08406
2024. 10. 16.
[Paper Review] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-tuning of Large Language Models
Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Lee. 2023. LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5254–5276, Singapore. Association for Computational Linguistics. https://arxiv.org/abs/2304...
2024. 10. 14.
[Paper Review] Open Ko-LLM Leaderboard: Evaluating Large Language Models in Korean with Ko-H5 Benchmark
Chanjun Park, Hyeonwoo Kim, Dahyun Kim, SeongHwan Cho, Sanghoon Kim, Sukyung Lee, Yungi Kim, and Hwalsuk Lee. 2024. Open Ko-LLM Leaderboard: Evaluating Large Language Models in Korean with Ko-H5 Benchmark. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3220–3234, Bangkok, Thailand. Association for Computational Linguistics.
2024. 10. 7.