
Paper Review / Large Language Model (LLM) — 13 posts

[Paper Review] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-tuning of Large Language Models Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Lee. 2023. LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5254–5276, Singapore. Association for Computational Linguistics. https://arxiv.org/abs/2304... 2024. 10. 14.
[Paper Review] Open Ko-LLM Leaderboard: Evaluating Large Language Models in Korean with Ko-H5 Benchmark Chanjun Park, Hyeonwoo Kim, Dahyun Kim, SeongHwan Cho, Sanghoon Kim, Sukyung Lee, Yungi Kim, and Hwalsuk Lee. 2024. Open Ko-LLM Leaderboard: Evaluating Large Language Models in Korean with Ko-H5 Benchmark. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3220–3234, Bangkok, Thailand. Association for Computational Linguistics. 2024. 10. 7.
[Paper Review] What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization, International Conference on Machine Learning, PMLR (Proceedings of Machine Learning Research), 2022. https://arxiv.org/abs/2204.05832 What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? Large pretrained Transformer language models have been shown to exhi.. 2024. 9. 30.
[Paper Review] AI models collapse when trained on recursively generated data Shumailov, I., Shumaylov, Z., Zhao, Y. et al. AI models collapse when trained on recursively generated data. Nature 631, 755–759 (2024). https://doi.org/10.1038/s41586-024-07566-y This article was published in Nature in July 2024. It stands in contrast to most recent LLM papers, which focus on improving model performance or efficiency; instead, it was a refreshing read because it examines the model collapse phenomenon that can occur when data produced by generative AI is fed back into LLM training. Abstract: Stable Diffusion is an imag.. 2024. 9. 27.
[Paper Review] Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO) Direct Preference Optimization: Your Language Model is Secretly a Reward Model, Advances in Neural Information Processing Systems (NeurIPS '23). Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., & Finn, C. https://arxiv.org/abs/2305.18290 Direct Preference Optimization: Your Language Model is Secretly a Reward Model While large-scale unsupervised language models (LMs) learn broad .. 2024. 9. 26.
[Paper Review] LIMA: Less Is More for Alignment LIMA: Less Is More for Alignment, In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS '23). Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. https://arxiv.org/abs/2305.11206 LIMA: Less Is More for Alignment Large lang.. 2024. 9. 23.