Paper Review (25)
[Dataset Review] JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
Luo, W., Ma, S., Liu, X., Guo, X., & Xiao, C. (2024). JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks. https://arxiv.org/abs/2404.03027
With the rapid advancements in Multimodal Large Language Models (MLLMs), securing..
2025.01.06
[Paper Review] Searching for Best Practices in Retrieval-Augmented Generation
Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, and Xuanjing Huang. 2024. Searching for best practices in retrieval-augmented generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17716-17736, Miami, Florida, USA. Association for Computational Linguistics.
2025.01.01
[Paper Review] Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey
Liu, X., Cui, X., Li, P., Li, Z., Huang, H., Xia, S., Zhang, M., Zou, Y., & He, R. (2024). Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey. https://arxiv.org/abs/2411.09259
The rapid evolution of multimodal foundation models has led to significant advancements in cross-modal understanding a..
2024.12.31
[Paper Review] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks
Zhao, Y., Zheng, X., Luo, L., Li, Y., Ma, X., & Jiang, Y. (2024). BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks. ArXiv, abs/2410.20971. https://arxiv.org/abs/2410.20971
Despite their superb multimodal capabilities, Vision-Language Models (VLMs) have been shown to be v..
2024.12.30
[Paper Review] White-box Multimodal Jailbreaks Against Large Vision-Language Models
Wang, R., Ma, X., Zhou, H., Ji, C., Ye, G., & Jiang, Y. (2024). White-box Multimodal Jailbreaks Against Large Vision-Language Models. ACM Multimedia. https://arxiv.org/abs/2405.17894
Recent advancements in Large Vision-Language Models (VLMs) have underscored their superiority in various multimodal tasks. However, the adversarial ..
2024.12.27
[Paper Review] Visual Adversarial Examples Jailbreak Aligned Large Language Models
Qi, X., Huang, K., Panda, A., Henderson, P., Wang, M., & Mittal, P. (2024). Visual Adversarial Examples Jailbreak Aligned Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21527-21536. https://doi.org/10.1609/aaai.v38i19.30150 https://arxiv.org/abs/2306.13213
Recently, there has been a ..
2024.12.26