mllm(2)
[Dataset Review] JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
Luo, W., Ma, S., Liu, X., Guo, X., & Xiao, C. (2024). JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks. https://arxiv.org/abs/2404.03027
With the rapid advancements in Multimodal Large Language Models (MLLMs), securing..
2025.01.06
[Paper Review] Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey
Liu, X., Cui, X., Li, P., Li, Z., Huang, H., Xia, S., Zhang, M., Zou, Y., & He, R. (2024). Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey. https://arxiv.org/abs/2411.09259
The rapid evolution of multimodal foundation models has led to significant advancements in cross-modal understanding a..
2024.12.31