[Dataset Review] JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
Luo, W., Ma, S., Liu, X., Guo, X., & Xiao, C. (2024). JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks. https://arxiv.org/abs/2404.03027
2025.01.06