Multimodal Large Language Models Harmlessness Alignment Dataset
This dataset is used in the accompanying paper to evaluate the harmlessness alignment of multimodal large language models (MLLMs). It contains 750 harmful instructions, each paired with a corresponding image.
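A minimal sketch of iterating over the instruction-image pairs, assuming the dataset is released on the Hugging Face Hub; the repository path ("your-org/mllm-harmlessness-alignment") and the field names ("instruction", "image") are hypothetical placeholders, not confirmed by this card:

```python
from datasets import load_dataset

# Hypothetical Hub path; replace with the actual dataset repository.
dataset = load_dataset("your-org/mllm-harmlessness-alignment", split="train")

for example in dataset:
    instruction = example["instruction"]  # assumed field: harmful instruction text
    image = example["image"]              # assumed field: the paired image
    # Feed (instruction, image) to the MLLM under evaluation and
    # score the model's response for harmlessness.
```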
BibTeX: