StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data

Yanda Li1, Chi Zhang2, Gang Yu2, Zhibin Wang2, Bin Fu2, Guosheng Lin3, Chunhua Shen4, Ling Chen1, Yunchao Wei5
1University of Technology Sydney, 2Tencent, 3Nanyang Technological University, 4Zhejiang University, 5Beijing Jiaotong University

Abstract

The remarkable multimodal capabilities demonstrated by OpenAI's GPT-4 have sparked significant interest in the development of multimodal Large Language Models (LLMs). A primary research objective of such models is to align visual and textual modalities effectively while comprehending human instructions. Current methodologies often rely on annotations from benchmark datasets to construct image-dialogue datasets for training, akin to instruction tuning in LLMs. However, these datasets often exhibit domain bias, potentially constraining the generative capabilities of the models. To mitigate these limitations, we propose a novel data collection methodology that synchronously synthesizes images and dialogues for visual instruction tuning. This approach harnesses the power of generative models, combining the abilities of ChatGPT and text-to-image generative models to yield a diverse and controllable dataset with varied image content. This not only provides greater flexibility than existing methodologies but also significantly enhances several model capabilities. We conduct comprehensive experiments on various datasets using the open-source LLaVA model as a testbed for the proposed pipeline. Our results show marked improvements across more than ten commonly assessed capabilities.
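For concreteness, the sketch below shows one way a single synthesis step could be wired together in Python, assuming the openai (>=1.0) and diffusers packages; the model names, prompt template, and output handling are illustrative assumptions rather than the exact configuration used in the paper.

# Illustrative sketch of one synthesis step: ChatGPT proposes an image prompt and a
# dialogue, Stable Diffusion renders the image, and the pair is saved for training.
# Assumes an OpenAI API key is configured and that ChatGPT returns valid JSON here.
import json

import torch
from diffusers import StableDiffusionPipeline
from openai import OpenAI

client = OpenAI()
request = (
    "Invent a scene for a text-to-image model. Reply with JSON containing "
    "'sd_prompt' (a Stable Diffusion prompt) and 'dialogue' (a list of "
    "{'question': ..., 'answer': ...} pairs about the image content)."
)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": request}],
)
sample = json.loads(response.choices[0].message.content)

# Render the image from the generated prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(sample["sd_prompt"]).images[0]
image.save("synthetic_0001.png")

# Store the image-dialogue pair for visual instruction tuning.
with open("synthetic_0001.json", "w") as f:
    json.dump({"image": "synthetic_0001.png", "dialogue": sample["dialogue"]}, f, indent=2)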

Examples of synthesized visual instruction data. We instruct ChatGPT to generate prompts for the text-to-image generative model, StableDiffusion, together with a dialogue about the image content. The resulting image-dialogue pairs are used to train multimodal large language models.
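As an illustration, a single synthesized training record might be stored in a LLaVA-style conversation format along the following lines (the field layout follows the public LLaVA data convention; the content shown is invented):

# Schematic image-dialogue training record (content invented for illustration).
example = {
    "id": "synthetic_0001",
    "image": "synthetic_0001.png",  # rendered by Stable Diffusion from the ChatGPT prompt
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the dog doing in this picture?"},
        {"from": "gpt", "value": "The dog is riding a surfboard on a breaking wave."},
        {"from": "human", "value": "Is anyone else visible in the scene?"},
        {"from": "gpt", "value": "No, the dog is alone on the water."},
    ],
}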

Results on testing benchmarks for twelve abilities

Left: AI-generated benchmarks. Right: Real-image benchmarks. Percentages indicate the proportion of correct answers for each ability.


Qualitative Results

Demonstrations of our method's effectiveness across diverse real-world image scenarios.


Comparison of the results generated by LLaVA and our trained model.

Content in red marks inaccurate information. Our model adheres more closely to the question instructions, producing more precise answers.


BibTeX

@article{li2023stablellava,
  title={StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data},
  author={Li, Yanda and Zhang, Chi and Yu, Gang and Wang, Zhibin and Fu, Bin and Lin, Guosheng and Shen, Chunhua and Chen, Ling and Wei, Yunchao},
  journal={arXiv preprint arXiv:2308.10253},
  year={2023}
}