Multimodal large language models (MLLMs) have attracted widespread interest and support a wide range of applications. However, the attention mechanism inherent in their Transformer architecture has quadratic computational complexity in sequence length, resulting in expensive computational overhead. In this work, we therefore propose VL-Mamba, a multimodal large language model based on state space models, which have shown great potential for long-sequence modeling with fast inference and linear scaling in sequence length.
Specifically, we first replace the Transformer-based backbone language model, such as LLaMA or Vicuna, with the pre-trained Mamba language model. We then empirically explore how to effectively apply the 2D vision selective scan mechanism to multimodal learning, as well as combinations of different vision encoders and variants of pre-trained Mamba language models. Extensive experiments on diverse multimodal benchmarks show that VL-Mamba achieves competitive performance, demonstrating the effectiveness of our approach and the great potential of applying state space models to multimodal learning tasks.
VL-Mamba is the first work to explore the state space model Mamba for multimodal learning tasks. VL-Mamba consists of a language model, a vision encoder, and a multimodal connector. Specifically, we utilize the pre-trained Mamba large language model (Mamba LLM) as the language model. We then study three architectures of the MultiModal Connector (MMC) and introduce a Vision Selective Scan (VSS) module in the MMC to bridge the gap between 2D non-causal image information and the inherent causal modeling capabilities of state space models (SSMs). In the VSS module, we propose two 2D scan mechanisms: the Bidirectional-Scan Mechanism (BSM) and the Cross-Scan Mechanism (CSM).
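As a rough illustration of how these three components fit together, the PyTorch sketch below wires a vision encoder, a multimodal connector, and a Mamba LLM into a single forward pass. The module and argument names (VLMambaSketch, vision_encoder, connector, mamba_llm) are placeholders for illustration, not the released implementation; in practice the vision tower and the Mamba LLM are pre-trained.

import torch
import torch.nn as nn


class VLMambaSketch(nn.Module):
    """Minimal sketch: vision encoder -> multimodal connector -> Mamba LLM."""

    def __init__(self, vision_encoder: nn.Module, connector: nn.Module, mamba_llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder   # pre-trained vision tower (e.g. a ViT)
        self.connector = connector             # multimodal connector (MMC) with a VSS module
        self.mamba_llm = mamba_llm             # pre-trained Mamba language model

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # 1) Encode images into a sequence of patch features: (B, N, D_vision).
        patch_feats = self.vision_encoder(images)
        # 2) Map visual features into the LLM embedding space: (B, N, D_llm).
        visual_embeds = self.connector(patch_feats)
        # 3) Prepend visual tokens to the text embeddings and run the Mamba LLM.
        inputs = torch.cat([visual_embeds, text_embeds], dim=1)
        return self.mamba_llm(inputs)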
State space models are designed to process 1D sequential data with causal relationships, such as language sequences, whereas the visual sequences generated by the vision encoder are non-causal; 2D vision selective scan mechanisms have therefore been proposed for computer vision tasks. In this work, we apply these 2D vision selective scan mechanisms to multimodal learning by integrating them into the multimodal connector of VL-Mamba. Specifically, we explore three variants of the multimodal connector; one plausible form is sketched below.
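The sketch below pairs a VSS block with an MLP projection into the LLM embedding space. The class name (MMCVariant), the two-layer MLP, and the exact ordering of components are illustrative assumptions; the three connector variants studied in the paper differ in which of these pieces they include and how they are arranged.

import torch
import torch.nn as nn


class MMCVariant(nn.Module):
    """One hypothetical multimodal connector: VSS block followed by an MLP."""

    def __init__(self, vss_block: nn.Module, d_vision: int, d_llm: int):
        super().__init__()
        self.vss = vss_block                   # handles the 2D non-causal structure
        self.proj = nn.Sequential(             # projects features into the LLM embedding space
            nn.Linear(d_vision, d_llm),
            nn.GELU(),
            nn.Linear(d_llm, d_llm),
        )

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (B, N, d_vision) sequence of image patch features.
        return self.proj(self.vss(patch_feats))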
The VSS module aims to bridge the gap between the 1D sequential processing inherent in SSMs and 2D non-causal visual information. Specifically, the VSS module consists of a 2D vision scan mechanism and one Mamba layer. In this work, we utilize two 2D scan mechanisms: the Bidirectional-Scan Mechanism (BSM) and the Cross-Scan Mechanism (CSM), sketched below.
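The following is a minimal sketch of the two scan orderings, assuming the patch features lie on an H x W grid: BSM scans the row-major flattened sequence forward and backward, while CSM (in the spirit of cross-scan designs for vision SSMs) additionally builds column-major orderings, yielding four directional sequences. Function names and the final merging step are illustrative assumptions; each directional sequence would be processed by a Mamba layer and the outputs re-ordered back to the original layout and merged.

import torch


def bidirectional_scan(x: torch.Tensor) -> torch.Tensor:
    # x: (B, H*W, D) row-major flattened patch features.
    forward = x
    backward = torch.flip(x, dims=[1])
    return torch.stack([forward, backward], dim=1)           # (B, 2, H*W, D)


def cross_scan(x: torch.Tensor, H: int, W: int) -> torch.Tensor:
    # x: (B, H*W, D); build row-major and column-major orderings,
    # each traversed in both directions.
    B, N, D = x.shape
    row_major = x                                            # left-to-right, top-to-bottom
    col_major = x.reshape(B, H, W, D).transpose(1, 2).reshape(B, N, D)
    scans = [row_major, torch.flip(row_major, dims=[1]),
             col_major, torch.flip(col_major, dims=[1])]
    return torch.stack(scans, dim=1)                         # (B, 4, H*W, D)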
@article{qiao2024vlmamba,
  title={VL-Mamba: Exploring State Space Models for Multimodal Learning},
  author={Qiao, Yanyuan and Yu, Zheng and Guo, Longteng and Chen, Sihan and Zhao, Zijia and Sun, Mingzhen and Wu, Qi and Liu, Jing},
  journal={arXiv preprint arXiv:2403.13600},
  year={2024}
}
This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.