VL-Mamba: Exploring State Space Models for Multimodal Learning

Yanyuan Qiao1, Zheng Yu1, Longteng Guo2, Sihan Chen2,3, Zijia Zhao2,3, Mingzhen Sun2,3, Qi Wu1*, Jing Liu2,3
1Australian Institute for Machine Learning, The University of Adelaide, 2Institute of Automation, Chinese Academy of Sciences, 3School of Artificial Intelligence, University of Chinese Academy of Sciences.

Abstract

Multimodal large language models (MLLMs) have attracted widespread interest and have rich applications. However, the attention mechanism inherent in their Transformer architecture scales quadratically with sequence length, resulting in expensive computational overhead. Therefore, in this work, we propose VL-Mamba, a multimodal large language model based on state space models, which have shown great potential for long-sequence modeling with fast inference and linear scaling in sequence length.

Specifically, we first replace the Transformer-based backbone language model, such as LLaMA or Vicuna, with the pre-trained Mamba language model. Then, we empirically explore how to effectively apply the 2D vision selective scan mechanism to multimodal learning, as well as combinations of different vision encoders and variants of pre-trained Mamba language models. Extensive experiments on diverse multimodal benchmarks, where VL-Mamba achieves competitive performance, demonstrate the effectiveness of our model and the great potential of applying state space models to multimodal learning tasks.

VL-Mamba

VL-Mamba is the first work to explore the state space model Mamba for multimodal learning tasks. VL-Mamba consists of a language model, a vision encoder, and a multimodal connector. Specifically, we utilize the pre-trained Mamba Large Language Model (Mamba LLM) as the language model. We then study three architectures for the MultiModal Connector (MMC) and introduce a Vision Selective Scan (VSS) module in the MMC to bridge the gap between 2D non-causal image information and the inherent causal modeling of state space models (SSMs). Within the VSS module, we propose two 2D scan mechanisms: the Bidirectional-Scan Mechanism (BSM) and the Cross-Scan Mechanism (CSM).
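To make the data flow concrete, below is a minimal, hypothetical sketch of this three-part design in PyTorch. The class and module names (VisionEncoder-style inputs, MultiModalConnector, MambaLLM) are placeholders of this sketch, not the authors' released implementation.

import torch
import torch.nn as nn

class VLMambaSketch(nn.Module):
    """Illustrative composition: vision encoder -> multimodal connector -> Mamba LLM."""
    def __init__(self, vision_encoder: nn.Module, connector: nn.Module, mamba_llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g., a ViT-style patch encoder
        self.connector = connector            # MMC containing the Vision Selective Scan module
        self.mamba_llm = mamba_llm            # pre-trained Mamba language model

    def forward(self, image: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # 1. Encode the image into a sequence of patch features: (B, N_patches, D_vis)
        patch_feats = self.vision_encoder(image)
        # 2. Map patch features into the LLM embedding space via the MMC: (B, N_patches, D_llm)
        vis_tokens = self.connector(patch_feats)
        # 3. Prepend the visual tokens to the text embeddings and decode with the Mamba LLM
        inputs = torch.cat([vis_tokens, text_embeds], dim=1)
        return self.mamba_llm(inputs)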

MultiModal Connector (MMC)

State space models are designed to process 1D sequential data, such as language sequences, that have causal relationships; the visual sequences produced by the vision encoder, however, are non-causal, and 2D vision selective scan mechanisms have been proposed to address this gap in computer vision tasks. In this work, we apply 2D vision selective scan mechanisms to multimodal learning by incorporating them into the multimodal connector of VL-Mamba. Specifically, we explore three variants of the multimodal connector (a sketch follows the list below):

  • MLP: a two-layer Multi-Layer Perceptron (MLP).
  • VSS-MLP: a 2D Vision Selective Scan (VSS) module combined with an MLP.
  • VSS-L2: a VSS module combined with two linear layers.
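The sketch below illustrates how the three variants might be assembled, assuming a VisionSelectiveScan-like module (see the next section) and hidden sizes d_vis (vision encoder) and d_llm (Mamba LLM). The exact placement of the linear layers is an assumption of this sketch, not the released configuration.

import torch.nn as nn

def mlp_connector(d_vis: int, d_llm: int) -> nn.Module:
    # Variant 1 (MLP): a plain two-layer MLP projection.
    return nn.Sequential(nn.Linear(d_vis, d_llm), nn.GELU(), nn.Linear(d_llm, d_llm))

def vss_mlp_connector(vss: nn.Module, d_vis: int, d_llm: int) -> nn.Module:
    # Variant 2 (VSS-MLP): a VSS module followed by a two-layer MLP.
    return nn.Sequential(vss, nn.Linear(d_vis, d_llm), nn.GELU(), nn.Linear(d_llm, d_llm))

def vss_l2_connector(vss: nn.Module, d_vis: int, d_llm: int) -> nn.Module:
    # Variant 3 (VSS-L2): a VSS module combined with two linear layers
    # (ordering here is illustrative).
    return nn.Sequential(vss, nn.Linear(d_vis, d_llm), nn.Linear(d_llm, d_llm))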

Vision Selective Scan (VSS)

The VSS module aims to bridge the gap between the 1D sequential processing inherent in the SSM and the 2D non-causal visual information. Specifically, the VSS module consists of a 2D vision scan mechanism followed by one Mamba layer. In this work, we utilize two 2D scan mechanisms, the Bidirectional-Scan Mechanism and the Cross-Scan Mechanism, described below (a sketch of the two scan orderings follows the list):

  • Bidirectional-Scan Mechanism (BSM): scans the image patch features in both forward and backward directions, which aims to capture a broader context without increasing computational complexity.
  • Cross-Scan Mechanism (CSM): unfolds image patch features into sequences along rows and columns and scans them in four directions (diagonally across the image).
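Below is an illustrative sketch of the two scan orderings, operating on a grid of patch features of shape (B, H, W, D). The Mamba layer that consumes these sequences and the way the scan directions are fused back together are omitted; the specific set of scan paths shown is an assumption of this sketch.

import torch

def bidirectional_scan(x: torch.Tensor):
    # BSM: flatten the patch grid row-wise, then scan it forward and backward.
    B, H, W, D = x.shape
    forward = x.reshape(B, H * W, D)
    backward = torch.flip(forward, dims=[1])
    return forward, backward

def cross_scan(x: torch.Tensor):
    # CSM: unfold the grid along rows and along columns, and scan each
    # unfolding in both directions, giving four scan paths in total.
    B, H, W, D = x.shape
    row_wise = x.reshape(B, H * W, D)                      # row-major ordering
    col_wise = x.permute(0, 2, 1, 3).reshape(B, H * W, D)  # column-major ordering
    return (row_wise, torch.flip(row_wise, dims=[1]),
            col_wise, torch.flip(col_wise, dims=[1]))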

Examples of VL-Mamba chat

BibTeX

@article{qiao2024vlmamba,
  title={VL-Mamba: Exploring State Space Models for Multimodal Learning},
  author={Qiao, Yanyuan and Yu, Zheng and Guo, Longteng and Chen, Sihan and Zhao, Zijia and Sun, Mingzhen and Wu, Qi and Liu, Jing},
  journal={arXiv preprint arXiv:2403.13600},
  year={2024}
}

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.