M²UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models
Abstract
Research leveraging large language models (LLMs) is surging. Many works harness the powerful reasoning capabilities of these models to comprehend modalities such as text, speech, images, and videos. Others use LLMs to understand human intention and to generate desired outputs such as images, videos, and music. However, research that combines understanding and generation with LLMs is still limited and in its nascent stage. To address this gap, we introduce the Multi-modal Music Understanding and Generation (M²UGen) framework, which integrates an LLM's abilities to comprehend and generate music across different modalities. The M²UGen framework is purpose-built to unlock creative potential from diverse sources of inspiration, encompassing music, images, and videos, through the use of pretrained MERT, ViT, and ViViT models, respectively. To enable music generation, we explore the use of AudioLDM 2 and MusicGen. Bridging multi-modal understanding and music generation is accomplished by integrating the LLaMA 2 model. Furthermore, we use the MU-LLaMA model to generate extensive datasets that support text-, image-, and video-to-music generation, facilitating the training of our M²UGen framework. We conduct a thorough evaluation of the proposed framework. The experimental results demonstrate that our model matches or surpasses the performance of current state-of-the-art models.
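The abstract describes a bridge architecture: frozen modality encoders (MERT for music, ViT for images, ViViT for video) feed adapter modules that project their features into LLaMA 2's embedding space, and the LLM's output in turn conditions a music decoder (AudioLDM 2 or MusicGen). Below is a minimal PyTorch sketch of that wiring. All dimensions, the adapter design, the mean-pooled conditioning vector, and the placeholder transformer standing in for LLaMA 2 are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

LLM_DIM = 4096  # LLaMA 2 hidden size


class ModalityAdapter(nn.Module):
    """Projects features from a frozen encoder (MERT / ViT / ViViT)
    into the LLM token-embedding space. The two-layer MLP here is an
    illustrative assumption, not the paper's exact adapter design."""

    def __init__(self, feat_dim: int, llm_dim: int = LLM_DIM):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, seq_len, feat_dim) -> (batch, seq_len, llm_dim)
        return self.proj(feats)


class M2UGenSketch(nn.Module):
    """Structural sketch: modality tokens are prepended to the text
    prompt, passed through the LLM, and pooled into a conditioning
    vector for a music decoder such as AudioLDM 2 or MusicGen."""

    def __init__(self, llm: nn.Module, cond_dim: int = 512):
        super().__init__()
        self.llm = llm  # stand-in for a frozen LLaMA 2 backbone
        self.music_adapter = ModalityAdapter(feat_dim=1024)  # MERT-style
        self.image_adapter = ModalityAdapter(feat_dim=768)   # ViT-style
        self.video_adapter = ModalityAdapter(feat_dim=768)   # ViViT-style
        self.output_proj = nn.Linear(LLM_DIM, cond_dim)

    def forward(self, text_emb, modality_feats, adapter):
        modal_tokens = adapter(modality_feats)
        tokens = torch.cat([modal_tokens, text_emb], dim=1)
        hidden = self.llm(tokens)
        # Mean-pool the LLM output into one music-conditioning vector
        # (a simplification; real systems may use learned query tokens).
        return self.output_proj(hidden.mean(dim=1))


# Toy usage with a one-layer transformer as the LLM placeholder.
llm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=LLM_DIM, nhead=8, batch_first=True),
    num_layers=1,
)
model = M2UGenSketch(llm)
image_feats = torch.randn(1, 197, 768)  # e.g. ViT patch features
text_emb = torch.randn(1, 16, LLM_DIM)  # prompt token embeddings
cond = model(text_emb, image_feats, model.image_adapter)
print(cond.shape)  # torch.Size([1, 512])
```

The same forward pass handles music or video inputs by swapping in `music_adapter` or `video_adapter`; only the per-modality projection changes, while the LLM bridge and decoder conditioning stay shared.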
Community
The following similar papers were recommended by the Semantic Scholar API:
- TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models (2023)
- AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model (2023)
- How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model (2023)
- Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding (2023)
- LLark: A Multimodal Foundation Model for Music (2023)