---
title: README
emoji: 👀
colorFrom: green
colorTo: yellow
sdk: static
pinned: false
---

Multimodal Art Projection (M-A-P) is an open-source research community. Community members work on Artificial Intelligence-Generated Content (AIGC) topics spanning text, audio, and vision modalities. We train large language/music/multimodal models (LLMs/LMMs), collect data, and develop fun applications.

We welcome you to join us!

Organization page: https://m-a-p.ai

The development log of our Multimodal Art Projection (M-A-P) model family:

  • πŸ”₯11/04/2024: MuPT paper and demo are out. HF collection.
  • πŸ”₯08/04/2024: Chinese Tiny LLM is out. HF collection.
  • πŸ”₯28/02/2024: The release of ChatMusician's demo, code, model, data, and benchmark. πŸ˜†
  • πŸ”₯23/02/2024: The release of OpenCodeInterpreter, beats GPT-4 code interpreter on HumanEval.
  • 23/01/2024: we release CMMMU for better Chinese LMMs' Evaluation.
  • 13/01/2024: we release a series of Music Pretrained Transformer (MuPT) checkpoints, with size up to 1.3B and 8192 context length. Our models are LLAMA2-based, pre-trained on world's largest 10B tokens symbolic music dataset (ABC notation format). We currently support Megatron-LM format and will release huggingface checkpoints soon.
  • 02/06/2023: officially release the MERT pre-print paper and training codes.
  • 17/03/2023: we release two advanced music understanding models, MERT-v1-95M and MERT-v1-330M , trained with new paradigm and dataset. They outperform the previous models and can better generalize to more tasks.
  • 14/03/2023: we retrained the MERT-v0 model with open-source-only music dataset MERT-v0-public
  • 29/12/2022: a music understanding model MERT-v0 trained with MLM paradigm, which performs better at downstream tasks.
  • 29/10/2022: a pre-trained MIR model music2vec trained with BYOL paradigm.
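For the MERT models above, here is a minimal sketch of feature extraction. It assumes the checkpoint loads through `transformers`' `AutoModel` with `trust_remote_code=True`, as documented on the model cards; the dummy audio and the layer-stacking step are illustrative, not the official example.

```python
# Minimal sketch (assumptions noted above): extracting music
# representations with MERT-v1-95M.
import numpy as np
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

model = AutoModel.from_pretrained("m-a-p/MERT-v1-95M", trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(
    "m-a-p/MERT-v1-95M", trust_remote_code=True
)

# One second of silent dummy audio at the extractor's expected rate.
sr = processor.sampling_rate
audio = np.zeros(sr, dtype=np.float32)

inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Stack every transformer layer: [num_layers, time, hidden_size].
# Different layers tend to suit different downstream MIR tasks.
all_layers = torch.stack(outputs.hidden_states).squeeze(1)
print(all_layers.shape)
```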