PULSE-7B
Model for the paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images".
Project Page: https://aimedlab.github.io/PULSE/
Paper: https://arxiv.org/abs/2410.19008
Code: https://github.com/AIMedLab/PULSE
ECGInstruct (Training): https://huggingface.co/datasets/PULSE-ECG/ECGInstruct
ECGBench (Testing): https://huggingface.co/datasets/PULSE-ECG/ECGBench
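Both datasets can be pulled directly from the Hugging Face Hub. Below is a minimal sketch using the `datasets` library; the default configuration is assumed and split names may differ, so consult each dataset card for the exact layout.

```python
# Minimal sketch: loading ECGInstruct and ECGBench from the Hugging Face Hub.
# Assumes the default configuration of each dataset; split names and fields
# may differ -- see the dataset cards linked above.
from datasets import load_dataset

ecg_instruct = load_dataset("PULSE-ECG/ECGInstruct")  # instruction-tuning data
ecgbench = load_dataset("PULSE-ECG/ECGBench")         # evaluation benchmark

print(ecg_instruct)  # inspect available splits and columns
```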
Introduction
We introduce PULSE-7B, a multimodal large language model (MLLM) designed for ECG image interpretation. PULSE-7B is trained on ECGInstruct, a comprehensive instruction-tuning dataset of over one million samples covering a wide range of ECG-related tasks drawn from diverse data sources. Traditional ECG interpretation methods are often constrained by their reliance on raw physiological signals and are limited to specific cardiac conditions; PULSE-7B addresses these limitations by robustly interpreting both printed and digital ECG images, which makes it especially valuable in resource-limited settings where access to raw signals may be restricted. We also introduce ECGBench, an evaluation benchmark spanning four key tasks across nine datasets. On ECGBench, PULSE-7B establishes new state-of-the-art performance, surpassing general MLLMs by an average accuracy improvement of 15% to 30%. These results highlight the potential of PULSE-7B to significantly advance ECG image interpretation and to serve as a more versatile and accurate tool for clinical practice.
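For reference, a minimal inference sketch is shown below. It assumes the checkpoint can be loaded through the LLaVA interface in `transformers` and uses a generic LLaVA-style prompt; the file name `ecg_example.png` and the prompt template are illustrative, and the project's GitHub code remains the authoritative inference pipeline (if the checkpoint ships in the original LLaVA codebase format, use that code instead).

```python
# Minimal inference sketch (assumption: the checkpoint is compatible with the
# LLaVA classes in `transformers`; the prompt template below is the generic
# LLaVA-1.5 format and may differ from the one used in training).
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "PULSE-ECG/PULSE-7B"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("ecg_example.png")  # a printed or digital ECG image
prompt = "USER: <image>\nWhat abnormalities are visible in this ECG? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```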
Overall performance of PULSE-7B on ECGBench
[Figure omitted: overall ECGBench results]

Model Performance
[Figures omitted: in-domain and out-of-domain evaluation results]

Case Study
[Figure omitted: case study of ECG image interpretation]
Citation
If you find this work helpful, please cite our paper:
@article{liu2024teach,
  title={Teach Multimodal LLMs to Comprehend Electrocardiographic Images},
  author={Liu, Ruoqi and Bai, Yuelin and Yue, Xiang and Zhang, Ping},
  journal={arXiv preprint arXiv:2410.19008},
  year={2024}
}