
LayoutLMv3

Microsoft Document AI | GitHub

Model description

LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis.

LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. Preprint, 2022.
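
As a rough sketch of how such fine-tuning setups are typically wired up with the Hugging Face transformers library, the snippet below loads a LayoutLMv3 checkpoint for token classification (form understanding) and runs one document through it. The checkpoint id `microsoft/layoutlmv3-base`, the label count, and the example words and boxes are illustrative assumptions, not values taken from this card.

```python
# Minimal LayoutLMv3 token-classification sketch (assumptions noted in comments).
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

checkpoint = "microsoft/layoutlmv3-base"  # assumed checkpoint id, not from this card

# apply_ocr=False: words and bounding boxes are supplied by an external OCR engine.
processor = AutoProcessor.from_pretrained(checkpoint, apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=7)  # label count is illustrative

image = Image.open("document.png").convert("RGB")
words = ["Invoice", "Total", "42.00"]                                    # illustrative OCR words
boxes = [[80, 40, 220, 70], [80, 400, 160, 430], [170, 400, 260, 430]]  # boxes on a 0-1000 scale

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
predicted_label_ids = outputs.logits.argmax(-1)  # one predicted label id per token
```

For fine-tuning, the same processor call also accepts word-level labels, and the model can be trained with the standard Trainer API; the image-centric tasks mentioned above (document image classification, layout analysis) use the corresponding classification or detection heads instead.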

Results

| Dataset | Language | Precision | Recall | F1     |
|---------|----------|-----------|--------|--------|
| XFUND   | ZH       | 0.8980    | 0.9435 | 0.9202 |
| Dataset | Subject | Test Time | Name  | School | Examination Number | Seat Number | Class | Student Number | Grade | Score | Mean  |
|---------|---------|-----------|-------|--------|--------------------|-------------|-------|----------------|-------|-------|-------|
| EPHOIE  | 98.99   | 100.0     | 99.77 | 99.2   | 100.0              | 100.0       | 98.82 | 99.78          | 98.31 | 97.27 | 99.21 |

Citation

If you find LayoutLMv3 useful in your research, please cite the following paper:

@inproceedings{huang2022layoutlmv3,
  author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
  title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}

License

The content of this project itself is licensed under the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Portions of the source code are based on the transformers project. This project follows the Microsoft Open Source Code of Conduct.
