ranpox committed
Commit 2192952
1 Parent(s): 4172848

Update README.md

Files changed (1)
README.md +15 -0
README.md ADDED
@@ -0,0 +1,15 @@
+ ---
+ license: cc-by-sa-4.0
+
+ ---
+
+ # LayoutXLM
+ **Multimodal (text + layout/format + image) pre-training for document AI**
+
+ [GitHub Repository](https://github.com/microsoft/unilm/tree/master/layoutxlm)
+ ## Introduction
+ LayoutXLM is a multimodal pre-trained model for multilingual document understanding that aims to bridge language barriers in visually-rich document understanding. Experimental results show that it significantly outperforms existing state-of-the-art cross-lingual pre-trained models on the XFUN dataset.
+
+ [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836)
+
+ Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei, arXiv preprint, 2021
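
For reference, a minimal loading sketch with the 🤗 Transformers library (not part of the committed README). The model ID `microsoft/layoutxlm-base` is an assumption, not stated in this commit; substitute the ID of this repository if it differs.

```python
# Sketch only: the checkpoint ID below is an assumption for illustration.
from transformers import AutoModel

# LayoutXLM reuses the LayoutLMv2 architecture, so AutoModel resolves the
# checkpoint to a LayoutLMv2-style model (its visual backbone requires
# detectron2 to be installed).
model = AutoModel.from_pretrained("microsoft/layoutxlm-base")
model.eval()
```

Inputs then follow the LayoutLMv2 convention: token IDs, normalized word bounding boxes, and the page image.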