Tags: Transformers · PyTorch · layoutlmv2 · Inference Endpoints
Yiheng Xu committed commit 0b0e209 (parent: b61e18b)

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED

@@ -6,7 +6,7 @@ license: cc-by-sa-4.0
 # LayoutXLM
 **Multimodal (text + layout/format + image) pre-training for document AI**
 
-[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [Github Repository](https://github.com/microsoft/unilm/tree/master/layoutxlm)
+[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutxlm)
 ## Introduction
 LayoutXLM is a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. Experiment results show that it has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUN dataset.