---
license: agpl-3.0
datasets:
  - ds4sd/DocLayNet
language:
  - en
metrics:
  - accuracy
  - mape
  - precision
  - recall
pipeline_tag: object-detection
---

🤗 Live demo: https://huggingface.co/spaces/omoured/YOLOv10-Document-Layout-Analysis

## About 📋

The models were fine-tuned on 4×A100 GPUs using the DocLayNet-base dataset, which consists of 69,103 training images, 6,480 validation images, and 4,994 test images.
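As a rough sketch of that setup, the snippet below shows how a comparable fine-tuning run could be launched with the ultralytics-style YOLOv10 training API. The `doclaynet.yaml` dataset config, checkpoint name, and hyperparameters are illustrative assumptions, not the exact settings used for these checkpoints.

```python
# Minimal fine-tuning sketch (assumed ultralytics-style YOLOv10 API).
# "doclaynet.yaml", the checkpoint name, and all hyperparameters are
# illustrative placeholders, not the exact training configuration.
from ultralytics import YOLOv10

model = YOLOv10("yolov10x.pt")      # start from a pretrained checkpoint
model.train(
    data="doclaynet.yaml",          # dataset config pointing at DocLayNet-base
    epochs=100,
    imgsz=1024,
    batch=16,
    device=[0, 1, 2, 3],            # multi-GPU training, e.g. 4x A100
)
```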

## Results 📊

| Model     | mAP50 | mAP50-95 | Model Weights |
|-----------|-------|----------|---------------|
| YOLOv10-x | 0.924 | 0.740    | Download      |
| YOLOv10-b | 0.922 | 0.732    | Download      |
| YOLOv10-l | 0.921 | 0.732    | Download      |
| YOLOv10-m | 0.917 | 0.737    | Download      |
| YOLOv10-s | 0.905 | 0.713    | Download      |
| YOLOv10-n | 0.892 | 0.685    | Download      |

## Code 🔥

Check out our GitHub repo for the inference code: https://github.com/moured/YOLOv10-Document-Layout-Analysis
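For a quick start, here is a minimal inference sketch assuming the ultralytics-based YOLOv10 API used by that repo; the checkpoint and image filenames are placeholders, so substitute a weight file downloaded from the Results table above.

```python
# Minimal inference sketch (assumed ultralytics-style YOLOv10 API).
# Filenames are placeholders; use a checkpoint from the Results table.
from ultralytics import YOLOv10

model = YOLOv10("yolov10x_doclaynet.pt")

# Detect layout regions on a document page image.
results = model.predict(source="page.png", imgsz=1024, conf=0.25)

# Print class id, confidence, and box coordinates for each detection.
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.squeeze().tolist())
```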

## References 📝

1. YOLOv10

@article{wang2024yolov10,
  title={YOLOv10: Real-Time End-to-End Object Detection},
  author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
  journal={arXiv preprint arXiv:2405.14458},
  year={2024}
}
2. DocLayNet

@article{doclaynet2022,
  title={DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi={10.1145/3534678.3539043},
  url={https://arxiv.org/abs/2206.01062},
  author={Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year={2022}
}

## Contact

LinkedIn: https://www.linkedin.com/in/omar-moured/