bokesyo committed
Commit 94fd7d7 • 1 Parent(s): be78911

Update README.md

Files changed (1): README.md (+2, -4)
README.md CHANGED
@@ -29,13 +29,11 @@ Our model is capable of:
 
 # News
 
- - 2024-08-18: 👀 We released a **new [end-to-end Visual RAG huggingface demo](https://huggingface.co/spaces/bokesyo/MiniCPMV-RAG-PDFQA)**, which supports **both retrieval and generation**, which means, you can use our system to **answer your questions within a long PDF** now!
+ - 2024-08-18: 👀 We released a new [end-to-end Visual RAG huggingface demo](https://huggingface.co/spaces/bokesyo/MiniCPMV-RAG-PDFQA) that supports **both retrieval and generation**: you can now use our system to **answer your questions within a long PDF**! The demo is also locally deployable; clone the code in the space and run it on your own device. (A minimal sketch of this retrieve-then-generate flow appears after the diff.)
 
  - 2024-08-17: 👊 We open-sourced a [cleaned version of the training codebase](https://github.com/RhapsodyAILab/MiniCPM-V-Embedding-v0-Train) for MiniCPM-Visual-Embedding. It supports **DeepSpeed ZeRO stages 1 and 2** and **large batch sizes** such as `4096` for full-parameter training that turns VLMs into dense retrievers. We also developed methods to filter training datasets and to generate queries from unlabelled datasets, and we support **multi-node, multi-GPU** high-efficiency **evaluation** on large retrieval datasets. With these efforts, we support contrastive learning for VLMs of up to `20B` parameters at a `4096` batch size, and we have verified that one can train a VLM dense retriever on only **1 GPU, yet with a batch size of `4096`**. (The training objective is sketched after the diff.)
 
- - 2024-07-14: 🤗 We released **online huggingface demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/MiniCPM_Visual_Document_Retriever_Demo)!
-
- - 2024-07-14: 😋 We released a **locally deployable Gradio demo** of `Memex`, take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can build a demo on your PC now!
+ - 2024-07-14: 🤗 We released an **online huggingface demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/MiniCPM_Visual_Document_Retriever_Demo)! This demo is also locally deployable: clone the code in the space and run it on your own device.
 
  - 2024-07-13: 💻 We released a **locally deployable, command-line demo** that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py). (A retrieval sketch appears right after the diff.)
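
For readers skimming this commit, here is the gist of the page-retrieval step behind `pipeline.py`: render each PDF page to an image, embed the pages and the text query into a shared vector space, and rank pages by cosine similarity. This is a minimal sketch under stated assumptions, not the repository's actual code: `embed_images` and `embed_query` are hypothetical stand-ins for the model's encoding calls, and `pdf2image` (which requires poppler) is assumed for page rendering.

```python
import torch
import torch.nn.functional as F
from pdf2image import convert_from_path  # assumption: pdf2image + poppler installed

def retrieve_pages(pdf_path: str, query: str, top_k: int = 3):
    """Return (page_index, score) pairs for the pages most relevant to `query`."""
    pages = convert_from_path(pdf_path)   # one PIL image per PDF page
    page_emb = embed_images(pages)        # hypothetical: (num_pages, dim) tensor
    query_emb = embed_query(query)        # hypothetical: (dim,) tensor
    # Cosine similarity = dot product of L2-normalized vectors.
    page_emb = F.normalize(page_emb, dim=-1)
    query_emb = F.normalize(query_emb, dim=0)
    scores = page_emb @ query_emb         # (num_pages,)
    best = torch.topk(scores, k=min(top_k, len(pages)))
    return [(int(i), float(s)) for s, i in zip(best.values, best.indices)]
```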
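
The 2024-08-18 Visual RAG demo chains that retrieval step with generation. Again a hedged sketch rather than the space's actual code: it reuses `retrieve_pages` from the sketch above, and `vlm_answer` is a hypothetical stand-in for the vision-language model that produces the final answer.

```python
def answer_from_pdf(pdf_path: str, question: str, top_k: int = 3) -> str:
    # Retrieval: find the pages most likely to contain the answer.
    hits = retrieve_pages(pdf_path, question, top_k=top_k)
    pages = convert_from_path(pdf_path)
    evidence = [pages[idx] for idx, _score in hits]
    # Generation: a VLM answers the question grounded in the retrieved pages.
    return vlm_answer(images=evidence, prompt=question)  # hypothetical call
```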
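
On the training side, the 2024-08-17 item describes contrastive learning with in-batch negatives, where every other page in the batch serves as a negative for a given query; that is why batch sizes like `4096` help, and fitting such a batch on a single GPU typically relies on a gradient-caching-style technique, which this sketch omits. Below is a minimal PyTorch sketch of an InfoNCE-style objective over precomputed embeddings; it illustrates the standard formulation, not necessarily the exact loss in the released codebase.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  doc_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """In-batch-negative contrastive loss for dense retrieval.

    `query_emb` and `doc_emb` are (batch, dim) tensors where doc_emb[i]
    embeds the positive page for query i; rows j != i act as negatives.
    """
    query_emb = F.normalize(query_emb, dim=-1)
    doc_emb = F.normalize(doc_emb, dim=-1)
    # (batch, batch) similarity matrix: the diagonal holds the positives.
    logits = query_emb @ doc_emb.T / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

Each query sees `batch - 1` negatives for free, so doubling the batch roughly doubles the negative pool without any extra labeled data.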