bokesyo committed on
Commit
c81611e
•
1 Parent(s): d02a0b4

Update README.md

Files changed (1)
README.md +5 -3
README.md CHANGED
@@ -21,6 +21,8 @@ Our model is capable of:
 
 - Help you build a personal library and retrieve book pages from a large collection of books.
 
+- It has only 2.8B parameters and has the potential to run on your PC.
+
 - It works like a human: it reads and comprehends with **vision** and remembers **multimodal** information in its hippocampus.
 
 ![Memex Architecture](images/memex.png)
@@ -29,13 +31,13 @@ Our model is capable of:
 
 - 2024-07-14: 🤗 We released an **online huggingface demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/minicpm-visual-embeeding-v0-demo)!
 
-- 2024-07-14: 😋 We released a **locally deployable Gradio demo** of `Memex`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can run `pipeline_gradio.py` to build a demo on your PC.
+- 2024-07-14: 😋 We released a **locally deployable Gradio demo** of `Memex`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can build a demo on your PC now!
 
-- 2024-07-13: 💻 We released a **locally deployable command-line demo** of `Memex` that retrieves the most relevant pages from a given (possibly very long) PDF file; take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
+- 2024-07-13: 💻 We released a **locally deployable command-line demo** that retrieves the most relevant pages from a given (possibly very long) PDF file; take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
 
 - 2024-06-27: 🚀 We released our first visual embedding model checkpoint on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).
 
-- 2024-05-08: 🌍 We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed, supports large batch sizes across multiple GPUs with zero-stage1) and eval code.
+- 2024-05-08: 🌍 We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed zero-stage2, supports large batch sizes across multiple GPUs with zero-stage1) and eval code.
 
 # Deploy on your PC
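The 2024-07-13 command-line demo is only linked above, not shown. Below is a rough sketch of the kind of retrieval loop such a pipeline performs: rasterize the PDF pages, embed every page and the text query with the checkpoint, then rank pages by cosine similarity. This is a hypothetical outline, not the actual `pipeline.py`; the `embed_image`/`embed_text` method names, the query string, and `my_book.pdf` are placeholder assumptions, and `pdf2image` is just one common way to rasterize pages.

```python
# Hypothetical sketch of page retrieval with the released checkpoint.
# embed_image()/embed_text() are ASSUMED method names, not the confirmed API;
# see pipeline.py in the repo for the real implementation.
import torch
from pdf2image import convert_from_path  # rasterizes PDF pages to PIL images
from transformers import AutoModel

path = "RhapsodyAI/minicpm-visual-embedding-v0"
model = AutoModel.from_pretrained(path, trust_remote_code=True).eval()

pages = convert_from_path("my_book.pdf", dpi=150)  # the PDF may be very long

with torch.no_grad():
    # ASSUMPTION: the checkpoint exposes image/text embedding entry points.
    page_embs = torch.stack([model.embed_image(p) for p in pages])  # (N, D)
    query_emb = model.embed_text("chapter about the hippocampus")   # (D,)

# Rank pages by cosine similarity between the query and each page embedding.
scores = torch.nn.functional.cosine_similarity(page_embs, query_emb.unsqueeze(0), dim=-1)
top = scores.topk(min(3, len(pages))).indices.tolist()
print("Most relevant pages:", top)
```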
 
 
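Likewise, here is a minimal sketch of a local Gradio front end in the spirit of the linked `pipeline_gradio.py`, with the ranking step stubbed out (a real version would reuse the embedding-and-ranking logic sketched above). `retrieve_pages` is an illustrative name, not the script's actual function.

```python
# Minimal sketch of a local Gradio UI in the spirit of pipeline_gradio.py.
# The ranking is a PLACEHOLDER; the real logic lives in pipeline_gradio.py.
import gradio as gr
from pdf2image import convert_from_path

def retrieve_pages(pdf_file, query, top_k=3):
    # gr.File may hand back a path string or a tempfile-like object.
    path = pdf_file if isinstance(pdf_file, str) else pdf_file.name
    pages = convert_from_path(path, dpi=120)
    # PLACEHOLDER: return the first pages; a real version would embed `query`
    # and every page with the checkpoint, then sort pages by similarity.
    return pages[:top_k]

demo = gr.Interface(
    fn=retrieve_pages,
    inputs=[gr.File(label="PDF"), gr.Textbox(label="Query")],
    outputs=gr.Gallery(label="Top pages"),
)
demo.launch()  # serves the demo locally on your PC
```

`launch()` prints a local URL (by default http://127.0.0.1:7860), so the demo runs entirely on your own machine.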