variante committed on
Commit
28ec691
1 Parent(s): 5fac8d3

Update README.md

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
@@ -1,3 +1,45 @@
- ---
- license: apache-2.0
- ---
+ ---
+ inference: false
+ pipeline_tag: image-text-to-text
+ license: apache-2.0
+ datasets:
+ - VIMA/VIMA-Data
+ tags:
+ - llara
+ - llava
+ - robotics
+ - vlm
+ ---
+ <br>
+ <br>
+
+ # LLaRA Model Card
+
+ This model is released with the paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**.
+
+ [Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
+
+ <sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
+
+ ## Model details
+
+ **Model type:**
+ `D-RT2-Style` is one of the baselines in our LLaRA paper, following the style of [RT2](https://robotics-transformer.github.io/).
+ This is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on the instruction-following dataset `D-RT2-Style`, converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
+ For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb).
+
+ **Model date:**
+ llava-1.5-7b-llara-D-RT2-Style-VIMA-80k was trained in June 2024.
+
+ **Paper or resources for more information:**
+ https://github.com/LostXine/LLaRA
+
+ **Where to send questions or comments about the model:**
+ https://github.com/LostXine/LLaRA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaRA is research on large multimodal models for robotics.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in robotics, computer vision, natural language processing, machine learning, and artificial intelligence.
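
The model-type entry above points to the LLaRA codebase for training and data conversion; as a quick-start sketch, the checkpoint files can be fetched with `huggingface_hub` and then loaded through that codebase. The repo id used below is an assumption inferred from the committer name and the model name stated on this card, and may need adjusting.

```python
# Minimal sketch: download the checkpoint locally with huggingface_hub.
# NOTE: the repo id is an assumption (committer "variante" + the model name
# stated on this card); replace it with the actual repository if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="variante/llava-1.5-7b-llara-D-RT2-Style-VIMA-80k")
print("Checkpoint downloaded to:", local_dir)

# Because the card sets `inference: false`, these weights are intended to be
# loaded and evaluated through the LLaRA/LLaVA code at
# https://github.com/LostXine/LLaRA rather than a generic inference pipeline.
```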