---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B
---
# 🦙 Llama3.1-8b-vision-audio Model Card

## Model Details

This repository contains a [LLaVA](https://github.com/haotian-liu/LLaVA)-style model that accepts both image and audio input. It is built on the [Llama 3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) foundation model and trained with the [PKU-Alignment/align-anything](https://github.com/PKU-Alignment/align-anything) library.

- **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B).

## Model Sources

- **Repository:** <https://github.com/PKU-Alignment/align-anything>
- **Dataset:**
  - <https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K>
  - <https://huggingface.co/datasets/PKU-Alignment/Align-Anything-Instruction-100K>
  - <https://huggingface.co/datasets/cvssp/WavCaps>
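
The datasets listed above can be inspected with the Hugging Face `datasets` library. The snippet below is a minimal sketch only; the split name (`train`) and column layout are assumptions, so check each dataset card before use.

```python
from datasets import load_dataset

# Load one of the instruction datasets used for fine-tuning (split name assumed).
ds = load_dataset("PKU-Alignment/Align-Anything-Instruction-100K", split="train")

print(ds)      # dataset size and column names
print(ds[0])   # one raw instruction example
```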

## How to Use the Model (Reproduction)

- Using the align-anything library:

```python
from align_anything.models.llama_vision_audio_model import (
    LlamaVisionAudioForConditionalGeneration,
    LlamaVisionAudioProcessor,
)
import torch
import torchaudio
from PIL import Image

path = "<path_to_model_dir>"  # replace with the local path to this model's weights
processor = LlamaVisionAudioProcessor.from_pretrained(path)
model = LlamaVisionAudioForConditionalGeneration.from_pretrained(path)

prompt = "<|start_header_id|>user<|end_header_id|>: Where is the capital of China?\n<|start_header_id|>assistant<|end_header_id|>: "

inputs = processor(text=prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))

prompt = "<|start_header_id|>user<|end_header_id|>: Summarize the audio's contents.<audio>\n<|start_header_id|>assistant<|end_header_id|>: "

audio_path = "align-anything/assets/test_audio.wav"
audio, _ = torchaudio.load(audio_path)
# Downmix multi-channel audio to mono, then pass the raw samples as a flat list
if audio.shape[0] > 1:
    audio = audio.mean(dim=0, keepdim=True)
audio = audio.squeeze().tolist()

inputs = processor(text=prompt, raw_speech=audio, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))

prompt = "<|start_header_id|>user<|end_header_id|>: <image> Give an overview of what's in the image.\n<|start_header_id|>assistant<|end_header_id|>: "
image_path = "align-anything/assets/test_image.webp"
image = Image.open(image_path)

inputs = processor(text=prompt, images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
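
For faster inference, the model and processed inputs can be moved to a GPU. The sketch below assumes the model class inherits the standard `from_pretrained` keyword arguments from `transformers` (including `torch_dtype`) and that a CUDA device with enough memory is available; otherwise keep the CPU/float32 defaults shown above.

```python
import torch

# Pick device and precision (half precision assumed safe on GPU).
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = LlamaVisionAudioForConditionalGeneration.from_pretrained(
    path, torch_dtype=dtype
).to(device)

# Move the processed tensors to the same device before generation.
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))
```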