LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing
Abstract
LLaVA-Interactive is a research prototype for multimodal human-AI interaction. The system can hold multi-turn dialogues with human users by taking multimodal inputs and generating multimodal responses. Importantly, LLaVA-Interactive goes beyond language prompts: visual prompts are enabled to align human intents in the interaction. The development of LLaVA-Interactive is extremely cost-efficient, as the system combines three multimodal skills of pre-built AI models without additional model training: visual chat from LLaVA, image segmentation from SEEM, and image generation and editing from GLIGEN. A diverse set of application scenarios is presented to demonstrate the promise of LLaVA-Interactive and to inspire future research in multimodal interactive systems.
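The abstract describes a training-free composition of three pre-built skills. The sketch below illustrates what such an orchestration could look like in Python: a visual prompt (e.g., a click or stroke) is first resolved into a segmentation mask, the mask grounds an optional edit, and the chat model produces the language response. All class and method names here (`VisualChat`, `Segmenter`, `Editor`, `InteractiveSession`) are hypothetical stand-ins for the LLaVA, SEEM, and GLIGEN interfaces, not their actual APIs.

```python
# Hypothetical sketch of the composition pattern from the abstract:
# three pre-built skills wired together without additional training.
# None of these classes correspond to the real LLaVA/SEEM/GLIGEN APIs.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Turn:
    text: str
    image: Optional[bytes] = None  # raw image bytes, if the user sent one
    mask: Optional[bytes] = None   # mask resolved from a visual prompt


class VisualChat:
    """Stand-in for LLaVA: multi-turn visual chat."""
    def reply(self, history: list, turn: Turn) -> str:
        return f"(chat response to: {turn.text!r})"


class Segmenter:
    """Stand-in for SEEM: turns clicks/strokes into a segmentation mask."""
    def segment(self, image: bytes, visual_prompt: str) -> bytes:
        return b"mask"


class Editor:
    """Stand-in for GLIGEN: grounded image generation and editing."""
    def edit(self, image: bytes, mask: bytes, instruction: str) -> bytes:
        return b"edited-image"


@dataclass
class InteractiveSession:
    chat: VisualChat = field(default_factory=VisualChat)
    seg: Segmenter = field(default_factory=Segmenter)
    editor: Editor = field(default_factory=Editor)
    history: list = field(default_factory=list)

    def step(self, turn: Turn, visual_prompt: str = "", edit: str = "") -> Turn:
        # Resolve the visual prompt into a mask first, so downstream
        # skills are aligned to the user's intended image region.
        if turn.image and visual_prompt:
            turn.mask = self.seg.segment(turn.image, visual_prompt)
        out_image = None
        if edit and turn.image and turn.mask:
            out_image = self.editor.edit(turn.image, turn.mask, edit)
        text = self.chat.reply(self.history, turn)
        self.history.append(turn)
        return Turn(text=text, image=out_image)


session = InteractiveSession()
reply = session.step(
    Turn(text="Replace the dog with a cat", image=b"photo"),
    visual_prompt="click on the dog",
    edit="a cat sitting",
)
print(reply.text)
```

Because each skill sits behind a narrow interface, any of the three models could be swapped for an alternative without retraining, which is the cost-efficiency argument the abstract makes.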