- Visual Context Window Extension: A New Perspective for Long Video Understanding
  Paper • 2409.20018 • Published • 8
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
  Paper • 2409.02889 • Published • 54
- Long Context Transfer from Language to Vision
  Paper • 2406.16852 • Published • 32
- lmms-lab/LongVA-7B-DPO
  Text Generation • Updated • 4.37k • 7
Collections including paper arxiv:2409.01071
- Distilling Vision-Language Models on Millions of Videos
  Paper • 2401.06129 • Published • 14
- Koala: Key frame-conditioned long video-LLM
  Paper • 2404.04346 • Published • 5
- MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding
  Paper • 2404.05726 • Published • 20
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding
  Paper • 2406.07471 • Published • 1
- iVideoGPT: Interactive VideoGPTs are Scalable World Models
  Paper • 2405.15223 • Published • 12
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 53
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 85
- Matryoshka Multimodal Models
  Paper • 2405.17430 • Published • 30
- Extending Llama-3's Context Ten-Fold Overnight
  Paper • 2404.19553 • Published • 33
- Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks
  Paper • 2407.08454 • Published
- VideoLLaMB: Long-context Video Understanding with Recurrent Memory Bridges
  Paper • 2409.01071 • Published • 26
- Spinning the Golden Thread: Benchmarking Long-Form Generation in Language Models
  Paper • 2409.02076 • Published • 9
- BLINK: Multimodal Large Language Models Can See but Not Perceive
  Paper • 2404.12390 • Published • 24
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning
  Paper • 2404.12803 • Published • 29
- Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models
  Paper • 2404.13013 • Published • 30
- InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
  Paper • 2404.06512 • Published • 29
- Chart-based Reasoning: Transferring Capabilities from LLMs to VLMs
  Paper • 2403.12596 • Published • 9
- Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models
  Paper • 2404.13013 • Published • 30
- PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
  Paper • 2404.16994 • Published • 35
- AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability
  Paper • 2405.14129 • Published • 12
- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 25
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 12
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 38
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 19