Columns (type and reported value range):
Unnamed: 0 (int64): 0 to 2.72k
title (string): length 14 to 153
Arxiv link (string): length 1 to 31
authors (string): length 5 to 1.5k
arxiv_id (float64): 2k to 2.41k
abstract (string): length 435 to 2.86k
Model (string): 1 distinct value (empty in the rows shown)
GitHub (string): 1 distinct value (empty in the rows shown)
Space (string): 1 distinct value (empty in the rows shown)
Dataset (string): 1 distinct value (empty in the rows shown)
id (int64): 0 to 2.72k
2,500
Text-Guided 3D Face Synthesis - From Generation to Editing
Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu
null
Text-guided 3D face synthesis has achieved remarkable results by leveraging text-to-image (T2I) diffusion models. However, most existing works focus solely on direct generation, ignoring editing, which restricts them from synthesizing customized 3D faces through iterative adjustments. In this paper, we propose a unified text-guided framework from face generation to editing. In the generation stage, we propose a geometry-texture decoupled generation to mitigate the loss of geometric details caused by coupling. Besides, decoupling enables us to utilize the generated geometry as a condition for texture generation, yielding highly geometry-texture aligned results. We further employ a fine-tuned texture diffusion model to enhance texture quality in both RGB and YUV space. In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts. To enable sequential editing, we introduce a UV domain consistency preservation regularization, preventing unintentional changes to irrelevant facial attributes. Besides, we propose a self-guided consistency weight strategy to improve editing efficacy while preserving consistency. Through comprehensive experiments, we showcase our method's superiority in face synthesis.
2,500
2,501
AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving
http://arxiv.org/abs/2403.17373
Mingfu Liang, Jong-Chyi Su, Samuel Schulter, Sparsh Garg, Shiyu Zhao, Ying Wu, Manmohan Chandraker
2403.17373
Autonomous vehicle (AV) systems rely on robust perception models as a cornerstone of safety assurance. However, objects encountered on the road exhibit a long-tailed distribution, with rare or unseen categories posing challenges to a deployed perception model. This necessitates an expensive process of continuously curating and annotating data with significant human effort. We propose to leverage recent advances in vision-language and large language models to design an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios. This process operates iteratively, allowing for continuous self-improvement of the model. We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
2,501
2,502
Multiplane Prior Guided Few-Shot Aerial Scene Rendering
Zihan Gao, Licheng Jiao, Lingling Li, Xu Liu, Fang Liu, Puhua Chen, Yuwei Guo
null
Neural Radiance Fields (NeRF) have been successfully applied in various aerial scenes, yet they face challenges with sparse views due to limited supervision. The acquisition of dense aerial views is often prohibitive, as unmanned aerial vehicles (UAVs) may encounter constraints in perspective range and energy constraints. In this work, we introduce Multiplane Prior guided NeRF (MPNeRF), a novel approach tailored for few-shot aerial scene rendering, marking a pioneering effort in this domain. Our key insight is that the intrinsic geometric regularities specific to aerial imagery could be leveraged to enhance NeRF in sparse aerial scenes. By investigating NeRF's and Multiplane Image (MPI)'s behavior, we propose to guide the training process of NeRF with a Multiplane Prior. The proposed Multiplane Prior draws upon MPI's benefits and incorporates advanced image comprehension through a SwinV2 Transformer pre-trained via SimMIM. Our extensive experiments demonstrate that MPNeRF outperforms existing state-of-the-art methods applied in non-aerial contexts by tripling the performance in SSIM and LPIPS, even with three views available. We hope our work offers insights into the development of NeRF-based applications in aerial scenes with limited data.
2,502
2,503
MAS: Multi-view Ancestral Sampling for 3D Motion Generation Using 2D Diffusion
http://arxiv.org/abs/2310.14729
Roy Kapon, Guy Tevet, Daniel Cohen-Or, Amit H. Bermano
2310.14729
We introduce Multi-view Ancestral Sampling (MAS), a method for 3D motion generation using 2D diffusion models that were trained on motions obtained from in-the-wild videos. As such, MAS opens opportunities to exciting and diverse fields of motion previously under-explored, as 3D data is scarce and hard to collect. MAS works by simultaneously denoising multiple 2D motion sequences representing different views of the same 3D motion. It ensures consistency across all views at each diffusion step by combining the individual generations into a unified 3D sequence and projecting it back to the original views. We demonstrate MAS on 2D pose data acquired from videos depicting professional basketball maneuvers, rhythmic gymnastic performances featuring a ball apparatus, and horse races. In each of these domains, 3D motion capture is arduous, and yet MAS generates diverse and realistic 3D sequences. Unlike the Score Distillation approach, which optimizes each sample by repeatedly applying small fixes, our method uses a sampling process that was constructed for the diffusion framework. As we demonstrate, MAS avoids common issues such as out-of-domain sampling and mode-collapse. https://guytevet.github.io/mas-page/
2,503
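To make the per-step consistency idea in the MAS entry above concrete, here is a small illustrative Python sketch (not the authors' code): per-view 2D poses are lifted to a single 3D pose by least squares under assumed, known orthographic cameras, then reprojected so every view agrees. The camera setup, joint count, and noise level are toy assumptions.

```python
import numpy as np

def triangulate_orthographic(poses_2d, projections):
    """poses_2d: (V, J, 2) noisy 2D joints per view; projections: (V, 2, 3)."""
    V, J, _ = poses_2d.shape
    joints_3d = np.zeros((J, 3))
    for j in range(J):
        # Solve min_X sum_v ||P_v X - x_vj||^2 via the normal equations.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for v in range(V):
            P = projections[v]
            A += P.T @ P
            b += P.T @ poses_2d[v, j]
        joints_3d[j] = np.linalg.solve(A, b)
    return joints_3d

def reproject(joints_3d, projections):
    # (V, 2, 3) x (J, 3) -> (V, J, 2)
    return np.einsum('vij,kj->vki', projections, joints_3d)

# Toy usage: three orthographic views of a 17-joint pose.
rng = np.random.default_rng(0)
gt = rng.normal(size=(17, 3))
views = np.stack([np.eye(3)[:2],                           # front
                  np.array([[0., 0., 1.], [0., 1., 0.]]),  # side
                  np.array([[1., 0., 0.], [0., 0., 1.]])]) # top
noisy_2d = reproject(gt, views) + 0.05 * rng.normal(size=(3, 17, 2))
consistent_2d = reproject(triangulate_orthographic(noisy_2d, views), views)
```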
2,504
Smart Help: Strategic Opponent Modeling for Proactive and Adaptive Robot Assistance in Households
http://arxiv.org/abs/2404.09001
Zhihao Cao, Zidong Wang, Siwen Xie, Anji Liu, Lifeng Fan
2404.09001
Despite the significant demand for assistive technology among vulnerable groups (e.g., the elderly, children, and the disabled) in daily tasks, research into advanced AI-driven assistive solutions that genuinely accommodate their diverse needs remains sparse. Traditional human-machine interaction tasks often require machines to simply help without nuanced consideration of human abilities and feelings, such as their opportunity for practice and learning, sense of self-improvement, and self-esteem. Addressing this gap, we define a pivotal and novel challenge, Smart Help, which aims to provide proactive yet adaptive support to human agents with diverse disabilities and dynamic goals in various tasks and environments. To establish this challenge, we leverage AI2-THOR to build a new interactive 3D realistic household environment for the Smart Help task. We introduce an innovative opponent modeling module that provides a nuanced understanding of the main agent's capabilities and goals, in order to optimize the assisting agent's helping policy. Rigorous experiments validate the efficacy of our model components and show the superiority of our holistic approach against established baselines. Our findings illustrate the potential of AI-imbued assistive robots in improving the well-being of vulnerable groups.
2,504
2,505
Bilateral Event Mining and Complementary for Event Stream Super-Resolution
http://arxiv.org/abs/2405.10037
Zhilin Huang, Quanmin Liang, Yijie Yu, Chujun Qin, Xiawu Zheng, Kai Huang, Zikun Zhou, Wenming Yang
2405.10037
Event Stream Super-Resolution (ESR) aims to address the challenge of insufficient spatial resolution in event streams, which holds great significance for the application of event cameras in complex scenarios. Previous works for ESR often process positive and negative events in a mixed paradigm. This paradigm limits their ability to effectively model the unique characteristics of each event and mutually refine each other by considering their correlations. In this paper, we propose a bilateral event mining and complementary network (BMCNet) to fully leverage the potential of each event and capture the shared information to complement each other simultaneously. Specifically, we resort to a two-stream network to accomplish comprehensive mining of each type of events individually. To facilitate the exchange of information between two streams, we propose a bilateral information exchange (BIE) module. This module is layer-wisely embedded between two streams, enabling the effective propagation of hierarchical global information while alleviating the impact of invalid information brought by inherent characteristics of events. The experimental results demonstrate that our approach outperforms the previous state-of-the-art methods in ESR, achieving performance improvements of over 11% on both real and synthetic datasets. Moreover, our method significantly enhances the performance of event-based downstream tasks such as object recognition and video reconstruction. Our code is available at https://github.com/Lqm26/BMCNet-ESR.
2,505
2,506
Online Task-Free Continual Generative and Discriminative Learning via Dynamic Cluster Memory
Fei Ye, Adrian G. Bors
null
Online Task-Free Continual Learning (OTFCL) aims to learn novel concepts from streaming data without accessing task information. Most memory-based approaches used in OTFCL are not suitable for unsupervised learning because they require accessing supervised signals to implement their sample selection mechanisms. In this study, we address this issue by proposing a novel memory management approach, namely the Dynamic Cluster Memory (DCM), which builds new memory clusters to capture distribution shifts over time without accessing any supervised signals. DCM introduces a novel memory expansion mechanism based on the knowledge discrepancy criterion, which evaluates the novelty of the incoming data as the signal for the memory expansion, ensuring a compact memory capacity. We also propose a new sample selection approach that automatically stores incoming data samples with similar semantic information in the same memory cluster, while also facilitating the knowledge diversity among memory clusters. Furthermore, a novel memory pruning approach is proposed to automatically remove overlapping memory clusters through a graph relation evaluation, ensuring a fixed memory capacity while maintaining the diversity among the samples stored in the memory. The proposed DCM is model-free, plug-and-play, and can be used in both supervised and unsupervised learning without modifications. Empirical results on OTFCL experiments show that the proposed DCM outperforms the state-of-the-art while requiring fewer data samples to be stored. The source code is available at https://github.com/dtuzi123/DCM.
2,506
2,507
Rapid Motor Adaptation for Robotic Manipulator Arms
http://arxiv.org/abs/2312.04670
Yichao Liang, Kevin Ellis, João Henriques
2312.04670
Developing generalizable manipulation skills is a core challenge in embodied AI. This includes generalization across diverse task configurations, encompassing variations in object shape, density, friction coefficient, and external disturbances such as forces applied to the robot. Rapid Motor Adaptation (RMA) offers a promising solution to this challenge. It posits that essential hidden variables influencing an agent's task performance, such as object mass and shape, can be effectively inferred from the agent's action and proprioceptive history. Drawing inspiration from RMA in locomotion and in-hand rotation, we use depth perception to develop agents tailored for rapid motor adaptation in a variety of manipulation tasks. We evaluated our agents on four challenging tasks from the Maniskill2 benchmark, namely pick-and-place operations with hundreds of objects from the YCB and EGAD datasets, peg insertion with precise position and orientation, and operating a variety of faucets and handles, with customized environment variations. Empirical results demonstrate that our agents surpass state-of-the-art methods like automatic domain randomization and vision-based policies, obtaining better generalization performance and sample efficiency.
2,507
2,508
SANeRF-HQ: Segment Anything for NeRF in High Quality
Yichen Liu, Benran Hu, Chi-Keung Tang, Yu-Wing Tai
null
Recently, the Segment Anything Model (SAM) has showcased remarkable capabilities of zero-shot segmentation, while NeRF (Neural Radiance Fields) has gained popularity as a method for various 3D problems beyond novel view synthesis. Though there exist initial attempts to incorporate these two methods into 3D segmentation, they face the challenge of accurately and consistently segmenting objects in complex scenarios. In this paper, we introduce the Segment Anything for NeRF in High Quality (SANeRF-HQ) to achieve high-quality 3D segmentation of any target object in a given scene. SANeRF-HQ utilizes SAM for open-world object segmentation guided by user-supplied prompts, while leveraging NeRF to aggregate information from different viewpoints. To overcome the aforementioned challenges, we employ density field and RGB similarity to enhance the accuracy of segmentation boundary during the aggregation. Emphasizing on segmentation accuracy, we evaluate our method on multiple NeRF datasets where high-quality ground-truths are available or manually annotated. SANeRF-HQ shows a significant quality improvement over state-of-the-art methods in NeRF object segmentation, provides higher flexibility for object localization, and enables more consistent object segmentation across multiple views.
2,508
2,509
DSGG: Dense Relation Transformer for an End-to-end Scene Graph Generation
http://arxiv.org/abs/2403.14886
Zeeshan Hayder, Xuming He
2403.14886
Scene graph generation aims to capture detailed spatial and semantic relationships between objects in an image, which is challenging due to incomplete labeling, long-tailed relationship categories, and relational semantic overlap. Existing Transformer-based methods either employ distinct queries for objects and predicates or utilize holistic queries for relation triplets, and hence often suffer from limited capacity in learning low-frequency relationships. In this paper, we present a new Transformer-based method called DSGG that views scene graph detection as a direct graph prediction problem based on a unique set of graph-aware queries. In particular, each graph-aware query encodes a compact representation of both the node and all of its relations in the graph, acquired through the utilization of a relaxed sub-graph matching during the training process. Moreover, to address the problem of relational semantic overlap, we utilize a strategy for relation distillation, aiming to efficiently learn multiple instances of semantic relationships. Extensive experiments on the VG and the PSG datasets show that our model achieves state-of-the-art results, showing a significant improvement of 3.5% and 6.7% in mR@50 and mR@100 for the scene-graph generation task, and achieves an even more substantial improvement of 8.5% and 10.3% in mR@50 and mR@100 for the panoptic scene graph generation task. Code is available at https://github.com/zeeshanhayder/DSGG.
2,509
2,510
Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary
http://arxiv.org/abs/2401.08209
Leheng Zhang, Yawei Li, Xingyu Zhou, Xiaorui Zhao, Shuhang Gu
2401.08209
Single Image Super-Resolution is a classic computer vision problem that involves estimating high-resolution (HR) images from low-resolution (LR) ones. Although deep neural networks (DNNs), especially Transformers for super-resolution, have seen significant advancements in recent years, challenges still remain, particularly in the limited receptive field caused by window-based self-attention. To address these issues, we introduce a group of auxiliary Adaptive Token Dictionaries to the SR Transformer and establish an ATD-SR method. The introduced token dictionary could learn prior information from training data and adapt the learned prior to the specific testing image through an adaptive refinement step. The refinement strategy could not only provide global information to all input tokens but also group image tokens into categories. Based on category partitions, we further propose a category-based self-attention mechanism designed to leverage distant but similar tokens for enhancing input features. The experimental results show that our method achieves the best performance on various single image super-resolution benchmarks.
2,510
2,511
Object Dynamics Modeling with Hierarchical Point Cloud-based Representations
http://arxiv.org/abs/2404.06044
Chanho Kim, Li Fuxin
2404.06044
Modeling object dynamics with a neural network is an important problem with numerous applications. Most recent work has been based on graph neural networks. However, physics happens in 3D space, where geometric information potentially plays an important role in modeling physical phenomena. In this work, we propose a novel U-net architecture based on continuous point convolution which naturally embeds information from 3D coordinates and allows for multi-scale feature representations with established downsampling and upsampling procedures. Bottleneck layers in the downsampled point clouds lead to better long-range interaction modeling. Besides, the flexibility of point convolutions allows our approach to generalize to sparsely sampled points from mesh vertices and dynamically generate features on important interaction points on mesh faces. Experimental results demonstrate that our approach significantly improves the state-of-the-art, especially in scenarios that require accurate gravity or collision reasoning.
2,511
2,512
WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concepts
http://arxiv.org/abs/2402.18956
Yong Hyun Ahn, Hyeon Bae Kim, Seong Tae Kim
2402.18956
Recent advancements in neural networks have showcased their remarkable capabilities across various domains. Despite these successes, the "black box" problem still remains. To address this, we propose a novel framework, WWW, that offers the 'what', 'where', and 'why' of the neural network decisions in human-understandable terms. Specifically, WWW utilizes an adaptive selection for concept discovery, employing adaptive cosine similarity and thresholding techniques to effectively explain 'what'. To address the 'where' and 'why', we propose a novel combination of neuron activation maps (NAMs) with Shapley values, generating localized concept maps and heatmaps for individual inputs. Furthermore, WWW introduces a method for predicting uncertainty, leveraging heatmap similarities to estimate the prediction's reliability. Experimental evaluations of WWW demonstrate superior performance in both quantitative and qualitative metrics, outperforming existing methods in interpretability. WWW provides a unified solution for explaining 'what', 'where', and 'why', introducing a method for localized explanations from global interpretations and offering a plug-and-play solution adaptable to various architectures. The code is available at: https://github.com/ailab-kyunghee/WWW
2,512
2,513
SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery
http://arxiv.org/abs/2312.10115
Xin Guo, Jiangwei Lao, Bo Dang, Yingying Zhang, Lei Yu, Lixiang Ru, Liheng Zhong, Ziyuan Huang, Kang Wu, Dingxiang Hu, Huimei He, Jian Wang, Jingdong Chen, Ming Yang, Yongjun Zhang, Yansheng Li
2312.10115
Prior studies on Remote Sensing Foundation Model (RSFM) reveal immense potential towards a generic model for Earth Observation. Nevertheless, these works primarily focus on a single modality without temporal and geo-context modeling, hampering their capabilities for diverse tasks. In this study, we present SkySense, a generic billion-scale model pre-trained on a curated multi-modal Remote Sensing Imagery (RSI) dataset with 21.5 million temporal sequences. SkySense incorporates a factorized multi-modal spatiotemporal encoder taking temporal sequences of optical and Synthetic Aperture Radar (SAR) data as input. This encoder is pre-trained by our proposed Multi-Granularity Contrastive Learning to learn representations across different modal and spatial granularities. To further enhance the RSI representations by the geo-context clue, we introduce Geo-Context Prototype Learning to learn region-aware prototypes upon RSI's multi-modal spatiotemporal features. To our best knowledge, SkySense is the largest Multi-Modal RSFM to date, whose modules can be flexibly combined or used individually to accommodate various tasks. It demonstrates remarkable generalization capabilities on a thorough evaluation encompassing 16 datasets over 7 tasks, from single- to multi-modal, static to temporal, and classification to localization. SkySense surpasses 18 recent RSFMs in all test scenarios. Specifically, it outperforms the latest models such as GFM, SatLas, and Scale-MAE by a large margin, i.e., 2.76%, 3.67%, and 3.61% on average, respectively. We will release the pre-trained weights to facilitate future research and Earth Observation applications.
2,513
2,514
CaKDP: Category-aware Knowledge Distillation and Pruning Framework for Lightweight 3D Object Detection
Haonan Zhang, Longjun Liu, Yuqi Huang, Zhao Yang, Xinyu Lei, Bihan Wen
null
Knowledge distillation (KD) possesses immense potential to accelerate deep neural networks (DNNs) for LiDAR-based 3D detection. However, in most of the prevailing approaches, suboptimal teacher models and insufficient student architecture investigations limit the performance gains. To address these issues, we propose a simple yet effective Category-aware Knowledge Distillation and Pruning (CaKDP) framework for compressing 3D detectors. Firstly, CaKDP transfers the knowledge of a two-stage detector to a one-stage student one, mitigating the impact of inadequate teacher models. To bridge the gap between the heterogeneous detectors, we investigate their differences, and then introduce the student-motivated category-aware KD to align the category prediction between distillation pairs. Secondly, we propose a category-aware pruning scheme to obtain the customizable architecture of the compact student model. The method calculates the category prediction gap before and after removing each filter to evaluate the importance of filters, and retains the important filters. Finally, to further improve the student performance, a modified IOU-aware refinement module with negligible computations is leveraged to remove the redundant false positive predictions. Experiments demonstrate that CaKDP achieves a compact detector with high performance. For example, on WOD, CaKDP accelerates CenterPoint by half while boosting L2 mAPH by 1.61%. The code is available at https://github.com/zhnxjtu/CaKDP.
2,514
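The pruning criterion in the CaKDP entry above (score a filter by how much the category predictions change when it is removed) can be sketched as follows. This is a hedged toy illustration: the model, calibration batch, and KL-based gap measure are placeholders standing in for whatever the paper actually uses.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_importance(model, layer, calib_batch):
    """Score each output filter of `layer` by the KL gap in class predictions."""
    base = F.softmax(model(calib_batch), dim=1)
    weight_backup = layer.weight.detach().clone()
    bias_backup = None if layer.bias is None else layer.bias.detach().clone()
    scores = []
    for f in range(layer.out_channels):
        layer.weight[f].zero_()                      # "remove" filter f
        if layer.bias is not None:
            layer.bias[f] = 0.0
        pruned = F.softmax(model(calib_batch), dim=1)
        gap = F.kl_div(pruned.clamp_min(1e-8).log(), base, reduction='batchmean')
        scores.append(gap.item())                    # larger gap = more important
        layer.weight.copy_(weight_backup)            # restore the layer
        if layer.bias is not None:
            layer.bias.copy_(bias_backup)
    return scores

# Toy usage with a small conv classifier; keep the 4 most important filters.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10))
scores = filter_importance(model, model[0], torch.randn(16, 3, 32, 32))
keep = sorted(range(len(scores)), key=lambda i: -scores[i])[:4]
```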
2,515
Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices
http://arxiv.org/abs/2311.18129
Huancheng Chen, Haris Vikalo
2311.18129
While federated learning (FL) systems often utilize quantization to battle communication and computational bottlenecks, they have heretofore been limited to deploying fixed-precision quantization schemes. Meanwhile, the concept of mixed-precision quantization (MPQ), where different layers of a deep learning model are assigned varying bit-width, remains unexplored in the FL settings. We present a novel FL algorithm, FedMPQ, which introduces mixed-precision quantization to resource-heterogeneous FL systems. Specifically, local models, quantized so as to satisfy the bit-width constraint, are trained by optimizing an objective function that includes a regularization term which promotes reduction of precision in some of the layers without significant performance degradation. The server collects local model updates, de-quantizes them into full-precision models, and then aggregates them into a global model. To initialize the next round of local training, the server relies on the information learned in the previous training round to customize bit-width assignments of the models delivered to different clients. In extensive benchmarking experiments on several model architectures and different datasets, in both iid and non-iid settings, FedMPQ outperformed the baseline FL schemes that utilize fixed-precision quantization while incurring only a minor computational overhead on the participating devices.
2,515
2,516
CFAT: Unleashing Triangular Windows for Image Super-resolution
Abhisek Ray, Gaurav Kumar, Maheshkumar H. Kolekar
null
Transformer-based models have revolutionized the field of image super-resolution (SR) by harnessing their inherent ability to capture complex contextual features. The overlapping rectangular shifted window technique used in transformer architecture nowadays is a common practice in super-resolution models to improve the quality and robustness of image upscaling. However, it suffers from distortion at the boundaries and has limited unique shifting modes. To overcome these weaknesses, we propose a non-overlapping triangular window technique that synchronously works with the rectangular one to mitigate boundary-level distortion and allows the model to access more unique shifting modes. In this paper, we propose a Composite Fusion Attention Transformer (CFAT) that incorporates triangular-rectangular window-based local attention with a channel-based global attention technique in image super-resolution. As a result, CFAT enables attention mechanisms to be activated on more image pixels and captures long-range multi-scale features to improve SR performance. The extensive experimental results and ablation study demonstrate the effectiveness of CFAT in the SR domain. Our proposed model shows a significant 0.7 dB performance improvement over other state-of-the-art SR architectures.
2,516
2,517
ICP-Flow: LiDAR Scene Flow Estimation with ICP
Yancong Lin, Holger Caesar
null
Scene flow characterizes the 3D motion between two LiDAR scans captured by an autonomous vehicle at nearby timesteps. Prevalent methods consider scene flow as point-wise unconstrained flow vectors that can be learned by either large-scale training beforehand or time-consuming optimization at inference. However, these methods do not take into account that objects in autonomous driving often move rigidly. We incorporate this rigid-motion assumption into our design, where the goal is to associate objects over scans and then estimate the locally rigid transformations. We propose ICP-Flow, a learning-free flow estimator. The core of our design is the conventional Iterative Closest Point (ICP) algorithm, which aligns the objects over time and outputs the corresponding rigid transformations. Crucially, to aid ICP, we propose a histogram-based initialization that discovers the most likely translation, thus providing a good starting point for ICP. The complete scene flow is then recovered from the rigid transformations. We outperform state-of-the-art baselines, including supervised models, on the Waymo dataset and perform competitively on Argoverse-v2 and nuScenes. Further, we train a feedforward neural network supervised by the pseudo labels from our model and achieve top performance among all models capable of real-time inference. We validate the advantage of our model on scene flow estimation with longer temporal gaps, up to 0.4 seconds, where other models fail to deliver meaningful results.
2,517
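The histogram-based translation initialization mentioned in the ICP-Flow entry above can be illustrated with a minimal sketch: vote over pairwise point differences between two scans of one object and take the densest bin as the initial translation handed to ICP. Bin size, range, and the toy data are assumptions, and the ICP refinement itself is omitted.

```python
import numpy as np

def histogram_translation_init(src, dst, bin_size=0.1, max_range=5.0):
    """src, dst: (N, 3), (M, 3) points of one object at two timesteps."""
    diffs = (dst[None, :, :] - src[:, None, :]).reshape(-1, 3)   # candidate translations
    diffs = diffs[np.all(np.abs(diffs) < max_range, axis=1)]
    n_bins = round(2 * max_range / bin_size)
    edges = np.linspace(-max_range, max_range, n_bins + 1)
    hist, _ = np.histogramdd(diffs, bins=(edges, edges, edges))
    i, j, k = np.unravel_index(np.argmax(hist), hist.shape)
    centers = edges[:-1] + bin_size / 2
    return np.array([centers[i], centers[j], centers[k]])

# Toy usage: a translated, lightly perturbed point cloud.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(200, 3))
t_true = np.array([1.2, -0.4, 0.1])
dst = src + t_true + 0.02 * rng.normal(size=src.shape)
t_init = histogram_translation_init(src, dst)   # lands within a bin of t_true
```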
2,518
MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer
http://arxiv.org/abs/2403.02991
Jianjian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen
2403.02991
Vision-Language Transformers (VLTs) have shown great success recently, but are meanwhile accompanied by heavy computation costs, where a major reason can be attributed to the large number of visual and language tokens. Existing token pruning research for compressing VLTs mainly follows a single-modality-based scheme, yet ignores the critical role of aligning different modalities for guiding the token pruning process, causing the important tokens for one modality to be falsely pruned in another modality branch. Meanwhile, existing VLT pruning works also lack the flexibility to dynamically compress each layer based on different input samples. To this end, we propose a novel framework named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for accelerating various VLTs. Specifically, we first introduce a well-designed Multi-modality Alignment Guidance (MAG) module that can align features of the same semantic concept from different modalities, to ensure the pruned tokens are less important for all modalities. We further design a novel Dynamic Token Pruning (DTP) module, which can adaptively adjust the token compression ratio in each layer based on different input instances. Extensive experiments on various benchmarks demonstrate that MADTP significantly reduces the computational complexity of kinds of multimodal models while preserving competitive performance. Notably, when applied to the BLIP model in the NLVR2 dataset, MADTP can reduce the GFLOPs by 80% with less than 4% performance degradation.
2,518
2,519
G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images
Zixiong Huang, Qi Chen, Libo Sun, Yifan Yang, Naizhou Wang, Qi Wu, Mingkui Tan
null
Novel view synthesis aims to generate new view images of a given view image collection. Recent attempts address this problem by relying on 3D geometry priors (e.g., shapes, sizes, and positions) learned from multi-view images. However, such methods encounter the following limitations: 1) they require a set of multi-view images as training data for a specific scene (e.g., face, car, or chair), which is often unavailable in many real-world scenarios; 2) they fail to extract the geometry priors from single-view images due to the lack of multi-view supervision. In this paper, we propose a Geometry-enhanced NeRF (G-NeRF), which seeks to enhance the geometry priors by a geometry-guided multi-view synthesis approach, followed by a depth-aware training. In the synthesis process, inspired by the fact that existing 3D GAN models can unconditionally synthesize high-fidelity multi-view images, we seek to adopt off-the-shelf 3D GAN models, such as EG3D, as a free source to provide geometry priors through synthesizing multi-view data. Simultaneously, to further improve the geometry quality of the synthetic data, we introduce a truncation method to effectively sample latent codes within 3D GAN models. To tackle the absence of multi-view supervision for single-view images, we design the depth-aware training approach, incorporating a depth-aware discriminator to guide geometry priors through depth maps. Experiments demonstrate the effectiveness of our method in terms of both qualitative and quantitative results.
2,519
2,520
Neural Fields as Distributions: Signal Processing Beyond Euclidean Space
Daniel Rebain, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
null
Neural fields have emerged as a powerful and broadly applicable method for representing signals. However, in contrast to classical discrete digital signal processing, the portfolio of tools to process such representations is still severely limited and restricted to Euclidean domains. In this paper, we address this problem by showing how a probabilistic re-interpretation of neural fields can enable their training and inference processes to become "filter-aware". The formulation we propose not only merges training and filtering in an efficient way, but also generalizes beyond the familiar Euclidean coordinate spaces to the more general set of smooth manifolds and convolutions induced by the actions of Lie groups. We demonstrate how this framework can enable novel integrations of signal processing techniques for neural field applications on both Euclidean domains, such as images and audio, as well as non-Euclidean domains, such as rotations and rays. A noteworthy benefit of our method is its applicability. Our method can be summarized as primarily a modification of the loss function, and in most cases does not require changes to the network architecture or the inference process.
2,520
2,521
Rolling Shutter Correction with Intermediate Distortion Flow Estimation
http://arxiv.org/abs/2404.06350
Mingdeng Cao, Sidi Yang, Yujiu Yang, Yinqiang Zheng
2404.06350
This paper proposes to correct the rolling shutter (RS) distorted images by estimating the distortion flow from the global shutter (GS) to RS directly. Existing methods usually perform correction using the undistortion flow from the RS to GS. They initially predict the flow from consecutive RS frames, subsequently rescaling it as the displacement fields from the RS frame to the underlying GS image using time-dependent scaling factors. Following this, RS-aware forward warping is employed to convert the RS image into its GS counterpart. Nevertheless, this strategy is prone to two shortcomings. First, the undistortion flow estimation is rendered inaccurate by merely linearly scaling the flow, due to the complex non-linear motion nature. Second, RS-aware forward warping often results in unavoidable artifacts. To address these limitations, we introduce a new framework that directly estimates the distortion flow and rectifies the RS image with the backward warping operation. More specifically, we first propose a global correlation-based flow attention mechanism to estimate the initial distortion flow and GS feature jointly, which are then refined by the following coarse-to-fine decoder layers. Additionally, a multi-distortion flow prediction strategy is integrated to mitigate the issue of inaccurate flow estimation further. Experimental results validate the effectiveness of the proposed method, which outperforms state-of-the-art approaches on various benchmarks while maintaining high efficiency. The project is available at https://github.com/ljzycmd/DFRSC.
2,521
2,522
Style Blind Domain Generalized Semantic Segmentation via Covariance Alignment and Semantic Consistence Contrastive Learning
http://arxiv.org/abs/2403.06122
Woo-Jin Ahn, Geun-Yeong Yang, Hyun-Duck Choi, Myo-Taeg Lim
2403.06122
Deep learning models for semantic segmentation often experience performance degradation when deployed to unseen target domains unidentified during the training phase. This is mainly due to variations in image texture (i.e., style) from different data sources. To tackle this challenge, existing domain generalized semantic segmentation (DGSS) methods attempt to remove style variations from the feature. However, these approaches struggle with the entanglement of style and content, which may lead to the unintentional removal of crucial content information, causing performance degradation. This study addresses this limitation by proposing BlindNet, a novel DGSS approach that blinds the style without external modules or datasets. The main idea behind our proposed approach is to alleviate the effect of style in the encoder whilst facilitating robust segmentation in the decoder. To achieve this, BlindNet comprises two key components: covariance alignment and semantic consistency contrastive learning. Specifically, the covariance alignment trains the encoder to uniformly recognize various styles and preserve the content information of the feature, rather than removing the style-sensitive factor. Meanwhile, semantic consistency contrastive learning enables the decoder to construct a discriminative class embedding space and disentangle features that are vulnerable to misclassification. Through extensive experiments, our approach outperforms existing DGSS methods, exhibiting robustness and superior performance for semantic segmentation on unseen target domains.
2,522
2,523
Attack To Defend: Exploiting Adversarial Attacks for Detecting Poisoned Models
Samar Fares, Karthik Nandakumar
null
Poisoning (trojan/backdoor) attacks enable an adversary to train and deploy a corrupted machine learning (ML) model, which typically works well and achieves good accuracy on clean input samples but behaves maliciously on poisoned samples containing specific trigger patterns. Using such poisoned ML models as the foundation to build real-world systems can compromise application safety. Hence, there is a critical need for algorithms that detect whether a given target model has been poisoned. This work proposes a novel approach for detecting poisoned models, called Attack To Defend (A2D), which is based on the observation that poisoned models are more sensitive to adversarial perturbations compared to benign models. We propose a metric, called sensitivity to adversarial perturbations (SAP), to measure the sensitivity of an ML model to adversarial attacks at a specific perturbation bound. We then generate strong adversarial attacks against an unrelated reference model and estimate the SAP value of the target model by transferring the generated attacks. The target model is deemed to be a trojan if its SAP value exceeds a decision threshold. The A2D framework requires only black-box access to the target model and a small clean set, while being computationally efficient. The A2D approach has been evaluated on four standard image datasets, and its effectiveness under various types of poisoning attacks has been demonstrated.
2,523
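A hedged sketch of the SAP idea described in the A2D entry above: craft adversarial examples on an unrelated reference model (FGSM here, as one simple choice), transfer them to the target model, and score the target by how much its predictions move; a poisoned model is expected to score higher. Models, data, and the decision threshold below are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(reference, x, y, eps):
    """One-step attack crafted on the reference model only."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(reference(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

@torch.no_grad()
def sap_score(target, x_clean, x_adv):
    """Mean drop of the target's top-class probability under transferred attacks."""
    p_clean = F.softmax(target(x_clean), dim=1)
    p_adv = F.softmax(target(x_adv), dim=1)
    idx = p_clean.argmax(dim=1, keepdim=True)
    return (p_clean.gather(1, idx) - p_adv.gather(1, idx)).mean().item()

# Toy usage: random linear models and data stand in for real networks and a clean set.
reference = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
target = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = fgsm(reference, x, y, eps=8 / 255)
is_suspicious = sap_score(target, x, x_adv) > 0.5   # threshold is a placeholder
```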
2,524
X-3D: Explicit 3D Structure Modeling for Point Cloud Recognition
Shuofeng Sun, Yongming Rao, Jiwen Lu, Haibin Yan
null
Numerous prior studies predominantly emphasize constructing relation vectors for individual neighborhood points and generating dynamic kernels for each vector, and embedding these into high-dimensional spaces to capture implicit local structures. However, we contend that such an implicit high-dimensional structure modeling approach inadequately represents the local geometric structure of point clouds due to the absence of explicit structural information. Hence, we introduce X-3D, an explicit 3D structure modeling approach. X-3D functions by capturing the explicit local structural information within the input 3D space and employing it to produce dynamic kernels with shared weights for all neighborhood points within the current local region. This modeling approach introduces an effective geometric prior and significantly diminishes the disparity between the local structure of the embedding space and the original input point cloud, thereby improving the extraction of local features. Experiments show that our method can be used on a variety of methods and achieves state-of-the-art performance on segmentation, classification, and detection tasks with lower extra computational cost, such as 90.7% on ScanObjectNN for classification, 79.2% on S3DIS 6-fold and 74.3% on S3DIS Area 5 for segmentation, 76.3% on ScanNetV2 for segmentation, and 64.5% mAP_25 and 46.9% mAP_50 on SUN RGB-D and 69.0% mAP_25 and 51.1% mAP_50 on ScanNetV2. Our code is available at https://github.com/sunshuofeng/X-3D.
2,524
2,525
SpiderMatch: 3D Shape Matching with Global Optimality and Geometric Consistency
Paul Roetzer, Florian Bernard
null
Finding shortest paths on product spaces is a popular approach to tackle numerous variants of matching problems, including the dynamic time warping method for matching signals, the matching of curves, or the matching of a curve to a 3D shape. While these approaches admit the computation of globally optimal solutions in polynomial time, their natural generalisation to 3D shape matching is widely known to be intractable. In this work, we address this issue by proposing a novel path-based formalism for 3D shape matching. More specifically, we consider an alternative shape discretisation in which one of the 3D shapes (the source shape) is represented as a SpiderCurve, i.e., a long self-intersecting curve that traces the 3D shape surface. We then tackle the 3D shape matching problem as finding a shortest path in the product graph of the SpiderCurve and the target 3D shape. Our approach introduces a set of novel constraints that ensure a globally geometrically consistent matching. Overall, our formalism leads to an integer linear programming problem for which we experimentally show that it can efficiently be solved to global optimality. We demonstrate that our approach is competitive with recent state-of-the-art shape matching methods, while in addition guaranteeing geometric consistency.
2,525
2,526
Troika: Multi-Path Cross-Modal Traction for Compositional Zero-Shot Learning
http://arxiv.org/abs/2303.15230
Siteng Huang, Biao Gong, Yutong Feng, Min Zhang, Yiliang Lv, Donglin Wang
2303.15230
Recent compositional zero-shot learning (CZSL) methods adapt pre-trained vision-language models (VLMs) by constructing trainable prompts only for composed state-object pairs. Relying on learning the joint representation of seen compositions, these methods ignore the explicit modeling of the state and object, thus limiting the exploitation of pre-trained knowledge and generalization to unseen compositions. With a particular focus on the universality of the solution, in this work we propose a novel paradigm for CZSL models that establishes three identification branches (i.e., Multi-Path) to jointly model the state, object, and composition. The presented Troika is an outstanding implementation that aligns the branch-specific prompt representations with decomposed visual features. To calibrate the bias between semantically similar multi-modal representations, we further devise a Cross-Modal Traction module into Troika that shifts the prompt representation towards the current visual content. We conduct extensive experiments on three popular benchmarks, where our method significantly outperforms existing methods in both closed-world and open-world settings. The code will be available at https://github.com/bighuang624/Troika.
2,526
2,527
One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls
http://arxiv.org/abs/2311.15744
Minghui Hu, Jianbin Zheng, Chuanxia Zheng, Chaoyue Wang, Dacheng Tao, Tat-Jen Cham
2311.15744
It is well known that many open-released foundational diffusion models have difficulty in generating images that substantially depart from average brightness, despite such images being present in the training data. This is due to an inconsistency: while denoising starts from pure Gaussian noise during inference, the training noise schedule retains residual data even in the final timestep distribution, due to difficulties in numerical conditioning in mainstream formulation, leading to unintended bias during inference. To mitigate this issue, certain eps-prediction models are combined with an ad-hoc offset-noise methodology. In parallel, some contemporary models have adopted zero-terminal SNR noise schedules together with v-prediction, which necessitate major alterations to pre-trained models. However, such changes risk destabilizing a large multitude of community-driven applications anchored on these pre-trained models. In light of this, our investigation revisits the fundamental causes, leading to our proposal of an innovative and principled remedy called One More Step (OMS). By integrating a compact network and incorporating an additional simple yet effective step during inference, OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters. Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
2,527
2,528
Enhancing Multimodal Cooperation via Sample-level Modality Valuation
Yake Wei, Ruoxuan Feng, Zihe Wang, Di Hu
null
One primary topic of multimodal learning is to jointly incorporate heterogeneous information from different modalities. However, most models often suffer from unsatisfactory multimodal cooperation, which cannot jointly utilize all modalities well. Some methods are proposed to identify and enhance the worse learnt modality, but they often struggle to provide the fine-grained observation of multimodal cooperation at sample-level with theoretical support. Hence, it is essential to reasonably observe and improve the fine-grained cooperation between modalities, especially when facing realistic scenarios where the modality discrepancy could vary across different samples. To this end, we introduce a sample-level modality valuation metric to evaluate the contribution of each modality for each sample. Via modality valuation, we observe that modality discrepancy indeed could be different at sample-level, beyond the global contribution discrepancy at dataset-level. We further analyze this issue and improve cooperation between modalities at sample-level by enhancing the discriminative ability of low-contributing modalities in a targeted manner. Overall, our methods reasonably observe the fine-grained uni-modal contribution and achieve considerable improvement. The source code and dataset are available at https://github.com/GeWu-Lab/Valuate-and-Enhance-Multimodal-Cooperation.
2,528
2,529
Evidential Active Recognition: Intelligent and Prudent Open-World Embodied Perception
http://arxiv.org/abs/2311.13793
Lei Fan, Mingfu Liang, Yunxuan Li, Gang Hua, Ying Wu
2311.13793
Active recognition enables robots to intelligently explore novel observations, thereby acquiring more information while circumventing undesired viewing conditions. Recent approaches favor learning policies from simulated or collected data, wherein appropriate actions are more frequently selected when the recognition is accurate. However, most recognition modules are developed under the closed-world assumption, which makes them ill-equipped to handle unexpected inputs, such as the absence of the target object in the current observation. To address this issue, we propose treating active recognition as a sequential evidence-gathering process, providing step-by-step uncertainty quantification and reliable prediction under the evidence combination theory. Additionally, the reward function developed in this paper effectively characterizes the merit of actions when operating in open-world environments. To evaluate the performance, we collect a dataset from an indoor simulator, encompassing various recognition challenges such as distance, occlusion levels, and visibility. Through a series of experiments on recognition and robustness analysis, we demonstrate the necessity of introducing uncertainties to active recognition and the superior performance of the proposed method.
2,529
2,530
SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation
http://arxiv.org/abs/2403.16605
Aysim Toker, Marvin Eisenberger, Daniel Cremers, Laura Leal-Taixé
2403.16605
In recent years, semantic segmentation has become a pivotal tool in processing and interpreting satellite imagery. Yet, a prevalent limitation of supervised learning techniques remains the need for extensive manual annotations by experts. In this work, we explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks. The main idea is to learn the joint data manifold of images and labels, leveraging recent advancements in denoising diffusion probabilistic models. To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation. We find that the obtained pairs not only display high quality in fine-scale features but also ensure a wide sampling diversity. Both aspects are crucial for earth observation data, where semantic classes can vary severely in scale and occurrence frequency. We employ the novel data instances for downstream segmentation as a form of data augmentation. In our experiments, we provide comparisons to prior works based on discriminative diffusion models or GANs. We demonstrate that integrating generated samples yields significant quantitative improvements for satellite semantic segmentation -- both compared to baselines and when training only on the original data.
2,530
2,531
XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold
Guangyu Wang, Jinzhi Zhang, Fan Wang, Ruqi Huang, Lu Fang
null
We propose XScale-NVS for high-fidelity cross-scale novel view synthesis of real-world large-scale scenes. Existing representations based on explicit surfaces suffer from discretization resolution or UV distortion, while implicit volumetric representations lack scalability for large scenes due to the dispersed weight distribution and surface ambiguity. In light of the above challenges, we introduce hash featurized manifold, a novel hash-based featurization coupled with a deferred neural rendering framework. This approach fully unlocks the expressivity of the representation by explicitly concentrating the hash entries on the 2D manifold, thus effectively representing highly detailed contents independent of the discretization resolution. We also introduce a novel dataset, namely GigaNVS, to benchmark cross-scale, high-resolution novel view synthesis of real-world large-scale scenes. Our method significantly outperforms competing baselines on various real-world scenes, yielding an average LPIPS that is approximately 40% lower than prior state-of-the-art on the challenging GigaNVS benchmark. Please see our project page at: xscalenvs.github.io.
2,531
2,532
Ink Dot-Oriented Differentiable Optimization for Neural Image Halftoning
Hao Jiang, Bingfeng Zhou, Yadong Mu
null
Halftoning is a time-honored printing technique that simulates continuous tones using ink dots (halftone dots). The resurgence of deep learning has catalyzed the emergence of innovative technologies in the printing industry, fostering the advancement of data-driven halftoning methods. Nevertheless, current deep learning-based approaches produce halftones through image-to-image black box transformations, lacking direct control over the movement of individual halftone dots. In this paper, we propose an innovative halftoning method termed "neural dot-controllable halftoning". This method allows dot-level image dithering by providing direct control over the motion of each ink dot. We conceptualize halftoning as the process of sprinkling dots on a canvas. Initially, a specific quantity of dots are randomly dispersed on the canvas and subsequently adjusted based on the surrounding grayscale and gradient. To establish differentiable transformations between discrete ink dot positions and halftone matrices, we devise a lightweight dot encoding network to spread dense gradients to sparse dots. Dot control offers several advantages to our approach, including the capability to regulate the quantity of halftone dots and enhance specific areas with artifacts in the generated halftones by adjusting the placement of the dots. Our proposed method exhibits superior performance over previous approaches in extensive quantitative and qualitative experiments.
2,532
2,533
The Unreasonable Effectiveness of Pre-Trained Features for Camera Pose Refinement
http://arxiv.org/abs/2404.10438
Gabriele Trivigno, Carlo Masone, Barbara Caputo, Torsten Sattler
2404.10438
Pose refinement is an interesting and practically relevant research direction. Pose refinement can be used to (1) obtain a more accurate pose estimate from an initial prior (e.g., from retrieval), (2) as pre-processing, i.e., to provide a better starting point to a more expensive pose estimator, (3) as post-processing of a more accurate localizer. Existing approaches focus on learning features / scene representations for the pose refinement task. This involves training an implicit scene representation or learning features while optimizing a camera pose-based loss. A natural question is whether training specific features / representations is truly necessary or whether similar results can be already achieved with more generic features. In this work, we present a simple approach that combines pre-trained features with a particle filter and a renderable representation of the scene. Despite its simplicity, it achieves state-of-the-art results, demonstrating that one can easily build a pose refiner without the need for specific training. The code will be released upon acceptance.
2,533
2,534
Scalable 3D Registration via Truncated Entry-wise Absolute Residuals
http://arxiv.org/abs/2404.00915
Tianyu Huang, Liangzu Peng, Rene Vidal, Yun-Hui Liu
2404.00915
Given an input set of 3D point pairs, the goal of outlier-robust 3D registration is to compute some rotation and translation that align as many point pairs as possible. This is an important problem in computer vision, for which many highly accurate approaches have been recently proposed. Despite their impressive performance, these approaches lack scalability, often overflowing the 16GB of memory of a standard laptop to handle roughly 30,000 point pairs. In this paper, we propose a 3D registration approach that can process more than ten million (10^7) point pairs with over 99% random outliers. Moreover, our method is efficient, entails low memory costs, and maintains high accuracy at the same time. We call our method TEAR, as it involves minimizing an outlier-robust loss that computes Truncated Entry-wise Absolute Residuals. To minimize this loss, we decompose the original 6-dimensional problem into two subproblems of dimensions 3 and 2, respectively, solved in succession to global optimality via a customized branch-and-bound method. While branch-and-bound is often slow and unscalable, this does not apply to TEAR, as we propose novel bounding functions that are tight and computationally efficient. Experiments on various datasets are conducted to validate the scalability and efficiency of our method.
2,534
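The loss named in the TEAR entry above (truncated entry-wise absolute residuals) is easy to write down; the sketch below shows the objective only, under assumed correspondences and an assumed truncation threshold, and does not reproduce the paper's branch-and-bound solver.

```python
import numpy as np

def tear_loss(R, t, src, dst, c=0.1):
    """Truncated entry-wise absolute residuals for correspondences (src_i, dst_i)."""
    residuals = src @ R.T + t - dst              # (N, 3)
    return np.minimum(np.abs(residuals), c).sum()

# Toy usage: 1000 pairs with 90% random outliers.
rng = np.random.default_rng(2)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:                    # make it a proper rotation
    R_true[:, 0] *= -1
t_true = np.array([0.3, -0.2, 0.5])
src = rng.uniform(-1, 1, size=(1000, 3))
dst = src @ R_true.T + t_true + 0.01 * rng.normal(size=src.shape)
outliers = rng.random(1000) < 0.9
dst[outliers] = rng.uniform(-2, 2, size=(int(outliers.sum()), 3))
print(tear_loss(R_true, t_true, src, dst))          # low: only outliers saturate at c
print(tear_loss(np.eye(3), np.zeros(3), src, dst))  # higher for a wrong pose
```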
2,535
ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models
Meng-Li Shih, Wei-Chiu Ma, Lorenzo Boyice, Aleksander Holynski, Forrester Cole, Brian Curless, Janne Kontkanen
null
We propose ExtraNeRF, a novel method for extrapolating the range of views handled by a Neural Radiance Field (NeRF). Our main idea is to leverage NeRFs to model scene-specific, fine-grained details while capitalizing on diffusion models to extrapolate beyond our observed data. A key ingredient is to track visibility to determine what portions of the scene have not been observed, and focus on reconstructing those regions consistently with diffusion models. Our primary contributions include a visibility-aware diffusion-based inpainting module that is fine-tuned on the input imagery, yielding an initial NeRF with moderate quality (often blurry) inpainted regions, followed by a second diffusion model trained on the input imagery to consistently enhance, notably sharpen, the inpainted imagery from the first pass. We demonstrate high-quality results, extrapolating beyond a small number of (typically six or fewer) input views, effectively outpainting the NeRF as well as inpainting newly disoccluded regions inside the original viewing volume. We compare with related work both quantitatively and qualitatively and show significant gains over prior art.
2,535
2,536
Equivariant Plug-and-Play Image Reconstruction
http://arxiv.org/abs/2312.01831
Matthieu Terris, Thomas Moreau, Nelly Pustelnik, Julian Tachella
2312.01831
Plug-and-play algorithms constitute a popular framework for solving inverse imaging problems that rely on the implicit definition of an image prior via a denoiser. These algorithms can leverage powerful pre-trained denoisers to solve a wide range of imaging tasks, circumventing the necessity to train models on a per-task basis. Unfortunately, plug-and-play methods often show unstable behaviors, hampering their promise of versatility and leading to suboptimal quality of reconstructed images. In this work, we show that enforcing equivariance to certain groups of transformations (rotations, reflections, and/or translations) on the denoiser strongly improves the stability of the algorithm, as well as its reconstruction quality. We provide a theoretical analysis that illustrates the role of equivariance on better performance and stability. We present a simple algorithm that enforces equivariance on any existing denoiser by simply applying a random transformation to the input of the denoiser and the inverse transformation to the output at each iteration of the algorithm. Experiments on multiple imaging modalities and denoising networks show that the equivariant plug-and-play algorithm improves both the reconstruction performance and the stability compared to their non-equivariant counterparts.
2,536
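The equivariant trick described in the entry above is simple to sketch: at each plug-and-play iteration, draw a random group transformation (90-degree rotations and flips here), apply it to the denoiser input and its inverse to the output. The box-filter denoiser and the denoising data term below are stand-ins for a pre-trained denoiser and a real forward operator.

```python
import numpy as np

def equivariant_denoise(denoiser, x, rng):
    """Wrap any denoiser: random 90-degree rotation plus optional flip, then invert."""
    k = int(rng.integers(4))
    flip = bool(rng.integers(2))
    z = np.rot90(x, k)
    if flip:
        z = z[:, ::-1]
    z = denoiser(np.ascontiguousarray(z))
    if flip:
        z = z[:, ::-1]
    return np.ascontiguousarray(np.rot90(z, -k))

def pnp_iteration(x, y, denoiser, rng, step=0.5):
    x = x - step * (x - y)                        # gradient step on 0.5 * ||x - y||^2
    return equivariant_denoise(denoiser, x, rng)  # denoiser acts as the prior

# Toy usage: denoise a noisy square with a box-filter "denoiser".
rng = np.random.default_rng(3)
box_denoiser = lambda img: (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
y = clean + 0.2 * rng.normal(size=clean.shape)
x = y.copy()
for _ in range(20):
    x = pnp_iteration(x, y, box_denoiser, rng)
```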
2,537
CLIP as RNN: Segment Countless Visual Concepts without Training Endeavor
http://arxiv.org/abs/2312.07661
Shuyang Sun, Runjia Li, Philip Torr, Xiuye Gu, Siyang Li
2,312.07661
Existing open-vocabulary image segmentation methods require a fine-tuning step on mask labels and/or image-text datasets. Mask labels are labor-intensive which limits the number of categories in segmentation datasets. Consequently the vocabulary capacity of pre-trained VLMs is severely reduced after fine-tuning. However without fine-tuning VLMs trained under weak image-text supervision tend to make suboptimal mask predictions. To alleviate these issues we introduce a novel recurrent framework that progressively filters out irrelevant texts and enhances mask quality without training efforts. The recurrent unit is a two-stage segmenter built upon a frozen VLM. Thus our model retains the VLM's broad vocabulary space and equips it with segmentation ability. Experiments show that our method outperforms not only the training-free counterparts but also those fine-tuned with millions of data samples and sets the new state-of-the-art records for both zero-shot semantic and referring segmentation. Concretely we improve the current record by 28.8 16.0 and 6.9 mIoU on Pascal VOC COCO Object and Pascal Context.
[]
[]
[]
[]
2,537
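As a rough illustration of the recurrent filtering loop sketched in the abstract above, the following snippet repeatedly scores a candidate vocabulary and drops low-quality entries until a fixed point is reached. The `score_fn` stands in for the frozen two-stage VLM segmenter; the dummy scorer and the threshold are assumptions made only so the example runs.

```python
import numpy as np

def recurrent_filter(texts, score_fn, threshold=0.5, max_iters=5):
    """Progressively drop texts whose masks score poorly.

    Each pass re-runs the (frozen) segmenter on the surviving vocabulary,
    so the remaining mask qualities can change before the next filter.
    """
    active = list(texts)
    for _ in range(max_iters):
        scores = score_fn(active)
        keep = [t for t, s in zip(active, scores) if s >= threshold]
        if len(keep) == len(active):   # fixed point reached
            break
        active = keep
    return active

# Dummy scorer: pretends texts mentioning "cat" or "grass" segment well.
rng = np.random.default_rng(0)
def dummy_score(texts):
    return [0.9 if ("cat" in t or "grass" in t) else rng.uniform(0, 0.4)
            for t in texts]

print(recurrent_filter(["a cat", "green grass", "a spaceship", "a piano"],
                       dummy_score))
```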
2,538
LP++: A Surprisingly Strong Linear Probe for Few-Shot CLIP
Yunshi Huang, Fereshteh Shakeri, Jose Dolz, Malik Boudiaf, Houda Bahig, Ismail Ben Ayed
null
In a recent strongly emergent literature on few-shot CLIP adaptation Linear Probe (LP) has been often reported as a weak baseline. This has motivated intensive research building convoluted prompt learning or feature adaptation strategies. In this work we propose and examine from convex-optimization perspectives a generalization of the standard LP baseline in which the linear classifier weights are learnable functions of the text embedding with class-wise multipliers blending image and text knowledge. As our objective function depends on two types of variables i.e. the class visual prototypes and the learnable blending parameters we propose a computationally efficient block coordinate Majorize-Minimize (MM) descent algorithm. In our full-batch MM optimizer which we coin LP++ step sizes are implicit unlike standard gradient descent practices where learning rates are intensively searched over validation sets. By examining the mathematical properties of our loss (e.g. Lipschitz gradient continuity) we build majorizing functions yielding data-driven learning rates and derive approximations of the loss's minima which provide data-informed initialization of the variables. Our image-language objective function along with these non-trivial optimization insights and ingredients yields surprisingly highly competitive few-shot CLIP performances. Furthermore LP++ operates in black-box relaxes intensive validation searches for the optimization hyper-parameters and runs orders-of-magnitudes faster than state-of-the-art few-shot CLIP adaptation methods. Our code is available at: https://github.com/FereshteShakeri/FewShot-CLIP-Strong-Baseline.git.
[]
[]
[]
[]
2,538
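One plausible reading of the LP++ classifier described above is that each class weight is a visual prototype plus a class-wise multiple of the frozen text embedding. The NumPy sketch below computes logits under that parameterization; the blending form, initial values, and dimensions are assumptions for illustration, and the paper's block coordinate Majorize-Minimize optimizer is not reproduced here.

```python
import numpy as np

def blended_linear_probe_logits(f, V, T, alpha):
    """Blended linear-probe logits (illustrative reading of LP++).

    f     : (d,)   image feature
    V     : (K, d) class visual prototypes (learnable)
    T     : (K, d) frozen text embeddings of the class names
    alpha : (K,)   class-wise blending multipliers (learnable)

    The classifier weight of class k is v_k + alpha_k * t_k, i.e. a learnable
    function of the text embedding mixing image and text knowledge.
    """
    W = V + alpha[:, None] * T          # (K, d) blended class weights
    return W @ f                        # (K,) class scores

rng = np.random.default_rng(0)
d, K = 512, 10
f = rng.normal(size=d)
V = rng.normal(size=(K, d)) * 0.01      # e.g. few-shot class means after init
T = rng.normal(size=(K, d))
alpha = np.full(K, 0.5)
print(blended_linear_probe_logits(f, V, T, alpha).shape)   # (10,)
```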
2,539
Active Generalized Category Discovery
http://arxiv.org/abs/2403.04272
Shijie Ma, Fei Zhu, Zhun Zhong, Xu-Yao Zhang, Cheng-Lin Liu
2,403.04272
Generalized Category Discovery (GCD) is a pragmatic and challenging open-world task which endeavors to cluster unlabeled samples from both novel and old classes leveraging some labeled data of old classes. Given that knowledge learned from old classes is not fully transferable to new classes and that novel categories are fully unlabeled GCD inherently faces intractable problems including imbalanced classification performance and inconsistent confidence between old and new classes especially in the low-labeling regime. Hence some annotations of new classes are deemed necessary. However labeling new classes is extremely costly. To address this issue we take the spirit of active learning and propose a new setting called Active Generalized Category Discovery (AGCD). The goal is to improve the performance of GCD by actively selecting a limited amount of valuable samples for labeling from the oracle. To solve this problem we devise an adaptive sampling strategy which jointly considers novelty informativeness and diversity to adaptively select novel samples with proper uncertainty. However owing to the varied orderings of label indices caused by the clustering of novel classes the queried labels are not directly applicable to subsequent training. To overcome this issue we further propose a stable label mapping algorithm that transforms ground truth labels to the label space of the classifier thereby ensuring consistent training across different active selection stages. Our method achieves state-of-the-art performance on both generic and fine-grained datasets. Our code is available at https://github.com/mashijie1028/ActiveGCD
[]
[]
[]
[]
2,539
2,540
HIVE: Harnessing Human Feedback for Instructional Visual Editing
http://arxiv.org/abs/2303.09618
Shu Zhang, Xinyi Yang, Yihao Feng, Can Qin, Chia-Chih Chen, Ning Yu, Zeyuan Chen, Huan Wang, Silvio Savarese, Stefano Ermon, Caiming Xiong, Ran Xu
2,303.09618
Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences. We hypothesize that state-of-the-art instructional image editing models where outputs are generated based on an input image and an editing instruction could similarly benefit from human feedback as their outputs may not adhere to the correct instructions and preferences of users. In this paper we present a novel framework to harness human feedback for instructional visual editing (HIVE). Specifically we collect human feedback on the edited images and learn a reward function to capture the underlying user preferences. We then introduce scalable diffusion model fine-tuning methods that can incorporate human preferences based on the estimated reward. Besides to mitigate the bias brought by the limitation of data we contribute a new 1.1M training dataset a 3.6K reward dataset for rewards learning and a 1K evaluation dataset to boost the performance of instructional image editing. We conduct extensive empirical experiments quantitatively and qualitatively showing that HIVE is favored over previous state-of-the-art instructional image editing approaches by a large margin.
[]
[]
[]
[]
2,540
2,541
StrokeFaceNeRF: Stroke-based Facial Appearance Editing in Neural Radiance Field
Xiao-Juan Li, Dingxi Zhang, Shu-Yu Chen, Feng-Lin Liu
null
Current 3D-aware facial NeRF generation approaches control the facial appearance by text lighting conditions or reference images limiting precise manipulation of local facial regions and interactivity. Color strokes a user-friendly and effective tool for depicting appearance are difficult to use for editing 3D faces because of the lack of texture coarse geometry representation and detailed editing operations. To solve the above problems we introduce StrokeFaceNeRF a novel stroke-based method for editing facial NeRF appearance. In order to infer the missing texture and 3D geometry information 2D edited stroke maps are firstly encoded into the EG3D's latent space followed by a transformer-based editing module to achieve effective appearance changes while preserving the original geometry in editing regions. Notably we design a novel geometry loss function to ensure surface density remains consistent during training. To further enhance the local manipulation accuracy we propose a stereo fusion approach which lifts the 2D mask (inferred from strokes or drawn by users) into 3D mask volume allowing explicit blending of the original and edited faces. Extensive experiments validate that the proposed method outperforms existing 2D and 3D methods in both editing realism and geometry retention.

[]
[]
[]
[]
2,541
2,542
FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization
http://arxiv.org/abs/2403.06375
Shuai Tan, Bin Ji, Ye Pan
2,403.06375
Generating emotional talking faces is a practical yet challenging endeavor. To create a lifelike avatar we draw upon two critical insights from a human perspective: 1) The connection between audio and the non-deterministic facial dynamics encompassing expressions blinks poses should exhibit synchronous and one-to-many mapping. 2) Vibrant expressions are often accompanied by emotion-aware high-definition (HD) textures and finely detailed teeth. However both aspects are frequently overlooked by existing methods. To this end this paper proposes using normalizing Flow and Vector-Quantization modeling to produce emotional talking faces that satisfy both insights concurrently (FlowVQTalker). Specifically we develop a flow-based coefficient generator that encodes the dynamics of facial emotion into a multi-emotion-class latent space represented as a mixture distribution. The generation process commences with random sampling from the modeled distribution guided by the accompanying audio enabling both lip-synchronization and the uncertain nonverbal facial cues generation. Furthermore our designed vector-quantization image generator treats the creation of expressive facial images as a code query task utilizing a learned codebook to provide rich high-quality textures that enhance the emotional perception of the results. Extensive experiments are conducted to showcase the effectiveness of our approach.
[]
[]
[]
[]
2,542
2,543
Learning from Observer Gaze: Zero-Shot Attention Prediction Oriented by Human-Object Interaction Recognition
http://arxiv.org/abs/2405.09931
Yuchen Zhou, Linkai Liu, Chao Gou
2,405.09931
Most existing attention prediction research focuses on salient instances like humans and objects. However the more complex interaction-oriented attention arising from the comprehension of interactions between instances by human observers remains largely unexplored. This is equally crucial for advancing human-machine interaction and human-centered artificial intelligence. To bridge this gap we first collect a novel gaze fixation dataset named IG comprising 530000 fixation points across 740 diverse interaction categories capturing visual attention during human observers' cognitive processes of interactions. Subsequently we introduce the zero-shot interaction-oriented attention prediction task (ZeroIA) which challenges models to predict visual cues for interactions not encountered during training. Thirdly we present the Interactive Attention model (IA) designed to emulate human observers' cognitive processes to tackle the ZeroIA problem. Extensive experiments demonstrate that the proposed IA outperforms other state-of-the-art approaches in both ZeroIA and fully supervised settings. Lastly we endeavor to apply interaction-oriented attention to the interaction recognition task itself. Further experimental results demonstrate the promising potential to enhance the performance and interpretability of existing state-of-the-art HOI models by incorporating real human attention data from IG and attention labels generated by IA.
[]
[]
[]
[]
2,543
2,544
ProxyCap: Real-time Monocular Full-body Capture in World Space via Human-Centric Proxy-to-Motion Learning
http://arxiv.org/abs/2307.01200
Yuxiang Zhang, Hongwen Zhang, Liangxiao Hu, Jiajun Zhang, Hongwei Yi, Shengping Zhang, Yebin Liu
2,307.01200
Learning-based approaches to monocular motion capture have recently shown promising results by learning to regress in a data-driven manner. However due to the challenges in data collection and network designs it remains challenging to achieve real-time full-body capture while being accurate in world space. In this work we introduce ProxyCap a human-centric proxy-to-motion learning scheme to learn world-space motions from a proxy dataset of 2D skeleton sequences and 3D rotational motions. Such proxy data enables us to build a learning-based network with accurate world-space supervision while also mitigating the generalization issues. For more accurate and physically plausible predictions in world space our network is designed to learn human motions from a human-centric perspective which enables the understanding of the same motion captured with different camera trajectories. Moreover a contact-aware neural motion descent module is proposed to improve foot-ground contact and reduce motion misalignment with the proxy observations. With the proposed learning-based solution we demonstrate the first real-time monocular full-body capture system with plausible foot-ground contact in world space even using hand-held cameras.
[]
[]
[]
[]
2,544
2,545
OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
Moreno D'Incà, Elia Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, Nicu Sebe
null
Text-to-image generative models are becoming increasingly popular and accessible to the general public. As these models see large-scale deployments it is necessary to deeply investigate their safety and fairness to not disseminate and perpetuate any kind of biases. However existing works focus on detecting closed sets of biases defined a priori limiting the studies to well-known concepts. In this paper we tackle the challenge of open-set bias detection in text-to-image generative models presenting OpenBias a new pipeline that identifies and quantifies the severity of biases agnostically without access to any precompiled set. OpenBias has three stages. In the first phase we leverage a Large Language Model (LLM) to propose biases given a set of captions. Secondly the target generative model produces images using the same set of captions. Lastly a Vision Question Answering model recognizes the presence and extent of the previously proposed biases. We study the behavior of Stable Diffusion 1.5 2 and XL emphasizing new biases never investigated before. Via quantitative experiments we demonstrate that OpenBias agrees with current closed-set bias detection methods and human judgement.
[]
[]
[]
[]
2,545
2,546
On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation
http://arxiv.org/abs/2404.08540
Agneet Chatterjee, Tejas Gokhale, Chitta Baral, Yezhou Yang
2,404.08540
Recent advances in monocular depth estimation have been made by incorporating natural language as additional guidance. Although yielding impressive results the impact of the language prior particularly in terms of generalization and robustness remains unexplored. In this paper we address this gap by quantifying the impact of this prior and introduce methods to benchmark its effectiveness across various settings. We generate "low-level" sentences that convey object-centric three-dimensional spatial relationships incorporate them as additional language priors and evaluate their downstream impact on depth estimation. Our key finding is that current language-guided depth estimators perform optimally only with scene-level descriptions and counter-intuitively fare worse with low-level descriptions. Despite leveraging additional data these methods are not robust to directed adversarial attacks and decline in performance with an increase in distribution shift. Finally to provide a foundation for future research we identify points of failure and offer insights to better understand these shortcomings. With an increasing number of methods using language for depth estimation our findings highlight the opportunities and pitfalls that require careful consideration for effective deployment in real-world settings.
[]
[]
[]
[]
2,546
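The "low-level" sentences described in the abstract above convey object-centric 3D spatial relations, which can be produced from detections and depths with simple templating. The template wording, box format, and toy scene below are illustrative assumptions, not the paper's exact generation procedure.

```python
def low_level_sentence(name_a, box_a, depth_a, name_b, box_b, depth_b):
    """Build an object-centric spatial-relation sentence from 2D boxes
    (x_min, y_min, x_max, y_max) and mean depths in metres."""
    cx_a = (box_a[0] + box_a[2]) / 2
    cx_b = (box_b[0] + box_b[2]) / 2
    horiz = "to the left of" if cx_a < cx_b else "to the right of"
    dist = "closer than" if depth_a < depth_b else "farther than"
    return f"The {name_a} is {horiz} and {dist} the {name_b}."

# Toy scene with hypothetical detections.
print(low_level_sentence("chair", (10, 40, 80, 200), 1.8,
                         "table", (150, 60, 300, 220), 2.6))
# -> "The chair is to the left of and closer than the table."
```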
2,547
UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs
http://arxiv.org/abs/2311.09257
Yanwu Xu, Yang Zhao, Zhisheng Xiao, Tingbo Hou
2,311.09257
Text-to-image diffusion models have demonstrated remarkable capabilities in transforming text prompts into coherent images yet the computational cost of the multi-step inference remains a persistent challenge. To address this issue we present UFOGen a novel generative model designed for ultra-fast one-step text-to-image generation. In contrast to conventional approaches that focus on improving samplers or employing distillation techniques for diffusion models UFOGen adopts a hybrid methodology integrating diffusion models with a GAN objective. Leveraging a newly introduced diffusion-GAN objective and initialization with pre-trained diffusion models UFOGen excels in efficiently generating high-quality images conditioned on textual descriptions in a single step. Beyond traditional text-to-image generation UFOGen showcases versatility in applications. Notably UFOGen stands among the pioneering models enabling one-step text-to-image generation and diverse downstream tasks presenting a significant advancement in the landscape of efficient generative models.
[]
[]
[]
[]
2,547
2,548
3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features
http://arxiv.org/abs/2311.04391
Chenfeng Xu, Huan Ling, Sanja Fidler, Or Litany
2,311.04391
3DiffTection introduces a novel method for 3D object detection from single images utilizing a 3D-aware diffusion model for feature extraction. Addressing the resource-intensive nature of annotating large-scale 3D image data our approach leverages pretrained diffusion models traditionally used for 2D tasks and adapts them for 3D detection through geometric and semantic tuning. Geometrically we enhance the model to perform view synthesis from single images incorporating an epipolar warp operator. This process utilizes easily accessible posed image data eliminating the need for manual annotation. Semantically the model is further refined on target detection data. Both stages utilize ControlNet ensuring the preservation of original feature capabilities. Through our methodology we obtain 3D-aware features that excel in identifying cross-view point correspondences. In 3D detection 3DiffTection substantially surpasses previous benchmarks e.g. Cube-RCNN by 9.43% in AP3D on the Omni3D-ARkitscene dataset. Furthermore 3DiffTection demonstrates robust label efficiency and generalizes well to cross-domain data nearly matching fully-supervised models in zero-shot scenarios.
[]
[]
[]
[]
2,548
2,549
Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D
http://arxiv.org/abs/2403.18922
Mukund Varma T, Peihao Wang, Zhiwen Fan, Zhangyang Wang, Hao Su, Ravi Ramamoorthi
2,403.18922
In recent years there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation style transfer or scene editing enabled by large-scale 2D image datasets. At the same time there has been renewed interest in 3D scene representations such as neural radiance fields from multi-view images. However the availability of 3D or multiview data is still substantially limited compared to 2D image datasets making extending 2D vision models to 3D data highly desirable but also very challenging. Indeed extending a single 2D vision operator like scene editing to 3D typically requires a highly creative method specialized to that task and often requires per-scene optimization. In this paper we ask the question of whether any 2D vision model can be lifted to make 3D consistent predictions. We answer this question in the affirmative; our new Lift3D method trains to predict unseen views on feature spaces generated by a few visual models (i.e. DINO and CLIP) but then generalizes to novel vision operators and tasks such as style transfer super-resolution open vocabulary segmentation and image colorization; for some of these tasks there is no comparable previous 3D method. In many cases we even outperform state-of-the-art methods specialized for the task in question. Moreover Lift3D is a zero-shot method in the sense that it requires no task-specific training nor scene-specific optimization.
[]
[]
[]
[]
2,549
2,550
LowRankOcc: Tensor Decomposition and Low-Rank Recovery for Vision-based 3D Semantic Occupancy Prediction
Linqing Zhao, Xiuwei Xu, Ziwei Wang, Yunpeng Zhang, Borui Zhang, Wenzhao Zheng, Dalong Du, Jie Zhou, Jiwen Lu
null
In this paper we present a tensor decomposition and low-rank recovery approach (LowRankOcc) for vision-based 3D semantic occupancy prediction. Conventional methods model outdoor scenes with fine-grained 3D grids but the sparsity of non-empty voxels introduces considerable spatial redundancy leading to potential overfitting risks. In contrast our approach leverages the intrinsic low-rank property of 3D occupancy data factorizing voxel representations into low-rank components to efficiently mitigate spatial redundancy without sacrificing performance. Specifically we present the Vertical-Horizontal (VH) decomposition block which factorizes 3D tensors into vertical vectors and horizontal matrices. With our "decomposition-encoding-recovery" framework we encode 3D contexts with only 1/2D convolutions and poolings and subsequently recover the encoded compact yet informative context features back to voxel representations. Experimental results demonstrate that LowRankOcc achieves state-of-the-art performance in semantic scene completion on the SemanticKITTI dataset and 3D occupancy prediction on the nuScenes dataset.
[]
[]
[]
[]
2,550
2,551
Multiway Point Cloud Mosaicking with Diffusion and Global Optimization
http://arxiv.org/abs/2404.00429
Shengze Jin, Iro Armeni, Marc Pollefeys, Daniel Barath
2,404.00429
We introduce a novel framework for multiway point cloud mosaicking (named Wednesday) designed to co-align sets of partially overlapping point clouds -- typically obtained from 3D scanners or moving RGB-D cameras -- into a unified coordinate system. At the core of our approach is ODIN a learned pairwise registration algorithm that iteratively identifies overlaps and refines attention scores employing a diffusion-based process for denoising pairwise correlation matrices to enhance matching accuracy. Further steps include constructing a pose graph from all point clouds performing rotation averaging a novel robust algorithm for re-estimating translations optimally in terms of consensus maximization and translation optimization. Finally the point cloud rotations and positions are optimized jointly by a diffusion-based approach. Tested on four diverse large-scale datasets our method achieves state-of-the-art pairwise and multiway registration results by a large margin on all benchmarks. Our code and models are available at https://github.com/jinsz/Multiway-Point-Cloud-Mosaicking-with-Diffusion-and-Global-Optimization.
[]
[]
[]
[]
2,551
2,552
Novel View Synthesis with View-Dependent Effects from a Single Image
http://arxiv.org/abs/2312.08071
Juan Luis Gonzalez Bello, Munchurl Kim
2,312.08071
In this paper we address single image-based novel view synthesis (NVS) by firstly integrating view-dependent effects (VDE) into the process. Our approach leverages camera motion priors to model VDE treating negative disparity as the representation of these effects in the scene. By identifying that specularities align with camera motion we infuse VDEs into input images by aggregating pixel colors along the negative depth region of epipolar lines. Additionally we introduce a relaxed volumetric rendering approximation enhancing efficiency by computing densities in a single pass for NVS from single images. Notably our method learns single-image NVS from image sequences alone making it a fully self-supervised learning approach that requires no depth or camera pose annotations. We present extensive experimental results and show that our proposed method can learn NVS with VDEs outperforming the SOTA single-view NVS methods on the RealEstate10k and MannequinChallenge datasets. Visit our project site https://kaist-viclab.github.io/monovde-site.
[]
[]
[]
[]
2,552
2,553
Point2RBox: Combine Knowledge from Synthetic Visual Patterns for End-to-end Oriented Object Detection with Single Point Supervision
http://arxiv.org/abs/2311.14758
Yi Yu, Xue Yang, Qingyun Li, Feipeng Da, Jifeng Dai, Yu Qiao, Junchi Yan
2,311.14758
With the rapidly increasing demand for oriented object detection (OOD) recent research involving weakly-supervised detectors for learning rotated box (RBox) from the horizontal box (HBox) has attracted more and more attention. In this paper we explore a more challenging yet label-efficient setting namely single point-supervised OOD and present our approach called Point2RBox. Specifically we propose to leverage two principles: 1) Synthetic pattern knowledge combination: By sampling around each labeled point on the image we spread the object feature to synthetic visual patterns with known boxes to provide the knowledge for box regression. 2) Transform self-supervision: With a transformed input image (e.g. scaled/rotated) the output RBoxes are trained to follow the same transformation so that the network can perceive the relative size/rotation between objects. The detector is further enhanced by a few devised techniques to cope with peripheral issues e.g. the anchor/layer assignment as the size of the object is not available in our point supervision setting. To our best knowledge Point2RBox is the first end-to-end solution for point-supervised OOD. In particular our method uses a lightweight paradigm yet it achieves a competitive performance among point-supervised alternatives 41.05%/27.62%/80.01% on DOTA/DIOR/HRSC datasets.
[]
[]
[]
[]
2,553
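The transform self-supervision principle in the Point2RBox abstract above can be written down compactly: rotated-box predictions on a transformed image should equal the transformed predictions on the original image. The NumPy sketch below checks that consistency for a rotation; the (cx, cy, w, h, angle) box convention and the assumption of a fixed box ordering are simplifications for illustration.

```python
import numpy as np

def rotate_rboxes(rboxes, theta, center):
    """Rotate (cx, cy, w, h, angle) boxes by theta (radians) about `center`."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    out = rboxes.copy()
    out[:, :2] = (rboxes[:, :2] - center) @ R.T + center   # rotate centres
    out[:, 4] = rboxes[:, 4] + theta                       # shift angles
    return out

def transform_self_supervision_loss(pred_orig, pred_rot, theta, center):
    """Predictions on the rotated image should match the rotated predictions
    on the original image (a real detector would first match boxes one-to-one)."""
    return np.abs(pred_rot - rotate_rboxes(pred_orig, theta, center)).mean()

# Toy check with two boxes in a 512x512 image and a "perfect" detector.
center = np.array([256.0, 256.0])
theta = np.deg2rad(30)
pred_orig = np.array([[100.0, 120.0, 40.0, 20.0, 0.1],
                      [300.0, 330.0, 60.0, 30.0, -0.4]])
pred_rot = rotate_rboxes(pred_orig, theta, center)
print(transform_self_supervision_loss(pred_orig, pred_rot, theta, center))  # 0.0
```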
2,554
PBWR: Parametric-Building-Wireframe Reconstruction from Aerial LiDAR Point Clouds
http://arxiv.org/abs/2311.12062
Shangfeng Huang, Ruisheng Wang, Bo Guo, Hongxin Yang
2,311.12062
In this paper we present an end-to-end 3D-building-wireframe reconstruction method to regress edges directly from aerial light-detection-and-ranging (LiDAR) point clouds. Our method named parametric-building-wireframe reconstruction (PBWR) takes aerial LiDAR point clouds and initial edge entities as input and fully uses the self-attention mechanism of transformers to regress edge parameters without any intermediate steps such as corner prediction. We propose an edge non-maximum suppression (E-NMS) module based on edge similarity to remove redundant edges. Additionally a dedicated edge loss function is utilized to guide the PBWR in regressing edge parameters when the simple use of the edge distance loss is not suitable. In our experiments our proposed method demonstrated state-of-the-art results on the Building3D dataset achieving an improvement of approximately 36% in Entry-level dataset edge accuracy and around a 42% improvement in the Tallinn dataset.
[]
[]
[]
[]
2,554
2,555
Spectrum AUC Difference (SAUCD): Human-aligned 3D Shape Evaluation
http://arxiv.org/abs/2403.01619
Tianyu Luan, Zhong Li, Lele Chen, Xuan Gong, Lichang Chen, Yi Xu, Junsong Yuan
2,403.01619
Existing 3D mesh shape evaluation metrics mainly focus on the overall shape but are usually less sensitive to local details. This makes them inconsistent with human evaluation as human perception cares about both overall and detailed shape. In this paper we propose an analytic metric named Spectrum Area Under the Curve Difference (SAUCD) that demonstrates better consistency with human evaluation. To compare the difference between two shapes we first transform the 3D mesh to the spectrum domain using the discrete Laplace-Beltrami operator and Fourier transform. Then we calculate the Area Under the Curve (AUC) difference between the two spectrums so that each frequency band that captures either the overall or detailed shape is equitably considered. Taking human sensitivity across frequency bands into account we further extend our metric by learning suitable weights for each frequency band which better aligns with human perception. To measure the performance of SAUCD we build a 3D mesh evaluation dataset called Shape Grading along with manual annotations from more than 800 subjects. By measuring the correlation between our metric and human evaluation we demonstrate that SAUCD is well aligned with human evaluation and outperforms previous 3D mesh metrics.
[]
[]
[]
[]
2,555
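A rough rendition of the spectrum comparison described in the SAUCD abstract above: project vertex coordinates onto a Laplacian eigenbasis, accumulate the spectral energy into a curve per shape, and measure the gap between the two curves. The uniform graph Laplacian, the unweighted curve, and the simple averaged gap below are stand-ins for the paper's discrete Laplace-Beltrami operator, learned band weights, and exact AUC computation.

```python
import numpy as np

def graph_laplacian(num_v, faces):
    """Uniform graph Laplacian of a triangle mesh (a simple stand-in for the
    discrete Laplace-Beltrami operator used in the paper)."""
    W = np.zeros((num_v, num_v))
    for a, b, c in faces:
        for i, j in ((a, b), (b, c), (c, a)):
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

def spectrum_curve(verts, faces):
    """Per-frequency energy of the vertex coordinates in the Laplacian basis,
    accumulated into a normalised curve (our reading of the 'spectrum')."""
    L = graph_laplacian(len(verts), faces)
    _, U = np.linalg.eigh(L)          # eigenvectors act as a Fourier basis
    coeffs = U.T @ verts              # (num_v, 3) spectral coefficients
    energy = (coeffs ** 2).sum(1)
    return np.cumsum(energy) / energy.sum()

def spectrum_gap(verts_a, verts_b, faces):
    """Average gap between the two curves (a stand-in for the AUC difference).
    Assumes both meshes share the same connectivity."""
    return np.abs(spectrum_curve(verts_a, faces) - spectrum_curve(verts_b, faces)).mean()

# Toy example: a tetrahedron versus a slightly perturbed copy.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
noisy = verts + 0.05 * np.random.default_rng(0).normal(size=verts.shape)
print(spectrum_gap(verts, noisy, faces))
```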
2,556
HRVDA: High-Resolution Visual Document Assistant
http://arxiv.org/abs/2404.06918
Chaohu Liu, Kun Yin, Haoyu Cao, Xinghua Jiang, Xin Li, Yinsong Liu, Deqiang Jiang, Xing Sun, Linli Xu
2,404.06918
Leveraging vast training data multimodal large language models (MLLMs) have demonstrated formidable general visual comprehension capabilities and achieved remarkable performance across various tasks. However their performance in visual document understanding still leaves much room for improvement. This discrepancy is primarily attributed to the fact that visual document understanding is a fine-grained prediction task. In natural scenes MLLMs typically use low-resolution images leading to a substantial loss of visual information. Furthermore general-purpose MLLMs do not excel in handling document-oriented instructions. In this paper we propose a High-Resolution Visual Document Assistant (HRVDA) which bridges the gap between MLLMs and visual document understanding. This model employs a content filtering mechanism and an instruction filtering module to separately filter out the content-agnostic visual tokens and instruction-agnostic visual tokens thereby achieving efficient model training and inference for high-resolution images. In addition we construct a document-oriented visual instruction tuning dataset and apply a multi-stage training strategy to enhance the model's document modeling capabilities. Extensive experiments demonstrate that our model achieves state-of-the-art performance across multiple document understanding datasets while maintaining training efficiency and inference speed comparable to low-resolution models.
[]
[]
[]
[]
2,556
2,557
Learning for Transductive Threshold Calibration in Open-World Recognition
http://arxiv.org/abs/2305.12039
Qin Zhang, Dongsheng An, Tianjun Xiao, Tong He, Qingming Tang, Ying Nian Wu, Joseph Tighe, Yifan Xing
2,305.12039
In deep metric learning for visual recognition the calibration of distance thresholds is crucial for achieving desired model performance in the true positive rates (TPR) or true negative rates (TNR). However calibrating this threshold presents challenges in open-world scenarios where the test classes can be entirely disjoint from those encountered during training. We define the problem of finding distance thresholds for a trained embedding model to achieve target performance metrics over unseen open-world test classes as open-world threshold calibration. Existing posthoc threshold calibration methods reliant on inductive inference and requiring a calibration dataset with a similar distance distribution as the test data often prove ineffective in open-world scenarios. To address this we introduce OpenGCN a Graph Neural Network-based transductive threshold calibration method with enhanced adaptability and robustness. OpenGCN learns to predict pairwise connectivity for the unlabeled test instances embedded in a graph to determine its TPR and TNR at various distance thresholds allowing for transductive inference of the distance thresholds which also incorporates test-time information. Extensive experiments across open-world visual recognition benchmarks validate OpenGCN's superiority over existing posthoc calibration methods for open-world threshold calibration.
[]
[]
[]
[]
2,557
2,558
Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation
http://arxiv.org/abs/2311.17532
Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo
2,311.17532
Generating vivid and emotional 3D co-speech gestures is crucial for virtual avatar animation in human-machine interaction applications. While the existing methods enable generating the gestures to follow a single emotion label they overlook that long gesture sequence modeling with emotion transition is more practical in real scenes. In addition the lack of large-scale available datasets with emotional transition speech and corresponding 3D human gestures also limits the addressing of this task. To fulfill this goal we first incorporate the ChatGPT-4 and an audio inpainting approach to construct the high-fidelity emotion transition human speeches. Considering obtaining the realistic 3D pose annotations corresponding to the dynamically inpainted emotion transition audio is extremely difficult we propose a novel weakly supervised training strategy to encourage authority gesture transitions. Specifically to enhance the coordination of transition gestures w.r.t. different emotional ones we model the temporal association representation between two different emotional gesture sequences as style guidance and infuse it into the transition generation. We further devise an emotion mixture mechanism that provides weak supervision based on a learnable mixed emotion label for transition gestures. Last we present a keyframe sampler to supply effective initial posture cues in long sequences enabling us to generate diverse gestures. Extensive experiments demonstrate that our method outperforms the state-of-the-art models constructed by adapting single emotion-conditioned counterparts on our newly defined emotion transition task and datasets. Our code and dataset will be released on the project page: https://xingqunqi-lab.github.io/Emo-Transition-Gesture/
[]
[]
[]
[]
2,558
2,559
Multi-Session SLAM with Differentiable Wide-Baseline Pose Optimization
http://arxiv.org/abs/2404.15263
Lahav Lipson, Jia Deng
2,404.15263
We introduce a new system for Multi-Session SLAM which tracks camera motion across multiple disjoint videos under a single global reference. Our approach couples the prediction of optical flow with solver layers to estimate camera pose. The backbone is trained end-to-end using a novel differentiable solver for wide-baseline two-view pose. The full system can connect disjoint sequences perform visual odometry and global optimization. Compared to existing approaches our design is accurate and robust to catastrophic failures.
[]
[]
[]
[]
2,559
2,560
A Dual-Augmentor Framework for Domain Generalization in 3D Human Pose Estimation
http://arxiv.org/abs/2403.11310
Qucheng Peng, Ce Zheng, Chen Chen
2,403.11310
3D human pose data collected in controlled laboratory settings present challenges for pose estimators that generalize across diverse scenarios. To address this domain generalization is employed. Current methodologies in domain generalization for 3D human pose estimation typically utilize adversarial training to generate synthetic poses for training. Nonetheless these approaches exhibit several limitations. First the lack of prior information about the target domain complicates the application of suitable augmentation through a single pose augmentor affecting generalization on target domains. Moreover adversarial training's discriminator tends to enforce similarity between source and synthesized poses impeding the exploration of out-of-source distributions. Furthermore the pose estimator's optimization is not exposed to domain shifts limiting its overall generalization ability. To address these limitations we propose a novel framework featuring two pose augmentors: the weak and the strong augmentors. Our framework employs differential strategies for generation and discrimination processes facilitating the preservation of knowledge related to source poses and the exploration of out-of-source distributions without prior information about target poses. Besides we leverage meta-optimization to simulate domain shifts in the optimization process of the pose estimator thereby improving its generalization ability. Our proposed approach significantly outperforms existing methods as demonstrated through comprehensive experiments on various benchmark datasets.
[]
[]
[]
[]
2,560
2,561
Improving Out-of-Distribution Generalization in Graphs via Hierarchical Semantic Environments
http://arxiv.org/abs/2403.01773
Yinhua Piao, Sangseon Lee, Yijingxiu Lu, Sun Kim
2,403.01773
Out-of-distribution (OOD) generalization in the graph domain is challenging due to complex distribution shifts and a lack of environmental contexts. Recent methods attempt to enhance graph OOD generalization by generating flat environments. However such flat environments come with inherent limitations to capture more complex data distributions. Considering the DrugOOD dataset which contains diverse training environments (e.g. scaffold size etc.) flat contexts cannot sufficiently address its high heterogeneity. Thus a new challenge is posed to generate more semantically enriched environments to enhance graph invariant learning for handling distribution shifts. In this paper we propose a novel approach to generate hierarchical semantic environments for each graph. Firstly given an input graph we explicitly extract variant subgraphs from the input graph to generate proxy predictions on local environments. Then stochastic attention mechanisms are employed to re-extract the subgraphs for regenerating global environments in a hierarchical manner. In addition we introduce a new learning objective that guides our model to learn the diversity of environments within the same hierarchy while maintaining consistency across different hierarchies. This approach enables our model to consider the relationships between environments and facilitates robust graph invariant learning. Extensive experiments on real-world graph data have demonstrated the effectiveness of our framework. Particularly in the challenging dataset DrugOOD our method achieves up to 1.29% and 2.83% improvement over the best baselines on IC50 and EC50 prediction tasks respectively.
[]
[]
[]
[]
2,561
2,562
CN-RMA: Combined Network with Ray Marching Aggregation for 3D Indoor Object Detection from Multi-view Images
Guanlin Shen, Jingwei Huang, Zhihua Hu, Bin Wang
null
This paper introduces CN-RMA a novel approach for 3D indoor object detection from multi-view images. We observe the key challenge as the ambiguity of image and 3D correspondence without explicit geometry to provide occlusion information. To address this issue CN-RMA leverages the synergy of 3D reconstruction networks and 3D object detection networks where the reconstruction network provides a rough Truncated Signed Distance Function (TSDF) and guides image features to vote to 3D space correctly in an end-to-end manner. Specifically we associate weights to sampled points of each ray through ray marching representing the contribution of a pixel in an image to corresponding 3D locations. Such weights are determined by the predicted signed distances so that image features vote only to regions near the reconstructed surface. Our method achieves state-of-the-art performance in 3D object detection from multi-view images as measured by mAP@0.25 and mAP@0.5 on the ScanNet and ARKitScenes datasets. The code and models are released at https://github.com/SerCharles/CN-RMA.
[]
[]
[]
[]
2,562
2,563
ACT-Diffusion: Efficient Adversarial Consistency Training for One-step Diffusion Models
Fei Kong, Jinhao Duan, Lichao Sun, Hao Cheng, Renjing Xu, Hengtao Shen, Xiaofeng Zhu, Xiaoshuang Shi, Kaidi Xu
null
Though diffusion models excel in image generation their step-by-step denoising leads to slow generation speeds. Consistency training addresses this issue with single-step sampling but often produces lower-quality generations and requires high training costs. In this paper we show that optimizing consistency training loss minimizes the Wasserstein distance between target and generated distributions. As timestep increases the upper bound accumulates previous consistency training losses. Therefore larger batch sizes are needed to reduce both current and accumulated losses. We propose Adversarial Consistency Training (ACT) which directly minimizes the Jensen-Shannon (JS) divergence between distributions at each timestep using a discriminator. Theoretically ACT enhances generation quality and convergence. By incorporating a discriminator into the consistency training framework our method achieves improved FID scores on CIFAR10 and ImageNet 64x64 and LSUN Cat 256x256 datasets retains zero-shot image inpainting capabilities and uses less than 1/6 of the original batch size and fewer than 1/2 of the model parameters and training steps compared to the baseline method which leads to a substantial reduction in resource consumption. Our code is available: https://github.com/kong13661/ACT
[]
[]
[]
[]
2,563
2,564
Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation
http://arxiv.org/abs/2402.18920
Dongliang Cao, Marvin Eisenberger, Nafie El Amrani, Daniel Cremers, Florian Bernard
2,402.18920
Although 3D shape matching and interpolation are highly interrelated they are often studied separately and applied sequentially to relate different 3D shapes thus resulting in sub-optimal performance. In this work we present a unified framework to predict both point-wise correspondences and shape interpolation between 3D shapes. To this end we combine the deep functional map framework with classical surface deformation models to map shapes in both spectral and spatial domains. On the one hand by incorporating spatial maps our method obtains more accurate and smooth point-wise correspondences compared to previous functional map methods for shape matching. On the other hand by introducing spectral maps our method gets rid of commonly used but computationally expensive geodesic distance constraints that are only valid for near-isometric shape deformations. Furthermore we propose a novel test-time adaptation scheme to capture both pose-dominant and shape-dominant deformations. Using different challenging datasets we demonstrate that our method outperforms previous state-of-the-art methods for both shape matching and interpolation even compared to supervised approaches.
[]
[]
[]
[]
2,564
2,565
Emu Edit: Precise Image Editing via Recognition and Generation Tasks
http://arxiv.org/abs/2311.10089
Shelly Sheynin, Adam Polyak, Uriel Singer, Yuval Kirstain, Amit Zohar, Oron Ashual, Devi Parikh, Yaniv Taigman
2,311.10089
Instruction-based image editing holds immense potential for a variety of applications as it enables users to perform any editing operation using a natural language instruction. However current models in this domain often struggle with accurately executing user instructions. We present Emu Edit a multi-task image editing model which sets state-of-the-art results in instruction-based image editing. To develop Emu Edit we train it to multi-task across an unprecedented range of tasks such as region-based editing free-form editing and Computer Vision tasks all of which are formulated as generative tasks. Additionally to enhance Emu Edit's multi-task learning abilities we provide it with learned task embeddings which guide the generation process towards the correct edit type. Both these elements are essential for Emu Edit's outstanding performance. Furthermore we show that Emu Edit can generalize to new tasks such as image inpainting super-resolution and compositions of editing tasks with just a few labeled examples. This capability offers a significant advantage in scenarios where high-quality samples are scarce. Lastly to facilitate a more rigorous and informed assessment of instructable image editing models we release a new challenging and versatile benchmark that includes seven different image editing tasks.
[]
[]
[]
[]
2,565
2,566
Face2Diffusion for Fast and Editable Face Personalization
http://arxiv.org/abs/2403.05094
Kaede Shiohara, Toshihiko Yamasaki
2,403.05094
Face personalization aims to insert specific faces taken from images into pretrained text-to-image diffusion models. However it is still challenging for previous methods to preserve both the identity similarity and editability due to overfitting to training samples. In this paper we propose Face2Diffusion (F2D) for high-editability face personalization. The core idea behind F2D is that removing identity-irrelevant information from the training pipeline prevents the overfitting problem and improves editability of encoded faces. F2D consists of the following three novel components: 1) Multi-scale identity encoder provides well-disentangled identity features while keeping the benefits of multi-scale information which improves the diversity of camera poses. 2) Expression guidance disentangles face expressions from identities and improves the controllability of face expressions. 3) Class-guided denoising regularization encourages models to learn how faces should be denoised which boosts the text-alignment of backgrounds. Extensive experiments on the FaceForensics++ dataset and diverse prompts demonstrate our method greatly improves the trade-off between the identity- and text-fidelity compared to previous state-of-the-art methods. Code is available at https://github.com/mapooon/Face2Diffusion.
[]
[]
[]
[]
2,566
2,567
Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models
Shitian Zhao, Zhuowan Li, Yadong Lu, Alan Yuille, Yan Wang
null
While Multi-modal Language Models (MLMs) demonstrate impressive multimodal ability they still struggle on providing factual and precise responses for tasks like visual question answering (VQA). In this paper we address this challenge from the perspective of contextual information. We propose Causal Context Generation Causal-CoG which is a prompting strategy that engages contextual information to enhance precise VQA during inference. Specifically we prompt MLMs to generate contexts i.e. text description of an image and engage the generated contexts for question answering. Moreover we investigate the advantage of contexts on VQA from a causality perspective introducing causality filtering to select samples for which contextual information is helpful. To show the effectiveness of Causal-CoG we run extensive experiments on 10 multimodal benchmarks and show consistent improvements e.g. +6.30% on POPE +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding surpassing existing methods. We hope Causal-CoG inspires explorations of context knowledge in multimodal models and serves as a plug-and-play strategy for MLM decoding.
[]
[]
[]
[]
2,567
2,568
Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds
http://arxiv.org/abs/2403.05247
Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao
2,403.05247
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models yet the adversarial examples they produce are easily perceived or defended against. The trade-off between the imperceptibility and adversarial strength leads most point attack methods to inevitably introduce easily detectable outlier points upon a successful attack. Another promising strategy shape-based attack can effectively eliminate outliers but existing methods often suffer significant reductions in imperceptibility due to irrational deformations. We find that concealing deformation perturbations in areas insensitive to human eyes can achieve a better trade-off between imperceptibility and adversarial strength specifically in parts of the object surface that are complex and exhibit drastic curvature changes. Therefore we propose a novel shape-based adversarial attack method HiT-ADV which initially conducts a two-stage search for attack regions based on saliency and imperceptibility scores and then adds deformation perturbations in each attack region using Gaussian kernel functions. Additionally HiT-ADV is extendable to physical attack. We propose that by employing benign resampling and benign rigid transformations we can further enhance physical adversarial strength with little sacrifice to imperceptibility. Extensive experiments have validated the superiority of our method in terms of adversarial and imperceptible properties in both digital and physical spaces.
[]
[]
[]
[]
2,568
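The Gaussian-kernel deformation step mentioned in the HiT-ADV abstract above can be sketched in a few lines: each selected attack region displaces nearby points with a weight that decays smoothly with distance, so no isolated outlier points appear. The region selection, displacement directions, and magnitudes below are illustrative assumptions; the paper chooses regions via saliency and imperceptibility scores and optimizes the perturbation adversarially.

```python
import numpy as np

def gaussian_kernel_deform(points, centers, directions, sigma=0.1, magnitude=0.05):
    """Add smooth, locally supported perturbations to a point cloud.

    Each attack region is defined by a centre; points are displaced along the
    region's direction with a weight that decays as a Gaussian of the distance
    to the centre, keeping the deformation smooth and outlier-free.
    """
    deformed = points.copy()
    for c, d in zip(centers, directions):
        dist2 = ((points - c) ** 2).sum(axis=1)
        w = np.exp(-dist2 / (2 * sigma ** 2))                 # (N,) kernel weights
        deformed += magnitude * w[:, None] * d[None, :]
    return deformed

# Toy point cloud on a unit sphere with two hypothetical attack regions.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2048, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
centers = pts[[0, 1]]            # region centres (e.g. high-curvature picks)
directions = centers             # push roughly outward along the surface normal
adv = gaussian_kernel_deform(pts, centers, directions)
print(np.abs(adv - pts).max())   # bounded, smooth displacement
```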
2,569
SG-BEV: Satellite-Guided BEV Fusion for Cross-View Semantic Segmentation
Junyan Ye, Qiyan Luo, Jinhua Yu, Huaping Zhong, Zhimeng Zheng, Conghui He, Weijia Li
null
This paper aims at achieving fine-grained building attribute segmentation in a cross-view scenario i.e. using satellite and street-view image pairs. The main challenge lies in overcoming the significant perspective differences between street views and satellite views. In this work we introduce SG-BEV a novel approach for satellite-guided BEV fusion for cross-view semantic segmentation. To overcome the limitations of existing cross-view projection methods in capturing the complete building facade features we innovatively incorporate Bird's Eye View (BEV) method to establish a spatially explicit mapping of street-view features. Moreover we fully leverage the advantages of multiple perspectives by introducing a novel satellite-guided reprojection module optimizing the uneven feature distribution issues associated with traditional BEV methods. Our method demonstrates significant improvements on four cross-view datasets collected from multiple cities including New York San Francisco and Boston. On average across these datasets our method achieves an increase in mIOU by 10.13% and 5.21% compared with the state-of-the-art satellite-based and cross-view methods. The code and datasets of this work will be released at https://github.com/sysu-liweijia-lab/SG-BEV.
[]
[]
[]
[]
2,569
2,570
Brush2Prompt: Contextual Prompt Generator for Object Inpainting
Mang Tik Chiu, Yuqian Zhou, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Sohrab Amirghodsi, Eli Shechtman, Humphrey Shi
null
Object inpainting is a task that involves adding objects to real images and seamlessly compositing them. With the recent commercialization of products like Stable Diffusion and Generative Fill inserting objects into images by using prompts has achieved impressive visual results. In this paper we propose a prompt suggestion model to simplify the process of prompt input. When the user provides an image and a mask our model predicts suitable prompts based on the partial contextual information in the masked image and the shape and location of the mask. Specifically we introduce a concept-diffusion in the CLIP space that predicts CLIP-text embeddings from a masked image. These diffused embeddings can be directly injected into open-source inpainting models like Stable Diffusion and its variants. Alternatively they can be decoded into natural language for use in other publicly available applications such as Generative Fill. Our prompt suggestion model demonstrates a balanced accuracy and diversity showing its capability to be both contextually aware and creatively adaptive.
[]
[]
[]
[]
2,570
2,571
Joint-Task Regularization for Partially Labeled Multi-Task Learning
http://arxiv.org/abs/2404.01976
Kento Nishi, Junsik Kim, Wanhua Li, Hanspeter Pfister
2,404.01976
Multi-task learning has become increasingly popular in the machine learning field but its practicality is hindered by the need for large labeled datasets. Most multi-task learning methods depend on fully labeled datasets wherein each input example is accompanied by ground-truth labels for all target tasks. Unfortunately curating such datasets can be prohibitively expensive and impractical especially for dense prediction tasks which require per-pixel labels for each image. With this in mind we propose Joint-Task Regularization (JTR) an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space to improve learning when data is not fully labeled for all tasks. JTR stands out from existing approaches in that it regularizes all tasks jointly rather than separately in pairs---therefore it achieves linear complexity relative to the number of tasks while previous methods scale quadratically. To demonstrate the validity of our approach we extensively benchmark our method across a wide variety of partially labeled scenarios based on NYU-v2 Cityscapes and Taskonomy.
[]
[]
[]
[]
2,571
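A speculative sketch of the joint-task regularization idea described above: all task maps are concatenated and encoded once into a shared latent, ground truth is substituted wherever a task is labelled, and the distance between the two latents is penalized. The encoder architecture and the substitution scheme are assumptions made for illustration, not the paper's exact design; the point of the sketch is only that the cost grows linearly with the number of tasks.

```python
import torch
import torch.nn as nn

class JointTaskRegularizer(nn.Module):
    """Rough sketch: all task maps are encoded into one shared latent, so the
    regularizer runs once per example rather than once per task pair (linear,
    not quadratic, in the number of tasks)."""

    def __init__(self, channels_per_task, latent_dim=64):
        super().__init__()
        total = sum(channels_per_task)
        self.encoder = nn.Sequential(
            nn.Conv2d(total, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(latent_dim, latent_dim, 3, stride=2, padding=1),
        )

    def forward(self, preds, targets, has_label):
        # Mixed view: use ground truth where a task is labelled, else prediction.
        mixed = [t if ok else p for p, t, ok in zip(preds, targets, has_label)]
        z_pred = self.encoder(torch.cat(preds, dim=1))
        z_mixed = self.encoder(torch.cat(mixed, dim=1))
        return (z_pred - z_mixed).abs().mean()

# Toy usage: 3 dense tasks, only the first two labelled for this batch.
B, H, W = 2, 64, 64
preds = [torch.rand(B, c, H, W) for c in (1, 3, 2)]    # e.g. depth/normals/seg
targets = [torch.rand(B, c, H, W) for c in (1, 3, 2)]
reg = JointTaskRegularizer([1, 3, 2])
loss = reg(preds, targets, has_label=[True, True, False])
print(loss.item())
```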
2,572
Shallow-Deep Collaborative Learning for Unsupervised Visible-Infrared Person Re-Identification
Bin Yang, Jun Chen, Mang Ye
null
Unsupervised visible-infrared person re-identification (US-VI-ReID) centers on learning a cross-modality retrieval model without labels reducing the reliance on expensive cross-modality manual annotation. Previous US-VI-ReID works gravitate toward learning cross-modality information with the deep features extracted from the final layer. Nevertheless interfered by the multiple discrepancies solely relying on deep features is insufficient for accurately learning modality-invariant features resulting in negative optimization. The shallow feature from the shallow layers contains nuanced detail information which is critical for effective cross-modality learning but is regrettably disregarded by existing methods. To address the above issues we design a Shallow-Deep Collaborative Learning (SDCL) framework based on the transformer with shallow-deep contrastive learning incorporating Collaborative Neighbor Learning (CNL) and Collaborative Ranking Association (CRA) module. Specifically CNL unveils the intrinsic homogeneous and heterogeneous collaboration which are harnessed for neighbor alignment enhancing the robustness in a dynamic manner. Furthermore CRA associates the cross-modality labels with the ranking association between shallow and deep features furnishing valuable supervision for cross-modality learning. Extensive experiments validate the superiority of our method even outperforming certain supervised counterparts.
[]
[]
[]
[]
2,572
2,573
Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement
http://arxiv.org/abs/2312.00362
Ziyu Wang, Yue Xu, Cewu Lu, Yong-Lu Li
2,312.00362
Recently dataset distillation has paved the way towards efficient machine learning especially for image datasets. However the distillation for videos characterized by an exclusive temporal dimension remains an underexplored domain. In this work we provide the first systematic study of video distillation and introduce a taxonomy to categorize temporal compression. Our investigation reveals that the temporal information is usually not well learned during distillation and the temporal dimension of synthetic data contributes little. The observations motivate our unified framework of disentangling the dynamic and static information in the videos. It first distills the videos into still images as static memory and then compensates the dynamic and motion information with a learnable dynamic memory block. Our method achieves state-of-the-art on video datasets at different scales with notably smaller memory storage budget. Our code is available at https://github.com/yuz1wan/video_distillation.
[]
[]
[]
[]
2,573
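The static-dynamic disentanglement described above suggests a simple parameterization for a distilled synthetic video: one learnable still image shared by all frames plus a compact learnable per-frame residual. The PyTorch sketch below implements that factorization; the shapes, the low-resolution dynamic memory, and the bilinear upsampling are illustrative choices rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class StaticDynamicVideo(nn.Module):
    """A distilled synthetic video factored into a learnable still image
    (static memory) plus a small learnable per-frame residual (dynamic memory)."""

    def __init__(self, frames=8, ch=3, size=56, dyn_size=14):
        super().__init__()
        self.static = nn.Parameter(torch.randn(ch, size, size))             # still image
        self.dynamic = nn.Parameter(torch.zeros(frames, ch, dyn_size, dyn_size))
        self.size = size

    def forward(self):
        # Upsample the compact dynamic memory and add it to the shared frame.
        motion = nn.functional.interpolate(self.dynamic, size=self.size,
                                           mode="bilinear", align_corners=False)
        return self.static.unsqueeze(0) + motion          # (frames, ch, H, W)

video = StaticDynamicVideo()
print(video().shape)          # torch.Size([8, 3, 56, 56])
# video.parameters() would then be optimised with the usual distillation objective.
```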
2,574
Context-Aware Integration of Language and Visual References for Natural Language Tracking
http://arxiv.org/abs/2403.19975
Yanyan Shao, Shuting He, Qi Ye, Yuchao Feng, Wenhan Luo, Jiming Chen
2,403.19975
Tracking by natural language specification (TNL) aims to consistently localize a target in a video sequence given a linguistic description in the initial frame. Existing methodologies perform language-based and template-based matching for target reasoning separately and merge the matching results from two sources which suffer from tracking drift when language and visual templates misalign with the dynamic target state and ambiguity in the later merging stage. To tackle the issues we propose a joint multi-modal tracking framework with 1) a prompt modulation module to leverage the complementarity between temporal visual templates and language expressions enabling precise and context-aware appearance and linguistic cues and 2) a unified target decoding module to integrate the multi-modal reference cues and execute the integrated queries on the search image to predict the target location in an end-to-end manner directly. This design ensures spatio-temporal consistency by leveraging historical visual information and introduces an integrated solution generating predictions in a single step. Extensive experiments conducted on TNL2K OTB-Lang LaSOT and RefCOCOg validate the efficacy of our proposed approach. The results demonstrate competitive performance against state-of-the-art methods for both tracking and grounding. Code is available at https://github.com/twotwo2/QueryNLT
[]
[]
[]
[]
2,574
2,575
An Edit Friendly DDPM Noise Space: Inversion and Manipulations
http://arxiv.org/abs/2304.06140
Inbar Huberman-Spiegelglas, Vladimir Kulikov, Tomer Michaeli
2,304.06140
Denoising diffusion probabilistic models (DDPMs) employ a sequence of white Gaussian noise samples to generate an image. In analogy with GANs those noise maps could be considered as the latent code associated with the generated image. However this native noise space does not possess a convenient structure and is thus challenging to work with in editing tasks. Here we propose an alternative latent noise space for DDPM that enables a wide range of editing operations via simple means and present an inversion method for extracting these edit-friendly noise maps for any given image (real or synthetically generated). As opposed to the native DDPM noise space the edit-friendly noise maps do not have a standard normal distribution and are not statistically independent across timesteps. However they allow perfect reconstruction of any desired image and simple transformations on them translate into meaningful manipulations of the output image (e.g. shifting color edits). Moreover in text-conditional models fixing those noise maps while changing the text prompt modifies semantics while retaining structure. We illustrate how this property enables text-based editing of real images via the diverse DDPM sampling scheme (in contrast to the popular non-diverse DDIM inversion). We also show how it can be used within existing diffusion-based editing methods to improve their quality and diversity. The code of the method is attached to this submission.
[]
[]
[]
[]
2,575
2,576
LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry
Weirong Chen, Le Chen, Rui Wang, Marc Pollefeys
null
Visual odometry estimates the motion of a moving camera based on visual input. Existing methods, mostly focusing on two-view point tracking, often ignore the rich temporal context in the image sequence, thereby overlooking the global motion patterns and providing no assessment of the full trajectory reliability. These shortcomings hinder performance in scenarios with occlusion, dynamic objects, and low-texture areas. To address these challenges, we present the Long-term Effective Any Point Tracking (LEAP) module. LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation. Moreover, LEAP's temporal probabilistic formulation integrates distribution updates into a learnable iterative refinement module to reason about point-wise uncertainty. Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes. Our mindful integration showcases a novel practice by employing long-term point tracking as the front-end. Extensive experiments demonstrate that the proposed pipeline significantly outperforms existing baselines across various visual odometry benchmarks.
[]
[]
[]
[]
2,576
2,577
RoDLA: Benchmarking the Robustness of Document Layout Analysis Models
http://arxiv.org/abs/2403.14442
Yufan Chen, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ruiping Liu, Philip Torr, Rainer Stiefelhagen
2,403.14442
Before developing a Document Layout Analysis (DLA) model for real-world applications, conducting comprehensive robustness testing is essential. However, the robustness of DLA models remains underexplored in the literature. To address this, we are the first to introduce a robustness benchmark for DLA models, which includes 450K document images from three datasets. To cover realistic corruptions, we propose a perturbation taxonomy with 12 common document perturbations at 3 severity levels, inspired by real-world document processing. Additionally, to better understand document perturbation impacts, we propose two metrics: Mean Perturbation Effect (mPE) for perturbation assessment and Mean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we introduce a self-titled model, i.e., the Robust Document Layout Analyzer (RoDLA), which improves attention mechanisms to boost extraction of robust features. Experiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and M6Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of 115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA achieves notable improvements in mAP of +3.8%, +7.1%, and +12.1%, respectively.
[]
[]
[]
[]
2,577
2,578
UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition
http://arxiv.org/abs/2311.15599
Xiaohan Ding, Yiyuan Zhang, Yixiao Ge, Sijie Zhao, Lin Song, Xiangyu Yue, Ying Shan
2,311.15599
Large-kernel convolutional neural networks (ConvNets) have recently received extensive research attention, but two unresolved and critical issues demand further investigation. 1) The architectures of existing large-kernel ConvNets largely follow the design principles of conventional ConvNets or transformers, while the architectural design for large-kernel ConvNets remains under-addressed. 2) As transformers have dominated multiple modalities, it remains to be investigated whether ConvNets also have a strong universal perception ability in domains beyond vision. In this paper, we contribute from two aspects. 1) We propose four architectural guidelines for designing large-kernel ConvNets, the core of which is to exploit the essential characteristics of large kernels that distinguish them from small kernels - they can see wide without going deep. Following such guidelines, our proposed large-kernel ConvNet shows leading performance in image recognition (ImageNet accuracy of 88.0%, ADE20K mIoU of 55.6%, and COCO box AP of 56.4%), demonstrating better performance and higher speed than the recent powerful competitors. 2) We discover that large kernels are the key to unlocking the exceptional performance of ConvNets in domains where they were originally not proficient. With certain modality-related preprocessing approaches, the proposed model achieves state-of-the-art performance on time-series forecasting and audio recognition tasks, even without modality-specific customization to the architecture. All the code and models are publicly available on GitHub and Huggingface.
[]
[]
[]
[]
2,578
2,579
Unveiling the Unknown: Unleashing the Power of Unknown to Known in Open-Set Source-Free Domain Adaptation
Fuli Wan, Han Zhao, Xu Yang, Cheng Deng
null
Open-Set Source-Free Domain Adaptation aims to transfer knowledge in realistic scenarios where the target domain has additional unknown classes compared to the limited-access source domain. Due to the absence of information on unknown classes, existing methods mainly transfer knowledge of known classes while roughly grouping unknown classes as one, attenuating the knowledge transfer and generalization. In contrast, this paper advocates that exploring unknown classes can better identify known ones, and proposes a domain adaptation model to transfer knowledge on known and unknown classes jointly. Specifically, given a source pre-trained model, we first introduce an unknown diffuser that can determine whether classes in the space need to be split and merged through similarity measures, in order to estimate and generate a wider class space distribution including known and unknown classes. Based on such a wider space distribution, we enhance the reliability of known class knowledge in the source pre-trained model through a contrastive constraint. Finally, various supervision information, including reliable known class knowledge and clustered pseudo-labels, optimizes the model for impressive knowledge transfer and generalization. Extensive experiments show that our network can achieve superior exploration and knowledge generalization on unknown classes, while also achieving excellent known class transfer. The code is available at https://github.com/xdwfl/UPUK.
[]
[]
[]
[]
2,579
2,580
BilevelPruning: Unified Dynamic and Static Channel Pruning for Convolutional Neural Networks
Shangqian Gao, Yanfu Zhang, Feihu Huang, Heng Huang
null
Most existing dynamic or runtime channel pruning methods have to store all weights to achieve efficient inference, which brings extra storage costs. Static pruning methods can reduce storage costs directly, but their performance is limited by using a fixed sub-network to approximate the original model. Most existing pruning works suffer from these drawbacks because they were designed to only conduct either static or dynamic pruning. In this paper, we propose a novel method to solve both efficiency and storage challenges via simultaneously conducting dynamic and static channel pruning for convolutional neural networks. We propose a new bi-level optimization based model to naturally integrate the static and dynamic channel pruning. By doing so, our method enjoys benefits from both sides, and the disadvantages of dynamic and static pruning are reduced. After pruning, we permanently remove redundant parameters and then finetune the model with dynamic flexibility. Experimental results on the CIFAR-10 and ImageNet datasets suggest that our method can achieve state-of-the-art performance compared to existing dynamic and static channel pruning methods.
[]
[]
[]
[]
2,580
2,581
IDGuard: Robust General Identity-centric POI Proactive Defense Against Face Editing Abuse
Yunshu Dai, Jianwei Fei, Fangjun Huang
null
In this work, we propose IDGuard, a novel proactive defense method from the perspective of developers to protect Persons-of-Interest (POI), such as national leaders, from face editing abuse. We build a bridge between identities and model behavior, safeguarding POI identities rather than merely certain face images. Given a face editing model, IDGuard enables it to reject editing any image containing POI identities while retaining its editing functionality for regular use. Specifically, we insert an ID Normalization Layer into the original face editing model and introduce an ID Extractor to extract the identities of input images. To differentiate the editing behavior between POI and non-POI, we use a transformer-based ID Encoder to encode extracted POI identities as parameters of the ID Normalization Layer. Our method supports the simultaneous protection of multiple POI and allows for the addition of new POI in the inference stage without the need for retraining. Extensive experiments show that our method achieves 100% protection accuracy on POI images, even if they are neither included in the training set nor subject to any preprocessing. Notably, our method exhibits excellent robustness against image and model attacks and maintains 100% protection performance when generalized to various face editing models, further demonstrating its practicality.
[]
[]
[]
[]
2,581
2,582
SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation
http://arxiv.org/abs/2312.05239
Thuan Hoang Nguyen, Anh Tran
2,312.05239
Despite their ability to generate high-resolution and diverse images from text prompts, text-to-image diffusion models often suffer from slow iterative sampling processes. Model distillation is one of the most effective directions to accelerate these models. However, previous distillation methods fail to retain the generation quality while requiring a significant amount of images for training, either from real data or synthetically generated by the teacher model. In response to this limitation, we present a novel image-free distillation scheme named SwiftBrush. Drawing inspiration from text-to-3D synthesis, in which a 3D neural radiance field that aligns with the input prompt can be obtained from a 2D text-to-image diffusion prior via a specialized loss without the use of any 3D ground-truth data, our approach re-purposes that same loss for distilling a pretrained multi-step text-to-image model into a student network that can generate high-fidelity images with just a single inference step. In spite of its simplicity, our model stands as one of the first one-step text-to-image generators that can produce images of comparable quality to Stable Diffusion without reliance on any training image data. Remarkably, SwiftBrush achieves an FID score of 16.67 and a CLIP score of 0.29 on the COCO-30K benchmark, achieving competitive results or even substantially surpassing existing state-of-the-art distillation techniques.
[]
[]
[]
[]
2,582
2,583
DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations
http://arxiv.org/abs/2403.06951
Tianhao Qi, Shancheng Fang, Yanze Wu, Hongtao Xie, Jiawei Liu, Lang Chen, Qian He, Yongdong Zhang
2,403.06951
The diffusion-based text-to-image model harbors immense potential in transferring reference style. However, current encoder-based approaches significantly impair the text controllability of text-to-image models while transferring styles. In this paper, we introduce DEADiff to address this issue, using the following two strategies: 1) a mechanism to decouple the style and semantics of reference images. The decoupled feature representations are first extracted by Q-Formers which are instructed by different text descriptions. Then they are injected into mutually exclusive subsets of cross-attention layers for better disentanglement. 2) A non-reconstructive learning method. The Q-Formers are trained using paired images rather than the identical target, in which the reference image and the ground-truth image share the same style or semantics. We show that DEADiff attains the best visual stylization results and an optimal balance between the text controllability inherent in the text-to-image model and style similarity to the reference image, as demonstrated both quantitatively and qualitatively. Our project page is https://tianhao-qi.github.io/DEADiff/.
[]
[]
[]
[]
2,583
2,584
Instance-Adaptive and Geometric-Aware Keypoint Learning for Category-Level 6D Object Pose Estimation
http://arxiv.org/abs/2403.19527
Xiao Lin, Wenfei Yang, Yuan Gao, Tianzhu Zhang
2,403.19527
Category-level 6D object pose estimation aims to estimate the rotation, translation, and size of unseen instances within specific categories. In this area, dense correspondence-based methods have achieved leading performance. However, they do not explicitly consider the local and global geometric information of different instances, resulting in poor generalization ability to unseen instances with significant shape variations. To deal with this problem, we propose a novel Instance-Adaptive and Geometric-Aware Keypoint Learning method for category-level 6D object pose estimation (AG-Pose), which includes two key designs: (1) The first design is an Instance-Adaptive Keypoint Detection module, which can adaptively detect a set of sparse keypoints for various instances to represent their geometric structures. (2) The second design is a Geometric-Aware Feature Aggregation module, which can efficiently integrate the local and global geometric information into keypoint features. These two modules can work together to establish robust keypoint-level correspondences for unseen instances, thus enhancing the generalization ability of the model. Experimental results on the CAMERA25 and REAL275 datasets show that the proposed AG-Pose outperforms state-of-the-art methods by a large margin without category-specific shape priors.
[]
[]
[]
[]
2,584
2,585
Universal Semi-Supervised Domain Adaptation by Mitigating Common-Class Bias
http://arxiv.org/abs/2403.11234
Wenyu Zhang, Qingmu Liu, Felix Ong Wei Cong, Mohamed Ragab, Chuan-Sheng Foo
2,403.11234
Domain adaptation is a critical task in machine learning that aims to improve model performance on a target domain by leveraging knowledge from a related source domain. In this work, we introduce Universal Semi-Supervised Domain Adaptation (UniSSDA), a practical yet challenging setting where the target domain is partially labeled and the source and target label space may not strictly match. UniSSDA is at the intersection of Universal Domain Adaptation (UniDA) and Semi-Supervised Domain Adaptation (SSDA): the UniDA setting does not allow for fine-grained categorization of target private classes not represented in the source domain, while SSDA focuses on the restricted closed-set setting where source and target label spaces match exactly. Existing UniDA and SSDA methods are susceptible to common-class bias in UniSSDA settings, where models overfit to data distributions of classes common to both domains at the expense of private classes. We propose a new prior-guided pseudo-label refinement strategy to reduce the reinforcement of common-class bias due to pseudo-labeling, a common label propagation strategy in domain adaptation. We demonstrate the effectiveness of the proposed strategy on the benchmark datasets Office-Home, DomainNet, and VisDA. The proposed strategy attains the best performance across UniSSDA adaptation settings and establishes a new baseline for UniSSDA.
[]
[]
[]
[]
2,585
2,586
Exact Fusion via Feature Distribution Matching for Few-shot Image Generation
Yingbo Zhou, Yutong Ye, Pengyu Zhang, Xian Wei, Mingsong Chen
null
Few-shot image generation, as an important yet challenging visual task, still suffers from the trade-off between generation quality and diversity. According to the principle of feature-matching learning, existing fusion-based methods usually fuse different features by using similarity measurements or attention mechanisms, which may match features inaccurately and lead to artifacts in the texture and structure of generated images. In this paper, we propose an exact Fusion via Feature Distribution matching Generative Adversarial Network (F2DGAN) for few-shot image generation. The rationale behind this is that feature distribution matching is much more reliable than feature matching for exploring the statistical characteristics of the image feature space given limited real-world data. To model feature distributions from only a few examples for feature fusion, we design a novel variational feature distribution matching fusion module to perform exact fusion via empirical cumulative distribution functions. Specifically, we employ a variational autoencoder to transform deep image features into distributions and fuse different features exactly by applying histogram matching. Additionally, we formulate two effective losses to guide the matching process for better fitting our fusion strategy. Extensive experiments compared with state-of-the-art methods on three public datasets demonstrate the superiority of F2DGAN for few-shot image generation in terms of generation quality and diversity, and the effectiveness of data augmentation in downstream classification tasks.
[]
[]
[]
[]
2,586
2,587
CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
http://arxiv.org/abs/2308.07926
Hao Ouyang, Qiuyu Wang, Yuxi Xiao, Qingyan Bai, Juntao Zhang, Kecheng Zheng, Xiaowei Zhou, Qifeng Chen, Yujun Shen
2,308.07926
We present the content deformation field (CoDeF) as a new type of video representation, which consists of a canonical content field aggregating the static contents in the entire video and a temporal deformation field recording the transformations from the canonical image (i.e. rendered from the canonical content field) to each individual frame along the time axis. Given a target video, these two fields are jointly optimized to reconstruct it through a carefully tailored rendering pipeline. We advisedly introduce some regularizations into the optimization process, urging the canonical content field to inherit semantics (e.g. the object shape) from the video. With such a design, CoDeF naturally supports lifting image algorithms for video processing, in the sense that one can apply an image algorithm to the canonical image and effortlessly propagate the outcomes to the entire video with the aid of the temporal deformation field. We experimentally show that CoDeF is able to lift image-to-image translation to video-to-video translation and lift keypoint detection to keypoint tracking without any training. More importantly, thanks to our lifting strategy that deploys the algorithms on only one image, we achieve superior cross-frame consistency in processed videos compared to existing video-to-video translation approaches, and even manage to track non-rigid objects like water and smog. Code will be made publicly available.
[]
[]
[]
[]
2,587
2,588
QUADify: Extracting Meshes with Pixel-level Details and Materials from Images
Maximilian Frühauf, Hayko Riemenschneider, Markus Gross, Christopher Schroers
null
Despite exciting progress in automatic 3D reconstruction from images, excessive and irregular triangular faces in the resulting meshes still constitute a significant challenge when it comes to adoption in practical artist workflows. Therefore, we propose a method to extract regular quad-dominant meshes from posed images. More specifically, we generate a high-quality 3D model through decomposition into an easily editable quad-dominant mesh with pixel-level details such as displacement, materials, and lighting. To enable end-to-end learning of shape and quad topology, we QUADify a neural implicit representation using our novel differentiable re-meshing objective. Distinct from previous work, our method exploits artifact-free Catmull-Clark subdivision combined with vertex displacement to extract pixel-level details linked to the base geometry. Finally, we apply differentiable rendering techniques for material and lighting decomposition to optimize for image reconstruction. Our experiments show the benefits of end-to-end re-meshing, and that our method yields state-of-the-art geometric accuracy while providing lightweight meshes with displacements and textures that are directly compatible with professional renderers and game engines.
[]
[]
[]
[]
2,588
2,589
RecDiffusion: Rectangling for Image Stitching with Diffusion Models
http://arxiv.org/abs/2403.19164
Tianhao Zhou, Haipeng Li, Ziyi Wang, Ao Luo, Chen-Lin Zhang, Jiajun Li, Bing Zeng, Shuaicheng Liu
2,403.19164
Image stitching from different captures often results in non-rectangular boundaries, which are often considered unappealing. To solve non-rectangular boundaries, current solutions involve cropping, which discards image content; inpainting, which can introduce unrelated content; or warping, which can distort non-linear features and introduce artifacts. To overcome these issues, we introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling. This framework combines Motion Diffusion Models (MDM), which generate motion fields that effectively transition from the stitched image's irregular borders to a geometrically corrected intermediary, followed by Content Diffusion Models (CDM) for image detail refinement. Notably, our sampling process utilizes a weighted map to identify regions needing correction during each iteration of CDM. Our RecDiffusion ensures geometric accuracy and overall visual appeal, surpassing all previous methods in both quantitative and qualitative measures when evaluated on public benchmarks. Code is released at https://github.com/lhaippp/RecDiffusion.
[]
[]
[]
[]
2,589
2,590
Eclipse: Disambiguating Illumination and Materials using Unintended Shadows
http://arxiv.org/abs/2305.16321
Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan
2,305.16321
Decomposing an object's appearance into representations of its materials and the surrounding illumination is difficult, even when the object's 3D shape is known beforehand. This problem is especially challenging for diffuse objects: it is ill-conditioned because diffuse materials severely blur incoming light, and it is ill-posed because diffuse materials under high-frequency lighting can be indistinguishable from shiny materials under low-frequency lighting. We show that it is possible to recover precise materials and illumination---even from diffuse objects---by exploiting unintended shadows, like the ones cast onto an object by the photographer who moves around it. These shadows are a nuisance in most previous inverse rendering pipelines, but here we exploit them as signals that improve conditioning and help resolve material-lighting ambiguities. We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders who inadvertently cast shadows upon it.
[]
[]
[]
[]
2,590
2,591
Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields
http://arxiv.org/abs/2312.03203
Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, Achuta Kadambi
2,312.03203
3D scene representations have gained immense popularity in recent years. Methods that use Neural Radiance Fields are versatile for traditional tasks such as novel view synthesis. In recent times, some work has emerged that aims to extend the functionality of NeRF beyond view synthesis, for semantically aware tasks such as editing and segmentation using 3D feature field distillation from 2D foundation models. However, these methods have two major limitations: (a) they are limited by the rendering speed of NeRF pipelines, and (b) implicitly represented feature fields suffer from continuity artifacts reducing feature quality. Recently, 3D Gaussian Splatting has shown state-of-the-art performance on real-time radiance field rendering. In this work, we go one step further: in addition to radiance field rendering, we enable 3D Gaussian splatting on arbitrary-dimension semantic features via 2D foundation model distillation. This translation is not straightforward: naively incorporating feature fields in the 3DGS framework encounters significant challenges, notably the disparities in spatial resolution and channel consistency between RGB images and feature maps. We propose architectural and training changes to efficiently avert this problem. Our proposed method is general, and our experiments showcase novel view semantic segmentation, language-guided editing, and segment anything through learning feature fields from state-of-the-art 2D foundation models such as SAM and CLIP-LSeg. Across experiments, our distillation method is able to provide comparable or better results while being significantly faster to both train and render. Additionally, to the best of our knowledge, we are the first method to enable point and bounding-box prompting for radiance field manipulation by leveraging the SAM model. Project website at: https://feature-3dgs.github.io/
[]
[]
[]
[]
2,591
2,592
Balancing Act: Distribution-Guided Debiasing in Diffusion Models
http://arxiv.org/abs/2402.18206
Rishubh Parihar, Abhijnya Bhat, Abhipsa Basu, Saswat Mallick, Jogendra Nath Kundu, R. Venkatesh Babu
2,402.18206
Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability. These models are widely used for data augmentation and creative applications. However, DMs reflect the biases present in the training datasets. This is especially concerning in the context of faces, where the DM prefers one demographic subgroup over others (e.g. female vs. male). In this work, we present a method for debiasing DMs without relying on additional reference data or model retraining. Specifically, we propose Distribution Guidance, which enforces the generated images to follow the prescribed attribute distribution. To realize this, we build on the key insight that the latent features of the denoising UNet hold rich demographic semantics, and the same can be leveraged to guide debiased generation. We train an Attribute Distribution Predictor (ADP) - a small MLP that maps the latent features to the distribution of attributes. ADP is trained with pseudo labels generated from existing attribute classifiers. The proposed Distribution Guidance with ADP enables us to do fair generation. Our method reduces bias across single/multiple attributes and outperforms the baseline by a significant margin for unconditional and text-conditional diffusion models. Further, we present a downstream task of training a fair attribute classifier by augmenting the training set with our generated data.
[]
[]
[]
[]
2,592
2,593
Viewpoint-Aware Visual Grounding in 3D Scenes
Xiangxi Shi, Zhonghua Wu, Stefan Lee
null
Referring expressions for visual objects often include descriptions of relative spatial arrangements to other objects -- e.g. "to the right of" -- that depend on the point of view of the speaker. In 2D referring expression tasks, this viewpoint is captured unambiguously in the image. However, grounding expressions with such spatial language in 3D without viewpoint annotations can be ambiguous. In this paper, we investigate the significance of viewpoint information in 3D visual grounding -- introducing a model that explicitly predicts the speaker's viewpoint based on the referring expression and scene. We pretrain this model on a synthetically generated dataset that provides viewpoint annotations and then finetune on 3D referring expression datasets. Further, we introduce an auxiliary uniform object representation loss to encourage viewpoint invariance in learned object representations. We find that our proposed ViewPoint Prediction Network (VPP-Net) achieves state-of-the-art performance on ScanRefer, SR3D, and NR3D -- improving [email protected] by 1.06%, 0.60%, and 2.00%, respectively, compared to prior work.
[]
[]
[]
[]
2,593
2,594
4K4D: Real-Time 4D View Synthesis at 4K Resolution
http://arxiv.org/abs/2310.11448
Zhen Xu, Sida Peng, Haotong Lin, Guangzhao He, Jiaming Sun, Yujun Shen, Hujun Bao, Xiaowei Zhou
2,310.11448
This paper targets high-fidelity and real-time view synthesis of dynamic 3D scenes at 4K resolution. Recent methods on dynamic view synthesis have shown impressive rendering quality. However, their speed is still limited when rendering high-resolution images. To overcome this problem, we propose 4K4D, a 4D point cloud representation that supports hardware rasterization and network pre-computation to enable unprecedented rendering speed with a high rendering quality. Our representation is built on a 4D feature grid so that the points are naturally regularized and can be robustly optimized. In addition, we design a novel hybrid appearance model that significantly boosts the rendering quality while preserving efficiency. Moreover, we develop a differentiable depth peeling algorithm to effectively learn the proposed model from RGB videos. Experiments show that our representation can be rendered at over 400 FPS on the DNA-Rendering dataset at 1080p resolution and 80 FPS on the ENeRF-Outdoor dataset at 4K resolution using an RTX 4090 GPU, which is 30x faster than previous methods and achieves state-of-the-art rendering quality. Our project page is available at https://zju3dv.github.io/4k4d.
[]
[]
[]
[]
2,594
2,595
View-decoupled Transformer for Person Re-identification under Aerial-ground Camera Network
http://arxiv.org/abs/2403.14513
Quan Zhang, Lei Wang, Vishal M. Patel, Xiaohua Xie, Jianhaung Lai
2,403.14513
Existing person re-identification methods have achieved remarkable advances in appearance-based identity association across homogeneous cameras, such as ground-ground matching. However, as a more practical scenario, aerial-ground person re-identification (AGPReID) among heterogeneous cameras has received minimal attention. To alleviate the disruption of discriminative identity representation by dramatic view discrepancy, which is the most significant challenge in AGPReID, the view-decoupled transformer (VDT) is proposed as a simple yet effective framework. Two major components are designed in VDT to decouple view-related and view-unrelated features, namely hierarchical subtractive separation and orthogonal loss, where the former separates these two features inside the VDT and the latter constrains these two to be independent. In addition, we contribute a large-scale AGPReID dataset called CARGO, consisting of five/eight aerial/ground cameras, 5000 identities, and 108563 images. Experiments on two datasets show that VDT is a feasible and effective solution for AGPReID, surpassing the previous method on mAP/Rank1 by up to 5.0%/2.7% on CARGO and 3.7%/5.2% on AG-ReID, while keeping the same magnitude of computational complexity. Our project is available at https://github.com/LinlyAC/VDT-AGPReID
[]
[]
[]
[]
2,595
2,596
CRKD: Enhanced Camera-Radar Object Detection with Cross-modality Knowledge Distillation
http://arxiv.org/abs/2403.19104
Lingjun Zhao, Jingyu Song, Katherine A. Skinner
2,403.19104
In the field of 3D object detection for autonomous driving, LiDAR-Camera (LC) fusion is the top-performing sensor configuration. Still, LiDAR has a relatively high cost, which hinders adoption of this technology for consumer automobiles. Alternatively, camera and radar are commonly deployed on vehicles already on the road today, but the performance of Camera-Radar (CR) fusion falls behind LC fusion. In this work, we propose Camera-Radar Knowledge Distillation (CRKD) to bridge the performance gap between LC and CR detectors with a novel cross-modality KD framework. We use the Bird's-Eye-View (BEV) representation as the shared feature space to enable effective knowledge distillation. To accommodate the unique cross-modality KD path, we propose four distillation losses to help the student learn crucial features from the teacher model. We present extensive evaluations on the nuScenes dataset to demonstrate the effectiveness of the proposed CRKD framework. The project page for CRKD is https://song-jingyu.github.io/CRKD.
[]
[]
[]
[]
2,596
2,597
Differentiable Point-based Inverse Rendering
http://arxiv.org/abs/2312.02480
Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek
2,312.0248
We present Differentiable Point-based Inverse Rendering (DPIR), an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF. To this end, we adopt point-based rendering, eliminating the need for the multiple samplings per ray typical of volumetric rendering, thus significantly enhancing the speed of inverse rendering. To realize this idea, we devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance. The hybrid geometric representation enables fast rendering through point-based splatting while retaining the geometric details and stability inherent to SDF-based representations. The regularized basis-BRDF mitigates the ill-posedness of inverse rendering stemming from limited light-view angular samples. We also propose an efficient shadow detection method using point-based shadow map rendering. Our extensive evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint. Furthermore, our explicit point-based representation and rendering enables intuitive geometry and reflectance editing.
[]
[]
[]
[]
2,597
2,598
OED: Towards One-stage End-to-End Dynamic Scene Graph Generation
http://arxiv.org/abs/2405.16925
Guan Wang, Zhimin Li, Qingchao Chen, Yang Liu
2,405.16925
Dynamic Scene Graph Generation (DSGG) focuses on identifying visual relationships within the spatial-temporal domain of videos. Conventional approaches often employ multi-stage pipelines, which typically consist of object detection, temporal association, and multi-relation classification. However, these methods exhibit inherent limitations due to the separation of multiple stages, and independent optimization of these sub-problems may yield sub-optimal solutions. To remedy these limitations, we propose a one-stage end-to-end framework, termed OED, which streamlines the DSGG pipeline. This framework reformulates the task as a set prediction problem and leverages pair-wise features to represent each subject-object pair within the scene graph. Moreover, to address another challenge of DSGG, namely capturing temporal dependencies, we introduce a Progressively Refined Module (PRM) for aggregating temporal context without the constraints of additional trackers or handcrafted trajectories, enabling end-to-end optimization of the network. Extensive experiments conducted on the Action Genome benchmark demonstrate the effectiveness of our design. The code and models are available at https://github.com/guanw-pku/OED.
[]
[]
[]
[]
2,598
2,599
CoG-DQA: Chain-of-Guiding Learning with Large Language Models for Diagram Question Answering
Shaowei Wang, Lingling Zhang, Longji Zhu, Tao Qin, Kim-Hui Yap, Xinyu Zhang, Jun Liu
null
Diagram Question Answering (DQA) is a challenging task requiring models to answer natural language questions based on visual diagram contexts. It serves as a crucial basis for academic tutoring, technical support, and more practical applications. DQA poses significant challenges, such as the demand for domain-specific knowledge and the scarcity of annotated data, which restrict the applicability of large-scale deep models. Previous approaches have explored external knowledge integration through pre-training, but these methods are costly and can be limited by domain disparities. While Large Language Models (LLMs) show promise in question answering, there is still a gap in how to cooperate and interact with the diagram parsing process. In this paper, we introduce the Chain-of-Guiding Learning Model for Diagram Question Answering (CoG-DQA), a novel framework that effectively addresses DQA challenges. CoG-DQA leverages LLMs to guide diagram parsing tools (DPTs) through the guiding chains, enhancing the precision of diagram parsing while introducing rich background knowledge. Our experimental findings reveal that CoG-DQA surpasses all comparison models in various DQA scenarios, achieving an average accuracy enhancement exceeding 5% and peaking at 11% across four datasets. These results underscore CoG-DQA's capacity to advance the field of visual question answering and promote the integration of LLMs into specialized domains.
[]
[]
[]
[]
2,599