Stateful Memory-Augmented Transformers for Dialogue Modeling
Abstract
Transformer encoder-decoder models have shown impressive performance in dialogue modeling. However, because Transformers are inefficient at processing long sequences, the dialogue history often needs to be truncated. To address this problem, we propose a new memory-augmented Transformer that is compatible with existing pre-trained encoder-decoder models and enables efficient preservation of dialogue history. It incorporates a separate memory module alongside the pre-trained Transformer to effectively exchange information between the memory states and the current input context. We evaluate our model on three dialogue datasets and two language modeling datasets. Experimental results show that our method achieves superior efficiency and performance compared to other pre-trained Transformer baselines.
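The abstract describes a separate memory module that exchanges information with the current input context of a pre-trained encoder-decoder. Below is a minimal sketch of one plausible way such an exchange could be implemented, using cross-attention in both directions (memory reads from context, context reads from memory). All names, slot counts, and design details here (e.g., `MemoryModule`, `num_memory_slots`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a stateful memory module for dialogue modeling.
# Assumption: the memory and the current context exchange information via
# two cross-attention passes; this is NOT the paper's exact architecture.
import torch
import torch.nn as nn


class MemoryModule(nn.Module):
    def __init__(self, d_model: int = 768, num_memory_slots: int = 32, num_heads: int = 8):
        super().__init__()
        # Learned initial memory states, carried across dialogue turns.
        self.init_memory = nn.Parameter(torch.randn(num_memory_slots, d_model) * 0.02)
        # Write direction: memory slots attend over the current context.
        self.mem_update = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Read direction: context tokens attend over the updated memory.
        self.ctx_update = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm_mem = nn.LayerNorm(d_model)
        self.norm_ctx = nn.LayerNorm(d_model)

    def initial_state(self, batch_size: int) -> torch.Tensor:
        # Memory state used before any dialogue history has been seen.
        return self.init_memory.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, memory: torch.Tensor, context: torch.Tensor):
        # memory:  (batch, num_memory_slots, d_model) -- state from earlier turns
        # context: (batch, seq_len, d_model)          -- encoder outputs for the current turn
        mem_upd, _ = self.mem_update(memory, context, context)
        new_memory = self.norm_mem(memory + mem_upd)
        ctx_upd, _ = self.ctx_update(context, new_memory, new_memory)
        new_context = self.norm_ctx(context + ctx_upd)
        return new_memory, new_context
```

In this sketch, a pre-trained encoder-decoder (e.g., BART or T5) would encode only the current turn, the memory-enriched context would be passed to the decoder, and `new_memory` would be carried forward to the next turn, so the history no longer has to be re-encoded or truncated.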
Community
The following similar papers were recommended by the Semantic Scholar API:
- Investigating Decoder-only Large Language Models for Speech-to-text Translation (2024)
- A Unified Data Augmentation Framework for Low-Resource Multi-Domain Dialogue Generation (2024)
- S3: A Simple Strong Sample-effective Multimodal Dialog System (2024)
- Synthesizing Conversations from Unlabeled Documents using Automatic Response Segmentation (2024)
- Hello Again! LLM-powered Personalized Agent for Long-term Dialogue (2024)