arxiv:2401.01325

LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

Published on Jan 2 · Submitted by akhaliq on Jan 3
Abstract

This work elicits LLMs' inherent ability to handle long contexts without fine-tuning. The limited length of training sequences may restrict the application of Large Language Models (LLMs) to long input sequences at inference time. In this work, we argue that existing LLMs themselves have inherent capabilities for handling long contexts. Based on this argument, we suggest extending LLMs' context window by themselves to fully utilize this inherent ability. We propose Self-Extend to stimulate LLMs' long-context handling potential. The basic idea is to construct bi-level attention information: the group level and the neighbor level. The two levels are computed with the original model's self-attention, which means the proposed method does not require any training. With only four lines of code modification, the proposed method can effortlessly extend existing LLMs' context window without any fine-tuning. We conduct comprehensive experiments, and the results show that the proposed method can effectively extend the length of existing LLMs' context windows.
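For intuition, below is a minimal sketch of the bi-level (neighbor + group) relative-position mapping the abstract describes, viewed through RoPE-style relative positions. The function name, group size, and neighbor window are illustrative assumptions, not the authors' exact four-line patch.

```python
import numpy as np

def self_extend_rel_pos(seq_len: int, group_size: int = 8, neighbor_window: int = 256) -> np.ndarray:
    """Illustrative merged relative-position map for bi-level attention.

    Inside `neighbor_window`, the exact relative position i - j is kept
    (neighbor level). Beyond it, absolute positions are floor-divided by
    `group_size` (group level), and the result is shifted so the two levels
    join at the window boundary.
    """
    i = np.arange(seq_len)[:, None]          # query positions
    j = np.arange(seq_len)[None, :]          # key positions
    dist = i - j                             # standard relative positions

    grouped = i // group_size - j // group_size
    shift = neighbor_window - neighbor_window // group_size
    merged = np.where(dist <= neighbor_window, dist, grouped + shift)
    return np.tril(merged)                   # keep only the causal part

# A long input is remapped into a much smaller range of relative positions,
# so the model never attends over a position it was not pretrained on.
pos = self_extend_rel_pos(2048)
print(pos.max())   # far below 2047
```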

Community

The authors should reference ReRoPE (see eq. (3) of https://kexue.fm/archives/9708 or https://normxu.github.io/Rethinking-Rotary-Position-Embedding-3/). The modified attention introduced in this paper is exactly the same except for the floor operation.
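For readers comparing the two, here is a toy, distance-only sketch of both remappings. The helper names and constants are hypothetical; in the actual methods the mapping is applied inside RoPE, and Self-Extend floors the absolute positions before taking differences rather than flooring the distance itself.

```python
def leaky_rerope_pos(d: int, w: int = 512, k: float = 8.0) -> float:
    """Leaky-ReRoPE-style map: exact relative positions inside the
    window w, linearly compressed by 1/k beyond it."""
    return d if d < w else w + (d - w) / k

def self_extend_pos(d: int, w: int = 512, g: int = 8) -> int:
    """Self-Extend seen through the same lens: the same piecewise map,
    but the compressed part is floored onto integer (grouped) positions."""
    return d if d < w else w + (d - w) // g

for d in (100, 601, 5001):
    print(d, leaky_rerope_pos(d), self_extend_pos(d))
# 100  -> 100     vs 100   (below the window: unchanged in both)
# 601  -> 523.125 vs 523   (they differ only by the floor)
# 5001 -> 1073.125 vs 1073
```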

ReRoPE also tests a few extra regimes that may be interesting to try out:

  1. Scaling with a temperature of log(n)
  2. LeakyReLU

#1 in particular is also used by https://huggingface.co/papers/2401.07004 (well, sqrt(log(n))), with the interpretation that it helps balance the information entropy in self-attention as sequences grow longer. Su, on the other hand, cites https://arxiv.org/abs/2202.12172 (section 5.3, log-length scaled attention) as his inspiration for the log(n) scaling factor in https://normxu.github.io/Rethinking-Rotary-Position-Embedding/.
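As a concrete illustration of regime #1, here is a minimal sketch of length-dependent temperature scaling applied to the pre-softmax attention logits. The names, the `train_len` normalization for the log(n) variant, and the toy shapes are assumptions for illustration, not the exact formulation of either reference.

```python
import math
import numpy as np

def scaled_logits(q: np.ndarray, k: np.ndarray,
                  train_len: int = 1024, variant: str = "log") -> np.ndarray:
    """Pre-softmax attention logits with a length-dependent temperature.

    variant="log":  multiply by log(n) / log(train_len), a common practical
                    form of the log(n) scaling (kicks in once n > train_len).
    variant="sqrt": multiply by sqrt(log(n)), in the spirit of 2401.07004.
    """
    n, d = q.shape
    logits = q @ k.T / math.sqrt(d)              # standard scaled dot product
    if variant == "log":
        temp = max(1.0, math.log(n) / math.log(train_len))
    else:
        temp = math.sqrt(math.log(n))
    # Sharper logits on longer sequences keep attention entropy from growing
    # unboundedly as more keys compete for the same probability mass.
    return logits * temp

q = np.random.randn(2048, 64)
k = np.random.randn(2048, 64)
print(scaled_logits(q, k).shape)   # (2048, 2048)
```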

Extend Large Language Models Without Fine-Tuning: Introducing SelfExtend!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 0


Datasets citing this paper 1

Spaces citing this paper 1

Collections including this paper 25