Eliminating Position Bias of Language Models: A Mechanistic Approach
Paper • 2407.01100 • Published
Note They discuss position bias: the order in which items are fed to an LLM influences its answers, an effect the paper traces to the Transformer architecture itself (causal attention and positional encodings). They then propose PINE (Position-INvariant inferencE), which eliminates position bias without retraining or changing the model architecture; instead, it changes how the model processes inputs during inference.