arxiv:2406.08598

Language Model Council: Benchmarking Foundation Models on Highly Subjective Tasks by Consensus

Published on Jun 12
· Submitted by justinxzhao on Jun 14

Abstract

The rapid advancement of Large Language Models (LLMs) necessitates robust and challenging benchmarks. Leaderboards like Chatbot Arena rank LLMs based on how well their responses align with human preferences. However, many tasks, such as those related to emotional intelligence, creative writing, or persuasiveness, are highly subjective and often lack majoritarian human agreement. Judges may have irreconcilable disagreements about what constitutes a better response. To address the challenge of ranking LLMs on highly subjective tasks, we propose a novel benchmarking framework, the Language Model Council (LMC). The LMC operates through a democratic process to: 1) formulate a test set through equal participation, 2) administer the test among council members, and 3) evaluate responses as a collective jury. We deploy a council of 20 of the newest LLMs on an open-ended emotional intelligence task: responding to interpersonal dilemmas. Our results show that the LMC produces rankings that are more separable, robust, and less biased than those from any individual LLM judge, and more consistent with a human-established leaderboard than other benchmarks are.

Community

Paper author and submitter

LLM evals are really hard, so what happens if we let LLMs benchmark themselves via a democratic process?

The Language Model Council operates through a fully democratic process to: 1) formulate a test set through equal participation, 2) administer the test among council members, and 3) evaluate responses as a collective jury.
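As an illustration of step 3, here is a minimal sketch of one way a collective jury could aggregate scores into a consensus ranking: each council member scores every other member's response, and members are ranked by their mean score across judges. The function name, score scale, and the exclusion of self-judgment are assumptions for illustration, not the paper's exact aggregation protocol.

```python
from collections import defaultdict

def council_ranking(judge_scores):
    """Aggregate per-judge scores into a consensus ranking.

    judge_scores: dict mapping judge name -> {model name: score},
    where each judge rates every other council member's response.
    Returns model names sorted by mean received score, best first.
    """
    totals = defaultdict(float)
    for judge, scores in judge_scores.items():
        for model, score in scores.items():
            if model != judge:  # a member does not judge its own response
                totals[model] += score
    # Mean score = total received / number of judges other than oneself
    counts = {m: sum(1 for j in judge_scores if j != m) for m in totals}
    means = {m: totals[m] / counts[m] for m in totals}
    return sorted(means, key=means.get, reverse=True)

# Toy example with three hypothetical council members (scores out of 10)
scores = {
    "model_a": {"model_b": 7, "model_c": 5},
    "model_b": {"model_a": 6, "model_c": 4},
    "model_c": {"model_a": 8, "model_b": 6},
}
ranking = council_ranking(scores)  # ["model_a", "model_b", "model_c"]
```

A mean over all judges is only the simplest choice; the same structure accommodates pairwise comparisons or other voting rules in place of raw score averaging.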

A Council of 20 of the newest LLMs (from 8 organizations across 4 countries) compete and then judge each other on a highly subjective emotional intelligence task: responding to interpersonal dilemmas.

To our surprise, Qwen-1.5-110B emerges as the elected leader, surpassing GPT-4o which ranks second. See the full council-determined ranking on the website. Congrats to Alibaba for getting to the top of our unique democratic LLM leaderboard!


Models citing this paper: 0

Datasets citing this paper: 1

Spaces citing this paper: 2

Collections including this paper: 0