Sample Efficient Demonstration Selection for In-Context Learning
Kiran Purohit*1
Venktesh V*2
Sourangshu Bhattacharya1
Avishek Anand2
1 IIT Kharagpur
2 TU Delft
ICML 2025

Abstract

The in-context learning paradigm with LLMs has been instrumental in advancing a wide range of natural language processing tasks. The selection of few-shot examples (exemplars / demonstration samples) is essential for constructing effective prompts under context-length budget constraints. In this paper, we formulate the exemplar selection task as a top-m best arms identification problem. A key challenge in this setup is the exponentially large number of arms that need to be evaluated to identify the m best arms. We propose CASE (Challenger Arm Sampling for Exemplar selection), a novel sample-efficient selective exploration strategy that maintains a shortlist of “challenger” arms, which are current candidates for the top-m arms. In each iteration, only one arm from this shortlist or the current top-m set is pulled, thereby reducing sample complexity and, consequently, the number of LLM evaluations. Furthermore, we model the scores of exemplar subsets (arms) using a parameterized linear scoring function, leading to a stochastic linear bandits setting. CASE achieves efficiency gains of up to a 7× speedup in runtime while requiring 7× fewer LLM calls (an 87% reduction) without sacrificing performance compared to state-of-the-art exemplar selection methods.
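
The sketch below is a minimal illustration (not the released CASE implementation) of the core idea: top-m best-arm identification with a challenger shortlist under a linear reward model, pulling exactly one arm per iteration. The LLM evaluation is replaced here by a synthetic noisy linear reward, and the confidence-bound rule, constants, and all variable names are assumptions made for illustration only.

```python
# Minimal sketch of top-m arm identification with a challenger shortlist
# under a linear reward model. The "LLM evaluation" is a synthetic noisy
# linear reward; all names and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Each arm is a candidate exemplar subset, represented by a feature vector
# (e.g., an embedding of the concatenated exemplars).
n_arms, dim, m = 50, 8, 4          # number of arms, feature dim, size of top-m set
X = rng.normal(size=(n_arms, dim)) # arm feature vectors
theta_true = rng.normal(size=dim)  # unknown linear scoring parameter

def pull(arm):
    """Stand-in for one LLM evaluation of an exemplar subset (noisy linear reward)."""
    return X[arm] @ theta_true + 0.1 * rng.normal()

# Ridge-regression statistics for the linear bandit.
lam = 1.0
A = lam * np.eye(dim)   # design matrix  A = lam*I + sum_t x_t x_t^T
b = np.zeros(dim)       # response vector b = sum_t r_t x_t

def confidence_bounds(beta=2.0):
    """Estimated scores with optimistic (UCB) and pessimistic (LCB) bounds."""
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    mean = X @ theta_hat
    width = beta * np.sqrt(np.einsum("ij,jk,ik->i", X, A_inv, X))
    return mean, mean + width, mean - width

for t in range(500):
    mean, ucb, lcb = confidence_bounds()
    top_m = np.argsort(mean)[-m:]                    # current best-m estimate
    rest = np.setdiff1d(np.arange(n_arms), top_m)
    challengers = rest[np.argsort(ucb[rest])[-m:]]   # shortlist of challenger arms

    # Stop once every challenger's optimistic score falls below the weakest
    # top-m arm's pessimistic score: the top-m set is confidently identified.
    if ucb[challengers].max() <= lcb[top_m].min():
        break

    # Pull exactly one arm per iteration: the more ambiguous of the weakest
    # current top-m arm and the strongest challenger.
    weakest = top_m[np.argmin(lcb[top_m])]
    strongest = challengers[np.argmax(ucb[challengers])]
    arm = max(weakest, strongest, key=lambda a: ucb[a] - lcb[a])

    r = pull(arm)                  # one "LLM call"
    A += np.outer(X[arm], X[arm])  # update ridge statistics
    b += r * X[arm]

print("identified top-m arms:", sorted(top_m.tolist()))
```

The point of the sketch is the sampling pattern: only the shortlisted challengers and the current top-m set compete for the single pull in each round, which is what keeps the number of (simulated) LLM evaluations small.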

Paper & Code

Kiran Purohit*, Venktesh V*, Sourangshu Bhattacharya, Avishek Anand
Sample Efficient Demonstration Selection for In-Context Learning
ICML, 2025
[PDF] [Code] [Slides] [Poster] [Video]