Nov. 19, 2025

12:00 p.m. – 1:00 p.m. CT

To be announced

AI agents operating in the real world must navigate a constant scarcity of data and adapt to ever-changing environments. Effective interactive decision-making requires more than knowledge distillation: it demands a deep understanding of uncertainty and the ability to actively reduce it through information gathering. Yet despite impressive performance on standard knowledge tasks, state-of-the-art systems still falter at this. OpenAI, for instance, has acknowledged that its agentic system Deep Research “often fails to convey uncertainty accurately.” This talk presents a series of recent works addressing the core challenge of uncertainty quantification in natural-language-based decision-making. Rather than modeling hidden environment parameters, we frame uncertainty as stemming from unknown future outcomes and quantify it via autoregressive sequence generation: predicting the next outcome step by step from past interactions. Our approach leverages in-context learning to incorporate new information on the fly, avoiding the complexity of posterior inference and enabling scalable deployment across unstructured domains such as adaptive student assessments with text and image inputs. We formalize this methodology as a reduction from online decision-making to offline next-outcome prediction, allowing us to harness the full power of large-scale datasets and compute infrastructure already optimized for sequence prediction in interactive AI systems.
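To make the central mechanism concrete, here is a minimal Python sketch of decision-making via autoregressive generation of missing outcomes, in the spirit of Cai et al. (2024). Everything here is illustrative: the Laplace-style predictor stands in for a trained sequence model, and the two-armed Bernoulli bandit, the function names, and the imputation horizon are assumptions for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next_outcome_prob(history):
    """Hypothetical stand-in for a trained autoregressive sequence model:
    P(next outcome = 1 | past outcomes). A Laplace-smoothed frequency is
    used here only so the sketch runs; it is a valid exchangeable
    one-step-ahead predictor, not the talk's actual model."""
    return (sum(history) + 1) / (len(history) + 2)

def generate_missing_outcomes(history, horizon):
    """Autoregressively impute an action's unknown future outcomes:
    sample the next outcome, append it, and condition on it for the
    following step. Uncertainty lives in the generated sequence rather
    than in latent environment parameters."""
    seq = list(history)
    for _ in range(horizon):
        p = predict_next_outcome_prob(seq)
        seq.append(int(rng.random() < p))
    return seq[len(history):]

def choose_action(histories, horizon=50):
    """One exploration step: for each action, generate a completed future
    and act greedily on the imputed means (a posterior-sampling analogue
    that needs no explicit posterior inference)."""
    imputed_means = []
    for h in histories:
        future = generate_missing_outcomes(h, horizon)
        imputed_means.append(np.mean(list(h) + future))
    return int(np.argmax(imputed_means))

# Toy bandit (assumed for illustration): two actions with unknown
# Bernoulli reward rates 0.3 and 0.6.
true_rates = [0.3, 0.6]
histories = [[], []]
for t in range(200):
    a = choose_action(histories)
    reward = int(rng.random() < true_rates[a])
    histories[a].append(reward)

print("pulls per action:", [len(h) for h in histories])
```

Acting greedily on a single generated completion mirrors Thompson sampling, but the randomness comes from sampled future outcomes rather than from a posterior over latent parameters, which is exactly the reduction from online decision-making to offline next-outcome prediction described above.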

References:

Active Exploration via Autoregressive Generation of Missing Data. Cai, Namkoong, Russo, and Zhang (2024).
Exchangeable Sequence Models Quantify Uncertainty Over Latent Concepts. Ye and Namkoong (2024).
Architectural and Inferential Inductive Biases for Exchangeable Sequence Modeling. Mittal, Li, Yen, Guetta, and Namkoong (2025).
Adaptive Elicitation of Latent Information Using Natural Language. Wang, Zollo, Zemel, and Namkoong (2025).