I study the intersection of incentives, algorithms, and learning to design decision-making systems that can plan, reason, and collaborate as effectively as they generalize. With an interdisciplinary background in computer science, economics, and statistics, my research examines how incentive and reward design, algorithmic design, and learning principles interact to produce intelligent, adaptive systems.
Currently, my work focuses on how foundation models, such as large language models, can be harnessed for sequential decision making. I explore ways to integrate ideas from reinforcement learning, test-time computation, and adaptive search to enable autonomous agents that plan, learn, and generalize in complex environments. I'm particularly fascinated by how such models can achieve self-improvement and post-training adaptation—hallmarks of truly general intelligence.
I enjoy collaborations—feel free to reach out if you've got an interesting problem to chat about!
"Know what you know and know what you do not know. That is true wisdom."
In modern terms: know the known knowns, the known unknowns, and the unknown unknowns. This idea guides my research and reflects what I see as the central challenge in building intelligent systems: understanding and expanding the boundaries of one's own knowledge.
Outside of research, I enjoy playing and designing board games, reading science fiction, composing electronic music, playing grand strategy games, fencing, and squash. I find well-designed games to be elegant systems: rich sources of insight into planning, reasoning, and interaction.