Publications
2024
- Practical Bayesian Algorithm Execution via Posterior Sampling. Chu Xin Cheng*, Raul Astudillo*, Thomas Desautels, and Yisong Yue. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
We consider the Bayesian algorithm execution framework, where the goal is to select points for evaluating an expensive function to best infer a property of interest. By making the key observation that the property of interest for many tasks is a target set of points defined in terms of the function, we derive a simple yet effective and scalable posterior sampling algorithm, termed PS-BAX. Our approach addresses a broad range of problems, including many optimization variants and level-set estimation. Experiments across a diverse set of tasks show that PS-BAX achieves competitive performance against standard baselines, while being significantly faster, simpler to implement, and easily parallelizable. In addition, we show that PS-BAX is asymptotically consistent under mild regularity conditions. Consequently, our work yields new insights into posterior sampling, broadening its application scope and providing a strong baseline for future exploration.
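The PS-BAX loop described in the abstract can be sketched in a few lines: draw one function sample from the posterior, run the base algorithm on that sample to compute the target set, and query a point from that set. Everything below (the 1-D RBF kernel, the candidate grid, and the `ps_bax_step` / `argmax_alg` names) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(a, b, ls=0.3):
    # Squared-exponential kernel on 1-D inputs
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-6):
    # Standard GP posterior mean and covariance at candidate points
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    Kss = rbf_kernel(X_cand, X_cand)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_train
    cov = Kss - Ks.T @ Kinv @ Ks
    return mu, cov

def ps_bax_step(X_train, y_train, X_cand, base_algorithm, rng):
    # 1) Draw one function sample from the posterior,
    # 2) run the base algorithm on the sample to get the target set,
    # 3) pick the next query point from that set.
    mu, cov = gp_posterior(X_train, y_train, X_cand)
    f_sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(mu)))
    target_set = base_algorithm(X_cand, f_sample)
    return target_set[rng.integers(len(target_set))]

# Example base algorithm: optimization, where the target set
# is the argmax of the sampled function.
argmax_alg = lambda X, f: X[[np.argmax(f)]]

rng = np.random.default_rng(0)
X_cand = np.linspace(0.0, 1.0, 50)
f_true = lambda x: np.sin(3 * x)
X_train = np.array([0.1, 0.9])
y_train = f_true(X_train)
x_next = ps_bax_step(X_train, y_train, X_cand, argmax_alg, rng)
```

Swapping `argmax_alg` for a level-set or top-k routine changes only the base algorithm, which is what makes the posterior sampling scheme apply across task variants.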
- Improving sample efficiency of high dimensional Bayesian optimization with MCMC. Zeji Yi*, Yunyue Wei*, Chu Xin Cheng*, Kaibo He, and Yanan Sui. In Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.
Sequential optimization methods are often confronted with the curse of dimensionality in high-dimensional spaces. Current approaches under the Gaussian process framework are still burdened by the computational complexity of tracking Gaussian process posteriors and must either partition the optimization problem into small regions to ensure exploration or assume an underlying low-dimensional structure. Based on the idea of transitioning candidate points toward more promising positions, we propose a new method that uses Markov chain Monte Carlo to efficiently sample from an approximated posterior. We provide theoretical guarantees of its convergence in the Gaussian process Thompson sampling setting. We also show experimentally that both the Metropolis-Hastings and Langevin dynamics versions of our algorithm outperform state-of-the-art methods on high-dimensional sequential optimization and reinforcement learning benchmarks.
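A minimal sketch of the core idea, under a toy stand-in for one posterior function sample: a Metropolis-Hastings random walk targets a tempered density exp(f(x)/T), which concentrates near the sample's maximizer as T shrinks, so candidate points are transitioned toward high-value regions. The names and parameters (`mh_climb`, the temperature 0.05) are hypothetical, not the paper's settings.

```python
import numpy as np

def mh_climb(x, log_density, step=0.1, n_steps=100, rng=None):
    # Metropolis-Hastings random walk that transitions a candidate point
    # toward regions of higher (approximate) posterior density;
    # returns the best point visited along the chain.
    rng = rng or np.random.default_rng()
    lp = log_density(x)
    best, best_lp = x, lp
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_density(prop)
        if np.log(rng.random()) < lp_prop - lp:  # MH accept/reject
            x, lp = prop, lp_prop
            if lp > best_lp:
                best, best_lp = x, lp
    return best

# Toy stand-in for one posterior function sample; the tempered density
# exp(f(x) / T) places most of its mass near the sample's maximizer.
f_sample = lambda x: -np.sum((x - 0.7) ** 2)
log_density = lambda x: f_sample(x) / 0.05  # temperature T = 0.05

rng = np.random.default_rng(1)
x0 = np.zeros(5)  # start far from the optimum in 5 dimensions
x_new = mh_climb(x0, log_density, rng=rng)
```

Replacing the random-walk proposal with a gradient-informed one gives the Langevin-dynamics variant mentioned in the abstract; the accept/reject structure stays the same.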
- Preferential Multi-Objective Bayesian Optimization. Raul Astudillo, Kejun Li, Maegan Tucker, Chu Xin Cheng, Aaron D. Ames, and Yisong Yue. 2024.
Preferential Bayesian optimization (PBO) is a framework for optimizing a decision-maker’s latent preferences over available design choices. While preferences often involve multiple conflicting objectives, existing work in PBO assumes that preferences can be encoded by a single objective function. For example, in robotic assistive devices, technicians often attempt to maximize user comfort while simultaneously minimizing mechanical energy consumption for longer battery life. Similarly, in autonomous driving policy design, decision-makers wish to understand the trade-offs between multiple safety and performance attributes before committing to a policy. To address this gap, we propose the first framework for PBO with multiple objectives. Within this framework, we present dueling scalarized Thompson sampling (DSTS), a multi-objective generalization of the popular dueling Thompson algorithm, which may be of interest beyond the PBO setting. We evaluate DSTS across four synthetic test functions and two simulated exoskeleton personalization and driving policy design tasks, showing that it outperforms several benchmarks. Finally, we prove that DSTS is asymptotically consistent. As a direct consequence, this result provides, to our knowledge, the first convergence guarantee for dueling Thompson sampling in the PBO setting.
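A hedged sketch of the dueling step: each arm of the duel is chosen by sampling scalarization weights from the simplex, drawing one sample per objective from its posterior, and maximizing the scalarized draw over the candidates. The toy "posterior samplers" and the `dsts_duel` name below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dsts_duel(posterior_samplers, X_cand, rng):
    # Dueling scalarized Thompson sampling (sketch): pick each of the
    # two duel candidates by (i) sampling random scalarization weights
    # on the simplex, (ii) sampling one function per objective from its
    # posterior, (iii) maximizing the scalarized sample over candidates.
    duel = []
    for _ in range(2):
        w = rng.dirichlet(np.ones(len(posterior_samplers)))
        fs = [sampler(X_cand, rng) for sampler in posterior_samplers]
        scalarized = sum(wi * fi for wi, fi in zip(w, fs))
        duel.append(X_cand[np.argmax(scalarized)])
    return tuple(duel)  # pair shown to the decision-maker to compare

# Toy "posterior samplers": two conflicting objectives (optima at 0.2
# and 0.8) plus Gaussian noise standing in for posterior uncertainty.
f1 = lambda X, rng: -(X - 0.2) ** 2 + 0.1 * rng.standard_normal(len(X))
f2 = lambda X, rng: -(X - 0.8) ** 2 + 0.1 * rng.standard_normal(len(X))

rng = np.random.default_rng(2)
X_cand = np.linspace(0.0, 1.0, 101)
a, b = dsts_duel([f1, f2], X_cand, rng)
```

The random scalarization is what lets the duel pairs spread across the trade-off surface rather than collapsing onto a single objective's optimum.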