---
id: multi_objective
title: Multi-Objective Bayesian Optimization
---

BoTorch provides first-class support for Multi-Objective (MO) Bayesian Optimization (BO), including implementations of the `qLogNoisyExpectedHypervolumeImprovement` (qLogNEHVI) [1, 2], `qLogExpectedHypervolumeImprovement` (qLogEHVI), and `qLogNParEGO` [1] acquisition functions, as well as analytic `ExpectedHypervolumeImprovement` (EHVI) with gradients via auto-differentiation [3].

The goal in MOBO is to learn the Pareto front: the set of optimal trade-offs, where improving one objective means deteriorating another. BoTorch provides implementations of a number of acquisition functions designed specifically for the multi-objective setting, as well as generic interfaces for implementing new multi-objective acquisition functions.

## Multi-Objective Acquisition Functions

MOBO leverages many advantages of BoTorch to provide practical algorithms for computationally intensive and analytically intractable problems. For example, analytic EHVI has no known analytical gradient when there are more than two objectives, but BoTorch computes analytic gradients for free via auto-differentiation, regardless of the number of objectives [3].
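
As a minimal sketch of this (the toy data, model, and reference point below are illustrative assumptions, not recommendations), one can build the analytic `ExpectedHypervolumeImprovement` from a `NondominatedPartitioning` of the observed outcomes and obtain exact gradients through autograd:

```python
import torch
from botorch.acquisition.multi_objective.analytic import ExpectedHypervolumeImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    NondominatedPartitioning,
)
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy two-objective data on [0, 1]^2; both objectives are maximized.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.stack(
    [-(train_X**2).sum(-1), -((train_X - 0.5) ** 2).sum(-1)], dim=-1
)
model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=2))
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# The reference point is problem-specific and must be dominated by the front.
ref_point = [-2.0, -1.0]
partitioning = NondominatedPartitioning(
    ref_point=torch.tensor(ref_point, dtype=torch.double), Y=train_Y
)
ehvi = ExpectedHypervolumeImprovement(
    model=model, ref_point=ref_point, partitioning=partitioning
)

# Gradients of EHVI w.r.t. the candidate come for free via autograd.
X = torch.rand(1, 1, 2, dtype=torch.double, requires_grad=True)  # batch x q=1 x d
ehvi(X).sum().backward()
print(X.grad)  # exact gradient of EHVI at X
```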

For analytic and MC-based MOBO acquisition functions such as qLogNEHVI, qLogEHVI, and qLogNParEGO, BoTorch leverages GPU acceleration and quasi-second-order methods for acquisition optimization, enabling efficient computation in many practical scenarios [1, 3]. The MC-based acquisition functions support using the sample average approximation for rapid convergence [4].
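
As a sketch of a typical iteration (again with illustrative toy data; the reference point, batch size, and restart counts are assumptions, not recommendations), one can construct qLogNEHVI from a fitted model and jointly optimize a batch of candidates with `optimize_acqf`:

```python
import torch
from botorch.acquisition.multi_objective.logei import (
    qLogNoisyExpectedHypervolumeImprovement,
)
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy two-objective data on [0, 1]^2; both objectives are maximized.
# For GPU acceleration, move the tensors (and hence the model) to a CUDA device.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.stack(
    [-(train_X**2).sum(-1), -((train_X - 0.5) ** 2).sum(-1)], dim=-1
)
model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=2))
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qLogNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[-2.0, -1.0],  # problem-specific; must be dominated by the front
    X_baseline=train_X,
    prune_baseline=True,  # drop points with negligible probability of being Pareto-optimal
)

# Jointly optimize a batch of q=2 candidates with multi-start (quasi-second-order) L-BFGS-B.
candidates, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,
    num_restarts=10,
    raw_samples=128,
)
```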

All analytic MO acquisition functions derive from `MultiObjectiveAnalyticAcquisitionFunction`, and all MC-based acquisition functions derive from `MultiObjectiveMCAcquisitionFunction`. These abstract classes integrate easily with BoTorch's standard optimization machinery.
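
To show the subclassing pattern only, here is a hypothetical MC acquisition function, `qScalarizedSum`, which is not part of BoTorch: it scores a q-batch by the expected best weighted sum of the objectives. It is a structural sketch, not a substitute for the acquisition functions above:

```python
from torch import Tensor
from botorch.acquisition.multi_objective.base import (
    MultiObjectiveMCAcquisitionFunction,
)
from botorch.utils.transforms import concatenate_pending_points, t_batch_mode_transform


class qScalarizedSum(MultiObjectiveMCAcquisitionFunction):
    """Hypothetical example: expected best objective-sum across the q-batch."""

    @concatenate_pending_points
    @t_batch_mode_transform()
    def forward(self, X: Tensor) -> Tensor:
        posterior = self.model.posterior(X)
        samples = self.get_posterior_samples(posterior)  # sample x batch x q x m
        obj = self.objective(samples, X=X)  # apply the multi-output MC objective
        # Sum the objectives, take the best point per q-batch, average over MC samples.
        return obj.sum(dim=-1).max(dim=-1).values.mean(dim=0)
```

Because it follows the standard `forward` contract (t-batched inputs, one value per t-batch), an instance such as `qScalarizedSum(model=model)` can be handed directly to `optimize_acqf`.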

`qLogNParEGO` supports optimization via random scalarizations. In the batch setting, it uses a new random scalarization for each candidate [3]. Candidates are selected in a sequential greedy fashion, each with a different scalarization, via the `optimize_acqf_list` function, as sketched below.
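
The following sketch shows this sequential greedy pattern under illustrative assumptions (toy data, three candidates, random simplex weights). It builds one Chebyshev-scalarized `qLogNoisyExpectedImprovement` per candidate in the spirit of ParEGO (see the scalarization utilities below) and hands the list to `optimize_acqf_list`:

```python
import torch
from botorch.acquisition.logei import qLogNoisyExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.optim import optimize_acqf_list
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization
from botorch.utils.sampling import sample_simplex
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy two-objective data on [0, 1]^2; both objectives are maximized.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.stack(
    [-(train_X**2).sum(-1), -((train_X - 0.5) ** 2).sum(-1)], dim=-1
)
model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=2))
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# One acquisition function per candidate, each with its own random scalarization.
acqf_list = []
for weights in sample_simplex(d=2, n=3, dtype=torch.double):  # 3 candidates
    objective = GenericMCObjective(get_chebyshev_scalarization(weights, train_Y))
    acqf_list.append(
        qLogNoisyExpectedImprovement(
            model=model, X_baseline=train_X, objective=objective
        )
    )

# Sequential greedy selection: optimize each acquisition function in turn,
# conditioning on previously selected candidates as pending points.
candidates, acq_values = optimize_acqf_list(
    acq_function_list=acqf_list,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    num_restarts=10,
    raw_samples=128,
)
```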

For a more in-depth example using these acquisition functions, check out the Multi-Objective Bayesian Optimization tutorial notebook.

## Multi-Objective Utilities

BoTorch provides several utility functions for evaluating performance in MOBO, including `is_non_dominated`, a method for computing the Pareto front, and efficient box decomposition algorithms (`DominatedPartitioning` and `NondominatedPartitioning`) for partitioning the space dominated or non-dominated by the Pareto frontier into axis-aligned hyperrectangular boxes. These box decompositions (approximate or exact) can also be used to efficiently compute hypervolumes.

For exact box decompositions, BoTorch uses a two-step approach similar to that in [5]: (1) Algorithm 1 from [Lacour17] is used to find the local lower bounds for the maximization problem, and (2) those local lower bounds are treated as the Pareto frontier for the minimization problem, and [Lacour17] is applied again to partition the space dominated by that frontier. Approximate box decompositions are also supported using the algorithm from [6]. See Appendix F.4 in [3] for an analysis of approximate vs. exact box decompositions with EHVI.
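
A short sketch of these utilities on hand-checkable toy outcomes (the values below are arbitrary):

```python
import torch
from botorch.utils.multi_objective.box_decompositions.dominated import (
    DominatedPartitioning,
)
from botorch.utils.multi_objective.pareto import is_non_dominated

# Toy outcomes for two maximization objectives.
Y = torch.tensor(
    [[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.5, 1.5]], dtype=torch.double
)

# Boolean mask of the Pareto-optimal points ([1.5, 1.5] is dominated by [2, 2]).
pareto_mask = is_non_dominated(Y)
pareto_Y = Y[pareto_mask]

# Decompose the region between the reference point and the Pareto front into
# axis-aligned boxes and compute its hypervolume.
ref_point = torch.tensor([0.0, 0.0], dtype=torch.double)
bd = DominatedPartitioning(ref_point=ref_point, Y=Y)
hv = bd.compute_hypervolume().item()  # 6.0 for this front and reference point
```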

Additionally, variations on ParEGO can be trivially implemented using an augmented Chebyshev scalarization as the objective with an EI-type single-objective acquisition function such as `qLogNoisyExpectedImprovement`. The `get_chebyshev_scalarization` convenience function generates these scalarizations.
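
For example (with stand-in outcomes and arbitrary equal weights):

```python
import torch
from botorch.acquisition.objective import GenericMCObjective
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization

# Augmented Chebyshev scalarization of two objectives; Y_observed is used to
# normalize the objectives to comparable scales.
Y_observed = torch.rand(20, 2, dtype=torch.double)  # stand-in for real outcomes
weights = torch.tensor([0.5, 0.5], dtype=torch.double)
scalarization = get_chebyshev_scalarization(weights=weights, Y=Y_observed)

# Wrap as an MC objective for a single-objective acquisition function, e.g.
# qLogNoisyExpectedImprovement(model, X_baseline, objective=objective).
objective = GenericMCObjective(scalarization)
```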

## Footnotes

  1. S. Daulton, M. Balandat, and E. Bakshy. Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement. Advances in Neural Information Processing Systems 34, 2021. paper

  2. S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy. Unexpected Improvements to Expected Improvement for Bayesian Optimization. Advances in Neural Information Processing Systems 36, 2023. paper. "Log" variants of acquisition functions, such as `qLogNoisyExpectedHypervolumeImprovement`, offer improved numerics compared to older counterparts such as `qNoisyExpectedHypervolumeImprovement`.

  3. S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020. paper

  4. M. Balandat, B. Karrer, D. R. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020. paper

  5. K. Yang, M. Emmerich, A. Deutz, et al. Efficient computation of expected hypervolume improvement using box decomposition algorithms. J Glob Optim 75, 2019. paper

  6. I. Couckuyt, D. Deschrijver and T. Dhaene. Towards Efficient Multiobjective Optimization: Multiobjective statistical criterions. IEEE Congress on Evolutionary Computation, Brisbane, QLD, 2012. paper