thanhnguyentang/drbqo

Code for our AISTATS 2020 paper "Distributionally Robust Bayesian Quadrature Optimization"

Introduction

This is the code for our AISTATS 2020 paper "Distributionally Robust Bayesian Quadrature Optimization".
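To give a sense of the problem the paper addresses: standard BQO maximizes the Monte Carlo estimate of an expected objective over a fixed sample set, while DRBQO maximizes the objective under the most adversarial distribution in a confidence set around the empirical distribution. The sketch below contrasts the two estimates at a single candidate point using the CVaR surrogate (mean of the lowest alpha-fraction of values) as an illustrative worst-case reweighting; the exact confidence set and acquisition used in this repository may differ.

```python
import numpy as np

def empirical_mean(values):
    """Standard BQO objective: Monte Carlo estimate of E[f(x, w)]
    over the fixed i.i.d. sample set."""
    return float(np.mean(values))

def worst_case_mean(values, alpha=0.3):
    """Illustrative distributionally robust estimate: the adversary
    shifts mass toward unfavorable samples. Here we use the CVaR
    surrogate -- the mean of the lowest alpha-fraction of values --
    a standard DRO relaxation (not necessarily the confidence set
    used in the paper)."""
    v = np.sort(np.asarray(values, dtype=float))
    k = max(1, int(np.ceil(alpha * len(v))))
    return float(np.mean(v[:k]))

# f(x, w) evaluated at one candidate x across 20 i.i.d. samples w_i
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=2.0, size=20)

print(empirical_mean(samples))   # standard (non-robust) estimate
print(worst_case_mean(samples))  # robust estimate; penalizes high sample variance
```

The robust estimate is always at most the empirical mean, and the gap grows with the variance of the samples, which is exactly the spurious-objective effect the paper guards against when the sample set is small.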

Dependencies

Run

Synthetic examples:

Within the main directory drbqo, run:

    python -m examples.run_drbqo_synthetic

Cross-validation hyperparameter optimization:

  • Change DRBQO_MAIN_DIR in drbqo/examples/cv/__init__.py to the full path of your local drbqo main directory.

  • To run DRBQO for Elasticnet, within the main directory drbqo, run:

    python -m examples.cv.elasticnet.main

  • To run DRBQO for CNN, within the main directory drbqo, run:

    python -m examples.cv.cnn.main
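The path change in the first step amounts to editing one module-level constant; for example (the path shown is a placeholder for your own checkout location):

```python
# drbqo/examples/cv/__init__.py
DRBQO_MAIN_DIR = "/home/<user>/drbqo"  # replace with your local drbqo directory
```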

Reference


@InProceedings{pmlr-v108-nguyen20a,
  title     = {Distributionally Robust Bayesian Quadrature Optimization},
  author    = {Tang Nguyen, Thanh and Gupta, Sunil and Ha, Huong and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1921--1931},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  address   = {Online},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/nguyen20a/nguyen20a.pdf},
  url       = {http://proceedings.mlr.press/v108/nguyen20a.html},
  abstract  = {Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though Monte Carlo estimate is unbiased, it has high variance given a small set of samples; thus can result in a spurious objective function. We adopt the distributionally robust optimization perspective to this problem by maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose. We demonstrate the empirical effectiveness of our proposed framework in synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.}
}

Contact

Thanh Tang Nguyen.

MIT License
