Optimization research in the Applied Mathematics and Computational Research Division focuses on developing efficient optimization algorithms for computationally expensive black-box problems. These problems arise in many DOE-relevant science areas, including climate science, quantum computing, high energy physics, and design tasks. These applications often involve simulation models with parameters whose adjustment significantly influences the simulation outcomes. The optimization targets are application dependent: for example, one may want to adjust parameters such that the discrepancy between simulation and observation is minimized, or one may want to maximize a quantity such as the number of electrons emitted from laser-plasma accelerators.

Optimization problem characteristics

The optimization problems we work on can have the following characteristics:

  • Objective function values are obtained from a time-consuming black-box simulation model or from an experiment

  • Derivative information of the objective function is not available

  • Problems are multimodal, i.e., there are several local optima (we want to find the global optimum)

  • Large parameter spaces

  • Parameters can be continuous, mixed-integer, integer, binary

Additional challenges are posed by the following characteristics:

  • Constraints may be present: they may be computationally cheap or computed by an expensive black-box simulation model

  • The simulation model may fail to return a function value (not due to a bug, but, e.g., because an exception case is triggered in the simulation)

  • Multiple conflicting expensive black-box objective functions have to be optimized simultaneously to find trade-off solutions

  • Multiple levels of simulation model fidelity are available

  • Uncertainty, for example, in observation data that is used to calibrate simulations, in simulation outputs, in simulation parameters, in model choice (model discrepancy)

Challenge: Optimization problems are becoming increasingly complex

Goal: find a near-optimal solution with only very few expensive function evaluations in order to keep the optimization time acceptable

  • Widely used algorithms such as evolutionary methods require too many expensive function evaluations and quickly become intractable

  • Gradient-approximating methods are often too inexact and require a large number of function evaluations

  • Increasing compute power is typically used to improve simulation capabilities rather than to speed up individual runs, so the computational burden per function evaluation does not decrease

Sophisticated and practical optimization algorithms are needed to efficiently and effectively find the optimal parameters

Our Approach: use surrogate models to approximate the expensive functions and guide the iterative optimization search

We use computationally cheap approximation models (surrogate models or emulators), s(x), of the expensive functions, f(x), to predict function values at unsampled points and to guide the optimization search toward improved solutions. In general, surrogate model optimization works as follows (see also the cartoon below):

  • Step 1: Create an initial experimental design. Evaluate f(x) at the selected points. Fit the surrogate model s(x) to the data: f(x) = s(x) + e(x), where e(x) is the difference between the true simulation model output and the surrogate model prediction

  • Step 2: Use s(x) to select a new evaluation point xnew and compute f(xnew).

  • Step 3: If the stopping criterion is not satisfied, update s(x) with the new data and go to Step 2. Otherwise, stop.
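The three steps above can be sketched in a few lines of code. This is a minimal illustration, not our production software: the objective f(x) is a stand-in for an expensive simulation, the surrogate is a simple cubic radial basis function fit, the initial design uses random sampling (a Latin hypercube design is more common in practice), and the candidate-scoring rule that balances the surrogate prediction against the distance to already-sampled points is a deliberately crude example of how s(x) can guide the search.

```python
import numpy as np

# Hypothetical cheap test objective; stands in for an expensive simulation f(x).
def f(x):
    return np.sum(x**2) + 2.0 * np.sin(5.0 * x[0])

def fit_rbf(X, y):
    # Fit a cubic RBF surrogate s(z) = sum_i w_i * ||z - x_i||^3
    # by solving the interpolation system Phi w = y (small ridge for stability).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(d**3 + 1e-10 * np.eye(len(X)), y)
    return lambda z: (np.linalg.norm(z[None, :] - X, axis=-1) ** 3) @ w

rng = np.random.default_rng(0)
dim, lb, ub = 2, -2.0, 2.0
n_init, budget = 2 * dim + 1, 30

# Step 1: initial experimental design; evaluate f(x) and fit s(x) to the data.
X = rng.uniform(lb, ub, size=(n_init, dim))
y = np.array([f(x) for x in X])

for _ in range(budget):
    s = fit_rbf(X, y)
    # Step 2: use s(x) to pick a new point xnew. Here we score random
    # candidates by the surrogate value minus a distance bonus, so that the
    # search trades off exploiting s(x) against exploring unsampled regions.
    cand = rng.uniform(lb, ub, size=(500, dim))
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1)
    score = np.array([s(c) for c in cand]) - 0.5 * dist
    x_new = cand[np.argmin(score)]
    # Step 3: evaluate f(xnew), update the data, and refit s(x) next iteration.
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))

print("best value found:", y.min())
```

The key property is that each iteration spends only one expensive evaluation of f(x); all other work (fitting the surrogate, scoring 500 candidates) uses the cheap model s(x).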