Optimization Research in CRD

Optimization Algorithm Development

Multi-objective and Bilevel Optimization in Co-Optima

Juliane Mueller: JulianeMueller@lbl.gov

When developing new fuels, one objective is to minimize the cost of the fuel and another is to maximize its efficiency for a given engine operating condition. In multi-objective optimization, we optimize both objectives simultaneously. Moreover, a given fuel has different efficiencies under different engine operating conditions, which leads to a bilevel multi-objective optimization problem. Because a fuel's efficiency for a given engine must be measured in laboratory experiments, obtaining a single measurement takes a very long time. We are developing methods that help engineers decide which experiments to conduct next in this iterative experimentation process.
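One plausible way to write the problem structure, in our own notation (x = fuel design variables, y = engine operating condition, cost and eff the two objectives; an illustration, not the project's exact formulation):

```latex
% Upper level: choose the fuel, trading off cost against efficiency
% at that fuel's best operating condition (found by the lower level).
\begin{align*}
  \min_{x \in \mathcal{X}} \;& \bigl(\, \mathrm{cost}(x),\; -\mathrm{eff}(x, y^{*}(x)) \,\bigr) \\
  \text{s.t.} \;& y^{*}(x) \in \operatorname*{arg\,max}_{y \in \mathcal{Y}} \; \mathrm{eff}(x, y)
\end{align*}
```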

Expensive Optimization Under Uncertainty

Juliane Mueller: JulianeMueller@lbl.gov

Noise is often present in measurement data. The accuracy of a simulation model is often assessed by comparing the simulation's (deterministic) output to that noisy data. The goal is to infer the simulation model parameters that are most likely to explain the observation data. When the simulation is computationally expensive, evaluating it at a large set of sample points is infeasible, and naive sampling often wastes evaluations in uninteresting regions of the parameter space. Adaptive sampling methods let us focus exploration on the interesting regions, so we can potentially find better parameter sets more efficiently. This project is funded by the Berkeley Lab LDRD ECD (early career development) program.
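As an illustration of the adaptive-sampling idea, here is a minimal sketch (not the project's actual algorithm): fit a radial basis function surrogate to the evaluated points, then pick the next sample by weighing predicted misfit against distance to previous samples. The function `misfit` is a hypothetical placeholder for the expensive simulation-versus-data comparison.

```python
# Minimal adaptive-sampling sketch (illustrative only).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
dim, n_init, n_iter = 2, 10, 30
lb, ub = np.zeros(dim), np.ones(dim)

def misfit(theta):
    """Hypothetical placeholder for the expensive model-vs-data misfit."""
    return float(np.sum((theta - 0.3) ** 2))

X = rng.uniform(lb, ub, (n_init, dim))       # initial space-filling sample
f = np.array([misfit(x) for x in X])

for _ in range(n_iter):
    surrogate = RBFInterpolator(X, f)        # cheap surrogate of the misfit
    cand = rng.uniform(lb, ub, (500, dim))   # random candidate points
    pred = surrogate(cand)
    dist = np.min(np.linalg.norm(cand[:, None] - X[None], axis=2), axis=1)
    # Weighted score: low predicted misfit (exploit) and large distance
    # from previous samples (explore).
    score = 0.7 * (pred - pred.min()) / (np.ptp(pred) + 1e-12) \
          - 0.3 * (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    x_new = cand[np.argmin(score)]
    X = np.vstack([X, x_new])
    f = np.append(f, misfit(x_new))

print("best parameters:", X[np.argmin(f)], "misfit:", f.min())
```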

Large-Dimensional Optimization

Juliane Mueller: JulianeMueller@lbl.gov

Parameter tuning tasks, such as event generator tuning, often involve many parameters whose effect on the outcome is not well understood. Within the scope of the SciDAC FASTMath Institute research, we are focusing on developing efficient algorithms for large-dimensional expensive optimization problems.

Optimization with Hidden Constraints

Juliane Mueller: JulianeMueller@lbl.gov

We are currently working on optimization algorithms for problems whose objective function evaluation involves running a computationally expensive black-box simulation that may fail to evaluate ("hidden constraints"). These problems arise, for example, in combustion simulations.

We are using a piecewise linear model that is trained on already-evaluated points and predicts whether or not a new point will be evaluable. A dynamic threshold grows with the number of function evaluations, so that the probability of selecting an evaluable point increases as the algorithm proceeds.
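A minimal sketch of this selection step, with labels 1 = successful evaluation and 0 = failure, and a threshold schedule of our own choosing (the published algorithm differs in its details):

```python
# Sketch of the evaluability filter (a simplification of the idea above).
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def evaluability_filter(X_evaluated, labels, candidates, n_evals, n_max):
    """Keep candidate points whose predicted probability of evaluating
    successfully exceeds a threshold that grows with the evaluation count."""
    s_g = LinearNDInterpolator(X_evaluated, labels)       # piecewise linear model
    nearest = NearestNDInterpolator(X_evaluated, labels)  # fallback outside the hull
    p = s_g(candidates)
    p = np.where(np.isnan(p), nearest(candidates), p)
    tau = 0.25 + 0.7 * n_evals / n_max                    # dynamic threshold (assumed form)
    keep = candidates[p >= tau]
    # If nothing passes the threshold, fall back to the most promising candidate.
    return keep if len(keep) else candidates[[np.argmax(p)]]
```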

This work has been accepted for publication in the INFORMS Journal on Computing.

White areas = function does not evaluate successfully (returns NaN); colored areas = function evaluates successfully. The evaluable region may be disconnected, and the function multimodal.

We assign the value 1 to points x that evaluated successfully and the value 0 to points that failed (evaluated points are shown as red dots). A piecewise linear approximation model s_g(x) predicts whether an untried point will evaluate successfully. The threshold enforces a lower limit on the desired probability of evaluability.

Multifidelity Optimization

Juliane Mueller: JulianeMueller@lbl.gov

Many simulation models come in several fidelity levels; for example, coarsening the resolution yields faster but less accurate simulations. We are developing optimization algorithms that exploit information from the low-fidelity model in order to find the optimum of the high-fidelity model. These problems arise, for example, in cosmology and climate simulations.
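For orientation, here is a minimal sketch of one common multifidelity idea: an additive correction of the cheap model, trained on a few expensive runs. The functions `f_lo` and `f_hi` are hypothetical stand-ins, and this is not necessarily the published algorithm.

```python
# Multifidelity surrogate sketch: cheap model + learned discrepancy.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def f_lo(x):
    """Hypothetical cheap, coarse simulation."""
    return np.sin(3 * x[0]) + x[0] ** 2

def f_hi(x):
    """Hypothetical expensive, accurate simulation."""
    return np.sin(3 * x[0]) + x[0] ** 2 + 0.3 * np.cos(9 * x[0])

X_hi = np.linspace(-1.0, 1.0, 6).reshape(-1, 1)   # few high-fidelity samples
delta = RBFInterpolator(X_hi, [f_hi(x) - f_lo(x) for x in X_hi])

def f_mf(x):
    """Multifidelity surrogate: low-fidelity model plus learned correction."""
    return f_lo(x) + delta(np.atleast_2d(x))[0]

res = minimize(f_mf, x0=[0.5], bounds=[(-1.0, 1.0)])
print("approximate optimum of the high-fidelity model:", res.x)
```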

The algorithm we developed has been accepted for publication in INFOR: Information Systems and Operational Research.

Adjoint-Based Optimization of Conservation Laws on Deforming Domains Using High-Order Methods

Matthew Zahr: mjzahr@lbl.gov

The fully discrete adjoint equations and the corresponding adjoint method are derived for a globally high-order accurate discretization of conservation laws on parametrized, deforming domains. The conservation law on the deforming domain is transformed into one on a fixed reference domain by the introduction of a time-dependent mapping that encapsulates the domain deformation and parametrization, resulting in an Arbitrary Lagrangian-Eulerian form of the governing equations. A high-order discontinuous Galerkin method is used to discretize the transformed equation in space and a high-order diagonally implicit Runge-Kutta scheme is used for the temporal discretization. Quantities of interest that take the form of space-time integrals are discretized in a solver-consistent manner. The corresponding fully discrete adjoint method is used to compute exact gradients of quantities of interest along the manifold of solutions of the fully discrete conservation law. These quantities of interest and their gradients are used in the context of gradient-based PDE-constrained optimization.
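For orientation, the fully discrete adjoint method follows the standard pattern below (the paper derives the ALE/DIRK-specific versions); here R(u, μ) = 0 is the fully discrete conservation law, J(u, μ) the quantity of interest, and λ the adjoint variable:

```latex
% Adjoint equation and gradient of the quantity of interest (generic form).
\left(\frac{\partial R}{\partial u}\right)^{\!T} \lambda
  = \left(\frac{\partial J}{\partial u}\right)^{\!T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}\mu}
  = \frac{\partial J}{\partial \mu} - \lambda^{T}\,\frac{\partial R}{\partial \mu}
```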

Flow vorticity around a flapping airfoil undergoing pure heaving motion (left), energetically optimal rigid motion (center), and energetically optimal non-rigid motion (right), at fixed thrust.

Visualization of the flow field around a 3D flapping wing in pure heaving motion (left) and undergoing the energetically optimal motion at neutral thrust (right).

Collaborations with Other Divisions

Optimization of Eddy-Diffusivity Mass-Flux (EDMF) for the Barbados Oceanographic and Meteorological Experiment (BOMEX)

With Wolfgang Langhans: wlanghans@lbl.gov

The figure shows an isosurface of cloud liquid water (at 0.01 g/kg) from the Barbados Oceanographic and Meteorological Experiment (BOMEX) case. The clouds, produced by a large-eddy simulation, are shallow cumulus clouds commonly found in trade-wind regions over the ocean. The simulations themselves are fully three-dimensional.

The goal of the optimization is to calibrate a 1D (vertical-only) model so that it reproduces the domain-mean behavior of the large-eddy simulations. The 1D model will then represent the effects of these clouds in a global climate model, which would otherwise be unable to capture such small-scale clouds.
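A simplified sketch of such a calibration, assuming a hypothetical stand-in `edmf_1d` for the 1D column model and a synthetic LES profile (the actual calibration uses the full EDMF scheme and multiple domain-mean profiles):

```python
# Least-squares calibration of a 1D column model against LES domain means.
import numpy as np
from scipy.optimize import least_squares

z = np.linspace(0.0, 3000.0, 75)            # vertical grid [m]
les_mean_profile = np.exp(-z / 1200.0)      # synthetic placeholder for LES means

def edmf_1d(params, z):
    """Hypothetical stand-in for the 1D EDMF column model."""
    entrainment, scale = params
    return np.exp(-entrainment * z / scale)

def residuals(params):
    return edmf_1d(params, z) - les_mean_profile

fit = least_squares(residuals, x0=[1.5, 2000.0],
                    bounds=([0.1, 500.0], [5.0, 5000.0]))
print("calibrated parameters:", fit.x)
```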

This work has been published in the Journal of Advances in Modeling Earth Systems.

Analysis of Solid-Liquid Interfaces with Standing Waves

With Osman Karslioglu, Hendrik Bluhm, Chuck Fadley, Mathias Gehlmann, Slavomír Nemšák

Analysis of solid-liquid interfaces can provide important insights into electrochemical devices such as batteries, fuel cells, and electrolyzers, as well as into electrochemical processes such as corrosion.

We would like to study solid-liquid interfaces using X-ray photoelectron spectroscopy, which until recently was nearly impossible due to the strong interaction of electrons with matter. This has now been achieved using ultrathin liquid layers (10-20 nm) and hard X-rays. However, the signal is integrated over the whole thickness of the liquid, which makes interpretation difficult.

In order to obtain “depth-resolved” information, we use X-ray standing waves* to excite photoemission. The obtained signal then needs to be deconvolved through an iterative trial-and-error process in which experimental data are simulated with an X-ray optics simulation code.

* Spectroscopic techniques utilizing X-ray standing waves are useful for obtaining structural information at the atomic scale. The standing waves are formed by diffraction of X-rays from a periodic structure. This structure can be the atomic lattice of a single crystal or the layers of a multilayered solid prepared in a laboratory; in our case it is the latter.
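A sketch of the fitting loop described above, assuming a hypothetical wrapper `simulate_rocking_curve` around the X-ray optics simulation code and a simple two-parameter depth distribution:

```python
# Iterative fit of simulated rocking curves to experiment.
import numpy as np
from scipy.optimize import minimize

def simulate_rocking_curve(depth_params, angles):
    """Hypothetical stand-in for the X-ray optics simulation code."""
    center, width = depth_params
    return 1.0 + 0.5 * np.exp(-((angles - center) / width) ** 2)

def fit_depth_profile(angles, measured):
    """Adjust the depth-distribution parameters until the simulated
    rocking curve matches the measured one."""
    def chi2(p):
        return np.sum((simulate_rocking_curve(p, angles) - measured) ** 2)
    return minimize(chi2, x0=[0.0, 1.0], method="Nelder-Mead")
```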

This work has been published in the Journal of Electron Spectroscopy and Related Phenomena.

A comparison of the experimental rocking curves for various core-level intensities with theoretical calculations based on the optimized sample configuration shown in the figure on the right.

The final depth distributions in the sample, as derived by fitting the experimental rocking curves to theory.

Figures by S. Nemšák

Data-informed Surrogate Models for Decision Support

LDRD (Water-Energy-Nexus): Enabling Water-Energy Decision Support Using Watershed-scale Surrogate Models (2019)

Juliane Mueller: JulianeMueller@lbl.gov

In this project, we are developing computationally cheap, data-informed surrogate models for predicting future groundwater levels, to enable sustainable management of groundwater in California. We use LSTM-RNNs and fuzzy-logic models that we train on time series data, including temperature and precipitation observations. Our decision support tool will allow management agencies to run many what-if scenarios for future climates within minutes.
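A minimal sketch of the LSTM forecasting setup, assuming TensorFlow/Keras and a simple sliding-window preprocessing (the project's actual architecture, features, and training procedure are more involved):

```python
# LSTM sketch: predict next month's groundwater level from a window of history.
import numpy as np
import tensorflow as tf

window = 12  # months of history per input sample

def make_windows(series, window):
    """Slice a (time, features) array into (samples, window, features)
    inputs and next-step groundwater-level targets (column 0, assumed)."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, :1]
    return X, y

# Columns (assumed): groundwater level, temperature, precipitation.
series = np.random.rand(240, 3).astype("float32")
X, y = make_windows(series, window)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 3)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # next-month groundwater level
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
```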

Collaborators: Deb Agarwal, Jangho Park, Reetik Sahu (CRD); Bhavna Arora, Boris Faybishenko, Charuleka Varadharajan (EESA)

Training of and forecasting with LSTM for one well in Butte County, CA


blue = training data; green = validation data; red = LSTM prediction; black = testing data

Training of and forecasting with LSTM for 5 wells in Rifle, CO

Current projects

SciDAC-4: Inference at Extreme Scales

As part of the Simulation Toolkit team, we are developing an optimization tool ("co-optimizer") that allows for multi-objective bilevel optimization of fuels and engine operating conditions, with the goal of maximizing efficiency and minimizing fuel cost. We use statistical surrogate models, trained on experimental data, to guide future experimentation.

Within FASTMath, we are developing new approaches for large-scale optimization problems in which function evaluations are computationally expensive. We exploit statistical methods to reduce the number of parameters and to trade off solution accuracy against efficiency.
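One simple parameter-screening idea in this spirit (ours, not necessarily the FASTMath approach): rank parameters by the magnitude of linear-model coefficients fitted on a small space-filling sample, then optimize only over the most influential ones.

```python
# Screen for influential parameters before running the expensive optimization.
import numpy as np

def screen_parameters(f, dim, n_samples=50, keep=5, seed=0):
    """Return the indices of the `keep` parameters with the largest
    linear-model coefficients, fitted to a small random sample of f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, (n_samples, dim))
    y = np.array([f(x) for x in X])
    A = np.hstack([np.ones((n_samples, 1)), X])   # intercept + linear terms
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.argsort(-np.abs(coef[1:]))[:keep]
```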

LDRD: Computationally Expensive Optimization Under Uncertainty (2018 & 2019)

The goal of this project is to develop new surrogate-model-based methods for computationally expensive black-box optimization problems that are subject to uncertainty. Uncertainty arises from noisy observation data that we use to estimate the parameters of our simulation model; from stochasticity in the forward simulation; and from the model specification itself. An important application we are working on is the inference of cosmological parameters.

More to come. Check back soon.