- covariance_function
C++ Type: UserObjectName
Controllable: No
Description: Name of covariance function.
- tuning_algorithm
Default: none
C++ Type: MooseEnum
Controllable: No
Description: Hyperparameter optimization algorithm
ActiveLearningGaussianProcess
Permits re-training a Gaussian Process surrogate model for active learning.
Description
The theory behind Gaussian Processes (GPs) is described in GaussianProcessTrainer. ActiveLearningGaussianProcess is similar to the GaussianProcessTrainer class in that it trains a GP model. However, a key feature of ActiveLearningGaussianProcess is that it permits re-training the GP model on-the-fly during the active learning process. This means that the input and output data set sizes are dynamic, and re-training of the GP can be performed several times, as dictated by the learning (or acquisition) function during active learning.
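To make the dynamic-dataset idea concrete, the following is a minimal, self-contained sketch of such a loop, not the MOOSE implementation: a 1-D GP surrogate is re-fit from scratch each time an acquisition rule (here, simply maximum predictive variance) adds a point to the training set. The RBF kernel, its length scale, the query grid, and the acquisition rule are illustrative assumptions; in MOOSE the acquisition logic lives in ActiveLearningGPDecision.

```python
import math

def rbf(x1, x2, ell=0.3, sig=1.0):
    # Squared-exponential (RBF) covariance, an assumed kernel choice
    return sig * math.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, xq, noise=1e-6):
    # Re-fitting here is simply rebuilding K from the current data set,
    # which grows between calls during active learning.
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)                          # K^-1 y
    k_star = [rbf(x, xq) for x in xs]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)                          # K^-1 k*
    var = rbf(xq, xq) - sum(k * w for k, w in zip(k_star, v))
    return mean, max(var, 0.0)

f = lambda x: math.sin(2 * math.pi * x)   # stand-in for the expensive model
xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]     # initial training set
grid = [i / 50 for i in range(51)]
for _ in range(8):                        # active-learning iterations
    # Acquisition: query where the surrogate is most uncertain
    xq = max(grid, key=lambda x: gp_predict(xs, ys, x)[1])
    xs.append(xq)                         # data set grows dynamically...
    ys.append(f(xq))                      # ...and the GP is re-fit next call
```

After the loop the surrogate has been re-fit on a data set that grew from 2 to 10 points, chosen by the acquisition rule rather than fixed in advance.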
Just like the GaussianProcessTrainer class, the GP model during active learning can be trained using either of the following options:
PETSc/TAO
Relies on the TAO optimization library from PETSc. Several optimization algorithms are available from this library. Note that these algorithms perform deterministic optimization.
Adaptive moment estimation (Adam)
Relies on the pseudocode provided in Kingma and Ba (2014). Adam permits stochastic optimization, wherein a batch of the training data can be randomly chosen at each iteration.
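As a sketch of the Adam update rule from Kingma and Ba (2014) (an illustrative, self-contained version, not the MOOSE implementation), the arguments below loosely mirror the iter_adam, learning_rate_adam, and batch_size parameters; the toy loss is an assumption for demonstration:

```python
import random

def adam(grad, theta0, data, batch_size=2, lr=0.001,
         beta1=0.9, beta2=0.999, eps=1e-8, iters=1000):
    """Minimal Adam optimizer for a scalar parameter.

    grad(theta, batch) returns the loss gradient on a randomly
    chosen mini-batch; this random batching is the stochastic
    aspect that a batch_size-style parameter enables.
    """
    theta = theta0
    m = v = 0.0                                  # moment estimates
    for t in range(1, iters + 1):
        batch = random.sample(data, batch_size)  # random mini-batch
        g = grad(theta, batch)
        m = beta1 * m + (1 - beta1) * g          # biased 1st moment
        v = beta2 * v + (1 - beta2) * g * g      # biased 2nd moment
        m_hat = m / (1 - beta1 ** t)             # bias corrections
        v_hat = v / (1 - beta2 ** t)
        theta -= lr * m_hat / (v_hat ** 0.5 + eps)
    return theta

# Toy problem: minimize the batch-averaged loss 0.5*(theta - x)^2,
# whose minimizer is the mean of the data (here 2.5).
random.seed(0)
data = [1.0, 2.0, 3.0, 4.0]
grad = lambda theta, batch: sum(theta - x for x in batch) / len(batch)
theta = adam(grad, theta0=0.0, data=data, lr=0.05, iters=2000)
```

Because each step sees only a random subset of the data, the iterate hovers near the minimizer rather than converging exactly, which is the usual trade-off of stochastic optimization versus the deterministic TAO algorithms above.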
Interaction between ActiveLearningMonteCarloSampler, ActiveLearningGaussianProcess, and ActiveLearningGPDecision
Figure 1: Schematic of active learning in Monte Carlo simulations with parallel computing. The interaction between the three objects, ActiveLearningMonteCarloSampler, ActiveLearningGaussianProcess, and ActiveLearningGPDecision, is presented.
Usage of active learning
Please refer to ActiveLearningGPDecision for a detailed description of using active learning.
Input Parameters
- batch_size
Default: 0
C++ Type: unsigned int
Controllable: No
Description: The batch size for Adam optimization
- execute_on
Default: TIMESTEP_END
C++ Type: ExecFlagEnum
Controllable: No
Description: The list of flag(s) indicating when this object should be executed. The available options include FORWARD, ADJOINT, HOMOGENEOUS_FORWARD, ADJOINT_TIMESTEP_BEGIN, ADJOINT_TIMESTEP_END, NONE, INITIAL, LINEAR, NONLINEAR, TIMESTEP_END, TIMESTEP_BEGIN, MULTIAPP_FIXED_POINT_END, MULTIAPP_FIXED_POINT_BEGIN, FINAL, CUSTOM.
- filename
C++ Type: FileName
Controllable: No
Description: The name of the file which will be associated with the saved/loaded data.
- iter_adam
Default: 1000
C++ Type: unsigned int
Controllable: No
Description: Maximum number of iterations for Adam optimization
- learning_rate_adam
Default: 0.001
C++ Type: double
Controllable: No
Description: The learning rate for Adam optimization
- prop_getter_suffix
C++ Type: MaterialPropertyName
Controllable: No
Description: An optional suffix parameter that can be appended to any attempt to retrieve/get material properties. The suffix will be prepended with a '_' character.
- show_optimization_details
Default: False
C++ Type: bool
Controllable: No
Description: Switch to show TAO or Adam solver results
- standardize_data
Default: True
C++ Type: bool
Controllable: No
Description: Standardize (center and scale) training data (y values)
- standardize_params
Default: True
C++ Type: bool
Controllable: No
Description: Standardize (center and scale) training parameters (x values)
- tao_options
C++ Type: std::string
Controllable: No
Description: Command line options for PETSc/TAO hyperparameter optimization
- tune_parameters
C++ Type: std::vector<std::string>
Controllable: No
Description: Select hyperparameters to be tuned
- tuning_max
C++ Type: std::vector<double>
Controllable: No
Description: Maximum allowable tuning value
- tuning_min
C++ Type: std::vector<double>
Controllable: No
Description: Minimum allowable tuning value
- use_interpolated_state
Default: False
C++ Type: bool
Controllable: No
Description: For the old and older state use projected material properties interpolated at the quadrature points. To set up projection use the ProjectedStatefulMaterialStorageAction.
Optional Parameters
- allow_duplicate_execution_on_initial
Default: False
C++ Type: bool
Controllable: No
Description: In the case where this UserObject is depended upon by an initial condition, allow it to be executed twice during the initial setup (once before the IC and again after mesh adaptivity, if applicable).
- control_tags
C++ Type: std::vector<std::string>
Controllable: No
Description: Adds user-defined labels for accessing object parameters via control logic.
- enable
Default: True
C++ Type: bool
Controllable: Yes
Description: Set the enabled status of the MooseObject.
- execution_order_group
Default: 0
C++ Type: int
Controllable: No
Description: Execution order groups are executed in increasing order (e.g., the lowest number is executed first). Note that negative group numbers may be used to execute groups before the default (0) group. Please refer to the user object documentation for the ordering of user object execution within a group.
- force_postaux
Default: False
C++ Type: bool
Controllable: No
Description: Forces the UserObject to be executed in POSTAUX
- force_preaux
Default: False
C++ Type: bool
Controllable: No
Description: Forces the UserObject to be executed in PREAUX
- force_preic
Default: False
C++ Type: bool
Controllable: No
Description: Forces the UserObject to be executed in PREIC during initial setup
- use_displaced_mesh
Default: False
C++ Type: bool
Controllable: No
Description: Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block, the undisplaced mesh will still be used.
Advanced Parameters
Input Files
- (modules/stochastic_tools/test/tests/reporters/AISActiveLearning/ais_al.i)
- (modules/stochastic_tools/test/tests/reporters/ActiveLearningGP/main_tao.i)
- (modules/stochastic_tools/test/tests/reporters/BFActiveLearning/main_adam.i)
- (modules/stochastic_tools/test/tests/reporters/ActiveLearningGP/main_adam.i)
References
- D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs.LG], 2014. URL: https://doi.org/10.48550/arXiv.1412.6980.