ProcessorIDAux

Creates a field showing the processors and partitioning.

Auxiliary kernel for displaying mesh partitioning. Each node or element can display its corresponding processor ID.

Figure: Coarse regular mesh partitioned 5 ways.

This AuxKernel should be used with care in regression tests. Partitioning often differs between platforms, and running on a different number of processors will change this field substantially.
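Example Input Syntax
A minimal sketch, following the pattern used in the test inputs listed under Input Files below: declare a constant monomial auxiliary variable (named pid here as a placeholder) and let a ProcessorIDAux kernel fill it at the start of the run.

[AuxVariables]
  [pid]
    order = CONSTANT
    family = MONOMIAL
  []
[]

[AuxKernels]
  [pid_aux]
    type = ProcessorIDAux
    variable = pid
    execute_on = 'INITIAL'
  []
[]

Using a first-order Lagrange auxiliary variable instead (as some of the inputs below do with an npid variable) displays the owning processor of each node rather than of each element.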
Input Parameters
- variable
  C++ Type: AuxVariableName
  Controllable: No
  Description: The name of the variable that this object applies to

Optional Parameters

- block
  C++ Type: std::vector<SubdomainName>
  Controllable: No
  Description: The list of blocks (ids or names) that this object will be applied to
- boundary
  C++ Type: std::vector<BoundaryName>
  Controllable: No
  Description: The list of boundaries (ids or names) from the mesh where this boundary condition applies
- check_boundary_restricted
  Default: True
  C++ Type: bool
  Controllable: No
  Description: Whether to check for multiple element sides on the boundary in the case of a boundary restricted, element aux variable. Setting this to false will allow contribution to a single element's elemental value(s) from multiple boundary sides on the same element (example: when the restricted boundary exists on two or more sides of an element, such as at a corner of a mesh).
- execute_on
  Default: LINEAR TIMESTEP_END
  C++ Type: ExecFlagEnum
  Options: NONE, INITIAL, LINEAR, NONLINEAR, TIMESTEP_END, TIMESTEP_BEGIN, FINAL, CUSTOM, PRE_DISPLACE, ALWAYS
  Controllable: No
  Description: The list of flag(s) indicating when this object should be executed; the available options are NONE, INITIAL, LINEAR, NONLINEAR, TIMESTEP_END, TIMESTEP_BEGIN, FINAL, CUSTOM, PRE_DISPLACE, ALWAYS.
- prop_getter_suffix
  C++ Type: MaterialPropertyName
  Controllable: No
  Description: An optional suffix parameter that can be appended to any attempt to retrieve/get material properties. The suffix will be prepended with a '_' character.

Advanced Parameters

- control_tags
  C++ Type: std::vector<std::string>
  Controllable: No
  Description: Adds user-defined labels for accessing object parameters via control logic.
- enable
  Default: True
  C++ Type: bool
  Controllable: Yes
  Description: Set the enabled status of the MooseObject.
- seed
  Default: 0
  C++ Type: unsigned int
  Controllable: No
  Description: The seed for the master random number generator
- use_displaced_mesh
  Default: False
  C++ Type: bool
  Controllable: No
  Description: Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block, the undisplaced mesh will still be used.
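Most of these parameters are optional. For instance, the block and execute_on parameters above can restrict where and when the processor ID field is evaluated; a short sketch (the subdomain IDs '1 2' and the variable name pid are placeholders):

[AuxKernels]
  [pid_blocks]
    type = ProcessorIDAux
    variable = pid
    block = '1 2'
    execute_on = 'INITIAL TIMESTEP_END'
  []
[]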
Input Files
- (modules/tensor_mechanics/test/tests/gravity/block-gravity-kinetic-energy.i)
- (modules/external_petsc_solver/test/tests/external_petsc_problem/petsc_transient_as_master.i)
- (test/tests/transfers/multiapp_nearest_node_transfer/parallel_sub.i)
- (test/tests/mesh/splitting/grid_from_generated.i)
- (test/tests/meshgenerators/distributed_rectilinear/generator/distributed_rectilinear_mesh_generator.i)
- (test/tests/outputs/variables/nemesis_hide.i)
- (test/tests/partitioners/grid_partitioner/grid_partitioner.i)
- (test/tests/partitioners/block_weighted_partitioner/block_weighted_partitioner.i)
- (test/tests/meshgenerators/centroid_partitioner/centroid_partitioner_mg.i)
- (test/tests/relationship_managers/evaluable/evaluable.i)
- (test/tests/meshgenerators/distributed_rectilinear/dmg_displaced_mesh/pbc_adaptivity.i)
- (test/tests/bcs/dmg_periodic/dmg_periodic_bc.i)
- (modules/phase_field/examples/grain_growth/grain_growth_3D.i)
- (test/tests/meshgenerators/distributed_rectilinear/ghosting_elements/num_layers.i)
- (modules/phase_field/test/tests/grain_tracker_test/grain_halo_over_bc.i)
- (modules/phase_field/test/tests/grain_tracker_test/distributed_poly_ic.i)
- (test/tests/outputs/nemesis/nemesis_elemental.i)
- (modules/phase_field/test/tests/MultiSmoothCircleIC/test_problem.i)
- (test/tests/mesh/splitting/geometric_neighbors.i)
- (test/tests/mesh/splitting/grid_from_file.i)
- (test/tests/relationship_managers/two_rm/two_rm.i)
- (test/tests/mesh/nemesis/nemesis_repartitioning_test.i)
- (test/tests/mesh/custom_partitioner/custom_linear_partitioner_test.i)
- (test/tests/partitioners/random_partitioner/random_partitioner.i)
- (test/tests/relationship_managers/evaluable/edge_neighbors.i)
- (test/tests/partitioners/file_mesh_skip_partition/file_mesh_skip_partitioning.i)
- (test/tests/mesh/subdomain_partitioner/subdomain_partitioner.i)
- (modules/phase_field/test/tests/grain_tracker_test/grain_tracker_remapping_test.i)
- (modules/contact/test/tests/tension_release/8ElemTensionRelease.i)
- (test/tests/partitioners/custom_partition_generated_mesh/custom_partition_generated_mesh.i)
- (modules/contact/test/tests/bouncing-block-contact/variational-frictional.i)
- (test/tests/partitioners/hierarchical_grid_partitioner/hierarchical_grid_partitioner.i)
- (modules/phase_field/test/tests/feature_flood_test/parallel_feature_count.i)
- (test/tests/auxkernels/ghosting_aux/no_algebraic_ghosting.i)
- (test/tests/relationship_managers/geometric_neighbors/geometric_edge_neighbors.i)
- (modules/external_petsc_solver/test/tests/partition/moose_as_master.i)
- (modules/external_petsc_solver/test/tests/partition/petsc_transient_as_sub.i)
- (modules/phase_field/test/tests/grain_growth/voronoi_adaptivity_ghost.i)
- (test/tests/auxkernels/ghosting_aux/ghosting_aux.i)
- (test/tests/partitioners/petsc_partitioner/petsc_partitioner.i)
- (test/tests/mesh/centroid_partitioner/centroid_partitioner_test.i)
- (test/tests/meshgenerators/distributed_rectilinear/partition/squarish_partition.i)
- (modules/external_petsc_solver/test/tests/external_petsc_problem/petsc_transient_as_sub.i)
(modules/tensor_mechanics/test/tests/gravity/block-gravity-kinetic-energy.i)
starting_point = 2e-1
offset = 1.0
[GlobalParams]
displacements = 'disp_x disp_y'
[]
[Mesh]
file = long-bottom-block-1elem-blocks.e
[]
[Problem]
kernel_coverage_check = false
material_coverage_check = false
[]
[Variables]
[disp_x]
block = '1 2'
[]
[disp_y]
block = '1 2'
[]
[]
[AuxVariables]
[pid]
order = CONSTANT
family = MONOMIAL
[]
[kinetic_energy]
order = CONSTANT
family = MONOMIAL
[]
[]
[AuxKernels]
[pid]
type = ProcessorIDAux
variable = pid
execute_on = 'initial timestep_end'
[]
[kinetic_energy]
type = KineticEnergyAux
block = '1 2'
variable = kinetic_energy
newmark_velocity_x = vel_x
newmark_velocity_y = vel_y
newmark_velocity_z = 0.0
density = density
[]
[]
[ICs]
[disp_y]
type = ConstantIC
block = 2
variable = disp_y
value = '${fparse starting_point + offset}'
[]
[]
[Modules/TensorMechanics/DynamicMaster]
[all]
add_variables = true
hht_alpha = 0.0
beta = 0.25
gamma = 0.5
mass_damping_coefficient = 0.0
stiffness_damping_coefficient = 0.0
displacements = 'disp_x disp_y'
generate_output = 'stress_xx stress_yy'
block = '1 2'
strain = FINITE
[]
[]
[Kernels]
[gravity]
type = Gravity
value = -9.81
variable = disp_y
[]
[]
[Materials]
[elasticity_2]
type = ComputeIsotropicElasticityTensor
block = '2'
youngs_modulus = 1e4
poissons_ratio = 0.3
[]
[elasticity_1]
type = ComputeIsotropicElasticityTensor
block = '1'
youngs_modulus = 1e7
poissons_ratio = 0.3
[]
[stress]
type = ComputeFiniteStrainElasticStress
block = '1 2'
[]
[density]
type = GenericConstantMaterial
block = '1 2'
prop_names = 'density'
prop_values = '7750'
[]
[]
[BCs]
[botx]
type = DirichletBC
variable = disp_x
boundary = 40
value = 0.0
[]
[boty]
type = DirichletBC
variable = disp_y
boundary = 40
value = 0.0
[]
[]
[Executioner]
type = Transient
end_time = 0.5
dt = 0.01
dtmin = .05
solve_type = 'PJFNK'
petsc_options = '-snes_converged_reason -ksp_converged_reason -pc_svd_monitor '
'-snes_linesearch_monitor'
petsc_options_iname = '-pc_type -pc_factor_shift_type -pc_factor_shift_amount -mat_mffd_err '
'-ksp_gmres_restart'
petsc_options_value = 'lu NONZERO 1e-15 1e-5 100'
l_max_its = 100
nl_max_its = 20
line_search = 'none'
snesmf_reuse_base = false
[TimeIntegrator]
type = NewmarkBeta
beta = 0.25
gamma = 0.5
[]
[]
[Debug]
show_var_residual_norms = true
[]
[Outputs]
exodus = false
csv = true
[]
[Preconditioning]
[smp]
type = SMP
full = true
[]
[]
[Postprocessors]
active = 'total_kinetic_energy'
[total_kinetic_energy]
type = ElementIntegralVariablePostprocessor
variable = kinetic_energy
block = '1 2'
[]
[]
(modules/external_petsc_solver/test/tests/external_petsc_problem/petsc_transient_as_master.i)
[Mesh]
# It is a mirror of PETSc mesh (DMDA)
type = PETScDMDAMesh
[]
[AuxVariables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[Problem]
type = ExternalPETScProblem
sync_variable = u
[]
[Executioner]
type = Transient
num_steps = 10
[./TimeStepper]
type = ExternalPetscTimeStepper
[../]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
[Outputs]
exodus = true
[]
[MultiApps]
[./sub_app]
type = TransientMultiApp
input_files = 'moose_as_sub.i'
app_type = ExternalPetscSolverTestApp
[../]
[]
[Transfers]
[./tosub]
type = MultiAppMeshFunctionTransfer
to_multi_app = sub_app
source_variable = u
variable = v
[../]
[]
(test/tests/transfers/multiapp_nearest_node_transfer/parallel_sub.i)
[Mesh]
type = GeneratedMesh
dim = 1
nx = 180
parallel_type = replicated
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[./InitialCondition]
type = ConstantIC
value = 1.0
[../]
[../]
[]
[AuxVariables]
[./pid]
order = constant
family = monomial
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
boundary = left
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = right
value = 1
[../]
[]
[Executioner]
type = Transient
num_steps = 1
dt = 1
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxKernels]
[./pid]
type = ProcessorIDAux
variable = pid
[../]
[]
(test/tests/mesh/splitting/grid_from_generated.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = GridPartitioner
nx = 2
ny = 2
nz = 1
[]
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
(test/tests/meshgenerators/distributed_rectilinear/generator/distributed_rectilinear_mesh_generator.i)
[Mesh]
[gmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 100
ny = 100
[]
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[npid]
family = Lagrange
order = first
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[npid_aux]
type = ProcessorIDAux
variable = npid
execute_on = 'INITIAL'
[]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
preset = false
boundary = 'left'
value = 0
[../]
[./right]
type = DirichletBC
variable = u
preset = false
boundary = 'right'
value = 1
[../]
[]
[Executioner]
type = Steady
petsc_options_iname = '-pc_type'
petsc_options_value = 'hypre'
solve_type = 'NEWTON'
[]
[Outputs]
exodus = true
[]
(test/tests/outputs/variables/nemesis_hide.i)
# Solving for 2 variables, putting one into hide list and the other one into show list
# We should only see the variable that is in show list in the output.
[Mesh]
[gen]
type = GeneratedMeshGenerator
dim = 2
xmin = 0
xmax = 1
ymin = 0
ymax = 1
nx = 2
ny = 2
elem_type = QUAD4
[]
# This should be the same as passing --distributed-mesh on the
# command line. You can verify this by looking at what MOOSE prints
# out for the "Mesh" information.
parallel_type = distributed
[./Partitioner]
type = LibmeshPartitioner
partitioner = linear
[../]
[]
[Functions]
[./fn_x]
type = ParsedFunction
value = x
[../]
[./fn_y]
type = ParsedFunction
value = y
[../]
[]
[Variables]
[./u]
[../]
[./v]
[../]
[]
[AuxVariables]
[./aux_u]
[../]
[./aux_v]
[../]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
[./diff_u]
type = Diffusion
variable = u
[../]
[./diff_v]
type = Diffusion
variable = v
[../]
[]
[AuxKernels]
[./auxk_u]
type = FunctionAux
variable = aux_u
function = 'x*x+y*y'
[../]
[./auxk_v]
type = FunctionAux
variable = aux_v
function = '-(x*x+y*y)'
[../]
[./auxk_proc_id]
variable = proc_id
type = ProcessorIDAux
[../]
[]
[BCs]
[./u_bc]
type = FunctionDirichletBC
variable = u
boundary = '1 3'
function = fn_x
[../]
[./v_bc]
type = FunctionDirichletBC
variable = v
boundary = '0 2'
function = fn_y
[../]
[]
[Executioner]
type = Steady
solve_type = 'NEWTON'
[]
[Outputs]
console = true
[./out]
type = Nemesis
hide = 'u aux_v'
[../]
[]
(test/tests/partitioners/grid_partitioner/grid_partitioner.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = GridPartitioner
nx = 2
ny = 2
nz = 1
[]
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
(test/tests/partitioners/block_weighted_partitioner/block_weighted_partitioner.i)
[Mesh]
type = FileMesh
file = block_weighted_partitioner.e
[Partitioner]
type = BlockWeightedPartitioner
block = '1 2 3'
weight = '3 1 10'
[]
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = Newton
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[npid]
family = Lagrange
order = first
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[npid_aux]
type = ProcessorIDAux
variable = npid
execute_on = 'INITIAL'
[]
[]
(test/tests/meshgenerators/centroid_partitioner/centroid_partitioner_mg.i)
[Mesh]
[./gmg]
type = GeneratedMeshGenerator
dim = 2
nx = 10
ny = 100
xmin = 0.0
xmax = 1.0
ymin = 0.0
ymax = 10.0
# The centroid partitioner orders elements based on
# the position of their centroids
partitioner = centroid
# This will order the elements based on the y value of
# their centroid. Perfect for meshes predominantly in
# one direction
centroid_partitioner_direction = y
# The centroid partitioner behaves differently depending on
# whether you are using Serial or DistributedMesh, so to get
# repeatable results, we restrict this test to using ReplicatedMesh.
parallel_type = replicated
[]
[]
[Variables]
active = 'u'
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
active = 'diff'
[./diff]
type = Diffusion
variable = u
[../]
[]
[AuxKernels]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[]
[BCs]
active = 'left right'
[./left]
type = DirichletBC
variable = u
boundary = 3
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = 1
value = 1
[../]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
[]
[Outputs]
file_base = out
[./exodus]
type = Exodus
elemental_as_nodal = true
[../]
[]
(test/tests/relationship_managers/evaluable/evaluable.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 8
[]
[GlobalParams]
order = CONSTANT
family = MONOMIAL
[]
[Variables]
[u]
order = FIRST
family = LAGRANGE
[]
[]
[AuxVariables]
[evaluable0]
[]
[evaluable1]
[]
[evaluable2]
[]
[proc]
[]
[]
[AuxKernels]
[evaluable0]
type = ElementUOAux
variable = evaluable0
element_user_object = evaluable_uo0
field_name = "evaluable"
execute_on = initial
[]
[evaluable1]
type = ElementUOAux
variable = evaluable1
element_user_object = evaluable_uo1
field_name = "evaluable"
execute_on = initial
[]
[evaluable2]
type = ElementUOAux
variable = evaluable2
element_user_object = evaluable_uo2
field_name = "evaluable"
execute_on = initial
[]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[]
[]
[UserObjects]
[evaluable_uo0]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 0
[]
[evaluable_uo1]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 1
[]
[evaluable_uo2]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 2
[]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[Problem]
solve = false
kernel_coverage_check = false
[]
(test/tests/meshgenerators/distributed_rectilinear/dmg_displaced_mesh/pbc_adaptivity.i)
[Mesh]
[dmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 40
ny = 40
xmax = 40
ymax = 40
[]
[]
[GlobalParams]
displacements = 'disp_x disp_y'
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[./disp_x]
order = FIRST
family = LAGRANGE
[../]
[./disp_y]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./pid]
order = CONSTANT
family = monomial
[]
[]
[AuxKernels]
[./pidaux]
type = ProcessorIDAux
variable = pid
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[./forcing]
type = GaussContForcing
variable = u
[../]
[./dot]
type = TimeDerivative
variable = u
[../]
[./diff_x]
type = Diffusion
variable = disp_x
[../]
[./diff_y]
type = Diffusion
variable = disp_y
[../]
[]
[BCs]
[./Periodic]
[./x]
variable = u
primary = 'left'
secondary = 'right'
translation = '40 0 0'
[../]
[./y]
variable = u
primary = 'bottom'
secondary = 'top'
translation = '0 40 0'
[../]
[../]
[./left_x]
type = DirichletBC
variable = disp_x
boundary = left
value = -0.01
[../]
[./right_x]
type = DirichletBC
variable = disp_x
boundary = right
value = 0.01
[../]
[./left_y]
type = DirichletBC
variable = disp_y
boundary = left
value = -0.01
[../]
[./right_y]
type = DirichletBC
variable = disp_y
boundary = right
value = 0.01
[../]
[]
[Executioner]
type = Transient
dt = 1
num_steps = 5
solve_type = NEWTON
[]
[Outputs]
exodus = true
[]
[Adaptivity]
initial_steps = 2
steps = 1
marker = marker
initial_marker = marker
max_h_level = 2
[./Indicators]
[./indicator]
type = GradientJumpIndicator
variable = u
[../]
[../]
[./Markers]
[./marker]
type = ErrorFractionMarker
indicator = indicator
coarsen = 0.1
refine = 0.7
[../]
[../]
[]
(test/tests/bcs/dmg_periodic/dmg_periodic_bc.i)
[Mesh]
[dmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 40
ny = 40
nz = 0
xmax = 40
ymax = 40
zmax = 0
[]
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./periodic_dist]
order = FIRST
family = LAGRANGE
[../]
[./pid]
order = CONSTANT
family = monomial
[]
[]
[AuxKernels]
[./pidaux]
type = ProcessorIDAux
variable = pid
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[./forcing]
type = GaussContForcing
variable = u
[../]
[./dot]
type = TimeDerivative
variable = u
[../]
[]
[AuxKernels]
[./periodic_dist]
type = PeriodicDistanceAux
variable = periodic_dist
point = '4 6 0'
[../]
[]
[BCs]
[./Periodic]
[./all]
variable = u
auto_direction = 'x y'
[../]
[../]
[]
[Executioner]
type = Transient
dt = 1
num_steps = 20
solve_type = NEWTON
nl_rel_tol = 1e-12
[]
[Outputs]
execute_on = 'timestep_end'
exodus = true
[]
(modules/phase_field/examples/grain_growth/grain_growth_3D.i)
# This simulation predicts GB migration of a 3D copper polycrystal with 25 grains represented with 15 order parameters
# Time step adaptivity is used; the mesh adaptivity block is commented out below
# An AuxVariable is used to calculate the grain boundary locations
# Postprocessors are used to record time step and the number of grains
[Mesh]
# Mesh block. Meshes can be read in or automatically generated
type = GeneratedMesh
dim = 3 # Problem dimension
nx = 10 # Number of elements in the x-direction
ny = 10 # Number of elements in the y-direction
nz = 10
xmin = 0 # minimum x-coordinate of the mesh
xmax = 1000 # maximum x-coordinate of the mesh
ymin = 0 # minimum y-coordinate of the mesh
ymax = 1000 # maximum y-coordinate of the mesh
zmin = 0
zmax = 1000
uniform_refine = 1 # Initial uniform refinement of the mesh
parallel_type = distributed
[]
[GlobalParams]
# Parameters used by several kernels that are defined globally to simplify input file
op_num = 15 # Number of order parameters used
var_name_base = gr # Base name of grains
order = CONSTANT
family = MONOMIAL
[]
[Variables]
# Variable block, where all variables in the simulation are declared
[./PolycrystalVariables]
order = FIRST
family = LAGRANGE
[../]
[]
[UserObjects]
[./voronoi]
type = PolycrystalVoronoi
grain_num = 25 # Number of grains
rand_seed = 10
coloring_algorithm = jp
[../]
[./grain_tracker]
type = GrainTracker
threshold = 0.2
connecting_threshold = 0.08
compute_halo_maps = true # Only necessary for displaying HALOS
polycrystal_ic_uo = voronoi
[../]
[]
[ICs]
[./PolycrystalICs]
[./PolycrystalColoringIC]
polycrystal_ic_uo = voronoi
[../]
[../]
[]
[AuxVariables]
# Dependent variables
[./bnds]
# Variable used to visualize the grain boundaries in the simulation
order = FIRST
family = LAGRANGE
[../]
[./unique_grains]
[../]
[./var_indices]
[../]
[./ghost_regions]
[../]
[./halos]
[../]
[./halo0]
[../]
[./halo1]
[../]
[./halo2]
[../]
[./halo3]
[../]
[./halo4]
[../]
[./halo5]
[../]
[./halo6]
[../]
[./halo7]
[../]
[./halo8]
[../]
[./halo9]
[../]
[./halo10]
[../]
[./halo11]
[../]
[./halo12]
[../]
[./halo13]
[../]
[./halo14]
[../]
[./proc]
[../]
[]
[Kernels]
# Kernel block, where the kernels defining the residual equations are set up.
[./PolycrystalKernel]
# Custom action creating all necessary kernels for grain growth. All input parameters are up in GlobalParams
[../]
[]
[AuxKernels]
# AuxKernel block, defining the equations used to calculate the auxvars
[./bnds_aux]
# AuxKernel that calculates the GB term
type = BndsCalcAux
variable = bnds
execute_on = 'initial timestep_end'
[../]
[./unique_grains]
type = FeatureFloodCountAux
variable = unique_grains
flood_counter = grain_tracker
field_display = UNIQUE_REGION
execute_on = 'initial timestep_end'
[../]
[./var_indices]
type = FeatureFloodCountAux
variable = var_indices
flood_counter = grain_tracker
field_display = VARIABLE_COLORING
execute_on = 'initial timestep_end'
[../]
[./ghosted_entities]
type = FeatureFloodCountAux
variable = ghost_regions
flood_counter = grain_tracker
field_display = GHOSTED_ENTITIES
execute_on = 'initial timestep_end'
[../]
[./halos]
type = FeatureFloodCountAux
variable = halos
flood_counter = voronoi
field_display = HALOS
execute_on = 'initial timestep_end'
[../]
[./halo0]
type = FeatureFloodCountAux
variable = halo0
map_index = 0
field_display = HALOS
flood_counter = grain_tracker
execute_on = 'initial timestep_end'
[../]
[./halo1]
type = FeatureFloodCountAux
variable = halo1
map_index = 1
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo2]
type = FeatureFloodCountAux
variable = halo2
map_index = 2
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo3]
type = FeatureFloodCountAux
variable = halo3
map_index = 3
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo4]
type = FeatureFloodCountAux
variable = halo4
map_index = 4
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo5]
type = FeatureFloodCountAux
variable = halo5
map_index = 5
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo6]
type = FeatureFloodCountAux
variable = halo6
map_index = 6
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo7]
type = FeatureFloodCountAux
variable = halo7
map_index = 7
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo8]
type = FeatureFloodCountAux
variable = halo8
map_index = 8
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo9]
type = FeatureFloodCountAux
variable = halo9
map_index = 9
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo10]
type = FeatureFloodCountAux
variable = halo10
map_index = 10
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo11]
type = FeatureFloodCountAux
variable = halo11
map_index = 11
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo12]
type = FeatureFloodCountAux
variable = halo12
map_index = 12
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo13]
type = FeatureFloodCountAux
variable = halo13
map_index = 13
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo14]
type = FeatureFloodCountAux
variable = halo14
map_index = 14
field_display = HALOS
flood_counter = grain_tracker
[../]
[./proc]
type = ProcessorIDAux
variable = proc
execute_on = 'initial timestep_end'
[../]
[]
[Materials]
[./CuGrGr]
# Material properties
type = GBEvolution
T = 450 # Constant temperature of the simulation (for mobility calculation)
wGB = 125 # Width of the diffuse GB
GBmob0 = 2.5e-6 #m^4(Js) for copper from Schoenfelder1997
Q = 0.23 #eV for copper from Schoenfelder1997
GBenergy = 0.708 #J/m^2 from Schoenfelder1997
[../]
[]
[Postprocessors]
# Scalar postprocessors
[./dt]
# Outputs the current time step
type = TimestepSize
[../]
[]
[Executioner]
type = Transient # Type of executioner, here it is transient with an adaptive time step
scheme = bdf2 # Type of time integration (2nd order backward euler), defaults to 1st order backward euler
#Preconditioned JFNK (default)
solve_type = 'PJFNK'
# Uses newton iteration to solve the problem.
petsc_options_iname = '-pc_type'
petsc_options_value = 'asm'
l_max_its = 30 # Max number of linear iterations
l_tol = 1e-4 # Relative tolerance for linear solves
nl_max_its = 40 # Max number of nonlinear iterations
nl_rel_tol = 1e-10 # Relative tolerance for nonlinear solves
start_time = 0.0
end_time = 4000
[./TimeStepper]
type = IterationAdaptiveDT
dt = 25 # Initial time step. In this simulation it changes.
optimal_iterations = 6 # Time step will adapt to maintain this number of nonlinear iterations
[../]
# [./Adaptivity]
# # Block that turns on mesh adaptivity. Note that mesh will never coarsen beyond initial mesh (before uniform refinement)
## initial_adaptivity = 2 # Number of times mesh is adapted to initial condition
# refine_fraction = 0.6 # Fraction of high error that will be refined
# coarsen_fraction = 0.1 # Fraction of low error that will be coarsened
# max_h_level = 3 # Max number of refinements used, starting from initial mesh (before uniform refinement)
# [../]
[]
[Outputs]
exodus = true
csv = true
[./pg]
type = PerfGraphOutput
execute_on = 'initial final' # Default is "final"
level = 2 # Default is 1
[../]
[]
(test/tests/meshgenerators/distributed_rectilinear/ghosting_elements/num_layers.i)
[GlobalParams]
displacements = 'disp_x disp_y'
[]
[Mesh]
[gmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 10
ny = 10
partition="linear"
num_side_layers = 2
[]
[]
[AuxVariables]
[ghosting0]
order = CONSTANT
family = MONOMIAL
[]
[ghosting1]
order = CONSTANT
family = MONOMIAL
[]
[ghosting2]
order = CONSTANT
family = MONOMIAL
[]
[evaluable0]
order = CONSTANT
family = MONOMIAL
[]
[evaluable1]
order = CONSTANT
family = MONOMIAL
[]
[evaluable2]
order = CONSTANT
family = MONOMIAL
[]
[proc]
order = CONSTANT
family = MONOMIAL
[]
[]
[AuxKernels]
[ghosting0]
type = ElementUOAux
variable = ghosting0
element_user_object = ghosting_uo0
field_name = "ghosted"
execute_on = initial
[]
[ghosting1]
type = ElementUOAux
variable = ghosting1
element_user_object = ghosting_uo1
field_name = "ghosted"
execute_on = initial
[]
[ghosting2]
type = ElementUOAux
variable = ghosting2
element_user_object = ghosting_uo2
field_name = "ghosted"
execute_on = initial
[]
[evaluable0]
type = ElementUOAux
variable = evaluable0
element_user_object = ghosting_uo0
field_name = "evaluable"
execute_on = initial
[]
[evaluable1]
type = ElementUOAux
variable = evaluable1
element_user_object = ghosting_uo1
field_name = "evaluable"
execute_on = initial
[]
[evaluable2]
type = ElementUOAux
variable = evaluable2
element_user_object = ghosting_uo2
field_name = "evaluable"
execute_on = initial
[]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[]
[]
[UserObjects]
[ghosting_uo0]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 0
[]
[ghosting_uo1]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 1
[]
[ghosting_uo2]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 2
[]
[]
[Variables]
[./u]
[../]
[./disp_x]
[../]
[./disp_y]
[../]
[]
[Kernels]
[./diff]
type = CoefDiffusion
variable = u
coef = 0.1
[../]
[./time]
type = TimeDerivative
variable = u
[../]
[./diff_x]
type = Diffusion
variable = disp_x
[../]
[./diff_y]
type = Diffusion
variable = disp_y
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
boundary = left
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = right
value = 1
[../]
[./left_x]
type = DirichletBC
variable = disp_x
boundary = left
value = -0.01
[../]
[./right_x]
type = DirichletBC
variable = disp_x
boundary = left
value = 0.01
[../]
[./left_y]
type = DirichletBC
variable = disp_y
boundary = left
value = -0.01
[../]
[./right_y]
type = DirichletBC
variable = disp_y
boundary = right
value = 0.01
[../]
[]
[Executioner]
type = Transient
num_steps = 3
dt = 0.1
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
(modules/phase_field/test/tests/grain_tracker_test/grain_halo_over_bc.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 35
ny = 35
xmax = 1000
ymax = 1000
elem_type = QUAD4
parallel_type = replicated # Periodic BCs
[]
[GlobalParams]
op_num = 8 # Number of order parameters used
var_name_base = 'gr' # Base name of grains
[]
[Variables]
[./PolycrystalVariables]
[../]
[]
[UserObjects]
[./voronoi]
type = PolycrystalVoronoi
rand_seed = 12
grain_num = 15 # Number of grains
coloring_algorithm = bt
[../]
[./grain_tracker]
type = GrainTracker
threshold = 0.2
connecting_threshold = 0.08
flood_entity_type = ELEMENTAL
compute_halo_maps = true # Only necessary for displaying HALOS
[../]
[]
[ICs]
[./PolycrystalICs]
[./PolycrystalColoringIC]
polycrystal_ic_uo = voronoi
[../]
[../]
[]
[AuxVariables]
[./bnds]
[../]
[./unique_grains]
order = CONSTANT
family = MONOMIAL
[../]
[./var_indices]
order = CONSTANT
family = MONOMIAL
[../]
[./ghost_regions]
order = CONSTANT
family = MONOMIAL
[../]
[./halos]
order = CONSTANT
family = MONOMIAL
[../]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[./halo0]
order = CONSTANT
family = MONOMIAL
[../]
[./halo1]
order = CONSTANT
family = MONOMIAL
[../]
[./halo2]
order = CONSTANT
family = MONOMIAL
[../]
[./halo3]
order = CONSTANT
family = MONOMIAL
[../]
[./halo4]
order = CONSTANT
family = MONOMIAL
[../]
[./halo5]
order = CONSTANT
family = MONOMIAL
[../]
[./halo6]
order = CONSTANT
family = MONOMIAL
[../]
[./halo7]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
[./PolycrystalKernel]
[../]
[]
[AuxKernels]
[./bnds_aux]
type = BndsCalcAux
variable = bnds
execute_on = 'initial timestep_end'
[../]
[./unique_grains]
type = FeatureFloodCountAux
variable = unique_grains
flood_counter = grain_tracker
field_display = UNIQUE_REGION
execute_on = 'initial timestep_end'
[../]
[./var_indices]
type = FeatureFloodCountAux
variable = var_indices
flood_counter = grain_tracker
field_display = VARIABLE_COLORING
execute_on = 'initial timestep_end'
[../]
[./ghosted_entities]
type = FeatureFloodCountAux
variable = ghost_regions
flood_counter = grain_tracker
field_display = GHOSTED_ENTITIES
execute_on = 'initial timestep_end'
[../]
[./halos]
type = FeatureFloodCountAux
variable = halos
flood_counter = grain_tracker
field_display = HALOS
execute_on = 'initial timestep_end'
[../]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[./halo0]
type = FeatureFloodCountAux
variable = halo0
map_index = 0
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo1]
type = FeatureFloodCountAux
variable = halo1
map_index = 1
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo2]
type = FeatureFloodCountAux
variable = halo2
map_index = 2
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo3]
type = FeatureFloodCountAux
variable = halo3
map_index = 3
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo4]
type = FeatureFloodCountAux
variable = halo4
map_index = 4
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo5]
type = FeatureFloodCountAux
variable = halo5
map_index = 5
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo6]
type = FeatureFloodCountAux
variable = halo6
map_index = 6
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo7]
type = FeatureFloodCountAux
variable = halo7
map_index = 7
field_display = HALOS
flood_counter = grain_tracker
[../]
[]
[BCs]
[./Periodic]
[./top_bottom]
auto_direction = 'x y'
[../]
[../]
[]
[Materials]
[./CuGrGr]
type = GBEvolution
T = '450'
wGB = 125
GBmob0 = 2.5e-6
Q = 0.23
GBenergy = 0.708
[../]
[]
[Postprocessors]
[./dt]
type = TimestepSize
[../]
[]
[Executioner]
type = Transient
scheme = bdf2
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart -mat_mffd_type'
petsc_options_value = 'hypre boomeramg 101 ds'
l_max_its = 30
l_tol = 1e-4
nl_max_its = 40
nl_rel_tol = 1e-11
dt = 25
num_steps = 1
[]
[Outputs]
exodus = true # Exodus file will be outputted
[]
(modules/phase_field/test/tests/grain_tracker_test/distributed_poly_ic.i)
[Mesh]
# Mesh block. Meshes can be read in or automatically generated
type = GeneratedMesh
uniform_refine = 1 # Initial uniform refinement of the mesh
dim = 2 # Problem dimension
nx = 12 # Number of elements in the x-direction
ny = 12 # Number of elements in the y-direction
xmax = 1000 # maximum x-coordinate of the mesh
ymax = 1000 # maximum y-coordinate of the mesh
elem_type = QUAD4 # Type of elements used in the mesh
parallel_type = distributed
[]
[GlobalParams]
# Parameters used by several kernels that are defined globally to simplify input file
op_num = '8' # Number of order parameters used
var_name_base = 'gr' # Base name of grains
order = 'CONSTANT'
family = 'MONOMIAL'
[]
[Variables]
# Variable block, where all variables in the simulation are declared
[PolycrystalVariables]
order = FIRST
family = LAGRANGE
[]
[]
[UserObjects]
[voronoi]
type = PolycrystalVoronoi
grain_num = 12 # Number of grains
coloring_algorithm = jp
rand_seed = 10
[]
[grain_tracker]
type = GrainTracker
threshold = 0.2
verbosity_level = 1
connecting_threshold = 0.08
flood_entity_type = ELEMENTAL
compute_halo_maps = true # For displaying HALO fields
execute_on = 'initial timestep_end'
polycrystal_ic_uo = voronoi
[]
[]
[ICs]
[PolycrystalICs]
[PolycrystalColoringIC]
polycrystal_ic_uo = voronoi
[]
[]
[]
[AuxVariables]
# Dependent variables
[bnds]
# Variable used to visualize the grain boundaries in the simulation
order = FIRST
family = LAGRANGE
[]
[unique_grains]
[]
[var_indices]
[]
[ghost_regions]
[]
[halos]
[]
[halo0]
[]
[halo1]
[]
[halo2]
[]
[halo3]
[]
[halo4]
[]
[halo5]
[]
[halo6]
[]
[halo7]
[]
[centroids]
order = CONSTANT
family = MONOMIAL
[]
[proc_id]
[]
[voronoi_id]
[]
[evaluable_elems]
[]
[]
[Kernels]
# Kernel block, where the kernels defining the residual equations are set up.
[PolycrystalKernel]
# Custom action creating all necessary kernels for grain growth. All input parameters are up in GlobalParams
[]
[]
[AuxKernels]
# AuxKernel block, defining the equations used to calculate the auxvars
[bnds_aux]
# AuxKernel that calculates the GB term
type = BndsCalcAux
variable = bnds
execute_on = 'initial timestep_end'
[]
[unique_grains]
type = FeatureFloodCountAux
variable = unique_grains
flood_counter = grain_tracker
field_display = UNIQUE_REGION
execute_on = 'initial timestep_end'
[]
[var_indices]
type = FeatureFloodCountAux
variable = var_indices
flood_counter = grain_tracker
field_display = VARIABLE_COLORING
execute_on = 'initial timestep_end'
[]
[ghosted_entities]
type = FeatureFloodCountAux
variable = ghost_regions
flood_counter = grain_tracker
field_display = GHOSTED_ENTITIES
execute_on = 'initial timestep_end'
[]
[halos]
type = FeatureFloodCountAux
variable = halos
flood_counter = grain_tracker
field_display = HALOS
execute_on = 'initial timestep_end'
[]
[halo0]
type = FeatureFloodCountAux
variable = halo0
map_index = 0
field_display = HALOS
flood_counter = grain_tracker
[]
[halo1]
type = FeatureFloodCountAux
variable = halo1
map_index = 1
field_display = HALOS
flood_counter = grain_tracker
[]
[halo2]
type = FeatureFloodCountAux
variable = halo2
map_index = 2
field_display = HALOS
flood_counter = grain_tracker
[]
[halo3]
type = FeatureFloodCountAux
variable = halo3
map_index = 3
field_display = HALOS
flood_counter = grain_tracker
[]
[halo4]
type = FeatureFloodCountAux
variable = halo4
map_index = 4
field_display = HALOS
flood_counter = grain_tracker
[]
[halo5]
type = FeatureFloodCountAux
variable = halo5
map_index = 5
field_display = HALOS
flood_counter = grain_tracker
[]
[halo6]
type = FeatureFloodCountAux
variable = halo6
map_index = 6
field_display = HALOS
flood_counter = grain_tracker
[]
[halo7]
type = FeatureFloodCountAux
variable = halo7
map_index = 7
field_display = HALOS
flood_counter = grain_tracker
[]
[centroids]
type = FeatureFloodCountAux
variable = centroids
execute_on = 'timestep_end'
field_display = CENTROID
flood_counter = grain_tracker
[]
[proc_id]
type = ProcessorIDAux
variable = proc_id
execute_on = 'initial'
[]
[voronoi_id]
type = VoronoiICAux
variable = voronoi_id
execute_on = 'initial'
polycrystal_ic_uo = voronoi
[]
[]
[Materials]
[CuGrGr]
# Material properties
type = GBEvolution
T = '450' # Constant temperature of the simulation (for mobility calculation)
wGB = 125 # Width of the diffuse GB
GBmob0 = 2.5e-6 # m^4(Js) for copper from Schoenfelder1997
Q = 0.23 # eV for copper from Schoenfelder1997
GBenergy = 0.708 # J/m^2 from Schoenfelder1997
[]
[]
[Postprocessors]
# Scalar postprocessors
[dt]
# Outputs the current time step
type = TimestepSize
[]
[]
[Executioner]
# Uses newton iteration to solve the problem.
type = Transient # Type of executioner, here it is transient with an adaptive time step
scheme = bdf2 # Type of time integration (2nd order backward euler), defaults to 1st order backward euler
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart -mat_mffd_type'
petsc_options_value = 'hypre boomeramg 101 ds'
l_max_its = 30 # Max number of linear iterations
l_tol = 1e-4 # Relative tolerance for linear solves
nl_max_its = 40 # Max number of nonlinear iterations
nl_rel_tol = 1e-10 # Relative tolerance for nonlinear solves
start_time = 0.0
num_steps = 2
dt = 300
[]
[Outputs]
csv = true
[]
(test/tests/outputs/nemesis/nemesis_elemental.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 4
ny = 4
[]
[Variables]
[./u]
[../]
[]
[AuxVariables]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[]
[AuxKernels]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
boundary = left
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = right
value = 1
[../]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
execute_on = 'timestep_end'
nemesis = true
[]
(modules/phase_field/test/tests/MultiSmoothCircleIC/test_problem.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 20
ny = 20
xmin = 0
xmax = 50
ymin = 0
ymax = 50
elem_type = QUAD4
[]
[Variables]
[./c]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./features]
order = CONSTANT
family = MONOMIAL
[../]
[./ghosts]
order = CONSTANT
family = MONOMIAL
[../]
[./halos]
order = CONSTANT
family = MONOMIAL
[../]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[ICs]
[./c]
type = LatticeSmoothCircleIC
variable = c
invalue = 1.0
outvalue = 0.0001
circles_per_side = '2 2'
pos_variation = 10.0
radius = 8.0
int_width = 5.0
radius_variation_type = uniform
avoid_bounds = false
[../]
[]
[BCs]
[./Periodic]
[./c]
variable = c
auto_direction = 'x y'
[../]
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = c
[../]
[]
[AuxKernels]
[./features]
type = FeatureFloodCountAux
variable = features
execute_on = 'initial timestep_end'
flood_counter = features
[../]
[./ghosts]
type = FeatureFloodCountAux
variable = ghosts
field_display = GHOSTED_ENTITIES
execute_on = 'initial timestep_end'
flood_counter = features
[../]
[./halos]
type = FeatureFloodCountAux
variable = halos
field_display = HALOS
execute_on = 'initial timestep_end'
flood_counter = features
[../]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
execute_on = 'initial timestep_end'
[../]
[]
[Postprocessors]
[./features]
type = FeatureFloodCount
variable = c
flood_entity_type = ELEMENTAL
execute_on = 'initial timestep_end'
[../]
[]
[Problem]
type = FEProblem
solve = false
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
(test/tests/mesh/splitting/geometric_neighbors.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 2
xmax = 8
ymax = 2
# We are testing geometric ghosted functors
# so we have to use distributed mesh
parallel_type = distributed
[]
[Variables]
[./u]
[../]
[]
[AuxVariables]
[./ghosted_elements]
order = CONSTANT
family = MONOMIAL
[../]
[./proc]
order = CONSTANT
family = MONOMIAL
[../]
[]
[AuxKernels]
[./random_elemental]
type = ElementUOAux
variable = ghosted_elements
element_user_object = ghost_uo
field_name = "ghosted"
execute_on = initial
[../]
[./proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[../]
[]
[UserObjects]
[./ghost_uo]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
[../]
[]
[Postprocessors]
[./num_elems]
type = NumElems
[../]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[Problem]
solve = false
kernel_coverage_check = false
[]
(test/tests/mesh/splitting/grid_from_file.i)
[Mesh]
type = FileMesh
file = grid_from_file.e
[Partitioner]
type = GridPartitioner
nx = 2
ny = 2
nz = 1
[]
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
(test/tests/relationship_managers/two_rm/two_rm.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 8
output_ghosting = true
[]
[Variables]
[u]
order = FIRST
family = LAGRANGE
[]
[]
[AuxVariables]
[proc]
order = 'CONSTANT'
family = 'MONOMIAL'
[]
[]
[AuxKernels]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = 'initial'
[]
[]
[UserObjects]
[evaluable_uo0]
type = TwoRMTester
execute_on = 'initial'
element_side_neighbor_layers = 2
rank = 0
[]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[Problem]
solve = false
kernel_coverage_check = false
[]
(test/tests/mesh/nemesis/nemesis_repartitioning_test.i)
[Mesh]
file = cylinder/cylinder.e
nemesis = true
# leaving skip_partitioning off lets us exodiff against a gold
# standard generated with default libMesh settings
# skip_partitioning = true
[]
[Variables]
[u]
order = FIRST
family = LAGRANGE
[]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 1
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 2
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = 'NEWTON'
nl_rel_tol = 1e-6
nl_abs_tol = 1e-14
[Adaptivity]
steps = 1
refine_fraction = 0.1
coarsen_fraction = 0.1
max_h_level = 2
[]
[]
[Postprocessors]
[sum_sides]
type = StatVector
stat = sum
object = nl_wb_element
vector = num_partition_sides
[]
[min_elems]
type = StatVector
stat = min
object = nl_wb_element
vector = num_elems
[]
[max_elems]
type = StatVector
stat = max
object = nl_wb_element
vector = num_elems
[]
[]
[VectorPostprocessors]
[nl_wb_element]
type = WorkBalance
execute_on = initial
system = nl
balances = 'num_elems num_partition_sides'
outputs = none
[]
[]
[Outputs]
[out]
type = CSV
execute_on = FINAL
[]
[]
(test/tests/mesh/custom_partitioner/custom_linear_partitioner_test.i)
###########################################################
# This is a test of the custom partitioner system. It
# demonstrates the usage of a linear partitioner on the
# elements of a mesh.
#
# @Requirement F2.30
###########################################################
[Mesh]
[gen]
type = GeneratedMeshGenerator
dim = 2
nx = 10
ny = 100
xmin = 0.0
xmax = 1.0
ymin = 0.0
ymax = 10.0
[]
# Custom linear partitioner
[./Partitioner]
type = LibmeshPartitioner
partitioner = linear
[../]
parallel_type = replicated
[]
[Variables]
active = 'u'
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
active = 'diff'
[./diff]
type = Diffusion
variable = u
[../]
[]
[AuxKernels]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[]
[BCs]
active = 'left right'
[./left]
type = DirichletBC
variable = u
boundary = 3
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = 1
value = 1
[../]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
[]
[Outputs]
file_base = custom_linear_partitioner_test_out
[./exodus]
type = Exodus
elemental_as_nodal = true
[../]
[]
(test/tests/partitioners/random_partitioner/random_partitioner.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = RandomPartitioner
[]
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
(test/tests/relationship_managers/evaluable/edge_neighbors.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 8
# We are testing geometric ghosted functors
# so we have to use distributed mesh
parallel_type = distributed
[]
[GlobalParams]
order = CONSTANT
family = MONOMIAL
[]
[Variables]
[./u]
[../]
[]
[AuxVariables]
[ghosting0]
[]
[ghosting1]
[]
[ghosting2]
[]
[proc]
[]
[]
[AuxKernels]
[ghosting0]
type = ElementUOAux
variable = ghosting0
element_user_object = ghosting_uo0
field_name = "ghosted"
execute_on = initial
[]
[ghosting1]
type = ElementUOAux
variable = ghosting1
element_user_object = ghosting_uo1
field_name = "ghosted"
execute_on = initial
[]
[ghosting2]
type = ElementUOAux
variable = ghosting2
element_user_object = ghosting_uo2
field_name = "ghosted"
execute_on = initial
[]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[]
[]
[UserObjects]
[ghosting_uo0]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 0
[]
[ghosting_uo1]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 1
[]
[ghosting_uo2]
type = ElemSideNeighborLayersTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 2
[]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[Problem]
solve = false
kernel_coverage_check = false
[]
(test/tests/partitioners/file_mesh_skip_partition/file_mesh_skip_partitioning.i)
[Mesh]
[generate_2d]
type = FileMeshGenerator
file = 2d_base.e
skip_partitioning = true
[]
[extrude]
type = MeshExtruderGenerator
input = generate_2d
extrusion_vector = '0 0 1'
num_layers = 5
[]
[Partitioner]
type = HierarchicalGridPartitioner
nx_nodes = 2
ny_nodes = 1
nz_nodes = 1
nx_procs = 1
ny_procs = 1
nz_procs = 2
[]
[]
[Variables]
[u]
[]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid]
type = ProcessorIDAux
variable = pid
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = left
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = right
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
(test/tests/mesh/subdomain_partitioner/subdomain_partitioner.i)
[Mesh]
[file]
type = FileMeshGenerator
file = test_subdomain_partitioner.e
[]
[./Partitioner]
type = LibmeshPartitioner
partitioner = subdomain_partitioner
blocks = '1 2 3 4; 1001 1002 1003 1004'
[../]
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[]
[AuxKernels]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
[]
[Outputs]
file_base = subdomain_partitioner_out
[./exodus]
type = Exodus
elemental_as_nodal = true
[../]
[]
(modules/phase_field/test/tests/grain_tracker_test/grain_tracker_remapping_test.i)
# This simulation predicts GB migration of a 2D copper polycrystal with 12 grains represented with 8 order parameters
# A fixed time step is used and no mesh adaptivity is applied
# An AuxVariable is used to calculate the grain boundary locations
# Postprocessors are used to record time step and the number of grains
[Mesh]
# Mesh block. Meshes can be read in or automatically generated
type = GeneratedMesh
dim = 2 # Problem dimension
nx = 12 # Number of elements in the x-direction
ny = 12 # Number of elements in the y-direction
xmax = 1000 # maximum x-coordinate of the mesh
ymax = 1000 # maximum y-coordinate of the mesh
elem_type = QUAD4 # Type of elements used in the mesh
uniform_refine = 1 # Initial uniform refinement of the mesh
[]
[GlobalParams]
# Parameters used by several kernels that are defined globally to simplify input file
op_num = 8 # Number of order parameters used
var_name_base = gr # Base name of grains
order = CONSTANT
family = MONOMIAL
[]
[Variables]
# Variable block, where all variables in the simulation are declared
[./PolycrystalVariables]
order = FIRST
family = LAGRANGE
[../]
[]
[UserObjects]
[./voronoi]
type = PolycrystalVoronoi
grain_num = 12 # Number of grains
coloring_algorithm = jp
rand_seed = 10
output_adjacency_matrix = true
[../]
[./grain_tracker]
type = GrainTracker
threshold = 0.2
verbosity_level = 1
connecting_threshold = 0.08
flood_entity_type = ELEMENTAL
compute_halo_maps = true # For displaying HALO fields
polycrystal_ic_uo = voronoi
error_on_grain_creation = true
execute_on = 'initial timestep_end'
[../]
[]
[ICs]
[./PolycrystalICs]
[./PolycrystalColoringIC]
polycrystal_ic_uo = voronoi
[../]
[../]
[]
[AuxVariables]
# Dependent variables
[./bnds]
# Variable used to visualize the grain boundaries in the simulation
order = FIRST
family = LAGRANGE
[../]
[./unique_grains]
[../]
[./var_indices]
[../]
[./ghost_regions]
[../]
[./halos]
[../]
[./halo0]
[../]
[./halo1]
[../]
[./halo2]
[../]
[./halo3]
[../]
[./halo4]
[../]
[./halo5]
[../]
[./halo6]
[../]
[./halo7]
[../]
[./centroids]
order = CONSTANT
family = MONOMIAL
[../]
[./proc_id]
[../]
[]
[Kernels]
# Kernel block, where the kernels defining the residual equations are set up.
[./PolycrystalKernel]
# Custom action creating all necessary kernels for grain growth. All input parameters are up in GlobalParams
[../]
[]
[AuxKernels]
# AuxKernel block, defining the equations used to calculate the auxvars
[./bnds_aux]
# AuxKernel that calculates the GB term
type = BndsCalcAux
variable = bnds
execute_on = 'initial timestep_end'
[../]
[./unique_grains]
type = FeatureFloodCountAux
variable = unique_grains
flood_counter = grain_tracker
field_display = UNIQUE_REGION
execute_on = 'initial timestep_end'
[../]
[./var_indices]
type = FeatureFloodCountAux
variable = var_indices
flood_counter = grain_tracker
field_display = VARIABLE_COLORING
execute_on = 'initial timestep_end'
[../]
[./ghosted_entities]
type = FeatureFloodCountAux
variable = ghost_regions
flood_counter = grain_tracker
field_display = GHOSTED_ENTITIES
execute_on = 'initial timestep_end'
[../]
[./halos]
type = FeatureFloodCountAux
variable = halos
flood_counter = grain_tracker
field_display = HALOS
execute_on = 'initial timestep_end'
[../]
[./halo0]
type = FeatureFloodCountAux
variable = halo0
map_index = 0
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo1]
type = FeatureFloodCountAux
variable = halo1
map_index = 1
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo2]
type = FeatureFloodCountAux
variable = halo2
map_index = 2
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo3]
type = FeatureFloodCountAux
variable = halo3
map_index = 3
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo4]
type = FeatureFloodCountAux
variable = halo4
map_index = 4
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo5]
type = FeatureFloodCountAux
variable = halo5
map_index = 5
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo6]
type = FeatureFloodCountAux
variable = halo6
map_index = 6
field_display = HALOS
flood_counter = grain_tracker
[../]
[./halo7]
type = FeatureFloodCountAux
variable = halo7
map_index = 7
field_display = HALOS
flood_counter = grain_tracker
[../]
[./centroids]
type = FeatureFloodCountAux
variable = centroids
execute_on = timestep_end
field_display = CENTROID
flood_counter = grain_tracker
[../]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
execute_on = initial
[../]
[]
[BCs]
# Boundary Condition block
[]
[Materials]
[./CuGrGr]
# Material properties
type = GBEvolution
T = 450 # Constant temperature of the simulation (for mobility calculation)
wGB = 125 # Width of the diffuse GB
GBmob0 = 2.5e-6 # m^4(Js) for copper from Schoenfelder1997
Q = 0.23 # eV for copper from Schoenfelder1997
GBenergy = 0.708 # J/m^2 from Schoenfelder1997
[../]
[]
[Postprocessors]
# Scalar postprocessors
[./dt]
# Outputs the current time step
type = TimestepSize
[../]
[]
[Executioner]
# Uses newton iteration to solve the problem.
type = Transient # Type of executioner, here it is transient with an adaptive time step
scheme = bdf2 # Type of time integration (2nd order backward euler), defaults to 1st order backward euler
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart -mat_mffd_type'
petsc_options_value = 'hypre boomeramg 101 ds'
l_max_its = 30 # Max number of linear iterations
l_tol = 1e-4 # Relative tolerance for linear solves
nl_max_its = 40 # Max number of nonlinear iterations
nl_rel_tol = 1e-10 # Relative tolerance for nonlinear solves
start_time = 0.0
num_steps = 15
dt = 300
[]
[Problem]
type = FEProblem
[]
[Outputs]
csv = true
exodus = true
[./pg]
type = PerfGraphOutput
level = 2 # Default is 1
[../]
[]
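Stripped of the physics, every listing collected here uses the same two-block pattern: declare a constant monomial auxiliary variable and bind a ProcessorIDAux kernel to it. A minimal sketch of just that pattern (the variable name pid is arbitrary, and any of the execute_on settings seen in the listings will work):

[AuxVariables]
  [pid]
    # One value per element: the ID of the processor that owns it
    order = CONSTANT
    family = MONOMIAL
  []
[]

[AuxKernels]
  [pid]
    # Fills 'pid' with the owning processor ID
    type = ProcessorIDAux
    variable = pid
    execute_on = 'INITIAL'
  []
[]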
(modules/contact/test/tests/tension_release/8ElemTensionRelease.i)
[Mesh]
file = 8ElemTensionRelease.e
partitioner = centroid
centroid_partitioner_direction = x
[]
[GlobalParams]
volumetric_locking_correction = false
displacements = 'disp_x disp_y'
[]
[Functions]
[./up]
type = PiecewiseLinear
x = '0 1 2 3'
y = '0 0.0001 0 -.0001'
[../]
[]
[AuxVariables]
[./status]
[../]
[./pid]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Modules/TensorMechanics/Master]
[./all]
add_variables = true
strain = FINITE
[]
[]
[Contact]
[./dummy_name]
primary = 2
secondary = 3
penalty = 1e6
model = frictionless
tangential_tolerance = 0.01
[../]
[]
[AuxKernels]
[./pid]
type = ProcessorIDAux
variable = pid
execute_on = 'initial timestep_end'
[../]
[./status]
type = PenetrationAux
quantity = mechanical_status
variable = status
boundary = 3
paired_boundary = 2
execute_on = timestep_end
[../]
[]
[BCs]
[./lateral]
type = DirichletBC
variable = disp_x
boundary = '1 4'
value = 0
[../]
[./bottom_up]
type = FunctionDirichletBC
variable = disp_y
boundary = 1
function = up
[../]
[./top]
type = DirichletBC
variable = disp_y
boundary = 4
value = 0.0
[../]
[]
[Materials]
[./stiffStuff1]
type = ComputeIsotropicElasticityTensor
block = '1 2'
youngs_modulus = 1.0e6
poissons_ratio = 0.3
[../]
[./stiffStuff1_stress]
type = ComputeFiniteStrainElasticStress
block = '1 2'
[../]
[]
[Executioner]
type = Transient
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart'
petsc_options_value = 'hypre boomeramg 101'
line_search = 'none'
nl_rel_tol = 1e-8
nl_abs_tol = 1e-9
l_tol = 1e-4
l_max_its = 100
nl_max_its = 10
dt = 0.1
num_steps = 30
[./Predictor]
type = SimplePredictor
scale = 1.0
[../]
[]
[Outputs]
exodus = true
[]
(test/tests/partitioners/custom_partition_generated_mesh/custom_partition_generated_mesh.i)
[Mesh]
[generate_2d]
type = GeneratedMeshGenerator
dim = 2
nx = 10
ny = 10
[]
[extrude]
type = MeshExtruderGenerator
input = generate_2d
extrusion_vector = '0 0 1'
num_layers = 5
[]
[Partitioner]
type = GridPartitioner
nx = 1
ny = 1
nz = 4
[]
[]
[Variables]
[u]
[]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid]
type = ProcessorIDAux
variable = pid
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = left
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = right
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
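The GridPartitioner block in the file above carves the extruded mesh into a 1 x 1 x 4 grid of subdomains along the extrusion (z) direction, so the input is meant to be run on 1 * 1 * 4 = 4 MPI ranks (the number of grid cells is expected to match the rank count); the pid field then displays four slabs stacked through the extrusion.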
(modules/contact/test/tests/bouncing-block-contact/variational-frictional.i)
starting_point = 2e-1
# We offset slightly so that we avoid the case where the bottom of the secondary block and the top of the
# primary block are perfectly vertically aligned, which can give the backtracking line search some
# trouble on a coarse mesh (the basic line search handles that case fine)
offset = 1e-2
[GlobalParams]
displacements = 'disp_x disp_y'
diffusivity = 1e0
correct_edge_dropping = true
[]
[Mesh]
[file_mesh]
type = FileMeshGenerator
file = long-bottom-block-1elem-blocks-coarse.e
[]
[]
[Variables]
[disp_x]
block = '1 2'
scaling = 1e1
[]
[disp_y]
block = '1 2'
scaling = 1e1
[]
[frictional_normal_lm]
block = 4
scaling = 1e3
[]
[frictional_tangential_lm]
block = 4
scaling = 1e2
[]
[]
[ICs]
[disp_y]
block = 2
variable = disp_y
value = '${fparse starting_point + offset}'
type = ConstantIC
[]
[]
[Kernels]
[disp_x]
type = MatDiffusion
variable = disp_x
[]
[disp_y]
type = MatDiffusion
variable = disp_y
[]
[]
[AuxVariables]
[procid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[procid]
type = ProcessorIDAux
variable = procid
[]
[]
[Constraints]
[frictional_normal_lm]
type = ComputeFrictionalForceLMMechanicalContact
primary_boundary = 10
secondary_boundary = 20
primary_subdomain = 3
secondary_subdomain = 4
variable = frictional_normal_lm
friction_lm = frictional_tangential_lm
disp_x = disp_x
disp_y = disp_y
mu = 0.1
normalize_c = true
c = 1.0e-2
c_t = 1.0e-1
[]
[normal_x]
type = NormalMortarMechanicalContact
primary_boundary = 10
secondary_boundary = 20
primary_subdomain = 3
secondary_subdomain = 4
variable = frictional_normal_lm
secondary_variable = disp_x
component = x
use_displaced_mesh = true
compute_lm_residuals = false
[]
[normal_y]
type = NormalMortarMechanicalContact
primary_boundary = 10
secondary_boundary = 20
primary_subdomain = 3
secondary_subdomain = 4
variable = frictional_normal_lm
secondary_variable = disp_y
component = y
use_displaced_mesh = true
compute_lm_residuals = false
[]
[tangential_x]
type = TangentialMortarMechanicalContact
primary_boundary = 10
secondary_boundary = 20
primary_subdomain = 3
secondary_subdomain = 4
variable = frictional_tangential_lm
secondary_variable = disp_x
component = x
use_displaced_mesh = true
compute_lm_residuals = false
[]
[tangential_y]
type = TangentialMortarMechanicalContact
primary_boundary = 10
secondary_boundary = 20
primary_subdomain = 3
secondary_subdomain = 4
variable = frictional_tangential_lm
secondary_variable = disp_y
component = y
use_displaced_mesh = true
compute_lm_residuals = false
[]
[]
[BCs]
[botx]
type = DirichletBC
variable = disp_x
boundary = 40
value = 0.0
[]
[boty]
type = DirichletBC
variable = disp_y
boundary = 40
value = 0.0
[]
[topy]
type = FunctionDirichletBC
variable = disp_y
boundary = 30
function = '${starting_point} * cos(2 * pi / 40 * t) + ${offset}'
[]
[leftx]
type = FunctionDirichletBC
variable = disp_x
boundary = 50
function = '1e-2 * t'
[]
[]
[Executioner]
type = Transient
end_time = 200
dt = 5
dtmin = .1
solve_type = 'PJFNK'
petsc_options = '-snes_converged_reason -ksp_converged_reason'
petsc_options_iname = '-pc_type -pc_factor_shift_type -pc_factor_shift_amount'
petsc_options_value = 'lu NONZERO 1e-15'
l_max_its = 30
nl_max_its = 25
line_search = 'none'
nl_rel_tol = 1e-12
[]
[Debug]
show_var_residual_norms = true
[]
[Outputs]
exodus = true
[]
[Preconditioning]
[smp]
type = SMP
full = true
[]
[]
(test/tests/partitioners/hierarchical_grid_partitioner/hierarchical_grid_partitioner.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 8
[Partitioner]
type = HierarchicalGridPartitioner
nx_nodes = 2
ny_nodes = 2
nx_procs = 2
ny_procs = 2
[]
[]
[Variables/u]
[]
[Kernels/diff]
type = Diffusion
variable = u
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[AuxVariables/pid]
family = MONOMIAL
order = CONSTANT
[]
[Problem]
solve = false
[]
[AuxKernels/pid]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
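The HierarchicalGridPartitioner above partitions in two stages: first across compute nodes, then across the ranks within each node. As configured, that is nx_nodes * ny_nodes = 2 * 2 = 4 nodes with nx_procs * ny_procs = 2 * 2 = 4 ranks per node, i.e. 16 MPI ranks in total, each of which appears as its own constant value in the pid field.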
(modules/phase_field/test/tests/feature_flood_test/parallel_feature_count.i)
[Mesh]
type = ImageMesh
dim = 2
file = spiral_16x16.png
scale_to_one = false
[]
[Variables]
[./u]
order = CONSTANT
family = MONOMIAL
[../]
[]
[AuxVariables]
[./feature]
order = CONSTANT
family = MONOMIAL
[../]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[./feature_ghost]
order = CONSTANT
family = MONOMIAL
[../]
[]
[AuxKernels]
[./nodal_flood_aux]
type = FeatureFloodCountAux
variable = feature
flood_counter = flood_count_pp
execute_on = 'initial timestep_end'
[../]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
execute_on = 'initial timestep_end'
[../]
[./ghost]
type = FeatureFloodCountAux
variable = feature_ghost
field_display = GHOSTED_ENTITIES
flood_counter = flood_count_pp
execute_on = 'initial timestep_end'
[../]
[]
[Functions]
[./tif]
type = ImageFunction
component = 0
[../]
[]
[ICs]
[./u_ic]
type = FunctionIC
function = tif
variable = u
[../]
[]
[Postprocessors]
[./flood_count_pp]
type = FeatureFloodCount
variable = u
threshold = 1.0
execute_on = 'initial timestep_end'
[../]
[]
[Problem]
type = FEProblem
solve = false
[]
[Executioner]
type = Steady
[]
[Outputs]
csv = true
[]
(test/tests/auxkernels/ghosting_aux/no_algebraic_ghosting.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = GridPartitioner
nx = 2
ny = 2
[]
output_ghosting = true
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid]
type = ProcessorIDAux
variable = pid
[]
[]
[Problem]
default_ghosting = false
[]
(test/tests/relationship_managers/geometric_neighbors/geometric_edge_neighbors.i)
# This test will show 2 layers of geometric ghosting and 0 layers of evaluable
# ghosting. The 2 layers of geometric ghosting correspond to the 2 layers we
# have explicitly requested. There is no evaluable ghosting because we have not
# requested any algebraic or coupling functors.
[Mesh]
type = GeneratedMesh
dim = 2
nx = 8
ny = 8
# We are testing geometric ghosted functors,
# so we have to use a distributed mesh
parallel_type = distributed
[]
[GlobalParams]
order = CONSTANT
family = MONOMIAL
[]
[Variables]
[./u]
[../]
[]
[AuxVariables]
[ghosting0]
[]
[ghosting1]
[]
[ghosting2]
[]
[evaluable0]
[]
[evaluable1]
[]
[evaluable2]
[]
[proc]
[]
[]
[AuxKernels]
[ghosting0]
type = ElementUOAux
variable = ghosting0
element_user_object = ghosting_uo0
field_name = "ghosted"
execute_on = initial
[]
[ghosting1]
type = ElementUOAux
variable = ghosting1
element_user_object = ghosting_uo1
field_name = "ghosted"
execute_on = initial
[]
[ghosting2]
type = ElementUOAux
variable = ghosting2
element_user_object = ghosting_uo2
field_name = "ghosted"
execute_on = initial
[]
[evaluable0]
type = ElementUOAux
variable = evaluable0
element_user_object = ghosting_uo0
field_name = "evaluable"
execute_on = initial
[]
[evaluable1]
type = ElementUOAux
variable = evaluable1
element_user_object = ghosting_uo1
field_name = "evaluable"
execute_on = initial
[]
[evaluable2]
type = ElementUOAux
variable = evaluable2
element_user_object = ghosting_uo2
field_name = "evaluable"
execute_on = initial
[]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[]
[]
[UserObjects]
[ghosting_uo0]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 0
[]
[ghosting_uo1]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 1
[]
[ghosting_uo2]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 2
[]
[]
[Executioner]
type = Steady
[]
[Outputs]
exodus = true
[]
[Problem]
solve = false
kernel_coverage_check = false
[]
(modules/external_petsc_solver/test/tests/partition/moose_as_master.i)
[Mesh]
[gmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 20
ny = 21
partition = square
[]
[]
[Variables]
[./u]
[../]
[]
[AuxVariables]
[./v]
[../]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[./td]
type = TimeDerivative
variable = u
[../]
[./cf]
type = CoupledForce
coef = 10000
variable = u
v=v
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
boundary = left
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = right
value = 1
[../]
[]
[Executioner]
type = Transient
num_steps = 10
dt = 0.2
solve_type = 'PJFNK'
fixed_point_max_its = 10
fixed_point_rel_tol = 1e-8
fixed_point_abs_tol = 1e-9
nl_rel_tol = 1e-6
nl_abs_tol = 1e-12
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[Postprocessors]
[./picard_its]
type = NumFixedPointIterations
execute_on = 'initial timestep_end'
[../]
[]
[MultiApps]
[./sub_app]
type = TransientMultiApp
input_files = 'petsc_transient_as_sub.i'
app_type = ExternalPetscSolverApp
library_path = '../../../../external_petsc_solver/lib'
[../]
[]
[Transfers]
[./fromsub]
type = MultiAppMeshFunctionTransfer
from_multi_app = sub_app
source_variable = u
variable = v
[../]
[]
(modules/external_petsc_solver/test/tests/partition/petsc_transient_as_sub.i)
[Mesh]
# It is a mirror of the PETSc mesh (DMDA)
type = PETScDMDAMesh
[]
[AuxVariables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[Problem]
type = ExternalPETScProblem
sync_variable = u
[]
[Executioner]
type = Transient
[./TimeStepper]
type = ExternalPetscTimeStepper
[../]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
[Outputs]
exodus = true
[]
(modules/phase_field/test/tests/grain_growth/voronoi_adaptivity_ghost.i)
[Mesh]
[drmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 30
ny = 30
nz = 0
xmin = 0
xmax = 1000
ymin = 0
ymax = 1000
zmin = 0
zmax = 0
elem_type = QUAD4
partition = linear
[]
[]
[GlobalParams]
op_num = 4
var_name_base = gr
[]
[Variables]
[./PolycrystalVariables]
[../]
[]
[UserObjects]
[./voronoi]
type = PolycrystalVoronoi
rand_seed = 105
grain_num = 4
coloring_algorithm = bt
[../]
[]
[ICs]
[./PolycrystalICs]
[./PolycrystalColoringIC]
polycrystal_ic_uo = voronoi
[../]
[../]
[]
[AuxVariables]
[./bnds]
order = FIRST
family = LAGRANGE
[../]
[ghosting0]
order = CONSTANT
family = MONOMIAL
[]
[ghosting1]
order = CONSTANT
family = MONOMIAL
[]
[ghosting2]
order = CONSTANT
family = MONOMIAL
[]
[evaluable0]
order = CONSTANT
family = MONOMIAL
[]
[evaluable1]
order = CONSTANT
family = MONOMIAL
[]
[evaluable2]
order = CONSTANT
family = MONOMIAL
[]
[proc]
order = CONSTANT
family = MONOMIAL
[]
[]
[AuxKernels]
[ghosting0]
type = ElementUOAux
variable = ghosting0
element_user_object = ghosting_uo0
field_name = "ghosted"
execute_on = initial
[]
[ghosting1]
type = ElementUOAux
variable = ghosting1
element_user_object = ghosting_uo1
field_name = "ghosted"
execute_on = initial
[]
[ghosting2]
type = ElementUOAux
variable = ghosting2
element_user_object = ghosting_uo2
field_name = "ghosted"
execute_on = initial
[]
[evaluable0]
type = ElementUOAux
variable = evaluable0
element_user_object = ghosting_uo0
field_name = "evaluable"
execute_on = initial
[]
[evaluable1]
type = ElementUOAux
variable = evaluable1
element_user_object = ghosting_uo1
field_name = "evaluable"
execute_on = initial
[]
[evaluable2]
type = ElementUOAux
variable = evaluable2
element_user_object = ghosting_uo2
field_name = "evaluable"
execute_on = initial
[]
[proc]
type = ProcessorIDAux
variable = proc
execute_on = initial
[]
[]
[UserObjects]
[ghosting_uo0]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 0
[]
[ghosting_uo1]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 1
[]
[ghosting_uo2]
type = ElemSideNeighborLayersGeomTester
execute_on = initial
element_side_neighbor_layers = 2
rank = 2
[]
[]
[Kernels]
[./PolycrystalKernel]
[../]
[]
[AuxKernels]
[./BndsCalc]
type = BndsCalcAux
variable = bnds
execute_on = timestep_end
[../]
[]
[BCs]
[./Periodic]
[./All]
auto_direction = 'x y'
[../]
[../]
[]
[Materials]
[./Copper]
type = GBEvolution
T = 500 # K
wGB = 60 # nm
GBmob0 = 2.5e-6 #m^4/(Js) from Schoenfelder 1997
Q = 0.23 #Migration energy in eV
GBenergy = 0.708 #GB energy in J/m^2
[../]
[]
[Postprocessors]
active = ''
[./ngrains]
type = FeatureFloodCount
variable = bnds
threshold = 0.7
[../]
[]
[Preconditioning]
active = ''
[./SMP]
type = SMP
full = true
[../]
[]
[Executioner]
type = Transient
scheme = 'bdf2'
solve_type = 'PJFNK'
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart'
petsc_options_value = 'hypre boomeramg 31'
l_tol = 1.0e-4
l_max_its = 30
nl_max_its = 20
nl_rel_tol = 1.0e-13
start_time = 0.0
num_steps = 2
dt = 80.0
[./Adaptivity]
initial_adaptivity = 2
refine_fraction = 0.7
coarsen_fraction = 0.1
max_h_level = 1
[../]
[]
[Outputs]
exodus = true
[]
(test/tests/auxkernels/ghosting_aux/ghosting_aux.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = GridPartitioner
nx = 2
ny = 2
[]
output_ghosting = true
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid]
type = ProcessorIDAux
variable = pid
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[Outputs]
exodus = true
[]
[Problem]
default_ghosting = true
[]
(test/tests/partitioners/petsc_partitioner/petsc_partitioner.i)
[Mesh]
type = GeneratedMesh
dim = 2
nx = 10
ny = 10
[Partitioner]
type = PetscExternalPartitioner
part_package = parmetis
[]
parallel_type = distributed
# Need a fine enough mesh to get a good partition
uniform_refine = 1
[]
[Variables]
[u]
[]
[]
[Kernels]
[diff]
type = Diffusion
variable = u
[]
[]
[BCs]
[left]
type = DirichletBC
variable = u
boundary = 'left'
value = 0
[]
[right]
type = DirichletBC
variable = u
boundary = 'right'
value = 1
[]
[]
[Executioner]
type = Steady
solve_type = PJFNK
petsc_options_iname = '-pc_type -pc_hypre_type'
petsc_options_value = 'hypre boomeramg'
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[npid]
family = Lagrange
order = first
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[npid_aux]
type = ProcessorIDAux
variable = npid
execute_on = 'INITIAL'
[]
[]
[Postprocessors]
[sum_sides]
type = StatVector
stat = sum
object = nl_wb_element
vector = num_partition_sides
[]
[min_elems]
type = StatVector
stat = min
object = nl_wb_element
vector = num_elems
[]
[max_elems]
type = StatVector
stat = max
object = nl_wb_element
vector = num_elems
[]
[]
[VectorPostprocessors]
[nl_wb_element]
type = WorkBalance
execute_on = initial
system = nl
balances = 'num_elems num_partition_sides'
outputs = none
[]
[]
[Outputs]
exodus = true
[out]
type = CSV
execute_on = FINAL
[]
[]
(test/tests/mesh/centroid_partitioner/centroid_partitioner_test.i)
###########################################################
# This test exercises the parallel computation aspect of
# the framework. A Centroid partitioner is used to split
# the mesh into chunks for several processors along a
# vector (y-axis).
#
# @Requirement F2.30
###########################################################
[Mesh]
[gen]
type = GeneratedMeshGenerator
dim = 2
nx = 10
ny = 100
xmin = 0.0
xmax = 1.0
ymin = 0.0
ymax = 10.0
[]
# The centroid partitioner orders elements based on
# the position of their centroids
partitioner = centroid
# This will order the elements based on the y value of
# their centroids, which suits meshes that extend
# predominantly in one direction
centroid_partitioner_direction = y
# The centroid partitioner behaves differently depending on
# whether you are using ReplicatedMesh or DistributedMesh, so to get
# repeatable results, we restrict this test to ReplicatedMesh.
parallel_type = replicated
[]
[Variables]
active = 'u'
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[./proc_id]
order = CONSTANT
family = MONOMIAL
[../]
[]
[Kernels]
active = 'diff'
[./diff]
type = Diffusion
variable = u
[../]
[]
[AuxKernels]
[./proc_id]
type = ProcessorIDAux
variable = proc_id
[../]
[]
[BCs]
active = 'left right'
[./left]
type = DirichletBC
variable = u
boundary = 3
value = 0
[../]
[./right]
type = DirichletBC
variable = u
boundary = 1
value = 1
[../]
[]
[Executioner]
type = Steady
solve_type = 'PJFNK'
[]
[Outputs]
file_base = out
[./exodus]
type = Exodus
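# Output elemental variables (such as the constant monomial proc_id) as nodal variables in the Exodus file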
elemental_as_nodal = true
[../]
[]
(test/tests/meshgenerators/distributed_rectilinear/partition/squarish_partition.i)
[Mesh]
[gmg]
type = DistributedRectilinearMeshGenerator
dim = 2
nx = 20
ny = 30
partition = square
[]
[]
[Variables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[npid]
family = Lagrange
order = first
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[npid_aux]
type = ProcessorIDAux
variable = npid
execute_on = 'INITIAL'
[]
[]
[Kernels]
[./diff]
type = Diffusion
variable = u
[../]
[]
[BCs]
[./left]
type = DirichletBC
variable = u
preset = false
boundary = 'left'
value = 0
[../]
[./right]
type = DirichletBC
variable = u
preset = false
boundary = 'right'
value = 1
[../]
[]
[Executioner]
type = Steady
petsc_options_iname = '-pc_type'
petsc_options_value = 'hypre'
solve_type = 'NEWTON'
[]
[Outputs]
exodus = true
[]
(modules/external_petsc_solver/test/tests/external_petsc_problem/petsc_transient_as_sub.i)
[Mesh]
# It is a mirror of the PETSc mesh (DMDA)
type = PETScDMDAMesh
[]
[AuxVariables]
[./u]
order = FIRST
family = LAGRANGE
[../]
[]
[Problem]
type = ExternalPETScProblem
sync_variable = u
[]
[Executioner]
type = Transient
[./TimeStepper]
type = ExternalPetscTimeStepper
[../]
[]
[AuxVariables]
[pid]
family = MONOMIAL
order = CONSTANT
[]
[]
[AuxKernels]
[pid_aux]
type = ProcessorIDAux
variable = pid
execute_on = 'INITIAL'
[]
[]
[Outputs]
exodus = true
[]
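Most of the listings above pair ProcessorIDAux with an elemental (constant monomial) variable, which colors each element by the rank that owns it. A few (petsc_partitioner.i, squarish_partition.i) additionally bind the same kernel to a first-order Lagrange variable, in which case the recorded value is the ID of the processor that owns each node instead. A minimal sketch of that nodal variant (the name npid simply follows those listings):

[AuxVariables]
  [npid]
    # One value per node: the ID of the processor that owns it
    order = FIRST
    family = LAGRANGE
  []
[]

[AuxKernels]
  [npid]
    type = ProcessorIDAux
    variable = npid
    execute_on = 'INITIAL'
  []
[]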