Nek5000/RS CFD Modeling of HTTF Lower Plenum Flow Mixing Phenomenon

Contact: Jun Fang, fangj.at.anl.gov

Model link: HTTF Lower Plenum CFD Model

Overview

Accurate modeling and simulation capabilities are becoming increasingly important to speed up the development and deployment of advanced nuclear reactor technologies, such as high temperature gas-cooled reactors. Among the identified safety-relevant phenomena for gas-cooled reactors (GCRs), the outlet plenum flow distribution was ranked of high importance but low knowledge level in the phenomenon identification and ranking table (PIRT) (Ball et al., 2008). The heated coolant (e.g., helium) flows downward through the GCR core region and enters the outlet/lower plenum through narrow channels, which causes jetting of the gas flow. The jets have non-uniform temperatures and risk producing high cycling thermal stresses in the lower plenum, adverse pressure gradients opposing flow ingress, and hot streaking. These phenomena cannot be accurately captured by 1-D system codes; the modeling instead requires higher-fidelity CFD codes that can predict the temperature fluctuations in the HTTF lower plenum. In this study, a detailed CFD model is established for the lower plenum of a scaled, electrically heated GCR test facility, the High Temperature Test Facility (HTTF) at Oregon State University. Specifically, the flow mixing phenomenon in the HTTF lower plenum is simulated with the spectral element CFD code Nek5000 and its GPU-oriented variant nekRS. Turbulence effects are modeled with a two-equation (k-tau) URANS model. Two sets of boundary conditions are studied, one in the Nek5000 simulations and one in the nekRS simulations. The velocity and temperature fields are examined to understand the thermal-fluid physics of the lower plenum mixing. As part of the HTTF international benchmark campaign, the simulation results generated in this study will be used for code-to-code and code-to-data comparisons in the near future, laying a solid foundation for the use of CFD in GCR research and development.

Note

This documentation assumes that the reader has basic knowledge of Nek5000 and is able to run the example cases provided in the Nek5000 tutorial. The specific Nek5000 version used here is v19.0. Although the Nek5000 input files are not tested regularly in the MOOSE CI test suite, Nek5000 offers reliable backward compatibility, and no compatibility issues are expected. In the rare event that a compatibility issue does arise, please reach out to the Nek5000 developer team via the Nek5000 Google Group. Meanwhile, nekRS is still under active development, and only a beta version of the nekRS online tutorial is available.

Geometry and Meshing

The HTTF lower plenum (i.e., core outlet plenum) geometry is illustrated in Figure 1; it contains 163 ceramic cylindrical posts on which the core structure stands. It is enclosed by the lower side reflectors, the lower plenum floor, and the lower plenum roof. The height of the lower plenum is 22.2 cm. Helium gas enters the lower plenum through the core coolant channels and exits through the horizontal hot duct. There are 234 inlet channels, each with a diameter of 1 inch (2.54 cm). A cylindrical hot duct, which functions as the outlet channel, is connected to the lower plenum. Rakes are installed in the hot duct to house measurement instruments during the experiments.

Figure 1: Overview of the HTTF lower plenum geometry.

Nek5000/RS requires a pure hexahedral mesh to perform the calculations. Due to the domain complexity, it is challenging to create an unstructured mesh with only hexahedral cells, so a two-step approach is adopted. We first used the commercial software ANSYS Mesh to generate an unstructured computational mesh consisting of tetrahedral cells (in the bulk) and wedge cells (in the boundary layer region), and then converted it into a purely hexahedral mesh using a native quadratic tet-to-hex utility. To ensure the accuracy of the wall-resolved RANS calculations, the first layer of grid points off the wall is placed close enough to resolve the viscous sublayer. As shown in Figure 2, the resulting mesh has 2.60 million cells, and the total number of degrees of freedom is approximately 72.86 million in the Nek5000 simulations with a polynomial order of 3. Note that the polynomial order is kept relatively low to limit the computational costs. It can be readily increased for higher-fidelity simulations, such as a Large Eddy Simulation (LES), given the necessary computing hours.

Figure 2: The hexahedral mesh of HTTF lower plenum from the tet-to-hex conversion.
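
The polynomial order mentioned above is fixed at compile time in the SIZE file through the parameter lx1, the number of Gauss-Lobatto-Legendre (GLL) points per element edge (N+1). A minimal sketch follows; the values are representative, not copied from the repository's actual SIZE file.

c     Illustrative SIZE excerpt (representative values, not the actual file)
      parameter (lx1=4)       ! N = lx1-1 = 3 GLL points per direction
      parameter (lxd=6)       ! dealiasing grid, roughly 3/2 * lx1
      parameter (lx2=lx1-2)   ! pressure grid for the PN/PN-2 formulation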

Nek5000 Investigation

Boundary conditions

The boundary conditions adopted in the Nek5000 simulations are taken from the corresponding system modeling of the HTTF using RELAP5-3D (Halsted, 2022). Based on their radial locations, the 234 inlet channels are divided into five rings, as shown in Figure 3, and each ring has a specific mass flow rate and temperature. Inside the lower plenum (LP), the floor, roof, posts, and side reflector walls each have distinct temperatures; the LP roof has the highest temperature while the side walls have the lowest. Details of the thermal boundary conditions are listed in Table 1. Inlet channels within a given ring are assumed to have a fixed inflow velocity according to the ring's mass flow rate, and a no-slip condition is imposed on all wall surfaces. A natural pressure condition is applied at the outlet face of the hot duct.

Figure 3: Inlet boundary conditions of the five inlet rings/groups: temperature (left) and velocity (right).

Table 1: Boundary conditions specified for the HTTF lower plenum CFD simulations.

Boundary | BC Type | Value
Ring 1 | Inlet mass flow rate (g/s), temperature (K) | 0.651, 562.2
Ring 2 | Inlet mass flow rate (g/s), temperature (K) | 4.919, 561.8
Ring 3 | Inlet mass flow rate (g/s), temperature (K) | 7.423, 541.3
Ring 4 | Inlet mass flow rate (g/s), temperature (K) | 7.704, 512.0
Ring 5 | Inlet mass flow rate (g/s), temperature (K) | 2.337, 471.6
LP Roof | Wall temperature (K) | 543.48
LP Floor | Wall temperature (K) | 494.65
Side Reflectors | Wall temperature (K) | 452.37
LP Posts | Wall temperature (K) | 526.39
Hot duct | Wall temperature (K) | 496.64
Rakes and all other walls | Adiabatic | N/A

The mean LP pressure is 100.5 kPa, at which the helium gas has a density of 0.1004 kg/m³. A non-dimensionalization is performed for the Nek5000 calculations. The reference velocity is 3.47 m/s, the highest inlet velocity, observed for the Ring 3 inlet channels. The temperature difference with respect to the side reflector wall temperature (452.37 K) is normalized by (T_max − 452.37 K), i.e., θ = (T − 452.37 K)/(562.2 K − 452.37 K), where the maximum temperature of 562.2 K is from the Ring 1 inlet channels. During post-processing, the CFD results can easily be converted back to dimensional quantities for further analysis.
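
As a concrete example of that conversion, a minimal sketch of mapping the non-dimensional temperature back to kelvin is shown below; the function name is hypothetical, while the reference values follow Table 1.

c     Hedged sketch: dimensionalize the normalized temperature theta
      real function t_dim(theta)
      real theta
      real Tref, Tmax
      parameter (Tref = 452.37, Tmax = 562.2)  ! K: side reflector / Ring 1
      t_dim = Tref + theta*(Tmax - Tref)       ! e.g., theta = 0.175 -> 471.6 K
      return
      end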

Nek5000 case setups

The reader can find the Nek5000 case files and the associated HTTF lower plenum mesh files (through GitHub LFS) in the VTB repository. The mesh information is contained in two files: httf.re2 (grid coordinates, sideset ids, etc.) and httf.co2 (mesh cell connectivity). As for the Nek5000 case files, there are SIZE, httf.par, httf.usr, and two auxiliary files, limits.f and utilities.f, that host various post-processing functions such as solution monitoring and planar averaging. SIZE is the metafile that specifies the primary discretization parameters. httf.usr is the script where users specify the boundary conditions, the turbulence modeling approach, and the post-processing. httf.par provides the simulation input parameters, such as material properties, time step size, and Reynolds number.
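
For orientation, a minimal httf.par sketch is shown below. The keys follow Nek5000's par-file format, but the values are representative assumptions rather than the actual case settings.

# Illustrative .par sketch -- values are assumptions, not the shipped file
[GENERAL]
stopAt = endTime
endTime = 5.6               # non-dimensional time units
variableDt = yes
targetCFL = 1.75
extrapolation = OIFS        # characteristics-based time stepping

[VELOCITY]
density = 1.0
viscosity = -10000.         # negative value is interpreted as a Reynolds number

[TEMPERATURE]
rhoCp = 1.0
conductivity = -6700.       # negative value is interpreted as a Peclet number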

Now let's dive into the most important case file, httf.usr. The sideset ids contained in the mesh file are first translated into CFD boundary condition settings in the usrdat2 block:

      subroutine usrdat2()  ! This routine to modify mesh coordinates
      implicit none
      include 'SIZE'
      include 'TOTAL'

      real wd
      common /walldist/ wd(lx1,ly1,lz1,lelv)

      integer iel, ifc, id_face
      integer i, e, f, n
      real Dhi

      do iel=1,nelv
      do ifc=1,2*ndim
        id_face = bc(5,ifc,iel,1)
        if (id_face.eq.3) then        ! sidesets 3-7: inlet rings 1-5
           cbc(ifc,iel,1) = 'v  '
        elseif (id_face.eq.4) then
           cbc(ifc,iel,1) = 'v  '
        elseif (id_face.eq.5) then
           cbc(ifc,iel,1) = 'v  '
        elseif (id_face.eq.6) then
           cbc(ifc,iel,1) = 'v  '
        elseif (id_face.eq.7) then
           cbc(ifc,iel,1) = 'v  '
        elseif (id_face.eq.8) then    ! sidesets 8-13: wall surfaces
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.9) then
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.10) then
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.11) then
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.12) then
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.13) then
           cbc(ifc,iel,1) = 'W  '
        elseif (id_face.eq.14) then   ! sideset 14: outlet
           cbc(ifc,iel,1) = 'O  '
        endif
      enddo
      enddo

      do i=2,ldimt1
      do e=1,nelt
      do f=1,ldim*2
        cbc(f,e,i)=cbc(f,e,1)
        id_face = bc(5,f,e,1)
        if(cbc(f,e,1).eq.'W  ') cbc(f,e,i)='t  '
        if(cbc(f,e,1).eq.'v  ') cbc(f,e,i)='t  '
        if(i.eq.2 .and. id_face.eq.12) cbc(f,e,i)='I  ' ! sideset 12 (rakes and other walls): insulated temperature BC
      enddo
      enddo
      enddo

      return
      end
c-----------------------------------------------------------------------
(htgr/httf/lower_plenum_mixing/httf.usr)

The URANS turbulence model is specified in usrdat3, where the standard k-tau model (m_id = 4) is selected:

      subroutine usrdat3()
      implicit none
      include 'SIZE'
      include 'TOTAL'

      real wd
      common /walldist/ wd(lx1,ly1,lz1,lelv)

      integer ifld_k,ifld_omg,m_id,w_id
      real coeffs(21) !array for passing your own coeffs
      logical ifcoeffs

      ifld_k = 3 !address of tke equation in t array
      ifld_omg = 4 !address of omega equation in t array
      ifcoeffs=.false. !set to true to pass your own coeffs

C     Supported models:
c     m_id = 0 !regularized high-Re k-omega (no wall functions)
c     m_id = 1 !regularized low-Re k-omega
c     m_id = 2 !regularized high-Re k-omega SST (no wall functions)
c     m_id = 3 !regularized low-Re k-omega SST
      m_id = 4 !standard k-tau model
c     m_id = 5 !low-Re   k-tau

C     Wall distance function:
c     w_id = 0 ! user specified
c     w_id = 1 ! cheap_dist (path to wall, may work better for periodic boundaries)
      w_id = 2 ! distf (coordinate difference, provides smoother function)

      call rans_init(ifld_k,ifld_omg,
     &               ifcoeffs,coeffs,w_id,wd,m_id)

      call count_boundaries

      return
      end
C-----------------------------------------------------------------------
(htgr/httf/lower_plenum_mixing/httf.usr)

The specific inlet temperatures and velocities are implemented in userbc with the non-dimensionalized values:

      if(id_face.eq.3) then !Ring 1
        uy = -0.613902735
        if(ifield.eq.2) temp = 1.0
      elseif(id_face.eq.4) then !Ring 2
        uy = -0.927738111
        if(ifield.eq.2) temp = 0.996358008
      elseif(id_face.eq.5) then !Ring 3
        uy = -1.0
        if(ifield.eq.2) temp = 0.809705909
      elseif(id_face.eq.6) then !Ring 4
        uy = -0.36324936
        if(ifield.eq.2) temp = 0.542929983
      elseif(id_face.eq.7) then !Ring 5
        uy = -0.367304324
        if(ifield.eq.2) temp = 0.175088774

      elseif(id_face.eq.8) then !LP Roof
        if(ifield.eq.2) temp = 0.829554766
      elseif(id_face.eq.9) then !LP Floor
        if(ifield.eq.2) temp = 0.384958572
      elseif(id_face.eq.11) then !LP Posts
        if(ifield.eq.2) temp = 0.673950651
      elseif(id_face.eq.13) then !Hot Duct
        if(ifield.eq.2) temp = 0.403077483
      endif
(htgr/httf/lower_plenum_mixing/httf.usr)
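
For context, the snippet above executes inside Nek5000's userbc routine, with id_face recovered from the bc array just as in usrdat2. A skeletal sketch follows; the surrounding details are assumed, not copied from the case file.

c     Skeletal context sketch: NEKUSE provides ux, uy, uz, and temp
      subroutine userbc(ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      integer ix,iy,iz,iside,ieg,iel,id_face

      iel = gllel(ieg)              ! global-to-local element index
      id_face = bc(5,iside,iel,1)   ! sideset id stored in the bc array

      ux = 0.0
      uy = 0.0
      uz = 0.0
c     ... ring-by-ring velocity/temperature assignments shown above ...

      return
      end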

Nek5000 simulation results

The aforementioned CFD cases were carried out on the Sawtooth supercomputer at Idaho National Laboratory. A typical simulation job uses 64 compute nodes with 48 parallel processes per node. Characteristics-based time stepping is used in the simulations to avoid the limitation on the Courant-Friedrichs-Lewy (CFL) number imposed by the explicit treatment of the non-linear convection term; this treatment is necessary because of the small local mesh cells created during the unstructured meshing and tet-to-hex conversion. A maximum CFL number of 1.75 is reached. A total non-dimensional simulation time of 5.6 time units is achieved. Due to the nature of the unsteady RANS turbulence model, only a quasi-steady state can be reached: velocity fluctuations are observed as the helium flow enters the lower plenum and passes through the LP posts.

Snapshots of the velocity and temperature fields at quasi-steady state, on a horizontal plane through the lower plenum, are shown in Figure 4 and Figure 5. Significant fluctuations are observed in both the velocity and the temperature.

Figure 4: The quasi-steady-state velocity field (non-dimensional) in the lower plenum.

Figure 5: The quasi-steady-state temperature field (non-dimensional) in the lower plenum.

In addition to the instantaneous solutions, a time-averaging analysis is conducted over the simulation results from 3.8 to 5.6 time units. Figure 6 and Figure 7 illustrate the smooth profiles of the time-averaged velocity and temperature fields, respectively. Large velocity differences are observed, caused by the geometric constraints in the lower plenum; the temperature, by contrast, is fairly uniform over most of the lower plenum. To quantify the modeling uncertainties in related applications, the Nek5000 calculations will be compared with predictions from other CFD codes. This comparative analysis will provide insight into the accuracy and reliability of the simulation results, help identify potential sources of error in the modeling process, and guide the refinement of modeling techniques and simulation settings toward better agreement with experimental data.

Figure 6: Time-averaged velocity field (non-dimensional) in the lower plenum.

Figure 7: Time-averaged temperature field (non-dimensional) in the lower plenum.

Nek5000 run script

To submit a Nek5000 simulation job on Sawtooth, the following batch script is used. The user can adjust the number of nodes, the number of processes per node, and the requested wall time for any specific case.


#!/bin/bash
#PBS -N httf
#PBS -l select=64:ncpus=48:mpiprocs=48
#PBS -l walltime=01:00:00
#PBS -j oe
#PBS -P neams
#PBS -o httf.out
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=1
echo  httf > SESSION.NAME
echo `pwd`'/' >> SESSION.NAME
rm -rf *.sch
rm -rf ioinfo
module purge
module load intel-mpi/2018.5.288-intel-19.0.5-alnu
mpirun ./nek5000 > logfile
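
Assuming the script above is saved as httf.pbs (the filename is hypothetical), a typical submit-and-monitor sequence on a PBS system looks like:

qsub httf.pbs        # submit the batch job
qstat -u $USER       # check its queue status
tail -f logfile      # follow the solver output once the job starts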

NekRS Investigation

We initiated our simulation efforts with the Nek5000 solver but transitioned to NekRS for its marked gain in computational efficiency. In the era of exascale computing, leadership-class supercomputers are shifting toward accelerated computing with graphics processing units (GPUs). To fully harness this heterogeneous computing hardware, NekRS has emerged as an exascale-ready thermal-fluids solver for incompressible and low-Mach-number flows. Like its predecessor Nek5000, it is based on the spectral element method and scales to very large processor counts. The principal difference is that Nek5000 is based on F77/C, coupled with MPI and fast matrix-matrix product routines for performance, while NekRS is based on C++, coupled with MPI and OCCA kernels for GPU support. The Open Concurrent Compute Abstraction (OCCA) provides backends for CUDA, HIP, OpenCL, and DPC++, giving performance portability across the major GPU vendors. We note that, at 80% parallel efficiency, which corresponds to 2-3 million grid points per MPI rank, NekRS runs at 0.1 to 0.3 seconds of wall-clock time per time step, nearly a 3x improvement over Nek5000 production runs at a similar 80% strong-scaling limit on CPU-based platforms.

Boundary conditions

The boundary conditions of the nekRS study are taken from the corresponding system modeling of the HTTF primary loop using RELAP5-3D conducted by Canadian Nuclear Laboratories (Podila et al., 2022). The reference lower plenum pressure is 211.9 kPa, and the helium gas has a density of 0.1075 kg/m³ at the reference temperature of 895.7 K. Table 2 summarizes the key thermophysical properties of the helium flow used in the NekRS simulations. Based on their radial locations and polar angles, the 234 inlet channels are divided into 32 groups, as shown in Figure 8, and each group has a specific mass flow rate and temperature. Inlet channels within a given group are assumed to have the same inflow velocity, corresponding to the group's mass flow rate. Details of the inlet boundary conditions are listed in Table 3, and Figure 9 visually illustrates the boundary conditions enumerated there.

Figure 8: The grouping of lower plenum inlet channels.

Table 2: Helium thermo-physical properties and flow conditions.

Quantity | Value | Unit
Reference pressure | 211.9 | kPa
Reference temperature | 895.7 | K
Helium density | 0.1075 | kg/m³
Helium heat capacity | 5.193×10³ | J/(kg·K)
Helium viscosity | 4.274×10⁻⁵ | Pa·s
Helium thermal conductivity | 0.3323 | W/(m·K)
Helium Prandtl number | 0.67 | —
Hot duct diameter | 0.2984 | m
Mean flow velocity at hot duct | 41.48 | m/s
Reynolds number at hot duct | 3.1137×10⁴ | —

All wall surfaces are assigned a no-slip velocity boundary condition and an adiabatic thermal boundary condition, and a natural pressure condition is applied at the outlet face of the hot duct. The CFD simulations are carried out in non-dimensional form. The reference velocity is the mean flow velocity at the hot duct, 41.48 m/s. Temperature is normalized as (T − T_min)/(T_max − T_min), with the maximum inlet temperature T_max = 984.34 K at the CG0 group and the minimum inlet temperature T_min = 810.05 K at the outer bypass group. During post-processing, it is straightforward to convert the CFD results back into dimensional values for further analysis.

Table 3: Inlet boundary conditions for the HTTF lower plenum simulations.

Zone ID | Velocity (m/s) | Temperature (K)
CG0 | 30.389 | 984.34
CG-1A | 23.513 | 924.658
CG-2A | 25.709 | 889.08
CG-3A | 26.618 | 944.901
CG-4A | 26.405 | 926.654
CG-5A | 21.83 | 866.424
CG-1B | 23.513 | 925.415
CG-2B | 25.69 | 890.425
CG-3B | 26.61 | 947.609
CG-4B | 26.446 | 932.229
CG-5B | 21.828 | 867.567
CG-1C | 23.511 | 924.788
CG-2C | 25.702 | 889.276
CG-3C | 26.608 | 945.155
CG-4C | 26.394 | 926.902
CG-5C | 21.825 | 866.606
CG-1D | 23.52 | 922.995
CG-2D | 25.794 | 886.395
CG-3D | 26.772 | 941.098
CG-4D | 26.586 | 922.794
CG-5D | 21.91 | 863.558
CG-1E | 23.517 | 920.054
CG-2E | 25.9 | 881.331
CG-3E | 27.101 | 932.826
CG-4E | 27.129 | 913.384
CG-5E | 22.303 | 856.71
CG-1F | 23.521 | 922.956
CG-2F | 25.796 | 886.341
CG-3F | 26.775 | 941.027
CG-4F | 26.589 | 922.725
CG-5F | 21.911 | 863.508
Outer bypass | 20.669 | 810.051

Figure 9: Boundary conditions: (a) prescribed inlet velocity and no-slip wall condition; (b) prescribed inlet temperature and adiabatic walls.

NekRS case setups

Readers can access the nekRS case files and the corresponding mesh files in the VTB repository via GitHub Large File Storage (LFS). The mesh data is contained in two files: httf.re2 (which includes grid coordinates and sideset ids) and httf.co2 (containing mesh cell connectivity information). Regarding the nekRS case files, there are four basic files:

  • httf.udf serves as the primary nekRS kernel file and encompasses algorithms executed on the hosts. It's responsible for configuring RANS model settings, collecting time-averaged statistics, and determining the frequency of calls to userchk, which is defined in the usr file.

  • httf.oudf is a supplementary file housing kernel functions for devices and also plays a role in specifying boundary conditions.

  • httf.usr is a legacy file inherited from Nek5000 and can be utilized to establish initial conditions and define post-processing capabilities.

  • httf.par is employed to input simulation parameters, including material properties, time step size, and Reynolds number.

There are also supporting scripts in the case folder. linearize_bad_elements.f and BAD_ELEMENTS are used to fix mesh cells that may have negative Jacobians resulting from the quadratic tet-to-hex conversion. The data file InletProf.dat contains the fully developed turbulent flow solution of a circular pipe, which is used to prescribe the profiles of velocity, TKE, and tau at the HTTF inlet channels.
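
As a rough illustration of how such a profile file can be ingested on the usr side, consider the sketch below. The subroutine name and the four-column layout (radial position, velocity, k, tau) are assumptions; consult getinlet in httf.usr for the actual file format and interpolation.

c     Hedged sketch: read a precomputed pipe-flow profile from InletProf.dat
      subroutine read_inlet_profile(npts,rr,uu,kk,tt)
      integer npts, i
      real rr(npts), uu(npts), kk(npts), tt(npts)

      open(unit=99,file='InletProf.dat',status='old')
      do i=1,npts
         read(99,*) rr(i), uu(i), kk(i), tt(i)   ! assumed column order
      enddo
      close(99)

      return
      end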

As in the Nek5000 case, the most important case file is httf.usr. The sideset ids contained in the mesh file are first translated into CFD boundary condition settings in the usrdat2 block:

      subroutine usrdat2()  ! This routine to modify mesh coordinates
      include 'SIZE'
      include 'TOTAL'

      integer iel, ifc, id_face
      integer i, e, f

      do iel=1,nelv
      do ifc=1,2*ndim
        id_face = bc(5,ifc,iel,1)
        if (id_face.ge.5.and.id_face.le.11) then        !CG0-CG5 and OB
           cbc(ifc,iel,1) = 'v  '
           boundaryID(ifc,iel)  = 1
           boundaryIDt(ifc,iel) = 1
        elseif (id_face.eq.4) then 			!walls
           cbc(ifc,iel,1) = 'W  '
           boundaryID(ifc,iel)  = 2
           boundaryIDt(ifc,iel) = 2
        elseif (id_face.eq.3) then                 	!outlet
           cbc(ifc,iel,1) = 'o  '
           boundaryID(ifc,iel)  = 3
           boundaryIDt(ifc,iel) = 3
        else
           cbc(ifc,iel,1) = 'E  '
           boundaryID(ifc,iel)  = 0
           boundaryIDt(ifc,iel) = 0
        endif
      enddo
      enddo

      do i=2,ldimt1
      do e=1,nelt
      do f=1,ldim*2
        cbc(f,e,i)=cbc(f,e,1)
        if(cbc(f,e,1).eq.'W  ') then
           cbc(f,e,i)='t  '
           if(i.eq.2) cbc(f,e,i)='I  '	!Insulated temp BC on walls
        endif
        if(cbc(f,e,1).eq.'v  ') cbc(f,e,i)='t  '
        if(cbc(f,e,1).eq.'o  ') cbc(f,e,i)='o  '
      enddo
      enddo
      enddo

      return
      end
c-----------------------------------------------------------------------
(htgr/httf/lower_plenum_mixing/nekrs_case/httf.usr)

The fix for negative-Jacobian elements in the mesh is invoked in usrdat:

      subroutine usrdat   ! This routine to modify element vertices
      include 'SIZE'      ! _before_ mesh is generated, which 
      include 'TOTAL'     ! guarantees GLL mapping of mesh.

      integer n_correction
      real minScaleJacobian

      n_correction = 2    ! iterations for correction; 2 is usually enough, otherwise set to 3
      call linearize_bad_elements(n_correction)

      minScaleJacobian = 1e-3
      call linearize_low_jacobian_elements(n_correction,
     & minScaleJacobian) ! linearize elements whose scaled Jacobian is below minScaleJacobian

      return
      end
c-----------------------------------------------------------------------
(htgr/httf/lower_plenum_mixing/nekrs_case/httf.usr)

As for the implementation of the inlet boundary conditions, a set of data arrays is created in the usr file and passed over to the udf and oudf files. On the usr side, the main subroutine is getinlet, which assembles the non-dimensionalized inlet velocity, temperature, k, and tau into the following arrays:

      real vin,t1in,t2in,t3in
      common /inarrs/
     $     vin(lx1,ly1,lz1,lelv),
     $     t1in(lx1,ly1,lz1,lelv),
     $     t2in(lx1,ly1,lz1,lelv),
     $     t3in(lx1,ly1,lz1,lelv)
(htgr/httf/lower_plenum_mixing/nekrs_case/httf.usr)

The corresponding information is transferred to httf.udf in:

  //Inlet condition
  nrs->o_usrwrk = platform->device.malloc(4*nrs->fieldOffset,sizeof(dfloat));
  double *vin  = (double *) nek::scPtr(1);
  double *t1in = (double *) nek::scPtr(2);
  double *t2in = (double *) nek::scPtr(3);
  double *t3in = (double *) nek::scPtr(4);
  nrs->o_usrwrk.copyFrom(vin, mesh->Nlocal*sizeof(dfloat),0*nrs->fieldOffset*sizeof(dfloat));
  nrs->o_usrwrk.copyFrom(t1in,mesh->Nlocal*sizeof(dfloat),1*nrs->fieldOffset*sizeof(dfloat));
  nrs->o_usrwrk.copyFrom(t2in,mesh->Nlocal*sizeof(dfloat),2*nrs->fieldOffset*sizeof(dfloat));
  nrs->o_usrwrk.copyFrom(t3in,mesh->Nlocal*sizeof(dfloat),3*nrs->fieldOffset*sizeof(dfloat));
}
(htgr/httf/lower_plenum_mixing/nekrs_case/httf.udf)
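
Here nek::scPtr(i) returns a host pointer to the i-th scratch array populated by getinlet on the Fortran side, and each field is copied into its own fieldOffset-sized slice of the device array o_usrwrk, which the boundary kernels in httf.oudf then index by the same offsets.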

Then the velocity and passive scalar (temperature, k, and tau) boundary conditions are applied in httf.oudf:

void velocityDirichletConditions(bcData *bc)
{
  bc->u = 0.0;
  bc->v = bc->usrwrk[bc->idM + 0*bc->fieldOffset];
  bc->w = 0.0;
}

void scalarDirichletConditions(bcData *bc)
{
  bc->s = 0;
  if(bc->id==1){
    if(bc->scalarId == 0) bc->s = bc->usrwrk[bc->idM + 1*bc->fieldOffset];
    if(bc->scalarId == 1) bc->s = bc->usrwrk[bc->idM + 2*bc->fieldOffset];
    if(bc->scalarId == 2) bc->s = bc->usrwrk[bc->idM + 3*bc->fieldOffset];
  } 
}
(htgr/httf/lower_plenum_mixing/nekrs_case/httf.oudf)

The following code snippet in httf.udf controls the frequency at which userchk is called during the simulation, which is once every 1,000 steps in the current case.


  if ((tstep%1000)==0){
    nek::ocopyToNek(time, tstep);
    nek::userchk();
  }
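
For context, this logic normally lives in nekRS's per-timestep hook; a sketch of the assumed placement is shown below, matching the nekRS API of this generation.

// Assumed placement sketch: per-step hook in httf.udf
void UDF_ExecuteStep(nrs_t *nrs, dfloat time, int tstep)
{
  if ((tstep % 1000) == 0) {
    nek::ocopyToNek(time, tstep); // copy device solution to the legacy Nek arrays
    nek::userchk();               // invoke userchk from httf.usr
  }
}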

Time-averaged statistics are collected via the tavg plugin, using the following (non-contiguous) lines of httf.udf:


#include "plugins/tavg.hpp"
// ...
  tavg::setup(nrs);
// ...
  tavg::run(time);
// ...
  if (nrs->isOutputStep) {
    tavg::outfld();
  }

NekRS simulation results

A total simulation time of 12 non-dimensional units is achieved. Due to the nature of the unsteady RANS turbulence model, only a quasi-steady state can be reached; velocity fluctuations are observed as the helium flow enters the lower plenum and passes through the LP posts. The subsequent post-processing therefore focuses on the time-averaged solutions, computed over the simulation results spanning 6.0 to 12.0 time units. Figure 10 and Figure 11 show the time-averaged velocity and temperature distributions, respectively. With the instantaneous fluctuations removed, the time-averaged fields reveal notably symmetric patterns in both velocity and temperature. Figure 10 also shows the symmetric horizontal jets emerging through the openings between the lower plenum posts as the helium flows toward the hot duct, and flow recirculation is evident in the wake regions behind the posts.

Figure 10: Time averaged velocity distribution in the lower plenum at y = -0.1 (top view).

Figure 11: Time averaged temperature fields at y = -0.1 and z = 0.

In addition to the horizontal perspective, a vertical viewpoint offers clearer insight into the injection of helium from the inlet channels. As evidenced by Figure 11 and Figure 12, the helium jets are conspicuous within the lower plenum, characterized by notably higher velocity and temperature. The inflow jets are more prominent in regions far from the hot duct, owing to the weaker local horizontal flow, whereas jets near the hot duct are more strongly disturbed by the stronger local horizontal flow. Consequently, in Figure 12, the jets away from the hot duct penetrate deeper, while those close to the hot duct diffuse more quickly. Hot streaking is also confirmed within the lower plenum, particularly in areas far from the hot duct, where streams of elevated temperature directly impact the lower plenum posts and floor, resulting in uneven thermal stresses. In general, the lower section of the lower plenum encounters higher temperatures, notably at the junction between the lower plenum and the hot duct, as depicted in Figure 12. Figure 13 offers a clearer view of the temperature distribution on the lower plenum floor, enabling identification of the hotspot.

Figure 12: Jetting of helium flow from the inlet channels.

Figure 13: Time averaged temperature distribution on the lower plenum floor.

NekRS run script

To submit a nekRS simulation job on common leadership-class facilities, dedicated job submission scripts are provided in the nekRS repository for supercomputers such as Summit (OLCF) and Polaris (ALCF). Taking Polaris as an example, to submit the nekRS job, simply run:


./nrsqsub_polaris httf 50 06:00:00
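
Here httf is the case name, 50 is the number of nodes requested, and 06:00:00 is the wall-time limit passed to the scheduler.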

References

  1. Sydney J. Ball, M. Corradini, Stephen Eugene Fisher, R. Gauntt, G. Geffraye, Jess C. Gehin, Y. Hassan, David Lewis Moses, John-Paul Renier, R. Schultz, and T. Wei. Next Generation Nuclear Plant Phenomena Identification and Ranking Tables (PIRTs) Volume 2: Accident and Thermal Fluids Analysis PIRTs. Technical Report NUREG/CR-6944, Oak Ridge National Laboratory, 2008. doi:10.2172/1001274.
  2. Joshua K. Halsted. Validation Analysis for the OSU HTTF PG-28 Test Using RELAP5-3D and STAR-CCM+. Master's thesis, Oregon State University, 2022.
  3. K. Podila, Q. Chen, X. Huang, C. Li, Y. Rao, G. Waddington, and T. Jafri. Coupled simulations for prismatic gas-cooled reactor. Nuclear Engineering and Design, 395:111858, 2022. URL: https://www.sciencedirect.com/science/article/pii/S0029549322002126, doi:10.1016/j.nucengdes.2022.111858.