
Known Issues with SCALE 6.1

Since SCALE 6.1 was released on July 22, 2011, end users and the SCALE development team have identified a few issues that affect the performance of the code package. Several important issues are addressed with the SCALE 6.1.2 Update (https://www.ornl.gov/scale/scale/scale-613-update), which is recommended for all SCALE users. Both these issues and others that are not corrected in the update are detailed below. Where possible, known issues will be corrected in the next release of SCALE and possibly in a patch to the current release. Possible user corrections or workarounds are noted below.

KENO V.a Requires Cuboidal Outermost Region to Enable the Use of Albedo Boundary Conditions

In all versions of SCALE, the Monte Carlo code KENO V.a only implements the use of non-vacuum albedo boundary conditions (e.g., mirror, periodic, white) when the outermost geometry region of the model is a cuboidal region. This limitation is noted in the user documentation in the section on Albedo data, where it is stated that “Albedo boundary conditions are applied only to the outermost region of a problem. In KENO V.a this geometry region must be a rectangular parallelepiped.”

It was recently discovered that—beginning with the release of SCALE 6.1 in 2011—KENO V.a will accept non-compliant input that specifies albedo boundary conditions for non-cuboidal outer shapes and will then attempt to complete the calculation. For example, a user can specify a cylinder as the outermost region and add a mirror boundary condition on the top or bottom to effectively double the volume of the system considered. A user could also add a mirror boundary condition to both the top and the bottom of the cylinder to simulate a bounding case of an infinite system. While these scenarios are accepted and perform as expected in KENO-VI, KENO V.a requires the addition of a cuboidal region (typically an empty void region) to enable the use of these albedo boundary conditions.

For calculations using KENO V.a in SCALE 6.1–6.2.2 with non-compliant input in which albedo boundary conditions are applied without the required cuboidal outermost region, the calculation will proceed without warning, and an underestimation of k-eff often results. The magnitude of the underestimation can vary widely, depending on the system modeled and the desired boundary conditions, but it can exceed several percent in k-eff.

It is strongly recommended that users who rely on albedo boundary conditions in KENO V.a review their input models to ensure that the outermost region is a cube or cuboid, per the documentation requirement. Note that input models that were generated and applied with SCALE 6 and earlier versions, which enforced the check for the cuboidal outer boundary, will continue to produce the expected results with SCALE 6.1–6.2.2.
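
As an illustration of the required modeling approach, the fragment below sketches a z-axis cylinder that is reflected on its top face, with a void cuboid added as the outermost region so that the albedo boundary condition is honored. The dimensions, mixture numbers, and choice of reflected face are placeholders only, and the record formats should be checked against the KENO V.a geometry and albedo (bounds) data descriptions in the SCALE manual.

read geometry
' fuel cylinder: mixture 1, radius 10.0 cm, extending from z=0.0 to z=50.0 cm
cylinder  1 1  10.0  50.0  0.0
' required cuboidal outermost region (void), sharing the cylinder top and bottom planes
cuboid    0 1  10.0 -10.0  10.0 -10.0  50.0  0.0
end geometry
read bounds
' mirror albedo on the +z face only; add -zb=mirror as well to model an axially infinite system
+zb=mirror
end bounds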

In testing the extent of this issue by placing mirror boundary conditions on non-cuboidal outer shapes, it was found that cylinders oriented along the x-, y-, or z-axis most often produce non-conservative results without warning. The calculation will terminate prior to completion for cases in which a sphere is the outermost shape. The calculation will terminate with an error message for cases in which a hemicylinder or hemisphere is the outermost shape. The calculation performs as expected for cases in which a cube or cuboid is the outermost shape.

This issue applies to all SCALE 6.1–6.2.2 sequences that implement KENO V.a, including CSAS5, TSUNAMI-3D-K5, T5-DEPL, and STARBUCS. No other SCALE sequences are impacted by this issue. The error condition for the attempted use of albedo boundary conditions on non-cuboidal outer shapes in KENO V.a will be restored in the pending release of SCALE 6.2.3, thus preventing users from inadvertently entering non-compliant input.

ORIGEN Input Concentrations for Stream Blending Calculation

An error was reported in the stream blending option of ORIGEN in SCALE 6.1. Previous versions are not affected. As documented in the user manual, the NGO parameter indicates the type of subcase that follows the current calculation so that the appropriate data are retained. When the blended compositions from previous subcases are requested as the starting concentrations in the current subcase (KBLEND = -1), the value of NGO in the preceding subcase should request the blended concentrations using NGO = -1. However, this option results in some streams being omitted from the starting concentrations in the blended case. Using NGO = 1 in the prior subcase, instead of NGO = -1, avoids this issue, and the concentrations from all blended streams are retained. However, this use of NGO is inconsistent with the user manual.

It is important that users verify the blended stream compositions by printing the concentrations of the streams before and after blending. The blending example case shown in Section F7.6.5 of the SCALE Manual has been modified to function correctly. The use of NGO = -1 in this example results in the fuel compositions being omitted from the glass matrix composition. The revised input should be as follows:

=origen
1$$ 2 1t
pwr nuclear data library
2** 2r 1.0
3$$ 33 a4 44 a16 2 a33 18 e 2t
35$$ 0 4t
56$$ 10 10 a13 5 4 3 0 2 1 e 57** a3 1-14 e 5t
pwr - 3.2% enriched u
40 kg U
58** 10r1.2 60** 8i110 1100
61** f 1e-6
66$$ a5 2 a9 2 e
73$$ 922340 922350 922360 922380 80000
74** 11.48 1280. 0.44 38708. 5382.3
75$$ 4r2 4
6t
'
' decay the irradiation fuel composition and apply removal fractions
56$$ 0 10 a6 -1 a10 -10 a15 0 a17 4 a20 10 e 5t
60** 3 5 10 30 60 90 120 160 270 365
65$$ a4 1 a25 1 a46 1 e
61** f1-3
' Keep Se, Dy (99.8%); Rb, Sr, Te, Cs, Ba (77.8%); and U, Np, Pu, Am, Cm (1%)
79** f0 a34 0.998 a37 0.778 a38 0.778 a52 0.778 a55 0.778 a56 0.778
a66 0.998 a92 0.01 a93 0.01 a94 0.01 a95 0.01 a96 0.01 e
6t
'
' define glass matrix compositions
56$$ 0 1 a6 1 a10 0 a13 15 a15 2 a17 4 a20 1 <--- previously was a6 -1
57** 0 a3 1e-05 e t
100 kg glass
60** 1 61** f0.001
65$$ a4 1 1 a25 1 a46 1 e
' Li, B, O, F, Na, Mg, Al, Si, Cl, Ca, Mn, Fe, Ni, Zr, Pb
73$$ 30000 50000 80000 90000 11000 12000 13000 140000 170000
200000 250000 260000 280000 400000 820000
74** 2180 2110 46400 61 7650 490 2180 25400 49 1080
1830 8610 700 880 49
75$$ 15r 4
6t
'
54$$ a11 2 e
56$$ a2 1 a6 1 a10 0 a15 3 a17 2 a20 -1 e
57** 1 a3 1e-05 e t
final blended case
100 kg U
60** 2
61** f0.001
65$$ a4 1 a25 1 a46 1 e
' use default 18-group gamma energy structure
81$$ 2 0 26 1 e
82$$ 2 e
' 44 neutron energy group structure
84**
2.0000000e+07 8.1873000e+06 6.4340000e+06 4.8000000e+06
3.0000000e+06 2.4790000e+06 2.3540000e+06 1.8500000e+06 1.4000000e+06
9.0000000e+05 4.0000000e+05 1.0000000e+05 2.5000000e+04 1.7000000e+04
3.0000000e+03 5.5000000e+02 1.0000000e+02 3.0000000e+01 1.0000000e+01
8.1000000e+00 6.0000000e+00 4.7500000e+00 3.0000000e+00 1.7700000e+00
1.0000000e+00 6.2500000e-01 4.0000000e-01 3.7500000e-01 3.5000000e-01
3.2500000e-01 2.7500000e-01 2.5000000e-01 2.2500000e-01 2.0000000e-01
1.5000000e-01 1.0000000e-01 7.0000000e-02 5.0000000e-02 4.0000000e-02
3.0000000e-02 2.5300000e-02 1.0000000e-02 7.5000000e-03 3.0000000e-03
1.0000000e-05 e
6t
56$$ f0 t
end

Date Identified: 05/20/2014; Date Resolved: 08/13/2014

Problems with SCALE 6.1 Installer with Java 7

The IZPack installer distributed with SCALE 6.1 was created using Java 6. With Java 7 now being deployed by many IT departments to address issues with Java 6, the behavior of the SCALE 6.1 installer has changed.

The instructions provided in the Scale6.1_Readme file state:

To begin installation of SCALE 6.1 for Windows, double-click the scale-6.1-setup.jar file on the DVD. Linux and Mac systems will not allow the first installation disk to eject if the install program is running from the DVD. For Linux or Mac, copy the scale-6.1-setup.jar to your local disk and double-click the local version or issue the following command java -jar scale-6.1-setup.jar in the location where the installer .jar file was copied.

Revised instructions for use with Java 7 are:

To begin installation of SCALE 6.1 for Windows, Linux, and Mac systems, copy the scale-6.1-setup.jar to your local disk and issue the following command from the Command Prompt (DOS window) or Terminal in the location where the installer .jar file was copied:

java -jar scale-6.1-setup.jar -direct

The -direct option resolves issues associated with Java 7 but is not available when double-clicking the .jar file for installation.

Date Identified: 05/08/2013; Date Resolved: 09/13/2013

NEWT Mesh Generator Issue for Hexagonal Geometries

An issue in NEWT’s automatic mesh generation routines was recently identified for hexagonal-array geometries. In certain instances, NEWT will place an incorrect material in a computational cell of the problem. The issue can be identified by inspecting the “newtmatl.ps” file that is generated when the NEWT parameter option “drawit=yes” is enabled. When viewing this file, some computational cells may appear in a different color than adjacent cells, as shown in the figure below.

To circumvent this issue, it is recommended that users construct hexagonal geometries by placing fuel pins with hole records rather than an array. Using holes should eliminate the issue and also provides a computational speedup because fewer computational cells are generated than with an array.
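
As a rough illustration of the hole-based approach (not taken from the SCALE distribution), the geometry fragment below places a single pin-cell unit at explicit locations inside a hexagonal global unit instead of referencing a hexagonal array. The unit numbers, mixtures, dimensions, and pin coordinates are placeholders assuming a 1.40 cm pitch, and the record formats should be verified against the NEWT geometry documentation.

' unit 1: a single fuel pin cell (fuel, clad, moderator)
unit 1
cylinder 10 0.475
cylinder 20 0.545
hexprism 30 0.700
media 1 1 10
media 2 1 20 -10
media 3 1 30 -20
boundary 30
' global unit: pins placed individually with hole records instead of an array
global unit 10
hexprism 40 2.50
hole 1 origin x=0.0  y=0.0
hole 1 origin x=1.40 y=0.0
hole 1 origin x=0.70 y=1.2124
' ...one hole record per remaining pin location
media 3 1 40
boundary 40 20 20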

Date Identified: 09/13/2013

Critical Spectrum Calculations with NEWT

Corrected in SCALE 6.1.2

An issue was identified in NEWT that causes few-group homogenization calculations to fail in critical spectrum mode when using a user-specified critical buckling value or critical height. There is currently no workaround for this issue, but it will be corrected in the pending SCALE 6.1.2 patch.

Date Identified: 01/31/2013

ORIGEN Irradiation Calculations

Corrected in SCALE 6.1.2

An issue was identified in ORIGEN irradiation calculations that occasionally causes large masses of fission products to be produced from non-fissile materials. This error only affects SCALE 6.1 (and 6.1.1) calculations with time steps of 5–35 days. When the error is encountered and hydrogen exists in the system, large masses of fission products with A > 162 are produced (~1E8 grams) and are easily identified. When hydrogen does not exist in the system, the error may be more difficult to detect, as it only affects the transitions for a small set of fission products with A > 162. Users can work around this issue by modifying the time steps, and it will be corrected in the pending SCALE 6.1.2 patch.

Date Identified: 11/25/2012

Possible Inaccurate Implicit Sensitivities with TSUNAMI

Corrected in SCALE 6.1.2

An error has been identified that affects some TSUNAMI sensitivity analysis calculations, in which implicit sensitivities for some nuclides may not be accurately computed. The issue was found in BONAMIST, which is used to generate the implicit sensitivity data, when examining a MOX pin-cell benchmark. The error has been observed to impact the sensitivity for U-238 in this test case, but it has not been shown to impact critical experiments or realistic application systems. Users who performed the recommended direct perturbation calculations would observe the discrepancy in any affected previous calculations. This issue will be corrected in the SCALE 6.1.2 patch. A possible user workaround is to change the order of nuclides in the read compositions data block of the input.

For the MOX fuel pin test case, the following results were observed, with a significant difference in the U-238 implicit contribution leading to a significant change in the total sensitivity coefficient. As shown in the figure below, the differences occur in the resonance region for U-238, but Pu-239 is largely unaffected.

Sensitivity of keff to U-238 Total Cross Section for MOX Fuel Pin Test Case

             Explicit      Implicit      Total Sensitivity
SCALE 6.1.0  -3.9231E-02   -4.8164E-05   -3.9279E-02
SCALE 6.1.2  -3.9470E-02    1.4935E-02   -2.4536E-02

A more typical result is shown below for critical experiment MIX-COMP-THERM-001-001, where only small differences are observed.

Sensitivity of keff to U-238 Total Cross Section for MIX-COMP-THERM-001-001

             Explicit      Implicit      Total Sensitivity
SCALE 6.1.0  -1.0675E-01    1.8605E-02   -8.8143E-02
SCALE 6.1.2  -1.0695E-01    1.8571E-02   -8.8380E-02

Date Identified: 11/07/2012

Possible Incorrect Selection of an Axial Burnup Profile in STARBUCS Burnup Credit Criticality Calculations

STARBUCS has the option to use axial burnup profiles that depend on the assembly average burnup and provides three default axial burnup profiles (i.e., the NAX=-18 input option) applicable to assembly average burnups as follows: (1) burnup less than 18 GWd/MTU; (2) burnup greater than or equal to 18 GWd/MTU and less than 30 GWd/MTU; and (3) burnup greater than or equal to 30 GWd/MTU.

It has been observed that, for assembly average burnup values at which the axial burnup profile changes (i.e., 18 and 30 GWd/MTU in the case of the STARBUCS default burnup profiles), and depending on the number of libraries per cycle (NLIB) provided in the burnup history data or in the search parameter data, STARBUCS may select an incorrect burnup profile. For example, for the search parameter data block specification POWER=50.0 NLIB=7 BU=30, STARBUCS may select the axial burnup profile applicable to the burnup range [18, 30) GWd/MTU in place of the profile applicable to an assembly average burnup of 30 GWd/MTU.

This problem is caused by a rounding error, which will be corrected in a future release. The STARBUCS internal calculation NLIB*BU/NLIB does not always produce the required precision for the assembly average burnup values of 18 and 30 GWd/MTU to enable selection of the intended axial burnup profile. To avoid this error, the assembly average burnup values used as the boundaries between the burnup intervals that define different axial burnup profiles should be replaced with values slightly below and slightly above the boundary. For example, if the assembly average values for loading curve analyses are 18 and 30 GWd/MTU, the following input specification in the read search data block produces correct selection of the intended default axial burnup profiles:

POWER=50.0 NLIB=7
BU=17.999 18.001 29.999 30.001 end

Note that the STARBUCS output file provides information about the axial burnup profile selected for an average assembly burnup value, such as:

axial profile from database
assembly avg burnup 18.000 gwd/mtu, profile 2

Date Identified: 8/5/2012

Error in ENDF/B-VII.0 Decay Data

Corrected in SCALE 6.1.2

An error in the nuclear decay data for 234Th has been identified in ENDF/B-VII.0, which is used for the SCALE decay library. A review of the problem indicates that the error was introduced in the evaluated ENDF/B-VII.0 decay sub-library released by the National Nuclear Data Center (NNDC) in December 2006. The NNDC has confirmed the problem and recently released an updated decay library with ENDF/B-VII.1. Currently, ORNL is working closely with the NNDC to identify the nature and extent of the nuclear data evaluation problem and is preparing a patch for the ENDF/B-VII.0-based decay library distributed with SCALE 6.1. It is important to note that ORNL has performed extensive validation using the ENDF/B-VII.0-based decay library in SCALE 6.1 and has NOT identified any discrepancies for benchmark problems involving irradiated fuel isotopic compositions, decay heat, and source terms. The error has been observed for problems involving the decay of 238U. As an example, the gamma ray spectra calculated using SCALE 6.0 (ENDF/B-VI decay data) and SCALE 6.1 (ENDF/B-VII.0 decay data) are shown in the figure below. The spectrum obtained using ENDF/B-VII.0 data is significantly overestimated, caused primarily by incorrect production of 234Pa from 234Th decay. Additional information on the error in ENDF/B-VII.0 and decay evaluation improvements for ENDF/B-VII.1 is posted on the NNDC website (http://www.nndc.bnl.gov/exfor/endfb7.1_decay.jsp).

Date Identified: March 18, 2012

Minor Issues Identified with Fixed-Source Monte Carlo Capabilities

Corrected in SCALE 6.1.1

A few minor issues were identified with the SCALE fixed-source Monte Carlo code Monaco and an associated utility, mostly related to seldom-used optional features. These issues will be corrected in a pending patch for SCALE; the affected features should be used with caution until the patch is applied.

  • When specifying the special distribution pwrNeutronAxialProfileReverse or pwrGammaAxialProfileReverse for a spatial source distribution, the un-reversed profile is erroneously returned.

    Impact: This is a seldom-used feature that was implemented for compatibility with previous MORSE calculations. Problems run using one of the special axial distributions containing the word reverse are in fact not reversed, and erroneous results could be produced due to an inaccurate source specification.
     
  • The sum of the point detector group-wise results may be higher than the point detector energy-integrated (total) results. The reported total is correct. The group-wise values are high due to rejecting negative contributions (which happen a small fraction of the time due to the multi-group energy/angle physics).

    Impact: The energy-integrated results are correct. Only energy-dependent results are in error for some calculations. If the use of energy-dependent results is desired, users should verify that they sum to the total value.
     
  • If a source specification utilizes different Watt spectra distributions in multiple sources, the energies sampled for one source may include energies from the wrong distribution.

    Impact: Only models that implement more than one Watt spectrum are impacted. Since Watt spectra from different isotopes are quite similar, the impact of this discrepancy may not be noticeable. For Watt spectra that are very different, results may differ. 
  • The utility program mim2wwinp does not format MCNP *.wwinp files correctly for photon-only problems. MCNP interprets a *.wwinp with only one particle listed as neutrons, even in a "mode p" problem. The *.wwinp file produced by SCALE needs to specifically identify that there are 0 neutron groups for photon-only problems.

    Impact: Subsequent MCNP calculations that use the SCALE-generated *.wwinp files for photon-only problems will not run.

Date Identified: 3/22/2012

Discrepancy Observed with Small Number Densities with 44-Group ENDF/B-V Data and CENTRM

Corrected in SCALE 6.1.1

An issue has been identified that can lead to non-conservative keff values when using the 44-group ENDF/B-V data with CENTRM for high-leakage models with trace-element number densities below ~1E-9 atoms/barn-cm when running SCALE 5.1 – SCALE 6.1. The effect on the 238-group ENDF/B-V, -VI, and -VII libraries is minimal. There is no effect on continuous-energy Monte Carlo calculations.

In the dozens of test cases examined thus far, the discrepancy is only realized in cases that meet ALL of the following conditions:

  1. The number density of at least one nuclide has a small fractional concentration of 1E-8 or less relative to the total mixture number density. Typically this corresponds to an absolute concentration less than ~1E-9 to 1E-10 atoms/barn-cm, but greater than zero.
  2. The SCALE 44-group ENDF/B-V library or a user-generated broad-group library with few groups in the U-238 resolved resonance range (1 eV to 4 keV) is used.
  3. CENTRM is used for resonance self-shielding. This is the default behavior in SCALE 6.1, but NITAWL processing is the default behavior for SCALE 5.1 and 6.0 for the ENDF/B-V cross-section data, so the user must explicitly request CENTRM processing to observe the discrepancy with SCALE 5.1 or 6.0.
  4. The system is sensitive to the high-energy portion of the resolved range, which most commonly occurs for high-leakage systems. Low-leakage criticality and depletion models examined showed only a minimal impact.
  5. Calculations are performed with SCALE 5.1, 6.0 or 6.1.

Impact on calculations: 

  1. Continuous-energy KENO calculations do not use CENTRM and are not affected.
  2. The impact for all 238-group calculations examined thus far is small, on the order of a few pcm.
  3. Eigenvalues and isotopic concentrations computed for the 44-group ENDF/B-V depletion cases examined are not significantly affected, as these are low-leakage systems [reflected lattice geometries]. For most cases that meet all of the above criteria, including burned fuel criticality safety calculations that include small concentrations of fission products, the discrepancy introduces an error on the order of 100 pcm.
  4. In a contrived case that artificially introduces a trace material into a plutonium nitrate system, a discrepancy of ~3% delta-k was observed. This is the maximum discrepancy observed for the real and hypothetical systems examined thus far, but it should not be considered a bounding value.

Corrective Action

  1. The SCALE Team is developing a patch that corrects this issue.
  2. Users should examine calculations to determine if they meet the criteria provided above.
  3. The eigenvalue for suspect systems should be examined using a different library, such as the 238-group ENDF/B-V library, to determine if a particular system is impacted.
  4. Users should install the SCALE 6.1 patch when it is available and repeat any suspect calculations.

Date Identified: 1/9/12

Acknowledgement: This issue was first identified by SCALE user Dale Lancaster

Optional Output Edit in STARBUCS

In STARBUCS burnup credit loading curve search calculations, an optional input prt=short may be used within the READ SEARCH input block to restrict the final output to contain only relevant information for a burnup loading curve calculation. In SCALE 6.1, this optional input causes the calculation to crash.

Users should only use the default parameter prt=long, which retains all SCALE output information for the last step of the iterative fuel enrichment search process. As prt=long is the default option in STARBUCS, there is no need for this input option to be specified in a STARBUCS input file.

Date Identified: 2/10/2011

MacOS System Requirements

The SCALE 6.1 Readme states that the system will operate on Mac OSX version 10.5 or newer; however, Mac OSX 10.6 or newer is actually required to properly execute SCALE 6.1.

The symptoms are such that the SCALE runtime will execute and a job banner will be produced, but the executable modules will fail.

If messages are turned on (-m flag on the batch6.1 command) the following message will be reported:

'dyld: unknown required load command 0x80000022'

The solution is to upgrade to Mac OSX 10.6 or newer.

Date Identified: 2/23/2012

Windows ORIGEN and OPUS Sample Problems

Corrected in SCALE 6.1.1

An issue has been identified when running the ORIGEN and OPUS sample problems on Windows.

Specifically, the sample problems' shell script uses an invalid path when attempting to copy needed resources into the working directory. Without these needed resources, both sample problems fail to produce the expected results.

The fix is simple. For the origen.input and opus.input files, located in
scale6.1\smplprbs\Windows, replace

=shell
copy z:\scale_staging\data\arplibs\w17_e40.arplib ft33f001
end

with

=shell
copy %DATA%\arplibs\w17_e40.arplib ft33f001
end

Date Identified: 8/30/2011

Unable to access jarfile ... ScaleDiff.jar

Corrected in SCALE 6.1.1

An issue has been identified where, when running the sample problems, the ScaleDiff.jar file is not found, producing an 'Unable to access jarfile ... ScaleDiff.jar' message.

The issue is due to not having the source code installed.

The ScaleDiff-Samples.xml zip file contains the following:
• samples.xml
• ScaleDiff.jar

Do the following to update your Scale6.1 installation:

1. Extract the contents into your Scale6.1 directory. You will be prompted to ‘copy and replace’ your samples.xml file.

2. Move the Scale6.1\ScaleDiff.jar file into your Scale6.1\cmds directory. You will be prompted to ‘copy and replace’ your ScaleDiff.jar file.

The updated Scale6.1\samples.xml and Scale6.1\cmds\ScaleDiff.jar files can then be used to verify SCALE as detailed in the readme file.

Updated: 11/15/2011

table_of_content_*.txt: no such file or directory

Corrected in SCALE 6.1.1

When running the sample problems, an error similar to the following may occur:

C:\Scale6.1\Windows_amd64\bin\grep: table_of_content_*.txt: No such file or directory

This is due to a typo in the scale\samples.xml file.

'table_of_content_*' should be 'table_of_contents_*'. Notice the extra 's'.

Edit your Scale\samples.xml file, find 'table_of_content_*' and replace with 'table_of_contents_*'.

Date Identified: 10/6/2011

ORIGEN 200-group cross section library

Corrected in SCALE 6.1.1

A problem was identified in the energy-group boundaries of the ORIGEN 200-neutron-group cross-section library, origen.rev02.jeff200g. The boundaries were generated with constant lethargy spacing instead of matching the boundaries of the SCALE 200-group transport library. Use of this library is currently not recommended, as it will produce erroneous results. An update to the library will be available soon.

Date Identified: 10/24/2011

ORIGEN natural isotopic abundances

Corrected in SCALE 6.1.1

The natural isotopic abundances for several elements in the ORIGEN library are incorrect. The abundances have been corrected, and an updated library will be available soon. The use of natural isotopic abundances (NEX1=4) for input element concentrations entered in gram units may result in incorrect isotopic concentrations for Mg, Ge, Kr, Sr, and Te. If atom units (gram atoms) are used, incorrect isotopic concentrations may occur for F, Na, Mg, Al, P, Sc, Mn, Co, Ge, As, Kr, Sr, Y, Nb, Rh, Te, I, Cs, Pr, Tb, Ho, Tm, and Au.

Date Identified: 10/24/2011

Problem with thermal energy cutoff in continuous-energy KENO calculations

Internal testing of continuous-energy calculations with KENO has revealed a considerable non-conservative change in keff, on the order of 20%, for cases involving BeO. Users who properly validate continuous-energy KENO calculations for these systems would notice a strong systematic bias for bound BeO cases prior to use in safety calculations. Even so, users should not use be-beo in continuous-energy KENO calculations.

Note that multigroup calculations in KENO are not affected by this issue, and updates to the continuous-energy data for bound BeO will be available soon.

Further explanation:

SCALE continuous-energy neutron cross-section libraries are based on ENDF/B-VI Release 8 and ENDF/B-VII Release 0. While most of the neutron cross sections are for nuclides that are assumed to be free (not bound in a molecule), some nuclide cross sections are for bound nuclei; these are commonly referred to as S(α,β) cross sections or thermal kernels. Hydrogen bound in water and Be in BeO are examples of nuclei that have bound thermal cross sections. The SCALE continuous-energy neutron cross-section libraries were generated by processing the ENDF thermal kernel data for incident neutron energies of 5.05 eV or below. To provide flexibility in analysis without the need to regenerate the cross-section library, KENO was designed to implement a user-selectable value for the thermal cutoff for S(α,β) treatment, with a default neutron cutoff energy of 3 eV. Above this cutoff, the effects of thermal motion of the molecule are assumed to be negligible.

As a result of recent internal testing, it was discovered that KENO does not apply the thermal cutoff value to the S(α,β) treatment. If an evaluation does not have data up to 5.05 eV, the short collision time method is used to extend the incoherent inelastic scattering data up to 5.05 eV, while coherent elastic scattering is generated only for the energy range specified in the ENDF file. For Be in BeO, the coherent elastic and incoherent inelastic scattering cross sections both extend beyond 3 eV but do not have the same upper cutoff energy. Because KENO ignores the default thermal cutoff value of 3 eV, it attempts to sample from both coherent elastic and incoherent inelastic scattering and obtains the wrong cross section between the upper cutoff energies of these reactions.

Date Identified: 10/25/2011