Appendix L - Quality Assurance
Within the SSHAC hazard assessment framework, a traditional verification and validation (V&V) program is limited to specific numerical tools, such as the software used to perform the PSHA calculations. A quality or “cross-check” protocol may also be used to ensure the accuracy of compiled tables, data sets, and other project products. However, it is not possible to apply a V&V program to the SSHAC process itself. Similarly, it does not make sense to bar the use of data merely because a formal quality assurance program for field data collected outside the project cannot be verified (e.g., for a USGS or university data set). Rejecting data sets in such cases would diminish, rather than enhance, the process, because a key part of a SSHAC Level 3 or 4 process is the evaluation of data by the evaluator experts. The evaluator experts are therefore able to make an informed assessment of the quality of various data sets, whether or not those data were gathered within a formal quality program. This does not mean, however, that nonqualified data used in a SSHAC process can be considered qualified after their use in the process.
Beyond the assurance of quality arising from that external scientific review process, a fundamental component of the SSHAC process is the evaluation of the data, models, and methods by the evaluator experts as a means of establishing the quality, relevance, technical basis, and uncertainties. Further, in the integration stage of the SSHAC assessment process, the TI team or evaluator experts build models and apply weights to elements of the model based on due consideration of the technical support for various models and methods proposed by the technical community. Therefore, it is the collective, informed judgment of the TI Team (via the process of data evaluation and model integration) and the concurrence of the PPRP (via the process of participatory peer review), as well as adherence to the national standards described above, that ultimately lead to the assurance of quality in the process followed and in the products resulting from the SSHAC hazard assessment framework.
The TI Team, Project Manager, and Sponsors determined the approach for quality control for the CEUS-SSC Project in 2008, taking into account the SSHAC assessment process and national standards described above. The approach was documented in the CEUS-SSC Project Plan, dated June 2008. The technical assessments made as part of the CEUS-SSC entailed the use of a wide range of databases, including those that have been subject to peer review in the professional literature, those that have been gathered for scholarly research, and those that have been developed for site-specific commercial application. In creating the CEUS-SSC model, the TI Team had extensive interactions with the technical community about identifying data, evaluating alternative hypotheses, and collecting feedback regarding all assessments. These interactions helped ensure a high level of review for the TI Team’s technical assessments.
A participatory peer review process was used for both the technical and process elements of the project. This process provides high confidence that the project assessments and results will be accepted by the technical community. The level of assurance exceeds that associated with publication in a peer-reviewed technical journal. In addition to the peer review process that is afforded by the SSHAC Level 3 process, certain other activities were conducted as best business practices. These activities are described below.
A hazard input document (HID) was developed that documents and summarizes the key elements of the SSC model, including logic trees, parameter distributions, and derived Mmax and recurrence parameters. The HID specifies the exact inputs provided by the SSC model to the hazard calculations and thus provides a clear record of how the SSC model is translated into hazard calculations. As discussed in Task 2 of the CEUS-SSC Project Plan, “Develop a Database,” the management and documentation of the data were done in accordance with a data management procedure developed specifically for this project. As part of Task 7 of the Project Plan, “Construct a Preliminary SSC Model,” new computer codes were developed for estimating seismicity rates and b-values. These computer codes were documented and are available as part of project documentation on the CEUS-SSC Project website. All hazard calculations were conducted using software that had been previously qualified in accordance with 10 CFR 50, Appendix B (Quality Assurance Criteria for Nuclear Power Plants and Fuel Reprocessing Plants) requirements. Also, an internal documentation package was prepared to archive the hazard calculations. The results for seven test sites were documented as example calculations in Chapter 8 of the main project report.
L.4.1 External Review of Earthquake Catalog
The initial earthquake catalog assembled for the project was submitted for external review by seismologists familiar with compiling and analyzing earthquake data in the CEUS. The reviewers are listed as follows:
These reviewers provided recommendations for additional sources of data and for treatment of the data from various catalog sources; they also provided specific recommendations on individual earthquakes. These recommendations were considered in developing the final project catalog.
A series of simulation tests were performed to verify a number of processes used in the development of the earthquake catalog, including the following:
L.4.3 Checks for Consistency in Magnitude Conversion from Intensity
Because a large part of the earthquake catalog consists of pre-instrumental earthquakes, relationships are needed to estimate earthquake magnitude from the reported level of shaking intensity. In past studies, the body-wave magnitude scale has been used, and relationships have been developed to estimate body-wave magnitude from maximum shaking intensity in order to use the pre-instrumental earthquake data. However, the CEUS-SSC Project earthquake catalog uses the moment magnitude scale as the uniform measure of earthquake size. As part of the catalog development, assessments were performed to ensure consistency between conversions from intensity to moment magnitude and conversions from intensity to body-wave magnitude, and consistency between body-wave magnitude and moment magnitude.
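The kind of consistency check described above can be sketched as follows. The linear coefficients below are hypothetical placeholders, not the project's published relations; the sketch only illustrates the idea that the direct intensity-to-moment-magnitude conversion should agree with the composed path through body-wave magnitude.

```python
# Consistency check for intensity-to-magnitude conversions (illustrative).
# All coefficients are hypothetical, NOT the CEUS-SSC Project relations.

def mb_from_intensity(i0):
    """Hypothetical linear body-wave magnitude vs. maximum intensity I0."""
    return 0.60 * i0 + 1.00

def m_from_mb(mb):
    """Hypothetical body-wave to moment magnitude conversion."""
    return 1.05 * mb - 0.30

def m_from_intensity(i0):
    """Hypothetical direct intensity to moment magnitude conversion."""
    return 0.63 * i0 + 0.75

def max_inconsistency(intensities):
    """Largest difference between the direct and the composed conversion."""
    return max(abs(m_from_intensity(i0) - m_from_mb(mb_from_intensity(i0)))
               for i0 in intensities)

if __name__ == "__main__":
    i0_values = [3, 4, 5, 6, 7, 8, 9]  # MMI range of pre-instrumental events
    print(f"max |direct - composed| = {max_inconsistency(i0_values):.3f}")
```

In this toy example the coefficients were chosen so that the two paths coincide; in practice the check would flag any intensity range where the direct and composed conversions diverge materially.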
Two principal computer programs were used in the development of the earthquake catalog: EQCLUST, which was used to identify dependent earthquakes, and EQPARAM, which was used to estimate earthquake catalog completeness. Both programs were checked as part of the verification process of the EPRI-SOG set of computer programs (EPRI, 1988).
The recurrence parameters (i.e., rate and b-value) were calculated using a penalized-likelihood formulation that allows for spatial variation in the rate and b-value and also quantifies the uncertainty in these parameters. The methodology divides the source zones into cells of one-quarter or one-half degree and then calculates the rate and b-value in each cell using the likelihood function of the data in that cell, together with penalty functions that tend to smooth the cell-to-cell variation in the rate or the b-value. In addition, this procedure characterizes epistemic uncertainty in the recurrence parameters by generating eight alternative maps of the recurrence parameters. The following is a partial list of the tests and confirmation steps that were performed to ensure that the methodology was adequate for the desired purpose and that it was properly implemented in the software.
The recurrence comparisons presented in Sections 6.4.2 and 7.5.2 compare the cumulative earthquake counts over an entire source zone, as predicted by the methodology, to the corresponding counts as observed in the catalog. These comparisons provide a consistency check for the methodology and its implementation in the software because the penalized-likelihood formulation operates at the level of each individual cell without explicitly considering the total rates. These comparisons indicate a good match to the catalog, after taking into account the error bars introduced by the size of the catalog.
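A minimal sketch of such a zone-level consistency check, using hypothetical cell rates rather than project values: the summed cell rates predict a total event count for the zone, which is then compared with the observed count within Poisson error bars.

```python
import math

# Illustrative zone-level consistency check (hypothetical numbers, not the
# project code). The cell-level penalized-likelihood fit never sees the
# zone-wide total directly, so the summed cell rates should reproduce the
# observed catalog count within Poisson error bars.

def zone_count_check(cell_rates, duration_yr, observed_count, n_sigma=2.0):
    """Return (predicted mean count, True if observed is within n_sigma)."""
    mu = sum(cell_rates) * duration_yr   # predicted number of events
    sigma = math.sqrt(mu)                # Poisson standard deviation
    return mu, abs(observed_count - mu) <= n_sigma * sigma

if __name__ == "__main__":
    rates = [0.002] * 100                # hypothetical per-cell rates (events/yr)
    mu, ok = zone_count_check(rates, duration_yr=300.0, observed_count=55)
    print(mu, ok)
```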
Similar comparisons were performed for interesting portions of certain source zones, as presented in Section 184.108.40.206. These comparisons also indicate a good match to the catalog at these smaller spatial scales, after taking into account the error bars introduced by the size of the catalog.
Maps were generated for the mean rates and b-values and associated uncertainties, as well as for the eight realizations of the recurrence parameters for each zone. These maps are presented in Sections 6.4.2 and 7.5.2 and Appendix J. All maps were examined to verify that the recurrence parameters were reasonable.
The purpose of this test was to confirm that the methodology does not interpret chance variations in activity as spatial variations in rate and/or b-value. A synthetic catalog was generated, under the assumption of a spatially homogeneous rate, a b-value of 1, Poisson occurrences, and independent earthquake locations. The rate and duration were selected so that the mean number of earthquakes per cell was comparable to that of the Midcontinent source zone (approximately one event for every three cells). Calculations were performed for one rectangular source zone of dimensions 5 degrees by 5 degrees (containing 100 half-degree cells), using objective smoothing and unit weights for all magnitude bins. The methodology produced homogeneous rates and b-values, and these values were consistent with those used to generate the synthetic catalog. This confirms that the spatial variations detected by the program are not due to chance variations resulting from the limited duration of the catalog.
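The logic of this homogeneity test can be illustrated with a simple stand-in, assuming a Gutenberg-Richter magnitude distribution and the classical Aki maximum-likelihood b-value estimator; this is not the project's penalized-likelihood code, only a sketch of the recover-the-input idea.

```python
import math
import random

# Sketch of the homogeneity test: draw a synthetic catalog with a spatially
# uniform rate and a true b-value of 1, then confirm that the estimated
# b-value recovers the input. Stand-in for the project software.

def synthetic_catalog(n_events, m_min, b_true, rng):
    """Gutenberg-Richter magnitudes: exponentially distributed above m_min."""
    beta = b_true * math.log(10.0)
    return [m_min - math.log(1.0 - rng.random()) / beta
            for _ in range(n_events)]

def b_value_mle(mags, m_min):
    """Aki maximum-likelihood b-value estimator for continuous magnitudes."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

if __name__ == "__main__":
    rng = random.Random(2008)            # fixed seed for reproducibility
    mags = synthetic_catalog(5000, m_min=3.0, b_true=1.0, rng=rng)
    print(f"estimated b = {b_value_mle(mags, 3.0):.3f}")  # close to 1.0
```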
The purpose of this test was to verify that the eight alternative maps provide an adequate representation of the epistemic uncertainty in recurrence parameters. Two separate sets of eight alternative maps were generated for the Northern Appalachian source zone using different values of the seed for the Latin hypercube randomization algorithm, and then hazard calculations were performed. The two sets of hazard calculations produced consistent results, demonstrating that the eight alternative maps provide an adequate representation of epistemic uncertainty in hazard.
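The seed-sensitivity idea can be illustrated with a toy Latin hypercube sampler (not the project software): samples drawn with two different seeds are stratified identically, so their column means can differ only within the stratum width, giving statistically consistent results.

```python
import random

# Minimal Latin hypercube sampler illustrating the seed-sensitivity test:
# two independent seeds yield stratified samples whose column means are
# forced to agree within one stratum width. Toy stand-in only.

def latin_hypercube(n, dim, seed):
    """n stratified samples in [0, 1)^dim, one per stratum per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)                      # random stratum order
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))                      # n points of length dim

def column_means(points):
    n, dim = len(points), len(points[0])
    return [sum(p[d] for p in points) / n for d in range(dim)]

if __name__ == "__main__":
    a = column_means(latin_hypercube(8, 2, seed=1))
    b = column_means(latin_hypercube(8, 2, seed=2))
    # For n = 8, stratification bounds each column mean to [0.4375, 0.5625],
    # so no two seeds can differ by more than 0.125 in any column mean.
    print(max(abs(x - y) for x, y in zip(a, b)))
```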
Modifications were required to the software that had been previously qualified in accordance with Appendix B of 10 CFR 50; these modifications were necessary to accommodate new elements in the source characterization. The modifications were checked by performing a number of tests that exercise the modified features of the software. This was done by comparing the results obtained with the new feature to either the equivalent results obtained with the qualified features of the software or the results from independent calculations. A brief summary of these tests is provided below.
Calculations were performed for a source zone with variable b and for an equivalent problem where each cell is modeled as a separate source zone with constant rate and b. This test showed consistent results.
Calculations were performed for a number of options regarding dip, orientation, and behavior at boundaries. These tests showed consistent results.
For the sake of efficiency, a new approach was developed to introduce “pinch points” in the portion of the logic tree associated with sources that make small contributions to total hazard. This approach was tested by comparing fractile hazard curves computed with and without the pinch points; the comparisons showed consistent results.
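A toy illustration of the pinch-point comparison, with hypothetical hazard values and branch weights: branches of a minor source are collapsed to their weighted mean, and fractiles of the total hazard are compared against the full branch-by-branch tree.

```python
from itertools import product

# Toy pinch-point check (hypothetical numbers, not project results):
# collapsing the branches of a small-contribution source to their weighted
# mean should leave the fractiles of total hazard essentially unchanged.

def fractile(values_weights, q):
    """Weighted fractile of (value, weight) pairs (weights sum to 1)."""
    pairs = sorted(values_weights)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= q:
            return v
    return pairs[-1][0]

big = [(1.0e-4, 0.3), (2.0e-4, 0.4), (4.0e-4, 0.3)]   # dominant source
small = [(1.0e-6, 0.5), (3.0e-6, 0.5)]                # minor contributor

# Full tree: every combination of branches from both sources.
full = [(hb + hs, wb * ws) for (hb, wb), (hs, ws) in product(big, small)]

# Pinched tree: the minor source collapsed to its weighted-mean hazard.
small_mean = sum(h * w for h, w in small)
pinched = [(hb + small_mean, wb) for hb, wb in big]

if __name__ == "__main__":
    for q in (0.16, 0.50, 0.84):
        print(q, fractile(full, q), fractile(pinched, q))
```

Because the minor source contributes about 1% of the total, the full and pinched fractiles agree closely, which is the behavior the project test confirmed for the real model.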
CEUS-SSC Project Website