In compliance and litigation settings, we are routinely confronted by an opposing party alleging some form of inappropriate conduct. Disputes often arise from overpayment allegations, audits of insurance claims, or purported false claims, and any of these allegations may invoke sampling and extrapolation as their basis. In those cases, an aptitude for evaluating and disputing statistical sampling is valuable. This is the second of a two-part series addressing techniques for scrutinizing sampling analyses.
Click Here to see Part One of our series on Disputing Statistical Sampling and Extrapolation
Sampling requires a rigorous degree of technical expertise, thoughtful planning and attentive execution to ensure the process is conducted objectively and yields sound, reliable conclusions. It is equally important to verify that opposing parties meet those requirements; accordingly, this series highlights common areas for evaluating and scrutinizing an opposing party's sampling and extrapolation analysis. Part two of this series continues to highlight areas for scrutiny.
Representativeness of the Sample
Although random selection is expected to produce a representative sample, representativeness is not guaranteed. Another common argument, therefore, is that the sample is not representative of the population in question. This argument often arises when an analyst audits only certain unique subsets of claims while attempting to extrapolate the conclusions across the broader population. Recall that conclusions about a sample may only be extrapolated across the population from which it was randomly drawn. Examples include disproportionate samples of high-dollar claims or a focus on one particular facility or provider (i.e., a potentially "rogue" provider).
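One simple way to probe representativeness is to compare each subgroup's share of the sample against its share of the population. The sketch below (all claim data hypothetical, and the `representation_gap` helper is our own illustration, not any party's method) flags a sample that over-weights high-dollar claims:

```python
from collections import Counter

def representation_gap(population_labels, sample_labels):
    """Difference between each subgroup's share of the sample and its
    share of the population; large gaps suggest the sample may not be
    representative of the population it is extrapolated across."""
    pop = Counter(population_labels)
    samp = Counter(sample_labels)
    n_pop, n_samp = len(population_labels), len(sample_labels)
    return {g: samp.get(g, 0) / n_samp - pop[g] / n_pop for g in pop}

# Hypothetical claims frame: 10% of the population is high-dollar,
# but half of the audited sample is.
population = ["high"] * 100 + ["low"] * 900
sample = ["high"] * 25 + ["low"] * 25
gaps = representation_gap(population, sample)
# gaps["high"] is +0.40: high-dollar claims are heavily over-represented,
# so extrapolating this sample's error rate to all claims is suspect.
```

A formal test (e.g., a chi-square goodness-of-fit test) would put a p-value on such a gap, but even this simple share comparison is often enough to frame the argument.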
Precision and Confidence Interval
Scrutinizing the actual level of precision achieved in an opposing analysis is also common. Recall that the usefulness of an estimate based on sampling and extrapolation depends on both the precision and the confidence level of the conclusion. Naturally, a more precise conclusion yields a smaller range of expected values at a stated level of confidence. As such, many argue that the actual precision achieved does not meet reasonable thresholds for valid and meaningful results. Read more about Extrapolating Sample Results with Forensus’ Framework.
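To make the precision argument concrete, one can compute the confidence interval's half-width relative to the point estimate. The sketch below uses a normal approximation and hypothetical per-claim overpayment amounts; the `mean_ci` helper and the 25% threshold mentioned in the comment are illustrative assumptions, not a citation of any particular guideline:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Two-sided confidence interval for the sample mean under a normal
    approximation, plus relative precision (half-width as a fraction of
    the point estimate). z=1.96 corresponds to ~95% confidence."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    half_width = z * se
    return mean, (mean - half_width, mean + half_width), half_width / mean

# Hypothetical per-claim overpayments (dollars) from an audited sample.
overpayments = [0, 0, 120, 45, 300, 0, 80, 210, 0, 95, 150, 60]
point, (low, high), rel_precision = mean_ci(overpayments)
# If rel_precision exceeds a stated threshold (say, 0.25, i.e. ±25%),
# the achieved precision may be argued to be inadequate.
```

In practice a t-multiplier and a finite population correction would tighten this up for small samples, but the relative-precision calculation is the core of the argument.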
Beyond efforts to invalidate an analysis outright for insufficient precision, an analyst may instead seek to show that an opposing analysis did not achieve sufficient precision to rely upon the point estimate, and thereby argue that the lower bound of the confidence interval is the more appropriate measure. A variety of industry guidelines stipulate a certain degree of confidence and precision in order to utilize the point estimate. Identifying the relevant guideline and illustrating a material departure from its provisions is an alternative method for disputing the findings of an opposing analysis.
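The lower-bound argument can be quantified directly. The sketch below (hypothetical data; the one-sided z of 1.645 is an illustrative choice, as guidelines vary in the confidence level they prescribe) shows how much an extrapolated demand shrinks when the lower bound replaces the point estimate:

```python
import math
import statistics

def extrapolate_with_lower_bound(sample, population_size, z=1.645):
    """Extrapolated point estimate of the total, and a one-sided lower
    confidence bound under a normal approximation (z=1.645 ~ one-sided 95%)."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    point_estimate = population_size * mean
    lower_bound = population_size * (mean - z * se)
    return point_estimate, lower_bound

# Hypothetical: 2,000 claims in the population, 12 audited overpayments.
sample = [0, 0, 120, 45, 300, 0, 80, 210, 0, 95, 150, 60]
point, lower = extrapolate_with_lower_bound(sample, 2_000)
# lower sits materially below point; arguing the lower bound is the
# appropriate measure reduces the extrapolated demand accordingly.
```

The gap between `point` and `lower` widens as the sample's variability grows, which is precisely why poorly precise analyses are vulnerable to this argument.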
Degree of Non-Sampling Error
Beyond the technical aspects of a particular analysis, which generally influence its sampling error (i.e., precision), it is also worthwhile to consider the practical aspects of the analysis’ execution and the potential for systematic errors that could lead to material non-sampling error.
Non-sampling error can occur due to bias, clerical errors or other data quality issues. It can significantly distort an analysis’ findings, yet it is not reflected in the analysis’ stated confidence and precision; as a result, there is no objective way of knowing how significantly non-sampling error affects a particular analysis. The integrity of a sampling’s execution should therefore be scrutinized to ensure non-sampling error was properly limited during the course of the analysis. Non-sampling error can unduly influence an analysis in a variety of areas, but it is commonly seen when the following are not properly executed:
- Data Quality
Data quality is a pervasive issue in applied statistics, and lapses in data integrity can be fatal to statistical arguments. For instance, the units to be sampled within a population must not overlap. A properly defined population should therefore be reviewed before sampling to confirm that overlapping sampling units, including duplicates, do not exist.
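A duplicate check on the sampling frame is a quick, objective test of this requirement. The sketch below uses hypothetical claim IDs; `find_duplicates` is our own illustrative helper:

```python
def find_duplicates(claim_ids):
    """Return the sampling-unit IDs that appear more than once in the frame."""
    seen, dups = set(), set()
    for cid in claim_ids:
        (dups if cid in seen else seen).add(cid)
    return dups

# Hypothetical sampling frame with two duplicated claim IDs.
frame = ["C001", "C002", "C003", "C002", "C004", "C001"]
duplicates = find_duplicates(frame)
# duplicates == {"C001", "C002"}: these must be resolved before any
# units are drawn, or the frame over-weights the duplicated claims.
```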
- Defining the Population
A population should be properly defined before commencing the remaining steps of statistical sampling (i.e., sample selection, extrapolation, etc.). Too often, we see a population revised after an analysis is completed in an effort to cure errors reflected in the conclusions. Imagine building a home without a properly constructed foundation: simply ensuring the roof is flat upon completion, while convenient, ignores the important role the foundation plays in the rest of the home. A properly defined population in sampling is analogous to a properly laid foundation in home building.
- Measurement Error
Ensure measurement is conducted by an objective party, in accordance with an established written standard (i.e., acceptance criteria, quantification method, etc.), without significant bias, and documented in such a way that decisions can be effectively evaluated. Failure to meet any of these objectives can result in significant measurement error that may materially impact or otherwise invalidate an analysis. Even small fluctuations in a sample’s measured characteristics can have a dramatic impact on a study’s conclusions due to the magnifying effect of extrapolation.
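The magnifying effect is simple arithmetic: each dollar of measurement error on a sampled claim is scaled by the ratio of population size to sample size. A minimal sketch, with hypothetical figures:

```python
def extrapolated_total(sample_overpayments, population_size):
    """Extrapolate the sample's mean overpayment across the population."""
    return population_size * sum(sample_overpayments) / len(sample_overpayments)

# Hypothetical: a 30-claim sample drawn from a 10,000-claim population.
sample = [50.0] * 30
base = extrapolated_total(sample, 10_000)  # 500_000.0

# Misjudging a single sampled claim by $60 shifts the extrapolated
# total by $60 * (10,000 / 30) = $20,000.
shifted = extrapolated_total([110.0] + [50.0] * 29, 10_000)
# shifted - base == 20_000.0
```

This is why seemingly minor disputes over individual sample determinations can be worth litigating: each one is leveraged across the entire population.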
Conclusion: Disputing Statistical Sampling
Sampling requires a rigorous degree of technical expertise, thoughtful planning and attentive execution to ensure the process is conducted objectively and yields sound, reliable conclusions. Opposing parties should be held to those same requirements, and an aptitude for evaluating their work is valuable; accordingly, this series has highlighted common areas for evaluating and scrutinizing opposing sampling and extrapolation analysis.
©Forensus Group, LLC | 2017