While many providers repay millions of dollars to CMS recovery audit contractors (“RACs”), a growing number are employing successful appeal strategies that allow them to overcome the government’s demands and retain their revenue.

Overview of Recovery Audit Contractors (“RACs”)

The Centers for Medicare & Medicaid Services (“CMS”) implemented a national recovery audit program in 2010 in an effort to identify and collect improper Medicare payments while limiting fraud, waste and abuse in the U.S. healthcare system.[i] Since that time, healthcare providers have experienced a barrage of added scrutiny through both pre- and post-payment audits, resulting in billions of dollars in repayment demands imposed by the government.

A key component of the government’s efforts to identify and collect overpayments from federal healthcare programs is its reliance on privatized enforcement. The identification and collection of improper payments is conducted on behalf of CMS through a variety of private contractors, including RACs. The ability to develop a successful appeal is increasingly vital for healthcare providers seeking to retain payments to which they are entitled.

Types of RAC Audits

Contractors typically conduct two types of post-payment audits: automated and complex.[ii] Automated audits normally detect payment errors through quantitative analysis requiring limited interaction with the provider. Complex audits involve a manual review of records by clinical and statistical experts. Complex audits also involve far greater subjectivity on the part of RACs because they are required to review only a sample of claims when evaluating documentation, clinical decision-making and code application. In many cases, RACs extrapolate, or project, their findings from the sample across a much larger population of payments, including claims they never reviewed.
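To make the extrapolation step concrete, the following is a minimal sketch using entirely hypothetical figures (a ten-claim sample and a 2,400-claim universe); it is not any contractor’s actual methodology, which also involves confidence intervals and other adjustments.

```python
# Minimal sketch with hypothetical figures: projecting sampled overpayment
# findings across a larger, unreviewed population of claims.
sample_findings = [0.00, 125.50, 0.00, 310.00, 89.75, 0.00, 42.00, 0.00, 0.00, 267.25]
population_size = 2_400                                    # total claims in the audit universe

mean_error = sum(sample_findings) / len(sample_findings)   # average overpayment per sampled claim
extrapolated_total = mean_error * population_size          # projected demand for the full population
print(f"Mean error per claim:     ${mean_error:,.2f}")
print(f"Extrapolated overpayment: ${extrapolated_total:,.2f}")
```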

Appeal Strategies for Scrutinizing RAC Statistical Analysis

In addition to legal arguments, healthcare providers should scrutinize the process and procedures by which a contractor estimates an overpayment amount. Complex audits require rigorous audit planning to ensure the process is conducted fairly and yields sound, reliable conclusions. The process also demands a high degree of statistical competency to ensure that claims are properly sampled and extrapolated to achieve sufficiently precise results. Contractors are therefore required to abide by the Medicare Program Integrity Manual (“MPIM”), which includes guidelines on data analysis, statistical sampling, extrapolation, and estimation of overpayments.[iii] Successful appeals often highlight a contractor’s nonconformance with these guidelines, and several decisions have addressed such arguments.

  • Generally Accepted Standards

Statistical sampling and extrapolation, when conducted properly, are widely accepted methods for estimating overpayments. The MPIM requires that any sampling methodology be reviewed by a statistician or a person with equivalent expertise in probability sampling and estimation methods. That review alone, however, does not insulate the statistical analysis and its resulting conclusions from appeal. In fact, a great degree of subjectivity exists when implementing statistical procedures in these matters. Several appeal decisions have concluded that no “generally accepted standards” exist for the application of sampling.[iv] In King, which involved RAC overpayment demands, the Medicare Appeals Council (“Council”) observed that “[w]hile there may well be theories on the ‘right way’ to conduct a sample, there is no formal recognition of ‘generally accepted statistical principles and procedures.’”[v]

This decision, along with others like it, highlights the difficulty of challenging the subjective methodology of a contractor’s statistical analysis. Appeal decisions consistently show that successful arguments rest not on the contractor’s statistical technique alone, but on demonstrating that the contractor’s conclusions are arbitrary and capricious.

  • Burden of Proof

According to CMS Ruling 86-1, the burden of proof is on the appellant to prove a contractor’s statistical sampling methodology was invalid and not on the contractor to establish that it chose the most precise methodology.[vi]  This ruling, along with a lack of clear industry standards, explains why the Council and federal courts are reluctant to overturn a contractor’s methodology without explicit evidence of errors.  Multiple decisions have followed this reasoning.[vii]  In Border Ambulance Service, the Council stated that “[a]ppellant’s challenges to the sample are not based on demonstrable errors in the sample or reference to specific supporting evidence in the record.  Rather, the appellant’s arguments are based upon the testimony of its statistical expert and its cross examination of the PSC’s statistical expert. The appellant’s speculative assertions do not satisfy its burden of proving that the statistical sampling methodology at issue is invalid.”[viii]

While these decisions appear to limit potential challenges to sampling, they also provide a roadmap for arguments that can succeed. A successful argument should be grounded in data and the facts of the case rather than statistical design alone. Instead of engaging in a “battle of the experts,” appellants and their experts should use the case facts (i.e., the claims in question) to demonstrate their theories. For example, statistical experts may opine that a particular sample is biased, and therefore not statistically valid. That argument alone may not succeed. In addition to arguing the theoretical concept of bias, appellants should demonstrate that an unbiased sample would have produced materially different results in a comparable audit. In other words, an appellant may need to perform its own audit using more appropriate data.

  • Sample Size

A common strategy when appealing an extrapolation involves the sample size and the precision of a contractor’s conclusions. Because larger sample sizes increase precision, many providers argue that a sample size is ‘too small’ when they believe the conclusions to be invalid. Unfortunately, such arguments yield little success. Once again, the obstacle is that no industry standards exist for appropriate sample sizes. Multiple federal courts have ruled that no minimum sample size exists, including decisions in Ratanasen v. California,[ix] Webb v. Shalala,[x] and Pruchniewski v. Leavitt.[xi] Still, these decisions give clear guidance on the types of sample size arguments that may succeed. Such arguments should demonstrate that, had the audit been performed properly (i.e., with a larger sample), the contractor would have reached materially different results, which may render the contractor’s analysis and resulting conclusions invalid.
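The sketch below illustrates, with assumed values (a per-claim error standard deviation of $150, a 2,400-claim universe, and a normal approximation with no finite-population correction), why a larger sample tightens the confidence interval around an extrapolated total:

```python
# Sketch with assumed values: larger samples shrink the confidence interval
# (i.e., improve precision) around an extrapolated overpayment total.
import math
from statistics import NormalDist

z_90 = NormalDist().inv_cdf(0.95)   # two-sided 90% confidence => 5% in each tail (~1.645)
claim_std_dev = 150.0               # assumed standard deviation of per-claim error ($)
population_size = 2_400             # total claims in the audit universe

for n in (30, 100, 300, 1_000):
    std_error_mean = claim_std_dev / math.sqrt(n)          # ignores finite-population correction
    half_width = z_90 * std_error_mean * population_size   # precision of the extrapolated total
    print(f"n = {n:>5}: extrapolated total +/- ${half_width:,.0f}")
```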

Interestingly, the U.S. Department of Health and Human Services Office of Inspector General (“OIG”) recently published its own guidance regarding sample size thresholds. In April 2013, the OIG updated its Provider Self-Disclosure Protocol (“SDP”), which allows providers to self-report instances of potential fraud or false billings.[xii] The SDP is separate and distinct from CMS’ own Self-Referral Disclosure Protocol (“SRDP”), which applies only to potential violations of the Stark Law.[xiii] Among other things, the SDP requires the submission of a detailed sampling plan with a sample size of at least 100 claims. While this guidance does not apply to CMS contractors for purposes of recovery audits, the updated SDP is a reasonable indication of the minimum sample size the OIG considers necessary to generate reliable and precise conclusions. Providers may also wish to review Forensus’ Sampling Framework, Step Four of which addresses sample size selection.

  • Precision

The relevance of an estimate based on sampling and extrapolation depends on both the precision and the confidence level of the conclusion. Precision describes the range of accuracy around an estimated overpayment amount, while confidence is the degree of certainty that the sample correctly depicts the population. For example, an estimated overpayment of $200,000 with a two-sided 90 percent confidence interval and a precision amount of $10,000 means the true overpayment is expected to fall within $10,000 of the $200,000 estimate at a 90 percent level of confidence. In other words, there is 90 percent confidence that the actual overpayment lies between $190,000 and $210,000. Naturally, a more precise conclusion produces a narrower range of possible overpayment amounts. As such, many appeals argue that contractor estimates do not meet reasonable precision thresholds.
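A short calculation reproduces this interpretation; the standard error is an assumed input chosen so the resulting precision is roughly the $10,000 in the example above:

```python
# Sketch: two-sided 90% confidence interval for an extrapolated overpayment.
from statistics import NormalDist

point_estimate = 200_000.0          # extrapolated overpayment ($)
std_error_total = 6_080.0           # assumed standard error of the extrapolated total ($)
z = NormalDist().inv_cdf(0.95)      # two-sided 90% confidence => ~1.645

precision = z * std_error_total     # roughly $10,000
lower, upper = point_estimate - precision, point_estimate + precision
print(f"Precision: +/- ${precision:,.0f}")
print(f"90% confidence interval: ${lower:,.0f} to ${upper:,.0f}")
```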

Multiple Council and federal court decisions have confirmed that no specific level of sampling precision is required. Appellants therefore must show that the contractor’s level of precision yields unreasonable results. In Pruchniewski v. Leavitt, the court held that, because there is no established standard of precision for this type of sampling, the administrative law judge (“ALJ”) was correct in concluding that providers, like the plaintiff, must “go further and establish that the degree of imprecision is such that the extrapolation does not reasonably approach the actual overpayment, that is, it is so imprecise as to be arbitrary and capricious.”[xiv]

While the court did not endorse the appellant’s argument, it opened the door for appellants to delve further and demonstrate that a more precise analysis could result in materially different conclusions.

  • Representativeness and Randomness of Sample

A common argument raised by appellants is that the sample is not representative of the population in question. Such arguments typically arise when a contractor audits only certain subsets of claims, yet extrapolates its conclusions across the broader population. Examples include disproportionate samples of high-dollar claims or a focus on one particular facility or provider (i.e., potentially “rogue” providers). The Council and federal courts have not been persuaded absent a showing that the lack of representativeness adversely affected the contractor’s conclusions.[xv] Once again, the burden of proof lies with the appellant; the hypothetical comparison below illustrates the kind of showing that can be made.
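The comparison applies the same extrapolation arithmetic to a sample skewed toward high-dollar claims and to a sample drawn across all claim types; all figures are invented for illustration only.

```python
# Hypothetical sketch: a sample skewed toward high-dollar claims overstates the
# extrapolated overpayment relative to a representative sample.
population_size = 2_400

# Assumed per-claim overpayment findings for two differently drawn samples.
skewed_sample = [310.00, 267.25, 425.00, 389.50, 298.00]      # high-dollar claims only
representative_sample = [0.00, 125.50, 0.00, 310.00, 0.00,
                         42.00, 0.00, 89.75, 267.25, 0.00]    # drawn across all claim types

for label, sample in (("skewed", skewed_sample), ("representative", representative_sample)):
    extrapolated = sum(sample) / len(sample) * population_size
    print(f"{label:>15} sample -> extrapolated overpayment: ${extrapolated:,.0f}")
```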

Another common area for appeal involves how the sample was selected. Contractors typically use statistical software to help select a sample, such as RAT-STATS, which was developed by the OIG and is commonly used for statistical analysis.[xvi] While the Council and federal courts have widely held that such software programs are a reliable means of selecting a sample,[xvii] these programs are only as effective as their operators. Appellants should scrutinize the contractor’s work plan to ensure RAT-STATS was used as intended and that the resulting outputs were properly employed. Errors in the selection of a sample, or the intentional inclusion of specific claims in an otherwise random selection, may result in biased and invalid conclusions.
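Re-performing the selection step is often feasible. The sketch below uses only the Python standard library, not RAT-STATS itself, to illustrate the underlying idea: a simple random sample drawn with a documented seed can be reproduced and verified by the appellant’s own expert.

```python
# Sketch: reproducible simple random sample of claim IDs using a documented seed.
# RAT-STATS itself is not used here; this only illustrates the verification idea.
import random

claim_ids = [f"CLM-{i:05d}" for i in range(1, 2_401)]   # hypothetical claims universe (sampling frame)
seed = 20140801                                          # seed documented in the audit work plan
sample_size = 100

rng = random.Random(seed)
sample = rng.sample(claim_ids, sample_size)   # simple random sample without replacement
print(sample[:5])   # re-running with the same seed and frame reproduces this draw
```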

In Sanders, for instance, a case involving RAC overpayment demands, the Council found that “… either the samples themselves were not drawn correctly or the claims were not correctly assigned to the correct stratum in every case consistent with the probability sample design.”[xviii] In part due to this finding of improper sampling, the extrapolation was set aside and overpayments were limited to the actual claims sampled. The case highlights that simply using RAT-STATS does not insulate a contractor’s analysis from appeal, much as using a calculator does not prevent a calculation error.
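A mechanical check of stratum assignments, of the kind at issue in Sanders, might look like the following sketch, which uses hypothetical dollar-value strata and flags sampled claims whose paid amounts fall outside the boundaries of their assigned stratum:

```python
# Sketch with hypothetical strata: verify each sampled claim sits inside the
# dollar-value boundaries of the stratum it was assigned to.
strata = {1: (0.00, 99.99), 2: (100.00, 499.99), 3: (500.00, float("inf"))}

sampled_claims = [
    {"id": "CLM-00017", "paid": 45.10,  "stratum": 1},
    {"id": "CLM-00233", "paid": 612.40, "stratum": 2},   # misassigned: belongs in stratum 3
    {"id": "CLM-01514", "paid": 129.95, "stratum": 2},
]

for claim in sampled_claims:
    low, high = strata[claim["stratum"]]
    if not (low <= claim["paid"] <= high):
        print(f"{claim['id']}: paid ${claim['paid']:,.2f} falls outside "
              f"stratum {claim['stratum']} bounds ({low}-{high})")
```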

Conclusion

The role of statistical analysis and audit design, both in estimating overpayments and in appealing those estimates, cannot be overstated. In the face of this and other audit scrutiny, many providers have successfully appealed the results of recovery audits, contributing directly to their bottom line. In many cases, in-depth legal analysis or additional audit activities will be required to fully explore these avenues.


  Explore our Statistical Sampling Services


This post does not, in any way, constitute legal advice, nor does it offer guidance or legal opinions about how courts or other bodies may interpret particular issues of statistical analysis. The cases cited are merely examples of relevant issues, and they are included to demonstrate how the issues were treated based on the facts and circumstances of each specific case.

 

[i] FY 2012 Report to Congress, Centers for Medicare & Medicaid Services, Recovery Auditing in Medicare and Medicaid for Fiscal Year 2012 at 11. Available at http://www.cms.gov/Research-Statistics-Data-and-Systems/Monitoring-Programs/Medicare-FFS-Compliance-Programs/Recovery-Audit-Program/Downloads/Report-To-Congress-Recovery-Auditing-in-Medicare-and-Medicaid-for-Fiscal-Year-2012_013114.pdf.
[ii] RACs are also permitted to use semi-automated reviews, in which the RAC identifies potentially erroneous payments based on claims data and gives the provider an opportunity to provide medical records to substantiate the claim.  For example, the semi-automated review process is used when a RAC identifies claims that are “medically unlikely,” such as claims for more than three units of incision and drainage of post-operative wound infections.  Because these claims are denied in the same manner as an automated review, the AHA combines semi-automated and automated review data in its RACTrac survey.
[iii] Centers for Medicare & Medicaid Services, Medicare Program Integrity Manual, May 27, 2011 at Ch. 8, § 8.4.1.5, Available at http://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/Downloads/pim83c08.pdf.
[iv] Pruchniewski v. Leavitt, 2006 WL 2331071 (M.D. Fla. 2006).
[v] King, 2011 WL 6960267 (H.H.S. May 10, 2011).
[vi] Program Memorandum Carriers, Centers for Medicare & Medicaid Services, Use of Statistical Sampling for Overpayment Estimation When Performing Administrative Reviews of Part B Claims, Jan. 8, 2011. Available at https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/B0101.pdf.
[vii] Momentum EMS, 2010 WL 7232787 (H.H.S. Nov. 22, 2010).
[viii] Border Ambulance Service, LLC, 2010 WL 7209455 (H.H.S. Nov. 10, 2010).
[ix] Ratanasen v. California, 11 F.3d 1467 (9th Cir. 1993).
[x] Webb v. Shalala, 49 F.Supp.2d 1114 (W.D. Ark. 1999).
[xi] Pruchniewski v. Leavitt, 2006 WL 2331071 (M.D. Fla. 2006).
[xii] OIG’s Provider Self-Disclosure Protocol (April 17, 2013), US Department of Health & Human Services Office of Inspector General, Available at http://oig.hhs.gov/compliance/self-disclosure-info/files/Provider-Self-Disclosure-Protocol.pdf.
[xiii] CMS’ Self Referral Disclosure Protocol (May 6, 2011), Centers for Medicare and Medicaid Services, Available at http://www.cms.gov/Medicare/Fraud-and-Abuse/PhysicianSelfReferral/Downloads/6409_SRDP_Protocol.pdf.
[xiv] Pruchniewski v. Leavitt, 2006 WL 2331071 (M.D. Fla. 2006).
[xv] Place for Achieving Total Health, 2010 WL 2895718 (H.H.S. Mar. 3, 2010).
[xvi] RAT-STATS Statistical Software, Available at http://oig.hhs.gov/compliance/rat-stats/index.asp.
[xvii] John v. Sebelius, 2010 WL 3951465 (E.D. Ark. 2010).
[xviii] Sanders, 2011 WL 6960281 (H.H.S. May 12, 2011).

Portions of this post were authored by Christopher Haney, Mary Malone and Colin McCarthy and published in the American Bar Association’s Health Lawyer, Volume 26, Number 6, Aug 2014.
