This means that if I were to pull statistically valid random samples of claims from randomly selected medical practices, I could expect that in 95 of every 100 of those samples, the true error rate would fall within a narrow interval centered near 9 percent. For a practice, this would mean that if I were to conduct a random probe audit, I would likely find an error rate somewhere in the neighborhood of 9 percent.
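Here is a minimal sketch of the statistics behind that statement, assuming a hypothetical probe sample; the counts are placeholders, not figures from any real audit. It computes the point estimate and a 95 percent confidence interval for an error rate using the normal approximation.

```python
import math

def error_rate_ci(errors: int, sample_size: int, z: float = 1.96):
    """Return the point estimate and 95% CI for a claim error rate."""
    p = errors / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: 27 errors found in 300 randomly sampled claims
rate, lo, hi = error_rate_ci(27, 300)
print(f"error rate {rate:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```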
But I would also likely be missing over 80 percent of risk opportunities, because in a random probe audit, I could not afford to pull enough charts for each provider to cover the full range of unique procedures those providers report. Risk-based auditing, when done correctly, should increase that error rate, for the simple reason that the model will have identified the procedures that carry a higher risk of audit and are statistically more likely to contain billing errors, as defined by the algorithms and supported by CERT data.
Our system uses true predictive analytics to identify high-value targets, whether those targets are associated with the provider in general or with specific codes and modifiers for that provider. While this article is not about our system, suffice it to say that we use supervised learning techniques and a significant database of claims that have already been selected for an audit, thereby training our algorithms to classify risk by learning what is unique about those particular claims.
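To make the idea concrete, here is a generic illustration of that supervised-learning approach; it is emphatically not our system, and the features, data, and model choice below are all hypothetical stand-ins. Any real model would be trained on actual claim attributes with labels indicating whether the claim was selected for audit.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per claim (e.g., utilization z-score, modifier
# usage rate, allowed amount, E/M level mix). Label: 1 = claim audited.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))            # stand-in claim features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Score new claims: a higher probability means higher predicted
# likelihood of audit selection, which we treat as audit risk.
risk_scores = model.predict_proba(X[:5])[:, 1]
print(risk_scores)
```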
How do we measure accuracy using these models? In predictive analytics, the benchmark is whether your prediction is better than chance. At the most basic level, this would mean that, when using a predictive model, the resulting error rate should be higher than the 9 percent expected by chance.
Try it yourself. See what your error rate is using a random probe audit, and then try to risk-adjust using simple methods like baselining or benchmarking. If those methods are really effective, meaning more accurate than the chance result of the probe audit, we would expect to see a higher error rate, at least potentially.
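The "better than chance" test reduces to a simple lift calculation, sketched below. The numbers are placeholders for whatever your own probe audit and risk-adjusted audit produce.

```python
def lift(targeted_error_rate: float, baseline_error_rate: float) -> float:
    """Lift over chance: > 1.0 means the method beats a random probe."""
    return targeted_error_rate / baseline_error_rate

# e.g., an 18% targeted error rate vs. a 9% probe baseline -> lift of 2.0
print(lift(0.18, 0.09))
```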
Remember, this is not always true, because we are predicting the likelihood of an audit and not the likelihood of an error, but it is my opinion that the former serves as a reasonable proxy for the latter. Recently, we conducted an analysis of results for nearly 3,000 audits that were conducted by providers using our system to identify risk.
While there was a pretty wide range of findings, the average error rate was nearly 18 percent, or just about twice that of chance. I was excited because it both supports what CMS is doing to identify potential billing issues and gives practices a new model to help level the playing field. Remember, we are not predicting billing errors (at least not yet), but rather the likelihood of an audit.
But we should remember that many of the variables used to predict the likelihood of an audit are the same as those used to predict billing errors. One might conclude that this means risk-based auditing is simply missing nearly 80 percent of the billing errors, but to make that assumption, you would have to assume an overall error rate of 90 percent (18 percent divided by the 20 percent share that would have been captured), which is simply improbable.
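The arithmetic behind that rebuttal is worth spelling out. If an 18 percent observed error rate captured only 20 percent of all errors, the implied population error rate would be implausibly high:

```python
observed_rate = 0.18      # average error rate from the risk-based audits
share_captured = 0.20     # if 80 percent of errors were being missed

implied_population_rate = observed_rate / share_captured
print(f"{implied_population_rate:.0%}")  # 90% -- simply improbable
```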
Rather, we benchmark against the average error rate of 9 percent. Granted, the FPS (Fraud Prevention System) was created by a private company (Verizon), but the fact that CMS saw its potential and adopted the model is a wonder in and of itself. While the jury may still be out as to where CMS is planning to go with these types of models, the agency has made it very clear that medical practices and healthcare providers need to up their game and get on board with methods that provide greater support for overall compliance.
The bottom line? Probe audits and utilization studies are out, and advanced statistics is in.
A probe sample is generally not used, by itself, to extrapolate an overpayment to the population. Instead, it is used to determine a net financial error rate, and thereby to indicate whether further analysis is warranted. For example, an analyst may select 30 sample units for a probe sample, then audit those 30 claims to calculate the percentage of overpayments in the sample.
If the percentage exceeds some established threshold, the analyst may proceed to a full, statistically valid sample for extrapolation. However, if the error rate does not exceed the threshold, the analyst might end the audit, finding that insufficient errors exist to justify further analysis. Such a conclusion could save significant cost and effort, while also allowing the analyst to focus on other areas of greater risk.
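Here is a sketch of that decision rule, with a hypothetical 5 percent threshold; real thresholds vary by organization and forum.

```python
def probe_decision(overpaid_claims: int, probe_size: int,
                   threshold: float = 0.05) -> str:
    """Decide whether probe results justify a full statistical sample."""
    error_rate = overpaid_claims / probe_size
    if error_rate > threshold:
        return f"{error_rate:.0%} error rate: proceed to full sample"
    return f"{error_rate:.0%} error rate: stop, insufficient errors"

print(probe_decision(4, 30))   # 13% > 5% threshold -> expand the audit
print(probe_decision(1, 30))   # 3% <= threshold -> end the audit
```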
If the probe sample was designed to be statistically valid, it can be folded into the full sample; the analyst therefore only has to randomly select and review the additional claims needed to complete the full sample. The results of all claims reviewed as part of the complete (full) sample, i.e., the probe units plus the additional units, can then be used in the statistical analysis.
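A sketch of that pooling, assuming hypothetical sizes: a 30-unit probe counted toward a 100-unit full sample means only 70 additional claims need review.

```python
def combined_error_rate(probe_errors: int, probe_size: int,
                        extra_errors: int, extra_size: int) -> float:
    """Pool probe and additional units into one full-sample estimate."""
    return (probe_errors + extra_errors) / (probe_size + extra_size)

rate = combined_error_rate(probe_errors=4, probe_size=30,
                           extra_errors=7, extra_size=70)
print(f"full-sample error rate: {rate:.0%}")
```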
The key here is that the probe sample be statistically appropriate for inclusion in the full sample. Generally, a sample size of 30 to 50 units is sufficient for a probe sample, with some organizations requiring a specific minimum sample size. More importantly, it is also worthwhile to understand the methods by which the probe sample is designed and selected.
Just as a full sample must be, a probe sample should be properly randomized and selected; a judgment sample will not do. Judgment samples typically result from haphazard selection or from convenience (i.e., pulling whatever claims are easiest to obtain). Ensuring that a probe sample is designed to be statistically valid is critical. Too many analysts wait until the probe sample is reviewed before considering whether the probe results can be reused; by then, it is often too late to prevent duplicating sampling efforts.
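Proper randomization is straightforward to implement and document. Below is a sketch using a hypothetical claim frame; seeding the generator makes the draw reproducible, which supports the statistical validity of the selection.

```python
import random

# Hypothetical sampling frame of claim identifiers
claim_ids = [f"CLM{n:05d}" for n in range(1, 5001)]

rng = random.Random(20240314)          # documented seed for the record
probe_sample = rng.sample(claim_ids, 30)
print(probe_sample[:5])
```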
Analysts should confer with their statistical experts in advance to identify the necessary planning considerations for a probe sample. Probe samples may be useful in reducing the amount of time and effort required in matters involving statistically valid sampling analysis. To get the most out of probe samples, they should be designed and selected in a manner that allows them to be incorporated into a full analysis. If properly executed, probe samples can be used effectively in healthcare audits, investigations, self-disclosures, and CIA claim reviews.
Just a compare and contrast… I do not view a probe sample as also being known as a discovery sample. Probe samples use a sample size of 20 to 40 (most organizations will use a sample size of 30 for probes, since it is right in the middle of that range). Discovery samples use a sample size of 50. There are a few other differences, but I at least wanted to share my understanding of how a probe audit is different from a discovery audit.
Could you share your source for the probe sample consisting of 30 to 50 samples? I would be very interested and much appreciative!

The key consideration for probe and discovery samples is their purpose, rather than their size. Both are used to ascertain an estimated error rate for the total population, and both are generally used to determine whether further statistical sampling analysis is warranted. In that sense, the terminology is interchangeable (you may see the term "exploratory sample" too).
As for size, there are generally no hard and fast rules in statistics governing how large these samples must be.
Benchmarks do exist, as you noted, and in the healthcare CIA context, the Office of Inspector General stipulates that the minimum size of a discovery sample is 50 units. However, this is not the only acceptable size for a discovery sample in all forums. Similarly, depending on the forum, the minimum size of a probe sample can vary, if such a minimum exists at all.