Clinical utility: Reflections on an imperfect science

May 1, 2014

Assessing the clinical utility of diagnostic testing has always been a complicated undertaking. And as new genetic tests enter the equation, that complexity will only increase. To examine the issues surrounding clinical utility, I’d like to share a quick story.

During a physical examination 2 decades ago, a close family member was told that she may have mitral valve prolapse. To evaluate the physical finding that brought mitral valve prolapse into the picture, she underwent echocardiography. The results? An official interpretation that she “might have mitral valve prolapse.” To me, this was equivalent to a home plate umpire calling “it might be a strike!”

Perhaps it’s overly simplistic to expect the echocardiographer to say “present” or “absent” on the interpretation. But it does seem important that the tests we are given help us make the call. In any case, on the merits of uncertain physical findings and an equivocal echocardiogram, the family physician prescribed prophylactic antibiotics for dental procedures anyway.

Two uneventful years passed, and after another uncertain finding on her physical exam, the recommendation was that the echocardiogram be repeated to monitor progress. My response was to ask why she should pay $200 to keep doing what they’re doing (occasional pre-treatments with penicillin). She declined to repeat the test and we went about our lives.

A few years later, still healthy, this family of 5 applied for individual health insurance. The application was submitted as the head of the household was transitioning between jobs. The selective rejection of the affected individual (based on a review of medical records, a pre-existing condition, and "refusal to undergo recommended diagnostic testing") came as an unexpected and distressing shock.

This true story highlights some important challenges in the diagnostic process that can be summarized by 2 questions: How good is the test? And will the result of the test change what we do? For most clinicians, the diagnostic process is driven by the need to assign a patient to the best treatment group (including consideration of no treatment needed). Genetic testing, though, is expanding this view to include family profiling for specific genetic traits, so that another person’s treatment can be determined.

The expanded view of the diagnostic process created by genetic testing strains the legacy definition of medical necessity for many insurance plans. Hopefully common sense will prevail, along with evidence-based protocols.


The fundamental questions remain, though: How good are molecular diagnostic tests? And will the results of molecular diagnostic tests change what we do? To answer these questions, three levels of evidence are typically assessed:

  • Analytic validity. In essence, is the methodology of the test reproducible? If 100 different technicians perform the same test, will they get the same result? Answering this question is not as simple as it seems. For instance, if a measured potassium level is 4.45 mEq/L and the reference range cuts off at 4.4, should the result be reported as 4.4 (normal) or rounded up to 4.5 (abnormal)?
     

  • Clinical validity. How good is this test at confirming the presence or absence of the targeted condition? Basic stuff. The sensitivity of Prostate Specific Antigen (PSA) for cancer screening is 80 percent because 1 out of 5 individuals with proven cancer have negative tests (false negatives). Specificity measures whether a test is positive only in the targeted condition. Rheumatoid Factor, for instance, is positive in many systemic inflammatory conditions other than Rheumatoid Arthritis; it's not very specific (false positives).
     

  • Clinical utility. Does the result of this test help decide between alternative treatment options enough to justify the risk and cost of the test? This is the "so what?" question for health plan medical directors, who might find grappling with it similar to grappling with the wonderful question in the literature of fuzzy logic: How many grains of sand must you drop on a table before you have a heap of sand? Sometimes each bit of clinical evidence is like one grain of sand, and the interpretation of the size of the clinical heap evolves with every new grain. The KRAS assay in colon cancer is already a classic example of how the value of a test evolves. Before we knew that patients whose tumors carry KRAS mutations respond differently to certain targeted therapies than patients with KRAS wild-type tumors, there was no clinical utility in KRAS testing. Now that we know, it matters.
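The arithmetic behind sensitivity and specificity can be sketched in a few lines. The counts below are hypothetical, chosen only to echo the 1-in-5 false-negative rate cited for PSA above; they are not real study data.

```python
# Sketch: sensitivity and specificity from a simple 2x2 confusion matrix.
# All counts are illustrative, not drawn from any actual trial.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of people WITH the condition whom the test catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of people WITHOUT the condition whom the test clears."""
    return true_neg / (true_neg + false_pos)

# PSA-style example: 1 in 5 proven cancers tests negative.
print(sensitivity(true_pos=80, false_neg=20))   # 0.8

# A marker like Rheumatoid Factor that also fires in other inflammatory
# conditions generates false positives, which drags specificity down.
print(specificity(true_neg=60, false_pos=40))   # 0.6
```

The same two ratios underlie every "how good is the test?" judgment in the bullets above: a test can score well on one and still mislead on the other.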

The discussion of clinical utility is ongoing and complex, as if the sand heap analogy had an added layer, forcing us to determine if the heap were small, medium, or large.

In summary, our industry is at a crossroads when it comes to assessing clinical utility. The family member who might have mitral valve prolapse is still in the prime of life and without a clear diagnosis after more than 20 years. The questions surrounding the diagnostic process are real. Who should have which tests? When? How often?

Clinicians will continue to struggle, working in the shifting space between the science and art of medical practice. New technologies will continue to emerge at an ever-faster pace. We will continually be reminded that we must measure what we want to manage, we must always learn more, and we need to keep doing the best that we can.

Douglas J. Moeller, MD, is a medical director with McKesson Health Solutions. Dr Moeller has provided clinical coding and content management expertise since the launch of the McKesson Diagnostics Exchange, an online test registry and shared workflow solution that payers, laboratories, and other stakeholders use to understand the clinical and financial impact of molecular diagnostic tests.