
In addition to chronic obstructive pulmonary disease, high blood pressure and a prior stroke, he had had a cardiac pacemaker placed. He felt best when his blood pressure remained high at 170/90. His regular doctor didn't change a thing.
Well, say goodbye to hospice, you cruel, malicious heel. Zimmerman and colleagues, reporting in the April 9 JAMA (subscription required to get to the full article), found 22 randomized controlled clinical trials evaluating 'palliative care,' defined as a service that provides or coordinates comprehensive care for patients with terminal illness. Only 4 of 13 studies found an improvement in quality of life, 1 of 14 found symptom improvement and 1 of 7 found cost savings. Yet palliative care is a growth industry with approximately 4,500 standing programs, of which 46% are for profit, involving 1.3 million patients. It's been covered by Medicare since 1982, along with the vast majority of other commercial and government health insurance programs.
The Disease Management Care Blog has already fussed over the "evidence-based" double standard being applied by many of the health care illuminati to 'population-based care approaches' versus yearly check-ups, the medical home and integrated delivery systems. While that cognitive dissonance can be a source of great fun, it's time to be serious.
The DMCB thinks it's difficult for the vast majority of hospice programs to accommodate randomized clinical trials (RCTs), let alone concoct one. It's simply "out of reach" in terms of their business model or available resources. What's more, the vast majority of hospice program leaders, their workers, their patients, the families and the physicians wouldn't want to bother with RCTs because the questions they answer wouldn't add to their knowledge. Sound familiar?
How can we reconcile what we know and accept about hospice with the lack of evidence? The DMCB commends an excellent essay authored by Don Berwick in a preceding issue of JAMA (subscription again). In it, he points out that RCTs are better suited for the circumscribed evaluation of tests, drugs and procedures. They fall short, however, in evaluating complex multi-component interventions that depend on 'context' and 'mechanism.' It's not a matter of getting to the right answers; it's a matter of getting to the right questions.
He suggests new evaluation methodologies are necessary when the question is not whether something works, but why and under what circumstances. He also recommends that a) the high threshold used to decide whether change is warranted needs to shift if the status quo is unacceptable, b) sources of bias are better accommodated than eliminated and c) the gap between academicians and front-line caregivers needs to be bridged.
Can you think of a big health care policy debate that would benefit from this approach? Hint here.
The Disease Management Blog doesn't presume that its readers rely on this corner of the blogosphere for news and information, so you've probably already heard that the ACCORD trial (the catchy acronym stands for Action to Control Cardiovascular Risk in Diabetes) was suspended. This is my blog, though, so that doesn't mean I can't weigh in with what I've learned and offer up some additional speculation through the lens of disease management. Read on if you are so inclined…
Interesting stuff. Over 10,000 persons were randomly allocated to either tight blood glucose control (target A1c less than 6%: that is VERY aggressive) or moderate control (an A1c between 7% and 8%, which isn't bad but doesn't meet the guidelines of the American Diabetes Association or HEDIS). As an aside, there was an additional "2x2 factorial design" that also tested the benefits of 1) aggressive blood pressure control and 2) treatment to increase the "good" or HDL cholesterol. Research subjects began to be recruited in January 2001 in 77 outpatient clinics across the United States and Canada.
What was found? Over an average of 4 years, the researchers noted an increase in the death rate among the approximately 5000 subjects assigned to the A1c less than 6% group versus the 5000 in the other group. This was quite counterintuitive: 257 died in the tight control group, versus 203 in the group assigned to an A1c between 7% and 8%. This was statistically significant and apparently not a function of the types of diabetes drugs used. The portion of the trial on tight blood sugar control has been halted; the other research on blood pressure and HDL is continuing.
To put the calculated excess death rate of 3 per 1000 into perspective, the numbers suggest that a doctor (ensconced in a population-based program of course) would need to aggressively target 333 persons with diabetes down to an A1c of 6% or less to provoke one extra death (for more on number needed to treat go here). Deaths were evenly split between cardiovascular categories and “other” (for example, cancer). Persons assigned to the low A1c group had a lower rate of heart attacks, but the irony is that they were more likely to die if that happened. Participants are being notified of the trial result and the persons in the low A1c group are being reassigned to an A1c between 7% and 8%.
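For the arithmetically inclined, here is a back-of-the-envelope sketch of how the quoted figures hang together. It uses only the approximate death counts mentioned above; the choice to spread the excess deaths over the roughly four years of average follow-up is an illustrative assumption on my part, not something taken from the trial report.

```python
# Rough number-needed-to-harm (NNH) arithmetic using the approximate
# ACCORD figures quoted above. Annualizing over ~4 years of average
# follow-up is an assumption for illustration, not trial output.

deaths_tight, n_tight = 257, 5000        # A1c < 6% arm (approximate)
deaths_moderate, n_moderate = 203, 5000  # A1c 7%-8% arm (approximate)
followup_years = 4                       # average follow-up quoted above

risk_tight = deaths_tight / n_tight            # ~5.1% over the trial
risk_moderate = deaths_moderate / n_moderate   # ~4.1% over the trial

excess_risk_per_year = (risk_tight - risk_moderate) / followup_years
nnh_per_year = 1 / excess_risk_per_year

print(f"Excess deaths per 1,000 patients per year: {excess_risk_per_year * 1000:.1f}")
print(f"Patients targeted to A1c < 6% for a year per one extra death: {nnh_per_year:.0f}")
# Prints roughly 2.7 per 1,000 and an NNH of about 370, in the same
# ballpark as the '3 per 1,000' and '333' figures cited above.
```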
Note this is an "intention to treat" analysis. In other words, the data hasn't been sorted by the actual A1c achieved. A technically correct interpretation is that trying to get a person with diabetes to an A1c less than 6% is associated with excess mortality. That is slightly different from the conclusion that getting a person to an A1c less than 6% is associated with excess mortality. Not everyone in the aggressive control group actually got to an A1c of less than 6%.
Why is this interesting? Older readers with a background in patient care may recall the debate about the "J curve" in essential hypertension back in the 1990s. The moniker "J curve" was used because the plot of BP control on the horizontal axis versus complications on the vertical axis looked like a "J." Some studies had suggested that lowering the blood pressure "too much" (for example, less than 70 diastolic) among persons with hypertension seemed to be associated with increased mortality. Folks speculated that a lower "head of pressure" in arteries partially blocked with atherosclerosis led to premature clotting/thrombosis and death. The HOT (another catchy acronym – Hypertension Optimal Treatment) Trial found that this was not true and that aggressive lowering of blood pressure does no harm. What's more, for persons with diabetes, HOT showed aggressive lowering of blood pressure is beneficial.
The preliminary review of the data from ACCORD makes me wonder if the supposedly obsolete concept of a J curve can be resurrected for diabetes mellitus. If we forgo "intention to treat" for a moment and speculate that persons in the ACCORD Trial who maintained an A1c between 6% and 7% did "better" than those with an A1c greater than 7%, the question is how the persons in the less than 6% group did compared to those with an A1c between 6% and 7%. If they did worse, J curve! If they did the same, hockey stick.
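To make the J curve versus hockey stick distinction concrete, here is a purely hypothetical sketch. The three A1c strata mirror the speculation above, but every mortality number in it is invented for illustration; none of this is ACCORD data.

```python
# Purely illustrative: classify the shape of the A1c-vs-mortality curve
# from three hypothetical strata. None of these rates are ACCORD data.

def curve_shape(rate_below_6: float, rate_6_to_7: float, rate_above_7: float) -> str:
    """Label the curve implied by mortality rates in three A1c strata."""
    if rate_6_to_7 >= rate_above_7:
        return "no benefit from control below 7% in this sketch"
    if rate_below_6 > rate_6_to_7:
        return "J curve (risk turns back up below 6%)"
    return "hockey stick (risk flattens out below 6%)"

# Hypothetical per-1,000 mortality rates, chosen only to show the logic:
print(curve_shape(rate_below_6=14, rate_6_to_7=10, rate_above_7=12))  # J curve
print(curve_shape(rate_below_6=10, rate_6_to_7=10, rate_above_7=12))  # hockey stick
```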
Stay tuned. It will take some time for the health services researchers to pull all of this apart and present it in a peer-reviewed, transparent forum.
So what do we know?
This is another great example of why we need to uncouple short-term process measures (the incidence of A1c testing) or clinical measures (the A1c results themselves) from the outcome measures that people really care about. And people care about death.
Patients who are engaged in their diabetes care now have even more information to better gauge how they should be treated. They can be counseled that targeting an A1c lower than 6% isn't necessarily in their best interest. However, they should factor in the intention to treat dimensions.
The mortality was observed in an "intention to treat" context. A patient having an A1c less than 6% isn't necessarily bad; it's having that as the target that's apparently bad. For example, a patient with an A1c between 7% and 8% who is being aggressively treated to achieve an A1c less than 6% is also in potential trouble.
The difference in mortality rates could have happened as a result of random variation and may have nothing to do with the A1c target. While possible, it’s unlikely because the researchers probably used the "chance of variation being 5% or less" (otherwise known as p<.05) threshold. In other words, the likelihood of random variation being responsible for this is less than 5% or 1 in 20.
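As a rough sanity check on that 1-in-20 claim, here is a sketch of a simple two-by-two chi-square test run on the approximate death counts quoted earlier. It assumes the SciPy library is available and is only a stand-in for whatever analysis the ACCORD statisticians actually performed.

```python
# Rough significance check on the quoted death counts, assuming SciPy
# is installed. This is an illustration, not the trial's actual analysis.
from scipy.stats import chi2_contingency

#          died,  survived   (approximate counts quoted above)
table = [[257, 5000 - 257],  # tight control arm (target A1c < 6%)
         [203, 5000 - 203]]  # moderate control arm (A1c 7%-8%)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value ~ {p_value:.3f}")
# With these approximate counts the p-value comes out well under 0.05,
# consistent with the 'unlikely to be random variation' point above.
```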
It took four years for the difference in mortality to become apparent. Even if we forgo the intention to treat context, a brief dip of an A1c to less than 6% in an individual patient is not cause for alarm.
Note that the participants in this study are being re-targeted to an A1c between 7% and 8%, not the ADA recommended level of less than 7%. In looking over the protocol (see page 12), the researchers argue the 7%-8% treatment range reflects what was obtained in previous studies on the benefits of diabetes control, particularly with the drug metformin. There are some interesting data out there that say a level between 7% and 8% may be good enough. This is despite what the ADA says and what the NCQA's HEDIS optimum measure is. Once again, enrollees in disease management programs have a basis to assess the ADA/HEDIS recommendations in the context of their own preferences and values. And deciding to let the A1c creep up over 7% may not be an egregious, foolhardy sin.
Keep an eye out for the J curve.
The Vytorin kerfuffle has receded to a slow burn, but that hasn’t stopped the disease management blog from mulling over what can be learned about the population-based dimensions of lipid management.
First some lipid orthodoxy, then the stuff of head cramps and then some discussion of a potentially under-recognized ingredient in the Vytorin saga. If you're already familiar with statins and lipids, move 4 paragraphs down.
There is a direct link between blood cholesterol levels and the later development of atherosclerosis. Statins block the body's ability to manufacture cholesterol. Multi-year studies involving statins have shown they lower cholesterol, reduce the progression of atherosclerosis, decrease the incidence of heart attack and save lives.
In general, there is a correlation between the amount of statin taken, the degree of blood cholesterol lowering and the subsequent risk of atherosclerosis-related diseases like heart attack. The more you take, the better the risk reduction for heart attack. But here is the rub: not everyone responds to statins the same way and some individuals taking statins still get heart attacks. According to one reference, a maximum dose of one statin will result in only about 70% of adults having the kind of low blood cholesterol level that in turn correlates with the greatest reduction in heart attacks. The remaining 30% have a lower risk, but it's not optimum.
Enter ezetimibe. This is a new agent that works differently from statins: it blocks the absorption of dietary cholesterol. While taking it leads to decreases in blood cholesterol, there are no studies (unlike statins) proving that the lower blood cholesterol from ezetimibe in turn leads to a lower frequency of heart attacks. However, based on what we know about blood cholesterol, it's not unreasonable to believe that it could, but there is no proof. Hence its potential role for the 30% of persons for whom statins are not enough.
The tension between populations and individuals is what gives the disease management blog a head cramp: if a population has a high frequency of cholesterol testing and statin usage, or (even better) if blood cholesterol levels are dropping thanks to statin use, you can confidently predict that the frequency of heart attacks will go down. Some individuals on statins, however, will still get heart attacks, just like some individuals on blood pressure medicines will still have high blood pressure. It's not unusual for physicians to struggle to reconcile that population-level risk reduction with individual outcomes when making treatment decisions.
Why the head cramp you ask? There isn’t a managed care organization (MCO) seeking accreditation with the National Committee for Quality Assurance (NCQA) that isn’t doing its best to reduce the number of persons at risk for a heart attack using HEDIS measurements of cholesterol levels. While there are interventions that increase measurement rates and lower cholesterol levels at a “population” level, MCOs typically apply their maximum leverage at the individual patient-physician level, using what can euphemistically be called feedback and incentives. The result? Since physicians can easily recall that last statin-taking heart attack victim and 30% of their patients don’t get to target cholesterol levels anyway, the additional MCO leverage lowers the threshold for them to prescribe ezetimibe, even though the lower cholesterol has only a potential role in the reduction of heart attacks.
Enter the ENHANCE trial. Even though ezetimibe lowers blood cholesterol, this study used something completely different and more meaningful: thickening of the inside of the carotid arteries. This is a surrogate marker for the development of atherosclerosis, which is arguably a stronger predictor of heart attack risk than a cholesterol level. Compared to statin alone, the addition of ezetimibe (combining ezetimibe with a statin in one pill is called "Vytorin") had no impact on the progression of atherosclerosis, even though cholesterol levels were lower. That means it's less likely to have any impact on heart attacks or saving lives.
Lessons learned:
Even though statins reduce heart attack risk, they don't have a 100% success rate. Compared to other interventions, however, they have a very good track record and lots of statin prescribing is good. Measurement of this at a population level is meaningful.
Considering the potential role of ezetimibe from a population perspective makes it a heluva lot easier to discern what it is known to do (lower cholesterol) and what it isn’t known to do (prevent heart attacks).
MCOs running amok by leveraging NCQA HEDIS at the individual patient-physician level, where the benefit of statins is less readily discernible, may lead to short-term measurement improvements (lower cholesterol). The potential price is the use of unproven treatments like Vytorin with no evidence from a population perspective that anyone is benefiting.
And here’s the punchline: ever wonder why disease management organizations are not more intimately involved in their customers’ HEDIS improvement activities? The disease management blog thinks that these companies – thanks to risk corridors that place a premium on hard money outcomes at a population level – understand the role of HEDIS. Good disease management leads to better clinical outcomes and better HEDIS rates. Improving HEDIS rates does not necessarily mean good disease management.
Something to think about.
Everyone is aware that millions of Americans are being stymied by suboptimal healthcare quality, but the Disease Management Blog has been pondering just what quality "is." Assume it is expressed as the fraction of a group, defined by a condition, that achieves a desired outcome: the higher that fraction, the better. What's more, with continued interventions, innovations and incentives we should be blessed with 100% quality. One example is NCQA, HEDIS and beta blockers.
But is that realistic in other sectors of chronic illness care?
Those of us who have worked in the trenches know the answer is sadly no. High blood pressure is a good example. As the heart squeezes down and pushes blood out into the arteries, the pressure goes up until the heart reaches its maximum degree of contraction. The maximum pressure in the arteries at that point in time is the “top number” (systolic pressure). As the heart relaxes or dilates (it needs to fill up with blood again), the pressure falls until the heart stops the relaxation and starts to squeeze again. That point of low pressure in the arteries is the bottom number (diastolic pressure). With the contraction and relaxation of the heart, the pressure in the arteries bounces between the systolic and diastolic, typically 120 millimeters of mercury on top and 80 mm of mercury on the bottom.
No one really knows what causes high blood pressure (usually defined as 140 or more systolic, or 90 or more diastolic) but whoever figures it out will likely deserve the Nobel Prize for Medicine, since that kind of discovery can lead to a cure. Absent that cure, we’re stuck with treatments consisting of weight loss, salt restriction, other lifestyle modifications and of course, drugs. Just how well do these treatments work?
To answer that question, the intrepid disease management blog blew tanks and dove into some of the peer-reviewed literature. Trials of hypertension therapy are a good window into the topic because in research settings, motivated volunteers agree to fully comply with treatment, have close follow-up by a doctor and typically have a research assistant (such as a nurse) provide direction to the patient under protocol as part of a registry. It's not too dissimilar from disease management or the chronic care model. That was also the approach used in the landmark HOT study.
In that study, patients with hypertension were randomly assigned to one of three targeted treatment protocols. While much has been made of the outcomes in the study, what is not widely appreciated is that only 85% of those assigned to a diastolic blood pressure target of less than 90 actually achieved it. In other words, even in high-intensity research settings, the achievement of blood pressure control is not 100%.
How do I know this? Because I was one of the investigators in HOT and despite nuclear powered education and suitcase loads of pills, some of the participants in my clinic never got to target blood pressure.
And this phenomenon is not confined to high blood pressure. Outcomes less than “perfect” are typical in other clinical trial research reports involving chronic conditions like chronic heart failure, diabetes mellitus and high cholesterol. The HEDIS beta blocker success story will probably turn out to be the exception and not the rule.
Therefore, based on what the science is telling us, even with fully motivated patients who are carpet bombed with disease management (or the advanced medical home), ideal healthcare quality cannot be expressed as 100%. However, absent an adequate comparator, published clinical trials can yield a "best of class" success rate of what can be accomplished under optimum conditions involving an optimum population. That puts a new perspective on press releases like this, where the results (compared to published trials) probably ain't bad, but fail to tell us what is – and what isn't – possible.
The Disease Management Blog wonders if much of the same phenomenon is true in the disease management part of the industry. Consider the case of the two Simpsons.
Jessica may not deserve it, but let's face it: she does not have a reputation for high intelligence. However, she is an earnest and well-meaning person (Dallas fans may disagree). If Jessica were tasked with some remote patient behavior change, I suspect that, despite her low healthcare skill set, she'd have some success even if all she did was call, leave a message and remind the "client" to regularly check the blood glucose or remind the physician about this study. Jessica is on the lower part of the curve.
On the other hand, Lisa is remarkably talented and insightful and she would undoubtedly be able to adjust her interactions with patients and deploy just the right input of insight, coaching and good cheer necessary to convince even the most unmotivated person to comply with the most complex of Care Plans. Lisa is higher up on the curve.
While the value of disease management is more than a function of telephony, it’s still fun to think about. There may be a role for Jessica, depending on the needs and health status of the population - as well as the negotiated price. Lisa is important also, but having too much of her can lead to diminishing returns. I also think Jessica or Lisa can overdo it and prompt some patients to seek additional high-cost, low-value or unnecessary care. I suspect finding the right balance of “Jessica” and “Lisa” is what distinguishes the high performing disease management companies from the others.
Thought I was talking about disease management? Not exactly.
The summary above is also about the annual physical examination, which is supposed to 1) uncover treatable conditions, 2) apply only the best science for prevention and treatment of those conditions, 3) initiate and coordinate appropriate care with other healthcare providers, 4) inform patients about their best care options, 5) establish a complete and retrievable record and 6) create a baseline against which future health can be measured.
If we fairly applied the tone of many reviews about disease management to the time-honored yearly check-up, it might read something like this:
Skeptics have raised doubts about the annual physical exam in the medical literature. There have also been reports from highly respected sources that have extensively reviewed the available evidence and concluded it has failed to demonstrate that it has any consistent value. While other studies indicate it may or may not actually increase quality, its cost amounts to billions and the absence of any good studies on a return on investment makes one wonder if the physical examination industry is intentionally misleading us. Market demand for the annual examination is high however, which has been forcing many commercial health insurers to ignore their actuaries’ advice and pay for it.
Physicians have their unbiased perspective on the topic. Good thing the U.S. Congress and CMS routinely, uniformly and fairly apply a scientifically rigorous process to controversies like this before covering it under the standard benefit.