A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic – and those, including policy-makers, who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: put simply, to discover whether use of e-cigs is correlated with success in quitting, which might well suggest that vaping helps you stop smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research directly on actual smokers or vapers, but instead tried to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective as an aid to quitting smoking, but actually counterproductive.
The result has, predictably, been uproar from the supporters of e-cigarettes in the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the U.S., who wrote that “it is clear that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system at this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics see in the Kalkhoran/Glantz paper? To answer some of that question, it’s necessary to go beneath the sensational 28% figure and look at what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less susceptible to any distortions that may have crept into an individual investigation?
(This can happen, for instance, by inadvertently selecting participants with a greater or lesser propensity to quit smoking because of some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than just averaging the totals, but that’s the general concept. And even from that simplistic outline, it’s immediately apparent where problems can arise.
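To make the pooling step concrete, here is a minimal sketch of one standard approach – fixed-effect, inverse-variance pooling of odds ratios on the log scale. The numbers are invented for illustration and are not the actual studies in the Kalkhoran/Glantz paper:

```python
import math

# Invented per-study results: odds ratios for quitting smoking among
# vapers vs. non-vapers, each with a 95% confidence interval.
studies = [
    (0.61, 0.38, 0.97),
    (0.82, 0.55, 1.22),
    (0.70, 0.47, 1.04),
]

# Fixed-effect, inverse-variance pooling on the log-odds scale:
# each study is weighted by 1/variance, so larger, more precise
# studies dominate the combined estimate.
weights, weighted_logs = [], []
for odds_ratio, ci_low, ci_high in studies:
    log_or = math.log(odds_ratio)
    # Recover the standard error from the width of the 95% CI.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    w = 1 / se ** 2
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_or = math.exp(sum(weighted_logs) / sum(weights))
print(f"pooled odds ratio: {pooled_or:.2f}")
```

Note what the arithmetic quietly assumes: that every study estimates the same underlying quantity, measured the same way – which is exactly the assumption the critics below dispute.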
If its results are to be meaningful, the meta-analysis has to somehow take account of variations in the design of the individual studies (they might define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it introduces distortions of its own.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unsympathetic view of e-cigarettes, regarding a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the studies it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they tried to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts began. Thus, the analysis by its nature excluded people who had started vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes seem a more successful route to smoking cessation.
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke want to quit combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these individuals were excluded from the data, it suggested “no effect of e-cigarettes, not that electronic cigarette users were less likely to quit”.
Excluding some who did manage to quit – while including people who had no intention of quitting anyway – would certainly seem likely to affect the outcome of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a variety of study design factors, including whether the study population consisted only of smokers interested in smoking cessation, or all smokers”.
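The arithmetic behind Phillips’s objection is easy to see with invented numbers. If a study enrols only vapers who are still smoking at enrolment, anyone who vaped and quit quickly is invisible to it, and the measured quit rate among vapers understates the true one:

```python
# Invented illustration of the sampling concern, not data from any study.
# Suppose 1,000 smokers try e-cigarettes, and 300 quit smoking quickly,
# before any study enrols them. A study sampling only *current* smokers
# who vape never sees those 300 successes.
tried_ecigs = 1000
quick_quitters = 300                 # quit before enrolment; unobserved
enrolled_vapers = tried_ecigs - quick_quitters

# Of the 700 vapers still smoking at enrolment, say 70 later quit.
later_quitters = 70

measured_quit_rate = later_quitters / enrolled_vapers
true_quit_rate = (quick_quitters + later_quitters) / tried_ecigs

print(f"measured quit rate: {measured_quit_rate:.0%}")  # 10%
print(f"true quit rate:     {true_quit_rate:.0%}")      # 37%
```

Under these assumed numbers the study would report a 10% quit rate for vapers when the true figure for everyone who tried e-cigarettes was 37% – the direction of bias the critics allege, though its real-world size is unknown.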
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and, importantly, is often overlooked in media reporting, as well as by institutions’ PR departments.